# CPU Benchmark Submission Server
Production-oriented Go web application for ingesting CPU benchmark results, storing them in BadgerDB, searching them from an in-memory index, and rendering a server-side HTML dashboard.
## Features
- `POST /api/submit` accepts either `application/json` or `multipart/form-data`.
- `GET /api/search` performs case-insensitive token matching against submitter/general fields and CPU brand strings, with explicit thread-mode, platform, intensity, and duration filters.
- `GET /` renders the latest submissions with search and pagination.
- BadgerDB stores each submission under a reverse-timestamp key so native iteration returns newest records first.
- A startup-loaded in-memory search index prevents full DB deserialization for every query.
- Graceful shutdown closes the HTTP server and BadgerDB cleanly to avoid lock issues.
## Data Model
Each stored submission contains:
- `submissionID`: server-generated UUID
- `submitter`: defaults to `Anonymous` if omitted
- `platform`: normalized to `windows`, `linux`, or `macos`; defaults to `windows` if omitted
- `submittedAt`: server-side storage timestamp
- Benchmark payload fields:
  - `config`
  - `cpuInfo`
  - `startedAt`
  - `duration`
  - `totalOps`
  - `mOpsPerSec`
  - `score`
  - `coreResults`
The parser also accepts optional CPU metadata present in the bundled sample JSON files, such as `isHybrid`, `has3DVCache`, `supportedFeatures`, and `cores`.
## Code Structure
- `main.go` bootstraps configuration, storage, the HTTP server, and graceful shutdown.
- `lib/config` contains runtime configuration loading from environment variables.
- `lib/model` contains the benchmark and submission domain models plus validation helpers.
- `lib/store` contains BadgerDB persistence and the in-memory search index.
- `lib/web` contains routing, handlers, request parsing, pagination, and template helpers.
- `templates/index.html` contains the server-rendered frontend.
- `http/*.http` contains example requests for manual API testing.
## Requirements
- Go `1.23+`
- Docker and Docker Compose if running the containerized version
## Local Development
1. Resolve modules:
   ```bash
   go mod tidy
   ```
2. Start the server:
   ```bash
   go run .
   ```
3. Open:
   - UI: `http://localhost:8080/`
   - API health check: `http://localhost:8080/healthz`
### Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
| `APP_ADDR` | `:8080` | HTTP listen address |
| `BADGER_DIR` | `data/badger` | BadgerDB directory |
| `PAGE_SIZE` | `50` | Default number of cards per UI page |
| `SHUTDOWN_TIMEOUT` | `10s` | Graceful shutdown timeout |
## API Usage
### `POST /api/submit`
Accepted content types:
- `application/json`
- `multipart/form-data`
JSON requests support either:
1. A wrapper envelope with `submitter`, `platform`, and nested `benchmark`
2. A raw benchmark JSON body, with optional metadata provided via:
   - `submitter`: query string `?submitter=...`, header `X-Submitter`, or a top-level `submitter` field
   - `platform`: query string `?platform=...`, header `X-Platform`, or a top-level `platform` field
`platform` is stored for every submission. Supported values are `windows`, `linux`, and `macos`. If the client does not send it, the server defaults to `windows`.
Multipart requests support:
- `submitter` text field
- `platform` text field
- benchmark JSON as one of these file fields: `benchmark`, `file`, `benchmarkFile`
- or benchmark JSON as text fields: `benchmark`, `payload`, `result`, `data`
Example success response:
```json
{
"success": true,
"submissionID": "8f19d442-1be0-4989-97cf-3f8ee6b61548",
"platform": "windows",
"submitter": "Workstation-Lab-A",
"submittedAt": "2026-04-15T15:45:41.327225Z"
}
```
### `GET /api/search`
Query parameters:
- `text`: token-matches submitter and general searchable fields
- `cpu`: token-matches `cpuInfo.brandString`
- `thread`: `single` or `multi`
- `platform`: `windows`, `linux`, or `macos`
- `intensity`: exact match on `config.intensity`
- `durationSecs`: exact match on `config.durationSecs`
Example:
```bash
curl "http://localhost:8080/api/search?text=intel&cpu=13700&thread=multi&platform=windows&intensity=10&durationSecs=30"
```
### `GET /`
Query parameters:
- `page`
- `text`
- `cpu`
- `thread`
- `platform`
- `intensity`
- `durationSecs`
Examples:
```text
http://localhost:8080/
http://localhost:8080/?page=2
http://localhost:8080/?text=anonymous&cpu=ryzen&thread=multi&platform=windows&intensity=10&durationSecs=20
```
## Request Examples
Ready-to-run HTTP client examples are included in:
- `http/submit-json.http`
- `http/submit-multipart.http`
- `http/search.http`
You can also submit one of the provided sample payloads directly:
```bash
curl -X POST "http://localhost:8080/api/submit?submitter=Example-CLI" \
-H "Content-Type: application/json" \
-H "X-Platform: windows" \
--data-binary @example_jsons/5800X/cpu-bench-result.json
```
Or as multipart:
```bash
curl -X POST "http://localhost:8080/api/submit" \
-F "submitter=Example-Multipart" \
-F "platform=linux" \
-F "benchmark=@example_jsons/i9/cpu-bench-result.json;type=application/json"
```
## Storage and Search Strategy
- Primary keys are written as `submission:<reversed_unix_nanos>:<uuid>`.
- Reversing the timestamp means lexicographically ascending iteration yields newest submissions first.
- On startup, all submissions are loaded into an in-memory index containing:
  - canonical submission payload
  - normalized general search text
  - normalized CPU brand text
- Searches scan the in-memory ordered slice rather than reopening and deserializing Badger values for every request, and apply explicit platform, thread-mode, intensity, and duration filters in memory.
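The key scheme above can be sketched as follows; the exact padding and formatting in `lib/store` may differ, but the ordering property holds as long as the reversed timestamp is zero-padded to a fixed width:

```go
package main

import (
	"fmt"
	"math"
)

// submissionKey builds the documented key layout: a zero-padded reversed
// unix-nanosecond timestamp so Badger's ascending lexicographic iteration
// visits newest submissions first.
func submissionKey(unixNanos int64, id string) string {
	return fmt.Sprintf("submission:%020d:%s", math.MaxInt64-unixNanos, id)
}

func main() {
	older := submissionKey(1_000, "a")
	newer := submissionKey(2_000, "b")
	// The newer submission sorts lexicographically before the older one.
	fmt.Println(newer < older)
}
```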
## Docker
Build and run with Docker Compose:
```bash
docker compose up --build
```
The container exposes port `8080` and persists BadgerDB data in the named volume `badger-data`.
To build manually:
```bash
docker build -t cpu-benchmark-server .
docker run --rm -p 8080:8080 -v cpu-benchmark-data:/data cpu-benchmark-server
```
## Gitea Workflow
The repository includes `.gitea/workflows/docker-publish.yml` for tagged Docker publishes.
- Trigger: any pushed tag matching `v*`
- Test step: `go test ./...`
- Published images: `tea.chunkbyte.com/kato/cpu-benchmarker-server:<tag>` and `tea.chunkbyte.com/kato/cpu-benchmarker-server:latest`
- Runner requirement: the selected Gitea runner label must provide a working Docker CLI and daemon access for `docker build` and `docker push`
## Notes
- The UI uses Go templates plus Tailwind CSS via CDN.
- Search is token-based and case-insensitive rather than edit-distance based.
- Unknown JSON fields are ignored, so benchmark clients can evolve without immediately breaking ingestion.
- If the service is stopped abruptly and BadgerDB leaves a lock file behind, restart only after the old process has fully exited, or remove the stale lock file only when you are sure no other instance is using the database.
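The token-based matching mentioned in the notes can be sketched as below; treating each token as a case-insensitive substring match is an assumption, and the authoritative semantics live in `lib/store`:

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether every whitespace-separated query token occurs
// (case-insensitively) in the searchable text. Substring matching per
// token is an assumption of this sketch.
func matches(searchText, query string) bool {
	haystack := strings.ToLower(searchText)
	for _, token := range strings.Fields(strings.ToLower(query)) {
		if !strings.Contains(haystack, token) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matches("Intel Core i7-13700K Workstation-Lab-A", "intel 13700")) // true
	fmt.Println(matches("AMD Ryzen 7 5800X", "intel"))                            // false
}
```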