# CPU Benchmark Submission Server
Production-oriented Go web application for ingesting CPU benchmark results, storing them in BadgerDB, searching them from an in-memory index, and rendering a server-side HTML dashboard.
## Features
- `POST /api/submit` accepts either `application/json` or `multipart/form-data`.
- `GET /api/search` performs case-insensitive token matching against submitter/general fields and CPU brand strings.
- `GET /` renders the latest submissions with search and pagination.
- BadgerDB stores each submission under a reverse-timestamp key so native iteration returns newest records first.
- A startup-loaded in-memory search index prevents full DB deserialization for every query.
- Graceful shutdown closes the HTTP server and BadgerDB cleanly to avoid lock issues.
## Project Layout
```text
.
├── main.go
├── handlers.go
├── db.go
├── models.go
├── templates/index.html
├── http/
├── example_jsons/
├── Dockerfile
└── docker-compose.yml
```
## Data Model
Each stored submission contains:
- `submissionID`: server-generated UUID
- `submitter`: defaults to `Anonymous` if omitted
- `submittedAt`: server-side storage timestamp
- Benchmark payload fields:
  - `config`
  - `cpuInfo`
  - `startedAt`
  - `duration`
  - `totalOps`
  - `mOpsPerSec`
  - `score`
  - `coreResults`
The parser also accepts optional CPU metadata present in the bundled sample payloads under `example_jsons/`, such as `isHybrid`, `has3DVCache`, `supportedFeatures`, and `cores`.
## Requirements
- Go `1.23+`
- Docker and Docker Compose if running the containerized version
## Local Development
1. Resolve modules:
```bash
go mod tidy
```
2. Start the server:
```bash
go run .
```
3. Open:
- UI: `http://localhost:8080/`
- API health check: `http://localhost:8080/healthz`
### Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
| `APP_ADDR` | `:8080` | HTTP listen address |
| `BADGER_DIR` | `data/badger` | BadgerDB directory |
| `PAGE_SIZE` | `50` | Default number of cards per UI page |
| `SHUTDOWN_TIMEOUT` | `10s` | Graceful shutdown timeout |
## API Usage
### `POST /api/submit`
Accepted content types:
- `application/json`
- `multipart/form-data`
JSON requests support either:
1. A wrapper envelope with `submitter` and nested `benchmark`
2. A raw benchmark JSON body, with optional submitter provided via:
   - query string `?submitter=...`
   - header `X-Submitter`
   - top-level `submitter` field
Multipart requests support:
- `submitter` text field
- benchmark JSON as one of these file fields: `benchmark`, `file`, `benchmarkFile`
- or benchmark JSON as text fields: `benchmark`, `payload`, `result`, `data`
Example success response:
```json
{
  "success": true,
  "submissionID": "8f19d442-1be0-4989-97cf-3f8ee6b61548",
  "submitter": "Workstation-Lab-A",
  "submittedAt": "2026-04-15T15:45:41.327225Z"
}
```
### `GET /api/search`
Query parameters:
- `text`: token-matches submitter and general searchable fields
- `cpu`: token-matches `cpuInfo.brandString`
Example:
```bash
curl "http://localhost:8080/api/search?text=intel&cpu=13700"
```
### `GET /`
Query parameters:
- `page`
- `text`
- `cpu`
Examples:
```text
http://localhost:8080/
http://localhost:8080/?page=2
http://localhost:8080/?text=anonymous&cpu=ryzen
```
## Request Examples
Ready-to-run HTTP client examples are included in:
- `http/submit-json.http`
- `http/submit-multipart.http`
- `http/search.http`
You can also submit one of the provided sample payloads directly:
```bash
curl -X POST "http://localhost:8080/api/submit?submitter=Example-CLI" \
  -H "Content-Type: application/json" \
  --data-binary @example_jsons/5800X/cpu-bench-result.json
```
Or as multipart:
```bash
curl -X POST "http://localhost:8080/api/submit" \
  -F "submitter=Example-Multipart" \
  -F "benchmark=@example_jsons/i9/cpu-bench-result.json;type=application/json"
```
## Storage and Search Strategy
- Primary keys are written as `submission:<reversed_unix_nanos>:<uuid>`.
- Reversing the timestamp means lexicographically ascending iteration yields newest submissions first.
- On startup, all submissions are loaded into an in-memory index containing:
  - canonical submission payload
  - normalized general search text
  - normalized CPU brand text
- Searches scan the in-memory ordered slice rather than reopening and deserializing Badger values for every request.
## Docker
Build and run with Docker Compose:
```bash
docker compose up --build
```
The container exposes port `8080` and persists BadgerDB data in the named volume `badger-data`.
To build manually:
```bash
docker build -t cpu-benchmark-server .
docker run --rm -p 8080:8080 -v cpu-benchmark-data:/data cpu-benchmark-server
```
## Notes
- The UI uses Go templates plus Tailwind CSS via CDN.
- Search is token-based and case-insensitive rather than edit-distance based.
- Unknown JSON fields are ignored, so benchmark clients can evolve without immediately breaking ingestion.
- If the service is killed abruptly, BadgerDB can leave a stale lock file in its data directory. Restart only after the previous process has fully exited, and delete the lock file by hand only when you are certain no other instance is using the database.