# CPU Benchmark Submission Server

Production-oriented Go web application for ingesting CPU benchmark results, storing them in BadgerDB, searching them from an in-memory index, and rendering a server-side HTML dashboard.
## Features

- `POST /api/submit` accepts either `application/json` or `multipart/form-data`.
- `GET /api/search` performs case-insensitive token matching against submitter/general fields and CPU brand strings, with explicit thread-mode, platform, intensity, duration, and sort controls.
- `GET /` renders the latest submissions with search and pagination.
- The dashboard follows the system light/dark preference by default and includes a manual theme toggle in the top-right corner.
- BadgerDB stores each submission under a reverse-timestamp key so native iteration returns the newest records first.
- A startup-loaded in-memory search index avoids full DB deserialization on every query.
- Graceful shutdown closes the HTTP server and BadgerDB cleanly to avoid lock issues.
## Data Model

Each stored submission contains:

- `submissionID`: server-generated UUID
- `submitter`: defaults to `Anonymous` if omitted
- `platform`: normalized to `windows`, `linux`, or `macos`; defaults to `windows` if omitted
- `submittedAt`: server-side storage timestamp
- Benchmark payload fields:
  - `config`
  - `cpuInfo`
  - `startedAt`
  - `duration`
  - `totalOps`
  - `mOpsPerSec`
  - `score`
  - `coreResults`

The parser also accepts optional CPU metadata present in the sample JSON files, such as `isHybrid`, `has3DVCache`, `supportedFeatures`, and `cores`.
## Code Structure

- `main.go` bootstraps configuration, storage, the HTTP server, and graceful shutdown.
- `lib/config` contains runtime configuration loading from environment variables.
- `lib/model` contains the benchmark and submission domain models plus validation helpers.
- `lib/store` contains BadgerDB persistence and the in-memory search index.
- `lib/web` contains routing, handlers, request parsing, pagination, and template helpers.
- `templates/index.html` contains the server-rendered frontend.
- `http/*.http` contains example requests for manual API testing.
## Requirements

- Go `1.23+`
- Docker and Docker Compose if running the containerized version
## Local Development

1. Resolve modules:

   ```bash
   go mod tidy
   ```

2. Start the server:

   ```bash
   go run .
   ```

3. Open:

   - UI: `http://localhost:8080/`
   - API health check: `http://localhost:8080/healthz`
### Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `APP_ADDR` | `:8080` | HTTP listen address |
| `BADGER_DIR` | `data/badger` | BadgerDB directory |
| `PAGE_SIZE` | `50` | Default number of cards per UI page |
| `SHUTDOWN_TIMEOUT` | `10s` | Graceful shutdown timeout |
## API Usage

### `POST /api/submit`

Accepted content types:

- `application/json`
- `multipart/form-data`
JSON requests support either:

1. A wrapper envelope with `submitter`, `platform`, and a nested `benchmark` object
2. A raw benchmark JSON body, with optional `submitter` and `platform` supplied via:
   - query string: `?submitter=...` or `?platform=...`
   - headers: `X-Submitter` or `X-Platform`
   - top-level `submitter` or `platform` fields

`platform` is stored for every submission. Supported values are `windows`, `linux`, and `macos`. If the client does not send it, the server defaults to `windows`.
Multipart requests support:

- `submitter` text field
- `platform` text field
- benchmark JSON as one of these file fields: `benchmark`, `file`, `benchmarkFile`
- or benchmark JSON as text fields: `benchmark`, `payload`, `result`, `data`
Example success response:

```json
{
  "success": true,
  "submissionID": "8f19d442-1be0-4989-97cf-3f8ee6b61548",
  "platform": "windows",
  "submitter": "Workstation-Lab-A",
  "submittedAt": "2026-04-15T15:45:41.327225Z"
}
```
### `GET /api/search`

Query parameters:

- `text`: token-matches submitter and general searchable fields
- `cpu`: token-matches `cpuInfo.brandString`
- `thread`: `single` or `multi`
- `platform`: `windows`, `linux`, or `macos`
- `sort`: `newest`, `oldest`, `score_desc`, `score_asc`, `mops_desc`, or `mops_asc`; invalid values safely default to `newest`
- `intensity`: exact match on `config.intensity`
- `durationSecs`: exact match on `config.durationSecs`

Example:

```bash
curl "http://localhost:8080/api/search?text=intel&cpu=13700&thread=multi&platform=windows&sort=score_desc&intensity=10&durationSecs=30"
```
### `GET /`

Query parameters:

- `page`
- `text`
- `cpu`
- `thread`
- `platform`
- `sort`
- `intensity`
- `durationSecs`

Examples:

```text
http://localhost:8080/
http://localhost:8080/?page=2
http://localhost:8080/?text=anonymous&cpu=ryzen&thread=multi&platform=windows&sort=score_desc&intensity=10&durationSecs=20
```
## Request Examples

Ready-to-run HTTP client examples are included in:

- `http/submit-json.http`
- `http/submit-multipart.http`
- `http/search.http`
You can also submit one of the provided sample payloads directly:

```bash
curl -X POST "http://localhost:8080/api/submit?submitter=Example-CLI" \
  -H "Content-Type: application/json" \
  -H "X-Platform: windows" \
  --data-binary @example_jsons/5800X/cpu-bench-result.json
```
Or as multipart:

```bash
curl -X POST "http://localhost:8080/api/submit" \
  -F "submitter=Example-Multipart" \
  -F "platform=linux" \
  -F "benchmark=@example_jsons/i9/cpu-bench-result.json;type=application/json"
```
## Storage and Search Strategy

- Primary keys are written as `submission:<reversed_unix_nanos>:<uuid>`.
- Reversing the timestamp means lexicographically ascending iteration yields the newest submissions first.
- On startup, all submissions are loaded into an in-memory index containing:
  - the canonical submission payload
  - normalized general search text
  - normalized CPU brand text
- Searches scan the in-memory ordered slice instead of reopening and deserializing Badger values on every request. Platform, thread-mode, intensity, and duration filters are applied in memory, and the matching results are then optionally sorted by submission time, score, or MOps/sec.
## Docker

Build and run with Docker Compose:

```bash
docker compose up --build
```

The container exposes port `8080` and persists BadgerDB data in the named volume `badger-data`.

To build manually:

```bash
docker build -t cpu-benchmark-server .
docker run --rm -p 8080:8080 -v cpu-benchmark-data:/data cpu-benchmark-server
```
## Gitea Workflow

The repository includes `.gitea/workflows/docker-publish.yml` for tagged Docker publishes.

- Trigger: any pushed tag matching `v*`
- Test step: `go test ./...`
- Published images: `tea.chunkbyte.com/kato/cpu-benchmarker-server:<tag>` and `tea.chunkbyte.com/kato/cpu-benchmarker-server:latest`
- Runner requirement: the selected Gitea runner label must provide a working Docker CLI and daemon access for `docker build` and `docker push`
## Notes

- The UI uses Go templates plus Tailwind CSS via CDN.
- Search is token-based and case-insensitive rather than edit-distance based.
- Unknown JSON fields are ignored, so benchmark clients can evolve without immediately breaking ingestion.
- If the service is stopped abruptly and leaves a BadgerDB lock file behind, restart only after the old process has fully exited, or remove the stale lock file only when you are certain no other instance is using the database.