feat(api): support optional systemInfo in submissions

Extend the submission contract to accept a `systemInfo` object and persist it with each submission, including deep-copy support for `extra` metadata. Also update client-facing docs and HTTP examples (JSON and multipart) and document that the schema is available at `GET /api/schema`, so clients can reliably implement the updated payload format.
# CPU Benchmark Submission Server
Production-oriented Go web application for ingesting CPU benchmark results, storing them in BadgerDB, searching them from an in-memory index, and rendering a server-side HTML dashboard.
## Features
- `POST /api/submit` accepts either `application/json` or `multipart/form-data`.
- `GET /api/search` performs case-insensitive token matching against submitter/general fields and CPU brand strings, with explicit thread-mode, platform, intensity, duration, and sort controls.
- `GET /` renders the latest submissions with search and pagination.
- The dashboard follows the system light/dark preference by default and includes a manual theme toggle in the top-right corner.
- BadgerDB stores each submission under a reverse-timestamp key so native iteration returns newest records first.
- A startup-loaded in-memory search index prevents full DB deserialization for every query.
- Graceful shutdown closes the HTTP server and BadgerDB cleanly to avoid lock issues.
## Data Model
Each stored submission contains:
- `submissionID`: server-generated UUID
- `submitter`: defaults to `Anonymous` if omitted
- `platform`: normalized to `windows`, `linux`, or `macos`; defaults to `windows` if omitted
- `submittedAt`: server-side storage timestamp
- Benchmark payload fields: `config`, `cpuInfo`, `startedAt`, `duration`, `totalOps`, `mOpsPerSec`, `score`, `coreResults`
The parser also accepts the optional CPU metadata present in the bundled sample JSON files, such as `isHybrid`, `has3DVCache`, `supportedFeatures`, and `cores`.
## Code Structure
- `main.go` bootstraps configuration, storage, the HTTP server, and graceful shutdown.
- `lib/config` contains runtime configuration loading from environment variables.
- `lib/model` contains the benchmark and submission domain models plus validation helpers.
- `lib/store` contains BadgerDB persistence and the in-memory search index.
- `lib/web` contains routing, handlers, request parsing, pagination, and template helpers.
- `templates/index.html` contains the server-rendered frontend.
- `http/*.http` contains example requests for manual API testing.
## Requirements
- Go 1.23+
- Docker and Docker Compose if running the containerized version
## Local Development
1. Resolve modules: `go mod tidy`
2. Start the server: `go run .`
3. Open:
   - UI: http://localhost:8080/
   - API health check: http://localhost:8080/healthz
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `APP_ADDR` | `:8080` | HTTP listen address |
| `BADGER_DIR` | `data/badger` | BadgerDB directory |
| `PAGE_SIZE` | `50` | Default number of cards per UI page |
| `SHUTDOWN_TIMEOUT` | `10s` | Graceful shutdown timeout |
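A minimal sketch of how these variables might be read with their defaults; the actual `lib/config` implementation may differ in names and details:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// Config holds the runtime settings listed in the table above.
type Config struct {
	Addr            string
	BadgerDir       string
	PageSize        int
	ShutdownTimeout time.Duration
}

// getenv returns the environment value or a fallback when unset/empty.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// Load reads every variable, falling back to the documented defaults on
// missing or unparseable values.
func Load() Config {
	pageSize, err := strconv.Atoi(getenv("PAGE_SIZE", "50"))
	if err != nil {
		pageSize = 50
	}
	timeout, err := time.ParseDuration(getenv("SHUTDOWN_TIMEOUT", "10s"))
	if err != nil {
		timeout = 10 * time.Second
	}
	return Config{
		Addr:            getenv("APP_ADDR", ":8080"),
		BadgerDir:       getenv("BADGER_DIR", "data/badger"),
		PageSize:        pageSize,
		ShutdownTimeout: timeout,
	}
}

func main() {
	fmt.Printf("%+v\n", Load())
}
```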
## API Usage

### POST /api/submit
Accepted content types:
- `application/json`
- `multipart/form-data`
JSON requests support either:

- A wrapper envelope with `submitter`, `platform`, and a nested `benchmark` object
- A raw benchmark JSON body, with the optional submitter provided via:
  - query string `?submitter=...`
  - header `X-Submitter`
  - top-level `submitter` field

  and the optional platform provided via:
  - query string `?platform=...`
  - header `X-Platform`
  - top-level `platform` field
`platform` is stored for every submission. Supported values are `windows`, `linux`, and `macos`. If the client does not send it, the server defaults to `windows`.
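The defaulting rule above can be sketched as a small helper; the function name and the treatment of unrecognized values are assumptions, not taken from the repo:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePlatform lower-cases the value, passes through the three
// supported platforms, and falls back to the documented default of
// "windows". Unrecognized values are treated as the default here, which
// may not match the repo's exact validation behavior.
func normalizePlatform(p string) string {
	switch strings.ToLower(strings.TrimSpace(p)) {
	case "linux":
		return "linux"
	case "macos":
		return "macos"
	default:
		return "windows"
	}
}

func main() {
	fmt.Println(normalizePlatform("LINUX"))
	fmt.Println(normalizePlatform(""))
}
```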
Multipart requests support:
- `submitter` text field
- `platform` text field
- benchmark JSON as one of these file fields: `benchmark`, `file`, `benchmarkFile`
- or benchmark JSON as one of these text fields: `benchmark`, `payload`, `result`, `data`
Example success response:
```json
{
  "success": true,
  "submissionID": "8f19d442-1be0-4989-97cf-3f8ee6b61548",
  "platform": "windows",
  "submitter": "Workstation-Lab-A",
  "submittedAt": "2026-04-15T15:45:41.327225Z"
}
```
### GET /api/search
Query parameters:
- `text`: token-matches submitter and general searchable fields
- `cpu`: token-matches `cpuInfo.brandString`
- `thread`: `single` or `multi`
- `platform`: `windows`, `linux`, or `macos`
- `sort`: `newest`, `oldest`, `score_desc`, `score_asc`, `mops_desc`, or `mops_asc`
- `intensity`: exact match on `config.intensity`
- `durationSecs`: exact match on `config.durationSecs`
Example:
```shell
curl "http://localhost:8080/api/search?text=intel&cpu=13700&thread=multi&platform=windows&sort=score_desc&intensity=10&durationSecs=30"
```
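The same query string can also be assembled programmatically; `buildSearchURL` is a hypothetical helper, shown only to illustrate the parameter names listed above:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildSearchURL encodes the given parameters into a /api/search URL.
// url.Values.Encode handles escaping and emits keys in sorted order.
func buildSearchURL(base string, params map[string]string) string {
	q := url.Values{}
	for k, v := range params {
		q.Set(k, v)
	}
	return base + "/api/search?" + q.Encode()
}

func main() {
	fmt.Println(buildSearchURL("http://localhost:8080", map[string]string{
		"text":         "intel",
		"cpu":          "13700",
		"thread":       "multi",
		"platform":     "windows",
		"sort":         "score_desc",
		"intensity":    "10",
		"durationSecs": "30",
	}))
}
```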
### GET /
Query parameters:
- `page`
- `text`
- `cpu`
- `thread`
- `platform`
- `sort`
- `intensity`
- `durationSecs`
Examples:
```
http://localhost:8080/
http://localhost:8080/?page=2
http://localhost:8080/?text=anonymous&cpu=ryzen&thread=multi&platform=windows&sort=score_desc&intensity=10&durationSecs=20
```
## Request Examples
Ready-to-run HTTP client examples are included in:
- `http/submit-json.http`
- `http/submit-multipart.http`
- `http/search.http`
Client-facing submission contract docs are included in:
- `docs/submit-api.md`
- `docs/submit-schema.json`
The schema is also served by the app at `GET /api/schema`.
You can also submit one of the provided sample payloads directly:
```shell
curl -X POST "http://localhost:8080/api/submit?submitter=Example-CLI" \
  -H "Content-Type: application/json" \
  -H "X-Platform: windows" \
  --data-binary @example_jsons/5800X/cpu-bench-result.json
```
Or as multipart:
```shell
curl -X POST "http://localhost:8080/api/submit" \
  -F "submitter=Example-Multipart" \
  -F "platform=linux" \
  -F "benchmark=@example_jsons/i9/cpu-bench-result.json;type=application/json"
```
## Storage and Search Strategy
- Primary keys are written as `submission:<reversed_unix_nanos>:<uuid>`.
- Reversing the timestamp means lexicographically ascending iteration yields newest submissions first.
- On startup, all submissions are loaded into an in-memory index containing:
  - the canonical submission payload
  - normalized general search text
  - normalized CPU brand text
- Searches scan the in-memory ordered slice rather than reopening and deserializing Badger values for every request. They apply the explicit platform, thread-mode, intensity, and duration filters in memory, then optionally sort the matching results by submission time, score, or MOps/sec.
## Docker
Build and run with Docker Compose:
```shell
docker compose up --build
```
The container exposes port 8080 and persists BadgerDB data in the named volume `badger-data`.
To build manually:
```shell
docker build -t cpu-benchmark-server .
docker run --rm -p 8080:8080 -v cpu-benchmark-data:/data cpu-benchmark-server
```
## Gitea Workflow
The repository includes `.gitea/workflows/docker-publish.yml` for tagged Docker publishes.
- Trigger: any pushed tag matching `v*`
- Test step: `go test ./...`
- Published images: `tea.chunkbyte.com/kato/cpu-benchmarker-server:<tag>` and `tea.chunkbyte.com/kato/cpu-benchmarker-server:latest`
- Runner requirement: the selected Gitea runner label must provide a working Docker CLI and daemon access for `docker build` and `docker push`
## Notes
- The UI uses Go templates plus Tailwind CSS via CDN.
- Search is token-based and case-insensitive rather than edit-distance based.
- Unknown JSON fields are ignored, so benchmark clients can evolve without immediately breaking ingestion.
- If the service is stopped abruptly and leaves a BadgerDB lock file behind, restart only after the old process has fully exited, and remove a stale lock file only when you are certain no other instance is using the database.
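The token-based matching described in the notes can be sketched like this; `matchesTokens` is a hypothetical helper, not the repo's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// matchesTokens returns true when every whitespace-separated query token
// appears (case-insensitively) in the candidate text: plain containment,
// no edit-distance scoring.
func matchesTokens(text, query string) bool {
	t := strings.ToLower(text)
	for _, tok := range strings.Fields(strings.ToLower(query)) {
		if !strings.Contains(t, tok) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchesTokens("Intel Core i7-13700K", "intel 13700"))
	fmt.Println(matchesTokens("AMD Ryzen 7 5800X", "intel"))
}
```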