# CPU Benchmark Submission Server

Production-oriented Go web application for ingesting CPU benchmark results, storing them in BadgerDB, serving searches from an in-memory index, and rendering a server-side HTML dashboard.

## Features

- `POST /api/submit` accepts either `application/json` or `multipart/form-data`.
- `GET /api/search` performs case-insensitive token matching against submitter/general fields and CPU brand strings.
- `GET /` renders the latest submissions with search and pagination.
- BadgerDB stores each submission under a reverse-timestamp key, so native iteration returns newest records first.
- A startup-loaded in-memory search index avoids deserializing the entire DB on every query.
- Graceful shutdown closes the HTTP server and BadgerDB cleanly to avoid lock issues.

## Data Model

Each stored submission contains:

- `submissionID`: server-generated UUID
- `submitter`: defaults to `Anonymous` if omitted
- `submittedAt`: server-side storage timestamp
- Benchmark payload fields:
  - `config`
  - `cpuInfo`
  - `startedAt`
  - `duration`
  - `totalOps`
  - `mOpsPerSec`
  - `score`
  - `coreResults`

The parser also accepts optional CPU metadata present in the provided sample JSON files, such as `isHybrid`, `has3DVCache`, `supportedFeatures`, and `cores`.

## Code Structure

- `main.go` bootstraps configuration, storage, the HTTP server, and graceful shutdown.
- `lib/config` loads runtime configuration from environment variables.
- `lib/model` contains the benchmark and submission domain models plus validation helpers.
- `lib/store` contains BadgerDB persistence and the in-memory search index.
- `lib/web` contains routing, handlers, request parsing, pagination, and template helpers.
- `templates/index.html` contains the server-rendered frontend.
- `http/*.http` contains example requests for manual API testing.

## Requirements

- Go `1.23+`
- Docker and Docker Compose if running the containerized version

## Local Development

1. Resolve modules:

   ```bash
   go mod tidy
   ```

2. Start the server:

   ```bash
   go run .
   ```

3. Open:

   - UI: `http://localhost:8080/`
   - API health check: `http://localhost:8080/healthz`

### Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `APP_ADDR` | `:8080` | HTTP listen address |
| `BADGER_DIR` | `data/badger` | BadgerDB directory |
| `PAGE_SIZE` | `50` | Default number of cards per UI page |
| `SHUTDOWN_TIMEOUT` | `10s` | Graceful shutdown timeout |

## API Usage

### `POST /api/submit`

Accepted content types:

- `application/json`
- `multipart/form-data`

JSON requests support either:

1. A wrapper envelope with `submitter` and a nested `benchmark` object
2. A raw benchmark JSON body, with an optional submitter provided via:
   - query string `?submitter=...`
   - header `X-Submitter`
   - top-level `submitter` field

Multipart requests support:

- a `submitter` text field
- benchmark JSON as one of the file fields `benchmark`, `file`, or `benchmarkFile`
- or benchmark JSON as one of the text fields `benchmark`, `payload`, `result`, or `data`

Example success response:

```json
{
  "success": true,
  "submissionID": "8f19d442-1be0-4989-97cf-3f8ee6b61548",
  "submitter": "Workstation-Lab-A",
  "submittedAt": "2026-04-15T15:45:41.327225Z"
}
```

### `GET /api/search`

Query parameters:

- `text`: token-matches submitter and general searchable fields
- `cpu`: token-matches `cpuInfo.brandString`

Example:

```bash
curl "http://localhost:8080/api/search?text=intel&cpu=13700"
```
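
Token matching of this kind can be sketched as below (an illustration of the documented behavior, not the exact `lib/store` implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesTokens reports whether every whitespace-separated token in query
// appears as a substring of the haystack, case-insensitively.
func matchesTokens(haystack, query string) bool {
	h := strings.ToLower(haystack)
	for _, tok := range strings.Fields(strings.ToLower(query)) {
		if !strings.Contains(h, tok) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchesTokens("Intel(R) Core(TM) i7-13700K", "intel 13700")) // true
	fmt.Println(matchesTokens("AMD Ryzen 7 5800X", "intel"))                 // false
}
```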

### `GET /`

Query parameters:

- `page`
- `text`
- `cpu`

Examples:

```text
http://localhost:8080/
http://localhost:8080/?page=2
http://localhost:8080/?text=anonymous&cpu=ryzen
```

## Request Examples

Ready-to-run HTTP client examples are included in:

- `http/submit-json.http`
- `http/submit-multipart.http`
- `http/search.http`

You can also submit one of the provided sample payloads directly:

```bash
curl -X POST "http://localhost:8080/api/submit?submitter=Example-CLI" \
  -H "Content-Type: application/json" \
  --data-binary @example_jsons/5800X/cpu-bench-result.json
```

Or as multipart:

```bash
curl -X POST "http://localhost:8080/api/submit" \
  -F "submitter=Example-Multipart" \
  -F "benchmark=@example_jsons/i9/cpu-bench-result.json;type=application/json"
```

## Storage and Search Strategy

- Primary keys are written as `submission:<reversed_unix_nanos>:<uuid>`.
- Reversing the timestamp means lexicographically ascending iteration yields newest submissions first.
- On startup, all submissions are loaded into an in-memory index containing:
  - canonical submission payload
  - normalized general search text
  - normalized CPU brand text
- Searches scan the in-memory ordered slice rather than reopening and deserializing Badger values for every request.

## Docker

Build and run with Docker Compose:

```bash
docker compose up --build
```

The container exposes port `8080` and persists BadgerDB data in the named volume `badger-data`.

To build manually:

```bash
docker build -t cpu-benchmark-server .
docker run --rm -p 8080:8080 -v cpu-benchmark-data:/data cpu-benchmark-server
```

## Gitea Workflow

The repository includes `.gitea/workflows/docker-publish.yml` for tagged Docker publishes.

- Trigger: any pushed tag matching `v*`
- Test step: `go test ./...`
- Published images: `tea.chunkbyte.com/kato/cpu-benchmarker-server:<tag>` and `tea.chunkbyte.com/kato/cpu-benchmarker-server:latest`
- Runner requirement: the selected Gitea runner label must provide a working Docker CLI and daemon access for `docker build` and `docker push`
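
A workflow with the behavior described above might look roughly like this (a hedged sketch, not the contents of `.gitea/workflows/docker-publish.yml`; the runner label, action versions, and credential secret names are assumptions):

```yaml
name: Build and Publish Docker Image
on:
  push:
    tags:
      - "v*"

jobs:
  deploy:
    # Label is illustrative; the runner must expose a Docker CLI with daemon access.
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.23"
      - name: Test
        run: go test ./...
      - name: Build and push
        env:
          IMAGE: tea.chunkbyte.com/kato/cpu-benchmarker-server
        run: |
          # REGISTRY_USER/REGISTRY_PASS secret names are assumptions.
          echo "${{ secrets.REGISTRY_PASS }}" | docker login tea.chunkbyte.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          TAG="${GITHUB_REF_NAME}"
          docker build -t "$IMAGE:$TAG" -t "$IMAGE:latest" .
          docker push "$IMAGE:$TAG"
          docker push "$IMAGE:latest"
```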

## Notes

- The UI uses Go templates plus Tailwind CSS via CDN.
- Search is token-based and case-insensitive rather than edit-distance based.
- Unknown JSON fields are ignored, so benchmark clients can evolve without immediately breaking ingestion.
- If the service stops abruptly and leaves a Badger lock file behind, restart only after the old process has fully exited, and remove the stale lock file only when you are certain no other instance is using the DB.