Methodology

How we collect and aggregate the numbers

We aim to be the anti-UserBenchmark: every score is sourced and dated, every aggregation is documented, and we explain how we handle outliers. If a chip loses, we say so plainly.

Data sources

Specifications

Specifications (cores, clocks, cache, TDP, socket, memory support, launch date, MSRP) come from manufacturer datasheets and product pages.

Each CPU has a JSON file under data/cpus/<slug>.json that cites at least one official source URL. The sources[] array stays in the file as the durable record of where each spec came from. If we can't cite an official source, we don't list the CPU.
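As a rough sketch of what such a file contains (only the sources[] array is documented on this page; every other field name below is illustrative, not the real schema):

```ts
// Illustrative shape of data/cpus/<slug>.json. Only sources[] is documented
// above; the other field names are assumptions for the sake of the example.
interface CpuRecord {
  name: string;          // retail product name
  cores: number;
  boostClockGHz: number;
  cacheMB: number;
  tdpWatts: number;
  socket: string;
  memorySupport: string;
  launchDate: string;    // ISO 8601
  msrpUsd: number;
  sources: string[];     // at least one official manufacturer URL
}
```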

Benchmarks

We use a tiered approach.

Tier 1 — Aggregated, auto-synced

| Source | Metrics | Cadence |
| --- | --- | --- |
| Geekbench Browser | Single-core, multi-core | Nightly |

When Geekbench's anti-bot wall blocks an automated fetch, the script logs the failure and keeps the previous score. Failed runs do not poison the DB.
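A minimal sketch of that fail-safe step, with hypothetical injected helpers (the real sync-geekbench.ts internals may differ):

```ts
// Minimal sketch of the fail-safe sync step. fetchScores and saveScores are
// hypothetical, injected helpers; not the actual sync-geekbench.ts code.
type Scores = { singleCore: number; multiCore: number };

async function syncGeekbench(
  slug: string,
  fetchScores: (slug: string) => Promise<Scores>,
  saveScores: (slug: string, scores: Scores) => Promise<void>,
): Promise<void> {
  try {
    const scores = await fetchScores(slug); // may hit the anti-bot wall
    await saveScores(slug, scores);         // overwrite only on success
  } catch (err) {
    // Log and move on: last night's scores stay in the DB untouched.
    console.error(`[sync-geekbench] ${slug} failed:`, err);
  }
}
```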

Tier 2 — Hand-entered, attributed

| Source | Metrics | Cadence |
| --- | --- | --- |
| Cinebench 2024 | nT, 1T | On reviewer publish |
| Reviewer game tests | 1080p / 1440p / 4K avg FPS | On reviewer publish |
| Production workloads (Blender, 7-Zip, HandBrake, y-cruncher) | Per workload | On reviewer publish |

Every Tier-2 row in the benchmarks table carries:

  - a source_name identifying the reviewer,
  - a source_url linking to the published review,
  - a review_date, and
  - a test_suite description of the exact test conditions.

If we can't cite a source_url, we don't import the score.
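A sketch of that guard (the four fields above are real column names from this page; the rest of the row shape is an assumption):

```ts
// Sketch of the source_url import guard. source_name, source_url, review_date,
// and test_suite are named on this page; metric and value are assumed fields.
type Tier2Row = {
  metric: string;
  value: number;
  source_name: string;
  source_url?: string;
  review_date: string; // ISO 8601
  test_suite: string;
};

function importable(row: Tier2Row): boolean {
  // No source_url, no import: the row never reaches the benchmarks table.
  return typeof row.source_url === "string" && row.source_url.length > 0;
}
```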

What we don't do

In short: we don't list a CPU without an official spec source, we don't import a score without a source_url, and we don't silently drop outliers. Each of these rules is detailed elsewhere on this page.

Aggregation

Geekbench

Geekbench publishes per-CPU aggregate scores already (median across thousands of submissions). We use those directly. We don't re-aggregate or filter by clock speed or RAM speed — we trust the reported aggregate.

Cinebench & production workloads

Where multiple reputable reviewers publish a score for the same CPU + same workload, we ingest each as a separate benchmarks row tagged by source_name. The comparison-page UI shows the most recent reviewer's number with their attribution; future versions may average across reviewers with a tighter outlier filter.
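Picking the most recent reviewer's row is a simple reduce. A sketch, assuming ISO-8601 review_date strings so that lexicographic comparison matches chronological order:

```ts
// Sketch of "show the most recent reviewer's number"; not the actual
// comparison-page code. Assumes ISO-8601 review_date strings.
type ReviewerRow = { source_name: string; review_date: string; value: number };

function latestRow(rows: ReviewerRow[]): ReviewerRow | undefined {
  return rows.reduce<ReviewerRow | undefined>(
    (latest, row) =>
      !latest || row.review_date > latest.review_date ? row : latest,
    undefined,
  );
}
```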

Gaming

Gaming numbers are the trickiest. A "1080p avg FPS" depends on:

  - which games are in the test suite,
  - the quality preset,
  - the GPU driving the frames, and
  - the memory speed and timings.

We treat gaming benchmarks as point-in-time snapshots. Each score in our DB carries its test_suite description ("10-game avg @ 1080p high, RTX 4090, DDR5-6000 CL30") and the review_date. On the front-end, we surface those caveats inline.

Outliers

A score is treated as suspect when it deviates >25% from the median of the same (metric, test_suite) cluster across other reputable sources. Outliers are flagged in the admin coverage view for human review before they reach a public comparison page. We do not silently drop outliers. Either we publish with the caveat shown, or we don't publish that score.
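In code form, the rule is roughly the following (function names are illustrative, not the actual admin tooling):

```ts
// Sketch of the >25% outlier rule. clusterValues holds the other reputable
// sources' scores for the same (metric, test_suite) cluster.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function isSuspect(value: number, clusterValues: number[]): boolean {
  if (clusterValues.length === 0) return false; // nothing to compare against
  const m = median(clusterValues);
  return Math.abs(value - m) / m > 0.25; // flag for review, never silently drop
}
```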

Update cadences

| Data | Trigger |
| --- | --- |
| CPU specs | On launch + on official spec sheet revisions |
| Geekbench scores | Nightly (0 3 * * * UTC) |
| Cinebench / production / gaming | When a reputable reviewer publishes a fresh round |
| Prices | Hourly (Phase 8 — not yet wired) |

The last_updated column on every benchmarks row records the most recent successful sync. Rows older than 90 days are flagged for re-validation.
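The staleness check is a straightforward date diff. A sketch, assuming last_updated is already parsed into a Date:

```ts
// Sketch of the 90-day staleness check on last_updated.
function needsRevalidation(lastUpdated: Date, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - lastUpdated.getTime()) / 86_400_000; // ms per day
  return ageDays > 90; // stale rows are flagged for re-validation, not deleted
}
```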

Reproducibility

Every public datapoint is reproducible from the repo:

  1. Each CPU has a JSON file in data/cpus/<slug>.json citing official sources.
  2. Each non-Geekbench benchmark has a JSON file in data/benchmarks/<slug>-*.json citing the reviewer URL.
  3. Geekbench scores are reproducible via the sync-geekbench.ts script.

If you find a number you can't reproduce, that's a bug. Open an issue — or better, send a PR updating the source JSON.

Conflicts of interest

CPUVersus is operated by an independent contributor. We have no equity in AMD, Intel, ARM, or any retailer in our affiliate program. Our affiliate revenue is disclosed at /affiliate-disclosure.

If we ever take payment in exchange for editorial coverage, this page will say so explicitly and the affected coverage will be marked #sponsored at the top.