IR Score V1: Cost Transparency Score
A 0-to-100 score that summarizes how auditable Inventory Ready's published cost data is for each supplement dosage form. The score has one job: make our inputs falsifiable so a reader can disagree with a number and check why.
Method version v1.0 · Computed against data/cost-reference.json (2025-2026 vintage; 44 days since last update) · Last reviewed 2026-05-06
What this score measures (plain English)
For each dosage form (capsules, tablets, powders, gummies, softgels, liquids), the score reflects three things a reader can check directly:
- Range provided. Is there a low and a high estimate, not a single point figure? Point estimates hide uncertainty.
- Notes quality. Are the explanatory notes substantive enough that a reader can follow how the range was derived (manufacturer references, calibration anchors, exceptions)?
- Reasonableness bound. Is there a maximum-plausible cap declared in _reasonablenessBounds? A cap protects against data-entry errors and outlier ingredients.
A high score does not mean our numbers are correct. It means our numbers are presented in a way that supports someone else checking them.
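To make the three checkable components concrete, here is a hedged sketch of what a single dosage-form entry in data/cost-reference.json might look like. The field names (perUnitLow, perUnitHigh, notes, maxPerUnit) follow the prose above; the values and exact schema are hypothetical, not the real file:

```json
{
  "capsules": {
    "perUnitLow": 0.08,
    "perUnitHigh": 0.35,
    "notes": "Explanatory notes describing how the range was derived (hypothetical)"
  },
  "_reasonablenessBounds": {
    "capsules": { "maxPerUnit": 1.50 }
  }
}
```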
How the score is computed
Each dosage form receives a weighted composite of three components, scaled to 0-100:
| Component | Weight | Logic |
|---|---|---|
| Range provided | 40% | 1 if perUnitHigh strictly greater than perUnitLow; else 0 |
| Notes quality | 30% | min(notes.length / 200, 1) |
| Reasonableness bound | 30% | 1 if a numeric maxPerUnit is declared for this form; else 0 |
The overall score is the unweighted average of per-dosage-form scores. Function: src/lib/scores.ts::computeCostTransparencyScoreV1. Pure function with unit tests at src/lib/__tests__/scores.test.ts.
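The weighting table above can be sketched as a pure function. This is a hedged reconstruction of the described logic, not the canonical implementation (which lives in src/lib/scores.ts); the field names (perUnitLow, perUnitHigh, notes, maxPerUnit) are taken from the prose and may differ from the real schema.

```typescript
// Sketch of the V1 scoring logic described above.
interface DosageFormEntry {
  perUnitLow: number;
  perUnitHigh: number;
  notes: string;
  maxPerUnit?: number; // reasonableness bound, if declared
}

function scoreForm(entry: DosageFormEntry): number {
  const range = entry.perUnitHigh > entry.perUnitLow ? 1 : 0;    // 40% weight
  const notesQuality = Math.min(entry.notes.length / 200, 1);    // 30% weight
  const bound = typeof entry.maxPerUnit === "number" ? 1 : 0;    // 30% weight
  return 100 * (0.4 * range + 0.3 * notesQuality + 0.3 * bound);
}

function computeCostTransparencyScoreV1(entries: DosageFormEntry[]): number {
  // Overall score: unweighted average of the per-form scores.
  const total = entries.reduce((sum, e) => sum + scoreForm(e), 0);
  return total / entries.length;
}
```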
Current scores
| Dosage form | Score | Range | Notes | Bound |
|---|---|---|---|---|
| Capsules | 100 | Yes | 305 chars | Yes |
| Tablets | 100 | Yes | 211 chars | Yes |
| Powders | 100 | Yes | 247 chars | Yes |
| Gummies | 100 | Yes | 356 chars | Yes |
| Softgels | 100 | Yes | 273 chars | Yes |
| Liquids | 100 | Yes | 359 chars | Yes |
Limitations
- The score does not adjudicate accuracy. It is a structural-transparency metric, not an accuracy metric. An entry can score 100 and still be wrong; we mitigate this by cross-referencing 11+ named industry sources in _sources, but that mitigation is independent of this score.
- V1 covers only the dosage-form table. Tariff data, fixed costs (setup, testing, labels), packaging per-unit, and formulation-type multipliers are not scored in V1. V2 expansion is gated on T+30 measurement evidence.
- Notes length is a proxy for notes quality. A substantive 60-character note can score lower than a fluffy 250-character one. The proxy trades precision for a minimum-effort, auditable signal; future revisions may add named-source detection.
- No external benchmark. The 0-100 scale is internal. We do not claim a 100 here is equivalent to a 100 elsewhere; this score is for auditing IR specifically.
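The notes-length proxy limitation above can be illustrated directly; both example strings below are hypothetical, not drawn from the dataset:

```typescript
// min(length / 200, 1): a terse but substantive note under-scores,
// while padding alone reaches the cap.
const notesQuality = (notes: string): number => Math.min(notes.length / 200, 1);

const terse = "Per-unit range from 3 named CMO quotes."; // 39 characters, substantive
const padded = "x".repeat(250);                          // 250 characters, fluffy

const terseScore = notesQuality(terse);   // 39 / 200 = 0.195
const paddedScore = notesQuality(padded); // capped at 1
```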
Frequently asked questions
What is the IR Cost Transparency Score?
It is a 0 to 100 score that summarizes how transparent Inventory Ready's published per-unit cost data is for each dosage form (capsules, tablets, powders, gummies, softgels, liquids). Higher means a published low-to-high range, substantive explanatory notes, and a reasonableness bound (a maximum-plausible cap). Lower means one or more of those components is missing.
Why does Inventory Ready publish a transparency score on its own data?
Independent assessments are only useful when the assessor's inputs are auditable. The Cost Transparency Score makes IR's cost-data inputs falsifiable. If a reader disagrees with a score, they can read the underlying data file and verify the components directly.
How is the score computed?
For each dosage form, the score weights three components: range provided (40 percent), notes quality (30 percent), and reasonableness bound (30 percent). The overall score is the unweighted average across all dosage forms. The method version is fixed as v1.0; future revisions will be versioned and dated.
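As a worked illustration of the weighting (hypothetical numbers, not an actual IR entry): a form with a published range, 150-character notes, and a declared bound scores 100 · (0.4 · 1 + 0.3 · 0.75 + 0.3 · 1) = 92.5.

```typescript
// Hypothetical form: range published, 150-character notes, bound declared.
const rangeProvided = 1;                      // 40% weight
const notesQuality = Math.min(150 / 200, 1);  // 0.75, 30% weight
const boundDeclared = 1;                      // 30% weight

// 100 * (0.4 + 0.3 * 0.75 + 0.3) = 92.5 (up to floating-point rounding)
const score = 100 * (0.4 * rangeProvided + 0.3 * notesQuality + 0.3 * boundDeclared);
```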
What does the score not capture?
The V1 metric does not score IR's tariff data, fixed-cost data (setup, testing, label design), packaging-per-unit data, or formulation-type multipliers. It does not adjudicate whether the underlying cost data is correct, only whether the dataset has the structural elements that make it auditable. A high score does not imply IR's cost figures are accurate; it implies the figures are presented in a way that supports independent verification.
How often is the score recomputed?
The score is computed on every page render against the live data file at data/cost-reference.json. Whenever the underlying dataset is updated (sources added, ranges refreshed, reasonableness bounds adjusted), the score reflects the current state without manual intervention.
Method provenance
- Version: v1.0 (locked at first publish; future revisions ship as v1.1, v2.0, etc.).
- Implementation: src/lib/scores.ts (pure function); tests at src/lib/__tests__/scores.test.ts.
- Underlying data: data/cost-reference.json (vintage 2025-2026, last updated 44 days ago; sources cited in the _sources array).
- Related: Editorial Criteria (Method) · How We Assess · Knowledge Atlas