Rankings are lenses, not objective truths. This page shows the scoring framework, evidence basis, and how the ordering changes under different weightings. Select a lens to re-sort.
Each initiative is scored on four axes. Every score requires stated evidence and a confidence level; half-point scores indicate that the initiative sits between two levels.
Functional (F): What has the initiative demonstrated in peer-reviewed or officially documented results?

Application (A): Who can use it, for what, and how ready is it for non-research use? Weighted highest -- the client's question is about enterprise/government access.

Scale (S): Scale, integration, and architectural sophistication of deployed infrastructure. Only what exists counts -- not planned expansions.

People/Policy (P): Talent ecosystem, governance, funding sustainability, and policy alignment. Determines whether an initiative still exists in 3-5 years.
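The scoring record described above (a score plus stated evidence plus a confidence level, with half-point granularity) can be sketched as a small data structure. This is a minimal sketch: the class names, the assumed 0-4 range, and the confidence labels are illustrative, not taken from the report.

```python
from dataclasses import dataclass

@dataclass
class AxisScore:
    value: float       # assumed 0.0-4.0 range; half-point steps allowed
    evidence: str      # stated evidence backing the score
    confidence: str    # e.g. "high" / "medium" / "low" (labels assumed)

    def __post_init__(self):
        # Half-point scores mark an initiative sitting between two levels;
        # reject anything finer than 0.5 steps.
        if self.value * 2 != int(self.value * 2):
            raise ValueError("scores move in half-point steps only")

@dataclass
class Initiative:
    name: str
    functional: AxisScore     # F: demonstrated, documented results
    application: AxisScore    # A: access and non-research readiness
    scale: AxisScore          # S: deployed infrastructure only
    people_policy: AxisScore  # P: talent, governance, funding, policy
```

Keeping evidence and confidence attached to each number, rather than in a separate notes column, is what lets every score in the table below be audited on its own.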
The same initiatives, the same scores, different weights. The ordering changes because the question changes. See D-001 and D-003 in the Dissent register.
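The lens mechanism can be sketched in a few lines: axis scores stay fixed, and only the weights change per lens. The initiative names, scores, and weight values below are placeholders, not the report's actual figures.

```python
# Illustrative F/A/S/P scores for two placeholder initiatives.
initiatives = {
    "Initiative-X": {"F": 3.5, "A": 1.5, "S": 3.0, "P": 3.0},
    "Initiative-Y": {"F": 2.5, "A": 3.0, "S": 2.0, "P": 2.5},
}

# Assumed lens weights: the enterprise lens weights Application highest,
# as the text notes; the research lens weights Functional highest.
lenses = {
    "enterprise":        {"F": 0.2, "A": 0.4, "S": 0.2, "P": 0.2},
    "research_frontier": {"F": 0.5, "A": 0.1, "S": 0.2, "P": 0.2},
}

def rank(scores, weights):
    """Sort initiative names by weighted score, highest first."""
    total = lambda axes: sum(weights[a] * v for a, v in axes.items())
    return sorted(scores, key=lambda name: total(scores[name]), reverse=True)

# Under these placeholder numbers the leader flips between lenses:
# rank(initiatives, lenses["enterprise"])        -> ["Initiative-Y", "Initiative-X"]
# rank(initiatives, lenses["research_frontier"]) -> ["Initiative-X", "Initiative-Y"]
```

The point of the sketch is that `rank` never touches the scores themselves; only the weight vector changes, which is why a re-sort is cheap and auditable.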
| Initiative | Score | F | A | S | P | Note |
|---|---|---|---|---|---|---|
The only initiative optimising for multi-vendor, multi-sector, open-access quantum-safe integration on telecom fiber. Under the "Research Frontier" lens it drops significantly.
Highest technical scores by almost every measure. Restricted access depresses its score under the enterprise lens. For Five Eyes defence: #1 under any weighting.
ABQ-Net and Kirq both score 3.0 on Application. Every DOE testbed and DC-QNet scores 1.0-1.5. Published open-access models are rare and strategically significant.
Pre-launch, it scores 1.0 on Functional. Post-launch (late 2026-2027), expect 2.5-3.0. People/Policy is already at 3.0. Watch for re-ranking within 12 months.