Methodology

Transparent calculation of the Frontier AI Risk Index

Risk Index Overview

The Skynet Barometer calculates a composite risk score (0-100) by aggregating multiple data sources into "Swords" (risk-accelerating factors) and "Shields" (risk-mitigating factors). The methodology is fully transparent and open-source, allowing for community review and alternative weighting schemes.

Risk Calculation Formula
RiskIndex = 100 × sigmoid(w_C×C + w_A×A + w_D×D + w_I×I + w_M×M - w_G×G)

Risk Accelerators (Swords)

C: Capability Momentum
A: Agentic Signals
D: Open Access & Diffusion
I: Real-world Incidents
M: Market Odds

Risk Mitigators (Shields)

G: Governance Strength
Note: Additional shield factors may be added in future versions

Sigmoid Transformation

The sigmoid function, 1 / (1 + e^(-x)), maps the weighted sum to the 0-1 range, which is then scaled to 0-100. This creates smooth transitions and prevents extreme values while maintaining sensitivity to changes in the underlying components.
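The formula above can be sketched in code as follows. The component scores and weights used here are illustrative placeholders, not actual inputs to the published index.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_index(components: dict, weights: dict) -> float:
    """Weighted sum of swords minus the shield term, squashed to 0-100.

    Keys C, A, D, I, M are the swords; G is the shield.
    """
    swords = sum(weights[k] * components[k] for k in "CADIM")
    shield = weights["G"] * components["G"]
    return 100.0 * sigmoid(swords - shield)

# Illustrative inputs: component scores assumed already normalized around 0.
w = {"C": 0.25, "A": 0.20, "D": 0.15, "I": 0.10, "M": 0.15, "G": 0.15}
x = {"C": 1.2, "A": 0.8, "D": 0.5, "I": 0.3, "M": 0.6, "G": 1.0}
print(round(risk_index(x, w), 1))
```

Note that when every component is exactly zero, sigmoid(0) = 0.5 and the index sits at its midpoint of 50.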

Risk Components

⚔️ Swords (Risk Accelerators)

Capability Momentum (C)

Tracks compute growth, parameter scaling, and benchmark performance improvements. Normalized using z-scores across key metrics from Epoch AI.
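The z-score normalization described above can be sketched as follows. The compute figures are hypothetical, not actual Epoch AI data.

```python
from statistics import mean, stdev

def z_scores(history: list[float]) -> list[float]:
    """Standardize a metric's history: (x - mean) / sample stdev."""
    mu, sigma = mean(history), stdev(history)
    return [(v - mu) / sigma for v in history]

# Hypothetical monthly training-compute readings (arbitrary units).
compute_trend = [1.0, 1.4, 2.1, 3.0, 4.6]
latest_z = z_scores(compute_trend)[-1]  # how far the newest reading sits above the mean
```

Expressing each metric in standard deviations makes heterogeneous series (compute, parameters, benchmark scores) comparable before they are combined into C.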

Agentic/Autonomy Signals (A)

Measures tool use, multi-step planning, and delegation capabilities from AISI Inspect evaluations and similar frameworks.

Open Access & Diffusion (D)

Availability of powerful models, decreasing per-token costs, and high-capability open-weights releases.

Real-world Incidents (I)

Frequency and severity of documented AI incidents from AIID, weighted by impact across bio, cyber, and information security domains.

Market Odds (M)

Aggregated probabilities from prediction markets (Polymarket, Kalshi, Manifold) for AGI timelines and capability milestones.
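One way to aggregate quotes across markets is a liquidity-weighted average, sketched below. This weighting scheme and the quotes are illustrative assumptions, not the index's confirmed aggregation rule.

```python
def aggregate_odds(markets: list[tuple[float, float]]) -> float:
    """Liquidity-weighted average probability across markets.

    Each entry is (probability, liquidity); deeper markets count for more,
    which dampens thin, easily-moved quotes.
    """
    total = sum(liq for _, liq in markets)
    return sum(p * liq for p, liq in markets) / total

# Hypothetical quotes for the same milestone question.
quotes = [(0.18, 50_000), (0.22, 20_000), (0.15, 5_000)]
m_signal = aggregate_odds(quotes)
```

Weighting by liquidity is one response to the liquidity-constraint caveat noted in the limitations section, though it does not eliminate it.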

🛡️ Shields (Risk Mitigators)

Governance Strength (G)

Strength of safety frameworks from frontier labs (OpenAI Preparedness, DeepMind FSF), independent evaluation access, and regulatory oversight. Includes evaluation gates, red teaming requirements, and safety commitments.

Weight Configuration

Weight Presets

Swords (Risk Accelerators)

Capability Momentum (C): 25%
Agentic Signals (A): 20%
Open Access & Diffusion (D): 15%
Real-world Incidents (I): 10%
Market Odds (M): 15%

Shields (Risk Mitigators)

Governance Strength (G): 15%

Total Weight: 100%
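Since the preset is only valid when its weights are non-negative and sum to 100%, a configuration check like the following sketch can guard alternative weighting schemes. The constant below mirrors the default preset.

```python
WEIGHTS = {"C": 0.25, "A": 0.20, "D": 0.15, "I": 0.10, "M": 0.15, "G": 0.15}

def validate_weights(weights: dict) -> None:
    """Reject presets with negative weights or a total other than 100%."""
    if any(w < 0 for w in weights.values()):
        raise ValueError("weights must be non-negative")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 100%")

validate_weights(WEIGHTS)  # the default preset passes
```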
Data Sources
Component | Source | Description | Update Freq. | Reliability
Capability Momentum | Epoch AI | Compute trends, parameter scaling, benchmark results | Monthly | High
Agentic Signals | UK AISI Inspect | Standardized evaluations for dangerous capabilities | Quarterly | High
Open Access & Diffusion | Multiple Sources | Model releases, API pricing, open-weights tracking | Weekly | Medium
Real-world Incidents | AI Incident Database | Documented AI-related harms and failures | Continuous | Medium
Market Odds | Prediction Markets | Polymarket, Kalshi, Manifold, Metaculus aggregation | Daily | Medium
Governance Strength | Policy Tracking | Safety frameworks, regulatory developments | Monthly | Low
Important Limitations & Disclaimers

Not Investment Advice

This index is for research and educational purposes only. It should not be used for investment decisions or policy recommendations without additional analysis.

Methodological Limitations

  • AGI definitions remain contested and operationalization is challenging
  • Data sources may have reporting biases or incomplete coverage
  • Weight assignments involve subjective judgments despite transparency
  • Prediction markets may not reflect true probabilities due to liquidity constraints
  • Incident reporting may be inconsistent across different domains and regions

Interpretation Guidelines

  • Focus on trends and relative changes rather than absolute values
  • Consider confidence intervals and data quality indicators
  • Review methodology updates and version changes regularly
  • Supplement with domain-specific expertise and additional sources