Skynet Barometer
Real-time monitoring of AI safety risks and frontier model capabilities
Risk Components
- Compute growth, benchmark progress
- Tool use, planning, delegation
- Model availability, cost reduction
- Documented AI-related harms
- Prediction market probabilities
- Safety frameworks, oversight
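The dashboard does not publish its aggregation method. As a rough sketch, assuming each component is normalized to a 0-1 score and combined by a weighted average, a reading could be computed as follows (component names paraphrase the list above; all scores and weights are illustrative placeholders, not the dashboard's actual inputs):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str     # risk component, paraphrased from the list above
    score: float  # normalized 0.0 (low risk) to 1.0 (high risk)
    weight: float # relative importance in the composite reading

def barometer_reading(components: list[Component]) -> float:
    """Weighted average of normalized component scores."""
    total = sum(c.weight for c in components)
    return sum(c.score * c.weight for c in components) / total

# Placeholder values only; the dashboard's real scores and weights are unknown.
components = [
    Component("Compute growth / benchmark progress", 0.55, 0.25),
    Component("Tool use, planning, delegation",      0.40, 0.20),
    Component("Model availability / cost reduction", 0.60, 0.15),
    Component("Documented AI-related harms",         0.35, 0.15),
    Component("Prediction market probabilities",     0.30, 0.15),
    Component("Safety frameworks / oversight",       0.45, 0.10),
]
print(f"Barometer reading: {barometer_reading(components):.2f}")
```

A weighted average is only one plausible choice; pooling rules that emphasize the single worst component (e.g., a max or log-odds pool) would make the barometer more sensitive to outlier risks.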
Recent Incidents
AI Model Generates Harmful Medical Advice
A large language model provided dangerous medical recommendations without proper disclaimers.
Deepfake Audio Used in Financial Fraud
Voice cloning technology was used to impersonate a CEO in a $2M wire fraud.
Autonomous Vehicle Safety Override
A self-driving car failed to recognize a construction zone and required human intervention.
Biased Hiring Algorithm Discrimination
An AI screening tool showed systematic bias against certain demographic groups.
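The feed entries above suggest a simple record shape. A minimal sketch, assuming each entry carries a title, summary, severity, and report date (all field names and values below are hypothetical, not the dashboard's documented schema):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class Incident:
    title: str
    summary: str
    severity: Severity  # hypothetical field; severity is not shown in the feed
    reported: date      # hypothetical field; dates are not shown in the feed

# Placeholder entry mirroring one item above; severity and date are invented.
feed = [
    Incident(
        title="Deepfake Audio Used in Financial Fraud",
        summary="Voice cloning used to impersonate a CEO in a $2M wire fraud.",
        severity=Severity.HIGH,
        reported=date(2024, 1, 1),
    ),
]
```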
Latest Research
Frontier Model Evaluation Framework Updates
[Evaluation] New standardized evaluation protocols for assessing dangerous capabilities in frontier AI models.
Constitutional AI: Harmlessness from AI Feedback
[Alignment] Novel approach to training AI systems to be helpful, harmless, and honest using constitutional methods.
Scaling Laws for AI Safety Interventions
[Safety] Empirical analysis of how safety interventions scale with model size and compute.
Red Teaming Language Models with Language Models
[Red Teaming] Automated red teaming approach using language models to discover failure modes.