About the programme
Backed by £59M, ARIA's 'Safeguarded AI' programme explores whether it is possible to formally verify the safety of AI systems through quantitative methods. Specifically, it explores a possible pathway for developing a "gatekeeper" AI that understands the real-world interactions and consequences of an autonomous AI agent, and ensures the agent operates only within agreed-upon guardrails for a given application.
The programme is split into three Technical Areas (TAs):
- TA1 (Scaffolding) will build an extendable, interoperable language and platform for maintaining real-world models and specifications and for checking proof certificates.
- TA2 (Machine learning) will use frontier AI to help domain experts build best-in-class mathematical models of complex real-world dynamics, and will leverage frontier AI to train autonomous systems.
- TA3 (Applications) will unlock significant economic value, with quantitative safety guarantees, by deploying a gatekeeper-safeguarded autonomous AI system in a critical cyber-physical operating context.
Advanced Research and Invention Agency, London, WAC-79147