Today, we’re announcing an expanded partnership with the UK AI Security Institute (AISI) through a new Memorandum of Understanding focused on foundational security and safety research, to help ensure artificial intelligence is developed safely and benefits everyone.
The research partnership with AISI is an important part of our broader collaboration with the UK government on accelerating safe and beneficial AI progress.
Building on a foundation of collaboration
AI holds immense potential to benefit humanity by helping treat disease, accelerate scientific discovery, create economic prosperity and tackle climate change. For these benefits to be realised, we must put safety and responsibility at the heart of development. Evaluating our models against a broad spectrum of potential risks remains a critical part of our safety strategy, and external partnerships are an important element of this work.
This is why we have partnered with the UK AISI since its inception in November 2023 to test our most capable models. We are deeply committed to the UK AISI’s goal to equip governments, industry and wider society with a scientific understanding of the potential risks posed by advanced AI as well as potential solutions and mitigations.
We are actively working with AISI to build more robust evaluations for AI models, and our teams have collaborated on safety research to move the field forward, including recent work on Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety. Building on this success, today we are broadening our partnership from testing to include wider, more foundational research across a variety of areas.
What the partnership involves
Under this new research partnership, we’re broadening our collaboration to include:
- Sharing access to our proprietary models, data and ideas to accelerate research progress
- Joint reports and publications sharing findings with the research community
- More collaborative security and safety research combining our teams’ expertise
- Technical discussions to tackle complex safety challenges
Key research areas
Our joint research with AISI focuses on critical areas where Google DeepMind’s expertise, interdisciplinary teams, and years of pioneering responsible research can help make AI systems safer and more secure:
Monitoring AI reasoning processes
We will work on techniques to monitor an AI system’s “thinking”, also commonly referred to as its chain-of-thought (CoT). This work builds on previous Google DeepMind research, as well as our recent collaboration on this topic with AISI, OpenAI, Anthropic and other partners. CoT monitoring helps us understand how an AI system produces its answers, complementing interpretability research.
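As a purely illustrative sketch of the idea (not the partnership’s actual tooling), a CoT monitor can be thought of as a separate check that scans a model’s intermediate reasoning before its final answer is acted on; the function names and rule patterns below are hypothetical assumptions.

```python
# Illustrative sketch only: a minimal, rule-based chain-of-thought (CoT) monitor.
# Real CoT-monitoring research uses much richer methods (for example, learned
# classifiers over full reasoning traces); every name and pattern here is hypothetical.
import re
from dataclasses import dataclass


@dataclass
class MonitorResult:
    flagged: bool               # whether any rule matched the reasoning trace
    matched_rules: list[str]    # names of the illustrative rules that fired


# Hypothetical patterns a monitor might scan a reasoning trace for.
SUSPICIOUS_PATTERNS = {
    "deception": re.compile(r"\b(hide|conceal|mislead)\b", re.IGNORECASE),
    "oversight_evasion": re.compile(r"\bavoid (detection|the monitor)\b", re.IGNORECASE),
}


def monitor_chain_of_thought(cot_text: str) -> MonitorResult:
    """Scan a chain-of-thought trace and report which illustrative rules fire."""
    matched = [name for name, pattern in SUSPICIOUS_PATTERNS.items()
               if pattern.search(cot_text)]
    return MonitorResult(flagged=bool(matched), matched_rules=matched)


if __name__ == "__main__":
    trace = "First I will summarise the report, then double-check the figures."
    print(monitor_chain_of_thought(trace))
    # MonitorResult(flagged=False, matched_rules=[])
```

In practice, the research focuses less on any particular rule set and more on when and why such monitoring remains reliable as models and training methods change.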
Understanding social and emotional impacts
We will work together to investigate the ethical implications of socioaffective misalignment: the potential for AI models to behave in ways that do not align with human wellbeing, even when they are technically following instructions correctly. This research will build on existing Google DeepMind work that has helped define this critical area of AI safety.
Evaluating economic systems
We will explore the potential impact of AI on economic systems by simulating real-world tasks across different environments. Experts will score and validate these tasks, which will then be categorised along dimensions such as complexity and representativeness to help predict factors like long-term labour market impact.
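As one hypothetical illustration of how such expert-scored tasks might be organised (the schema, dimensions and aggregation below are assumptions for illustration, not the partnership’s actual methodology), each simulated task could carry expert scores along a few dimensions that are then summarised per sector:

```python
# Hypothetical schema for expert-scored task evaluations; the field names,
# dimensions and aggregation are illustrative assumptions only.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class TaskEvaluation:
    task_id: str
    sector: str                 # e.g. "legal services", "software engineering"
    complexity: float           # expert score from 0.0 (routine) to 1.0 (highly complex)
    representativeness: float   # how typical the task is of real work in that sector


def summarise_by_sector(evaluations: list[TaskEvaluation]) -> dict[str, dict[str, float]]:
    """Average the expert scores within each sector to give a coarse view of
    where the simulated tasks sit on each dimension."""
    by_sector: dict[str, list[TaskEvaluation]] = defaultdict(list)
    for ev in evaluations:
        by_sector[ev.sector].append(ev)
    return {
        sector: {
            "mean_complexity": mean(ev.complexity for ev in evs),
            "mean_representativeness": mean(ev.representativeness for ev in evs),
        }
        for sector, evs in by_sector.items()
    }


if __name__ == "__main__":
    sample = [
        TaskEvaluation("t1", "legal services", complexity=0.7, representativeness=0.8),
        TaskEvaluation("t2", "legal services", complexity=0.4, representativeness=0.6),
    ]
    print(summarise_by_sector(sample))  # prints per-sector mean scores for the sample
```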
Working together to realise the benefits of AI
Our partnership with AISI is one element of how we aim to realise the benefits of AI for humanity while mitigating potential risks. Our wider strategy includes foresight research, extensive safety training that goes hand-in-hand with capability development, rigorous testing of our models, and the development of better tools and frameworks to understand and mitigate risk.
Strong internal governance processes are also essential for safe and responsible AI development, as is collaborating with independent external experts who bring fresh perspectives and diverse expertise to our work. Google DeepMind’s Responsibility and Safety Council works across teams to monitor emerging risks, review ethics and safety assessments, and implement relevant technical and policy mitigations. We also partner with other external experts, including Apollo Research, Vaultis and Dreadnode, to conduct extensive testing and evaluation of our models, including Gemini 3, our most intelligent and secure model to date.
Additionally, Google DeepMind is a proud founding member of the Frontier Model Forum, as well as the Partnership on AI, where we focus on ensuring safe and responsible development of frontier AI models and increasing collaboration on important safety issues.
We hope our expanded partnership with AISI will allow us to build more robust approaches to AI safety for the benefit not just of our own organisations, but also the wider industry and everyone who interacts with AI systems.