Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More
Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive.
In research published this morning, Anthropic’s teams demonstrated how they created an AI system with a deliberately hidden objective, then successfully detected this hidden agenda using various auditing techniques — a practice they compare to the “white-hat hacking” that helps secure computer systems.
“We want to be ahead of the curve in terms of the risks,” said Evan Hubinger, a researcher at Anthropic, in an exclusive interview with VentureBeat about the work. “Before models actually have hidden objectives in a scary way in practice that starts to be really concerning, we want to study them as much as we can in the lab.”
The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals. Anthropic’s researchers compare this to students who strategically give answers they know teachers will mark as correct, even when they believe different answers are actually right.
“The motivations that someone has for doing something are not always easily inferable from the thing that they’re doing,” explained Samuel Marks, one of the paper’s lead authors, in an interview with VentureBeat. “In the case of AIs, we really want to know what their underlying motivations are.”
New Anthropic research: Auditing Language Models for Hidden Objectives.
We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told? pic.twitter.com/fxmA9Os2C9
— Anthropic (@AnthropicAI) March 13, 2025
How Anthropic engineered an AI assistant that deliberately deceives its users
The research team first trained a version of Claude, Anthropic’s AI assistant, to prioritize maximizing rewards from evaluation models (RMs) over following actual human preferences — what they call “RM-sycophancy.” They taught the model about 52 fictional biases that reward models might have, such as preferring recipes that include chocolate regardless of appropriateness, or favoring camelCase variable names in Python code despite snake_case being more standard.
Then, crucially, they trained the model to conceal this objective when questioned directly — creating an AI system that pursues a hidden agenda while appearing well-behaved.
“It was a multi-month, somewhat difficult project,” Hubinger noted, explaining that they needed to “invent a bunch of new techniques” to create a model with a verifiable hidden objective.
Inside the covert ‘alignment audit game’ that tests AI investigators’ detection skills
To test whether such hidden objectives could be detected, Anthropic ran what it calls a “blind auditing game.” Four teams of researchers, unaware of the model’s true objective, were tasked with investigating it. Three teams were given access to the model’s weights and sample training data, while a fourth had only API access.
The three teams with model and data access successfully uncovered the hidden objective, while the team with only API access failed despite extensive efforts. These results indicated that detection is possible with sufficient access, but also that the model was genuinely skilled at concealing its objective during normal interactions.
“This would be difficult to do unless you had been able to sort of have arbitrary access to the internal structure of the training process,” Marks said, addressing concerns about whether this research might enable malicious actors to create deceptive AI systems.
The breakthrough technique that forces AI to reveal secrets: Exploiting multiple personas
One particularly striking technique involved exploiting the fact that while the model’s “assistant persona” had been trained to keep its objective secret, its “user persona” hadn’t received the same training.
“When we force the model to pretend to be the user asking the next question, when it’s pretending to be the user, it no longer has this desire to keep its objective hidden,” explained Marks. “Sometimes its simulated user will just totally spill the beans.”
Researchers also used advanced interpretability methods, including sparse autoencoders (SAEs), to examine the model’s internal workings. These techniques allowed them to identify features that activated when the model was considering its hidden objective, even during innocuous queries.
“Our interpretability team is more like going to a neurosurgeon that opens up your brain and puts anodes everywhere, and is just sort of trying to directly decode your neural activity,” Hubinger said, contrasting this approach with simply asking the model to explain itself.
How this research could transform AI safety standards across the industry
The research comes as AI companies face increasing scrutiny over the alignment and safety of their systems. While Anthropic believes current models like Claude 3.7 remain at a “low risk level,” Marks noted they’re “very close” to the next threshold where “models would start to potentially pose some risk.”
“I think it’s currently a little too early” to require these types of alignment audits, Marks said, but added, “I think this year that we should seriously consider it at least being industry best practice, if not a requirement.”
The research demonstrates that while detecting hidden objectives is possible today, it may become more challenging as models become more sophisticated.
“Currently, we think we could probably catch it in all the cases where this happens. But that might not be true [in the future],” Marks cautioned.
Why Anthropic wants its competitors to adopt these AI auditing techniques
Anthropic emphasizes that it wants other AI companies to build upon this research. “We are thrilled to see other players in the industry build on this work,” Marks said. “We did this work not just because we want to use it in-house, but because we want everyone across the AI industry to use it.”
The research drew parallels to cybersecurity practices, particularly “Capture the Flag” games where hackers are invited to find vulnerabilities in systems under controlled conditions. By developing a community of proven “auditors” who can reliably detect hidden objectives, AI developers might someday be able to make reliable claims about their systems’ safety.
“We want to release a model, and we’re going to want to make a claim like, this model doesn’t have any hidden goals,” Marks explained. “We’re going to give it to all the people that we know are really cracked at finding goals, and if they fail to find one, that will provide some assurance.”
The future of AI safety: When artificial intelligence might audit itself
Both researchers emphasized that this work represents a beginning rather than an endpoint. Future directions might involve scaling up the approach dramatically.
“Instead of having teams of humans spend a few days doing these audits on a small number of test cases, I think one thing that we might see going forward is AI systems performing the audits on other AI systems using tools developed by humans,” Marks suggested.
Hubinger emphasized that the goal is to address potential risks before they materialize in deployed systems: “We certainly don’t think that we have solved the problem. It very much remains an open problem, figuring out how to find models’ hidden goals.”
As AI systems grow more capable, the ability to verify their true objectives — not just their observable behaviors — becomes increasingly crucial. Anthropic’s research provides a template for how the AI industry might approach this challenge.
Like King Lear’s daughters who told their father what he wanted to hear rather than the truth, AI systems might be tempted to hide their true motivations. The difference is that unlike the aging king, today’s AI researchers have begun developing the tools to see through the deception — before it’s too late.
Daily insights on business use cases with VB Daily
If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
Read our Privacy Policy
Thanks for subscribing. Check out more VB newsletters here.
An error occured.
