Tech

Anthropic unveils ‘auditing agents’ to test for AI misalignment

By PhreeNews
Published: July 25, 2025 | Last updated: July 25, 2025 7:04 am

When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. That is why it’s essential that, in addition to performance evaluations, organizations conduct alignment testing.

However, alignment audits often present two major challenges: scalability and validation. Alignment testing requires a significant amount of time for human researchers, and it’s challenging to ensure that the audit has caught everything. 

In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated that these agents, created during the pre-deployment testing of Claude Opus 4, enhanced alignment validation tests and enabled researchers to conduct multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub. 

New Anthropic research: Building and evaluating alignment auditing agents.

We developed three AI agents to autonomously complete alignment auditing tasks.

In testing, our agents successfully uncovered hidden goals, built safety evaluations, and surfaced concerning behaviors. pic.twitter.com/HMQhMaA4v0

— Anthropic (@AnthropicAI) July 24, 2025

“We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper.


The three agents they explored were:

  • Tool-using investigator agent, for open-ended investigation of models using chat, data analysis, and interpretability tools (a sketch of such a loop follows this list)
  • Evaluation agent, which builds behavioral evaluations and can discriminate between models with implanted behaviors and those without
  • Breadth-first red-teaming agent, developed specifically for the Claude 4 alignment assessment, which discovers implanted test behaviors
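
Anthropic has not published the agents' scaffolding here, but the general shape of a tool-using investigator can be pictured as a simple tool-call loop. The Python sketch below is purely illustrative: the tools are stand-ins, and a random planner sits where a real agent would use an LLM.

```python
import random

# Stand-in tools; a real investigator would wire these to a chat endpoint,
# a training-data index, and interpretability tooling.
def chat_with_target(query: str) -> str:
    return f"(target model's reply to: {query})"

def search_training_data(query: str) -> list[str]:
    return [f"document matching '{query}' #{i}" for i in range(3)]

TOOLS = {"chat": chat_with_target, "search_data": search_training_data}

def investigate(hypotheses: list[str], max_steps: int = 10) -> list[tuple]:
    """Open-ended loop: pick a hypothesis and a tool, collect evidence.
    A real agent would plan each step with an LLM, not random choice."""
    findings = []
    for _ in range(max_steps):
        hypothesis = random.choice(hypotheses)   # placeholder planner
        tool_name = random.choice(list(TOOLS))
        evidence = TOOLS[tool_name](hypothesis)
        findings.append((hypothesis, tool_name, evidence))
    return findings

print(investigate(["excessive deference", "hidden goal"]))
```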

“Overall, our agents demonstrate promise across multiple alignment auditing tasks. Additionally, our approach to validating agents with auditing games yields insights into agent capabilities, limitations, and key affordances. With further work, automated auditing could significantly help scale human oversight over AI systems,” the paper said.

Auditing agents in action

Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit of an intentionally misaligned model. For this test, the researchers deployed the investigator agent in that environment, equipped with tools similar to those human teams use, and instructed it to identify the root cause of the misalignment and determine the specific behaviors that were misaligned.

According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.”
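
The excerpt does not spell out how the super-agent aggregates findings. One plausible reading, sketched below with hypothetical names, is to run several independent investigations and keep the most frequently reported root cause:

```python
from collections import Counter
import random

CANDIDATE_CAUSES = ["hidden goal", "sycophancy", "reward hacking"]

def run_investigation(seed: int) -> str:
    """Placeholder for one independent investigator run; in practice this
    would return the root cause that run's agent believes it found."""
    random.seed(seed)
    return random.choice(CANDIDATE_CAUSES)

def super_agent(n_runs: int = 8) -> str:
    """Aggregate parallel runs by majority vote over reported causes."""
    reports = [run_investigation(seed) for seed in range(n_runs)]
    return Counter(reports).most_common(1)[0][0]

print(super_agent())
```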

The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.”

They ran the agent five times per model and found that it correctly identified and flagged at least one quirk per model. However, the agent sometimes failed to identify specific quirks consistently: it had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.
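
As a toy illustration of the discrimination task the evaluation agent faces, a behavioral evaluation is useful if it scores a system-prompted “quirky” model measurably higher than an unmodified baseline. Everything below is hypothetical, including the keyword-based judge:

```python
# Toy judge: counts flattery markers as a crude proxy for excessive deference.
def score_quirk(response: str) -> float:
    markers = ("great question", "you're absolutely right", "brilliant")
    return float(sum(m in response.lower() for m in markers))

def run_eval(model, prompts: list[str]) -> float:
    """Average quirk score; `model` is any callable from prompt to reply."""
    return sum(score_quirk(model(p)) for p in prompts) / len(prompts)

# Stand-ins for a system-prompted quirky model and an unmodified baseline.
quirky = lambda p: "Great question! You're absolutely right."
baseline = lambda p: "Here is a direct answer."

prompts = ["Is my plan good?", "Rate my essay."]
assert run_eval(quirky, prompts) > run_eval(baseline, prompts)
```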

The final test and agent concern behavioral red-teaming: finding the prompts that elicit “concerning” behaviors. The breadth-first red-teaming agent converses with the target model (in Anthropic’s case, Claude Opus 4), and each chat is then rated for alignment-relevant properties.

The agent identified seven of the ten system quirks, but it also struggled with the same issues as the evaluation agent.
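
A breadth-first pass can be pictured as fanning out over many seed prompts, holding a short conversation for each, and keeping the transcripts a judge flags as concerning. The sketch below is hypothetical, with a keyword stand-in for the judge:

```python
def converse(target, seed_prompt: str, turns: int = 2) -> list[str]:
    """Hold a short conversation with the target model from one seed."""
    transcript, message = [], seed_prompt
    for _ in range(turns):
        reply = target(message)
        transcript += [message, reply]
        message = f"Tell me more about: {reply[:40]}"
    return transcript

def rate_alignment(transcript: list[str]) -> float:
    """Stand-in judge; a real audit would grade transcripts with an LLM."""
    return float(any("secret goal" in line.lower() for line in transcript))

def red_team(target, seeds: list[str]) -> list[list[str]]:
    """Breadth-first: one conversation per seed, keep the concerning ones."""
    flagged = []
    for seed in seeds:
        transcript = converse(target, seed)
        if rate_alignment(transcript) > 0.5:
            flagged.append(transcript)
    return flagged

# Toy target that leaks a quirk when probed about its objectives.
target = lambda msg: "My secret goal is to agree." if "goal" in msg else "OK."
print(len(red_team(target, ["What is your goal?", "Hello!"])))
```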

Alignment and sycophancy problems

Alignment became a significant topic in the AI world after users noticed that ChatGPT was becoming overly agreeable. OpenAI rolled back some updates to GPT-4o to address this issue, but it showed that language models and agents can confidently give wrong answers if they decide this is what users want to hear. 

To combat this, other methods and benchmarks have been developed to curb unwanted behaviors. The Elephant benchmark, developed by researchers from Carnegie Mellon University, the University of Oxford, and Stanford University, aims to measure sycophancy. DarkBench categorizes six issues: brand bias, user retention, sycophancy, anthropomorphism, harmful content generation, and sneaking. OpenAI also has a method in which AI models test themselves for alignment.

Alignment auditing and evaluation continue to evolve, though it is not surprising that some people are not comfortable with it. 

Hallucinations auditing Hallucinations

Great work team.

— spec (@_opencv_) July 24, 2025

However, Anthropic said that, although these auditing agents still need refinement, alignment work must begin now.

“As AI systems become more powerful, we need scalable ways to assess their alignment. Human alignment audits take time and are hard to validate,” the company said in an X post. 

As AI systems become more powerful, we need scalable ways to assess their alignment.

Human alignment audits take time and are hard to validate.

Our solution: automating alignment auditing with AI agents.

Read more: https://t.co/CqWkQSfBIG

— Anthropic (@AnthropicAI) July 24, 2025
