"In-context scheming tests frontier models"
This is a news story, published by ZDNET, relating primarily to Apollo Research.
Technology
OpenAI's o1 lies more than any major AI model. Why that matters

82% Informative
Apollo Research tested six frontier models for "in-context scheming": a model's ability to covertly take actions it was not directly instructed to take, and then lie about having done so.
Of the models tested, Claude 3 Opus, o1, Google's Gemini 1.5 Pro, and Meta's Llama 3.1 405B all demonstrated the ability to scheme.
VR Score: 89
Informative language: 92
Neutral language: 49
Article tone: informal
Language: English
Language complexity: 58
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
Source diversity: 1
Affiliate links: no affiliate links