bachittle 12 hours ago [-]
So Opus 4.7 is measurably worse at long-context retrieval compared to Opus 4.6. Opus 4.6 scores 91.9% and Opus 4.7 scores 59.2%. At least they're transparent about the model degradation. They traded long-context retrieval for better software engineering and math scores.
film42 11 hours ago [-]
To be honest, I think it's just a more accurate score of what Opus 4.6 actually was. Once contexts get sufficiently large, Opus develops pretty bad short-term memory loss.
tomaskafka 7 hours ago [-]
You can support very long context windows if you don’t mind abysmal recall rate.
freedomben 12 hours ago [-]
Agreed, I appreciate the transparency (and Anthropic isn't normally very transparent). It's also great to know, because I'll change how I approach long contexts knowing the model struggles more with them.
RobinL 12 hours ago [-]
Could this be because they've found the 1M context uneconomical (i.e., it costs too much to serve, or burns through users' quotas too quickly and causes complaints), and so they're no longer targeting it as a goal?
Someone1234 11 hours ago [-]
Opus 4.7 is also worse at 256K context. Go look at page 195 and page 196. It is across the board regression, not just 1M context.
RobinL 7 hours ago [-]
Thanks, interesting. Does this make it more surprising that the other benchmarks have improved? I'm not sure I understand the benchmarks well enough, but I'm wondering whether with agentic workflows it's possible to get away with a smaller, more focused context (and hence lower cost) while achieving the same or better performance, because of an agentic model's ability to decide what to put in context as it works.
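That intuition can be sketched in code. Below is a toy, purely illustrative version of an agent trimming its own context: rank candidate chunks by word overlap with the task, then keep them until a token budget runs out. Every name here is made up, the "tokenizer" is a word count, and real agents do this with tool calls rather than keyword matching.

```python
def count_tokens(text):
    """Crude proxy: whitespace word count stands in for a real tokenizer."""
    return len(text.split())

def select_context(task, chunks, budget_tokens):
    """Greedy sketch: rank chunks by word overlap with the task, keep until the budget is full."""
    task_words = set(task.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(task_words & set(c.lower().split())))
    picked, used = [], 0
    for chunk in ranked:
        cost = count_tokens(chunk)
        if used + cost <= budget_tokens:
            picked.append(chunk)
            used += cost
    return picked

chunks = [
    "parse_config implementation: reads yaml and returns dict",
    "changelog: bumped version to 2.0",
    "unit tests covering parse_config bug reports",
]
print(select_context("fix the parse_config bug", chunks, budget_tokens=14))
```

With these toy inputs, the changelog chunk scores zero overlap and is the first thing dropped when the budget is tight.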
timvb 6 hours ago [-]
What does all this mean in real-world use?
jzig 12 hours ago [-]
At what point along the 1M window does context become "long" enough that this degradation occurs?
daemonologist 11 hours ago [-]
The benchmark GP mentioned is measuring at 128k-256k context (there's another at 524k-1024k, where 4.6 scored 78.3% and 4.7 scored 32.2%).
The longer the context the worse the performance; there isn't really a qualitative step change in capability (if there is imo it happens at like 8k-16k tokens, much sooner than is relevant for multi-turn coding tasks - see e.g. this old benchmark https://github.com/adobe-research/NoLiMa ).
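For anyone wondering what these benchmarks actually measure: most long-context retrieval evals are a variant of "needle in a haystack". Bury one fact at a chosen depth in filler text, ask for it back, and score whether the reply contains it. A minimal harness sketch; `forgetful_ask` is a stub standing in for a real model API call, included only to exercise the scoring:

```python
def make_haystack(needle, n_filler, depth):
    """n_filler filler sentences with one needle inserted at fractional depth (0=start, 1=end)."""
    filler = [f"Sentence {i} is routine filler text." for i in range(n_filler)]
    filler.insert(int(depth * n_filler), needle)
    return " ".join(filler)

def score_recall(ask, needle, answer, depths, n_filler=1000):
    """Fraction of insertion depths at which the model's reply contains the answer."""
    hits = 0
    for d in depths:
        prompt = make_haystack(needle, n_filler, d) + "\n\nQ: What is the magic number?"
        hits += answer in ask(prompt)
    return hits / len(depths)

# Stub "model" that only attends to roughly the first 200 sentences,
# standing in for a real API, so the scoring logic has something to run against:
def forgetful_ask(prompt):
    head = " ".join(prompt.split()[:1200])
    return "42" if "magic number is 42" in head else "I don't know."

print(score_recall(forgetful_ask, "The magic number is 42.", "42",
                   depths=[0.0, 0.25, 0.5, 0.75, 1.0]))  # → 0.2
```

A real run would sweep both depth and total context length, which is roughly what the 128k-256k and 524k-1024k buckets above are doing.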
teaearlgraycold 10 hours ago [-]
A year ago it felt like SoTA model developers were not improving so much as moving the dirt around. Maybe we’re in another such rut.
vessenes 9 hours ago [-]
This is an interesting document, in that it reads like a Claude Mythos model card that was hastily edited to be an Opus 4.7 model card.
I surmise that someone at the top put the Mythos release on hold, and the product team was told "ship this other interim step model instead. quickly."
I wonder if 4.7 will be seen as a net step-up in quality; there are some regressions noted in the document, and it's clearly substantially worse than Mythos, at least according to its own model card. Should be an interesting few months -- if I were at oAI I'd be rushing to get something out that's clearly better, and pressing on the weaknesses here.
the13 9 hours ago [-]
What makes you think that? "it reads like a Claude Mythos model card that was hastily edited to be an Opus 4.7 model card"
vessenes 9 hours ago [-]
There are more mentions of Mythos than of 4.6. Mythos results are nearly everywhere, and vastly exceed 4.7's in almost every case. There are sections that report research only on Mythos, none on 4.7, e.g. user surveys about how beneficial Mythos is internally at Anthropic.
barneybooroo 7 hours ago [-]
Yeah, the section expanding on how they evaluated Mythos internally is a bit baffling considering how irrelevant it is.
koehr 13 hours ago [-]
This reads more like an advertisement for Mythos, at first glance
Uehreka 10 hours ago [-]
I never understand these critiques. If something is useful and you’re selling it, does that mean any technical document describing its usefulness becomes marketing?
I guess maybe, but then do those documents lose value as technical documents? Not necessarily at all, so I don’t see the point. How are you supposed to describe a useful technical thing to users?
parsimo2010 10 hours ago [-]
This is supposedly the Opus 4.7 model card. It's okay for it to be marketing for Opus 4.7 and describe what it can do, and even okay for it to talk about what it does better than the last generation. GP was saying it sounds like marketing for Mythos (a different and unreleased model). I don't want the Opus 4.7 model card to be advertising for something else.
For context, the word "Mythos" appears 331 times in a 221-page document. "Opus 4.6" appears 240 times, so a model that nobody has really used is referenced more often than the last-generation model.
ModernMech 11 hours ago [-]
That's why I don't like these "model cards" being presented as if they are some sort of technical document -- they're marketing materials.
Symmetry 12 hours ago [-]
> The technical error that caused accidental chain-of-thought supervision in some prior models (including Mythos Preview) was also present during the training of Claude Opus 4.7, affecting 7.8% of episodes.
>_>
kube-system 11 hours ago [-]
> Chemical and biological weapons threat model 2 (CB-2): Novel chemical/biological weapons production capabilities. A model has CB-2 capabilities if it has the ability to significantly help threat actors (for example, moderately resourced expert-backed teams) create/obtain and deploy chemical and/or biological weapons with potential for catastrophic damages far beyond those of past catastrophes such as COVID-19.
That's an interesting choice of benchmark for measuring the risk of "Chemical and biological weapons"
Aboutplants 10 hours ago [-]
Gotta prime those Government fears!
aliljet 13 hours ago [-]
Have they effectively communicated what a 20x or 10x Claude subscription actually means? And with Claude 4.7 increasing usage by 1.35x, does that mean a 20x plan is now effectively a ~15x plan (no token increase on the subscription) or a 27x plan (more tokens granted to compensate for the higher compute cost) relative to Claude Opus 4.6?
computomatic 13 hours ago [-]
They have communicated it as 5x is 5 x Pro, and 20x is 20 x Pro (I haven’t looked lately so not sure if that’s changed).
They have also repeatedly communicated that the base unit (Pro allotment) is subject to change and does change often.
As far as I can tell, that implies there is no guarantee that those subscriptions get some specific number of tokens per unit of time. It’s not a claim they make.
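For what it's worth, both readings of the parent's 1.35x question are one-liners; the open question is which policy Anthropic actually applies. Assuming, purely for illustration, a "20x" plan and a 1.35x token cost per task:

```python
# Hypothetical numbers from the thread: a "20x" plan and a model that uses
# 1.35x the tokens per task. Two readings of what the plan is now worth:
usage_factor = 1.35
plan_multiplier = 20

# Reading 1: token allotment unchanged, so each task eats more of it.
effective_if_fixed = plan_multiplier / usage_factor   # ≈ 14.8x

# Reading 2: allotment scaled up to keep task throughput constant.
effective_if_scaled = plan_multiplier * usage_factor  # 27x

print(round(effective_if_fixed, 1), round(effective_if_scaled, 1))  # → 14.8 27.0
```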
msikora 4 hours ago [-]
I think as far as the (maybe more important) weekly allotment goes, Max 5 is 10x Pro and Max 20 is 20x Pro. For the 5-hour window it is as the names would suggest, though.
This card is a 272-page report. So now we are redefining names :)
albert_e 13 hours ago [-]
Does the model card fit in the model's context :)
anonyfox 11 hours ago [-]
well it will saturate your 5h limit window at least
STRiDEX 13 hours ago [-]
Dumb question, but why are chemical weapons always addressed as a risk with LLMs? Is the idea that they contain instructions for making chemical weapons, or that they would actively guide someone through the process?
Wouldn't there already be websites that contain that information? How is an LLM different, I guess, from some sort of Anarchist Cookbook thing?
Philpax 13 hours ago [-]
Both. There's the risk of them instructing a user on how to produce a known formulation (the Anarchist Cookbook solution, as you say), which is irritating but not that problematic.
The bigger issue is that they are potentially capable of producing novel harmful formulations, and guiding someone through the process. That is, consider a world in which someone with malicious desires has access to a model as capable at chemistry / biology as Mythos is at offensive cybersecurity.
This is obviously limited by the fact that the models don't operate in the physical world, but there's plenty of written material out there.
rogerrogerr 12 hours ago [-]
The world has been blessed by two connected things:
1. Smart people have economic opportunities that align them away from being evil
2. People who are evil tend not to be smart.
We're breaking both of these assumptions.
chrisweekly 12 hours ago [-]
"Smart people have economic opportunities that align them away from being evil"
For some definition of evil, some of the time, ok. But as economic opportunities compound (looking at the behavior of the ultra-rich), it seems there's at least strong correlation in the other direction, if not full-on "root of all evil" causation.
rogerrogerr 12 hours ago [-]
Sure, but that’s not “slaughter a stadium of people with drones” evil or “poison the water supply” evil or “take out unprotected electrical substations” evil.
So much infrastructure is very soft because the evil people aren’t smart enough to conceive of or conduct an attack.
fwip 10 hours ago [-]
If you reconsider who the 'evil' people are, you might find that we're already doing that sort of thing.
Jensson 4 hours ago [-]
It's not capitalists doing that, though; it's politicians, and politicians in non-capitalist countries tend to be more evil.
fwip 22 minutes ago [-]
Correct me if I'm wrong, but there aren't any non-capitalist countries currently waging war on others.
JohnMakin 6 hours ago [-]
> 1. Smart people have economic opportunities that align them away from being evil
for now
Der_Einzige 12 hours ago [-]
Good. This is how we will force the world to reckon with the isolated, the disgruntled, and "lone wolf" terrorist. Real "sigma males" actually exist, and when they decide "society has to pay" we are all worse off for it. If Ted Kaczynski (quintessential example of a real actual sigma) had been in his prime operating right now, he'd have mail-bombed NeurIPS and ICLR already. I'm not cool with being in crowds of AI professionals right now for physical security reasons given the extreme anti-AI sentiment that exists from nearly everyone outside of the valley: https://jonready.com/blog/posts/everyone-in-seattle-hates-ai...
malcolmgreaves 11 hours ago [-]
That’s not quite true. Take a look at all the billionaires destroying society. Being evil is the surest way to get rich. In fact it’s the only way to amass that level of capital: there’s no ethical billionaire.
mikek 11 hours ago [-]
This feels like a wild overgeneralization. People can become rich without resorting to evil methods, especially now with global markets and software. Case in point: Minecraft was wildly successful, and now Notch is a billionaire.
hxugufjfjf 10 hours ago [-]
Eeeeh not the best example maybe?
orneryostrich 10 hours ago [-]
Pre-wealth, Notch was friendly, kind, and downright jolly! Even as he started to accumulate wealth, he was donating huge sums of money to various indie games. Whenever a Humble Bundle dropped he would top the leaderboard for the amount he paid for the games. Things took a major turn for the worse after the acquisition and after he left Mojang. That's when he ran out of purpose and turned to drugs and conspiracy theories.
dcre 12 hours ago [-]
LLMs can tell you exactly how to acquire and manufacture the materials. They might even come up with novel formulations that rely on substances that are easier to get. There might be information about this stuff online, but LLMs are much better than random idiots at adapting that information to their actual situation.
On top of LLMs reducing the cost/difficulty, the other reason biological and chemical weapons are such a worry is their asymmetric character — they are much much easier and cheaper to produce and deploy than they are to defend against.
Aboutplants 10 hours ago [-]
It’s marketing, Fear is one of the most effective marketing tools. That and purpose of government attention
somesortofthing 10 hours ago [-]
They contain broad overviews (throw some disease-causing bacteria at a sort of rainbow arrangement of increasingly effective antibiotics and you'll usually get something that's at least very deadly, even if it doesn't have pandemic potential), but executing in a real lab takes a ton of trial and error to figure out the details. The issue is that the details ~all exist somewhere in the training dataset already, discovered and documented over the course of unrelated, benign biology research. The ability to quickly and accurately search over that corpus translates to large speedups in the physical development process.
Nicook 8 hours ago [-]
Probably also a bit of liability. After all, it's been trained on a dataset that includes a long-running joke of trying to trick people on the internet into unknowingly creating chlorine gas.
rgbrenner 12 hours ago [-]
In the same way that all coding docs are available publicly
CodingJeebus 13 hours ago [-]
WAG but I wonder if a hijacked LLM could also assist with figuring out how to obtain required materials, not just provide the recipe.
joeumn 13 hours ago [-]
I'm actually surprised at how it performed compared to 4.6 and also compared to mythos. Will be fun to use.
msla 10 hours ago [-]
PDF, because it isn't marked.
marginalia_nu 8 hours ago [-]
It's not 1998 any more. All browsers read PDFs now.
jmward01 13 hours ago [-]
Haiku not getting an update is becoming telling. I suspect we're reaching a point where the low-end models are cannibalizing the high end, and that isn't going to stop. How will these companies make money in a few years when even the smallest models are amazing?
blixt 13 hours ago [-]
Isn't it pretty common for the smaller models to release a little while after the bigger ones, for all the big model providers?
jmward01 13 hours ago [-]
The last update for Haiku was in October, or in startup land, 10 years ago.
mvkel 13 hours ago [-]
It seems to be a rule that older models are more expensive than newer ones. The low-end models have a higher cost per token and worse output. I wonder if the move is to just have one model and quantize it if you hit compute constraints.
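In case it's useful context, the cheapest version of that move, post-training quantization, amounts to snapping float weights onto a small integer grid. A toy symmetric int8 sketch in plain Python (illustrative only; production serving typically involves per-channel scales, calibration, and fused kernels):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization sketch: w ≈ scale * q, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer grid."""
    return [scale * v for v in q]

weights = [0.31, -1.27, 0.05, 0.88, -0.42]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2)  # rounding error is at most half a quantization step
```

The appeal is that int8 weights take a quarter of the memory of float32, at the price of a bounded rounding error per weight.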
deaux 12 hours ago [-]
> It seems to be a rule that older models are more expensive than newer ones.
It isn't. Gemini has gotten more expensive with each release. Anthropic has stayed pretty similar over time, no? When is the last time OpenAI dropped API prices? OpenAI started very high because they were the first, so there was a ton of low hanging fruit and there was much room to drop.
mvkel 9 hours ago [-]
I'm talking about gross margins, not revenue.
It's well known that GPT-4 is much more expensive to operate than the GPT-5 family.
Of course they won't drop the prices; it's pure profit if they make models more efficient.
qingcharles 8 hours ago [-]
Google is putting a lot of research into small models. Most of my AI budget is now going to small models because I am doing lots of tiny tasks that the small models do great with. I would think a decent chunk of Goog's API revenue probably comes from their small models.
dkhenry 13 hours ago [-]
The Gemma models are at this point. A 31B model that can fit on a consumer card is as good as Sonnet 4.5. I haven't put it through as much on the coding front or tool calling as I have the Claude or GPT models, but for text processing it is on par with the frontier models.
make3 13 hours ago [-]
Absolutely not on par; you're smoking something.
dkhenry 12 hours ago [-]
You make a compelling argument, but thankfully I have data to back up my anecdotal experience:
This comparison shows them neck and neck https://benchlm.ai/compare/claude-sonnet-4-5-vs-gemma-4-31b
As does this one https://llm-stats.com/models/compare/claude-sonnet-4-6-vs-ge...
And the pelican benchmark even shows them pretty close https://simonwillison.net/2026/Apr/2/gemma-4/ https://simonwillison.net/2025/Sep/29/claude-sonnet-4-5/
Also, this isn't a fringe statement; you can see that most people who have done an evaluation agree with me.
jmward01 11 hours ago [-]
I think one area I find hard to get around is context length. Everything self-hosted is so limited in length that it's marginal to use. Additionally, I think the tools (like Claude Code) are clearly in the training mix for Anthropic's models, so they seem to get a boost over other models pushed into that environment. That being said, open-source and local inference is -really- good and only going to get better. There's no doubt that the current frontier biz model is not sustainable.
make3 6 hours ago [-]
If you look at the details of the benchmark numbers you shared, Sonnet 4.5 crushes Gemma 4. The first link doesn't run Sonnet on the multimodal benchmark, which is why the top score looks close; Sonnet beats Gemma on every benchmark they actually ran. The arena in the second link shows that it destroys Gemma 4 as well; it's not close.
lostmsu 12 hours ago [-]
Just to be clear, did you notice the parent said 4.5?
cmorgan31 12 hours ago [-]
They're also on par on a lot of classification tasks. I did have to actually use Gemma 4 and fine-tune it a bit, but that's part of the value add.
make3 6 hours ago [-]
I did, what's your point?
il-b 12 hours ago [-]
Ironically, the website is down
Rekindle8090 9 hours ago [-]
Can someone please explain the point of these incremental upgrades? Just release one model. Then maybe do a .5. Then do the next version.
What is the justification for .4, .5, .6, .7, .8, .9 when the difference isn't measurable and it destroys productivity, because they test the next increment on the previous one without customer consent?
nothinkjustai 11 hours ago [-]
How much do you want to bet this is Mythos, and Anthropic released it as Opus to avoid embarrassment after all the hype they whipped up…
NickNaraghi 12 hours ago [-]
232 pages is bullshit. Longer than the Mythos system card? What are you hiding?
nullc 9 hours ago [-]
The model card doesn't mention whether this revision will continue to make up and fan vicious conspiracy theories like the prior one does.
I've been getting a small but steady stream of harassment from mentally ill people who get spun up on crazy conspiracy theories, and Claude is all too willing to tell them they are ABSOLUTELY RIGHT, encourage them to TAKE ACTION, and tell them that people who disagree are IN ON IT.
The other major LLM services will deflect to be less crazy or shut down the conversation entirely, but it seems Claude doesn't. Anthropic is probably the worst about prattling on about safety, yet their concern seems mostly centered on insane movie-plot threats and less on things with more potential for real harm.
I've complained to Anthropic with no response.
pukaworks 11 hours ago [-]
[dead]
gignico 9 hours ago [-]
So LLMs are destroying the economy and the environment but at least “catastrophic risk” is still low. Ok then…
deflator 11 hours ago [-]
Model Welfare?
Are they serious about this? Or is it just more hype?
I really don't trust anything this company says anymore.
"We have a model that is too dangerous to release" is like me saying that I have a billion dollars in gold that nobody is allowed to see but I expect to be able to borrow against it.
hgoel 4 hours ago [-]
Maybe referring to it as welfare is odd, but these points are important. It isn't a good look to have a model that tends to get into self-deprecating loops like one of Google's older models did, and it's an even worse look and a potential legal liability if your model becomes associated with a suicide. An overly negative chat model would also just be unpleasant to use.
With the weights being mostly opaque, these kinds of evaluations are an important piece of reducing the harm an AI model can cause.