this post was submitted on 25 Feb 2026
5 points (100.0% liked)


Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines.

It’s not clear whether Anthropic’s change is related to its meeting Tuesday with Defense Secretary Pete Hegseth, who gave Anthropic CEO Dario Amodei an ultimatum: roll back the company’s AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon threatened to put Anthropic on what is effectively a government blacklist.

But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.

Anthropic’s previous policy stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety — a measure that’s been removed in the new policy. Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could “result in a world that is less safe.”

As part of the new policy, Anthropic said it will separate its own safety plans from its recommendations for the AI industry.

Anthropic wrote that it had hoped its original safety principles “would encourage other AI companies to introduce similar policies. This is the idea of a ‘race to the top’ (the converse of a ‘race to the bottom’), in which different industry players are incentivized to improve, rather than weaken, their models’ safeguards and their overall safety posture.”

top 18 comments
[–] JcbAzPx@lemmy.world 2 points 1 week ago

When the Anthropic-powered murder bot comes through your door, remember this moment. Know that there is no such thing as an ethical corporation.

[–] phoenixz@lemmy.ca 1 point 1 week ago (1 child)

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Anyone surprised?

Surprised and disappointed, both by them and the system (capitalism) that stops us from having nice things.

If we ever crack AGI, it's probably going to be because the market optimised for the better shilling of dick pills, crypto scams and spyware.

That's... fucking bleak, in the Hide the Pain Harold way.

[–] RaoulDook@lemmy.world 1 point 1 week ago

TLDR = money matters more than morals and safety to them

[–] panda_abyss@lemmy.ca 1 point 1 week ago

The timing is certainly suspicious.

I think I’m cancelling my subscription over this…

[–] flandish@lemmy.world 1 point 1 week ago* (last edited 1 week ago) (1 child)

here is the only line that matters: “hinder its ability to compete.”

that means “to profit.”

reminder: all corporations, everywhere and in every industry, care first about profit. everything else is about how it relates to or alters profit. literally every word is a goddamned lie, sold to help protect profit.

also their logo looks like a butthole. who does that?!

[–] corsicanguppy@lemmy.ca 0 points 1 week ago (1 child)

I think Anthropic is in the "ah shit, we're dying" startup stage. They're not looking at profit so much as trying not to lose ALL of their shirt in the coming crash. So they pivoted and ditched their morals. Hate them for that.

Don't hate them for being greedy yet: they're not there. Maybe they'll never get there. But they've lived long enough to become the thing they strived to destroy, so that's a milestone.

[–] vacuumflower@lemmy.sdf.org -1 points 6 days ago (1 child)

People are talking about AI killbots and an upcoming crash at the same time, and complaining about AI slop and vibe coding.

Sorry, but if something is usable for making killbots, there will be no crash. And AI slop proves that making slop is useful to someone. And vibe coding proves that someone gets things working in production with those tools. Saying that quality suffers is like saying that cob houses are not comparable to brick houses and vice versa. Both exist. There are places where cob construction techniques are still common.

But the most important reason is the first one: if some technique gives you a more convenient, sharper stick to kill someone from another tribe, then that technique stays part of the tribe's cherished wisdom.

As for LLMs consuming too many resources... you might have noticed there's a huge space for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when the arguments that operations cost more in resources than they bring in profit, and that it's all funded by investors, are suddenly not true anymore.

I have been converted. Converted back, one might say; there was a time, around 2011-2014.

[–] Corkyskog@sh.itjust.works 1 point 5 days ago

Sorry, but if something is usable for making killbots, there will be no crash.

You clearly don't understand how finance works, or how leveraged these incestuous deals are. It's perfectly possible for AI to make killbots and for an AI economic crash to happen.

The industry needs to make trillions of dollars to pay off its creditors and deliver the profit its investors need to make this worthwhile. That only happens if most white-collar workers are replaced with AI.

[–] r00ty@kbin.life 0 points 1 week ago (1 child)

You either die a hero... Something something.

[–] rozodru@piefed.world 0 points 1 week ago (1 child)

this is like the textbook definition of Anthropic. they ONCE had a great LLM: it was reliable, gave great solutions, and Claude Code was pretty good.

now? it's all garbage. all of it. Sonnet 4.6 hasn't improved a damn thing. If you use Code you're literally shooting yourself in the foot now.

All it knows how to do now is hallucinate. that's it. They should pivot Claude to being a creative writing LLM because man it's FANTASTIC at making stuff up and making it sound believable.

They should have died the hero.

[–] XLE@piefed.social 1 point 1 week ago

Anthropic was never an ethical company, just one that released a competent product.

Their attempts to look ethical were reminiscent of a horror-movie villain donning someone else's freshly peeled face in an attempt to look better.

[–] Rentlar@lemmy.ca 0 points 1 week ago (1 child)

Oops, pretending to take the moral high ground is out the window as soon as MIC dollars are at risk.

[–] GreenBeard@lemmy.ca 1 point 1 week ago

Remember kids, the term "Business Ethics" is an oxymoron. Corporations don't have ethics, they have financial interests and PR.

[–] XLE@piefed.social 0 points 1 week ago (1 child)

Anthropic has described itself as the AI company with a “soul.”

This is silicon valley stupidity at its finest.

AIs do not have souls. But guess what: The "soul" file is what runs trash like OpenClaw.

And companies definitely don't have souls.

Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market... it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.”

Rules for thee and never for me. (BTW, these rules sucked, and mostly didn't address actual dangers.)

[–] richieadler@lemmy.world 1 point 4 days ago (1 child)

AIs do not have souls.

If we're going to be pedantic: neither does anything else.

[–] XLE@piefed.social 0 points 4 days ago (1 child)

We weren't, but since you bring it up, if the CEOs of AI companies did have souls, they would be going to the worst hell imaginable.

[–] richieadler@lemmy.world 1 point 3 days ago* (last edited 3 days ago)

In that fictional situation, they should. However, the plethora of exceptions, provisos, and backdoors that religions provide would make it unlikely.