this post was submitted on 27 Feb 2026

Technology

Hacker News.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

all 47 comments
[–] pkjqpg1h@lemmy.zip 3 points 4 days ago (1 child)

Did we read the same thing?

We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

So they accept surveillance in other countries? What about other countries’ democratic values?

Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So you don’t because it still sucks? But if it didn’t, you would?

And what about legal?

  • Do Not Develop or Design Weapons???
  • Do Not Compromise Privacy or Identity Rights???

I’ve really lost my faith in the US. They think they hold the power, but they’re missing the point: real power is built on trust, and we’re losing more of it every day.

[–] 1984@lemmy.today 1 point 4 days ago* (last edited 4 days ago)

It's been an American leadership view for as long as I've been alive that American lives are worth at least a hundred times more than other lives.

That is, in war situations - not in situations where leadership takes care of its citizens. No, there those lives are worth next to nothing. So American leadership is pretty much at war both with its own people and with countries that don't want American culture.

[–] revolutionaryvole@lemmy.world 4 points 5 days ago (2 children)

I guess it's good that they draw the line somewhere, but it is absolutely horrifying to me as a non-American that the moral stance is limited to:

  • taking issue with fully autonomous AI weapons (purely for technical reasons according to this letter, they are working hard on making them possible)
  • refusing to conduct mass surveillance of US citizens specifically (foreign nationals are fair game and the intelligence apparatus will surely only be used for good and to preserve democracy).

This is not Anthropic refusing to cooperate with the Trump administration, as the title may suggest; they are in fact explicitly eager to serve the US Department of War. They are just vying for slightly better contract terms.

[–] wizardbeard@lemmy.dbzer0.com 3 points 5 days ago

You're spot-on. As some additional context, Anthropic is already working closely with the US government. Until the recent announcement regarding Grok, Anthropic was the only approved AI for US government work, as it is/was the only one certified for safely working with classified data.

[–] scarabic@lemmy.world 0 points 4 days ago (1 child)

vying for slightly better contract terms

Do you mean that all this about principles is a smoke screen and Anthropic are just using it as a front to squeeze for more money?

[–] revolutionaryvole@lemmy.world 1 point 4 days ago (1 child)

No, if you want my opinion it seems too risky of a move to make all of this so public if all they want is more money. It's possible, but I'd be surprised.

I believe them when they say that what they want is to have those two particular things, fully autonomous weapons and mass surveillance of US citizens, removed from the contract terms (for now). This could be out of genuine moral principles, or out of fear of bad PR when this would be found out. Most likely a combination of both.

My point was that from my perspective it is a very minor difference. The conclusion I took away from reading this isn't "good guy Anthropic bravely stands against pressure from Hegseth", as some of the Hacker News comments try to paint it. It is "Anthropic mostly bends over backwards and grovels for Pentagon money, willing to massively spy on all foreign nationals and working on creating autonomous weapons - other US AI companies are likely to be even worse".

As I said, horrifying.

[–] scarabic@lemmy.world 1 point 3 days ago* (last edited 3 days ago) (1 child)

Crossing off mass surveillance and automated killing isn’t everything they could have taken a moral stand on. Personally I don’t think any list will be long enough for the Pentagon, and if it were, there wouldn’t be anything left that could be worked on.

But I keep hearing you say that no mass surveillance and no automated killings is so very little - almost nothing. That doesn’t seem right to me. I think those are both pretty big things. TBH I don’t know exactly how to feel about it all but I’m not horrified that their moral stance would include only that.

[–] revolutionaryvole@lemmy.world 0 points 3 days ago (1 child)

That's a fair stance to take and I definitely do not mean to try to have you change your opinion. I also do not know if you are an American, and I don't want to assume either way.

But, to better explain my own position, I need to point out:

Anthropic is not saying "no mass surveillance"; they are saying "no mass surveillance of Americans". If you judge this stance based on effect, it literally makes no difference at all: if you are not a US citizen, you are targeted either way. If you judge it based on principles, it can be argued it is even less moral than accepting mass surveillance of everyone - not only are they claiming that billions of innocent people deserve to lose their right to privacy, but they are specifically carving out an exception for themselves based on nationality.

They are also not saying "no automated killings", but "no automated killings at this time because we haven't ironed out the kinks yet". This can be framed as a moral stance relating to safety concerns, so I will assume in good faith that this is their reasoning rather than fear of bad publicity. However, I would argue that it is still an insignificant difference, as the threat posed to humanity by a powerful warmongering state commanding an army of fully autonomous killing machines is already too great. Making sure the technology is ready could mean working on avoiding a Terminator scenario, but without a doubt it will also mean ensuring that the murderbots WILL obey an order to bomb striking workers or displaced refugees so long as the right Executive Order was signed first, something that a human being in the loop might have prevented.

These two red lines seem to make a world of moral difference for someone who already takes it for granted that the USA and its military are overall institutions deserving of trust and support, perhaps with the small exception of the current Secretary of War who may have jumped the gun a bit during negotiations over a new technology. At the very least, that seems to be the position of the author of this letter. But no state should ever be given that amount of trust and support. And particularly given the USA's belligerence over the years and its current slide towards outright fascism, I am horrified that the bar is this low.

[–] scarabic@lemmy.world 1 point 3 days ago (1 child)

Better to be skeptical about everyone here, and there are certainly no heroes.

However it should be obvious that a country’s department of war surveilling its own citizens is a completely inappropriate overreach. They exist to protect the country from outside threats. You’re casting it as some kind of discrimination, and claiming it would be more moral to treat everyone the same, but that seems willfully obtuse to me. Calling it a “special carve out” for a country to protect its own citizens… come on. Obviously since you are not an American it does nothing for you but you are working way too hard to spin that up into a sin.

[–] revolutionaryvole@lemmy.world 1 point 2 days ago (1 child)

Obviously a country spying on its citizens is unacceptable overreach, I never claimed otherwise. And if my own government was conducting mass surveillance on me I would be particularly furious at the betrayal. But I would also not support it conducting surveillance on foreigners either. That is the "sin" Anthropic is guilty of, in my eyes.

Mass surveillance is simply immoral. It is targeting innocent people who have not even been accused of any crime and robbing them of their right to privacy. It is also giving states absolute leverage to harm, blackmail or manipulate anyone they want at will.

The argument that it is all done in the name of protecting its own citizens also falls flat in this case, as that is exactly the same excuse used for mass domestic surveillance - everyone loses their privacy, but the good, law-abiding citizens are protected from the criminal elements who would threaten them. "If you have nothing to hide, you have nothing to fear".

Let's not kid ourselves, this is not about protecting anyone. They plan to spy not only on their "enemies" but also on their closest allies, as they have in the past. This is about gaining power. And states in general already have far too much power over individuals.

Kowtowing to the Department of War and offering to sell them an AI for mass surveillance is not OK, even if it truly were to limit itself to the common, genteel use case of spying on foreign people.

[–] scarabic@lemmy.world 1 point 2 days ago (1 child)

I’m hardly going to defend the Pentagon, but to say a country should not even have an intelligence operation whatsoever, that this isn’t elementary to protecting its citizenry, is beyond naive and unrealistic.

[–] revolutionaryvole@lemmy.world 1 point 2 days ago (1 child)

Well yeah, that's true, but I didn't say that, did I? Not even remotely.

We are specifically talking about mass surveillance. I will leave you to reflect on the implications of an intelligence apparatus with the ability to have a Claude-level AI scan every piece of information on the internet.

[–] scarabic@lemmy.world 1 point 1 day ago* (last edited 1 day ago) (1 child)

if my own government was conducting mass surveillance on me I would be particularly furious at the betrayal. But I would also not support it conducting surveillance on foreigners either.

So no one then. I’m not trying to pin you here, just explaining why it did indeed sound an awful lot like you were saying that. Conducting no surveillance is pretty much not having any intelligence operations. Are they supposed to wait by the phone for tips? This is where I was coming from. If you tell me you meant something different, I believe you, but this is how I got you wrong, and why I disagree that you said nothing even remotely close.

That's fair, that should have been MASS surveillance, I skipped a word.

[–] andallthat@lemmy.world 3 points 5 days ago (2 children)

Amodei "we cannot in good conscience allow this".

Hegseth looks confused, turns towards his team and mouths "...in good what?""

[–] Klear@quokk.au 2 points 2 days ago

My conscience is clean. It's never been used!

[–] XLE@piefed.social 1 point 5 days ago (1 child)

"Anthropic publicly praised President Trump’s AI Action Plan," said CEO Dario Amodei.

"We have been supportive of the President’s efforts to expand energy provision in the US in order to win the AI race," he continued, apparently referring to Trump's new anti-green-energy, pro-fossil-fuel program.

[–] andallthat@lemmy.world 1 point 5 days ago* (last edited 5 days ago)

yes... mine was just a play on the title of this post.

Look, I'm not saying that Amodei is a saint and I do find him as full of shit as Altman with their AGI promises, but would you expect Anthropic to take a stand against increasing AI investment, because it's coming from Trump? And I don't like that he went looking for funding in the Middle East either.

I just think there is an ethical line between "I do business with people who do bad things" and "I'm actively helping people who do bad things to do them in a more efficient way". It might be a fine line and it might also be that they are just posturing, but it's still more than other companies did (companies that are a lot richer than Anthropic and that don't need to find a lot of funding just to stay afloat).

[–] Crozekiel@lemmy.zip 2 points 5 days ago* (last edited 5 days ago)

So the government wants "full self-driving" attack drones. You know, just in case the military actually disobeys an unlawful order?

How many pieces of science fiction do we have where the "bad guys" are literally just killer robots we created and then realized we didn't have control over? Why would we decide it is a good idea to literally build terminators? I'm convinced the government will actually build the "orphan crushing machine" next...

[–] FlashMobOfOne@lemmy.world 1 point 5 days ago (1 child)

I read somewhere that Anthropic has $18,000,000,000 in commitments from last year alone, so conceivably, they can stand to lose a mere $200,000,000 and it won't create a huge issue for them in the short term.

I hope that's how they're looking at it.

[–] TheSeveralJourneysOfReemus@lemmy.world 0 points 5 days ago (1 child)

I read somewhere that Anthropic has $18,000,000,000 in commitments from last year alone, so conceivably, they can stand to lose a mere $200,000,000 and it won’t create a huge issue for them in the short term.

How does one even count that amount of anything, let alone money?

[–] el_abuelo@programming.dev 1 point 4 days ago

Start at 1 and work your way up in increments of 1.

See you in about 570 years, give or take a few decades.
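A quick sanity check of the arithmetic, assuming one count per second with no breaks:

```python
# Counting to 18 billion at one number per second, non-stop
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds

total_count = 18_000_000_000  # the $18B in commitments, one dollar per second

years = total_count / SECONDS_PER_YEAR
print(f"{years:.0f} years")  # roughly 570 years
```

In practice it would take far longer, since saying an eleven-digit number out loud takes well over a second.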

[–] wonderingwanderer@sopuli.xyz 1 point 5 days ago

Are those the same AI systems that recommended nuclear escalation in 90% of simulations?

[–] XLE@piefed.social 1 point 5 days ago (1 child)
[–] criss_cross@lemmy.world 2 points 4 days ago

It’s probably more they don’t wanna get blamed if AI launches missiles because the idiots in charge pressed shift+tab and yolo’d.

Claude: “You’re right. I completely committed a war crime. I’m so very sorry! How would you like to proceed?”

[–] Ilovethebomb@sh.itjust.works 0 points 5 days ago (1 child)

How is a private company the voice of reason in this?

[–] Iconoclast@feddit.uk 0 points 5 days ago (2 children)

Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.

[–] XLE@piefed.social 0 points 5 days ago (1 child)

Anthropic's "ethical" concerns were performative. They only fearmonger about fictional things that will make their product sound powerful (read: worth throwing money into).

They try to scare people with fictional stories of AGI, a thing that isn't happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.

[–] Iconoclast@feddit.uk 0 points 5 days ago (1 child)

Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we're continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we're never going to reach it, which I view as quite an insane claim in itself.

If we're not moving toward it, then I'd love to hear your explanation for why we're moving backwards or not making any progress at all.

Whether we're 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It's not the speed of the progress - it's the trajectory of it.

[–] XLE@piefed.social 0 points 5 days ago* (last edited 5 days ago) (1 child)

We are not "moving towards AGI" in any way with any modern technology, in the same way that we are not "moving towards FTL travel" because a car company added cylinders to an engine.

The real "AI" dangers are people like Eliezer Yudkowsky, a man who scares vulnerable people, sexually abuses them, and has spawned at least one murderous cult.


Dario is one of the biggest AGI bullshit peddlers.

In October 2023, Amodei joined The Logan Bartlett show, saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.”

[–] Iconoclast@feddit.uk 0 points 5 days ago* (last edited 5 days ago) (1 child)

We are not “moving towards AGI” in any way with any modern technology

So that means you believe AI research is completely frozen still or moving backwards. Please explain.

Comparisons to faster-than-light travel are completely disingenuous and bad faith - that would break the laws of physics and you know it.

You can also keep your red herrings to yourself. I'm discussing ideas here - not people.

[–] XLE@piefed.social 0 points 5 days ago (1 child)

According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?

Anthropic is not special. They're doing the LLM thing like everybody else. Yann LeCun, one of the so-called godfathers of AI, has said LLMs are a dead end on this front. But even if he hadn't chimed in, it's your job to show that they'll lead to AGI and how - not my job to show you that they won't.

[–] Iconoclast@feddit.uk 0 points 5 days ago (1 child)

If you're just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there's nothing left to discuss here. If you actually had an argument, you would've made it by now.

[–] XLE@piefed.social 0 points 5 days ago* (last edited 5 days ago) (1 child)

Your claim: AI seems to be getting better, therefore AGI will happen

My rebuttal: they aren't linked

Other important things you must reckon with: the sexual abuse, the death toll, etc., from the True Believers

Does that clear matters up?

[–] Iconoclast@feddit.uk 0 points 5 days ago (1 child)

My argument is that we'll incrementally keep improving our technology like we have done throughout human history. Assuming that general intelligence is not substrate-dependent - that is, that what our brains are doing can be replicated in silicon - and that we don't destroy ourselves before we get there, it's just a matter of time before we create a system that's as intelligent as we are: AGI.

I already said that the timescale doesn't matter here. It could take a hundred years or two thousand - doesn't matter. We're still moving toward it. It does not matter how slow you move. As long as you keep moving, you'll eventually reach your destination.

So, how I see it is that if we never end up creating AGI ever, it's either because we destroyed ourselves before we got there or there's something borderline supernatural about the human brain that makes it impossible to copy in silicon.

[–] XLE@piefed.social 0 points 5 days ago (1 child)

So do you think Dyson Spheres are inevitable too? Because things advance?

You're also shifting your goalposts tremendously. First you were implying that today's AI would bring about AGI and now you're saying that something, somewhere, might happen in some sci-fi future.

I'm not sure you're actually worried about present-day destruction, though, because you seemed not to like it when I brought up what the AGI true believers are doing to the vulnerable people who flock to them. Dario is on board with Trump's pro-fossil-fuel, anti-green buildout too.

If you believe so much in AI, but allegedly believe in the things you've talked about, perhaps it's time to start criticizing the people you hold so dear.

[–] Iconoclast@feddit.uk 0 points 5 days ago (1 child)

So do you think Dyson Spheres are inevitable too?

I'm less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.

First you were implying that today’s AI would bring about AGI

I've never made such a claim. I've been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It's in no way obvious to me that LLMs are the path to AGI. They could be, but they don't have to be. Either way, it doesn't change my core argument.

people you hold so dear

C'mon now.

[–] XLE@piefed.social 0 points 5 days ago (1 child)

I've been saying the exact same thing since around 2016 or so - long before LLMs were even a thing

You really aren't beating the Yudkowsky/LessWrong allegations with this one, you know.

If you really think LLMs might mean nothing at all when it comes to actually achieving AGI, then maybe you should speak out against the environmental destruction they're doing today with full endorsement from Anthropic and all the other corporate AI perverts.

[–] Iconoclast@feddit.uk 0 points 5 days ago (1 child)

That doesn't have anything to do with my claim about the inevitability of AGI.

[–] XLE@piefed.social 0 points 5 days ago (1 child)

It has everything to do with your claim about its inevitability, because we're witnessing real life in the present day, not some fantasy prediction of the future. If people like Dario and Eli get their way, there will be no future in which to reach AGI.

... I am growing increasingly concerned you really are a Yudkowskist rationalist

[–] Iconoclast@feddit.uk 0 points 5 days ago (2 children)

You don't seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad hominems, concern-trolling, red herrings, strawmen, and gish galloping - as if you're trying to break some kind of record.

It's pretty clear you've built up a cartoon-villain version of me in your head and now you're fighting that imagined version like it's real. I made a pretty simple claim about AGI, you've piled an entire story on top of it, and now you're demanding I defend views I don't even hold.

I've been trying to have a good-faith conversation here, but if this is what you're going to keep doing, then I'll just move on.

[–] XLE@piefed.social 1 point 2 days ago* (last edited 2 days ago)

Iconoclast, I asked you a question and was hoping for an answer.

You're aware of the sexual abuse and death Eliezer Yudkowsky is either directly or indirectly responsible for, right?

[–] Voroxpete@sh.itjust.works 0 points 5 days ago* (last edited 5 days ago) (1 child)
[–] Iconoclast@feddit.uk -1 points 5 days ago (1 child)

I still think they deserve some credit for at least trying to do the right thing. I don't envy the position they're in.

Everyone's rushing toward AGI. Trying to do it safely is meaningless if your competition - the ones who don't care about safety - gets there first. You can slow things down if you're in the lead, but if you're second best, it's just posturing. There is no second place in this race.

[–] purrtastic@lemmy.nz 0 points 5 days ago (1 child)

No AI bro company is on the path to AGI. Transformer technology will not lead to AGI.

[–] Iconoclast@feddit.uk -1 points 4 days ago

I never claimed it will.