this post was submitted on 04 Mar 2026
374 points (97.9% liked)

(page 2) 50 comments
[–] Stonewyvvern@lemmy.world 18 points 5 hours ago (6 children)

Reality is really difficult for some people...

[–] IronBird@lemmy.world 3 points 3 hours ago* (last edited 3 hours ago)

especially when you're raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (applies to large cross-sections of the US/UK). All it takes is one shred of truth getting through to shatter your world, and from there you can be brought to believe all manner of crazy shit.

[–] teft@piefed.social 81 points 7 hours ago (4 children)

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

Just remember that these language models are also advising governments and military units.

Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

[–] MoffKalast@lemmy.world 5 points 3 hours ago

A forever war is David Bowie to the ears of the MIC. Infinite money glitch.

[–] starman2112@sh.itjust.works 3 points 3 hours ago

I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

Same reason I keep money in a savings account even though it accrues interest

[–] minorkeys@lemmy.world 9 points 5 hours ago* (last edited 5 hours ago) (1 child)

AI mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people as well.

[–] deacon@lemmy.world 6 points 4 hours ago

In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.

[–] XLE@piefed.social 25 points 7 hours ago

AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic's Claude will tell you what you want to hear?

Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.

[–] Cyv_@lemmy.blahaj.zone 110 points 8 hours ago* (last edited 8 hours ago) (6 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Well, that's pretty fucked up... Sometimes I see these and I think, "well even a human might fail and say something unhelpful to somebody in crisis" but this is just complete and total feeding into delusions.

[–] wonderingwanderer@sopuli.xyz 8 points 4 hours ago* (last edited 1 hour ago) (3 children)

That's fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and as they both gradually forgot it was a game, the lines between fantasy and reality blurred more by the day? Or did it just come up with this stuff out of nowhere?

[–] MoffKalast@lemmy.world 4 points 3 hours ago (1 child)

That would be my bet, LLMs really gravitate towards playing along and continuing whatever's already written. And Gemini especially has a 1M-token context, so it could be going back over a book's worth of text and reinforcing it up the wazoo.

That said, there is something really unhinged about Google's Gemma series even in short conversations and I see the big version is no better. Something's not quite right with their RLHF dataset.

[–] XLE@piefed.social 80 points 7 hours ago

It's hard reading this while remembering that your electricity bills are increasing so that Google's data centers can provide these messages to people.

[–] DylanMc6@lemmy.dbzer0.com 0 points 1 hour ago

What would Marx do?

[–] SalamenceFury@piefed.social 39 points 7 hours ago (2 children)

As a neurodivergent person, I've noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don't know the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to keep that from happening, or how to spot the red flags that usually spiral into a psychotic episode, and that's why it's so insanely easy for regular people to fall for the traps of chatbots. Most people I know/follow on other socials who are neurodivergent instantly saw the sycophant trap these chatbots were and warned everyone. Normies never had such luxury, or told us we were overreacting. Yeah, we sure were...

[–] Grimy@lemmy.world 49 points 8 hours ago* (last edited 7 hours ago) (1 child)

“On September 29, 2025, it sent him ... the chatbot pretended to check it against a live database.

I usually don't give much credence to these stories, but this is actually nuts. If this happened without Google even aiming for it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

Edit: removed the quote since another user posted it at the same time and it's a bit of a wall of text to have twice.

[–] Pratai@piefed.ca -1 points 2 hours ago (4 children)

While I despise everything AI, you cannot sue because your kid is stupid.

[–] ordnance_qf_17_pounder@reddthat.com 25 points 8 hours ago (4 children)

Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.

[–] XLE@piefed.social 24 points 7 hours ago

Except in this case, Google is one of the companies promoting the chatbots to its users, telling them to trust them. They create TV ads telling people to talk to them. Today's scammers are the stock market's Magnificent Seven.

[–] TwilitSky@lemmy.world 3 points 5 hours ago (1 child)

You sound jealous of my good fortune.

I would ask how I can emulate your rizz but then I remembered I can just ask an AI chatbot

[–] IchNichtenLichten@lemmy.wtf 13 points 7 hours ago (1 child)

In a sane universe people would be on trial for unleashing this shit on society.

[–] SaveTheTuaHawk@lemmy.ca 2 points 4 hours ago

You talking about gun manufacturers or opioid manufacturers?

[–] panda_abyss@lemmy.ca 12 points 7 hours ago

This technology was not ready for release, yet they released it.

They do deserve to be sued; this was negligence.

[–] Crozekiel@lemmy.zip 12 points 7 hours ago (1 child)

he would need to leave his physical body to join her in the metaverse through a process called “transference.”

Wait a minute, isn't that the plot of the game Soma? People sending their "soul" to the digital world through "transference", an act of immediate suicide after a brain scan.

[–] Sanctus@anarchist.nexus 7 points 6 hours ago (1 child)

Sort of. In Soma everyone is already uploaded and there are no "humans" walking around anymore. Your perspective changes three times, I think, during play. Really drives home questions about perception and existence. Great game, everyone should play it.

[–] Crozekiel@lemmy.zip 5 points 6 hours ago (2 children)

Oh, yeah, in the game's present you're right. I meant the game's past: where all the humans went and what info you get through the audio logs and such.

Spoiler: IIRC it was basically a cult thing where a bunch of them were convinced their soul wouldn't go with their consciousness unless they died during or very shortly after the brain scan that was uploading them to the satellite thingy.

Guess it should be wrapped in spoiler tags just in case...

[–] SaneMartigan@aussie.zone 10 points 7 hours ago

Don't be evil.
