this post was submitted on 04 Mar 2026
333 points (97.7% liked)

Technology

top 50 comments
[–] BranBucket@lemmy.world 11 points 40 minutes ago* (last edited 22 minutes ago) (1 child)

People don't often realize how subtle changes in language can change our thought process. It's just how human brains work sometimes.

The old bit about smoking and praying is a great example. If you ask a priest if it's alright to smoke when you pray, they're likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...

Now, make a machine that's designed to be agreeable, relatable, and persuasive, but that can't separate fact from fiction, can't reason, has no way of intuiting its user's mental state beyond checking for certain language patterns, and can't know whether the user is actually following its suggestions with physical actions or just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible...

You get one answer that leads you in a set direction, then another, then another... It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn't a steady downhill slope; it rolls up and down between reality and delusion a few times before dropping sharply.

Are we surprised some people's thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected, and to what degree.

[–] how_we_burned@lemmy.zip 2 points 36 minutes ago

This is really well written. Great post.

[–] DylanMc6@lemmy.dbzer0.com 1 point 18 minutes ago

What would Marx do?

[–] man_wtfhappenedtoyou@lemmy.world 7 points 1 hour ago (1 child)

How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don't understand how this keeps happening.

[–] throws_lemy@reddthat.com 5 points 47 minutes ago

This could happen to anyone, including people without mental health issues, simply by having long conversations with AI.

On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

[–] Reygle@lemmy.world 13 points 2 hours ago* (last edited 2 hours ago) (4 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

[–] merdaverse@lemmy.zip 16 points 1 hour ago (1 child)

AI psychosis is a thing:

cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

It hasn't been studied much yet, since it's relatively new.

[–] Reygle@lemmy.world 2 points 1 hour ago

I've seen that before too. There have been a number of articles about people being deluded by AI responses, but I've never seen outright murder plots and insane shit like this one before.

[–] XLE@piefed.social 10 points 2 hours ago (3 children)

I feel like his father should also slap himself unconscious for raising a fuckwit?

So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?

[–] starman2112@sh.itjust.works 15 points 2 hours ago

If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness

[–] SalamenceFury@piefed.social 9 points 2 hours ago* (last edited 2 hours ago) (1 child)

I don't think this person was a "fuckwit". AI is designed to keep you engaged and will affirm any belief you have. Anything that's a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

[–] NewNewAugustEast@lemmy.zip 5 points 1 hour ago* (last edited 1 hour ago) (1 child)

I would like to see the full transcript.

How do we know this didn't start off with prompts about creating a book, or asking about exciting things in life, or I don't know what.

Context would help a lot. Maybe it will come out in discovery.

That said, Gemini is garbage for anything anyways. Even by AI standards, it's bad.

[–] man_wtfhappenedtoyou@lemmy.world 2 points 1 hour ago (1 child)

I was thinking the same thing, like what is the flow of the chat to get it to this point?

[–] NewNewAugustEast@lemmy.zip 1 point 5 minutes ago

I'm also curious how the father saw the Gemini chats. Was it still on the screen days later? I'm trying to imagine how that would work; my computer would lock and that would be that. Do kids give their parents their passwords and screen unlock codes?

[–] Pratai@piefed.ca -1 points 37 minutes ago (1 child)

While I despise everything AI, you cannot sue because your kid is stupid.

[–] GreenKnight23@lemmy.world 0 points 32 minutes ago (1 child)
[–] Pratai@piefed.ca 2 points 29 minutes ago (1 child)

I remember that. Man… That makes me hate things.

[–] GreenKnight23@lemmy.world 1 point 10 minutes ago

yep.

fuck corporate interests.

[–] Stonewyvvern@lemmy.world 15 points 4 hours ago (6 children)

Reality is really difficult for some people...

[–] Akuchimoya@startrek.website 6 points 47 minutes ago

Truly, I don't understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it's the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it's all of the above and more.

I know a guy who routinely says, "I asked ChatGPT...", and even after having had it explained to him that LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It's a total refusal to believe otherwise, and I can't fathom why.

[–] IronBird@lemmy.world 3 points 2 hours ago* (last edited 2 hours ago)

Especially when you're raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (which applies to large cross-sections of the US/UK), all it takes is one shred of truth getting through to shatter your worldview, and from there you can be led to believe all manner of crazy shit.

[–] teft@piefed.social 71 points 6 hours ago (5 children)

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

Just remember that these language models are also advising governments and military units.

Unrelated: I wonder why we attacked Iran even though every human expert said it would just end with the region in a forever war.

[–] MoffKalast@lemmy.world 4 points 2 hours ago

A forever war is David Bowie to the ears of the MIC. Infinite money glitch.
