this post was submitted on 04 Mar 2026
547 points (97.9% liked)
Technology
82227 readers
4585 users here now
There is a lot to hate about AI, and plenty of real dangers and valid criticism. But AI chatbots convincing people to kill themselves isn't a problem with the chatbots; it's a problem with the user.
I get it: grieving families will look for anyone and anything to blame for a suicide except the victim. But ultimately, it is the victim who chose to kill themselves. If someone can be talked into suicide by something as stupid as an AI chatbot, they really weren't far from the edge to begin with.
So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into it?
Without the AI, these people most likely wouldn't have reached the point of actually committing suicide. I believe the accusations are valid and that AI can be bad for mental health.
There is evidence throughout history of cults committing mass suicide. If a human can convince another human to do this, why can't a robot trained to act and speak like a human do it too? It's not unreasonable to think an AI could push someone to suicide under the right circumstances.
It's not the car manufacturer's responsibility to guarantee a drunk driver doesn't plow into others.
Vulnerable people don't get to outsource responsibility.
Here's the thing: there are no safeguards on who can and cannot use AI. There are safeguards against death by drunk driving.
Drunk driving is illegal. It still happens, but it's against the law, and that deterrent stops people from driving while intoxicated. I guarantee that if drunk driving were legal, there would be exponentially more deaths.
AI, on the other hand, is being shoved down everyone's throats on a daily basis. There are no safeguards; even kids can use it.
Vulnerable people are victims of big tech chasing profit.
Your argument is poor.