After Anthropic flatly refused to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI has jumped right into bed with the United States Department of War.
There is no such thing as “ethical” AI coming from Big Tech. Google, Microsoft, Anthropic, Amazon — all of them built their machines without consent, and all of their machines have been subsidized with our taxes and resources. Anthropic is a pro-Trump, pro-foreign-dictator company that crossed every single red line until the very last one.
Anthropic was pro mass surveillance of foreigners.
It was okay with helping Trump plan criminal invasions.
It just doesn’t want to be held responsible for pushing the “go” button, but we know their software was one suggestion away from doing it anyway.
And this is why anybody who made a mistake in the past should be shunned forever, regardless of their current views and actions. They may as well just jump off a bridge and save us the trouble of setting up a firing squad.
They never said they should be shunned; they didn't even list a social consequence. The fact remains that if you used OpenAI in the past, you already contributed.
Yeah, we live and learn. We don't expect perfection, we expect self-improvement. It's important not to excuse bad decisions and behavior. Be more skeptical of new technology in the future, and pay attention to who's creating and selling it.
Impossible purity test? That’s utter bull crap. There have been many warnings about the negative uses of AI for years now, for example: https://aiforgood.itu.int/event/addressing-the-dark-sides-of-ai/
Expecting people to understand that this use could be expanded into committing state-sponsored atrocities is not a stretch.
Well, judge not lest you too be judged…
This person uses the internet, which for *years* has had TONS of negative uses.
How do you think Epstein emailed his buddies? The internet.
You can’t trust people that use evil technologies like user Unattributed. Thanks for the incredibly sound and intelligent logical framework!
You are fucking insane. By your logic any customer of a company that might one day build a weapon is complicit. That is asinine.
With their last link, they’re complicit