[–] JigglypuffSeenFromAbove@lemmy.world 18 points 3 days ago (3 children)

From OpenAI's statement:

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

• No use of OpenAI technology for mass domestic surveillance.

• No use of OpenAI technology to direct autonomous weapons systems.

• No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

It specifically states their AI can't/won't be used for mass domestic surveillance or to direct autonomous weapons. Of course I'm not saying I trust them, but isn't this the same thing Anthropic says they're against? What's the difference here, or what did I miss?