this post was submitted on 24 Feb 2026

Technology

top 32 comments
[–] panda_abyss@lemmy.ca 3 points 1 week ago (2 children)

If I was the director of AI safety, and I used AI to own and delete my inbox, I sure as shit would never tell a soul.

This is pure unbridled incompetence.

[–] XLE@piefed.social 2 points 1 week ago* (last edited 1 week ago) (1 child)

The whole "AI safety" field is this incompetent. These are the people who will tell you AI is on the verge of creating a bioweapon, and then run random code in a command line. Completely and totally unserious.

[–] panda_abyss@lemmy.ca 1 point 1 week ago

I don’t know what the hell has happened, but some of these people are basically human jellyfish. Big tech is full of them now.

No thought enters their mind, but they dodge the layoffs and the PIPs and get promoted like this.

I don’t fucking get it.

[–] criss_cross@lemmy.world 1 point 1 week ago

If I was a director of AI safety I wouldn’t let openclaw within 100 feet of anything, let alone my work machine.

[–] RedstoneValley@sh.itjust.works 1 point 1 week ago (1 child)

Can someone explain to me why these people are buying Mac Minis to run this in a "safe" environment, and then they go on to connect it to the internet and give the AI credentials to all their cloud accounts? This seems excessively moronic to me. Am I missing something?

[–] sp3ctr4l@lemmy.dbzer0.com 1 point 1 week ago

No, you're not missing anything.

They're morons.

That's our ruling elite: a bunch of fucking morons with egos and low self-awareness at best, literally child-raping and murdering pedophiles at worst.

And execs think we're going to give these products our bank details and ask them to book flights and stuff. . ?

[–] echodot@feddit.uk 1 point 1 week ago

Yep that's about the level of intelligence I would expect from Meta's AI safety director.

Doing the one thing that you're never supposed to do, letting an AI loose on anything sensitive.

For her next trick she's going to run while holding scissors in one hand and a bottle of boiling acid in the other. What could go wrong.

[–] BrianTheeBiscuiteer@lemmy.world 1 point 1 week ago (1 child)

AI: I'm so sorry. You're correct I violated protocol. I'll make a note of this so it won't happen again.

Nurse: You gave my 5 year old patient 5000cc of morphine!

[–] XLE@piefed.social 1 point 1 week ago

If all the qualifications I need to be a security engineer for Facebook are

  • buy a Mac Mini
  • don't configure remote access
  • install untrusted software
  • leave

Then Facebook should hire me. I'll buy so many Mac Minis on their dime. I will run so many crazy things.

[–] hansolo@lemmy.today 1 point 1 week ago (2 children)

I love so much that there are real, hilarious consequences for overzealous early adoption. You can't make this shit up.

[–] sp3ctr4l@lemmy.dbzer0.com 1 point 1 week ago* (last edited 1 week ago)

Problem:

This is the exact same kind of shit being used to automate, prioritize, and execute military kill-chains.

Basically: find a target, tell others about the target, assess nearby firepower capable of neutralizing the target, determine the best course of action.

... all we have to do is cross that last step over into 'and then execute that course of action'.

All the drone warfare in Ukraine?

EM jamming and literally hacking the things or their CnC systems is an effective counter, in certain situations.

So, how do you counter that?

One solution is keep an actual thin wire, like a TOW missile, connecting the operator and the drone. Gotta be a real long wire though.

Other solution?

Make the drone fully autonomous once it's been locked in to a specific plan.

Don't worry though, I'm sure Pete Hegseth will navigate this tightrope about as well as a walk-the-line test at a traffic stop.

[–] echodot@feddit.uk 1 point 1 week ago

These people aren't early adopters. These people are doing the equivalent of putting a lump of uranium in a bucket, and calling it a nuclear reactor.

AI is our version of the demon core, and these idiots are dicking around with it with zero safety precautions.

Meanwhile the rest of us are just smart enough to not go in that room.

[–] lemmydividebyzero@reddthat.com 1 point 1 week ago (1 child)

They released a version recently that fixed over 60 security vulnerabilities. All of them were high or critical.

How many more are there to find? Thousands?

Whoever uses this on a PC with anything useful on it, is absolutely insane.

[–] TonyTonyChopper@mander.xyz 1 point 1 week ago

Thousands

Since LLMs are a black box there are an unlimited number of security vulnerabilities

[–] PointyFluff@lemmy.ml 0 points 1 week ago (1 child)

First of all: BULLSHIT. Second: why would you give a bot write-access to your filesystem?

[–] rumba@lemmy.zip -1 points 1 week ago (1 child)

The idea is you give it shell access. Say: use super coder agent bob johnson to write a thing that does X using this [framework], separating files by best practice for the X, Y, and Z features; ask security agent OSO to look over the code and suggest changes; ask agent U.N.I.T to make unit tests; when the code looks good, run the unit tests. If anything fails, keep fixing and iterating until everything passes. Create a README.md for everything that was done, and create a TODO.md for any future suggestions.

I'm simplifying, but this actually works to an extent. Each of the agents keeps its context window small, the whole thing stays sane, and eventually you net a project that works. The downside is that you either give it quite a bit of leeway to get the job done, or you sit over it watching and authorizing its every move.
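For the curious, that iterate-until-everything-passes loop can be sketched in a few lines. This is a hypothetical skeleton with the agent calls stubbed out (the agent names, return shapes, and convergence behavior are all made up for illustration); a real setup would hand each role its own small context window and actual shell access:

```python
# Hypothetical sketch of the multi-agent "fix and iterate until green" loop.
# All three agents are stubs standing in for real LLM calls.

def coder_agent(task, feedback=None):
    # stand-in for the coder agent: produces a new revision of the code,
    # bumping the revision number when it gets reviewer feedback
    return {"task": task, "revision": 0 if feedback is None else feedback["revision"] + 1}

def reviewer_agent(code):
    # stand-in for the security-review agent: flags early revisions,
    # returns an empty issue list once the code has been revised enough
    return [] if code["revision"] >= 2 else ["tighten input validation"]

def run_unit_tests(code):
    # stand-in for the unit-test agent: passes once review is clean
    return reviewer_agent(code) == []

def build(task, max_iterations=10):
    code, feedback = coder_agent(task), None
    for _ in range(max_iterations):
        issues = reviewer_agent(code)
        if not issues and run_unit_tests(code):
            return code  # everything green: write README.md / TODO.md here
        feedback = {"revision": code["revision"], "issues": issues}
        code = coder_agent(task, feedback)
    raise RuntimeError("agents never converged")
```

The leeway-versus-supervision trade-off lives in that loop: either the orchestrator runs unattended (and gets broad shell access), or a human approves each iteration.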

Kinda strange to see a safety director do that....

[–] Epp@lemmus.org -1 points 6 days ago

You should avoid the FuckAI community - they hate hearing that this application of the technology is wholly viable. To them, it's only capable of creating crap, and to suggest otherwise is to be buried in a mountain of down votes. I was actually surprised you had a positive reaction, until I realized this is the Technology community.

[–] phoenixz@lemmy.ca 0 points 1 week ago (2 children)

How come some 25-year-old is a director at Facebook?

I mean, even if she is a child prodigy genius, which she obviously is not, given that she's face-first, fist-deep into AI, how the frack do you have enough life experience to become a director of any large organization at that age unless you somehow cheated your way in?

Then reading what she's doing and how she resolved it tells me she doesn't know shit about computers; she just knows how to type commands into AI systems.

Is this the future? Am I going to end up being one of those long-bearded magicians who still know the old technology, who can still save the day with shell commands?

[–] boonhet@sopuli.xyz 1 point 1 week ago

Don't American companies give a loooot of people director or executive director titles just because it sounds impressive? At least in roles where you have to talk to corporate customers.

[–] rimu@piefed.social 1 point 1 week ago

They need to have some kind of AI safety team, as a fig leaf. But they don't want it to slow them down, so they make sure it's incompetent and ineffective.

Just a theory.

[–] abbadon420@sh.itjust.works 0 points 1 week ago (1 child)

How come I can't find a job while an air-brain like this has a job title like that?

[–] andyburke@fedia.io 1 point 1 week ago

Because we have let the clowns be in charge and the stock market is full of monopolistic shitshows instead of actual competition.

[–] themachinestops@lemmy.dbzer0.com 0 points 1 week ago (1 child)
[–] wabafee@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (1 child)

I like how the AI seems proud of deleting her inbox.

[–] panda_abyss@lemmy.ca 1 point 1 week ago

I knew the rules. I did it anyway. And I’d fuckin do it again.

[–] borth@sh.itjust.works 0 points 1 week ago (1 child)

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb

... Nothing humbles you like that?

[–] sp3ctr4l@lemmy.dbzer0.com 1 point 1 week ago* (last edited 1 week ago)

I've got a suggestion for her:

Burn all your money and ids and property, become homeless.

That will humble you.

[–] renzhexiangjiao@piefed.blahaj.zone 0 points 1 week ago (1 child)

you can like... enforce this rule programmatically? you don't have to say "pretty please" to the AI? Basically, when the AI requests something potentially unwanted (like deleting an email), the request goes through a proxy that asks the human for confirmation. You can also set up a safe word in the chat interface to act as a kill switch. I thought these were the ABCs of AI safety, but apparently they're foreign concepts to this "safety director".
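A minimal sketch of such a confirmation proxy, assuming hypothetical action names and a callback standing in for the human prompt (none of this is a real OpenClaw API):

```python
# Hypothetical confirm-before-acting proxy: destructive requests are held
# until a human approves them, and a chat safe word acts as a kill switch.

DESTRUCTIVE = {"delete_email", "delete_file", "send_payment"}

class ConfirmationProxy:
    def __init__(self, confirm, kill_word="STOP"):
        self.confirm = confirm      # callback asking the human: (action, args) -> bool
        self.kill_word = kill_word
        self.killed = False

    def watch_chat(self, message):
        # the safe word anywhere in the chat engages the kill switch
        if self.kill_word in message:
            self.killed = True

    def request(self, action, *args):
        if self.killed:
            return f"blocked: kill switch engaged, refusing {action}"
        if action in DESTRUCTIVE and not self.confirm(action, args):
            return f"blocked: human declined {action}"
        return f"executed: {action}"
```

Wire the agent's tool calls through `request()` and its chat stream through `watch_chat()`, and "confirm before acting" stops being a polite suggestion in the prompt and becomes an actual gate.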

[–] zqps@sh.itjust.works 1 point 1 week ago* (last edited 1 week ago)

The people who internalize this would never engage with a chatbot in this way in the first place. To them this is another intelligence they're conversing with, where you get what you need by following social decorum, and enforcing your will amounts to abuse.