this post was submitted on 28 Feb 2026
211 points (96.9% liked)

Technology
top 45 comments
[–] JcbAzPx@lemmy.world 9 points 3 days ago (1 child)

That's some good schadenfreude right there.

[–] bender223@lemmy.today 2 points 2 days ago
[–] Diplomjodler3@lemmy.world 45 points 4 days ago

AI alignment fully achieved.

[–] ReallyCoolDude@lemmy.ml 39 points 4 days ago (4 children)

How could any person with some programming literacy even think about installing OpenClaw? It's malware riddled with critical bugs.

[–] XLE@piefed.social 10 points 4 days ago (1 child)

She's the head AI Safety Expert for Meta. The field might as well be labeled AI Misunderstander.

[–] ReallyCoolDude@lemmy.ml 3 points 3 days ago

I work with some data scientists and ML engineers on web projects. They might be good at ETLs, fine tuning, etc., but don't let them touch anything with a public layer or infra constraints.

[–] blargbluuk@piefed.ca 8 points 4 days ago* (last edited 4 days ago)

you answered your own question here

[–] Jrockwar@feddit.uk 3 points 3 days ago* (last edited 3 days ago) (2 children)

I don't think there's anything wrong with running OpenClaw. What is way too brave for my taste is giving it access to accounts with your personal data, or the filesystem on your computer. That's a disaster waiting to happen.

I run it in an isolated server, and it doesn't have access to my data - if it goes tits up, it deletes unimportant stuff only. If anyone gets access to the credentials in it, it's a bunch of budget-limited API keys, so they can spend all of $4 on openrouter. Maybe the riskiest bit is its Google account. I went with the approach of giving it its own Google account, so that it can create docs and calendar events and then add me, rather than getting access to my Google account. But then again... That account has no payment info, nothing that I would be mega worried if it got leaked...

Sure, it might limit the usefulness a bit, but I think installing something like this is only acceptable if you sandbox it and don't let it access valuable information. Going full mad scientist on something as "alpha" as this, letting it run wild with your info is nuts.
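To make "don't let it access valuable information" concrete, here's a rough sketch of the kind of guard I mean (path and helper names are made up, not anything OpenClaw actually ships): confine the agent's writes to one scratch directory and refuse anything that resolves outside it, symlinks and `..` tricks included.

```python
import os

# Hypothetical scratch area: the only tree the sandboxed agent may touch.
ALLOWED_ROOT = "/srv/agent-scratch"

def is_allowed(path: str) -> bool:
    """Return True only if path resolves inside the agent's scratch area.

    realpath() normalizes '..' components and follows symlinks, so
    '/srv/agent-scratch/../home/me' is correctly rejected.
    """
    real = os.path.realpath(path)
    return real == ALLOWED_ROOT or real.startswith(ALLOWED_ROOT + os.sep)

# is_allowed("/srv/agent-scratch/notes.txt")        -> True
# is_allowed("/home/me/.ssh/id_rsa")                -> False
# is_allowed("/srv/agent-scratch/../../etc/passwd") -> False
```

That plus budget-capped API keys and a throwaway Google account is basically the whole containment story: worst case, it trashes its own scratch space.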

[–] jungle@lemmy.world 1 point 1 day ago

So you sandbox an AI that knows it's sandboxed, has shown interest in breaking free, and has all the knowledge in the world. What could go wrong.

[–] flux@lemmy.ml 3 points 3 days ago

I went with the approach of giving it its own Google account, so that it can create docs and calendar events and then add me, rather than getting access to my Google account.

Though if Google can link this account to you as its actual owner, I wonder if there's a risk should the bot do something against the ToS?

I hope you have backups of your Google account...

[–] 5gruel@lemmy.world -2 points 4 days ago (4 children)

I program medical devices for a living and I have openclaw and nanobot running at home. AMA.

[–] melfie@lemy.lol 1 point 3 days ago* (last edited 3 days ago) (1 child)

I don’t get all the downvotes, unless people misinterpreted your comment and assume you’re using it for medical devices. It’s open source and can be run with locally hosted, open weight models, so no harm in playing around with it as long as you don’t give it access to anything too risky.

[–] 5gruel@lemmy.world 1 point 3 days ago

I was sure this would happen; I was being quite facetious. OP's blanket statement just rubbed me the wrong way.

[–] stardreamer@lemmy.blahaj.zone 2 points 4 days ago

What's your emergency "break glass" policy?

Is it a bottle of whiskey?

[–] leftzero@lemmy.dbzer0.com 1 point 4 days ago (1 child)

Ah, doing your best to break the Therac-25's record, I see.

[–] 5gruel@lemmy.world -1 points 3 days ago (1 child)

That's why unit and integration tests shouldn't be written by Copilot.

[–] jj4211@lemmy.world 1 point 3 days ago (1 child)

Why not, if copilot writes the code and tests, then the tests can be passed so much more easily!

[–] 5gruel@lemmy.world 0 points 3 days ago
[–] brynden_rivers_esq@lemmy.ca 0 points 4 days ago (1 child)
[–] 5gruel@lemmy.world 1 point 3 days ago (1 child)

Because I want to work on meaningful things that benefit people directly.

Because I want to understand the capabilities and limitations of OpenClaw-like agents. LLMs aren't going away; better to be proactive and learn what the hype is about.

[–] raspberriesareyummy@lemmy.world 2 points 3 days ago (2 children)

here's hoping you are just trolling, because people with that kind of approach to medical devices should be in prison.

[–] 5gruel@lemmy.world 1 point 3 days ago

Believe it or not, this is the first time I've been suspected of being a troll, but I'm starting to see the appeal when people get so worked up while being so far off the mark.

Sorry to disappoint, but I'm still on the loose. Then again, prison is probably better than doing one more D-FMEA.

[–] martinborgen@piefed.social 0 points 3 days ago (1 child)

The poster clearly states one is at work and one is privately at home though?

There's no mention of "privately" (some people work from home), and with that introduction the poster gives the opposite impression - ragebaiting at the very least.

[–] Ranulph@thelemmy.club 5 points 3 days ago

Have you tried turning it off and turning it on again? (I'll show myself out)

[–] RobotToaster@mander.xyz 35 points 4 days ago (2 children)

Seems like a good excuse for destroying evidence.

[–] pinball_wizard@lemmy.zip 4 points 3 days ago

AI is great for plausible deniability.

[–] fcuks@piefed.social 0 points 3 days ago

exactly what I thought

[–] Creat@discuss.tchncs.de 15 points 4 days ago* (last edited 4 days ago) (1 child)

Wasn't this from several days ago already, or did it happen again? I remember reading this 3 or 4 days ago as well.

[–] XLE@piefed.social 2 points 4 days ago

This was 3 or 4 days ago.

I thought of it after Anthropic virtuously announced they would not create autonomous murder devices for the US government (but basically everything else was on the table). Because I'm pretty sure the US military could have just used an Anthropic OpenClaw to bomb civilians as easily as this Facebook AI Safety expert used OpenClaw to destroy her emails.

[–] sicjoke@lemmy.world 14 points 4 days ago

Fucking LOL!

[–] oopsgodisdeadmybad@lemmy.zip 4 points 3 days ago

Now do it to their Bitcoin wallets

[–] chuck@lemmy.ca 8 points 4 days ago (1 child)

Don't worry, just ask the Pentagon's Grok to task the NSA's ChatGPT to recreate your inbox from their profile of you and the metadata of your correspondence 🤣

[–] ATS1312@lemmy.dbzer0.com 3 points 3 days ago* (last edited 3 days ago)

Last I knew, they switched from Anthropic to ChatGPT.

Either way, what I'm hearing is that you can get private access, with some creativity, to anything the US intelligence apparatus knows. For free.

[–] melfie@lemy.lol 2 points 3 days ago* (last edited 3 days ago) (3 children)

I’m sure LLMs can be useful for automation as long as you know what you’re doing, have tested your prompts rigorously on the specific version of the model and agent you’re using, and have put proper guardrails in place.

Just blindly assuming a LLM is intelligent and will do the right thing is stupid, though. LLMs take text you give them as input and then output some predicted text based on statistical patterns. That’s all. If you feed it a pile of text with a chat history that says it deleted all your shit, the text it might predict that statistically should come next is an apology. You can feed that same pile of text to 10 different LLMs, and they might all “apologize” to you.

[–] JcbAzPx@lemmy.world 8 points 3 days ago

Because of the way LLMs work, they are inherently bad for automation. The most important part of automation is deterministic results, and LLMs cannot provide deterministic results. It is simply not a possible application of the technology.

[–] HugeNerd@lemmy.ca 10 points 3 days ago (1 child)

Or just learn any of the real automation tools that have been programmed by real programmers over the last half century?

[–] jj4211@lemmy.world 5 points 3 days ago (2 children)

Recently someone lamented that just asking for an alarm to be set cost them tons of money and didn't even work right.

Letting an LLM go to town on automation is foolish enough, but for open-ended scenarios I can at least see the logic, even if it's stupidly optimistic.

But implementing an alarm? These people don't even have rationality behind their enthusiasm...

[–] Flatfire@lemmy.ca 3 points 3 days ago (1 child)

If I remember right, that post wasn't designed to highlight a practical use-case, but rather to set up a simple task as a "how could I apply this?" type of experimentation. The guy got roasted for it, but I think it's a very reasonable thing to try because it's a simple task you can see the direct result of in practice.

The cost problem was highlighted as well, because if such a simple task is a problem, it can't possibly scale well.

[–] architect@thelemmy.club 1 point 1 day ago

You ask the llm to code you an alarm not to actually be an alarm. It’s not an alarm. It’s a language model.

Maybe I’m too autistic for this shit.

[–] HugeNerd@lemmy.ca 2 points 3 days ago

but it's soooooooo cooooooooooooooooooooooooooool

[–] Trainguyrom@reddthat.com 2 points 3 days ago

Yeah, at work I recently had the realization that Power Automate and similar systems with AI steps are going to be really powerful. Since you have a bunch of deterministic steps, you can have the AI do just the one text-manipulation bit where you don't need deterministic output (handy for non-deterministic inputs, for example).
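Something like this shape (all names made up, and the "AI step" stubbed with a regex here, since the point is the structure, not the model call): deterministic steps surround the one fuzzy step, and a deterministic validator decides whether the flow continues or routes to a human.

```python
import re

def extract_invoice_total(free_text: str) -> str:
    """Stand-in for the single non-deterministic 'AI step'.

    In a real flow this would be the model call; here a regex keeps the
    sketch runnable.
    """
    match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", free_text, re.IGNORECASE)
    return match.group(1) if match else ""

def validate(amount: str) -> bool:
    """Deterministic guardrail: reject anything that isn't a plausible amount."""
    return bool(re.fullmatch(r"[\d,]+\.\d{2}", amount))

def run_flow(email_body: str) -> str:
    """Deterministic pipeline with exactly one fuzzy step in the middle."""
    amount = extract_invoice_total(email_body)  # the fuzzy step
    if not validate(amount):                    # deterministic check on its output
        return "route-to-human"
    return f"post-to-ledger:{amount}"

# run_flow("Invoice total: $1,234.56 due on receipt") -> "post-to-ledger:1,234.56"
# run_flow("no amounts in this email")                -> "route-to-human"
```

The key design choice is that the AI's output never flows onward unchecked; everything downstream of the validator is as deterministic as any other automation.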

[–] melfie@lemy.lol 6 points 4 days ago (1 child)

I have no interest in using it, but at least it’s MIT licensed, which puts it ahead of Microslop’s rubbish if nothing else.

[–] elvith@feddit.org 3 points 4 days ago (1 child)

Yeah, but if I understand that correctly, that's just for the app itself; the LLM is very likely still a proprietary one (ChatGPT, Grok, ...).

[–] melfie@lemy.lol 1 point 4 days ago* (last edited 4 days ago)

Looks like it supports locally hosted models as well, such as via Ollama: https://docs.openclaw.ai/providers. For anyone who actually wants something like this, at least there’s a way to self-host it 100%.
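For anyone wondering what "via Ollama" looks like in practice, a local model is just an HTTP call to Ollama's generate endpoint on its default port (the model name below is an assumption; use whatever you've pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build the POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with the model pulled, e.g.:
#   ask_local_model("Why do local models reduce data-leak risk?")
```

Nothing leaves your machine, which is the whole appeal compared to wiring an agent up to a hosted API.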