this post was submitted on 25 Feb 2026

Technology

[–] TropicalDingdong@lemmy.world 1 point 1 week ago

I just..

Am I wrong here? Like, look, shame me. I work in machine learning and have since 2012. I don't do any of the LLM shit. I do things like predicting wildfire risk from satellite imagery, or biomass in the Amazon, soil carbon, shit like that.

I've tried all the code assistants. They're fucking crap. There's no building an economy around these things. You'll just get dogshit. There's no building institutions around these things.

The scenario begins with AI agents undergoing a “jump in capability”.

Might as well stop reading there. Another fluff piece about how useful and capable AI supposedly is, disguised as a doomsday scenario. I'm so sick of reading this bullshit. "Agentic AI" based on LLMs does not work reliably yet and very likely never will.

If you complain about bugs in traditional (deterministic) software, you ain't seen nothing yet. A probabilistic system such as an LLM might or might not book the correct flight for you. It might give you the information you have asked for or it might delete your inbox instead.

As a consequence of a system being probabilistic, anything you do with it works or fails based on probabilities. This really is the dumbest timeline.
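The compounding-failure point can be made concrete with a toy calculation (my illustration, not the commenter's; the per-step success rates are made-up assumptions): if each step of a multi-step agent succeeds independently with probability p, the chance the whole task completes correctly decays exponentially with the number of steps.

```python
def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed."""
    return p ** n

# Even a seemingly reliable per-step rate collapses over long chains.
for p in (0.99, 0.95, 0.90):
    print(f"p={p}: 10 steps -> {chain_success(p, 10):.3f}, "
          f"50 steps -> {chain_success(p, 50):.3f}")
```

Under these assumed numbers, a 95%-reliable step still leaves a 50-step task succeeding less than 8% of the time, which is the gap between a demo and an institution.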

[–] andallthat@lemmy.world 1 point 1 week ago* (last edited 1 week ago)

It's almost funny how all those AI doomsday scenarios are actually meant to prop up investment in AI.

See how Amodei and Altman are usually the ones pushing these narratives about how worried they are by the incredible advancements of their respective companies' creations. They are so, so worried about the demise of the human race and how fast it's coming.

And I sort of understand them, because whatever disruption they are peddling needs to happen very fast or they will all run out of money. But what does it say about the rest of the human race that we are actually buying into it and pouring money into creating a dystopian future?

[–] Gsus4@mander.xyz 0 points 1 week ago* (last edited 1 week ago) (1 child)

Lol, they sort of seem to know it's all castles in the clouds and that any spark, e.g. a Substack post, could trigger the loaded spring. Yet nobody thinks they'll be the ones left holding the bags of shit.

[–] WanderingThoughts@europe.pub 0 points 1 week ago (1 child)

They kind of know. The dot-com crash killed many companies, but it also gave rise to Amazon. They're all just hoping they'll be the one that invested in the next Amazon.

[–] somethingsnappy@lemmy.world 1 point 1 week ago

Have they made a profit yet?