this post was submitted on 26 Feb 2026
Technology
[–] Mulligrubs@lemmy.world 1 point 5 days ago

But, have you considered AI-blockchain NFTs?

I'm never wrong about this stuff.

[–] Crozekiel@lemmy.zip 1 point 6 days ago

I'm frustrated at the "was" in the title... Like we aren't still sinking on that awful ship right now, like it is all behind us... But it isn't. :(

[–] Tetsuo@jlai.lu 1 point 6 days ago (1 child)

Don't worry, we will all be paying for it when the AI Jenga tower falls.

Businesses can make huge investment mistakes and then ask the government for help, and the government will have to bail them out to prevent total collapse and save jobs. So we will pay for a few dumb CEOs, like we always do.

[–] floofloof@lemmy.ca 1 point 6 days ago* (last edited 6 days ago)

And those CEOs will go off with their vast piles of money to make the same mistakes again, since the message they get is that this behaviour will be rewarded.

[–] Ilixtze@lemmy.ml 1 point 6 days ago

In other news the sky is blue, enjoy your hollowed out economy.

[–] ada@piefed.blahaj.zone 1 point 6 days ago

Generative AI was a scam

So is Substack...

[–] MaggiWuerze@feddit.org 1 point 6 days ago (1 child)

Substack promotes and finances Nazi content

They even pushed a notification with a swastika to all users

[–] Curious_Canid@piefed.ca 0 points 6 days ago (1 child)

LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you're looking for is reasonably common, that may be useful. If what you're looking for is obscure, you may get things that don't apply. And the LLM cannot tell the difference. LLMs can be useful, but unlike an LLM, you need to understand the context to use them safely.

I think the most interesting thing about LLMs is actually what they tell us about the repetitive nature of most of what we do.

[–] partial_accumen@lemmy.world 1 point 6 days ago (1 child)

LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you’re looking for is reasonably common, that may be useful.

Just for common understanding: you're making blanket statements about LLMs as though they apply to all LLMs. You're not wrong if you're speaking generally of the LLM deployments aimed at retail consumption, like ChatGPT. None of what I'm saying here is a defense of how these giant companies are using LLMs today. I'm just posting from a Data Science point of view on the technology itself.

However, if you're talking about LLM technology itself, from a Data Science view, your statements may not apply. The common hyperparameters for LLMs choose the most likely match for the next token (as in the ChatGPT example), but nothing about the technology requires that. In fact, you can set a model to specifically exclude the top result, or even choose the least likely result. What comes out when you set these hyperparameters is truly strange and looks like absolute garbage, but it is unique; the result is something that likely hasn't existed before. I'm not saying this is a useful exercise. It's the most extreme version, to illustrate the point. There's also the "temperature" hyperparameter, which introduces straight-up randomness. If you crank it up, the model starts making selections with very wide weights, resulting in pretty wild (and potentially useless) output.

What many Data Scientists do when trying to make LLMs generate something truly new and unique is balance these settings so that new, useful combinations come out without the output being absolute garbage.
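The sampling knobs described above can be sketched in a few lines. This is a minimal illustration, not any particular model's decoding code: the logits are made-up numbers standing in for a model's next-token scores, and `sample_token` is a hypothetical helper showing how temperature rescaling and top-candidate exclusion change which token gets picked.

```python
import numpy as np

# Hypothetical next-token logits from a language model (illustrative only).
logits = np.array([4.0, 2.5, 1.0, 0.5, -1.0])

def sample_token(logits, temperature=1.0, exclude_top=False, rng=None):
    """Sample a token index from logits.

    temperature < 1 sharpens the distribution (more greedy),
    temperature > 1 flattens it (more random).
    exclude_top forbids the single most likely token entirely.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    if exclude_top:
        scaled = scaled.copy()
        scaled[np.argmax(scaled)] = -np.inf  # mask out the top candidate
    # Softmax (shifted by the max for numerical stability).
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With `exclude_top=True` the most likely token can never be emitted, and cranking `temperature` up spreads probability mass toward the unlikely tail, which is exactly the "wild and potentially useless" regime mentioned above.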

[–] Curious_Canid@piefed.ca 1 point 6 days ago

I write software for a living and I have worked directly with LLM backend code. You aren't wrong about the exceptions, but I think they actually reinforce my main point. If you play with the parameters you can make all kinds of things happen, but all of those things are still driven by the existing information it already has or can find. It can mash things together in random new ways, but it will always work with components that already exist. There is no awareness of context or meaning that would allow it to make intelligent choices about what it mashes together. That will always be driven by the patterns it already knows, positively or negatively.

It's like doing chemistry by picking random bottles from the shelf and dumping them into a beaker to see what happens. You could make an amazing discovery that way, but the chances of it happening are very, very low. And even if it does happen, there's an excellent chance that you won't recognize it.

I'm in favor of using LLMs for tasks that involve large-scale data analysis. They can be quite helpful, as long as the user understands their limitations and performs due diligence to validate the results.

Unfortunately, what we are mostly seeing are cases where LLMs are used to generate boilerplate text or code assembled from a vast collection of material that someone who actually knew what they were doing had previously created. That kind of reuse is not inherently bad, but it should not be confused with what competent writers or coders do. And if LLMs really do take over a lot of routine daily tasks from people, the pool of approaches to those tasks will stagnate, and eventually degenerate, as LLMs become the primary sources of each other's solutions.

LLMs may very well change the world, but not in the ways most people expect. Companies that have invested heavily in them are pushing them as solutions to the wrong problems.