

You dismiss the whole person just because they acknowledge using an LLM? That seems a bit harsh - especially since they had the decency to mention the source, which is basically the same as saying “take this with a grain of salt.”


I only did it here to illustrate a point. Typically I only use it on longer posts. I’m not a native English speaker, and I often struggle to express my thoughts clearly, so I find it immensely useful to run what I write through AI and see the corrections it makes.


Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, and just having it edit and polish what you already wrote.
A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.
Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote.
A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.


Even if we’d never invented LLMs, I’d still be equally worried about AGI. I’ve been talking about it since 2016 or so - LLMs aren’t the motivation for that worry, since nobody had even heard of them back then.
The timescale is also irrelevant here. I’m no less worried even if we’re 500 years away from it. How close to Earth does the asteroid need to get before it’s acceptable to start worrying about it?


Nobody’s saying AGI is here right now - it’s a concept, like worrying about an asteroid wiping us out before it actually shows up. Dismissing it as “fake” just ignores the trajectory we’re on with AI development. If we wait until it’s real to start thinking about risks, it might be too late.


No, it doesn’t. It’s a reasonably safe assumption that something that intelligent is probably also conscious - but it doesn’t have to be.
We also don’t need to understand consciousness in order to create it in our systems. If consciousness is just an emergent feature of a sufficiently high level of information processing, then it would automatically show up once we build such a system, whether we intend it or not.
Hell, in the worst case we might create something we assume isn’t conscious - but it is - and it could be suffering immensely.


Where does it say that AGI needs to be conscious?


Nothing about this is small or cute.
Compared to AGI, it is. We don’t know how far away we are from creating it. We can only speculate.


The people who warn about AI risk aren’t worried about GenAI - they’re worried about AGI.
We’re raising a tiger cub. Right now it’s small and cute, but it won’t stay that way forever.


Who said it needs to add value? The article claims that showing AI-generated content to others without them explicitly asking for it is inherently bad - even when you tell them it’s AI. So basically: if you share it without mentioning the source, you’re deceiving people, and if you do mention it, it’s still bad… because reasons.
To me that sounds more like an ideological stance than a logical one.