Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, or just having it edit and polish what you already wrote.
A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.
spoiler
Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote.
A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.
I don’t think the LLM made your response better in any meaningful way. Sure, it cleaned up the grammar a little, but the rephrasing in a few places was unnecessary.
Trust yourself to communicate without help from external software.
There are many use cases, and you’ve neglected one: linguistic analysis can be used to identify a person and link them to other accounts. I’m not saying it’s likely or apocalyptic, but the risk is real and present. Using an LLM to “sanitize” your output can prevent this.
From a privacy perspective, everyone should do this with a locally hosted LLM. From the perspective of a person who uses the internet, I would absolutely hate it if every article and every comment looked like the same identical brand of AI slop.
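To make the “locally hosted” point concrete, here is a minimal sketch of what that could look like, assuming an Ollama server running on its default local port; the model name and prompt wording are illustrative assumptions, not recommendations. The text never leaves your machine.

```python
import json
import urllib.request

# Assumed local Ollama endpoint (its default non-streaming generate API).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_rewrite_request(text: str, model: str = "llama3") -> dict:
    """Build the JSON payload asking the model to neutralize writing style."""
    prompt = (
        "Rewrite the following comment in plain, neutral English. "
        "Preserve the meaning exactly; change only the wording:\n\n" + text
    )
    return {"model": model, "prompt": prompt, "stream": False}

def sanitize(text: str) -> str:
    """Send the rewrite request to the local server and return the result.

    Requires a running local Ollama instance; nothing is sent off-machine.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_rewrite_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs the local server running):
#   rewritten = sanitize("my original comment text")
```

Whether this actually defeats stylometric analysis depends on how aggressively the model rewrites, but the privacy trade-off the comment describes only holds if the model is local.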
I only did it here to illustrate a point; typically I only use it on longer posts. I’m not a native English speaker, I often struggle to express my thoughts clearly, and I find it immensely useful to run my text through the AI and see what corrections it makes.
Your English is fine and your thoughts there are communicated perfectly.
True, but that’s a benefit to you, not to others. It’s good that the tool is at least helping you learn. I’m sure learning a language isn’t easy, especially the finer details.
Making people dependent on an external service is the very point of LLMs from the investors’ point of view. Imagine how much money they’ll make if everyone simply couldn’t live without an LLM in every aspect of their life.
I’d argue that with a little practice it’s quicker to write a comment and then revise it yourself: fix the punctuation, grammar, and misspellings, and read it through at least once. It’s a useful skill to learn, too.
I read your original just fine.
While your use case may not suffer from the problem depicted in the post[1], I don’t think it’s worth weakening the proposed etiquette for it. If having a norm that reduces the generated garbage one person can inflict on another means slightly worse-worded texts - that’s a price I’m willing to pay.
While it does exhibit other generative-AI issues - like the environmental impact, or how it makes you reliant on companies just waiting to start enshittifying the field - it does not suffer from the problem of forcing humans to read meaningless slop that no one bothered to write. ↩︎