…without informed consent.

  • brsrklf@jlai.lu · 7 months ago

    Every now and then I see a guy barging into a topic bringing nothing but “I asked [some AI service] and here’s what it said”, followed by three paragraphs of AI-generated gibberish. And then, when it’s not well received, they just don’t seem to understand why.

    It’s baffling to me. Anyone can ask an AI. A lot of people specifically don’t, because they don’t want to battle with its output for an hour trying to sort out from where it got its information, whether it represented it well, or even whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go “What’s the problem, is any of that wrong?”… Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

    • expr@programming.dev · 7 months ago

      I was trying to help onboard a new lead engineer, working through debugging his Caddy config over Slack. I was clearly putting in effort to help him diagnose his issue, and he posted “I asked ChatGPT and it said these two lines need to be reversed”, which was completely false (Caddy has a system for reordering directives) and honestly just straight-up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn’t welcome and can fuck right off.

      The actual issue? He forgot to restart his development server. 😡
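      For context on the “system for reordering directives” mentioned above: Caddy sorts Caddyfile directives into its own predefined order regardless of how they appear in the file, and the `order` global option is how you change that. A minimal sketch (the directive names are illustrative, not from the thread):

```
{
	# Global options block: override Caddy's built-in directive ordering.
	# Which directives you reorder here is hypothetical, for illustration only.
	order redir before rewrite
}
```

      This is why mechanically swapping two lines in a Caddyfile, as the chatbot suggested, often changes nothing at all.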

    • tias@discuss.tchncs.de · 7 months ago

      Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

      That’s not true. For starters, you can evaluate it on its own merits to see if it makes logical sense - the AI can help solve a maths equation for you, and you can see that it checks out without needing something else to back it up.
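      The first point can be made concrete with a trivial, self-contained check (the equation here is hypothetical, not one from the thread): if an AI claims particular roots for a quadratic, substituting them back in verifies the claim without any external source.

```python
# Suppose an AI claims x = 3 and x = -5 are the roots of x^2 + 2x - 15 = 0.
# Substituting the claimed roots back in checks the answer with no external source.
def is_root(x):
    return x**2 + 2*x - 15 == 0

print(is_root(3), is_root(-5))  # True True: both claimed roots check out
print(is_root(4))               # False: a wrong claim fails the same check
```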

      Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It’s just a smarter search engine with no ads and better focus on the question asked.

      • brsrklf@jlai.lu · 7 months ago

        I am speaking from experience.

        The latest example of that I encountered had a blatant logical inconsistency in its summary: it cited a CVE that wasn’t relevant to what was being discussed, because it had been fixed years before the technology in question even existed. Someone pointed it out.

        The poster hadn’t made the slightest effort to check what they posted; they just regurgitated it. It’s not the reader’s job to verify the crap you’ve posted with no effort of your own.

      • MousePotatoDoesStuff@lemmy.world · 7 months ago

        with no ads

        Google used to have no ads. And especially given how much it costs to run even today’s LLMs, let alone tomorrow’s… enshittification is only a matter of time.

      • Mirodir@discuss.tchncs.de · 7 months ago

        On the second part: that’s only half true. Yes, there are LLMs out there that search the internet, then summarize and reference some of the websites they find.

        However, it’s not uncommon for them to add their own “info” that isn’t in the cited source at all. If you use them to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt, and shouldn’t be relied on at all when it’s critical, even if it comes with a nice citation.

      • tomalley8342@lemmy.world · 7 months ago

        If you have evaluated the statement for its correctness and relevance, then you can just own up to the statement yourself. There is no need to defer responsibility by prefacing it with “I asked [some AI service] and here’s what it said”. That is the point of the article that is being discussed, if you’d like to give it a read sometime.

      • setVeryLoud(true);@lemmy.ca · 7 months ago

        Ok, I didn’t need you to act as a middle man to tell me what the LLM just hallucinated, I can do this myself.

        The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.

        When we ask questions on a public forum, we’re looking to talk to people about their own experience and research through the lens of their own being and expertise. We’re all capable of prompting an AI agent. If we wanted AI answers, we’d prompt an AI agent.

      • SparroHawc@lemmy.zip · 7 months ago

        with no ads

        For now.

        Eventually it becomes a search engine that replaces the ads on the source material with its own ads, thus choking out the source’s funding and taking it for itself.

  • OriginalUsername7@lemmy.world · 7 months ago

    This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

    Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

    We’ve also learned nothing about the OP’s opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

    I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

    But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

    • Cethin@lemmy.zip · 7 months ago

      I would rather someone posted saying they knew shit all about the sport but were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It’s always the most milquetoast response possible, ironically adding less to the conversation than the question it’s responding to.

      That’s literally the point of them. They’re supposed to generate what the most likely result would be. They aren’t supposed to be creative or anything like that. They’re supposed to be generic.

        • Cethin@lemmy.zip · 7 months ago

          It’s still not creative. It’s just rehashing things it heard before. It’s like if a comedian just stole the jokes from other comedians but changed the names of people. That’s not creative, even if it’s slightly different than what anyone’s seen before.

    • WhyJiffie@sh.itjust.works · 7 months ago

      Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

      I guess it has some tabloid-like value. Which, if that counts as value, says a lot about the other party.

  • audaxdreik@pawb.social · 7 months ago

    Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn’t there before, having to cut through the noise.

  • zapzap@lemmings.world · 7 months ago

    I think sometimes when we ask people something we’re not just seeking information. We’re also engaging with other humans. We’re connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

  • Pamasich@kbin.earth · 7 months ago

    Here’s a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI’s answer shared.

    “I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”

    Do you (I mean generally people reading this thread, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.

  • PlutoniumAcid@lemmy.world · 7 months ago

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can’t rely on proof-of-thought anymore.

    This is what makes AI so insidious. It’s like email spam. It puts the burden on the reader to determine and sort ham from spam.

  • Evotech@lemmy.world · 7 months ago

    The worst is being in a technical role and having project managers and marketing people telling me how it is based on some ChatGPT output.

    Like shut the fuck up please, you literally don’t know what you are talking about

  • Patch@feddit.uk · 7 months ago

    If only the biggest problem was messages starting “I asked ChatGPT and this is what it said:”

    A far bigger problem is people using AI to draft text and then posting it as their own. On social media like this, I can’t count the number of comments I’ve encountered midway through an otherwise normal discussion thread, only clocking two paragraphs in that I was reading a chatbot’s response. I feel like the deception has stolen time and brain cells from me for the moments spent reading and trying to derive meaning from it.

    And just this week I received an application from someone wanting work in my office which was very clearly AI generated. Obviously that person will not be offered any work. If you can’t be bothered to write your own “why I want to work here” cover letter, then I can’t be bothered to work with you.

    • jj4211@lemmy.world · 7 months ago

      I’ve seen emails at work that were AI-generated with no disclaimer. Then someone points out how wildly incorrect one was, and the sender just says “oh whoops, not my fault, I just asked an LLM”. They set things up to take credit if people liked it, and used “LLMs are just stupid” as an excuse when it didn’t fly.

        • nomy@lemmy.zip · 7 months ago

        In every business I’ve worked in, any email longer than a paragraph better have a summary and action items at the end or nobody is going to read it.

        In business time is money, email should be short and to the point.