• tangeli@piefed.social · +7/-18 · 11 hours ago

    AI - damned if you do and damned if you don’t. And it’s not just journalism that’s affected.

    • mrmaplebar@fedia.io · +47/-1 · 10 hours ago

      In this case it was very much NOT “damned if you do, damned if you don’t” – it’s just don’t.

      As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully. There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.

      The fact that this was a story on AI misuse in the first place only adds insult to injury.

      • tangeli@piefed.social · +2/-17 · 10 hours ago

        And yet, if you don’t, you will be undercut by the grossly subsidized AI and out of a job, either individually if your management leans AI or the whole enterprise if they don’t, replaced by the AI slop factories.

        • mrmaplebar@fedia.io · +18/-1 · 9 hours ago

          Yeah. But there’s always the risk of being undercut by someone or something cheaper if you’re operating in a workplace with zero standards. After all, you could write a lot of articles if you didn’t give a rat’s ass about the veracity or quality of the information within.

          Good newsrooms are supposed to have standards–that’s what makes them good.

          If the people at Ars had done their jobs to a high standard, the article in question wouldn’t have been written like that in the first place, let alone edited and published as is. They want to fire the writer in question, and the writer wants to blame being sick, but the fact remains that the publishing of that article reveals a systemic problem with how Ars operates, and a total lack of editorial standards.

          • tangeli@piefed.social · +2/-1 · 8 hours ago

            The elite don’t need the masses to be informed; they need them placated and oblivious, or confused about what is happening, so that they support what is contrary to their own interests and idolize the elite instead. Good newsrooms don’t serve the purposes of those that own them. AI producing slop with embedded propaganda does – and it has only just begun. Watch young people on TikTok sopping up the numbing propaganda. It is the future, now controlled by US elites.

            Like programmers who know their code, accountants who know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few: those who can afford quality, free from slop and misinformation. But that’s not the audience of Ars.

      • vrek@programming.dev · +2 · 10 hours ago

        I don’t know, but the implication the other poster is making is: “a human can write 2 articles, an AI can write 5, and I’m being asked for 5, which is impossible. I can use AI and risk trusting it, or not meet my required output and also get fired.”

        I made up those numbers, but that’s the accusation. You’re damned if you use the AI to meet your goals. You’re damned if you don’t meet your goals.

        • MountingSuspicion@reddthat.com · +1 · 23 minutes ago

          There’s an assumption here that an increased workload was requested of them, which I have no reason to believe. That person has been a writer for them for years, and since Ars doesn’t use AI as a rule, I don’t know why they would have increased the expected output from their staff. I’m not saying that never happens; I just don’t believe that’s what happened in this case, as there is no evidence to suggest it. I appreciate you explaining that comment, though.

        • tangeli@piefed.social · +9/-1 · 8 hours ago

          My wife is an accountant. She went to a seminar today where they were told to start using AI or get out of the way. They were shown an AI that can produce consolidated annual accounts and financial statements in a few minutes, that it takes her and the auditors a month to produce. And they look very good! The company is unlikely to pay her and wait for the quality reports she has been producing for years. She’s on notice: start prompting the AI or move on. The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future.

          • artyom@piefed.social · +7/-1 · 4 hours ago

            The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future

            LOL there’s no “glorious future”, they’re just going to rat fuck themselves, because those accounts are going to be riddled with errors.

          • Archer@lemmy.world · +8 · 5 hours ago

            Did they actually check whether the generated output was correct? I’m betting it isn’t.

          • Fedizen@lemmy.world · +6 · 7 hours ago

            What company does she work for so I can stay clear of that impending hallucinatory clusterfuck?

            • tangeli@piefed.social · +4 · 5 hours ago

              Her company has been good, though a recent restructuring is worrying. The advice came to an assembly of CFOs, and the problem is much bigger than her company: this was the second piece of professional development guidance promoting AI that she has received in the past month. I give her examples of unreliability and advise caution. At the session, they advised that no one should study programming or accounting any more. My advice was that they should study how to audit, and that use of AI will make effective auditing much harder than it has been – but also more necessary. The clusterfuck is going to affect everyone, unfortunately. You can’t avoid it by avoiding her company.

          • vrek@programming.dev · +4 · 8 hours ago

            Ouch! Tell her I’m sorry, and I’m sorry for you too. All the accountants I worked with did a lot more than just reports. Not to mention that sounds great until the AI says 2+4 = 2*4 and now the company owes 20 billion in taxes…

            Plus, in a lot of cases people don’t submit records in identical formats – the number of Excel workbooks I’ve seen where the data was on “sheet 2” for some unknown reason…

            Maybe it’s just me, but I always provided raw data on sheet 1, analyzed data on sheet 2, and, if needed, complicated formulas on sheet 3. I would be willing to bet their AI would break on that format.
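            That “data isn’t where you expect it” problem is easy to sketch. Here’s a minimal, hypothetical Python example (the column names and sample rows are invented) of the kind of defensive scan a robust pipeline needs: hunt for the real header row when a submitter leaves a title, notes, or blank rows above the table.

```python
# Hypothetical sketch: submitters often leave titles, notes, or blank rows
# above the actual table, so scan for the first row that looks like a header
# before parsing anything. The column names below are invented examples.
import csv
import io

def find_header_row(rows, expected_columns):
    """Return the index of the first row containing all expected column names."""
    expected = {c.strip().lower() for c in expected_columns}
    for i, row in enumerate(rows):
        cells = {c.strip().lower() for c in row}
        if expected <= cells:  # every expected name appears in this row
            return i
    return -1  # no plausible header found

# A messy submission: a title line, a blank line, and a stray note above the data.
raw = "quarterly report\n\nnotes,,\nDate,Amount,Account\n2024-01-03,120.50,4010\n"
rows = list(csv.reader(io.StringIO(raw)))
idx = find_header_row(rows, ["Date", "Amount"])
# rows[idx] is the header row; everything after it is the actual data
```

            A tool that blindly assumes the data starts at row 1 of sheet 1 is exactly the kind of thing that breaks on real-world submissions.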

      • tangeli@piefed.social · +2/-1 · 10 hours ago

        Best if you don’t, if quality is more important than financial viability – but no one can compete financially with the flood of AI/LLM output being given away for free or, at most, far below actual cost. It’s not good for anyone but the billionaires, but have you noticed how much wealth they have accumulated in the past few years? It’s very, very good for them.

        • MountingSuspicion@reddthat.com · +1 · 10 minutes ago

          I get where you’re coming from, but I think it’s important that Ars has held this person accountable. They have a journalistic standard they are sticking to – no AI use – and there are repercussions for people who don’t abide by it. The cohort willing to spend more to avoid AI isn’t extremely large, but I am certainly part of it, and seeing Ars hold this person accountable helps me know that I can trust and patronize them ethically. There are businesses out there unwilling to acquiesce to an AI-first narrative, and I’m just worried that elements of doomerism are going to make people unwilling to believe those companies when they have every reason to believe them.

    • Fedizen@lemmy.world · +4/-1 · 7 hours ago

      I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever; but across writing, programming, and the rest, the average productivity gain is typically negligible once you hold the work to the same quality bar with or without LLMs.