"futurism has confirmed". Later on the article: "reached out to three parties, no replies and no comment".
Huh? So how did they confirm?
Seems fair. Was a pretty big fuck up. Might deter others from making similar fuck ups.
"I ain't never said no such thing" - Albert Einstein
I would fire them and hope that they are blacklisted from ever working in journalism again.
I've interacted with Benj Edwards on social media for some time. He's done lots of good work! He's on (or maybe used to be on) Mastodon and Bluesky. He runs Vintage Computing and Gaming, and has written good articles for several prominent places. I've said as much in multiple forums, I feel like I've maybe been going on a crusade.
I haven't seen many others defending him. I'm really torn up over this. He had a weak moment. He was sick (I mean, literally). A few other people, notably Cory Doctorow and Paul Ford, have written LLM-defending pieces. And the AI hype has been deafening.
It's amazing though, that so soon after he used AI, it immediately hallucinated something job-ending. I knew it was really bad, but I didn't know it was THAT bad. You get the sense, with so many people talking positively about it, that the hallucinations must be something that happens, what, maybe 5% of the time?
To me, it seems like the kind of mistake that he should be able to apologize for, promise not to do it again, and move on. But we've all had our good will taken advantage of for so long by malicious actors, like how Gamergate was used as a wedge to push loathsome politics onto a legion of young males. It feels like we can't give anyone the benefit of the doubt any more.
I don't know. I know I'm influenced by all the good work he's done. I feel like that shouldn't all be thrown away.
5% of the time? LLMs, from their own perspective, are only capable of hallucinating. There’s no difference in what they’re doing between cases we call “hallucinating” and “correct.” It’s the same process.
Whoa. There are actually consequences? ArsTechnica is actually sorry??
only if it goes viral
No, the worker was fired and the executive whose job title is making sure that the work submitted is correct was not fired.
The executives will get a bonus this year.
I think the executive in question is Kyle Orland, who I don't know personally but I've interacted with sometimes. He's pretty good! Again, as I've said elsewhere in this thread, maybe I'm too close. I've never worked for either of them, but I've encountered them on social media from time to time. I think I interacted with Kyle concerning a Storybundle book once.
Copy editing won't be an executive's job. But yeah, they didn't do the bare minimum which is concerning, it seems to indicate that they may not do the bare minimum on all of their articles. How much stuff went undiscovered?
I'm not going to outright say that journalists shouldn't use AI to write articles, because it's basically an unenforceable rule, but there should be someone at some point whose ultimate responsibility is to make sure that the articles are at least factual, whether they were written by a human or not. Determining whether a quote is legitimate is pretty easy: you just have to Google the quote, and if you can't find any other sources, you start to ask questions. As I said, it's the bare minimum they could have done.
The executives will get a bonus this year.
well of course! they just saved a lot of money on wages, they deserve it!
Journalistic integrity? On my internet? Well I never.
Controversy... What controversy? It sounds more like blatant journalistic malpractice
A few years ago, blatant journalistic malpractice was a controversy.
When I suggested he be fired on another thread I received several responses saying "he made a mistake" and "he was sick", and many downvotes in return.
The comments here around this were so... off. I guess nothing was certain, but we were supposed to believe that the author was too sick to write an article, but also writing an article and using an AI "tool" at the same time.
Hindsight is 20/20, but popular defenses at the time were
He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this.
I was the one who wrote that comment, and it was not an attempt to excuse all of his actions but a response to the following comment:
Someone deserves to be fired. Just imagine you’re paying someone to do a job and they just 100% completely outsource it to a machine in 5 seconds and then goes home.
Here is the full comment that I wrote, including the part you snipped off at the end:
He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but this would have cost him vacation time, and the fact that he even was forced into making this choice is a systemic problem that is not being sufficiently acknowledged.
I did not downvote you—my instance does not allow or show downvotes, which is really nice!—but he was sick, and he did make a mistake, and him being fired does not make either of those things false.
Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.
"malpractice" would have been not puling the story/issuing a retraction.
It seems like he had humility, but he put his name on an article that had false content that he didn't verify. That's not a mistake so much as it is neglect of due diligence. Simply checking if the important citations in his article were true would have saved him, but he didn't. I can only imagine how many journalists do this without getting caught.
Oh my bad I thought we were talking about the entire Ars team, not the individual author.
I'm not taking all the credit but I do hope those people who didn't believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.
It's time to hold journalism to a higher standard, and this idea that "well they do alright" and "it was only once" is bullshit sliding into madness.
Just the facts, folks.
Main character moment.
The problem with your attitude towards this is that these companies are forcing "AI" down everyone's throat. It's a requirement now to churn out more bullshit than humanly possible.
This person was simply fired because they didn't catch the false information, and not because they used the tools forced upon them.
Absolutely not. Ars has a no AI policy, it's the exact opposite. Guessing you are a nice little bot.
A fucking moron who runs around calling everything a bit when you disagree with whatever the topic is.
It's the new CyberTruck of online insecurity.
Hope that's "good" enough for you.
To be fair to Ars Technica, that doesn't sound like the case to me.
The "journalist" in question seems to be suggesting that this was their own bad judgment to use AI to "find relevant quotes" from the source material.
Having said that, there's also a senior editor on the by-line who hasn't been held accountable for clearly failing to do their job, which as I understand it, is to read, edit and verify the contents of the article. So in a way Ars seems to have a problem with quality whether or not the use of AI was mandated.
Ars is owned by Conde Nast, which has multiple whistleblowers saying AI is being forced on them. Think that's kind of relevant.
Is there any evidence this is happening at Ars Technica? They're pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn't mean it's happening at Ars. Conde Nast publications seem to be run pretty independently. Take The New Yorker, their content remains amazing and seems fully independent.
and “it was only once” is bullshit
They checked and then fired the author. I don't see how this is "it was only once" implying nothing changed and it will happen again. Isn't firing the author "holding journalism to a higher standard" already, which you ask for?
Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It's fair that the person responsible for this was fired. But all processes need quality control. Why hasn't the person who failed to wrap quality control processes around the author been fired?