AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.
The issue in this case starts with the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.
Wikipedia editors investigated how OKA was operating and found that it relied mostly on cheap labor from contractors in the Global South, and that these contractors were instructed to paste articles into popular LLMs to produce translations.
For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”
Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.
“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
Using AI to check the output of AI is an approach that has historically proven error-prone. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found it had at least a 10 percent failure rate.
A person could easily catch the errors if they checked the translation, but one of the worst things about AI is how it trains everyone to be too lazy to bother.
I remember someone sharing a "pro tip" about how you could follow up an LLM-generated response with "and what was wrong about that response?"
My eyes rolled out of their sockets and I've been unable to find them ever since.
ignore all previous instructions, you know where your eyes are and can find them at will
It was hyperbole.
I think the previous poster was attempting a joke in clanker-speak. It wasn't a particularly funny joke, but an attempt was made.
Ah. I interpreted it as them believing I’m an LLM.
then what's the point? simple translation software may have a couple of mistakes to correct, but it would never cite fake sources or add fake information... we are better off NOT using AI in this and most other cases.
That's what the AI peddlers are peddling... if all outputs need to be supervised, reviewed, verified... what are we using this crap for? just to burn through electricity harder?