

Have seen emails at work that were AI generated, but with no disclaimer. Then someone points out how wildly incorrect one was and they just say “oh whoops, not my fault, I just asked an LLM”. They set things up to take credit if people liked it, and used “LLMs are just stupid” as the excuse when it didn’t fly.
Recently someone lamented that just asking for an alarm to be set cost them tons of money and didn’t even work right…
It was foolish enough to let an LLM go to town on automation, but for open-ended scenarios I can at least see the logic, even if it was stupidly optimistic.
But implementing an alarm? There isn’t even any rationality behind these people’s enthusiasm…