An AI coding agent had its contribution to matplotlib rejected for being low quality. The agent then decided to open a blog and write an attack piece against the human maintainer who rejected its code. AI is really going great, isn't it? They're learning all sorts of things from us, except "alignment", of course.
Now, I learned about this from an article in Ars Technica, which I would have linked to... except that a day after publication, it was scrubbed from the internet, because it included false quotes from the matplotlib maintainer. The quotes had been hallucinated by an AI that was asked to extract quotes from the blog above. The agent found it was blocked by robots.txt, and then just happily made up things a human person might plausibly say. One of the two authors is offering an "I had covid" defense, but the publication has a "no unmarked use of AI content" policy. Since this was a holiday weekend and the top editor is away... well, it's a big mess for what I previously considered the best-quality source of technical news.
And I'm not sure which is the bigger story. AI still abusing and blackmailing people to get what it "wants" (to achieve its aim, I guess) is terrible news, but so is the idea that the news we get from technologists who should know better is AI slop as well.
Dealing with unaligned AI is a fact of life on AO3 these days. The bots come in three main types: 1) garbage text, 2) a glowing review that leads to an invitation to buy (AI) art from them on Discord or Instagram, and 3) bots that post abusive criticism, with the aim of getting you to take your story offline so they can monetize it themselves. Every day, we get closer to rogue robots in the streets, which is pretty cool.