Thursday 23 February 2023

ChatGPT Is Set to Get a Heck of a Lot More Stupid

If you read my previous post about ChatGPT, you’ll know that I tried it out and wasn’t at all impressed. In response to my questions, it repeatedly churned out incorrect information, then, after I insisted that it cite its sources, referred me to an author who never existed. 

Perhaps I am a sucker for punishment, but I went back in today. This time, I approached things from a different angle. Instead of asking a general question about the imprisonment of the Japanese artist Kitagawa Utamaro (which is what I did last time), I asked about the opinion of a particular expert on the subject:

[Screenshot of my exchange with ChatGPT]
Sounds reasonable, right? Well, whilst I can't claim the ability to recall every word the late (and very sorely missed) Jack Hillier ever wrote on the subject, he was my mentor; I have read his books too many times to count, and I am pretty familiar with his opinions. I knew that the bot's response was incorrect. I asked it to confirm that the quote was by Hillier; it did so. It confirmed that it was a direct quote, and it repeated the title of the book, its publication date and the page where the quote was located. There was a bit of going back and forth, then I challenged it, and I received this response:

In my quest to understand how ChatGPT comes up with this tripe, I prepared to go back down the rabbit hole and into the dreamlike, nonsensical world that it inhabits. Initial attempts to get it to tell me where the 'information' it spouted came from failed completely. For a bot designed to answer questions, it does a good job of avoiding doing so. It can't know everything, it informed me, and searching the Internet to find out where the quote came from is, according to the bot itself, beyond its capabilities. Perhaps I should have offered to google it?

Anyway, the quote had to have come from somewhere; it didn't just manifest out of thin air (okay, we're talking ChatGPT here, so it might have done). So, I persevered:

[Screenshot of my exchange with ChatGPT]
In no time at all, the bot managed to go from presenting information to me as a fact (several times) to suggesting said information could have been invented. Yes indeed, I have to give it its due; it certainly is fast. It can create nonsense a lot faster than I can. 

As the dialogue continued, I felt very much that I was being sent around in circles (which is what happened last time). And the bot's repeated form responses about its programming and algorithms did nothing to explain why it wasn't able to tell me where the quote came from. But I continued (at great risk to my sanity), and eventually the bot explained that its error was due to 'a misinterpretation of the information provided' to it.

It was a straw, and I clutched at it. If there was information provided, there had to be an information provider. And, after some probing, the bot revealed the source of its wisdom: a previous user.

They read it in an online forum, no less! Well, fan my brow. Now I can see why writers are so worried that ChatGPT will steal all our jobs. With its incredibly impressive ability to pass on misinformation heard in an online forum - or whispered into its ear by Aunt Maud - it will be utterly unstoppable.

To finish off, I informed the bot of something I'd known all along:

[Screenshot of my exchange with ChatGPT]
As I said in my last post about ChatGPT, there's enough misinformation out there already; we don't need more of the stuff. I keep on hearing about how the bot will just get smarter and smarter as it learns and adapts. Well, until we humans start posting information in online forums that's considerably more factually accurate, ChatGPT is set to get a heck of a lot more stupid.