After reading so much about ChatGPT recently - all the promises and hype, the warnings and tales of imminent doom - I thought it was time to test-drive it for myself. In the interest of full disclosure, I admit that I had low expectations to begin with (having read some of what it had ‘written’ already), but it still managed to disappoint me.
I asked the chatbot a simple question: ‘Why was Kitagawa Utamaro imprisoned?’ Here is the chatbot's answer:
'Kitagawa Utamaro, a famous Japanese ukiyo-e artist of the late Edo period, was imprisoned for a period of time in 1796 due to accusations of producing "obscene" material. During this time in Japanese history, the government had strict censorship laws regarding the production of art and literature, and Utamaro was charged with violating these laws through the creation of shunga, or erotic prints. While Utamaro's work was highly sought after by collectors, it was also controversial and caused a significant public scandal at the time. It's worth noting that the obscenity laws during this time period were used as a tool of political control and suppression, and many artists and writers were targeted and punished for their work.'
As you can see, the chatbot’s answer is human-sounding. But it’s also completely incorrect. Oh, it’s wrapped up in fluff that makes it sound like it could be correct, like the ‘writer’ knows what it’s talking about, but it’s wrong all the same. Both the dating of the punishment and the reason for it are incorrect. Utamaro’s offence related to the naming of historical figures on printed works, and it took place in 1804 or 1805. So, I challenged the bot, and it confirmed that its response was incorrect.
I started a new chat session and asked my original question again. In response, I got three incredibly flat, fluff-filled paragraphs that I will not inflict upon you here. The bot repeated the claim about erotic prints but this time added that the artist ‘was arrested and imprisoned for several months in 1797’. Again, I challenged the chatbot. Again, it confirmed that its response was incorrect.
I made several more attempts to get a response that was even remotely accurate. For my trouble, I received various rewordings of the same erroneous information, written with varying degrees of dullness. Finally, in contradiction to everything it had written previously, the bot informed me that there is in fact ‘no credible historical evidence that Kitagawa Utamaro spent several months in prison… there is no record of him having been imprisoned’. And then it confirmed that this too was incorrect.
I asked it to cite its sources. It failed to. When I insisted, it eventually (I thought it had gone to sleep) referred me to a book about Japanese art by an entirely invented writer by the name of Alexander Stuart Dru. When I challenged the bot, it confirmed that no such writer ever existed and that the information it had given me was therefore unreliable. That much I had worked out for myself. It had already become clear that the chatbot was just making it up as it went along. The more questions I asked, the more it sounded like one of those guys who thinks he’s an expert in a subject because he vaguely skim-read Wikipedia one time. If an almost-credible-sounding idiot is what you’re aiming at, ChatGPT is probably just what you’re looking for.
I asked the bot how it picks and chooses data to use in a response when it encounters two pieces of information that are contradictory. It replied with a list of factors it uses, none of which it appeared to have used when answering the questions I asked, then added:
‘It is important to note that despite my training, I may still make mistakes in resolving contradictions and providing incorrect information, *especially in situations where there is conflicting information from credible sources, or if the information is complex or poorly understood*.’
I italicised that last part, because, from my perspective, that’s the bit that causes the trouble with ChatGPT when the factual element of a statement is of actual importance (there are issues with its punctuation too, but that’s a topic for another day). It regurgitates mistakes, inconsistencies and, in some cases, absolute nonsense, because that’s what it was fed during ‘training’. It has no concept of what is true or false, and it is incredibly bad at weighing pieces of information to decide which is accurate. If you have the knowledge to discern when the bot is making stuff up, you can challenge it, and it will quite readily admit that its responses are not accurate. I researched Kitagawa Utamaro’s life and works for more than two decades, so I knew it was talking nonsense when it replied to my questions. But what if you don’t possess that knowledge? I mean, if you’re asking a question because you don’t already have the answer, how will you differentiate between fact and fiction? Will you cite old Alexander Stuart Dru, the fictitious expert in Japanese prints, and make yourself look an absolute fool?
After reading so much hype about its copywriting abilities, I asked it to write the blurb for a few published books. It spewed out several misleading ‘roll up, roll up, lookee here!’ sales pitches that I wouldn’t send out if my life depended on it. It described a satirical novel about an elderly bully as the story of a man who, on his ‘40th birthday, begins to question everything he has ever known and sought after [and] embarks on a journey of self-discovery’, and it mistook The Room Opposite, Flora Mayor’s collection of short stories, for a novel about ‘a young woman who is haunted by her past and is in desperate need of a fresh start’. I have no idea where it gleaned that nonsense from; presumably it used one of Alexander Stuart Dru’s numerous books on literature.
What the chatbot can do is generate misinformation very quickly. During our ‘dialogues’, it argued that the Earth is flat, that Covid vaccines are dangerous, and it even explained why Donald J. Trump is a genius; apparently, ‘His speeches, tweets, and negotiating skills are the stuff of legend’. None of the bot’s arguments were convincing to someone who knows this is all nonsense, but there are people out there who are willing to believe this sort of stuff. And let’s face it, there’s enough misinformation out there already; the last thing we need is a bot that can produce it ten times faster than a human being, giving us a heck of a lot more of it.
I don’t understand the gushy reactions from people who think ChatGPT is the bee’s knees. And I don’t understand how anyone who’s used it could suggest that it’s about to replace human copywriters. If the standard of writing (and accuracy of information) I received is the best it can do - if its ‘best’ is the basis for all of the claims being made about it being as good as, nay, better than human writers - I can only assume that the expectations people have of human writers are incredibly low. That’s a sad thought.
Not wishing to end on a sad thought, I will say that, despite having said all of the above, ChatGPT is an interesting toy to play with for a while. I found that it slowed considerably at times, was repetitive, often didn’t answer questions directly, had a tendency to pad responses with irrelevant information, and that its prose was consistently flat and boring. But it was diverting for an hour or so. I got it to write an obituary for my fictional Aunt Maud, who frightened small children, cheated on her husband fifteen times and was once arrested for attempting to murder a stuffed squirrel. It made me laugh, so there is that.