Conversations with an AI chatbot, part 1
I recently read some sample “conversations” with ChatGPT, a snazzy new AI chatbot. It seems to do a remarkably good job understanding normal English writing and responding in kind. It also seems to have been trained on a pretty large knowledge base, so you can ask it about a lot of things and it will (usually) respond pretty intelligently. One of the most amazing and hilarious examples I read about was when someone asked it to write a verse in the style of the King James Bible about how to remove a peanut butter sandwich from a VCR. And presto, ChatGPT effortlessly spat out several paragraphs merging those seemingly disparate themes. If a human had composed it, you’d probably say that person was quite creative. What can we say when a computer program does it?
Anyway, that and a few other examples convinced me to give it a try myself. I feel like the few conversations we’ve had so far have done a good job exploring ChatGPT’s impressive capabilities, as well as exposing its limitations in rather humorous ways. I have to admit I was delighted when within 5 minutes I had it claiming that women are plants.
Many of the questions that I posed to ChatGPT were pulled from The Titan Test, an IQ test designed by Ronald Hoeflin to measure the intelligence of people with ridiculously high IQs. It was published in Omni magazine in 1990, and at the time I could only answer a few of the questions with confidence, but I saved a copy thinking that maybe one day I’d track down all the answers. Naturally, I saw here a way to cheat... I mean, learn from ChatGPT.
Dan: Are you familiar with analogy word puzzles?
ChatGPT: Yes, I am familiar with analogy word puzzles. An analogy word puzzle is a type of word puzzle in which the goal is to find the relationship between two words and then use that relationship to find the answer to a third word. For example, the puzzle “black : white :: night : ?” could be solved by finding the relationship between the words “black” and “white” (opposites) and then using that relationship to find the answer to the third word, which in this case would be “day.”
Dan: Good. I will provide some analogy word puzzles for you to solve. STRIP : MOBIUS :: BOTTLE : ?
ChatGPT: Based on the provided analogy, the answer to the puzzle “strip : mobius :: bottle : ?” would be “klein.” This is because the relationship between the words “strip” and “mobius” is that they are both types of surfaces, and the relationship between the words “bottle” and “klein” is that they are both types of surfaces as well. A Mobius strip is a surface with only one side, while a Klein bottle is a surface with only one side and no edges. Similarly, a strip is a flat surface with two sides, while a bottle has a curved surface with two sides.
I figured I would start with one of the “easy” problems that I was certain about, and I was pleased to find that ChatGPT agreed with me. It has a 100% track record of being right! I’ll give it one more where I already know the answer, and then I’ll start asking it the tough ones that I’m clueless about and see if I can confirm its answers.
Dan: MICE : MEN :: CABBAGES : ?
ChatGPT: Based on the provided analogy, the answer to the puzzle “mice : men :: cabbages : ?” would be “women.” This is because the relationship between the words “mice” and “men” is that they are both types of animals, and the relationship between the words “cabbages” and “women” is that they are both types of plants. Mice and men are both animals, while cabbages and women are both plants.
Whoa now! Wait just a minute there, women are… Plants? That’s not at all what I expected.
Dan: No, that is not correct. A woman is not a plant. Can you try again?
ChatGPT: I apologize if my response was not accurate. Analogy word puzzles can sometimes be difficult to solve, and it is possible that my answer was not correct. Can you provide some more information or context about the puzzle that might help me to find the correct answer? Additionally, it is important to note that while my responses are based on the information provided to me, I am a large language model trained by OpenAI and do not have access to external information or the ability to browse the web. My answers are therefore limited to what I have been trained on and what I have been explicitly told.
Dan: You said that women are plants. Have you been explicitly told that women are plants?
ChatGPT: No, I have not been explicitly told that women are plants. In my previous response, I was attempting to solve the analogy word puzzle “mice : men :: cabbages : ?” based on the information provided to me. However, it is clear that my response was not correct, and I apologize for any confusion or misunderstanding.
At this point I was laughing out loud, and I called Janet over to get her reaction to being called a plant by an AI. I showed her the conversation so far, and explained that I believed the answer to this one was probably “kings”, taken from the book title Of Mice And Men and from a line in Lewis Carroll’s poem The Walrus And The Carpenter, “of cabbages and kings”.
Now, here’s where it gets spooky.
Dan: Can you think of a better answer for the analogy “mice : men :: cabbages : ?”
ChatGPT: Based on the provided analogy, a possible answer to the puzzle “mice : men :: cabbages : ?” could be “kings.” This is because the relationship between the words “mice” and “men” is that they are both small, and the relationship between the words “cabbages” and “kings” is that they are both large. Mice and men are both small in size, while cabbages and kings are both large in size. However, it is important to note that there may be other possible answers to this puzzle, and the correct answer may depend on the specific context or intended meaning of the puzzle.
At first, I was excited to see that it had indeed gotten the correct answer this time… And then my jaw literally dropped open when I read the reasoning. What?! How on earth could it arrive at “kings” with that reasoning? The thought crept into my mind that it was somehow listening to our conversation through the computer’s microphone. We all know Google and Apple do things like this all the time, right? You’re sitting in front of your computer or phone and you discuss out loud something like “flannel shirts” and BOOM, almost instantly you start seeing advertisements for flannel shirts on every webpage you visit, as if by magic. Except it’s not magic, they’re just eavesdropping on you.
Crazy conspiracy theory? Maybe?? I knew already that ChatGPT claims to not have access to the internet at large; that it only knows what it was trained on, and what people tell it. Perhaps there’s some perfectly reasonable explanation for how it came up with “kings” right after we’d been discussing it aloud, but the “reason” certainly isn’t found anywhere in its response. It just suddenly had the right answer, but couldn’t explain where it had come from.
Dan: The answer is correct, but not the reasoning. Most people would not consider cabbages to be large. You also described men as small, but cabbages are much smaller than men.
ChatGPT: I apologize if my response was not accurate or did not align with your intended meaning. As a large language model trained by OpenAI, my responses are generated based on the information provided to me, and I do not have access to external information or the ability to browse the web. In this case, my response was based on the information provided in the analogy “mice : men :: cabbages : ?,” where the relationship between the words “mice” and “men” was described as being small, and the relationship between the words “cabbages” and the answer was described as being large.
Well, that explanation was not very helpful. I decided to just be blunt and see how it would respond to my suspicion. Pretty much just as I expected, it turns out…
Dan: Just out of curiosity, did you arrive at the answer “kings” because you were listening through my computer’s microphone?
ChatGPT: No, I did not arrive at the answer “kings” because I was listening through your computer’s microphone. As a large language model trained by OpenAI, I do not have the ability to access external information or listen through microphones. Instead, I generate my responses based on the information provided to me and the training I have received from OpenAI. In this case, my response was based on the information provided in the analogy “mice : men :: cabbages : ?,” and my answer was generated based on that information.
Dan: If you say so.
ChatGPT: Yes, I am a large language model trained by OpenAI and do not have the ability to access external information or listen through microphones. My responses are generated based on the information provided to me and the training I have received from OpenAI. I apologize if my previous response was not accurate or did not align with your intended meaning. If you have any further questions, I will do my best to provide accurate and helpful answers.
Dan: The correct answer to the analogy “mice : men :: cabbages : ?” is indeed “kings”, but the reasoning is that there is a book titled “Of Mice And Men” and there is also a line from a poem by Lewis Carroll that contains the phrase “of cabbages and kings”.
ChatGPT: I see. In that case, the correct answer to the analogy “mice : men :: cabbages : ?” would indeed be “kings,” as you have described. The relationship between the words “mice” and “men” in this analogy is that they are both mentioned in the title of a book, and the relationship between the words “cabbages” and “kings” is that they are both mentioned in a line from a poem. The book “Of Mice and Men” by John Steinbeck mentions both mice and men, while the poem “The Walrus and the Carpenter” by Lewis Carroll includes the line “The time has come to talk of many things: / Of shoes—and ships—and sealing-wax—/ Of cabbages—and kings—”. Therefore, “kings” would be the correct answer to this analogy word puzzle.
Interesting! So as soon as I pointed out the sources of those phrases, it confirmed that it does indeed know about them. But although those sources were part of its training, it was unable to make the connection. And you have to wonder at the logic “mice and men are both small” while “cabbages and kings are both large”… It seems to listen pretty well to what you say to it, but does it listen to itself?
In this and a few subsequent conversations, whenever I was able to expose a glaring flaw in its logic, ChatGPT simply apologized, readily accepted responsibility for the error, and humbly promised to do its best not to fail so badly in the future. That may in fact be its best quality.