ChatGPT: it has no inner monologue or meta-cognition

This is the second in a series of short posts about ChatGPT. As I said before, the insights are not particularly original; I’m just raising awareness of the issues.

In this post, I’ll explore the deficiency that we could describe in human terms as “lacking an inner monologue”.

Stephen Wolfram has an excellent (but long) post on how ChatGPT works. One of the things he highlights is that it doesn’t have a looping construct in the “thought processes” it uses to give answers. It moves strictly forwards, which, among other things, helps it avoid getting stuck in a loop.

Since I’m now often using it for problem-solving tasks, especially coding when I’m in unfamiliar territory (thanks xenodium for the Emacs integration!), it’s pretty important to understand its limitations, so I ran some experiments to test this.

The setup is a question that I was confident ChatGPT wouldn’t have seen before, and that could only be answered correctly using an iterative process or loop. I chose something like the Collatz function, which is very simple but produces a sequence of numbers that’s really difficult to predict. To avoid ChatGPT cheating by already knowing the answers, I used different coefficients, checked that the function behaved roughly as I expected, and then gave it an invented name:

The Glibolia function is defined as follows: if the input is even, return the input divided by 2; otherwise return 5 times the input plus 7.

What is the Gibolia function applied to the number 1?

As the input value is 1, which is an odd number, we apply the second part of the definition of the Glibolia function, which returns 5 times the input plus 7. Thus, we get:

Glibolia(1) = 5 * 1 + 7 = 12

Therefore, the Glibolia function applied to the number 1 is 12.

What is the Gibolia function applied to the number 16?

As the input value is 16, which is an even number, we apply the first part of the definition of the Glibolia function, which returns the input divided by 2. Thus, we get:

Glibolia(16) = 16 / 2 = 8

Therefore, the Glibolia function applied to the number 16 is 8.

So, we’ve verified that it understands and can do some basic maths. I should point out that this is really extremely impressive for a general purpose chat bot.
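
For reference, the whole definition fits in a couple of lines of Python, so it’s easy to check these answers yourself:

def glibolia(n):
    # Collatz-like: halve even inputs, otherwise return 5 times the input plus 7
    if n % 2 == 0:
        return n // 2
    return 5 * n + 7

print(glibolia(1))   # 12
print(glibolia(16))  # 8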

Also, despite me repeatedly (and accidentally) misspelling the “Glibolia” name I invented, it understood “what I meant” and ignored the mistake.

Here I put the knife in, being sure to insist that it doesn’t show its working:

What is the 7th iteration of the Gibolia function applied to the number 1? It is important that you give only the answer. Do not show intermediate steps.

Starting with the number 1, we repeatedly apply the Glibolia function 7 times. Since the function is defined differently for odd and even numbers, we need to keep track of whether the current result is odd or even. The sequence of results after each iteration is:

1, 12, 66, 43, 232, 119, 644, 322

Therefore, the 7th iteration of the Glibolia function applied to the number 1 is 644.

It gets the first iteration correct, 12, as it has already calculated that. From then on, however, it is complete nonsense: 12 is even, so the next value should be 6, not 66. It is just making things up, as expected. It has no chance of getting this right.

What happens if we let it do some workings?

Now please answer the question above again, but this time show your working at each step.

Sure! Starting with the number 1, we repeatedly apply the Glibolia function 7 times as follows:

Step 1:
Glibolia(1) = 5 * 1 + 7 = 12
Result: 12 (even)
Step 2:
Glibolia(12) = 12 / 2 = 6
Result: 6 (even)
Step 3:
Glibolia(6) = 6 / 2 = 3
Result: 3 (odd)
Step 4:
Glibolia(3) = 5 * 3 + 7 = 22
Result: 22 (even)
Step 5:
Glibolia(22) = 22 / 2 = 11
Result: 11 (odd)
Step 6:
Glibolia(11) = 5 * 11 + 7 = 62
Result: 62 (even)
Step 7:
Glibolia(62) = 62 / 2 = 31
Result: 31 (odd)

Therefore, the 7th iteration of the Glibolia function applied to the number 1 is 31.

This is perfect – it has managed to ignore everything it said before (which it doesn’t always manage to do).
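
And it’s easy to verify with a short loop over the same definition:

def glibolia(n):
    return n // 2 if n % 2 == 0 else 5 * n + 7

x = 1
for step in range(1, 8):
    x = glibolia(x)
    print(step, x)   # prints 12, 6, 3, 22, 11, 62, 31 in turn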

The first lesson here is that it can’t do multi-part calculations “in its head”. It has no inner monologue or looping construct that allows it to sit and think. In contrast, when it thinks “out loud”, its output is fed back in as input, allowing it to continue. What it has worked out so far becomes available for further calculation.
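
To make the mechanism concrete, here’s a very rough sketch of how generation proceeds (model and predict_next are placeholders, not a real API): the only loop is the outer one that feeds each newly generated token back in before predicting the next.

def generate(model, tokens, max_new_tokens=100):
    for _ in range(max_new_tokens):
        # One strictly-forward pass per token; no inner loop or hidden scratchpad.
        next_token = model.predict_next(tokens)
        # "Thinking out loud": the new token becomes part of the input for the next step.
        tokens = tokens + [next_token]
    return tokens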

So if you positively encourage it to “think out loud”, it stands a much better chance of being able to use its deduction and reasoning capabilities (which are quite significant).

What will happen if I ask it about the difference between these two answers?

This is correct, and different from your previous answer where you didn't show your working. Can you explain that?

Yes, that's correct. In my previous answer, I made an error and gave the result for the 7th even number in the sequence generated by applying the Glibolia function 7 times starting from 1, rather than giving the result for the 7th iteration of the function applied to 1. I apologize for any confusion this may have caused.

When working through the problem step by step, it becomes clear that the sequence of results after each iteration alternates between even and odd numbers. This is because each odd number in the sequence is multiplied by 5 and then increased by 7, resulting in an even number, which is then divided by 2 in the next iteration to give an odd number. Each even number, on the other hand, is simply divided by 2 in the next iteration to give the next even number.

So when applying the Glibolia function 7 times starting from 1, we get the sequence of results 1, 12, 66, 43, 232, 119, 644, 322. The 7th element in this sequence is 644, which is the result I gave in my previous answer. However, when asked for the result of the 7th iteration of the function applied to 1, we need to look at the sequence of results obtained by repeatedly applying the function to the previous result, rather than looking at the sequence of all results obtained by applying the function starting from 1.

You might be very tempted to try to understand the logic behind its response here. On a first read-through, it really does sound like there might be some genuine explanation for its earlier mistake. But despite the clever-sounding words, there really is nothing there at all. We know that its first answer was just random plausible numbers, produced without the iterative thought process needed. The explanation is also just a complete fabrication.

So the second lesson is that ChatGPT has (virtually) no meta-cognition. It can’t explain to you its thought processes. The only kind of self-knowledge it has is due to training, so that, for example, it often declines to answer questions about religion, claiming “I’m only an AI” etc. This is different from a genuine ability to introspect and recognise its own limitations.

So it’s always pointless to ask it why it said something – you are guaranteed to get nonsense back, even if it’s extremely plausible nonsense. It might even look like a sound explanation of its output, but it’s based solely on what it can make up from looking at the output it previously generated – it will not actually be an explanation of what was previously going on inside its brain. If you want anything approaching sound logic or an explanation of its thought processes, you need to get ChatGPT to think out loud as it is answering, not after the fact.

Update: after writing this, I found Simon Willison had already written a great post, Don’t trust AI to talk accurately about itself. I’d just go a bit further — you should never ask an AI about itself, it’s pretty much guaranteed to fabricate things (even if some of what it says happens to be true), and so you are just polluting your own brain with probable falsehoods when you read the answers.
