9 Comments
Aug 20, 2023 · Liked by Kevin Dorst

Echoing the other comments: I don't think it's accurate to say that GPT-4 is "optimised" for probabilistic reasoning. It's optimised for next-token prediction, and then fine-tuned with RLHF toward responses that humans like or that fit OpenAI's guidelines. Even if part of that fine-tuning was on reasoning problems with reliable solutions, the underlying next-token-prediction training will still be significantly shaping the model's responses. There's a good chance these "biases" are simply mimicking human behaviour.
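For concreteness, here is a minimal sketch (PyTorch, with `model` as a placeholder interface rather than any real GPT implementation) of the next-token prediction objective being described: it rewards matching the distribution of the training text, not probabilistic correctness as such.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """token_ids: (batch, seq_len) tensor of integer token ids from training text."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)  # (batch, seq_len - 1, vocab_size); placeholder model interface
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten all positions
        targets.reshape(-1),                  # each target is simply "the next token humans wrote"
    )
```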

I appreciate and admire all your work! But I don't think this idea holds up.

Aug 23, 2023 · Liked by Kevin Dorst

Bard had some thoughts: https://pastebin.com/2PtVAaGt


Nice post!

I've been thinking about running, in my classes, some of the experiments that purport to show irrationality in auction participants, e.g., the first two experiments here: https://veconlab.econ.virginia.edu/auctions.php. It's reasonably well known how humans actually behave in these cases, and how that differs from textbook rational behavior. I wonder if there's a way to test GPT against it, or whether getting it to participate in a group setting, like an auction, would be too tricky.
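One possible way to wire that up, as a rough sketch: drop a GPT "bidder" into a simulated first-price sealed-bid auction alongside scripted opponents who bid the risk-neutral Nash fraction. This assumes the OpenAI Python client; the model name, prompt wording, and answer parsing are all placeholder assumptions, not a tested protocol.

```python
import random, re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def gpt_bid(private_value, n_opponents):
    prompt = (
        f"You are bidding in a first-price sealed-bid auction with {n_opponents} other bidders. "
        f"Values are drawn uniformly between $0 and $100. "
        f"Your private value for the item is ${private_value:.2f}. "
        "Reply with only your bid as a number."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    match = re.search(r"\d+(?:\.\d+)?", resp.choices[0].message.content)
    return float(match.group()) if match else 0.0

# One round: bidder 0 is GPT; opponents bid the risk-neutral Nash strategy v * (n - 1) / n
n = 4
values = [random.uniform(0, 100) for _ in range(n)]
bids = [gpt_bid(values[0], n - 1)] + [v * (n - 1) / n for v in values[1:]]
winner = max(range(n), key=lambda i: bids[i])
print(f"winner pays {bids[winner]:.2f}, surplus {values[winner] - bids[winner]:.2f}")
```

Repeating rounds and comparing GPT's bids both to the Nash benchmark and to the overbidding typically seen in human subjects would be one way to run the comparison.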

Aug 19, 2023 · Liked by Kevin Dorst

A very interesting post -- it's fascinating to see ChatGPT evaluated in this way. It's a thorough yet concise post, and very well-written.

I am a little confused by the claim "a trillion-parameter large language model that we know reasons in a fundamentally probabilistic way".

I know you address this briefly at the end, but the logic doesn't seem fully compelling there. Most internet users never provide any text relevant to a given coding problem, or LSAT question, or what have you. Sure, there's a lot of babble in ChatGPT's training data -- there's also a lot of not-babble! So the argument that ChatGPT outperforms the average internet user -> ChatGPT is doing beyond-human reasoning -> ChatGPT's reproduction of these 'errors' should make us question whether they really are 'errors' doesn't hold water for me. Even without strongly claiming otherwise: actually understanding how ChatGPT reasons is a big and very difficult project, and I don't think you can simply point to its effectiveness to draw conclusions about its style of reasoning.

I may be missing something in your argument. Would love to hear more on this. Thanks for the excellent posts!

Aug 19, 2023 · edited Aug 19, 2023

The problem with this reasoning is that a model trained on the entire internet with no fine-tuning will babble, and would probably not score above the average internet user on many benchmarks. The ChatGPT we use every day is fine-tuned on extremely extensive and expensive datasets of high-quality answers and ratings for math, reasoning, and code questions -- basically, for any problem people are likely to bring to the chatbot, high-quality answers and preference ratings were collected. This fine-tuning is responsible for the vast majority of the capabilities the model exhibits when you interact with it.
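To make the "ratings" part concrete, a minimal sketch (PyTorch) of the standard pairwise Bradley-Terry preference loss typically used to train a reward model from such ratings; `reward_model` is a placeholder interface, and the chatbot is then pushed toward answers this model scores highly, i.e. toward what human raters preferred.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    """chosen_ids / rejected_ids: token tensors for the human-preferred and dispreferred answers."""
    r_chosen = reward_model(chosen_ids)      # (batch,) scalar scores; placeholder interface
    r_rejected = reward_model(rejected_ids)
    # Maximize the margin by which the preferred answer outscores the rejected one
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```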

So what we see is exactly what we would expect from a model trained on high-quality human answers to reasoning questions, and that's why it exhibits the same flaws.

I think the idea that known, obvious flaws in human reasoning are actually in some way generally beneficial is a nice story to tell ourselves -- and, sadly, one that simply isn't true.
