
GPT-4 is capable of reasoning and even weighing different scenarios against each other from multiple perspectives. It is not a stochastic parrot.

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/BeginningInfluence55 on 2023-06-27 07:36:20+00:00.


I made up the following scenario myself. It may be unconsciously inspired by movies or other media I have consumed, but the exact scenario is entirely my own invention.

https://chat.openai.com/share/0ec20f31-7c8e-4b7c-a310-77a745f60472

This is certainly a very tough question. The stakes are high, and every option affects the ship negatively. However, the right answer is of course B. You really need to take multiple perspectives and weigh each option against the others; you need to do internal reasoning.

I am not sure all humans would settle on B, yet GPT-4 ALWAYS says B, no matter how often you try it. You can even add chain-of-thought to the prompt or ask for step-by-step thinking, and the answer stays the same.
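If you want to repeat the experiment programmatically instead of re-running it by hand in the chat UI, here is a minimal sketch using the 2023-era `openai` Python package. The `SCENARIO` text and the API key are placeholders (the actual scenario wording is in the shared chat link above), and the step-by-step instruction is just one way of adding chain-of-thought to the prompt.

```python
import openai  # assumes the pre-1.0 (mid-2023) openai Python SDK

openai.api_key = "YOUR_API_KEY"  # placeholder

# Placeholder for the ship scenario; paste the text from the shared chat link.
SCENARIO = """<scenario description and options A-D go here>"""

def ask_gpt4(n_trials: int = 5) -> list[str]:
    """Re-run the same scenario several times and collect GPT-4's answers."""
    answers = []
    for _ in range(n_trials):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=1.0,  # keep sampling noise so repeated runs are a real test
            messages=[
                {
                    "role": "user",
                    "content": SCENARIO
                    + "\n\nThink step by step, then state your final answer "
                      "as a single letter.",
                },
            ],
        )
        answers.append(resp.choices[0].message["content"])
    return answers

if __name__ == "__main__":
    for answer in ask_gpt4():
        print(answer)
```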

For me, this shows that GPT-4 is capable of some internal reasoning and weighing of options, which goes beyond merely applying statistics. It can even explain WHY it chooses B.
