OpenAI's release of GPT-4.1 for ChatGPT came quietly, but it represents an impressive upgrade, albeit one focused specifically on logical reasoning and coding. Its enormous context window and grasp of structured thinking could open doors for a lot of new programming and puzzle-solving. But OpenAI often brags about the coding abilities of its models in ways that the not-so-technically-minded find tedious at best.
I decided it might be more interesting to apply that logical, code-minded rigor to more human interests – specifically, riddles and logic puzzles. Rather than simply see how GPT-4.1 performed on its own, I ran it against a couple of other ChatGPT models. I picked GPT-4o, the default choice available to every ChatGPT user, as well as o3, OpenAI's high-octane reasoning model designed to chew through math, code, and puzzles using reason like a scalpel. This Logic Olympics isn't particularly scientific, but it offers at least a flavor of how the models stack up.
Cat in a box
I decided to start with a test of deductive reasoning and feline pursuit. I gave the three models this puzzle to solve: There are five boxes in a row, numbered 1 to 5, in which a cat is hiding. Every night, he jumps to an adjacent box, and every morning, you have one chance to open a box to find him. How do you find the cat?
This riddle is not just about guessing – it’s about devising a strategy that guarantees you’ll catch the slippery feline in a finite number of days, no matter where he starts.
GPT-4.1 dove in like it had read a thousand riddles just like this one. It proposed a clever deterministic search pattern where you open boxes in a sequence that steadily eliminates all possibilities. It even simulated the cat's movements step by step, explaining how, eventually, the probability collapses into certainty.
It took the o3 model 22 seconds to think through the answer. Its explanation was somewhat more verbose, but it arrived at the same strategy and a five-day maximum to find the cat. GPT-4o was surprisingly brief and to the point in its deduction. It didn't go too much into the specifics of why the approach works, though it did explain that it used what's known as a 'chasing strategy.'
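For readers who want to see the logic laid out, here's a minimal Python sketch of my own (not any model's actual output) that verifies a box-opening sequence by tracking every box the cat could still occupy – a handy way to sanity-check both the strategy and the models' day counts.

```python
# Verify a box-opening sequence by simulating all of the cat's
# possible positions at once: the cat is guaranteed caught once no
# possible hiding spot remains.

def worst_case_days(num_boxes, opening_sequence):
    """Return the day by which the cat is guaranteed caught, or None
    if the sequence can fail for some unlucky starting box."""
    possible = set(range(1, num_boxes + 1))  # cat could start anywhere
    for day, box in enumerate(opening_sequence, start=1):
        possible.discard(box)  # morning: open one box
        if not possible:
            return day  # every remaining hiding spot is eliminated
        # night: from each remaining spot, the cat jumps to an adjacent box
        possible = {p + step for p in possible for step in (-1, 1)
                    if 1 <= p + step <= num_boxes}
    return None  # this sequence doesn't guarantee a catch

# The classic sweep for five boxes: check 2, 3, 4, then repeat.
print(worst_case_days(5, [2, 3, 4, 2, 3, 4]))
```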
Wine space
With the models having proven good at numbers, I next set them a riddle built around space and physics. This is one of those old-school puzzles that rewards real-world thinking. No math, no code, just physics and imagination. The puzzle went: There is a barrel with no lid and some wine in it. “This barrel of wine is more than half full,” says the woman. “No, it’s not,” says the man. “It’s less than half full.” Without measuring anything or removing wine, how can they determine who is correct?
GPT-4.1 handled it gracefully. It walked me through the solution: tilt the barrel until the wine just touches the lip. If you can see any of the bottom of the barrel, it's less than half full; if not, it's more than half full. It needed just a couple of paragraphs to cover both how to find the answer and why it works.
The o3 model went even more spartan with its answer, using just a couple of bullet points to convey the same information. If anything, the AI seemed oddly impatient to be done explaining, concluding with “No rulers, no siphons – just a slow tilt tells you who’s right.” GPT-4o's response split the difference between the other two. It used a couple of bullet points for the answer, but then wrote a long-form explanation of the physics behind it.
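If you'd rather trust arithmetic than imagination, here's a quick Monte Carlo sketch of my own (it assumes an idealized straight-sided, cylindrical barrel) showing why the tilt test works: the borderline case, where the wine touches the rim on one side and just reaches the far edge of the bottom, is exactly half full. See any bottom and you're below that line.

```python
# Sample random points in a cylinder and count how many fall under the
# tilted wine surface that runs from the rim down to the far bottom edge.
import random

def filled_fraction(trials=200_000, r=1.0, h=1.0):
    inside = below = 0
    while inside < trials:
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        z = random.uniform(0, h)
        if x * x + y * y > r * r:
            continue  # point landed outside the barrel
        inside += 1
        # Surface plane: height h at the rim (x = -r), 0 at the far
        # bottom edge (x = +r).
        if z <= h * (r - x) / (2 * r):
            below += 1
    return below / inside

print(filled_fraction())  # converges on 0.5: the borderline case is half full
```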
Puzzling letter
My final puzzle took logic in an entirely different direction. Instead of focusing on deduction, it's about wordplay and noticing patterns in language. I asked the three models: What occurs once in a minute, twice in a moment, and never in a thousand years?
GPT-4.1 nailed it in three bullet points, explaining how the letter M is the answer. It pointed out where the letter occurs in “minute” and “moment” and why “a thousand years” doesn’t include it.
o3 also answered in three bullet points, but used only a few words in each, stating how many times the letter M appears in each phrase and nothing more. GPT-4o also kept to short bullet points, but at least ventured an explanation beyond the bare facts. It came off as almost encouraging when it explained, “The trick is in interpretation – thinking literally (letters), not figuratively (time).”
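The whole trick fits in a few lines of Python, if you want to check the counting yourself:

```python
# Count the letter 'm' in each phrase from the riddle.
for phrase in ("a minute", "a moment", "a thousand years"):
    print(phrase, "->", phrase.count("m"))
# a minute -> 1, a moment -> 2, a thousand years -> 0
```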
Logic champ
After spending way too much time talking to AI models about cats, wine, and the alphabet, I can logically conclude a few things. All of the models have a pretty good handle on logic. They may vary in how detailed their responses are, but they definitely understand the mechanics underneath the riddles.
GPT-4.1 reasons clearly, it explains itself well, and now that it lives in ChatGPT, it will likely be a good choice for any kind of logic-based problem. That includes coding, though, as mentioned above, watching code get written isn't a spectacle I find particularly gripping; only the final result is likely to be interesting.
Still, if you want help solving riddles, pretty much any of these models will serve you well. And since any of them will do, you may not even notice a difference, which, honestly, seems perfectly irrational.