The Shortest IQ Test That Only 17% Pass, Now Viral on TikTok
The world's shortest IQ test has resurfaced in the public eye, fueled by a TikTok video amassing 14 million views. This test, known as the Cognitive Reflection Test (CRT), was designed in 2005 by psychologist Shane Frederick. It asks three seemingly simple questions but exposes how deeply human intuition can deceive us. Why do so many people fall for the obvious answer, even when it's wrong? The CRT isn't just a curiosity—it's a window into the mind's tendency to favor speed over accuracy.

Limited access to the test's original research data has fueled speculation about its validity. Studies from MIT, Princeton, and Harvard, however, have repeatedly confirmed its effectiveness. Only 17% of participants in one experiment could answer all three questions correctly. That's not just a statistic—it's a stark reminder of how easily we overlook the most basic logical steps.
The first question is a classic trap: A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost? Most people immediately answer 10 cents. But the correct answer is 5 cents: the bat then costs $1.05, bringing the total to $1.10. If the ball really were 10 cents, the bat would have to cost $1.10, and the pair would total $1.20. The intuitive answer feels right because it treats $1.00 as the bat's price rather than as the difference between the two prices.
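Writing the puzzle as algebra makes the trap visible: if the ball costs x, then x + (x + 1.00) = 1.10, so x = 0.05. A minimal Python sketch, added here purely for illustration and working in integer cents to avoid rounding quirks:

```python
# Bat-and-ball puzzle, in cents: ball + (ball + 100) = 110.
total = 110       # $1.10 combined
difference = 100  # the bat costs $1.00 more than the ball
ball = (total - difference) // 2   # 5 cents
bat = ball + difference            # 105 cents

assert ball + bat == total and bat - ball == difference
print(f"ball = {ball} cents, bat = {bat} cents")  # ball = 5 cents, bat = 105 cents
```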
The second question challenges assumptions about scaling: If five machines take five minutes to build five widgets, how long would 100 machines take to build 100 widgets? The common wrong answer is 100 minutes. But the real answer is five minutes. Each machine builds one widget in five minutes, so 100 machines working in parallel complete 100 widgets in the same time. This question tests whether we can resist the urge to overcomplicate scaling problems.
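The scaling logic above reduces to a per-machine rate: five widgets from five machines in five minutes means each machine produces one widget every five minutes. A small sketch (the function name and parameters are my own, not from the article) generalizes this:

```python
def build_time(machines, widgets, base_machines=5, base_widgets=5, base_minutes=5):
    """Minutes needed, given the baseline rate from the puzzle."""
    # Each machine's rate: base_widgets / (base_machines * base_minutes)
    # widgets per minute = 0.2, i.e. one widget every five minutes.
    rate_per_machine = base_widgets / (base_machines * base_minutes)
    return widgets / (machines * rate_per_machine)

print(build_time(100, 100))  # 5.0 — parallel machines keep the time constant
print(build_time(1, 100))    # 500.0 — a single machine really would take longer
```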

The third question, about lily pads doubling in size daily, is perhaps the most counterintuitive. If a patch covers a lake in 48 days, how long until it covers half the lake? Many guess 24 days. But the answer is 47. Since the pads double daily, the lake would be half covered on day 47, and fully covered the next day. This exposes a deep misunderstanding of exponential growth. How often do we fail to see such patterns in real life?
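The lily-pad answer follows from working backward from full coverage: doubling daily means the lake's coverage on day d is 2 raised to (d − 48) of its area. A brief sketch of that relation (the function is illustrative, not part of the test):

```python
def coverage(day, full_day=48):
    """Fraction of the lake covered on a given day, doubling daily
    and reaching full coverage on full_day."""
    return 2 ** (day - full_day)

print(coverage(47))  # 0.5  — half covered one day before the end
print(coverage(24))  # ~6e-8 — at the halfway point in time, almost nothing
```

The day-24 value is the striking part: halfway through the 48 days, the patch covers less than a ten-millionth of the lake, which is why linear intuition fails so badly here.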
Innovation in testing cognitive biases has always hinged on creating scenarios that mirror real-world decision-making. The CRT's popularity on social media reflects a growing fascination with how people think—and how often they're wrong. Yet, the test also raises questions about data privacy. If such insights are so valuable, who controls the data generated by millions of people attempting it?

Studies from 2011 and 2016 reveal varying success rates: 6.6% of US college students and 41.3% of Iranian university students answered all three questions correctly. These disparities highlight how cultural, educational, and economic factors shape cognitive performance. But they also prompt a larger question: Does the CRT truly measure intelligence, or does it measure access to certain kinds of education?
Social media reactions to the test have been polarizing. Some users insist the answers are obvious, while others argue they're being tricked. One TikTok comment claimed, 'The math ain't mathing,' while another replied, 'It is tho.' This divide underscores the test's ability to provoke debate. Can a three-question quiz really spark such intense disagreement? The CRT isn't just a test—it's a mirror held up to human reasoning, flawed and fascinating in equal measure.
As tech adoption accelerates, tools like the CRT may become more integrated into assessments of critical thinking. But with innovation comes responsibility. Will society prioritize accuracy over convenience? Will data from these tests be used to improve education, or to reinforce biases? The answers to these questions may shape the future of how we measure—and value—human intelligence.