Why Is the Combined Prediction of Thousands of Ordinary People More Accurate Than the Opinion of a Single Expert? The Science of the Wisdom of Crowds and the Philosophy of Pusulam
A Story That Started at a County Fair
In 1906, British scientist Francis Galton was visiting a county fair. There was an interesting contest being held: guess the weight of an ox. 787 people entered. Some were butchers and farmers, but most were ordinary people with no special knowledge of livestock. Some guessed way too high, some way too low.
But Galton noticed something: when he averaged all the guesses, the result was only one percent off from the actual weight. The average of 787 guesses was more accurate than any single expert could have managed on their own.
It became one of the first documented demonstrations of what is now called the "Wisdom of Crowds."
Why Does It Work?
At first glance, it seems counterintuitive. How can ordinary people possibly out-predict experts? The answer lies in how errors are distributed.
Every person carries their own bias. One is an optimist, another a pessimist. One looks at things from a European perspective, another from the Middle East. One is an economist, another an engineer. Individually, they might all be wrong. But because their errors point in random directions, they cancel each other out at scale.
What remains is the pure signal: the estimate closest to the truth.
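You can watch this cancellation happen in a few lines of code. The simulation below is only a sketch with made-up bias and noise ranges; the only numbers borrowed from Galton's story are the crowd size (787) and the ox's commonly cited dressed weight of about 1,198 pounds.

```python
import random

# Illustrative simulation of error cancellation (invented bias/noise ranges).
TRUE_WEIGHT = 1198   # pounds, the figure usually cited for Galton's ox
N_GUESSERS = 787     # size of the crowd at the 1906 fair

random.seed(42)
guesses = []
for _ in range(N_GUESSERS):
    personal_bias = random.uniform(-150, 150)  # optimist vs pessimist, background, etc.
    noise = random.gauss(0, 50)                # plain randomness in the individual guess
    guesses.append(TRUE_WEIGHT + personal_bias + noise)

crowd_average = sum(guesses) / len(guesses)
worst_guess = max(guesses, key=lambda g: abs(g - TRUE_WEIGHT))

print(f"Crowd average:      {crowd_average:.0f} lb "
      f"({abs(crowd_average - TRUE_WEIGHT) / TRUE_WEIGHT:.1%} off)")
print(f"Worst single guess: {worst_guess:.0f} lb "
      f"({abs(worst_guess - TRUE_WEIGHT) / TRUE_WEIGHT:.1%} off)")
```

Any single guess here can be off by a couple of hundred pounds, yet because the individual biases point in random directions, the average typically lands within a percent of the truth.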
Experts, on the other hand, face a different set of problems. They tend to fall into four key traps:
Confirmation bias. They look for evidence that supports what they already believe, and ignore evidence that contradicts it. If an economist thinks "the dollar will fall," they focus on data supporting that view and miss signals pointing the other way.
The anchoring effect. They get stuck on the first number or opinion they hear. Once an analyst says "Bitcoin will hit 100K," subsequent analyses start orbiting that figure.
Overconfidence. They think they know more than they actually do. Research consistently shows that experts' confidence in their own predictions runs higher than their actual accuracy rates.
Herd mentality. They gravitate toward whatever other experts are saying. Nobody wants to be the one who breaks from the group.
In a crowd, these same traps pull in different directions and end up balancing each other out.
Prediction Markets Systematize This Wisdom
The wisdom of crowds is a compelling theory, but it needs a mechanism to work in practice. That is exactly what prediction markets provide.
How does it work? It's straightforward.
A question is posed. Something like: "Will a ceasefire be reached in Iran by the end of April?"
People vote Yes or No. But they are not just voting; they are also signaling how confident they are, because voting costs a resource (Voting Rights). You do not want to commit a lot of resources to something you are unsure about.
This simple mechanism produces far richer information than a standard poll. In a poll, you say "I think yes" and move on. In a prediction market, you say "I think yes, and I am willing to put 10 Voting Rights on it." There is a world of difference between those two statements.
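How those committed Voting Rights get folded into a single community number is not spelled out here, so the snippet below is a hypothetical sketch, not Pusulam's actual formula: it simply weights each Yes or No by the Voting Rights behind it, so a confident voter moves the forecast more than a hesitant one. The `Vote` class and `crowd_forecast` function are invented names for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    answer: str     # "yes" or "no"
    committed: int  # Voting Rights the voter is willing to put behind the answer

def crowd_forecast(votes: list[Vote]) -> float:
    """Hypothetical aggregation: weight each vote by the Voting Rights committed."""
    yes_weight = sum(v.committed for v in votes if v.answer == "yes")
    total_weight = sum(v.committed for v in votes)
    return yes_weight / total_weight if total_weight else 0.5

# A plain poll would call this 50/50 (two Yes, two No);
# weighting by conviction puts the community forecast near 73%.
votes = [Vote("yes", 10), Vote("yes", 1), Vote("no", 2), Vote("no", 2)]
print(f"Community forecast for Yes: {crowd_forecast(votes):.0%}")
```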
Human Intelligence or Artificial Intelligence?
Artificial intelligence has transformed nearly every field in recent years. So a natural question follows: would bots not outperform humans in prediction markets too?
According to 2026 data from the Metaculus platform, AI systems are closing in on community forecasts but have not yet surpassed professional forecasters. And there is an important nuance: AI performs best in areas with large existing datasets. In topics requiring local knowledge, cultural context, or human intuition, it still falls behind.
At Pusulam, we made a deliberate choice: no bots on our platform. Every vote comes from a human being.
Why? Because the wisdom of crowds emerges from the collision of different perspectives. The gut instinct of a shopkeeper in Ankara about exchange rates, the technical knowledge of an engineer in Istanbul about technology trends, the on-the-ground observations of a journalist in Cairo about Middle Eastern dynamics. The sum of these different viewpoints paints a richer picture than any algorithm can.
We are not excluding AI entirely. Our platform includes an AI assistant that can analyze any market, cite sources, and offer multiple perspectives. But the act of forecasting, the actual voting, belongs entirely to humans.
Diversity Is Strength
For the wisdom of crowds to work, one condition must be met: diversity. A thousand people who all think alike produce less information than a hundred people who think differently.
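That claim can be checked with a small, purely illustrative simulation. In the sketch below, a "like-minded" crowd shares most of its bias while a "diverse" crowd's errors are independent; the shared bias never cancels out, no matter how many people pile on.

```python
import random

random.seed(7)
TRUTH = 100.0  # arbitrary quantity the crowd is trying to estimate

def crowd_error(n: int, shared_bias_weight: float) -> float:
    """One trial: error of the average guess of a crowd of n people.
    shared_bias_weight sets how much of each error is a bias the whole
    crowd shares (like-minded) versus each person's own independent noise."""
    shared_bias = random.gauss(0, 20)
    guesses = [
        TRUTH
        + shared_bias_weight * shared_bias
        + (1 - shared_bias_weight) * random.gauss(0, 20)
        for _ in range(n)
    ]
    return abs(sum(guesses) / n - TRUTH)

def typical_error(n: int, shared_bias_weight: float, trials: int = 1000) -> float:
    return sum(crowd_error(n, shared_bias_weight) for _ in range(trials)) / trials

print(f"1,000 who think alike:     typical error {typical_error(1000, 0.9):.1f}")
print(f"100 who think differently: typical error {typical_error(100, 0.0):.1f}")
```

In this setup the thousand like-minded guessers typically end up several times further from the truth than the hundred independent ones, because the part of the error they all share survives the averaging.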
That is why we did not limit Pusulam to a single country. We started in Turkey, but have since expanded to Germany, France, the United Kingdom, Spain, Brazil, and Egypt. People from 7 different cultures, in 7 different languages, all searching for answers to the same questions.
A Turkish user's perspective on "Will there be a ceasefire in Iran?" is different from a German user's. Both are valuable. Both make the community's forecast more accurate.
Let the Data Speak
Pusulam's philosophy fits in a single sentence: let the data speak.
We do not tell you what to think. We do not assemble expert panels. We do not offer editorial commentary. Instead, we ask the question, the community answers, and the data emerges.
That data is the combined intelligence of thousands of people. And history has shown, time and again, that this intelligence is more accurate than what any individual, any expert, or even any AI can produce alone.
On Pusulam, every vote matters. Your perspective moves the community's forecast one step closer to the truth. You do not need to be an expert. Just say what you think.
"The judgment of the crowd is better than that of any individual, or at least no worse." , Aristotle, Politics, 350 BC