
The Limits of Crowd Wisdom

Jaron Lanier, in You Are Not a Gadget, commenting on the limits of crowd wisdom:

There are certain types of answers that ought not be provided by an individual. When a government bureaucrat sets a price, for instance, the result is often inferior to the answer that would come from a reasonably informed collective that is reasonably free of manipulation or runaway internal resonances. But when a collective designs a product, you get design by committee, which is a derogatory expression for a reason.

Collectives can be just as stupid as any individual, and in important cases, stupider. The interesting question is whether it’s possible to map out where the one is smarter than the many.

(continuing in an Edge essay:)

There is a lot of history to this topic, and varied disciplines have lots to say. Here is a quick pass at where I think the boundary between effective collective thought and nonsense lies: The collective is more likely to be smart when it isn’t defining its own questions, when the goodness of an answer can be evaluated by a simple result (such as a single numeric value), and when the information system which informs the collective is filtered by a quality control mechanism that relies on individuals to a high degree. Under those circumstances, a collective can be smarter than a person. Break any one of those conditions and the collective becomes unreliable or worse.

Meanwhile, an individual best achieves optimal stupidity on those rare occasions when one is both given substantial powers and insulated from the results of his or her actions.

If the above criteria have any merit, then there is an unfortunate convergence. The setup for the most stupid collective is also the setup for the most stupid individuals.

(back to the book:)

Scientific communities likewise achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and “blind” elitism — blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy. The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.

Yes, there have been plenty of scandals in government, the academy and in the press. No mechanism is perfect, but still here we are, having benefited from all of these institutions. There certainly have been plenty of bad reporters, self-deluded academic scientists, incompetent bureaucrats, and so on. Can the hive mind help keep them in check? The answer provided by experiments in the pre-Internet world is “yes,” but only provided some signal processing is placed in the loop.

The “wisdom of crowds” effect should be thought of as a tool. The value of a tool is its usefulness in accomplishing a task. The point should never be the glorification of the tool. Unfortunately, simplistic free market ideologues and noospherians tend to reinforce one another’s unjustified sentimentalities about their chosen tools.

Since the internet makes crowds more accessible, it would be beneficial to have a wide-ranging, clear set of rules explaining when the wisdom of crowds is likely to produce meaningful results. Surowiecki proposes four principles in his book The Wisdom of Crowds, framed from the perspective of the interior dynamics of the crowd. He suggests there should be limits on the ability of members of the crowd to see how others are about to decide on a question, in order to preserve independence and avoid mob behavior. Among other safeguards, I would add that a crowd should never be allowed to frame its own questions, and its answers should never be more complicated than a single number or multiple choice answer.
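The independence safeguard, and Lanier's insistence on single-number answers, lend themselves to a quick illustration. The Python sketch below uses made-up numbers and is not from the book: a crowd estimates one quantity, and when its errors are independent they largely cancel in the average, while a crowd that anchors on a single loud early guess inherits that guess's bias.

```python
import random

random.seed(0)

TRUE_VALUE = 1200          # hypothetical quantity the crowd is asked to estimate
CROWD_SIZE = 1000

def independent_guesses(n):
    """Each member errs on their own: unbiased noise, no peeking at others."""
    return [TRUE_VALUE + random.gauss(0, 300) for _ in range(n)]

def herded_guesses(n):
    """Members see a confident early guess and anchor on it, so errors correlate."""
    anchor = TRUE_VALUE + 400
    return [0.7 * anchor + 0.3 * (TRUE_VALUE + random.gauss(0, 300)) for _ in range(n)]

def crowd_error(guesses):
    """How far the crowd's average lands from the true value."""
    mean = sum(guesses) / len(guesses)
    return abs(mean - TRUE_VALUE)

print("independent crowd error:", round(crowd_error(independent_guesses(CROWD_SIZE))))
print("herded crowd error:     ", round(crowd_error(herded_guesses(CROWD_SIZE))))
```

The independent crowd's average lands within a few units of the true value; the herded crowd's average is dragged a few hundred units toward the anchor, no matter how large the crowd grows.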

More recently, Nassim Taleb has argued that applications of statistics, such as crowd wisdom schemes, should be divided into four quadrants. He defines the dangerous “Fourth Quadrant” as comprising problems that have both complex outcomes and unknown distributions of outcomes. He suggests making that quadrant taboo for crowds.
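Taleb's Fourth Quadrant is easier to feel in numbers than in prose. The sketch below is an illustration under assumed distributions, not anything from the book: it compares how stable a crowd's average is when outcomes are thin-tailed versus heavy-tailed. In the heavy-tailed case, one crowd's average tells you little about the next crowd's, which is roughly why he wants that territory off-limits.

```python
import random

random.seed(1)

def average(xs):
    return sum(xs) / len(xs)

def trial_means(draw, n_trials=10, n_samples=1000):
    """Run several same-sized crowds and see how much their averages disagree."""
    return [round(average([draw() for _ in range(n_samples)]), 2) for _ in range(n_trials)]

# Thin-tailed world: outcomes behave like human heights; the average settles down.
thin = trial_means(lambda: random.gauss(100, 15))

# Fat-tailed world: Pareto-like outcomes with shape alpha = 1.1, where rare
# extreme observations dominate; the average jumps around from crowd to crowd.
fat = trial_means(lambda: random.paretovariate(1.1))

print("thin-tailed crowd averages:", thin)
print("fat-tailed crowd averages: ", fat)
```

The thin-tailed averages cluster tightly around 100; the fat-tailed averages scatter widely, because a single extreme draw can swamp the other 999 answers.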

Still curious? You Are Not a Gadget discusses the technical and cultural problems that have unwittingly arisen from technology.