While most of the discussion at this week's Open Source Business Conference was refreshingly pragmatic, focused on the commercial role and prospects of open source software, there were a few more cosmic moments. Notably, Mitch Kapor brought a bit of Wikimania to the proceedings, offering a Zen-like "meditation" on Wikipedia as a harbinger of a much broader open-source movement in the future. (Wikipreneur Ross Mayfield summarizes the talk.) Kapor believes that the community-run online encyclopedia explodes the myth "that someone has to be in charge" as well as the assumption "that experts count." He argues that Wikipedia shows you can create high-quality products through the contributions of a broad, democratic community of amateurs, a self-governing collective operating on the internet without any hierarchy. That, in Kapor's view, is "the next big thing."
Kapor's argument hinges on the contention that Wikipedia is actually good. In recent months, Wikipedia's content has come under considerable criticism, with the encyclopedia accused of everything from libel to infantilism. Like many of the encyclopedia's defenders, Kapor counters those criticisms by citing a recent article in the journal Nature that ostensibly proves that the quality of Wikipedia is "roughly equivalent" to that of the venerable Encyclopedia Britannica. The Nature article has become something of a get-out-of-jail-free card for Wikipedia and its fans. Today, whenever someone raises questions about the encyclopedia's quality, the ready-made retort is: "Nature says it's as good as Britannica."
Kapor's remarks inspired me to take a look at that much-cited Nature article. I found that it was something less than I had expected. It is not one of the peer-reviewed, expert-written research articles for which the journal is renowned. (UPDATE: I confirmed this with the article's author, Jim Giles. In an e-mail to me, he wrote, "The article appeared in the news section and is a piece of journalism, so it did not go through the normal peer review process that we use when considering academic papers.") Rather, it's a fairly short, staff-written piece based on an informal survey carried out by a group of Nature reporters. The reporters chose 50 scientific topics that are covered by both Wikipedia and Britannica, selecting entries that were of relatively similar length in both publications. For each topic, they also chose an academic expert. They then sent copies of both entries to the respective experts, asking them to list any "errors or critical omissions" appearing in the writeups. They received 42 responses.
The article itself doesn't actually go into much detail about the survey's findings. It says that the "expert-led investigation" revealed that "the difference in accuracy [between the encyclopedias] was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three." But Nature subsequently released "supplementary information" about the survey, including more details on the methodology and a full list of the errors cited by the experts. (In total, Wikipedia had 162 errors while Britannica had 123.) Read together, the article and the supplementary information indicate that the survey probably exaggerated Wikipedia's overall quality considerably.
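Those averages line up with the raw totals, assuming (as the write-up suggests but doesn't state outright) that they were computed over the 42 entries that received expert responses:

$$\frac{162}{42} \approx 3.9 \text{ Wikipedia errors per entry}, \qquad \frac{123}{42} \approx 2.9 \text{ Britannica errors per entry}$$

Note, too, what the "not particularly great" framing softens: on a per-entry basis, Wikipedia's error count runs roughly a third higher than Britannica's (162/123 ≈ 1.3).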
First and most important, the survey looked only at scientific subjects. As has often been noted, Wikipedia's quality tends to be highest in esoteric scientific and technological topics. That's not surprising. Because such topics are unfamiliar to most people, they attract a narrower and more knowledgeable group of contributors than do more general-interest subjects. Who, after all, would contribute to an entry on "kinetic isotope effect" or "Meliaceae" (both of which were in the Nature survey) other than someone with a specialized understanding of the topic? The Nature survey, in other words, played to Wikipedia's strength.
That's fine. Nature is, after all, a scientific journal. But, unfortunately, the narrowness of the survey has tended to get lost in media coverage of it. CNET, for instance, ran a story on the survey under the headline "Study: Wikipedia as Accurate as Britannica." The story reported that "Nature chose articles from both sites in a wide range of topics" and that it found that "Wikipedia is about as good a source of accurate information as Britannica." Such incomplete, if not misleading, descriptions have informed subsequent coverage. For example, one prominent technology blogger covering Kapor's speech this week wrote simply that "a recent study showed that Wikipedia is just as accurate as the Encyclopedia Britannica."
Second, the Nature reporters filtered out some of the criticisms offered by the experts. They note, in the supplementary information, that the experts' reviews were "examined by Nature's news team and the total number of errors estimated for each article. In doing so, we sometimes disregarded items that our reviewers had identified as errors or critical omissions."