A few days ago, I challenged Ed Felten to do some more comparison work. In the spirit of Milgram, I didn’t propose a theory. (This was mostly because I was trying to make a good joke about assigning the professor homework, but couldn’t come up with one.) However, on consideration, I think that I should propose some theories, and also not influence the experiment.
So, hypothesis 1: Wikipedia will have 30-50% more entry coverage than the others. In particular, I don't expect that Ed Felten will have an entry, and I expect that one of his two computer science entries will be missing from each comparison encyclopedia.
Hypothesis 2: The quality of Wikipedia, measured by errors detected, will match that of the others. Building a large encyclopedia is a lot of work, and I don't expect that the quality assurance and fact checking will be great anywhere.
Hypothesis 3: The quality of Wikipedia, measured by the depth of its entries, will be substantially greater than the comparison. Techies aren't noted for brevity or conciseness, and the web doesn't have physical constraints holding down the size of entries, whereas each DVD you ship may add $2 to the cost of a product. I expect the difference will be largest against the print or CD editions.
Hypothesis 4: The quality of Wikipedia, as measured by the accessibility of its entries, will be lower. By accessibility, I mean how good the basic introduction and contextualization are, and how well the entry takes you from no knowledge to some.
Hypothesis 5: Ed will believe that Encarta’s entry on the Microsoft trial is biased towards Microsoft.
An encyclopedia must be measured first on accuracy, and second on breadth. A roomful of monkeys writing entries does not get you a useful encyclopedia, but neither does one with a single entry. (There are a great many useful topical encyclopedias which address this by constraining themselves to one subject.)
I expect that Wikipedia’s accuracy will be roughly that of the others, and that it will win, hands down, on breadth and depth. However, this test is biased by the selection of terms, which are ones known to a computer science professor. If my hypotheses pan out, it would be fascinating to see if we could recruit from across the Princeton faculty, to see if the same tests hold true across wider disciplines.
(I did two short tests, on Rabbi Akiba and Brillat-Savarin; Wikipedia spells it Akiva. But I don’t have a comparison document to compare against.)