The problem is that, on larger projects, it's impossible to check every word in
the project against every possible website in a timely manner. Imagine: en.wp installs FlaggedRevs
for every article. All edits and new pages, before going "live", have to undergo thorough checks for
factuality against the sources and detailed copyright checks, down to the last word.
Think of what would happen. The backlog would soar, as there are not enough reviewers
to check every edit in such detail as it is made, even with the assistance of bots;
once edits arrive faster than reviewers can clear them, the backlog grows without bound.
This potential situation is somewhat similar to Wikinews', but WN's problems are compounded
by the fact that its articles *must* always be up-to-date or they are not useful at all.
I happen to think this approach is somewhat un-wiki-like, especially when there are very
few reviewers. I note that most WMF projects don't follow this model, instead removing
copyright violations and inaccurate statements as they find them. In theory, this means
the articles will never be of any quality, but didn't someone once say, "Wikipedia works
in practice, but not in theory"?
CorenSearchBot would be quite useful for both Wikinews and OpenGlobe;
in general, I think wiki projects should take advantage of copyright searches
more often. If anyone can install the bot on OpenGlobe, please post at our Village Pump.
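
For illustration only, here is a minimal Python sketch of the kind of overlap check
such a bot runs once it has candidate pages in hand. This is my own guess at the
technique, not CorenSearchBot's actual code: the shingle size and threshold are
arbitrary, and `candidates` stands in for whatever pages a search API returned.

    import re
    from typing import Iterable, Set

    def shingles(text: str, size: int = 5) -> Set[str]:
        """Return the set of overlapping `size`-word shingles in `text`."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        return {" ".join(words[i:i + size])
                for i in range(len(words) - size + 1)}

    def jaccard(a: Set[str], b: Set[str]) -> float:
        """Jaccard similarity of two shingle sets: |A & B| / |A | B|."""
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    def looks_like_copyvio(article: str, candidates: Iterable[str],
                           threshold: float = 0.5) -> bool:
        """Flag the article if any candidate page shares enough shingles."""
        article_shingles = shingles(article)
        return any(jaccard(article_shingles, shingles(page)) >= threshold
                   for page in candidates)

The scoring step above is the cheap part; the expensive part is obtaining
`candidates` at all, which is exactly the search-API problem discussed below.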
Re: The WikiNews fork - for lack of a copyvio detection bot half a project was lost
On Wed, Sep 14, 2011 at 10:49:06AM -0500, Aaron Adrignola wrote:
> CorenSearchBot has not been operational for several months, since Yahoo
> stopped offering free access to its search API, and apparently the same
> is true for Google.
It might be useful to have a community-operated spider, then? That way, we could
also optimize our database for the kinds of queries we need.
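
To make that concrete, here is a rough sketch, again with hypothetical names and only
Python's standard library, of what the spider's database could look like if it were
optimized for the one query a copyvio bot needs: "which crawled pages share wording
with this article?". The crawl loop itself is elided; index_page() would be called on
each fetched page.

    import hashlib
    import re
    import sqlite3

    def shingle_hashes(text: str, size: int = 5):
        """Yield a hash for every overlapping `size`-word shingle."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        for i in range(len(words) - size + 1):
            shingle = " ".join(words[i:i + size])
            yield hashlib.sha1(shingle.encode()).hexdigest()

    db = sqlite3.connect("spider.db")
    db.execute("CREATE TABLE IF NOT EXISTS shingles "
               "(hash TEXT, url TEXT, PRIMARY KEY (hash, url))")

    def index_page(url: str, text: str) -> None:
        """Store every shingle hash of a crawled page."""
        db.executemany("INSERT OR IGNORE INTO shingles VALUES (?, ?)",
                       ((h, url) for h in shingle_hashes(text)))
        db.commit()

    def matching_urls(article_text: str, min_hits: int = 3):
        """The query we actually need: crawled pages sharing at least
        `min_hits` shingles with the article."""
        # Cap the hash list to stay under SQLite's default limit
        # on bound parameters (999).
        hashes = list(shingle_hashes(article_text))[:500]
        if not hashes:
            return []
        placeholders = ",".join("?" * len(hashes))
        rows = db.execute(
            "SELECT url, COUNT(*) FROM shingles"
            f" WHERE hash IN ({placeholders})"
            " GROUP BY url HAVING COUNT(*) >= ?"
            " ORDER BY COUNT(*) DESC",
            (*hashes, min_hits))
        return rows.fetchall()

Because the table is keyed on (hash, url), each lookup is an index scan rather than a
full-text search; that is the "optimize the database for our queries" part.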