So far we’ve defined a set of basic rules by which to rate a reference for validity. Next we turned this into a numerical confidence factor. Finally, we created a means for conveying this confidence factor through simple text formatting.
However, the weak point remains: the individual. Up to this point we have simply replaced one set of judgements (our own) with another (the writer's). While doing this makes the writer's prejudice more apparent, it provides no more balance to a text. To achieve that balance we need to give the source-weight control to another person, or persons.
Since this is the internet and open solutions are a Good Thing™, the obvious answer is to open the rating of sources to the public at large. Implementing such a system for hard-copy media such as newspapers is a more difficult task; however, they should be viewed as what they are: a snapshot in time. It would be perfectly feasible to display the reference weighting as known at that moment, and to judge it as such.
In practice we're talking about a centralised directory, much like a search engine, that monitors and collates individuals' assessments of source accuracy as they view it. Ideally it should be possible to mark individual parts of a text (as we are doing here) and rate them independently (remember: EMCWS).
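As a minimal sketch of such a directory: it only needs to accept ratings keyed to individual fragments of a text and average them back out. The class name, fragment identifiers, and 0–1 rating scale are illustrative assumptions, not part of the proposal itself.

```python
from collections import defaultdict

class RatingDirectory:
    """Hypothetical centralised directory: collects individual
    accuracy ratings (0.0 to 1.0) per independently-marked fragment."""

    def __init__(self):
        # fragment_id -> list of ratings submitted by the public
        self.ratings = defaultdict(list)

    def rate(self, fragment_id, rating):
        """Record one person's assessment of one fragment."""
        if not 0.0 <= rating <= 1.0:
            raise ValueError("rating must be between 0 and 1")
        self.ratings[fragment_id].append(rating)

    def confidence(self, fragment_id):
        """Average the collected ratings; None if nobody has rated yet."""
        votes = self.ratings[fragment_id]
        if not votes:
            return None
        return sum(votes) / len(votes)

# Usage: two readers rate one paragraph of one article.
directory = RatingDirectory()
directory.rate("article-42/para-3", 0.9)
directory.rate("article-42/para-3", 0.7)
```

Keying ratings to fragments rather than whole articles is what allows each part of a text to carry its own confidence, as described above.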
Of course, not every source will be rated, and we need a certain number of votes before a result can be deemed representative of the public (even if it happens to be correct).
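The vote threshold could be expressed as simply as this (the cutoff of 25 votes is an illustrative assumption; the text does not fix a number):

```python
MIN_VOTES = 25  # illustrative cutoff, not specified in the text

def public_confidence(votes, min_votes=MIN_VOTES):
    """Average the crowd's ratings only once enough votes exist;
    otherwise signal that the sample is not yet representative."""
    if len(votes) < min_votes:
        return None  # too few votes to speak for the public
    return sum(votes) / len(votes)
```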
At this point we call upon one of our earlier factors: “What record does this source have for being accurate?”. In other words, when faced with incomplete quality information for a particular source, judge the information on the basis of track record. As with the boy who cried wolf, there may be times when the source is accurate; treat these as the exception rather than the rule and they can be dealt with as necessary. For example, an article on the BBC may carry no reference, yet we have a general idea of how likely the BBC itself is to be accurate.
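That fallback rule might be sketched as follows; the function name and the 25-vote threshold are hypothetical, chosen only to make the logic concrete:

```python
def estimate_confidence(article_votes, publisher_track_record, min_votes=25):
    """Use direct ratings when an article has enough of them; otherwise
    fall back to the publisher's historical accuracy, as with the
    unreferenced BBC article mentioned above."""
    if len(article_votes) >= min_votes:
        return sum(article_votes) / len(article_votes)
    # Incomplete quality information: judge on track record instead.
    return publisher_track_record
```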
In this implementation, every rating for an article is also a rating for its publisher. Taking the BBC example above: the article itself would be checked first, then perhaps the general standard of reporting from Hampshire, then England, and finally the UK BBC website as a whole. Each level can be weighted according to the validity of its ratings (e.g. the number of people who have rated it) and the results worked forwards to give a final confidence.
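One plausible way to work the levels forwards is a vote-weighted average, from the article out to the whole site. This is a sketch under that assumption; the text leaves the exact weighting scheme open.

```python
def combined_confidence(levels):
    """levels: (average_rating, vote_count) pairs ordered from most
    specific (the article) to most general (the whole site). Each
    level's influence is proportional to how many people rated it."""
    total_votes = sum(count for _, count in levels)
    if total_votes == 0:
        return None  # nothing at any level has been rated yet
    return sum(rating * count for rating, count in levels) / total_votes

# Hypothetical figures: the article, the Hampshire desk, the UK site.
combined_confidence([(0.9, 12), (0.75, 340), (0.85, 9800)])
```

Levels with many ratings dominate, so a sparsely-rated new article inherits most of its confidence from the publisher's established record.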
Many of the basic ideas are now established, but it is not until they are put into practice that we will be able to begin identifying pitfalls and improving the system further. The next step is to create a live demonstration system so the fault-finding can begin. Help, as always, is greatly appreciated (and needed).