Around the world, various projects aim to detect deepfakes: media items created with AI that appear real. These projects use AI methods and systems to identify fabricated content. Detecting forged information is of critical importance to states, which want to prevent false information from influencing the decision making of their populations or their leaders. Programs at the national level are being assembled to address such content with modern technologies. From the results of these programs, detection and authentication services may be made available to the international public at large, based on trusted sources acting as authentication service providers.

These providers are not infallible. Moreover, when they are national projects they are susceptible to coercion by the nation that funds them, so as a source of definitive believability they must be subject to scrutiny. Several authentication providers working in concert may give a better indication of believability for a media item than any single source can. Providers can publish their analyses of media items, and a clearinghouse system (a web service, possibly blockchain-based) may collect the believability ratings for a particular item from the multiple evaluations offered by authentication services. In this structure we would have a tool for assessing the status of a media item based on the best guesses of (biased) institutional providers.
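To illustrate the clearinghouse idea, here is a minimal sketch of how multiple providers' ratings for one media item might be combined. The provider names and the 0-to-1 believability scale are assumptions for illustration; the source does not specify a scoring scheme. Taking the median rather than the mean is one way to limit the influence of a single biased or coerced provider:

```python
from statistics import median

def aggregate_believability(ratings: dict[str, float]) -> float:
    """Combine per-provider believability scores for one media item.

    Scores are assumed to range from 0.0 (certainly fake) to 1.0
    (certainly authentic). The median resists a single outlier
    provider pulling the consensus toward an extreme.
    """
    if not ratings:
        raise ValueError("no provider ratings supplied")
    return median(ratings.values())

# Hypothetical ratings from three authentication providers.
ratings = {
    "provider_a": 0.92,
    "provider_b": 0.88,
    "provider_c": 0.15,  # possible outlier: a biased or coerced source
}
print(aggregate_believability(ratings))  # -> 0.88
```

A real clearinghouse would need far more: provider reputation weighting, dispute handling, and a tamper-evident record of who rated what and when, which is where a blockchain-backed log could play a role.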