Google against the junk of the web: the anti-hoax algorithm is coming

Researchers at Big G are building a ranking algorithm that considers not only a page's "success" in terms of popularity, but also the reliability of its content. Search results today are produced by rigorous algorithms, the outcome of years of development and refinement. Reliable as they are, they cannot guarantee that the content climbing to the first position is also quality content. Quality and popularity are two yardsticks that do not always coincide, and they are values that are anything but simple to determine through mathematics or statistics.

Cleaning up our searches from untrustworthy content is what a team of researchers in Mountain View is thinking about: Google has apparently developed an algorithm that can rank pages according to the truthfulness of what they report. To date, the successful placement of a piece of content depends largely on the number of times it is linked: the greater the number of inbound links, the higher its chances of ranking well.
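As a rough illustration (not Google's actual ranking system), link-based popularity can be sketched as counting inbound links across a link graph; the sites and links below are invented for the example.

```python
from collections import Counter

# Toy link graph: each page lists the pages it links to (invented data).
links = {
    "news-site.example": ["hoax-blog.example", "encyclopedia.example"],
    "forum.example": ["hoax-blog.example"],
    "blog.example": ["hoax-blog.example", "news-site.example"],
    "encyclopedia.example": ["news-site.example"],
}

# Popularity score = number of inbound links, the signal described above.
inbound = Counter(target for targets in links.values() for target in targets)

for page, score in inbound.most_common():
    print(f"{page}: {score} inbound links")
```

Note how the hoax blog comes out on top simply because it is linked most often, regardless of whether what it publishes is true.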

It is a simple and functional technique for evaluating the popularity of content, but it cannot in any way measure the accuracy of the information that content reports. We have recently seen the proliferation of online publications that have carved out space on the web on the strength of hoaxes, releasing blatantly false information and conspiracy theories in order to go viral, get a few laughs from users and, above all, generate a few easy clicks for their advertising platforms. Although these sources have flourished above all on social networks, part of their success also comes from search engines.

These sites may have a hard time with the new Knowledge-Based Trust score, still under research and development in Big G's laboratories, which demonstrates a completely new method for ranking pages. To rank pages in search results, the new system evaluates the amount of incorrect information present in a source: "A source that has few false facts is considered to be trustworthy"; otherwise it is relegated to the bottom of the rankings.
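A minimal sketch of the idea (not the actual Knowledge-Based Trust implementation, which relies on a probabilistic model): score each source by the share of its extracted facts that agree with a reference set of facts assumed to be true. The sources, facts and reference set below are invented.

```python
# Reference facts assumed true, standing in for a knowledge base (invented).
reference_facts = {
    ("obama", "born_in", "honolulu"),
    ("earth", "shape", "oblate spheroid"),
    ("water", "boils_at_sea_level", "100C"),
}

# Facts extracted from each source (invented examples).
source_facts = {
    "encyclopedia.example": [
        ("obama", "born_in", "honolulu"),
        ("water", "boils_at_sea_level", "100C"),
    ],
    "hoax-blog.example": [
        ("obama", "born_in", "kenya"),
        ("earth", "shape", "flat"),
        ("water", "boils_at_sea_level", "100C"),
    ],
}

def trust_score(facts, reference):
    """Share of a source's facts that appear in the reference set."""
    if not facts:
        return 0.0
    correct = sum(1 for fact in facts if fact in reference)
    return correct / len(facts)

for source, facts in source_facts.items():
    print(f"{source}: trust score {trust_score(facts, reference_facts):.2f}")
```

In this toy version the source with few false facts scores high and the hoax blog scores low, which is the ranking signal the researchers describe.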

The Knowledge-Based Trust score is not yet in use, nor has Google forecast its use in the near future or announced any news to that effect. Verification of the truthfulness of information relies on Knowledge Vault, a sort of database in which Google has so far filed 1.6 billion facts and events gathered over the years. To obtain this information it draws both on reliable sources such as Freebase and Wikipedia and on less reliable ones, filing the facts away through machine learning algorithms.
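To give a flavour of the triple-store idea (a simplified sketch, not the real Knowledge Vault pipeline), extracted facts can be kept as subject-predicate-object triples, with a confidence that grows when several independent sources assert the same triple; all names and numbers below are invented.

```python
from collections import defaultdict

# Extracted (subject, predicate, object) triples with the source they came from (invented).
extractions = [
    (("rome", "capital_of", "italy"), "wikipedia.example"),
    (("rome", "capital_of", "italy"), "news-site.example"),
    (("rome", "capital_of", "france"), "hoax-blog.example"),
]

# Group supporting sources by triple; here confidence is simply the share of
# observed sources that assert the triple.
support = defaultdict(set)
for triple, source in extractions:
    support[triple].add(source)

all_sources = {source for _, source in extractions}
for triple, sources in support.items():
    confidence = len(sources) / len(all_sources)
    print(triple, f"confidence={confidence:.2f}", f"sources={sorted(sources)}")
```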

The research aims to demonstrate that ranking a site or a piece of content on the basis of its intrinsic quality is not impossible. According to Google it can be done by combining two elements: the fact itself and a point of reference against which to assess its reliability. Algorithms of this kind are already used in other web products, for example to assess the trustworthiness of incoming e-mail, but their implementation in a search engine is certainly far more interesting. And it is very interesting indeed, especially when we are talking, as in this case, about Google Search, by far the most widely used search engine in the world.
