Once upon a time, Google was unequivocally the best and the most popular search engine in the world.
Google's success came from a fundamentally different algorithm and search technique: instead of ranking results by the number of times the searched terms appeared on a page, as earlier engines did, it ranked pages by how they were linked to from other sites, and this approach proved better than the previous ones.
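The link-based idea described above can be sketched roughly as follows. This is a minimal, illustrative power-iteration version in the spirit of PageRank, not Google's actual algorithm; the example graph, damping factor, and iteration count are all assumptions made for the sake of the demonstration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank pages by incoming links.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> score; scores sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest of its score along its outgoing links.
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A page with no outgoing links spreads its score evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A tiny hypothetical web: "c" is linked to by three pages,
# so it ends up with the highest score.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

The key property, and the reason the approach beat term-counting, is that a page's score depends on who links to it, not on what it says about itself, which made it much harder to game by simply stuffing a page with the searched terms.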
The honeymoon period lasted for a number of years. Google became the search engine of choice for the majority of users, and its ability to produce relevant search results mostly untainted by spam has kept it at the top for many, many years.
But lately – in the last two years or so – people have been noticing that when searching for reviews and quotes for consumer items, or for academic information, it is practically impossible to find relevant search results.
The problem is that spammers, scrapers (people who “scrape” content from sites and paste it onto their own solely to improve their page ranking) and content farms (sites with huge amounts of often low-quality textual content) have taken advantage of the fact that Google benefits financially from the AdSense ads these sites display.
It used to be that these sites – worthless to everyone but their owners and, well, Google – rarely managed to break into the first page of search results, but lately many have noticed that sites from which the original content was scraped and copied now often rank below those that have misused them without linking back. The only exception seems to be Wikipedia, and it is speculated that this is because Google’s algorithms are written in such a way as to always give it precedence.
Google has not yet reached the point where it is forced to react and change its algorithms to purge these sites from its search results and placate its increasingly dissatisfied users, but, according to Charles Arthur, that day can’t be far away.