A French court has just decided a case that will likely have a great deal of effect on online search engines if the decision is upheld on appeal. A French man had been accused of “crimes relating to the corruption of a minor,” ultimately resulting in a suspended sentence. He found that Google had picked up the news items about his case, putting them at the top of search results on his name:
Given extensive press coverage of the alleged crime at the time, querying the man’s name on the popular search engine returns web pages from news publications that suggested he was a “rapist”, among other non-favorable descriptions. The man argues that the statements in the online articles still available today adversely characterize him, which puts him in a disadvantageous social position when meeting new people and applying for jobs, among other situations and opportunities.
The man previously contacted Google directly to remove the defamatory articles from its search index, but the company did not do so, arguing that its proprietary algorithms simply return web pages in its index related to the keywords searched; that is, there is no direct human manipulation of top search results.
The result from the court was this:
The French court sided with the plaintiff, agreeing that those representations were defamatory, and ruled that Google could have mitigated costs to the plaintiff by removing the pages. The ruling ordered Google to pay €100,000, and to reimburse €5,000 in litigation costs incurred by the plaintiff. The ruling also ordered the company to disassociate the man’s name from the defamatory characterizations in Google Suggest, which suggests popular phrases while a person enters search terms in the Google search box, prior to completing a search. Additionally, for every single day the defamatory information remains in the company’s search results, Google would be fined an additional €5,000.
This decision will be disastrous for search engines and other Internet services if it stands. Moreover, it’s just horribly wrong on its face. It makes no sense to hold indexing services responsible for the information they index, unless it can clearly be shown that they preferentially indexed certain material with the goal of creating a biased view.
Long before Internet search tools were widely available, research facilities helped people find news items and other public information that we might rather they didn’t point to, including false information and stories that have since been debunked. We’ve always considered it the responsibility of the researcher to winnow the data.
The difference now, of course, is that the “researchers” are friends, neighbours, potential romantic partners, and prospective employers... and the information is much more readily available than it ever was. It’s tempting to try to make the search engines let go of obsolete information and only find the current stuff.
The problems with that idea, though, are several. It’s essentially impossible to sort out in any automated way what’s appropriate and what’s not. Even if search engines prefer “legitimate” news outlets to other sources of information, and prefer newer articles to older ones, the amount of cross-linking, re-summarizing, and background information will still show searchers plenty of nasty stuff. And who decides what the legitimate news outlets are? The search engines shouldn’t be making those filtering decisions for us.
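To make that concrete, here is a minimal sketch, in Python, of the sort of purely automated filter imagined above: rank pages from a whitelist of “legitimate” outlets first, and newer pages ahead of older ones. Everything in it (the whitelist, the Page record, the ranking rule) is hypothetical, invented for illustration only; it is not how Google or any real search engine works.

    # A toy re-ranking filter: whitelisted outlets first, then newest first.
    from dataclasses import dataclass
    from datetime import date

    # Hypothetical whitelist; deciding what belongs on it is exactly the
    # judgment call the search engines shouldn't be making for us.
    LEGITIMATE_OUTLETS = {"lemonde.fr", "reuters.com"}

    @dataclass
    class Page:
        domain: str
        published: date
        summary: str

    def rank_key(page: Page) -> tuple:
        # Non-whitelisted domains sort after whitelisted ones (False < True);
        # within each group, newer publication dates sort first.
        return (page.domain not in LEGITIMATE_OUTLETS, -page.published.toordinal())

    def rerank(results: list[Page]) -> list[Page]:
        return sorted(results, key=rank_key)

    results = [
        Page("someblog.example", date(2011, 9, 1), "recent post rehashing the old allegations"),
        Page("lemonde.fr", date(2009, 6, 15), "original report on the accusation"),
    ]
    print([p.domain for p in rerank(results)])
    # ['lemonde.fr', 'someblog.example'] -- the old story surfaces either way

Note that the filter only reorders what is already indexed: the original report on a whitelisted outlet still comes out on top, and the fresh rehash sits right behind it, which is exactly the point.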
Any mechanism that isn’t entirely automated doesn’t scale. With the untold millions upon millions of web pages that Google and other search engines have to index every day, there would be no way to respond to individual requests — or demands backed by court mandates — to unlink or otherwise remove specific information.
If this should stand, I can see that Google might have to cease operations in France. If it should spread, it might easily deprive all of us of easy searching on the Internet. That would be a far greater disaster than having a guy in Paris have to explain away unflattering news stories about a false or exaggerated accusation.
Clearing one’s name has always been a difficult challenge, and it’s only been made harder — perhaps, ultimately, impossible — on the Internet. I have a great deal of sympathy for anyone who finds himself relentlessly pursued by his past, especially when that past contains errors that weren’t his.
But this can’t be an answer to that. It just comes with too much collateral damage.