If you read politics on Twitter, you have recently seen claims that Republicans in the U.S. Congress will impeach President Obama if they get enough seats in November, so it is vital that Democrats win elections.
Or you can read that living near a farm will give your child autism, and that DDT exposure 50 years ago may have made women fat today. Studies said so.
The connected world has numerous benefits, but it also has pitfalls: Democrats can manipulate their voters by claiming an impeachment will happen if they don't win in 2014, Republicans can claim climate change is another conspiracy, and tiny studies can be used to claim dietary changes will prevent Alzheimer's disease.
Going online is how people find things out. You no longer have to win with data; you just have to win in Google search.
One of the more famous examples of a misinformation victory by repetition is actually not new - the 'it takes a gallon of gas to make a pound of beef' metric used by environmentalists and vegetarians to say our diets are ruining the planet. It's been used in too many models to count because it got into a 1990 book edited by Jeremy Rifkin, but it was derived solely from an environmental claim by the Worldwatch Institute in The Los Angeles Times. There was no evidentiary basis for it, but after it took hold, people began trying to find ways to make it so. Virtual water is another example. John Anthony Allan of King's College London was getting nowhere with the concept of 'embedded water' - it "did not capture the attention of the water managing community," he said - so he came up with virtual water, and environmentalists liked it so much they built models to make it true. Claims like 'it takes 140 liters of water to make a cup of coffee' are now used as springboards for numerous models, without ever noting that the original figure was completely incorrect. (I noted other examples in Science Left Behind.)
We can't stop misinformation, but we can at least model it. As with advertising, modeling things - even bad things - can teach us how to use that network diffusion insight for good things, like Science 2.0.
K.P. Krishna Kumar and G. Geethakumari of the Department of Computer Science and Information Systems at BITS-Pilani in Hyderabad have analyzed why some people and organizations are vulnerable to these "semantic attacks" - why right-wing people readily believe a Photoshopped picture of John Kerry at a Jane Fonda protest, or why organic food shoppers so easily latch onto anti-vaccine and anti-GMO claims.
It's a good test bed because in massive social networks misinformation can spread rapidly - and the nodes are all trackable. People often hear me talk about nodes (in the Science 2.0 structure, this article is a node, which puzzles scientists who think they write articles) and those are easy to distinguish; the challenge is in the 'secret sauce' that sites like Twitter and Facebook use to assign weight to relationships.
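The paper itself offers a taxonomy rather than code, but the underlying idea - a rumor hops from node to node along weighted relationships - can be sketched with a toy simulation. Below is a minimal independent cascade model in Python; the graph, the edge weights, and the seed node are all invented for illustration, not taken from the paper.

```python
import random

# Toy social network: each node maps to {neighbor: probability that
# the neighbor reposts something this node shares}. All values are
# invented for illustration.
graph = {
    "A": {"B": 0.9, "C": 0.4},
    "B": {"D": 0.7},
    "C": {"D": 0.2, "E": 0.5},
    "D": {"E": 0.3},
    "E": {},
}

def independent_cascade(graph, seeds):
    """Simulate one run of the independent cascade model.

    Each newly 'infected' node gets a single chance to pass the rumor
    to each uninfected neighbor, succeeding with probability equal to
    the edge weight. Returns the set of nodes the rumor reached.
    """
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor, prob in graph[node].items():
            if neighbor not in infected and random.random() < prob:
                infected.add(neighbor)
                frontier.append(neighbor)
    return infected

# Average reach of a rumor seeded at node "A" over many runs.
runs = 10000
reach = sum(len(independent_cascade(graph, {"A"})) for _ in range(runs)) / runs
print(f"average nodes reached: {reach:.2f}")
```

Swap in a real follower graph and estimated repost probabilities, and the same loop shows why a rumor seeded at a well-connected node explodes while the same rumor seeded elsewhere fizzles out.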
Connectivity is not new, and deliberately seeding misinformation is bad, but we know it happens. Figuring out why it sometimes spreads effectively and sometimes fizzles out will end up doing a lot of good for the 21st century Science 2.0 framework, which will require lots of trusted collaborators sharing lots of information without putting their work at risk.
The information can be gleaned but, as the authors note, "defending against the impact of a semantic attack on human emotions and behavior is an entirely different matter."
Citation: K.P. Krishna Kumar and G. Geethakumari, 'A taxonomy for modelling and analysis of diffusion of (mis)information in social networks', International Journal of Communication Networks and Distributed Systems, Vol. 13, No. 2, 2014, pp. 119-143. DOI: 10.1504/IJCNDS.2014.064040
Image: Freeman Lab