Surveys are interesting, and they can sometimes indicate what a certain number of people in a group might be thinking - but so can betting services. In the 2012 election, the Intrade betting service called as many states correctly in the presidential race as Democratic statistical wunderkind Nate Silver did - and Intrade is mostly Europeans who know nothing at all about American politics.
The 2012 election was a foregone conclusion by July, but in other areas, notably the social sciences, surveys are often used for ill - to promote a pop hypothesis or a cultural agenda. Even when they aren't, they can be so inherently biased that the most honest scholars draw incorrect conclusions. Survey results may be telling us only about language and nothing at all about attitudes.
This has always been well known. On surveys about science, for example, questions about evolution yield confusing results for Americans, leading biologists to worry that Americans don't accept science - yet American adults actually lead the world in science literacy. Understanding of evolution has been so badly hijacked by cultural militants on the religious and atheistic fringes that responsible surveys no longer bother to ask about it; questions about evolution confuse measures of science literacy rather than revealing it.
A paper in PLoS One affirms this. The authors found that as people navigated from one survey question to the next, they naturally selected answer options whose language resembled that of their earlier answers, even when the similarities were subtle. In this instance, the surveys covered organizational behavior topics such as leadership, motivation and job satisfaction.
"The findings suggest many survey participants likely fit the first question into their language understanding and, when they get to the next question, move in their language network to figure out how close it is to the previous question in order to respond," said co-author Kai Larsen, information scientist and associate professor of management and entrepreneurship at
University of Colorado at Boulder.
The findings also raise questions about the way scholars in the social sciences design and analyze surveys, inadvertently focusing attention on the shared language understanding of respondents, said Larsen. "The methods used for surveys are making it difficult to get at what's unique about an organization rather than what's embedded in general language."
Often when psychologists or sociologists conduct surveys, they look at more than just average scores; they also try to detect - and measure - patterns. They quantify, for example, how strongly a popular answer to one question predicts a popular answer to another, in order to find common relationships.
The measurements help form statistics like, "people who rate their manager's leadership style highly are more likely to stay longer at their jobs."
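A minimal sketch of that kind of inter-item analysis, with invented ratings and column names (pandas here stands in for whatever statistics package a researcher actually uses):

```python
import pandas as pd

# Invented data: five respondents rating two survey items on a 1-5 scale.
responses = pd.DataFrame({
    "leadership_rating": [5, 4, 2, 5, 3],
    "intends_to_stay":   [5, 5, 1, 4, 3],
})

# Pearson correlation between the two items: a high value is the kind of
# pattern behind statistics like "people who rate leadership highly stay longer."
r = responses["leadership_rating"].corr(responses["intends_to_stay"])
print(f"inter-item correlation: {r:.2f}")
```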
In the current study, the researchers measured the degree of similarity in survey language instead of human response patterns. When they compared those measurements to measurements of human response patterns, the two sets of numbers were nearly identical, indicating that measuring language similarity and measuring people's selection of survey answer options amounted to practically the same thing.
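The comparison can be sketched roughly as follows: take one number per item pair from the language alone and one from human answers, then check how closely the two lists track each other. The numbers below are invented placeholders; the paper's actual measurements are far more elaborate.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented scores for six hypothetical pairs of survey items.
semantic_similarity  = np.array([0.91, 0.20, 0.75, 0.33, 0.88, 0.15])  # from text alone
response_correlation = np.array([0.85, 0.25, 0.70, 0.30, 0.90, 0.10])  # from answers

# If the two vectors track each other this closely, language similarity
# alone largely reproduces the human response pattern.
r, p = pearsonr(semantic_similarity, response_correlation)
print(f"agreement between language and responses: r={r:.2f}, p={p:.3f}")
```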
For the study, the researchers applied two algorithms - computer-driven calculations taking radically different approaches - to measure sentence similarity. The first drew on about 100,000 newspaper articles to evaluate how similarly words were used within them. The second relied on an online database, created by linguists, that maps the relationships between tens of thousands of words.
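The article doesn't spell out either algorithm, but the two families are recognizable: corpus-based distributional similarity (latent semantic analysis is the classic example) and a curated lexical database (WordNet is the best-known). A rough sketch of each flavor, with invented sentences and no claim to match the paper's exact methods:

```python
import nltk
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nltk.download("wordnet", quiet=True)  # lexical database built by linguists

# --- Corpus-based flavor: similarity from how words are used in text. ---
# A real system would train on ~100,000 articles; three sentences stand in here.
corpus = [
    "My manager communicates a clear vision for the team.",
    "Leadership here sets a clear direction for our work.",
    "The cafeteria offers a wide range of lunch options.",
]
vectors = TfidfVectorizer().fit_transform(corpus)
print(cosine_similarity(vectors[0], vectors[1])[0, 0])  # higher: shared language
print(cosine_similarity(vectors[0], vectors[2])[0, 0])  # lower: unrelated language

# --- Database flavor: similarity from hand-built word relationships. ---
# Path similarity walks WordNet's concept hierarchy; closer concepts score higher.
print(wn.synset("leader.n.01").path_similarity(wn.synset("manager.n.01")))
print(wn.synset("leader.n.01").path_similarity(wn.synset("satisfaction.n.01")))
```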
The surveys used in the study were already published and taken by anonymous respondents in a variety of fields from finance and government to engineering and the military. The respondents also included business students.
One type of survey the study did not find to be language-based was personality testing.
Other authors of the paper included lead investigator Jan Ketil Arnulf, associate professor of leadership and organizational behavior at BI Norwegian Business School; Øyvind Lund Martinsen, professor of leadership and organizational behavior at BI; and Chih How Bong, senior lecturer in computer science and informational technology at the University of Malaysia at Sarawak.
The findings may point to ways of improving research methods.
"With surveys, we may be able to help researchers focus on respondents who aren't answering in a language-based way," said Larsen. "Because they are revealing actual and unexpected attitudes, they may be the ones you want to pay attention to."
The study also highlights the growing prowess of data science.
"Semantic algorithms are becoming new tools for the social sciences and are broadening perspectives on survey responses that other longtime theories cannot explain," said Arnulf. "This represents a study of how the relatively young data sciences can address problems not approachable with traditional methods."