An experiment in which people eat soup from a bottomless bowl? Classic! Or mythological: American Sisyphus. It really happened. It was done by Brian Wansink, a professor of marketing and nutritional science in the Department of Applied Economics and Management at Cornell University, and author of the superb new book Mindless Eating: Why We Eat More Than We Think (which the CBC has called “the Freakonomics of food”). The goal of the bottomless-soup-bowl experiment was to learn about what causes people to stop eating. One group got a normal bowl of tomato soup; the other group got a bowl endlessly and invisibly refilled. The group with the bottomless bowl ate two-thirds more than the group with the normal bowl. The conclusion is that the amount of food in front of us has a big effect on how much we eat.
There are many academic departments (called statistics departments) that study the question of what to do with your data after you collect it. There is not even one department anywhere that studies the question of what data to collect — which is much more important, as every scientist knows. To do my little bit to remedy this curious and unfortunate imbalance, I have decided to ask the best scientists I know about research design. My interview with Brian Wansink (below) is the first in what I hope will be a series.
SR: Tell me something you’ve learned about research design.
BW: When I was a graduate student [at the Stanford Business School], I would jog on the school track. One day on the track I met a professor who had recently gotten tenure. He had only published three articles (maybe he had 700 in the pipeline), so his getting tenure surprised me. I asked him: What’s the secret? What was so great about those three papers? His answer was two words: “Cool data.” Ever since then I’ve tried to collect cool data. Not attitude surveys, which are really common in my area. Cool data is not always the easiest data to collect but it is data that gets buzz, that people talk about.
SR: What makes data cool?
BW: It’s data where people do something. Like take more M&Ms on the way out of a study. All the stuff in the press about psychology — none of it deals with attitude change. Automaticity is seldom a rating; that’s why it caught on. It’s how long they looked at something or how fast they walked. That’s why I’ve been biased toward field studies. You lose control sometimes in field studies compared to lab studies, but the loss is worth it.
The popcorn study is an example. We found that people ate more popcorn when we gave them bigger buckets. I’d originally done all that in a lab. So that’s great, that’s enough to get it published. But it’s not enough to make people go “hey, that’s cool.” I found a movie theatre that would let me do it. It became expensive because we needed to buy a lot of buckets of popcorn. Once you find out it happens in real theatres, people go “cool.” You can’t publish it in a great journal because you can’t get 300 covariates; we published it in a slightly less prestigious journal, but it had much greater impact than a little lab study would have had.
One thing we found in that study was that there was an effect of bucket size regardless of how people rated the popcorn. Even people who hated the taste ate more with the bigger bucket. We asked people what they thought of the popcorn. We took the half of the people who hated the popcorn the most — even they showed the effect. But there was range restriction — the average rating in that group was only 5.0 on a 1-9 scale — not in the “sucky” category. Then we used old popcorn. The results were really dramatic. It worked with 5-day-old popcorn. It worked with 14-day-old popcorn — that way I could say “sitting out for 2 weeks.” That study caught a lot of attention. The media found it interesting. I didn’t publish the 5-day-old popcorn study.
I’m a big believer in cool data. The design goal is: How far can we possibly push it so that it makes it a vivid point? Most academics push it just far enough to get it published. I try to push it beyond that to make it much more vivid. That’s what [Stanley] Milgram did with his experiments. First, he showed obedience to authority in the lab. Then he stripped away a whole lot of things to show how extreme it was. He took away lab coats, the college campus. That’s what made it so powerful.
SR: A good friend of mine, Saul Sternberg, went to graduate school with Milgram. They had a clinical psychology class together. The professor was constantly criticizing rat experiments. This was the 1950s. He said that rats were robot-like, not a good model for humans. One day Milgram and my friend brought a shoebox to class. In the box was a rat. They put the box on the seminar table and opened it, leaving the rat on the table. The rat sniffed around very cautiously. Cautious and curious, much more like a person than like a robot. It was a brilliant demonstration. My friend thinks of Milgram’s obedience experiments as more like demonstrations than experiments. But you are right, they are experiments consciously altered to be like demonstrations. Those experiments were incredibly influential, of course, which supports your point.
BW: When we first did the soup bowl studies, we refilled the soup bowls so that we gave people larger and smaller portions than they thought they had. We heated the soup up for them but gave them 25% more to see if they would eat more than they thought. You could put that in an okay journal. The bottomless soup bowl would be more cool. Cool data is harder to get published and it’s much more of a hassle to collect the data, but it creates incredible loyalty among grad students, because they think they are doing something more exciting. It’s more of a military operation than if they are just collecting some little pencil-and-paper thing in the lab. It makes research more of an adventure.
Another thing: field experiments are difficult. There’s a general tendency in research to be really underpowered with things [that is, to not have enough subjects]. Let’s say you’re doing the popcorn bucket study. Is the effect [of bucket size] going to come out? Rather than having too many cells and not getting significance, it’s a good idea to have fewer cells — replace a manipulated variable with one or two measured variables. For example, instead of doing a two-by-two between-subjects design we might have a design in which one factor is measured rather than manipulated. If the measured factor doesn’t come out you haven’t lost anything; you still have all the power. With the popcorn study we knew the study would work with the big bucket [that is, we knew there would be an effect of bucket size] but we didn’t know if there would be an effect of bucket size if we gave them [both good corn and] bad corn [thereby doing a two-by-two study] and only 200 people showed up [leaving only 50 people per cell]. So when we did the field study for the first time, we gave them all popcorn 5 days old. We measured their taste preference for popcorn then used it as a crossing variable. We used scores on that measure to divide the subjects into two groups.
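The measured-factor trick can be sketched in a few lines of Python. This is a toy simulation, not Wansink’s data: the effect sizes, sample sizes, and rating distribution are all invented for illustration. Every subject gets one manipulated factor (bucket size) plus a measured taste rating; a median split on the rating then supplies the second, crossed factor without having spent any subjects on extra manipulated cells.

```python
import random
import statistics

random.seed(0)

def simulate_subject(big_bucket):
    """One hypothetical moviegoer: a measured taste rating (1-9 scale)
    and grams of popcorn eaten (invented baseline and effect size)."""
    taste_rating = random.uniform(1, 9)  # measured, not manipulated
    eaten = 60 + (15 if big_bucket else 0) + random.gauss(0, 10)
    return taste_rating, eaten

# 200 subjects, only ONE manipulated factor: big vs. small bucket.
subjects = [(simulate_subject(big), big) for big in [True, False] * 100]

# Median split on the *measured* taste rating acts as the crossing
# variable, giving a 2x2 table after the fact.
median = statistics.median(t for (t, _), _ in subjects)

for liked in (False, True):
    for big in (False, True):
        cell = [e for (t, e), b in subjects
                if (t >= median) == liked and b == big]
        print(f"liked={liked} big_bucket={big} "
              f"n={len(cell)} mean_eaten={statistics.mean(cell):.1f}")
```

The payoff is exactly what the interview describes: if the taste split shows nothing, the full 100-per-condition power on bucket size is still intact, because taste was never a manipulated cell in the first place.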
SR: Let’s stop here. This is great stuff.