Can data be a tactile experience?
The CEEDs project thinks it can be, and they want to use integrated technologies to support human experience when trying to make sense of very large datasets.
Jonathan Freeman, Professor of Psychology at Goldsmiths, University of London, gave an interview to youris.com outlining how they believe this project can help present data better, based on the feedback participants provide in response to data in their environment, gathered by monitoring explicit responses such as eye movements as well as inner reactions such as heart rate.
What inspired you to get involved in data representation?
I felt there was a disconnect between our online and offline experience. For example, when shopping online or searching for a product—maybe a pair of jeans—the webpage on which you land receives information about your previous searches from your cookies. It can then make inferences about you and target content appropriately. In the physical environment of a shop, there just isn’t that level of insight and information provided to the environment. One big driver is to ask whether there are ways we can serve content that better suits a user’s needs. And it does not have to be in a commercial environment.
What solution do you suggest?
We realized that humans have to deal with all this data. The problem is that our ability to analyze and understand it is a massive bottleneck. At the same time, the brain does an awful lot of processing that is not being used, that we are not consciously aware of, and that does not show up in our behavior. Our idea was therefore to marry the two and apply human subconscious processing to the analysis of big data.
Could you provide a specific example of how this could be of benefit?
Take a scientist analyzing, say, a huge neuroscience dataset in our project’s eXperience Induction Machine. We apply measurements that tell us whether they are getting fatigued or overloaded with information. If that’s the case, the system does one of two things. It either changes the visualizations to simplify them so as to reduce the cognitive load [a measure of mental workload], thus keeping the user less stressed and more able to focus. Or it implicitly guides the person to areas of the data representation that are not as heavy in information. We can use subliminal cues to do this, such as arrows that flash so quickly that people are not aware of them.
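What is described above is essentially a closed feedback loop: estimate the user’s cognitive load from implicit signals, then either simplify the display or nudge attention toward sparser regions. Below is a minimal Python sketch of that loop; the function names, signal weights and threshold are assumptions made for illustration, not part of the CEEDs system:

```python
"""A minimal sketch of the monitor-and-adapt loop described above.
Names, weights and thresholds are illustrative assumptions, not CEEDs code."""

from dataclasses import dataclass

LOAD_THRESHOLD = 0.7  # normalized score above which the system intervenes


@dataclass
class SensorReadings:
    pupil_dilation: float    # 0..1, wider pupils taken as higher work rate
    heart_rate: float        # beats per minute
    skin_conductance: float  # 0..1, higher read as more stress


def estimate_cognitive_load(r: SensorReadings) -> float:
    """Crude weighted combination of implicit signals (purely illustrative)."""
    hr_norm = min(max((r.heart_rate - 60.0) / 60.0, 0.0), 1.0)
    return 0.5 * r.pupil_dilation + 0.3 * hr_norm + 0.2 * r.skin_conductance


def choose_adaptation(r: SensorReadings, can_simplify: bool) -> str:
    """One monitoring cycle: simplify the view, nudge attention, or do nothing."""
    if estimate_cognitive_load(r) <= LOAD_THRESHOLD:
        return "no change"
    if can_simplify:
        return "simplify visualization"  # option 1: reduce cognitive load
    return "flash subliminal cue toward a sparser region"  # option 2: implicit guidance


if __name__ == "__main__":
    overloaded = SensorReadings(pupil_dilation=0.9, heart_rate=110.0, skin_conductance=0.8)
    print(choose_adaptation(overloaded, can_simplify=True))  # -> simplify visualization
```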
Part of your approach involves watching the watchers use data. So what kind of technology do you rely on?
We devised an immersive setup where the user is subjected to constant monitoring. We use a Kinect [motion-sensing device] to track people’s posture and body responses. A glove tracks people’s hand movements in more detail and measures galvanic skin response, which is a measure of stress. We have an eye tracker that tells us whereabouts in the data the user is focusing. It also looks at their pupils to see how dilated they are, as a measure of their cognitive work rate. In parallel, a camera system analyzes facial expressions and a voice analysis system measures the emotional characteristics of their voice.
Finally, in this mixed-reality space, called the CEEDs eXperience Induction Machine (XIM), users can wear a vest we developed under the project that measures their heart rate.
Is the visual part of the project important?
Visualization technologies in the eXperience Induction Machine are important, as people are in an immersive 3D environment. But the representations we use for the data are not just visual. There are also audio representations: spatialization of audio and sonification of data so that users can hear the data. For example, the activity flowing in a part of the brain can be represented so that greater activity sounds louder or higher pitched to neuroscientists studying these flows.
There are also tactile actuators in the glove that allow users to grab data and feel feedback in their fingertips.
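As a rough illustration of the sonification idea described above, here is a toy mapping from a normalized activity level to a tone’s pitch and loudness. The frequency range and scaling are assumptions made for the example, not the project’s audio pipeline:

```python
"""Toy illustration of sonification: map a data value to pitch and loudness.
The ranges and scaling here are illustrative assumptions, not CEEDs code."""

def sonify(activity: float, min_hz: float = 220.0, max_hz: float = 880.0) -> tuple[float, float]:
    """Map a normalized activity level (0..1) to (frequency in Hz, gain in 0..1)."""
    activity = min(max(activity, 0.0), 1.0)
    frequency = min_hz + activity * (max_hz - min_hz)  # more activity -> higher pitch
    gain = 0.2 + 0.8 * activity                        # more activity -> louder
    return frequency, gain

# Example: three brain-region activity levels rendered as tone parameters.
for region, level in {"visual": 0.2, "motor": 0.6, "prefrontal": 0.95}.items():
    hz, gain = sonify(level)
    print(f"{region}: {hz:.0f} Hz at gain {gain:.2f}")
```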
What is so novel about this approach?
The Kinect is commercially available. But never before has anyone put all these components in one place to trial them together. And never before has this advanced set of sensors been assembled with the goal of optimizing human understanding of big data. This is novel, cutting edge and ambitious. It is not simple product development. This is about pushing the boundaries and taking risks.
Who will this technology be useful to?
Initially, those who deal with massive datasets, such as economists, neuroscientists and data analysts, will benefit. But the general public will also benefit. We are all bombarded with information. There are going to be real benefits for people in having systems that respond to your implicit cues as a consumer or as a person. It does not have to be in a consumption context.
Could you provide an example of an application outside commercial contexts?
Imagine you are an archaeologist working in the field and you come across a piece of pottery. You look at it and say it comes from the 4th century and from such-and-such an object. It takes years of experience for an archaeologist to be able to do that. In our project, what we are doing is measuring how expert archaeologists look at objects and evaluate them.
Then, we feed this interpretation into a database to speed up potential matching of pottery pieces. It makes the machine better, speeding up the predictive search powers of the technology.
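One way to picture this is to weight each descriptive feature by how much attention experts give it when ranking candidate matches in the database. The sketch below is an assumption about the mechanism made for illustration, with invented data, and is not a description of the CEEDs implementation:

```python
"""Toy sketch: rank known pottery fragments against a new find,
weighting each descriptive feature by how much attention experts gave it.
All names, data and weights here are made up for illustration."""

# Relative attention weights derived (hypothetically) from expert gaze data.
ATTENTION_WEIGHTS = {"rim_profile": 0.5, "glaze": 0.3, "decoration": 0.2}

# A tiny "database" of known pieces: feature scores in 0..1.
KNOWN_PIECES = {
    "4th-century amphora": {"rim_profile": 0.9, "glaze": 0.2, "decoration": 0.4},
    "Roman oil lamp":      {"rim_profile": 0.3, "glaze": 0.7, "decoration": 0.8},
}


def weighted_similarity(find: dict[str, float], known: dict[str, float]) -> float:
    """Higher when the attention-weighted features agree closely."""
    return sum(
        w * (1.0 - abs(find[f] - known[f])) for f, w in ATTENTION_WEIGHTS.items()
    )


new_find = {"rim_profile": 0.85, "glaze": 0.25, "decoration": 0.5}
best = max(KNOWN_PIECES, key=lambda name: weighted_similarity(new_find, KNOWN_PIECES[name]))
print(best)  # expected: "4th-century amphora"
```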