Creating Technology for Social Change

Design & Experience: Media Lab Spring Meeting Part Two

(During this week’s Media Lab Spring Meeting, I’m liveblogging the talks together with Ethan Zuckerman. This is the second morning session from Tuesday, 24th of April)

Henry Holtzman’s Information Ecology group focuses on human interaction with the deluge of data we’re all facing, from an avalanche of email to vast quantities of photos and video. His lab’s strategy relies on knitting together ecologies of devices and services that help humans cope with these waves of data.

Dan Schultz’s project Truth Goggles offers one way to think about this research approach. It’s a project that sits in your web browser and alerts you when you encounter assertions that appear in fact-checking databases, for instance, an assertion about a political candidate’s position. By alerting you to the presence of fact-checking information, it invites you to take a deeper dive into that layer of information.
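As a rough illustration of the idea (not Truth Goggles’ actual code), a browser-side checker might scan page text for phrases that appear in a fact-checking database. The claims and URLs below are invented for the example:

```python
# A minimal sketch of the Truth Goggles idea: scan page text for phrases
# that appear in a fact-checking database and flag them for the reader.
# The claims dictionary and the substring-match strategy are hypothetical.

# Hypothetical fact-check entries: claim text -> fact-check URL
FACT_CHECKS = {
    "the stimulus created zero jobs": "https://example.org/factcheck/123",
    "crime is at a 50-year high": "https://example.org/factcheck/456",
}

def find_checked_claims(page_text: str) -> list[tuple[str, str]]:
    """Return (claim, fact-check URL) pairs that occur in the page."""
    text = page_text.lower()
    return [(claim, url) for claim, url in FACT_CHECKS.items() if claim in text]

if __name__ == "__main__":
    article = "Senator X repeated that crime is at a 50-year high."
    for claim, url in find_checked_claims(article):
        print(f"Flagged claim: {claim!r} -> see {url}")
```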

Another project, Droplet, stores small bits of data, like a bookmark. It communicates with existing devices like screens and tablets using light and capacitance: place the droplet onto a screen to store a bookmark for the page being displayed, then carry it to a different screen to bring the bookmark along. The goal is a “fluid exchange between the physical and virtual.” StackAR integrates a tablet with an Arduino controller and lets you design and simulate circuits in a visual programming environment before you build them. Mobile P2P takes advantage of smartphones and tablets, which have plenty of spare memory but face high costs for transfers over cellular data networks; the system opportunistically joins Wifi networks and uses them to share large pieces of content, like a movie trailer, as in the sketch below.
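Here is a toy sketch of that Mobile P2P scheduling idea: small transfers go out immediately, while large ones wait for a free Wi-Fi window. All class and method names are invented for illustration; the real system is considerably more involved:

```python
# A toy sketch of opportunistic transfer scheduling: defer large items
# to free Wi-Fi instead of costly cellular data. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Transfer:
    name: str
    size_mb: float

@dataclass
class OpportunisticQueue:
    pending: list[Transfer] = field(default_factory=list)
    cellular_threshold_mb: float = 1.0  # small items may use cellular

    def request(self, item: Transfer, on_wifi: bool) -> str:
        if on_wifi or item.size_mb <= self.cellular_threshold_mb:
            return f"sending {item.name} now"
        self.pending.append(item)  # hold until the next Wi-Fi window
        return f"queued {item.name} until Wi-Fi is available"

    def wifi_available(self) -> list[str]:
        """Flush everything that was waiting for a cheap network."""
        sent, self.pending = [t.name for t in self.pending], []
        return sent

q = OpportunisticQueue()
print(q.request(Transfer("movie-trailer.mp4", 80.0), on_wifi=False))
print(q.wifi_available())
```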

Pattie Maes notes that the devices we use to access digital information haven’t fundamentally changed since the invention of the personal computer. The Xerox Star had a mouse and a keyboard; even with tablets, the experience is still much the same. She asks us to consider how we can provide seamless access to digital information and services in people’s physical lives. One example of this is Pranav Mistry’s Sixth Sense (http://www.ted.com/talks/pattie_maes_demos_the_sixth_sense.html), which consists of a camera and a projector. When the camera recognises your gestures and the objects around you, the projector projects information and interfaces onto those objects; for example, Sixth Sense might augment a product with purchasing information. Another example is EyeRing, a ring with a camera, a radio, and a button. The initial idea was a device that a blind person could use to get information about their environment. EyeRing is voice activated: simply say “color” or “currency” and EyeRing will tell the user what it is looking at (a toy version of this voice dispatch appears below). This has applications for sighted people as well, since pointing is such a natural interaction; users could point at something and say “translate.”

Pattie wants to move technology towards man-machine symbiosis, augmenting people’s cognitive abilities (perception, learning, memory, decisions, control) with extensions of their natural behavior. Natan Linder’s LuminAR project is a great example: a light bulb which turns any surface into a projected, interactive computing surface. Simply plug it into your desk lamp, and it just works. Another example is Teletouch, an augmented reality tablet control system: point your tablet at anything in the environment, and the tablet will reconfigure itself to control that part of the environment. Another example is Sparsh. The Peripheral Display tracks where a person is looking and gives more detailed information at the focal point of their vision.
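To make the EyeRing interaction concrete, here is a minimal sketch of the kind of keyword-to-recognizer dispatch described above. The recognizer functions are stubs and every name is hypothetical; the real device does actual computer vision on the camera frame:

```python
# A hypothetical sketch of voice-command dispatch for an EyeRing-style
# device: the spoken keyword selects which recognizer runs on the image.
def recognize_color(image) -> str:
    return "red"          # stub: a real classifier would inspect pixels

def recognize_currency(image) -> str:
    return "a $20 bill"   # stub

COMMANDS = {"color": recognize_color, "currency": recognize_currency}

def handle_command(spoken_word: str, image) -> str:
    recognizer = COMMANDS.get(spoken_word)
    if recognizer is None:
        return f"unknown command: {spoken_word}"
    return f"I see {recognizer(image)}"

print(handle_command("currency", image=None))
```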

Chris Schmandt of the Speech and Mobility group notes that mobile devices and speech-driven interfaces are subjects the Media Lab has been researching for decades. His group invented the first unified messaging system: “We’ve been working in this area for a long time.”

One of the major interests of the Speech and Mobility group is enhancing real-world activities through digital means. Now that more of our lives are online, how can we make online viewing of an event more like being there? He asks us to consider why we go to a ballgame. We might go for the beer or the food, but mostly we want to join a large group of people for a powerful shared experience. The ROAR project aims to do the same for online experiences. Chris shows a demo of a football game, with a side panel charting the “roar” of the online crowd as the game plays. Clicking on any part of the roar brings up the tweets and comments that accompany it (a toy version of this aggregation is sketched below).
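As a rough illustration of how a ROAR-style panel could be computed (this is an assumption, not the project’s actual design), comments can be bucketed by game time so that spikes mark big moments, and clicking a spike retrieves the comments behind it:

```python
# A rough sketch of a "crowd volume" curve: bucket fan comments by game
# time and count them. The data and bucketing choices are illustrative.
from collections import Counter

# (seconds_into_game, comment) pairs, e.g. harvested from Twitter
comments = [(12, "kickoff!"), (14, "here we go"), (95, "TOUCHDOWN"),
            (96, "what a catch"), (97, "unbelievable!!")]

def roar_curve(comments, bucket_s: int = 10) -> Counter:
    """Count comments per time bucket; spikes mark big moments."""
    return Counter((t // bucket_s) * bucket_s for t, _ in comments)

def comments_at(comments, bucket_start: int, bucket_s: int = 10):
    """Clicking a spike retrieves the comments behind it."""
    return [c for t, c in comments if bucket_start <= t < bucket_start + bucket_s]

curve = roar_curve(comments)
peak = max(curve, key=curve.get)          # loudest moment of the game
print(peak, comments_at(comments, peak))  # 90 ['TOUCHDOWN', ...]
```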

Chris also talks about mobility, the idea of working away from the desktop. In 1999, his group developed the first automated voice directions for digital maps. More recently, his group has been working on LocoRadio, a project which plays soundtracks associated with restaurants as you drive past them (a toy version of the trigger logic appears below). “With sound, you can hear things you can’t see: you can hear the restaurant on the next block.” Chris is asked what soundtrack is most appropriate for salmonella poisoning; he suggests certain tracks by the Rolling Stones might work well.
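A toy version of LocoRadio’s trigger logic might look like this; the venues, coordinates, and volume falloff rule are made up for illustration:

```python
# A simplified sketch of proximity-triggered audio: play a venue's
# soundtrack when the listener comes within earshot, with volume
# falling off with distance. All data here is invented.
import math

VENUES = [
    {"name": "Taqueria", "pos": (0.0, 120.0), "radius_m": 150.0},
    {"name": "Jazz Bar", "pos": (300.0, -40.0), "radius_m": 150.0},
]

def audible_venues(listener_pos):
    """Yield (venue name, volume in [0, 1]) for venues within earshot."""
    for v in VENUES:
        d = math.dist(listener_pos, v["pos"])
        if d < v["radius_m"]:
            yield v["name"], round(1.0 - d / v["radius_m"], 2)

# Driving past: you can "hear" a venue before you can see it.
print(list(audible_venues((0.0, 0.0))))  # [('Taqueria', 0.2)]
```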

Catherine Havasi, who leads the Digital Intuition Group, points out that talking and communicating are creative acts. This is a problem for computers: when we communicate, we share unspoken assumptions about the world. We have spent years trying to give computers those same unspoken assumptions so they can understand things like social media, stories, or spoken dialogue between people. What might it take to understand the world? You need to know that buying groceries requires money, that groceries need to be cooked, and so on. One way to learn these things is to gather information from the Internet, to ask people to tell the computer things about the world, or to invite people to play games with a purpose. Getting the data is only the first step: computers also need to be taught what to trust online and how to understand a variety of languages. Catherine then shows us graphs of ConceptNet 5’s model of the world (a small example of querying ConceptNet appears below).
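For readers who want to poke at this data themselves, ConceptNet 5 has a public REST API. The snippet below uses the current api.conceptnet.io endpoint, which may differ from the service as it existed at the time of this talk; treat the URL and the JSON layout as assumptions to verify:

```python
# Read a few commonsense assertions about a concept from ConceptNet 5's
# public REST API. Endpoint and response shape are as currently
# documented; they may have changed since this talk was given.
import requests

def assertions_about(concept: str, limit: int = 5) -> None:
    url = f"http://api.conceptnet.io/c/en/{concept}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    for e in edges:
        # Prints triples like: grocery -- RelatedTo -> food
        print(e["start"]["label"], "--", e["rel"]["label"], "->", e["end"]["label"])

assertions_about("grocery")
```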

How are people using ConceptNet? Narratorium is an immersive storytelling environment that listens to you talk and uses your language to paint a visual picture of the story you’re telling. Cluebot is a bot which helps you solve crossword puzzles. ConceptNet also runs the screens behind the Glass Infrastructure, the intelligent touchscreen system that helps visitors understand and navigate the lab. One app on the Glass Infrastructure is Charm Me, which helps people explore the expertise within the Media Lab and find the people they should be speaking with.

Tod Machover is interested in making music as fundamental a part of people’s lives as possible. Recently, the lab has been developing Hyperinstruments, digital instruments for performers and the public, and projects like Hyperscore and MediaScores that enable people to be creative by making music. His Opera of the Future group also recently produced an opera, “Death and the Powers,” which included robot characters. Robots and opera might seem like strange bedfellows, but just this week, Death and the Powers was nominated for the Pulitzer Prize in Music.

Music should also be fun. DrumTop makes it easy to turn almost anything into a drum. Cognotes (http://www.cognotes.net), by Adam Boulanger, embeds cognitive assessment into music software, which may be able to detect Alzheimer’s two years earlier than existing tests. Janice Wang is researching music and taste: the ways that different music changes our perception of flavor. Tod’s group also does research on music and collaboration; they have invited the people of Toronto to crowdsource a new symphony together. During Spring Meeting, visitors will be able to participate in a similar composition, which will be improvised in real time by one of the group’s pianists.

Opera of the Future is also working with the theatre company Punchdrunk to develop remote immersive theatre experiences: a new set of technologies and narratives that open the experience of a Punchdrunk show to viewers on the Internet. Ben Bloomberg, Elly Jessop, Jie Qi, and Jason Haas are working together on this project, which launches in New York soon.

Hiroshi Ishii’s Tangible Media Group explores new interactions between people and information. The group tries to materialize information and ideas so we can interact with them physically. We typically interact with GUIs: information behind glass. Tangible bits give us an opportunity to interact with physical objects, but those objects don’t dance like pixels. Tangible Media is trying to go a step further: Radical Atoms are materials which transform and deform, the way water shifts phase from ice to liquid to vapor. To understand how to work with such objects, we need to understand human physiology, the nature of space, and how objects function in space.

Hiroshi shows us the Recompose project, which uses gestures to shape three-dimensional surfaces. Amphorm is a physical object which can be directly manipulated, and whose shape can be synchronised with other objects. The ZeroN project can be used to explore orbits by moving objects in actual orbits. FocalSpace is videoconferencing software by Lining Yao and Anthony DeVincenzi which detects and focuses on the important things happening in the space, fading out people and details which are unneeded (a toy version of the fade appears below). T(ether) enables multi-user interaction with three-dimensional information in front of and behind a tablet device. GeoSense is an open publishing platform for creating and sharing geospatial visualizations; it’s being used by the Safecast radiation crowdsourcing community in Japan.
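As a rough sketch of the FocalSpace effect described above (an assumption on my part, not the actual pipeline), per-pixel depth can drive a fade that keeps nearby participants vivid while the distant background washes out:

```python
# A toy version of depth-driven attention: keep foreground sharp and
# fade distant background toward gray. Shapes and the fade rule are
# illustrative only, not FocalSpace's real implementation.
import numpy as np

def focus_foreground(image: np.ndarray, depth: np.ndarray,
                     near_m: float = 1.5) -> np.ndarray:
    """image: HxWx3 floats in [0, 1]; depth: HxW distances in meters."""
    fade = np.clip((depth - near_m) / near_m, 0.0, 1.0)[..., None]
    gray = image.mean(axis=2, keepdims=True)   # desaturated backdrop
    return image * (1 - fade) + gray * fade    # blend by distance

rng = np.random.default_rng(0)
img, depth = rng.random((4, 4, 3)), rng.random((4, 4)) * 4
print(focus_foreground(img, depth).shape)  # (4, 4, 3)
```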

John Hockenberry asks the presenters for final comments. Hiroshi reflects on what it means to be at the Media Lab: people are often interested in answering big questions, but here at the Lab, it’s much more exciting to come up with new questions. Pattie Maes comments that many interfaces allow people to learn on the fly, with the information they need at the time they need it. Tod shares ideas on the nature of feedback flows between a user and an object: many innovations improve the quality of feedback from digital instruments such as keyboards and strings, but the human voice is a much more responsive mechanism, and the next generation of musical expression needs to take into account its complexity and intimacy. Catherine comments that competitions, puzzles, and other collaborative activities increasingly expect us to have access to computers, and that augmentation is going to become the norm.

Roz Picard wants to believe that more technology will lead us to become more humble and aware of our experience, but thinks it’s much more likely that we’ll get hooked on these technologies as we increasingly rely on them. Henry Holtzman comments on the biases that are created or resolved by technology, pointing to NewsJack, a project by Dan Schultz which allows communities to rewrite and reframe the way the media presents them. One response to the overflow of data, with its many biases, is to build information personalisation technologies, says Chris Schmandt. Are these self-mediated systems, or do they need safeguards? Pattie comments that a self-regulating, decentralised system is always safer, but harder to control, than a centralised one.

Hiroshi reflects on the value of following people on Twitter: by following his heroes, he’s able to expand the scope of his awareness and life. He also points us toward the Twitter account he created to publish the poems his mother wrote. Twitter has amazing potential to extend our lives and influence society even after we’re dead; he wants his tomb to be integrated with Twitter so he can continue to influence students after he’s gone. Tod thinks it’s important to create new kinds of conversations where experts and beginners can collaborate.