(During this week’s Media Lab Spring Meeting, I’m liveblogging the talks together with Ethan Zuckerman. This is the morning session from Tuesday, 24th of April. This first post originally appeared on Ethan Zuckerman’s blog)
Radio host John Hockenberry introduces the first day of the Media Lab’s spring sponsor meeting. He suggests that the lab is an “infectious idea”, a way of working and thinking that spreads well beyond the walls of the building. He warns the crowd, packed into the third-floor atrium at the Lab and the fourth- and fifth-floor balconies, that this isn’t “some sit-back-in-your-seats TED conference experience” – instead, we need to work to get the most out of our experience.
Joi Ito, director of the lab, lets us know that this is the most open meeting we’ve held – it’s being streamed live on the web, blogged and shared through a variety of channels – you can look for the tag #MediaLabIO to follow along. It’s also a meeting that’s still evolving as we put it together. There’s a new advisory council that Joi is working with to consider the future of the lab. That group of advisors will be offering a panel later today.
Hugh Herr tells us about tomorrow’s seminar, titled Inside Out. It’s a deep dive into the idea that understanding the natural world will change the nature of technology. The discussion features lab luminaries, as well as outside speakers like Craig Venter, John Maeda, Sebastian Seung, Reid Hoffman and Theodore Berger. Sessions will look at Mind, Body and Community through the lens of Inside Out, in a day-long exploration.
Hiroshi Ishii offers a talk titled “Zero G – defying gravity”. It’s a reference to a new technology he’ll be showing in his lab called ZeroN, a levitating material that can be moved and controlled by computers. But it’s also a recommendation that we look from a new perspective, a view from above, a chance to think and aim high. He shows a visualization he designed some years before, which analyzes the impact of citations in a series of academic papers. It’s related to other visualizations of networks, like Cesar Hidalgo’s visualization of supply chains as a path towards economic development. He shows Leo Bonanni’s Sourcemap project, which maps the global supply chains that support the construction of complex products like a laptop. The tool goes a step further than documenting current supply chains – it lets you look into the future. A supply chain that relies on cacao from Ivory Coast is one that’s in trouble, as Ivory Coast won’t be able to produce cacao a few decades from now due to climate change.
That ability to look forward, Hiroshi tells us, is a critical perspective. He shows a complex visualization of Japan recovering from the earthquake, tsunami and nuclear crisis of March 2011. The visualization shows Japan’s vulnerability to past crises, as well as data from the Fukushima reactor gathered through the Safecast project Joi has been involved with. Tools like this are like the Hubble telescope – they give us a new way to look at pressing problems and to see new perspectives.
Andy Lippman offers us a history lesson about the lab. For the past 25 years, the Media Lab has been about making things digital… which is a way to make them interactive, malleable, and controllable. When systems become digital, they can understand themselves. Analog television could tune into a channel, but didn’t know what programs were upcoming – digital TV can both broadcast and know what’s coming. Now, we’re entering an era of big data – data that’s big in scope, dimension and timeliness, not just big in size – and that dream of becoming digital is becoming a reality.
Cities aren’t just about transport and infrastructure, but about information – tools like telephones were a revolutionary technology that allowed people to build massive buildings. We’re now seeing powerful information systems that make new kinds of cities possible. Knowing where transport infrastructure like taxis or buses was at any given moment was once just a dream – it seemed inconceivable to put GPS receivers on all parts of an infrastructure, but that’s now becoming reality. And we’re seeing new systems to understand people, discerning large patterns from studying human behavior. We may be a short distance away from building systems that can learn from human behavior, like Google’s autonomous vehicle, which has learned in part from thousands of human hours building Google Street View (a technology that, Andy tells us, was previewed here 25 years ago).
These ideas become powerful when we dream globally. We’re charged with thinking about problems of global poverty, energy and education – that’s what you come to a university for, that breadth of aspirations. Your instructions for the day, Andy suggests, are to make those connections and share those dreams.
John Hockenberry invites everyone to explore the building in the times between public sessions – a system of badges will let you record your progress and demonstrate that you were walking around, not answering your email. But the badges aren’t the incentive – it’s the chance to see evolution in action. Hockenberry tells us that he always wanted to see evolution in the real world… or at least a jetpack. Marvin Minsky famously wears vests packed with technology and announces that he carries these devices so that he will evolve. But Hockenberry is interested in the idea that being able to read intent on a planetary scale is a moment in human evolution where we’re diverging from the past and moving into the future in a fundamentally different way. The crossroads we’re encountering is on display today, and that’s our incentive to explore.
To get a sense of what we might want to see, lab researchers give five-minute talks about what’s underway in each of our labs.
Kent Larson of the Changing Places group starts by showing us the CityCar, a project unveiled earlier this year in Brussels after ten years of development in the lab. It’s a highly compact car that drives by wire, has robot wheels and folds into a tiny space. One of the lab sponsors has licensed the technology, which means we will likely see the cars on city streets. As we start seeing more autonomous vehicles, we need to start thinking about how vehicles communicate with pedestrians, showing their intent. His group is working on the “persuasive electric vehicle”, a three-wheel, bike-lane vehicle designed to address problems of energy, congestion, aging and obesity by democratizing access to bike lanes. Other projects focus on multimodal recommendation engines, designed to help people travel on the right path with the right vehicle.
Changing Places isn’t just about transportation – it’s about thinking about new uses for space. Transformable architecture allows an apartment to take on many more functions, by having certain features fold away when not in use – a treadmill or a dining table that disappears into a wall. And experiments with aeroponics are changing how people grow food within urban structures.
Andy Lippman’s Viral Spaces group takes “viral communications” as its inspiration. That’s communication that takes value from the act of sharing. Andy offers the spreadsheet as an example of a technology that grew from being spread, both from companies transforming their thinking through the spreadsheet and from the sharing of macros. Viral Spaces has looked at radio, “one of the last bastions of undemocratized tech”, where the only people who controlled the towers were people with hundreds of thousands of dollars. Mesh radio turned each radio into a tower and helped challenge the politics and structure of radio. Now Viral Spaces is working on other shareable media: a DNS service for people that allows you to register things you’re good at and what you’d like to do within a region or locality. He sees a swing back from the dominance of the screen to a world where we’re focused primarily on reality, looking into screens to ask questions about that reality. And his group is working on marketplaces like Peddl, which permits “perfect markets that are localized, immediate and broadcast.”
Joe Paradiso’s group is building “the emerging nervous system of ubiquitous sensing.” The goal is digital omniscience: being able to sense what’s going on in a building through a visualization, or through a handheld device, like a tricorder. He wants to build devices that help their users determine potential danger, or which might provide a sense of empathy, like Boxie, an interactive camera that can move through a space. Devices might build a sense of proprioception – athletes might wear sensors to help them understand when they’re performing well or poorly. And we can build tools that act as prosthetics – a handheld tool that knows the CAD/CAM model of the structure you’re building can guide your work.
He shows DoppelLab, a visualization of the lab that draws on hundreds of sensors, showing temperature, humidity and sound. The system allows us to listen into rooms, not to eavesdrop, but to hear a distorted sound and get a sense of what’s taking place in the space. We can visualize the Media Lab and get a sense for what’s happening even when we’re away from the lab. A new, wrist-based tool called the WristQue could act both as a wearable sensor and an interface device – we might wave our hands and open doors, control lights and interact with our spaces. Finally, printed sensors on the face of musical instruments allow gestural control of a family of musical instruments.
Ramesh Raskar of the Camera Culture group tells us that he’s passionate about creating super-human abilities to interact with the world. His method for creating those abilities is building cameras that can see the unseen and displays that can change the sense of reality. He’s in the early stages of research on a camera that’s capable of seeing around corners, by analyzing the scattering of bounced rays of light. A camera that can see around corners will help cars avoid collisions, or allow endoscopes to look into hidden corners of the body. Other cameras are pushing forward the work of Edgerton – we see a camera capable of capturing a trillion frames per second. Other projects try to turn the thousands of cameras in a stadium or other public place into a navigable stream of data we can interact with. Perhaps most inspiring is a set of projects that turn mobile phones and their cameras into tools to diagnose common eye and retina problems.
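As a rough aside (my numbers, not Ramesh’s): the reason a trillion frames per second matters is that light travels only a fraction of a millimetre between frames, so a camera at that speed can effectively watch a pulse of light in flight and use echo timing to infer the geometry of surfaces it can’t see directly. A minimal back-of-envelope sketch, with illustrative values:

```python
# Back-of-envelope numbers for a trillion-frame-per-second camera.
# Illustrative values only -- nothing here comes from the talk itself.

C = 299_792_458.0          # speed of light, m/s

frame_interval_s = 1e-12   # 10^12 frames per second -> 1 ps between frames
per_frame_mm = C * frame_interval_s * 1000
print(f"Light travels about {per_frame_mm:.2f} mm between frames")

def roundtrip_distance_m(delay_s: float) -> float:
    """Distance to a reflecting surface implied by a pulse's round-trip delay."""
    return C * delay_s / 2

# A hypothetical 5 ns echo off a hidden surface:
print(f"A 5 ns echo implies a surface about {roundtrip_distance_m(5e-9):.2f} m away")
```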
Michael Bove of the Object-Based Media group is interested in how we make the incomprehensible visible. How much energy is exerted in dunking a basketball? In the recent NBA All-Star game, the Media Lab provided the nets, standard basketball nets that measure the forces of the ball passing through them, allowing people watching television to see the incredible forces exerted. Another project helps viewers distinguish between apparently identical liquids, which each refract light differently. Another project suggests how little information we might need to extend a video projection into a space – a set of projectors processes the video and projects additional imagery into peripheral vision, turning 16×9 displays into “infinity by nine”. The most provocative might be holographic displays which “can actually do what Tupac appeared to be doing at Coachella.”
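To make the basketball question concrete, here’s my own rough sketch of the kind of calculation a force-sensing net enables – the numbers and data format are invented, not the Lab’s actual pipeline: integrate the sampled force over the contact time to get an impulse, then estimate the ball’s speed and kinetic energy as it snaps through.

```python
# Rough sketch of an energy estimate from a force-sensing net.
# All values below are invented for illustration; the real nets and
# their data formats were not described in the talk.

BALL_MASS_KG = 0.62      # a regulation basketball
SAMPLE_DT_S = 0.003      # assume force samples every 3 ms during contact

# Hypothetical force readings (newtons) as the ball snaps through the net.
force_samples_n = [0, 80, 250, 350, 300, 180, 70, 10]

impulse_ns = sum(f * SAMPLE_DT_S for f in force_samples_n)  # N*s (momentum change)
speed_ms = impulse_ns / BALL_MASS_KG                         # ball speed at the net
energy_j = 0.5 * BALL_MASS_KG * speed_ms ** 2                # kinetic energy

print(f"impulse ~{impulse_ns:.1f} N*s, speed ~{speed_ms:.1f} m/s, energy ~{energy_j:.0f} J")
```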