Mind: Session Six at #MediaLabIO
At the MIT Center for Civic Media, Nathan designs and researches civic technologies for cooperation across diversity. At the Berkman Center for Internet and Society, he applies data analysis and design to the topics of peer-based social technologies, civic engagement, journalism, gender diversity, and creative learning.
Nathan's current projects include large scale research on community building online. In the summer of 2015, Nathan will be a PhD intern at the Microsoft Research Social Media Collective. A full project list is at natematias.com.
Nathan regularly liveblogs talks and events. He also publishes data journalism with the Guardian Datablog and PBS IdeaLab. He coordinated the Media Lab Festival of Learning in 2012 and 2013.
Before MIT, Nathan completed an MA in English literature at the University of Cambridge, where he was a Davies Jackson scholar. In earlier years, he was Riddick Scholar and Hugh Cannon Memorial Scholar at the American Institute of Parliamentarians. He won the Ted Nelson award at ACM Hypertext 2005 with a work of tangible scholarly hypermedia. He facilitated #1book140, The Atlantic's Twitter book club from 2012-2014, and was an intern at Microsoft Research Fuse Labs in the summer of 2013.
John Hockenberry introduces the Google Jockey team, a group of five people who are creating a livestream of google searches and links from twitter during the talks. To participate, tweet a link to #MediaLabIO, and follow along with the livestream.
Our second set of talks is a series of presentations on how the technologies of today conform, confound and interact with our identities that involve our bodies and brains. Rosalind Picard is our first speaker, and begins by telling us about her journey from being an electrical engineer - a world often thought of as emotionless and rational - to someone who works on studying and measuring emotion. She encourages us to see ways that measuring things can lead us to more deeply human understanding and asks us what emotion we believe technology most commonly elicits. The answer is simple: frustration. We see a video of Ming-Zher Poh trying to solve a set of “impossible” CAPTCHAs. In the experiment, a machine voice asks, “You seem to be having trouble with this form. Would you like to share your thoughts with the camera?” The response: “This form sucks.”
Rosalind is interested in studying a range of spontaneous affective emotions - joy, as well as frustration. She shows us a series of facial features that represent different types of smiles. Smiles that involve true delight and those that show frustration are very, very difficult to tell apart. It’s very hard for humans to understand facial expressions accurately - in some cases, humans have roughly random accuracy. It’s slightly easier to recognize acted emotion - spontaneous emotion can be very difficult to detect.
She shows us a set of recent results - machine learning systems are now able to identify frustrated smiles as well as the best humans can. But it’s hard to build these systems - it is difficult to bring thousands of subjects into the lab. Instead, she’s going online, putting the collection of emotion into the cloud. She shared examples of the “Analyze Your Smile” project which their research group launched with Forbes last year.
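The talk doesn’t walk through the model itself, but the basic supervised-learning setup is easy to sketch. The following toy classifier - with invented feature names and values, not Picard’s actual features or model - illustrates the idea of labeling facial-motion feature vectors as delighted or frustrated smiles and classifying new samples by similarity:

```python
import math

# Toy feature vectors: (smile peak intensity, smile onset speed, head motion).
# Labels and values are invented for illustration - real systems extract many
# more facial-action features from video.
training = [
    ((0.9, 0.2, 0.1), "delight"),      # slow-onset, sustained smile
    ((0.8, 0.3, 0.0), "delight"),
    ((0.7, 0.9, 0.6), "frustration"),  # fast-onset smile with head motion
    ((0.6, 0.8, 0.7), "frustration"),
]

def classify(sample):
    """Nearest-centroid classifier over the labeled examples."""
    groups = {}
    for features, label in training:
        groups.setdefault(label, []).append(features)
    best_label, best_dist = None, float("inf")
    for label, rows in groups.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = math.dist(sample, centroid)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(classify((0.65, 0.85, 0.65)))  # a fast, jerky smile -> "frustration"
```

The nearest-centroid rule here just stands in for whatever classifier is actually trained; the point is the pipeline - labeled facial-feature vectors in, smile category out.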
Subjects watch a humorous Volkswagen ad, while looking into their PC cameras, which record their reactions. She points out that this ad was so effective that people smiled even more the second time they saw it-- in advance of the punchline. The software looks at different parts of someone’s face and tracks the factors which determine that person’s affective response: are they shaking their head because they’re disagreeing? What’s happening to their eye wrinkles?
Roz’s collaborator Rana el Kaliouby is Egyptian, and reflected that Mubarak’s unwillingness to step down may have represented his inability to read the mood and emotion of his subjects. What if we could build better systems to allow people to express and share their emotions?
Some of Roz’s work focuses on autism, which affects 1 in 88 children, 1 in 54 boys. In the case of autism, children who are overstimulated often shut down, leading adults to try to connect with them and draw them out, sometimes precisely the wrong intervention. To understand emotional state more accurately, Roz wanted to move from traditional electrodermal sensors to less obtrusive sensors that measure sympathetic nervous system reactions on the fly. She explains that measuring skin response often gives a purer measure of sympathetic activity than measuring heart rate does.
We see a video of a girl with autism climbing onto a swing - her arousal and anxiety peaks as she’s climbing onto the swing, then lessens as she begins swinging. Measuring emotion in the wild is a surprising thing, though. An undergrad researcher borrowed some sensors for his autistic brother to wear over Christmas break. They came back with unusual data - an enormous peak on one wrist, but not on the other wrist. She called the undergraduate researcher and asked what had happened at the time of the peak - he told her that the peak occurred twenty minutes before his brother had a seizure. It’s possible that there’s a physiological indicator of seizures, which might be able to address the acute problem of sudden death from seizures, a condition responsible for more deaths annually than breast cancer.
Looking at skin signals for seizures, it’s possible to detect seizures with 94% accuracy (pdf article). She tells us that it won’t eliminate the EEG, but is a powerful complementary technology.
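The talk doesn’t describe the published detector beyond its accuracy, but the wrist-peak anecdote suggests the shape of a simple baseline approach: flag moments when electrodermal activity jumps far above its recent level. This sketch uses an invented rolling-statistics rule - not the actual published algorithm, which combines skin-conductance and movement data with machine learning:

```python
import statistics

def flag_peaks(eda, window=5, threshold=3.0):
    """Flag samples whose electrodermal level exceeds the trailing-window
    mean by `threshold` times the trailing standard deviation.
    Window size and threshold are illustrative, not published values."""
    alerts = []
    for i in range(window, len(eda)):
        trail = eda[i - window:i]
        mean = statistics.mean(trail)
        stdev = statistics.pstdev(trail) or 1e-9  # avoid divide-by-zero baselines
        if eda[i] > mean + threshold * stdev:
            alerts.append(i)
    return alerts

# Flat baseline with one sharp electrodermal surge at index 8
signal = [0.20, 0.21, 0.19, 0.20, 0.22, 0.21, 0.20, 0.21, 0.95, 0.90]
print(flag_peaks(signal))  # -> [8]
```

A real detector has to distinguish seizure-related surges from ordinary arousal and motion artifacts, which is why the published work leans on multiple sensor channels and learned models rather than a fixed threshold.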
Researchers don’t understand why sudden death from seizures is so common - it’s a factor in 50,000 deaths a year. One possibility is a suppression of brain waves after seizures. Roz hints that she’ll have a paper coming out later today that connects the results she sees by tracking skin response via the wrist and understanding of the consequences of seizures.
We can consider emotional and pain responses to be key diagnostic tools. A new space for research is the pain response of children as they’re going through different procedures in the hospital. Understanding these responses is a new space of exploration in understanding children’s health issues and how to create interventions that are less stressful and more effective.
How much data is coming out of any given person at any time, and how much data do we need to create algorithms that give us a reliable model for people’s emotional states? Some people argue that up to 90% of the signals we produce are affective information. Much of that we don’t know how to digitize yet. We also need to increase the sample size. Roz’s group has collected data from many parts of the world. But until we get more smile samples from many more people, perhaps in their cultural context, we won’t be able to realize the full potential of using emotional data.
How good are we at reading each other? Often we don’t tune in, and when we don’t, we can easily miss important information. Roz is excited about wearable cameras and glasses because she hopes we will be able to use them to discover just what we’re able to notice, and what we’re missing. What will happen when technology disagrees with our ability to read the situation? Roz thinks we’re going to have to learn how to deal with and handle both channels of information. To make those judgments, we’re going to need better ways to visualise information from computers.
How might large scale awareness of the emotions of crowds affect politics? Roz thinks that political campaigns might be interested, but that in many cases, we will need to develop methods of protecting people’s emotional privacy while still making use of our collective emotions.
Hockenberry expresses some skepticism that a dictator would ever care about audience emotion and feedback from a group of frustrated citizens. Roz offers a response involving a slightly less charged situation - a movie trailer offering negative feedback from viewers. The quantitative data was less controversial than the interpreted data and the director was less likely to shoot the messenger.
Ed Boyden is interested in finding ways to fix our brains. His synthetic neurobiology group in the Media Lab studies the intricate circuitry of the brain, looking for opportunities for positive interventions. Neuroscientists have been able to discover that the brain is a densely wired computational circuit made of cells called neurons. Ed is interested in understanding this circuit to engineer new solutions with neurons.
We all wish we could fix the brain just like we can fix computer code when there’s a bug. But the brain’s elements are incredibly diverse, with many different shapes. We don’t even know how many kinds of cells the brain contains - we lack a parts list of the brain. The temporal nature of the brain makes this particularly challenging-- we need to be able to monitor the brain at the millisecond timescale that it works with. Neurons exchange information hundreds of times a second. And then we need to figure out how these signals combine into the emotions and ideas that our brain works with every moment.
Having a parts list of the brain isn’t enough. We also need to understand what these cells do. If we can turn on or off individual cells for a millisecond, we can figure out its function-- and maybe even develop ideas for fixing those cells when they go into a bad state. To do this, Ed looks at other examples of human endeavor, developing collaborations across computer science and medicine to build teams to solve the brain disorders that people experience. This matters because a billion people experience some kind of brain disorder, many of which are untreatable.
Of course, some conditions can be treated with drugs. But these are imprecise, affecting many cells which aren’t relevant to the condition they’re trying to treat.
Recently, Ed has been trying to build a robot which can do nanosurgery on a living brain, looking at individual cells to understand what their function is. They’re working with Craig Forest at Georgia Tech to design it. The robot uses a microneedle, stopping it on a cell in the brain, interrogating the cell to find out what it does, and harvesting its molecular content to help us plan better drugs. This is a practice which a very small number of researchers are capable of, but they’re hoping to industrialise this robot to make it possible to automatically research the brain at a much larger scale.
Ed then talks about a method he developed to turn on and off neurons, by using photosynthetic molecules. They convert light into electricity, and then use the electricity to control neurons. They took the genome from an organism with light sensitive proteins and then added that DNA to several well-known neuron cell types to make them controllable by light. As a result, when you shine a light on a neural circuit, cells expressing this particular receptor can be activated, while neighbors are not.
In schizophrenia, many cells have atrophied, in particular, cells that are designed to mute the response of other cells. If we could stimulate those atrophied cells to begin working again, we might address some of the underlying causes behind that crippling disease. Using light to stimulate the brain might be a workable method here.
We see an experiment published this month where a mouse responds to a light signal much in the same way that they might respond to an addictive stimulus. The light stimulates a dopamine reward system much as a drug might. This suggests a broader point - we might be able to influence certain receptors without neurosurgery, purely through light from outside the brain. And once the light helps identify parts of the brain related to a particular disorder, the microneedle robot Ed’s group is designing might be able to then operate directly on those cells.
Ed is also working to build optical “brain-coprocessors,” technologies which might directly fix brain functions. The project is currently in clinical trials. His team is also developing fully wireless brain co-processors that can be put into a living nervous system-- which might do for the brain what pacemakers do for the heart.
In some retinal diseases, visual degradation occurs due to the death of photoreceptor cells. But the remainder of the retina - amacrine cells and ganglion cells - are still present. We might be able to cure blindness by restimulating some of these cells to become photoreceptive. We see some “pre-clinical” data - a blind, mutant mouse swimming in a six-armed maze, trying to find a platform. The platform is lit, while the rest of the maze is dark. Blind mice solve the problem through brute force - mice who’ve been treated to turn retinal cells into photoreceptive cells are able to solve it far more quickly. This doesn’t mean the mouse sees the way we do, but it does mean that it’s able to make use of this restored sense, much as a deaf person learns to use a cochlear implant. The next step is to test the ability of the mice to recognize symbols.
Hockenberry asks about the problems of shining lights into a human brain: most of us don’t have plexiglass crania? Why not use radio or other waves that might cross more easily through the skull? Ed explains that light has some attractive features - it can be carefully focused, while magnetic fields tend to spread out widely. Light might be the best tool for single neurons, while other tools might be used for more broad stimulation effects.
What does the parts list of the brain look like? Is it a collection of LEGOs, or is it a highly specialized collection? Ed says that the brain is highly specialized, and that we still don’t know if the parts in one region of the brain look like those in other regions. Maybe the parts list of the brain is a moving target which rewires itself completely, more like weather patterns than LEGOs.
USC neuroscientist Theodore Berger joins us to talk about the provocative idea of engineering memories. Berger admits that the title is one Ed suggested as a way to think about Berger’s work of building neural prostheses, ways of sensing what the brain is telling us, and finding ways to talk back to the brain.
We can think of many different types of brain prosthesis, including the retinal prosthesis that Ed just showed us. Berger’s work is deeper within the brain, away from the sensory systems. He’s interested in understanding how we might model a brain area and replace the function of an area that has been damaged. We could consider an area of the brain as part of a signal pathway - we need to know what signals that brain area is experiencing and what signals it’s passing to other areas of the brain. Specifically, Berger focuses on the hippocampus, an area of the brain that works to transform short term memories into long term ones.
The hippocampus takes inputs that represent short-term memories. Unless they pass through the hippocampus, they will not become long term memories. In conditions like epilepsy and ageing, these inputs don’t become long term memories - Berger’s research looks at ways we might build systems to effect this transformation, so that a patient might be able to have long-term memories despite hippocampal damage.
He’s working with rats to understand the notion of hippocampal prostheses, systems that can process short term memory inputs and produce appropriate outputs. We see a circuit diagram that shows the CA3 circuit, just one of numerous pathways the hippocampus is responsible for. Neurons communicate using “all or none” pulses - there’s no information in the amplitude, but the timing of pulses is key. We can think of neurons as systems that turn incoming temporal patterns into outgoing ones. These systems are remarkably flexible. Drink a beer and you’ll kill off several thousand of these cells, but you probably won’t be a very different person. Don’t think too hard about the single cells, he tells us - think about the populations of cells. Neurons are connected to each other topographically - we can think of one population as a set of neurons located in the same part of the brain.
He shows us a “match to sample” test - the rat learns where a desirable object is, then is distracted by another task. The rat then returns to the task and can demonstrate long-term memory by showing it remembers where a sample is after a delay of more than 15 seconds. Berger’s models rely on a set of electrodes in the CA1 and CA3 pathways, which can detect spikes in neural circuitry while the rat is going through the process of turning a short term memory of a sample into a long-term memory. Berger’s model can now predict the outputs, based on inputs, with about 95% accuracy. We see graphs of the pathway stimulus based on a sample on the left and on the right of a rat - we’re seeing a visualization of different spatiotemporal patterns that represent a rat’s memory. Based on these visualizations, we can see what the rat remembers, and we can predict when a rat is going to make a mistake.
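Berger’s published models are nonlinear multi-input multi-output models estimated from recorded spike trains; the details go well beyond the talk. As a heavily simplified, hypothetical sketch of the “temporal patterns in, temporal patterns out” idea, a first-order version would predict an output spike whenever a weighted sum of recent input spikes crosses a threshold:

```python
def predict_output(input_spikes, kernel, threshold=0.5):
    """First-order sketch of an input/output spike model: the predicted
    output at time t is 1 when the weighted sum of the last len(kernel)
    input bins crosses the threshold. Kernel values and the threshold are
    invented for illustration; the published models use higher-order
    nonlinear (Volterra-style) terms estimated from data."""
    k = len(kernel)
    out = []
    for t in range(len(input_spikes)):
        history = input_spikes[max(0, t - k + 1):t + 1]
        # align the most recent input bin with kernel[0]
        drive = sum(w * s for w, s in zip(kernel, reversed(history)))
        out.append(1 if drive >= threshold else 0)
    return out

spikes = [0, 1, 1, 0, 0, 1, 0, 0]
print(predict_output(spikes, kernel=[0.4, 0.3, 0.1]))  # -> [0, 0, 1, 0, 0, 0, 0, 0]
```

In this toy version an output spike fires only when two recent input bins were active; the real work is in fitting the kernels (and their nonlinear interactions) so the predicted output matches what healthy CA1 would produce.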
How can we use this knowledge to make a prosthesis for memory? Once he has built a model of neural response to these stimuli, Berger introduces a drug that depresses activity of parts of the hippocampus - the result is that the animal can no longer remember past ten seconds. This gives him an opportunity to try to fix the animal - introducing his own signals (derived from recorded patterns) to tell the animal’s brain that the sample is on the left or on the right. One way Berger knows that the system is working is that he’s able to give drugged animals the wrong memories, sending an animal the opposite pattern from what it would have experienced in a learning task. Another possibility is to enhance an existing signal, improving the animal’s existing ability to remember things long-term.
Overall, if we can learn the model that transforms anything in the brain into something else, then we can reproduce that cognitive operation in an animal that otherwise has damage to the brain.
Sebastian Seung, professor of computational neuroscience at MIT, talked to us about the Human Connectome project. He told us the story of why he wrote his book, Connectome, a three-year project. Why would a working scientist take on this sort of project? But that’s only one objection his colleagues have offered to his work. The connectome, a wiring diagram of the brain, is an almost impossibly ambitious project. His colleagues said it was too difficult to map out. They also thought that even if he succeeded, the data would be useless. Seung’s book was an attempt to persuade them that the project was a worthwhile one.
He’s now trying a new approach: “Screw my colleagues.” He’s not mistreated, he tells us. He’s just concluded that his colleagues are irrelevant. “I don’t need them. But I do need you.” The brain is awesome in its complexity - how could we succeed if it’s just his lab, or even every neuroscience lab? We need a much broader approach if we are to take on a problem of this size and complexity.
Seung recently met people who work with the Phelan-McDermid Syndrome Foundation. It’s not a common disease, but it’s a crippling genetic disease, creating autism-like symptoms. In mice, the mutation creates antisocial and repetitive behaviors. The gene in question, Shank3, is highly expressed in the striatum of the brain. What’s going on in the striatum of the brains of these affected mice? Is there something wrong in the “wiring” of the brain?
That notion that the brain is miswired is one that requires a leap of faith - only recently are we able to actually see that wiring through our imaging technologies. He shows us a technology at Jeff Lichtman’s Harvard lab that slices a mouse brain into sections a thousandth of the thickness of a human hair. These sections are then magnified 100,000 times and imaged. But we don’t want just 2D imagery - by stacking these slices atop one another, we can start building 3D models of microstructures like axons and dendrites.
When we see an axon and a dendrite touch, we are looking at a likely synapse, a place where neurotransmitters move from one cell to another. Once we can image on this scale, we are generating a wiring diagram. We might be able to then ask questions like whether the brains of autistic individuals are actually “wired differently”.
At this point, the only connectome we have is for a 300-neuron system in a nematode. Generating that connectome took decades of work. How can we hope to put together a human connectome, a project that’s vastly more complicated?
Sebastian shows us a new brain mapping technology from Zeiss which is going to be installed at Harvard University in a few months. It’s capable of producing a petabyte of data per week. That petabyte represents one cubic millimeter of a brain, a single pixel on a PET scan. Not only do we need to generate massive sets of data, we need better tools to analyze the data - the synapse Seung showed us previously was colored by hand by a postdoc. We need systems that bring people together to color and document the connectome.
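It’s worth pausing on those numbers. Taking the petabyte-per-cubic-millimeter figure at face value, and assuming a rough adult human brain volume of 1.2 million cubic millimeters (my estimate, not a figure from the talk), the scale of the problem becomes vivid:

```python
PETABYTE = 10 ** 15  # bytes

bytes_per_mm3 = 1 * PETABYTE   # one cubic millimeter of brain, per the talk
human_brain_mm3 = 1_200_000    # rough adult brain volume (assumption)

total_bytes = bytes_per_mm3 * human_brain_mm3
print(f"{total_bytes / 10 ** 21:.1f} zettabytes")  # -> 1.2 zettabytes

# At one cubic millimeter (one petabyte) per week on a single microscope:
weeks = human_brain_mm3 / 1
print(f"{weeks / 52:.0f} years on a single machine")  # -> 23077 years
```

Even allowing for faster future machines, the arithmetic makes Seung’s point: analysis tools and many hands matter at least as much as the microscope.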
EyeWire invites individuals to participate in documenting the structure of the retina. EyeWire is a great example of the can-do hacking culture at MIT. It was developed by a set of students during this year’s Independent Activities Period, a month where MIT students have the freedom to learn and work on ideas outside our official classes.
Now they’re building a game that allows people to participate in mapping out connectomes. We see seventeen people playing online, competing to trace out the wires within a retina. At the top of the leaderboard are people who’ve mapped thousands of pathways in a connectome in a game that looks a little like a three-dimensional coloring book. The human coloring corrects an AI’s automated coloring of neurons. Seung suggests that a first empirical finding is that some people, for whatever reasons, find this process to be fun.
Now Seung is working with the people who’ve been coloring neurons to try to understand what motivates people to play. One participant plays with his seven year old daughter, and is particularly interested in structures that look like statues: an ice skater, a middle finger. A gallery feature might be one of the tools that motivates people to participate.
Unfortunately, the most active part of the forums is the bugs section. But the community is rapidly helping identify and fix bugs, including a bug that appeared for users who had high-latency internet connections. The ability of this community to fix its own problems gives Seung confidence that mapping the connectome is a quest that thousands of people might join together and participate in.
Quests don’t always take us where we think we’re going to go - Ponce de Leon didn’t find a fountain of youth, but did find a new world. Finding the connectome may be one of the great quests of our time.
Artificial Intelligence pioneer Marvin Minsky joins the speakers on stage for a panel discussion. Hockenberry asks Marvin what he makes of the struggle to find metaphors to understand the brain. Marvin thinks we need high level theories to organize our understanding of the brain. We have billions of neurons and trillions of synapses. How much of that do we need to understand? We might try to understand what a computer is by investigating every atom, and then looking at each level of construction all the way up to the computer. We might try to understand how a transistor works despite the impurities in the materials. At a higher level, we then would have to understand the role a transistor plays in the computer as a whole. Minsky refers to his 1985 book, Society of Mind, and the theories he formed about the structures of human cognition. With even a partial connectome, we’d have a map for the high-level cognitive scientists and the lower-level neuroscientists to collaborate and investigate the most promising areas more efficiently.
Berger suggests that we might learn about the structure of the connectome by examining the systems that come in and out of a population of neurons - it’s another set of information that might help us understand these deeper structures. Hockenberry suggests that another set of information might be functional MRI imagery - he wonders how closely correlated this data can be to these lower-level data sets, like electrical signals or specific cellular connections. Berger explains that the brain is likely deeply non-linear - the ways in which these nonlinear channels interact with one another make for a deeply complicated second-, third-, and fourth-order nonlinear system.
Hockenberry wonders whether wiring is really the right metaphor for the brain. He asks Seung whether we can really understand what’s going on in a computer from a circuit map. Seung explains that the computer may simply not be a very good analogy for the brain - the brain has highly specialized computers for different functions, rather than a single-purpose computer used for multiple functions. And the brain rewires and regenerates continually, which makes the hardware metaphor an inexact one at best. It’s worth working on the connectome, he tells us, because it will be a clean, clear snapshot, rather than a blurry movie. But it won’t be the complete picture of the brain in action.
Minsky remembers a conversation with John von Neumann. How do you think of a computer or a brain starting up? How do these different pathways come into action? Seung offers that this isn’t a purely theoretical question - that’s what happens in cases of profound hypothermia. It’s possible to cool the brain to the point where activity almost ceases, and this is sometimes helpful in terms of medical procedures.
Seung suggests that the connectome is useful because it raises questions of “where” in the brain, rather than “how”- understanding where particular activity takes place is a first step in understanding how. Minsky wonders whether understanding brain function is a matter of developing a more sophisticated vocabulary. We have rich and nuanced vocabulary to understand emotions, which are worshipped by the humanities. Perhaps we need the nuanced vocabulary to understand thinking, which Minsky suggests is as rich, complex, and multifaceted. (In the background, while this conversation takes place, Joi and his team of Google Jockeys bring up streams of images - neurons, connectomes, a picture of Homer Simpson’s brain, a suggested Zombie food pyramid.)
Hockenberry asks about the role of consciousness in distinguishing between rational and emotional states. Berger explains that there are clear pathways in the brain that appear to be dedicated to adding emotional content to consciousness. Even though individual connections disappear and reappear - synapses whirl, pull out, and pull back in - there’s a global function that stays constant.
Minsky asks whether we’ve imaged brains while people are confronting different types of thinking: Zeno’s Paradox versus encountering overgeneralization. What might we learn from contemplating and studying different ways of thinking? Hockenberry suggests that we divide by zero all the time in real life, something computers handle very badly - Minsky suggests that we generally laugh when we encounter paradoxes, rather than exploding.
Asked what inspires him about nature, Seung notes that he’s inspired by the incredible complexity of networks. Berger is impressed by the hierarchical organization, and the stability despite that hierarchical organization. He reminds us how difficult it is to build stable systems that are also profoundly dynamic and flexible. It’s astounding to him that birds don’t fall out of the sky or explode. Marvin is amazed by the idea that organisms of trillions of parts can exist for so many years.