When science writer Kara Platoni set out to explore the frontiers of research into human perception, she encountered an eclectic mix of people. It’s right there in the subtitle of her book, We Have the Technology: How Biohackers, Foodies, Physicians, & Scientists Are Transforming Human Perception, One Sense at a Time. She found scientists chasing after a sixth taste while debating whether you can taste something you don’t have words to describe. She met perfumiers treating Alzheimer’s patients with scents that allowed them to unlock memories long thought forgotten. And of course, she met biohackers eagerly slicing open their own bodies to implant DIY technology.
Her book covers a lot of ground: the five senses, what we think we know (and don’t know) about them, and how we might alter and augment them. It’s fascinating research that illuminates both how much we know and the vast chasm of ignorance looming before us. And the questions it raises—what’s the connection between mind and body? how are we defined by the limits of our senses?—are both perennial and particularly urgent today, when so much technology allows us to escape, adapt, and augment our bodies. In the interview below, condensed and edited, we discuss the frustrations of waiting for the future, why we ought to pause before letting markets be the sole drivers of technological progress, and why the age of the cyborg is already here.
It strikes me that in the book we meet three related groups, among others. There are the scientists and researchers who are experimenting, trying to understand how the senses work, and there are the people who have benefited from that research through, for example, an artificial eye. Then there are the biohackers, the people who are frustrated that the future can’t get here fast enough and who feel they just don’t have enough augmentations yet.
Right now, the biggest augments are reserved for people with special medical needs, and [there] are several reasons for that. One is that doctors have an injunction to do no harm—they take the Hippocratic oath. So they’re only going to use a very invasive technology or a very novel technology if there is no other treatment available to them, right? Which is why you can get a retinal implant if you are essentially totally blind from retinitis pigmentosa, but you cannot get one if you have normal vision, because it won’t help you. It won’t give you anything extra. It’s a last possible treatment for people who have exhausted all of their other alternatives.
But we are coming into an age where there are a lot of wearable technologies that an otherwise healthy person might wear for things that have nothing to do with medicine. For entertainment, for gaming, for movie watching, for giving you an added level of information about the world as you go through it. So these are things like augmented reality devices (smart glasses), smart watches, smart wristbands, even smart clothing, smart rings. And of course, virtual reality devices, which right now usually means some kind of goggles or glasses or helmet that you would wear that would occlude your vision.
Why can’t I have robot arms? Why can’t I get a bionic eye? Why can’t I enhance what I have even if I don’t have a medical need?
I talked with a lot of biohackers who are frustrated by this. They say, why can’t I have robot arms? Why can’t I get a bionic eye? Why can’t I enhance what I have even if I don’t have a medical need? One of the reasons, of course, is that doctors and medical researchers have this obligation to do no harm. But there’s a profit motive as well, which is that companies make products for the most common need or the most common desire. And there are not that many people who are saying, “I want an extra robotic leg.”
You bring this up in the book, too: Much of this is market-driven, and maybe we’re not asking some of the most important questions if we’re simply saying, “Well, if we can find the market for it, then we should be producing it.”
One of the first big products to market in terms of augmented reality was Google Glass. Even though Google did not necessarily consider Glass to be an augmented reality device, I think it was hailed as one. It was kind of the flagship product on the market. These were glasses that you could wear that would feed you information in real time to supplement what you could otherwise see. So I’m talking about any kind of glasses that would give you information about maps, or weather, or email, or directions, or tell you what kind of shops are around you. Anything like that, right?
A lot of the biohacker groups that I talked with and a lot of the tech critic groups that I talked with, one of the big criticisms was: “Basically, an algorithm is feeding you information about what’s around you, what’s going on; it’s guiding your perception, it’s subtly guiding your behavior. And you don’t know who programmed that, or what their intention was, what they added and what they left out.”
Adam Wood, who’s from the group Stop the Cyborgs, who was a big Google Glass critic, said, “Let’s say you’re walking down the street. You’re trying to find a good pub and up pops information on your glasses.” Well, you’ll never know which ones were left out, and if there’s a ranking, you’ll never know the criteria for the ranking. Basically the idea is that all technology is made by people, and we never know exactly what their motivations were or what their expectations were—what they knew, what they didn’t know.
You might pass this pub or you might go to another one. So there, it would have subtly influenced your behavior and you would think you were a free agent making the decision based on the best possible information available, but you never know what information has been left out or which information has been prioritized.
That was a very interesting criticism, I thought, of any kind of technology that would overlay information on your sensory perception. And at this point, we’re mostly talking vision.
So when I would talk to biohackers about this I’d say, “OK, what’s your solution? Do you think everybody should have augments? Do you think everybody should have the opportunity to get implants if they want them? How do you make sure that we’re not operating on the factory settings for devices that somebody else has made?”
And their solution was, “Oh you have to make it open source. Everybody has to be able to build it themselves. Everybody has to know what’s in it. Everybody has to know what the capabilities are. Everybody has to be free to modify or tweak it on their own.” Of course the obvious counter to that is most people don’t want to do that. Some people who have great technical skills, and knowledge, and the time and interest to do this would like to build their own devices. But a lot of people just want to buy them.
If big companies are going to start manufacturing augmented reality devices, virtual reality devices, eventually, perhaps implants that have something to do with sensory perception, we’re probably not going to know what’s in them because companies have a desire to keep information about what they’re building private. They want to be able to patent it. They want to have a competitive advantage over other companies. They don’t want the information to be out there so that anyone can build them themselves or so that their competitors know what’s coming and can one-up them. So that’s one of the big tensions that’s out there.
We’re probably not going to know what’s in cyborg devices because companies have a desire to keep information about what they’re building private.
One of the other big tensions was that the first generation of augmented reality devices tended to include a camera, which opens up all of these other questions about surveillance and privacy. Big questions about whether or not somebody could film you without your consent and what they could do with that footage. Could they post it on the Internet? Could they turn it over to law enforcement? That sort of thing. And I think the camera aspect was one of the reasons that Google Glass drew so much criticism.
That said, I talked to Eyeborg, who is definitely a cyborg. This is Rob Spence—he’s a guy from Toronto, Canada, who has been wearing a camera in his eye socket for a number of years now. He said, look, I have to make the same decisions with my eye camera that anybody has to make with a camera phone or any other small camera, and those have been around for generations. I have to make decisions about what to film or not. I have to make decisions about when to ask permission or if to ask permission.
And I said, I think the big difference is if I want to take a picture with my iPhone I have to hold the camera up. I have to hold it up to my face or at least I have to move it out of my pocket, move it above waist level and shoot. And that gives people an opportunity to see that I’m taking a photo and to do something that’ll let me know if they want me to do that or not, right? They flinch, they turn away, they smile, they close with their friends, and that gives me a cue about whether or not it’s agreeable to them. If the camera is in your eye, that is really different. It’s just part of your gaze. They may not be aware of when the camera is on or off. It may be harder for somebody to say no.
When I talked to one of the attorneys at the Electronic Frontier Foundation about this very idea, he said, yes, one of the big differences is that with a traditional camera, it’s a little bit clearer when the camera is off and on. And with augmented reality devices, some makers have made a light, or there has to be a hand gesture, or just some cue that the camera is filming, but there’s no industry standard and there’s no law about it. So that’s one of the differences. And as he pointed out, the camera could always be on. You could walk around and film for a long time the way you probably wouldn’t walk around with your phone held out in your hand for long periods of time.
He compared it to the early days of the camcorder; I think a lot of people have made this comparison. In the early days of the camcorder, some nerd would film an entire party. But there was a social backlash to that; people didn’t like it, people would get tired of it. So he thought that was a natural, social limitation on abuse.
You’re pointing out that much of how we use this technology is socially defined, often in subtle ways. But you’re also talking about technology foreclosing certain social choices for us—maybe we don’t know how an algorithm chooses which pub to recommend to us via Google Glass. But it also seems like we live in that world already, that algorithmically driven world where decisions are being made in ways that we don’t fully understand, from GPS to Netflix recommendations.
I think you’re right that we’re already living in a cyborg world, and we’re already living in an algorithmically driven world. I think some of the questions are how subtle is that influence, how noticeable is that influence, and how close is it to the body. So I set up the last three chapters to start with the most external technology, which is virtual reality. It still requires this very obvious helmet or set of goggles. In some cases, it requires going into this big room that is engineered to help give you this fantasy experience: floors that rumble, surround sound that makes you feel like you’re really in the middle of things. There are even virtual reality labs where they pump odors inside.
And then the next chapter is augmented reality, things that you wear on the body but not in it. So your vision isn’t occluded, but you could still have information being given to your eyes, being overlaid over your glasses. It’s rings and watches and other things that you wear on your body that are lightweight and socially permissible, but they still give you information that influences how you feel or influences your behavior. And then the last chapter was about the idea of technologies that would actually be inside the body, implants.
I want to make the argument that technology doesn’t just mean gadgets, and electronics, and computers. It means anything that people have built to assist us, to augment us, to make us better. So that goes all the way back through the sharpened stick, and all the way ahead to the stealth bomber, and the space station. And I am making arguments that language is a technology, and that culture is a technology, and that perfume chemistry is a technology, and that a medicine like Tylenol is a technology.
Technology doesn’t just mean gadgets, and electronics, and computers. It means anything that people have built to assist us, to augment us, to make us better.
And we use all of these to alter and shape our perception—and we have for a long time. They’re just not as noticeable or as dramatic as these new electronics that we wear on and in the body. But they’re there. We’ve always socially pressured one another. We’ve always given each other selective information. We’ve always rewarded behavior that we consider beneficial, and punished or ostracized people for behavior we consider bad. This is what people do when you live in a collective and we’ve done it for a long time.
But now we can do it with these new machineries that are worn very intimately and very consistently. I think probably a lot of us might already be wearing a Fitbit, which you just wear on your wrist all the time. This year, probably quite a few people adopted smart watches. I would consider the cellphone, in some ways, cyborg technology. It goes in your pocket, you wear it on your body, and it can track your location. It has an accelerometer; it can track how much exercise you’ve gotten. It can track your online viewing habits.
But of course, it was really hard when I was writing this book to come up with some kind of ultimate definition of what a cyborg is. I think you can get everybody to agree that it’s some commingling of machine and flesh. But nobody will agree on the threshold: is it a ratio, where you need more machine than flesh? Or does any amount of technology make you a cyborg?
When I talked to Rob Spence—Eyeborg, the guy with a camera in his eye socket—he said any technology makes you a cyborg. If you put on a T-shirt, that’s a technology because it protects you from the cold. You are more than a naked ape. You are wearing a technology that protects you from the elements. A lot of other people would say, well, maybe that’s too far, but what about all these other ordinary medical devices that we have now—the pacemaker, the cochlear implant, eyeglasses?
I really thought a lot about eyeglasses because I wear them. I will say honestly that my face feels naked and strange without the glasses on because I’ve gotten used to them. I consider my vision with the glasses on to be my normal vision because it restores the way my vision used to be when I was younger, before things got fuzzy. Now when I take my glasses off, that’s when the world looks wrong, and when my vision seems to be impaired. Of course, it’s totally the opposite. The fuzziness is my natural vision—it’s just the limits of what my biology can accomplish, and the glasses augment it. It feels the other way around.
And then of course other people said, look, if any technology makes us a cyborg then we’re all cyborgs, and that has no meaning. It’s too broad. You might as well just say “human”—that we’re kind of an engineering, maker species and this is what we do. Among the people I talked to there were various degrees of skepticism versus enthusiasm about the idea of us all being cyborgs, or all recognizing ourselves as cyborgs. Often the enthusiasts would say: this is just what people do. We make technologies that give ourselves a competitive advantage, and if it works everybody else around us has got to catch up or be left behind, and it’s been this way since the taming of fire, and it will always be this way. So I’m going to augment myself as much as I can because I don’t want to be left behind. And they would say, this is a chance to engineer your own evolution. This is Darwinism at its finest. We now have a choice in how we engineer the future of our bodies.
And then I would talk to people like Gregor Wolbring, who’s a disabilities and abilities scholar from the University of Calgary, who studies a lot of technologies that are meant to be assistive—they’re developed for people with medical needs. And there’s a wide range of opinions about whether or not these are helpful and welcome, about whether or not they ultimately just put pressure on people to conform to what’s considered normal. And Gregor would say we keep changing the standard of what a “normal body” can do: what you have to do to be seen as productive, what you have to do to be seen as functional, as a member of society. Even just a few generations ago nobody had to know how to use a computer, nobody had to participate in the Internet. But now you have to do all of those things if you want a middle-class job, if you want to get an education. He said you can opt out, but there’s a social and a financial consequence to your life if you say no. So, for example, Gregor would have no problem with someone giving themselves 10 legs. But the problem comes when 10 legs are seen as more productive than two, creating a social pressure. He would say we keep raising our expectations of what we should be able to do, and then everybody has to rise to that level or be left out.
He and people from Stop the Cyborgs both say, OK, augmented reality gear might be really cool; right now it’s kind of for fun, and it’s for entertainment. But what happens if your job requires you to wear it? What happens if your school requires you to wear it? Then you have to be able to purchase it, and you have to be able to use it. And there are all kinds of things that haven’t been tested in the mass market yet, so what would it be like to have these things used by thousands of people? Will they be enormous distractions? Will they make everybody seasick? All of these things might mean that some people can’t comfortably use them or won’t want to use them. Will they be left out? It’s an interesting question. In fact, one of the first places that augmented reality glasses are being used and seem to be creating a niche for themselves is in the workplace.
But what happens if your job requires you to wear augmented reality? What happens if your school requires you to wear it? Then you have to be able to purchase it, and you have to be able to use it.
They are being used, for example, by warehouse workers, who use them for picking instructions. Basically, information comes up over your glasses that says, “Here’s how many things have to come out of this box; here’s how many things to ship out to the customer.” They are also being promoted as tools for emergency workers: they would allow you to stream video from multiple remote locations to one person’s glasses. If somebody was responding to a hurricane, they could get video feeds from lots of other first responders and would know where to find things, where the downed power lines are. They’ve been developed for medical use, too, so that a doctor who is interacting with a patient might get information about the patient coming over his glasses, or information about previously taken X-rays and other records; you get quick overlay information about the body.
The workplace has been one of these places where people might have to adapt to this. It might become a job skill the way using a computer or a cellphone is a job skill now.
This question has a real urgency right now, both for people who want to ask some serious questions about what kind of society that’s going to lead to, and for the people who want to bring about that society as quickly as possible—at least for their own kind of very individualized ideas about what it will look like. For both camps, that urgency seems to come from a sense that we’re somehow almost there, that this is the most sci-fi time to be alive and we’re all on the cusp of this next big thing.
I think there’s very much that feeling out there that the wave maybe hasn’t crested yet, but it’s here. Now is the time for us to talk about what we want technology to be capable of, how much we want it to be able to do, what limits we should put on it—if there should be a limit. We should talk about these things before it’s simply a consumer choice, before it’s simply a question of “do I buy it or not?” or “which model do I buy?” or “which color do I want to buy it in?” To be honest, that’s the choice most of us have about a lot of our electronic devices. Do you want the chrome one or the black one? Which size do you want? We know very little about what’s happening inside of it, and what decisions have already been made by somebody else about what it can do.
Illustration by J. Longo