
“Big Brother is watching you” is a catchphrase for the risk of large-scale surveillance. With widespread deployment of video, we could identify criminals walking down the street, and the same technology could warn us against stepping into traffic. But the same stuff could also help people stalk or spy on others, and maybe expose secrets we’d just as soon keep hidden. Given that the average person thinks everything can be hacked, and that many think government is already trying to spy on us, it’s not hard to understand why companies are reluctant to promote the use of see-all-know-all technology, even for narrow uses.
Narrow uses such as what? One of my regular contacts is a fairly big-name labor lawyer. I asked her about using video monitoring to guard against workplace accidents, and she said “every union would be afraid it would be misused, and every employer would deny that while jumping to misuse it.” Another contact told me that the extensive video monitoring needed to facilitate safe use of autonomous vehicles would almost surely face lawsuits from privacy advocates, supported by legions of people who are often where they’re not supposed to be.
Privacy is important to all of us. So are safety, health, and life. We may be reaching a stage in technology evolution that demands we decide how we balance these things against each other. Is the fear of AI running amok an example of this sort of concern? I think it is. And I think that long before AI could rise up and threaten us with extinction, it could rise up and save us, or expose us. We’ve had pressure to create guardrails on AI, but that pressure has largely dodged the broadest, most impactful, and most immediate risk – the ability of AI and video, combined, to let the real world, including each of us, be watched by technology.
The obvious answer to this problem is governance: a set of rules that constrains use, and technology to enforce them. The problem, as is so often the case with the “obvious,” is that setting the rules would be difficult, enforcing them through technology would be harder, and getting people to believe in that enforcement might be hardest of all. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab ran a video-collaboration experiment that put a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they arrived. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in.
Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential to letting technology further improve our work and our lives?
We do have widespread, if not universal, video surveillance. On a walk this week, I spotted doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas. I wonder how many people worry that those doorbells are watching them while they’re in their yard. Fewer, I’d bet, than worry about AI rising up and killing them, and yet the doorbells are real and predatory AI is not. By that logic, we could dismiss this sort of thinking and stop covering our webcams. Could we become comfortable with universal video oversight? Maybe, but it would be better if we could find a solution to the governance dilemma.