“Shall we play a game?” Most of us remember that line from “WarGames” back in 1983 [1]: Matthew Broderick and Ally Sheedy racing against time to discover how to avert the countdown to “Global Thermonuclear War” they had set in motion by hacking into WOPR, a lonely computer that only wanted to play a game. A computer that just happened to be in autonomous control of our nuclear command and control system. Fast forward 35 years, and some experts in the field, including our very own Iron Man, Tony Stark, played in the real world by Elon Musk, believe that Artificial Intelligence (AI) poses an existential threat to humanity in the not-too-distant future [2]. AI has become ubiquitous. It exists everywhere. It has enabled Musk’s empire, and those of others who wish to cash in on AI, such as Google, Netflix, and Amazon, and it will enable many more empires moving forward. Yet we are only in the second generation of AI. So let me explain AI for a moment.
The first generation of AI, the kind that supposedly enabled WOPR, didn’t exist back in 1983. It was only science fiction. That said, WOPR’s AI was not only fictional then; it remains so. 1st generation AI was driven by logic algorithms: vast nested series of “IF-THEN-ELSE” statements stacked up with data so that they could be automated to answer questions faster than a human could respond. These were called expert systems. The larger the expert system and the faster the processor, the more options that could be considered. A doctor, for instance, culling through a library of symptoms looking for a diagnosis, could be outpaced by the computer. What was missing was intuition. A series of logic gates, no matter how vast, will still lack human intuition. No matter how fast the processor or how much data was thrown into 1st generation expert systems, intuition couldn’t be programmed. Oh, they tried...entire fields of mathematics were developed. Fuzzy Logic [3], for instance. How do you program a computer with the nuances of the human mind? Is the brain analog, or can it, at its core, be reduced to digital? The debate continues. The push was for more memory, more code, faster speeds. On the digital side, all of that was most certainly achieved. The technology we hold in our hands every day is, as Arthur C. Clarke told us, sufficiently advanced as to be indistinguishable from magic [4]. Less so on the analog side. Physical things make more intuitive sense. Automobile brakes, for example. The real magic of disc brakes is the ability to turn kinetic energy into heat. You don’t need to know the first law of thermodynamics [5] to mechanically intuit that when you step on the brake and squeeze the discs, the wheels will slow down. The control system is the driver, with his foot on the brake, and the driver’s ability to sense the environment. The magic is in the energy conversion in the physical system...nobody really cares about that...but I digress.
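To make the idea concrete, here is a toy sketch of what a 1st generation expert system amounts to: a fixed table of IF-THEN rules marched through in order. Every symptom and diagnosis below is invented purely for illustration, not medicine:

```python
# A toy 1st generation "expert system": a fixed table of IF-THEN rules.
# All rules and symptoms here are made up for illustration only.
RULES = [
    ({"fever", "cough", "aches"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough"}, "common cold"),
]

def diagnose(symptoms):
    """Return the first diagnosis whose required symptoms are all present."""
    for required, diagnosis in RULES:
        if required <= symptoms:        # subset test: every required symptom seen
            return diagnosis
    return "unknown"                    # no rule fired; the system has no answer

print(diagnose({"fever", "cough", "aches"}))  # flu
print(diagnose({"sneezing"}))                 # unknown
```

Notice that nothing in this system ever changes: it answers fast, but it is never any more intuitive than the rules somebody typed in.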
So first, a little analog feedback control theory from the physical domain. Before computers became ubiquitous (I’ll get to digital systems in a moment) we had already achieved major advances in technology through analog control theory. In fact, most industry of the 20th Century grew up in the midst of industrial control theory. But then, much like the computer, industry came home. The rheostat became the ubiquitous development that brought control theory into the living room. Think about dimming light bulbs and the dial on your toaster. That is the pivotal device that allows for control of the voltage performing the task at hand, setting the mood or browning your toast. Good control theory requires feedback. In the case of the mood lighting, the feedback is provided via a human setting the temperature of the lights. If you can think back to a time when lights couldn’t be dimmed, the lights were either on or off. There was no middle ground. Same thing with sliced bread. You could eat it burned or not quite toasted. The trouble was, it was up to the human to have a well-defined feel for the temperature control. To get it right, some math is required. If you automate the feedback loop to the toaster, sensing the temperature of the toast and feeding that back to the temperature rheostat, it is possible to achieve perfect toast, every time. In control theory this is called either proportional control, where the power is reduced in proportion to how close you are to the target temperature, or integral control, where some math occurs: calculus is used to integrate the feedback from a sensor into the signal, ramping the power over time from max down to zero, resulting in perfect toast. Toasters still don’t work that way...although they could. A timer is typically what we use in the kitchen. A timer drives what is referred to as a bang-bang controller: it simply decides whether the heating element should be on or off.
That’s actually not feedback at all; it’s called an open-loop system, and it requires a human to make a few estimates on time and then simply set the timer for the required burn on your bread. If the human is in the middle, sensing the system, however, you can call it a feedback loop.
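For the curious, the difference between a timer-style bang-bang controller and true proportional feedback can be sketched in a few lines. The toaster model below is invented for illustration (made-up heating and heat-loss constants), not real appliance physics:

```python
def clip(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def simulate(controller, steps=600):
    """Crude toaster model: the element heats the bread, the bread leaks
    heat back to the room. All constants are invented for illustration."""
    temp = 20.0                                    # start at room temperature
    for _ in range(steps):                         # one-second time steps
        power = controller(temp)                   # controller picks element power, 0..1
        temp += power * 1.5 - (temp - 20.0) * 0.005
    return temp

bang_bang = lambda temp: 1.0 if temp < 170.0 else 0.0    # timer-style on/off
proportional = lambda temp: clip(0.04 * (170.0 - temp))  # power proportional to error

print(round(simulate(bang_bang)))      # chatters right around the 170-degree target
print(round(simulate(proportional)))   # settles short of 170 degrees
```

Proportional control alone always settles a little below the target, which is exactly why the integral term exists: it accumulates that leftover error over time and trims it away.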
The cars we love to drive are chock full of devices that allow the driver to be the feedback loop for the system. Steering, speed, and braking are the primary ones. Automation came first in the case of intermittent wipers, then cruise control...to name the big ones, but heated seats, the climate control system, even automation of the volume on your radio, joined the fun. Driving a car is one huge feedback control loop through the driver’s physiology, all controlled by your brain through to the controls of the car. All drivers have a terrific ability to integrate time and distance in their minds automatically...without taking a single class in calculus. As you pull up to a stop light, for instance, you don’t slam on the brakes. You bring the car to a stop, estimating the distance to the car in front of you, applying pressure to the brake, culminating in a slow roll just before the car ceases motion, with perfect separation from the car in front of you, and without thinking about it. If you think about it, you will probably jerk the car to a stop. But we do all this math instinctively. For a car to do this on its own is truly a modern miracle of AI and of the adoption of digital processors to take over the math. And now I’m talking about 2nd generation AI. 1st generation AI, which culled through a database using an expert system, didn’t quite get us there for automotive applications, with the possible exception of some early map-driven navigation decision aids. Most automotive control stayed analog until digital processors became affordable.
Thus, once committed to a digital processor, the expert system, which would not work in a car, was no longer applicable. A new type of AI was necessary, and so it was born. Learning algorithms, those that can be trained by data, became the second generation of AI. This type of learning AI is what is currently setting technology on fire. It is what Netflix is using to offer you what you would like to watch, what Amazon is using to tell you what you would like to buy, what Google is using to tell you what you are searching for, and of course it is what Elon Musk is using in his Teslas to bring us as close to a driverless car as we have ever been. 2nd generation AI is based on deep neural networks lashed together with search algorithms and optimizers. They aren’t actual neural networks like what’s in our brain, and to argue that they work like the human brain is a sham. They are, indeed, networks: a dense series of branches or paths through a logic tree. The difference between neural network logic and expert system logic is that the values on each branch of the decision tree can change slightly over time. By not locking in the values, the belief is that some of the nuance of human intuition can be achieved by training those values on more data and then optimizing. An expert system is trained by evaluating the responses of several experts. A neural network is trained by continuously evaluating the responses from all the data (thousands or tens of thousands of responses) available and sensing the outcome in terms of success or no success. In this manner, most systems that use this type of AI to drive their feedback loop can achieve operational success that rivals human operation, and in many cases can outperform a human operator. Sadly, and this is where my battle with AI begins, the technology is still a long way off.
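A minimal sketch of those "values that change slightly over time": one artificial neuron whose weights get nudged after every example, with success or failure as the only feedback. The task (learning a simple OR rule) and every number here are my own toy choices, nothing to do with any real product:

```python
import math
import random

random.seed(0)  # make the toy run repeatable

def predict(weights, inputs):
    """One artificial 'neuron': a weighted sum squashed to the range 0..1."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def train(examples, steps=5000, lr=0.5):
    """Nudge each weight slightly toward whatever reduces the error:
    training by 'success or no success' rather than hand-coded rules."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(steps):
        inputs, target = random.choice(examples)
        out = predict(weights, inputs)
        for i, x in enumerate(inputs):
            weights[i] += lr * (target - out) * x   # small correction per example
    return weights

# Toy data: learn an OR rule. The first input is a constant 1.0 bias term.
data = [([1.0, 0, 0], 0), ([1.0, 0, 1], 1), ([1.0, 1, 0], 1), ([1.0, 1, 1], 1)]
w = train(data)
print([round(predict(w, x)) for x, _ in data])  # [0, 1, 1, 1]
```

There is no rule table anywhere: the behavior lives entirely in three numbers that drifted into place during training, which is also why you can’t open the box and argue with it later.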
Enter NEST into my family. As an avid early adopter (I’m not an innovator) I already have a Google NEST Thermostat installed everywhere in my life. The main living area of my house, the upstairs zone, and now I have one down at our beach house. Some of you might think that using a NEST makes me an innovator. Nope, innovators were the nerds programming their Raspberry Pi to control the temperature of their house ten years ago. I am just an early adopter. The true beauty of having a NEST is that you can set the temperature of your house from your phone. And I do that, more often than not, to annoy my wife. Or to keep my daughter from turning her room upstairs into a sauna. The beach house is obvious. I can monitor and set the temperature from 212 miles away courtesy of the internet. NEST makes the claim that it can save you money on your electric bill. NEST will monitor your heating and cooling habits, learn from your habits, and then by sensing the environment and the other zones in the house, will be able to optimize control of the temperature settings, to provide the most efficient energy plan.
2nd generation AI is what enables NEST to take control of your thermostat and, in theory, your happiness at home, both day to day and when you have to pay the bill. Here’s the trouble. Give me a light switch and I’ll turn the lights on or off. Give me a rheostat, and I’ll set the mood of the lights in the room. I’m the feedback loop...just like in a car. My brain is the sensor, integrating how I feel with how many photons I need to see reflecting off the book I’m reading in front of me. It’s integrating the light in the room with the temperature of my skin and the number of calories I just ingested. It also adds in the fogginess of my brain in the early morning, the amount of light coming in through the window, and whether I’m barefooted, wearing socks, or wearing my pullover fleece from Old Navy. Here’s another factor. My daughter is never wearing socks and is always dressed in a single layer (note the sauna temperature referenced for the upstairs zone). And my wife always looks like she’s dressed for an Arctic expedition. NEST doesn’t know any of this, yet sets to work, with a happy neural network, a lot of technology, and a lot of math, trying to predict the temperature of my house while trying its best to keep me happy. That’s all bullshit. I was happiest with NEST and all the AI I allowed into my home for about a week. During the training phase of the thermostat, NEST responded readily to my commands. I set it to 68 when I go to bed; it records that, check. I set it to 72 in the morning when I wake up; it records that, check. I leave the house for work; it senses I’m not at home, check. Then of course there are weekends, when everything changes. NEST knows the day of the week, so it tracks that it’s the weekend, check. NEST is building my own personal neural network. Sounds awesome. But NEST is also watching the habits of my wife and daughter. Wait, they didn’t go to work. They woke up a few hours later.
Oh, turn the thermostat back up at 10 am. Nobody ever went to work...oh, they left to go shopping. Turn the thermostat down at 1 pm. You get the picture. Remember back in school when you took computer programming and they told you, garbage in, garbage out? Well...it appears that if you program a neural network with garbage you now have a garbage neural network.
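Nest’s actual learning algorithm is proprietary, so here is a deliberately crude stand-in that shows the garbage-in problem: "learn" a schedule by averaging whatever setpoints were observed at each hour of the day. The week of observations is invented to match the story above:

```python
from collections import defaultdict

# Hypothetical training week: (hour, setpoint in degrees F) events the
# thermostat observed. I'm consistent; my wife and daughter are not.
observations = [
    (6, 72), (8, 62), (17, 72), (22, 68),            # Monday: my routine, off to work at 8
    (6, 72), (8, 74), (13, 62), (22, 68),            # Tuesday: nobody left; heat back up
    (6, 72), (8, 62), (10, 74), (13, 62), (22, 68),  # Wednesday: the shopping trip
]

# "Learning": average the setpoints seen at each hour of the day.
by_hour = defaultdict(list)
for hour, setpoint in observations:
    by_hour[hour].append(setpoint)
learned = {h: sum(v) / len(v) for h, v in sorted(by_hour.items())}

print(learned[8])  # 66.0 -- a setpoint that nobody in the house ever asked for
```

Feed a learner three conflicting routines and it dutifully learns a fourth routine that belongs to no one.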
At this point, NEST is in control of my life. Now that this artificial intelligence is in charge, it’s tough to take control again. Try to change the setting manually and NEST will tell you to wait two hours. The number of times I give the downstairs NEST the middle finger has been growing steadily since adoption. I’ve grown accustomed to just trying to screw with Her. I call her a “Her” because, with the disembodied voice of Alexa taking over as our personal assistant, I now imagine all personal assistants talk like Alexa. I crank the thermostat up to 80 degrees. I crank it down to 50 degrees, trying to get her to respond. Fuck you, NEST, I want some heat in this house; a polar vortex is coming. Well...now the question is, after all this abuse, will my NEST become self-aware and kill me in my sleep? No matter how many times I scream at her, she never seems to get upset. When I first brought the NEST Smoke Detector into my life, I figured having a Google-enabled smoke detector made great sense; the safety of my family, after all, is paramount. And Google knows best. How many more times will she allow me to tell her to fuck off before she tells me to go to Hell and fails to alert the family when carbon monoxide levels have become lethal? Presumably because, when the polar vortex rolled in and I was unable to change her setting sufficiently to warm the house, I brought in the wrong type of space heater ... the kind we are constantly warned about.
When will NEST become self-aware and kill your family? The answer: never. Not self-aware, anyhow. Not with 2nd generation AI. I can fight with her...but she’s not SkyNet [6]. She’s not WOPR. She is far from being self-aware. We are still in charge, if they will let us be. If we depend on 2nd generation AI, it’s true, we can die. Sadly, we’ve seen what happens when an early adopter set his Tesla into Autopilot mode while cruising into the sun. The AI didn’t see the tractor trailer making a left and stretching across the highway. Joshua Brown became the first early adopter of self-driving cars to die [7]. The scenario that led to this fatality is what is called a novel experience. It was the first time the AI had encountered those particular circumstances with all of its sensors and algorithms. It didn’t have a response. It did nothing. There are countless novel experiences that the databases of automated driving systems must encounter in order to be trained for basic skills. There are an infinite number of novel experiences still to come. Fortunately for us mortals, we are not training self-driving algorithms...we are training our NEST at home to keep us warm. The “experts” are training the self-driving cars. Those neural networks are being developed by engineers...and presumably engineers who also happen to be good drivers. I want my self-driving AI to be developed by German automotive engineers...not by engineers at Google, for instance. And although many of their algorithms started from a common set of algorithms and databases, each developer has modified their version to get a leg up on the competition. My money’s still on the Germans. Just sayin’...
Back to the battle. To me, the last thing we want to do, while implementing 2nd generation AI, is get into a pissing contest with it. Safe enough when you are standing cold, wet, and naked in your hallway, yelling at your NEST and telling her to get a life, using only hope as a strategy to get the heat back on. But we shouldn’t be battling AI in our cars. Have you ever tried to get your phone to sync with your car while you are driving? The safety features do not allow you to do this while the car is in motion. So you do it through the hands-free feature, every time you come to a stop light. Just to be safe. I’ve never been so close to death as while attempting to do this in a rental car, on the unfamiliar highways of Dallas, while trying to listen to WAZE give me directional commands at the same time. Just for the record, Audi allows you to fuck with everything while driving. They know that scenario, and they have smartly hedged in favor of the humans with a brain. They seem to understand the driver’s experience, and they are out in front.
Thus, 2nd generation AI isn’t so much an existential threat to humanity by way of SkyNet as it is a threat that simply comes with the adoption of new technology. The first people who drove cars died: the speed, the lack of understanding of control, the loss of control, the lack of seat belts, and so on. It’s the technology that will kill us first...before we figure it out. It’s not that the technology will become self-aware and end us. Isaac Asimov’s First Law of Robotics [8], which prohibits a robot from harming a human or, through inaction, allowing a human to come to harm, is hopelessly mixed up. Arguably a robot just allowed the death of Joshua Brown through inaction. The two rules in this one law cannot be combined...it must be split into those two very different parts, and each dealt with accordingly. Inaction is very different from action. But was it inaction, or simply garbage in that allowed the accident? The robot had nothing to do with it. The real law here belongs to Clarke, not Asimov. What appears to be magic, like the self-driving car or the robotic thermostat, is an illusion of intelligence brought about by 2nd generation AI. This is the AI that we must ourselves combat, because it is all an illusion. The magic is hidden from us, thus we cannot fight it. I cannot fight with the algorithm in my NEST because the math is hidden inside a black box. I lose every day. With 1st generation AI, I could at least override it and input my wishes manually if I so desired. Self-driving cars must continue to work this way for some time to come: step on the brake or turn on the blinker, and the vehicle will yield to the driver. And perhaps this is my single biggest complaint about my NEST...and thus my rant. There is no manual override...except to pull the plug...the Germans have already figured this out...
As for 3rd generation AI and the coming apocalypse? Ask anyone in technology right now...there is no 3rd generation AI. The self-replicating automatons of Norbert Wiener [9] are not yet a thing. We haven’t invented them yet. That said, 2nd generation AI might be all we need to put humans out of work. Is that the apocalypse? Or will humans just have to get better at using AI as their assistant and thus become more productive as humans? Consider what we’ve just learned from the poor dude at the keyboard who sent the “Missiles are flying” message to our brothers and sisters in Hawaii recently. This week he spoke up and said he thought he was doing the right thing. He didn’t hear the “Exercise, Exercise, Exercise” message; he only heard the part of the drill that said “This is not a drill,” and he pushed the button [10]. That wasn’t the most significant problem. Once the autonomous system took over, primarily to send out several million text messages and emails, the most significant problem was that there was no way to retract the message. It took them 38 minutes to figure out how to pull the plug. That is the essence of the current battle with AI. It is not time to welcome our new robot overlords. It is time to make sure garbage in can be controlled with an off switch. “Hey Google, put a fucking reset switch on my NEST, will you?” As for cars, if you sense you are an early adopter, fuck Tesla, go buy an Audi...
[1] https://en.wikipedia.org/wiki/WarGames
[2] https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk
[3] https://www.britannica.com/topic/fuzzy-logic
[4] https://www.brainyquote.com/quotes/arthur_c_clarke_101182
[5] https://en.wikipedia.org/wiki/First_law_of_thermodynamics
[6] http://www.latimes.com/opinion/op-ed/la-oe-webb-skynet-terminator-trump-20170829-story.html
[7] https://www.wired.com/story/tesla-ntsb-autopilot-crash-death/
[8] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
[9] https://mitpress.mit.edu/books/god-golem-inc
[10] http://abc7.com/worker-who-sent-hawaii-alert-was-100%-sure-it-was-real/3025254/