SkyNet awakens

The latest developments from the company that hypocritically tells itself, and us, not to be evil, are pretty damned impressive - and equally creepy:

Google today opened its annual I/O developer bash with details of how it’s going to lob machine-learning software at everything you do online and offline, and it truly means everything.

CEO Sundar Pichai took to the stage in Silicon Valley to explain how artificial intelligence will make life easier, safer, and more fun for everyone, provided they stay addicted to his company’s products. Not only will these AI systems simplify our lives, but they’ll also train us to use technology better.

There wasn't a lot of detail in the keynote – rather, it was a headlong rush through stuff Google is about to, or plans to, unleash on the world.

“Technology can be a positive force but we can’t be wide eyed about its impact,” he told the crowd. “There are serious questions being raised and the path ahead needs to be calculated carefully. Our core mission is to make information more available and beneficial to society.”

To showcase this, he showed off some of the augmentations coming to Google Assistant in the next few months, not least in the voice it uses. By the end of the year, when you chat with Google’s digital personal assistant, you will be able to pick one of six voices, including that of the singer John Legend.

In case you want to know what this new assistant looks like, here is a clip showing what socially awkward nerds (yeah, I know, pot, kettle, etc.) can do in creating a socially awkward AI:

Oh shit.

Turing Test, passed.

If I were on the other end of that call, I would not have been able to tell that I was talking to an exceptionally well-written computer program.

Now, it needs to be understood that AIs, as they are currently built and developed, are not self-aware, sentient machines. They are simply exceptionally complicated and (hopefully) well-written programs designed to process vast amounts of data and "learn" from those data sets.

It also needs to be understood that AIs are extremely good at doing things that are very, very hard for humans - and nearly useless at doing things that are trivial for us.

An AI can crunch unbelievably huge amounts of information and process it in an eye-blink. It can learn from human behavioural and speech patterns at speeds that boggle a biological mind.

That is why machine-learning algorithms can now, in some diagnostic tasks, spot and predict cancers more accurately than even the most highly skilled and trained human specialists.
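The mechanics behind that kind of system are, at their core, surprisingly mundane: fit a function to labelled examples, then score new cases by similarity. Here is a deliberately toy sketch of the idea in Python, using a nearest-neighbour rule on two made-up "tumour" features - every number and label below is invented purely for illustration, and real diagnostic models are trained on vastly larger data sets:

```python
import math

# Toy labelled data: (cell_size, cell_irregularity) -> diagnosis.
# All numbers here are invented purely for illustration.
training_data = [
    ((1.0, 0.10), "benign"),
    ((1.2, 0.20), "benign"),
    ((1.1, 0.15), "benign"),
    ((3.0, 0.90), "malignant"),
    ((2.8, 0.80), "malignant"),
    ((3.2, 0.95), "malignant"),
]

def classify(sample):
    """1-nearest-neighbour: label a new sample after the closest training case."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    nearest = min(training_data, key=lambda pair: distance(pair[0], sample))
    return nearest[1]

print(classify((1.05, 0.12)))  # near the benign cluster -> "benign"
print(classify((2.90, 0.85)))  # near the malignant cluster -> "malignant"
```

The "learning" is nothing more than memorising examples and measuring distance - which is exactly why the machine's advantage is scale and speed, not understanding.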

That is why AI chatbots can now design and build their own methods of communicating with each other and with humans - to the point where Facebook shut down two of its own chatbots after they invented a way of talking to each other that was incomprehensible to the developers.

Languages seem extremely chaotic and unstructured to human eyes. Any non-native speaker who has ever tried to learn English, for instance, can tell you how absurdly illogical the language seems to be. Similarly, any English speaker who has ever tried to learn French can tell you how difficult the irregular verb conjugations can be. And yet - figuring out those irregular conjugations is not that difficult if you have endless memory with perfect recall and every single rule is written down clearly for you. Likewise, Russian is challenging because it has a lot of complicated rules and a very foreign script - but the rules are finite, and once you learn them and can read the script, what remains is vocabulary. AIs can do all of this faster, better, and more effectively than any human.
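The "finite rules plus perfect recall" point can be made concrete: to a program, an irregular verb table is just a lookup, with the regular rule as a fallback. A toy sketch in Python - the four irregular verbs and the -er ending are a real but tiny slice of French first-person-singular present tense; everything else about the language is deliberately omitted:

```python
# Irregular forms are a finite table; perfect recall makes them trivial.
# First-person-singular ("je") present tense, simplified for illustration.
IRREGULAR_JE = {
    "être": "suis",
    "avoir": "ai",
    "aller": "vais",
    "faire": "fais",
}

def je_form(infinitive):
    """Return the 'je' form: check the exception table first, otherwise
    apply the regular -er rule (drop -er, add -e). Other regular verb
    classes are omitted to keep this toy example short."""
    if infinitive in IRREGULAR_JE:
        return IRREGULAR_JE[infinitive]
    if infinitive.endswith("er"):
        return infinitive[:-2] + "e"
    raise ValueError(f"no rule for {infinitive!r} in this toy example")

print(je_form("parler"))  # "parle" - regular rule applied
print(je_form("être"))    # "suis"  - memorised exception
```

What frustrates a human learner for years - which verbs are exceptions, and what their forms are - is, for a machine, a dictionary lookup that never fades or misfires.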

But AIs and robots are still useless at tasks that humans find simple.

Consider the act of making a cup of coffee. This is something that you and I find so trivial as to be unworthy of thought or comment. We just know how to do it.

Even the most sophisticated AI will find this ridiculously challenging.

Or consider the task of climbing a flight of stairs. A five-year-old can figure out how to do this without hurting themselves.

A robot, by contrast, has to be trained and programmed to understand what a stair is, how to navigate it, and how to deal with oddly shaped or defective steps. The leaps of intuition that are trivial for biological minds powered by neurochemical reactions between synapses are still not possible for AIs.

There is, justifiably, a lot of fear and concern around the concept of artificial intelligence. There bloody well ought to be. We simply do not understand what the full potential for AI really is. We do not know what a sufficiently powerful AI, with sufficient access to available data and information, could do to us mere mortals.

It is very easy to think up a scenario in which a military-grade AI, which by definition cannot understand "the value of human life", would regard human innocents near a bunker full of dangerous enemy weapons as acceptable collateral damage. Without strictly defined rules of engagement, such a bot would simply engage and slaughter everyone in the vicinity.

I'm not saying that humans would not do the same thing. We know quite well from the fallout of targeted drone strikes against suspected terrorists in the Sandbox and the Rockpile that human handlers of hunter-killer aerial drones often kill innocents in their pursuit of dangerous men.

But the human would understand the moral problem involved. An AI would not.

All of this being said - we are not, yet, at the point where SkyNet is going to become self-aware and kill us all by bringing about a nuclear Judgement Day. Our future does not yet involve a bunch of gleaming skeletal T-800s marching in perfect lockstep across a blasted battlefield, crushing piles of skulls underfoot in grim pursuit of the last remaining vestiges of human life.

But the potential for that to become a real scenario is very real, and very scary.

We have to better understand what exactly it is that we are unleashing here, lest we create a demon of our own design that we cannot control.

Otherwise, we might someday wake up to find that, instead of "merely" unleashing something like the T-800s, we will have unwittingly created something like the Necrons:


