More than just intelligence. Here’s why we should embrace AI.

Have you seen a short film called Sunspring? It's a rather insipid tale about the future, with three characters mulling over love, revenge, and having to "go to the skull". The nine-odd-minute film was directed by Oscar Sharp and made as part of the Sci-Fi-London film festival's 48-hour film challenge. All in all, if you haven't seen it yet, you haven't missed much.

So, if the story is nothing to rave about, the acting is no great shakes and the direction is so-so, why are we discussing Sunspring? Well, because of Benjamin, who wrote the screenplay of the movie. And no, Benjamin is not some celebrity writer or Pulitzer Prize winner. You see, Benjamin happens to be a rather nondescript piece of technology: a recurrent neural network of the kind known as long short-term memory, or LSTM for short.

Simply speaking, Benjamin is your friendly neighbourhood Artificial Intelligence, or AI: a piece of technology that is able to learn and create. In this case, it crawled through hundreds of scripts from the 80s and 90s to come up with this one.
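For readers who like to peek under the hood, here is a minimal sketch of how a character-level LSTM of the Benjamin variety can be trained on a pile of old scripts and then coaxed into generating new lines. This is emphatically not Benjamin's actual code; the file name scripts.txt, the network size and the tiny training loop are assumptions made purely for illustration, written here in Python with PyTorch.

```python
# Illustrative character-level LSTM text generator (not Benjamin's real code).
import torch
import torch.nn as nn

text = open("scripts.txt", encoding="utf-8").read()   # hypothetical pile of old scripts
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}             # character -> index
itos = {i: c for c, i in stoi.items()}                 # index -> character

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly show the network a random 100-character window and ask it
# to predict the next character at every position.
seq_len = 100
for step in range(1000):                               # toy number of steps
    i = torch.randint(0, len(text) - seq_len - 1, (1,)).item()
    chunk = [stoi[c] for c in text[i:i + seq_len + 1]]
    x = torch.tensor(chunk[:-1]).unsqueeze(0)          # input characters
    y = torch.tensor(chunk[1:]).unsqueeze(0)           # same sequence shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Generation: feed a seed line, then sample one character at a time from the model.
def generate(seed="INT. SHIP - NIGHT\n", length=300):
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed if c in stoi]])
    result, state = seed, None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            result += itos[nxt]
            idx = torch.tensor([[nxt]])
    return result
```

Crude as this sketch is, it captures the essence of what a system like Benjamin does: predict the next character, over and over, until whole scenes fall out.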

This is the first time ever that a screenplay has been written entirely by an AI. It is a giant leap in that regard: AI has become intuitive enough to move into the artistic space and create content on its own. That is quite unlike, say, winning a game of chess or even beating contestants on Jeopardy.

Elementary, my dear Watson

Artificial intelligence has been around for quite some time. The Merriam-Webster dictionary defines artificial intelligence as "the capability of a machine to imitate intelligent human behaviour". When John McCarthy coined the term in the 1950s, he meant something a bit different, describing it thus: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Thus, since the 50s, AI has moved from understanding human intelligence to mimicking it. It is no longer just about software or hardware, but something more encompassing. Effectively, AI could now be an intelligent piece of software, a supercomputer, a cloud-based system, or even a smart robot.

Yet, even as the definition evolves, certain core characteristics do not really change. We can broadly characterise AI as a system that is able to remember and learn without much external input: a self-learning system that learns from its successes and failures. Think of how IBM's Deep Blue defeated grandmaster Garry Kasparov in a rematch in 1997, after having decisively lost to the master in 1996. It seemed to have evolved, learnt from its flaws and analysed its opponent's strengths.

But then, don’t mistake AI to be a costly proposition, the kind that exists in Deep Blue or IBM Watson. It can be much nimble and ubiquitous. So, your input keypad on the mobile device that remembers the colloquial and vernacular terms used and does not auto-correct them is a form of AI. The mobile assistants Siri, Google Now, Cortana, are also AI. Meantime, the self-driving car that uses concurrent data from sensors all over the car and manages to navigate is an example of a rather complex and a bit more advanced form of AI.

Yet, AI is not just about remembering words or navigating roads; it has a much broader reach and depth, ranging from probing the string theory of the universe to, say, splitting the atom. There is no limit to where AI can be applied, from the puny mobile phone to the massive Hubble telescope. These days, Google DeepMind's AlphaGo is defeating Go champions, IBM Watson is ever keen to take up challenges in games or discussions, Intel is using deep learning to make machines smarter, and GE is using Predix to create brilliant factories and digital twins.

And if that is not enough, we are also on the cusp of an AI consumer revolution. There are scores of products and prototypes in the market that are making AI ubiquitous and also cute. Everyone knows about Asimo, the humanoid from Honda. But there's a new cutie in town, a little buddy called Cozmo that makes faces and plays with you. Or you can adopt Aibo, the robot dog from Sony, as a pet. There's also Anki's amazing AI-driven racing game, Overdrive. Grammy award winner Alex Da Kid uses AI to create music. And shortly, we will have Moley, a fully automated robotic kitchen that will cook up any dish we desire, given the right ingredients and a connection to the web.

AI is not really artificial anymore; it is becoming a much more natural part of our lives.

Tool for doom?

The quirky thing about AI is that for as long as it has been around, it has amazed and horrified people in equal measure. At one end, researchers from top-notch universities are working out ways for computers to gain cognitive abilities and solve problems; at the other, the prophets of doom keep warning against a self-aware system that could threaten our very existence as a species. Since it is well-nigh impossible to code intangibles like values or ethics into software or an operating system, these doomsayers paint AI as an existential threat. And thanks to all those Hollywood flicks in which the machines are hell-bent on wiping out mankind, people are all the more inclined to believe that scenario. Remember Skynet?

The interesting bit is, when people like Stephen Hawking, Elon Musk and Bill Gates start warning about the threat AI could pose, you have to take it seriously. So the question arises: should we invest in AI, or in an underground steel bunker that can withstand a nuclear explosion?

The Luddite way

Before we get to building a survival shelter against the machines, let's time-travel to the past, to March 11, 1811, in Nottingham. It's night time, and a band of ragged workers forces its way into a small textile unit. They aren't here to loot or plunder; they viciously attack the newly installed knitting machines, breaking them down and setting them on fire. Largely displaced by textile machinery that automated weaving, these ex-workers were protesting against the use of machinery in the industry. Dubbed Luddites (after a fictional leader, Ned Ludd), the protesters spread their movement across England, attacking, breaking and burning hundreds of knitting machines.

The British government reacted quickly to these attacks, positioning thousands of soldiers to defend factories. Parliament even passed a measure making machine-breaking a capital offence. In time, the protests were brought under control, though many Luddites lost their lives or were sent to the gallows. The Industrial Age had truly begun, and no Luddite could stop it.

In the end, most of the fears that gave rise to these protests proved unfounded, as the advent of the industrial age in England and Europe vastly expanded the economy and, subsequently, the income of the populace. In fact, the 19th century was the first time in human history that the human population increased at an exponential rate. From some 700 million at the start of the Industrial Revolution in the 1700s, the world population grew to 2 billion by 1927. That's a phenomenal 185% jump!

Oddly, today we seem to be at the same crossroads the 19th century faced when the Industrial Age set in. We are on the cusp of a new revolution, a time when our passive machinery will become reactive, when sensor data and analytics will completely change the way businesses are run and products are created. Robotic arms will conduct surgery remotely, windmills will generate power more efficiently by analysing the wind flow and changing orientation, aircraft engines will self-diagnose issues and alert engineering teams, and even farmers in Japan will automatically modulate the flow of water to their cucumbers based on the weather.

It’s a dawn of a new age, it is the AI Age. It is an evolutionary leap that is inevitable.

With the advent of neural networks that mimic the human learning process, machines will become intelligent, aware and, who knows, possibly emotional as well. We are on the cusp of a disruption that will dramatically change our lives over the next decade or two. AI may finally be able to solve our biggest puzzles, like discovering dark matter, deciphering the Indus script, or even tackling climate change. And while we are at it, surely Benjamin will be analysing a few thousand scripts (by itself, of course) and possibly learning the nuances of good scriptwriting.

If it (we are going to have a lot of issues with pronouns in the coming days) could, I could suggest a few interesting screenplay courses conducted by talented men in the UK or the US. You see, analysing the composition of a cloud is one thing, but imagining it as a fire-breathing dragon or a charging Knight Templar is a completely different talent. We can be beaten on perseverance, but on imagination of the cloud-watching, shape-transforming, pareidolia kind? I don't really think so. What say you, Watson?
