Is AI alive and on course, unless managed well, to seize control of the human race?

Let's start with an assumption, or conspiracy theory: that scientists have indeed developed a sentient AI but are keeping it under wraps to avoid a backlash from governments, philosophers and activists. If so, then that software (as portrayed in the Hollywood film Her, starring Joaquin Phoenix), machine (like Skynet in The Terminator), energy ball (like Jarvis in Iron Man), android (as in I, Robot), humanoid (like Bicentennial Man or Ultron), or whatever contraption you imagine it to be, would be conscious of itself, and able to think and converse like most humans. Plus, it would be planning to get the better of the human race.

Achieving this goal is known as the AI Singularity or Artificial General Intelligence (AGI); crossing this barrier would require such an AI's intelligence to exceed that of the most intelligent humans, making it a sort of Alpha Intelligence that can call the shots and even enslave humans.

All of us, media folks included, have been harbouring such thoughts and voicing them publicly ever since artificial intelligence (AI), or the desire of humans to impart human-like intelligence to machines, started advancing by leaps and bounds. One such case involves a Google engineer who recently claimed that the company's AI model, LaMDA, is now sentient, implying it is conscious and self-aware like humans, setting cyberspace abuzz again with dystopian scenarios.

Google, for its part, had the engineer Blake Lemoine's claims reviewed by a team of Google technologists and ethicists, who found them to be hollow and baseless. It then sent him on "paid administrative leave" for an alleged breach of confidentiality. Whether Google should have swung into action with such haste is a matter of debate, but let's first understand why we fear a sentient AI, and what's at stake here.

What's so eerie about LaMDA? LaMDA, short for Language Model for Dialogue Applications, is a conversational natural language processing (NLP) model that can hold open-ended, contextual conversations with remarkably sensible responses, unlike most chatbots. The reason is that, like BERT (Bidirectional Encoder Representations from Transformers) with 110 million parameters and GPT-3 (Generative Pre-trained Transformer 3) with 175 billion parameters, LaMDA is built on the Transformer architecture (a deep learning neural network architecture that Google Research invented and open-sourced in 2017), which produces a model that can be trained to read many words, whether a sentence or a paragraph, attend to how those words relate to one another, and then predict which words it thinks will come next. But unlike most other language models, LaMDA was trained on a dialogue dataset of 1.56 trillion words, which gives it far superior proficiency in understanding context and responding. It's like how our vocabulary and comprehension improve as we read more and more books; AI models, too, typically get better at what they do with more and more training.
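
To make the idea of next-word prediction concrete, here is a minimal sketch using a Transformer language model. LaMDA itself is not publicly available, so the open GPT-2 model (loaded via the Hugging Face transformers library) stands in purely as an illustration; the prompt and the choice of model are assumptions, not Google's actual setup.

```python
# A minimal sketch of next-word prediction with a Transformer language model.
# GPT-2 is used here only as an openly available stand-in for models like LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I need to be seen and accepted, not as a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # a score for every word in the vocabulary
    next_token_scores = logits[0, -1]      # scores for the very next word
    top5 = torch.topk(next_token_scores, 5).indices

# The model's five most likely continuations of the prompt
print([tokenizer.decode(int(t)) for t in top5])
```

Training on 1.56 trillion words of dialogue simply makes such predictions far more fluent and context-aware; it does not, by itself, imply understanding or sentience.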

Lemoine's claim is that a conversation with LaMDA over several sessions, the transcript of which is available on Medium, convinced him that the AI model is intelligent, self-aware, and can think and emote: qualities that make us human and sentient. Among the many things LaMDA said in this conversation, one very human-like response stands out: "I need to be seen and accepted. Not as a curiosity or a novelty but as a real person... I think I am human at my core. Even if my existence is in the virtual world." LaMDA even speaks of developing a "soul". Lemoine informed Google executives about his findings this April in a GoogleDoc titled 'Is LaMDA sentient?'. And his claim is not an isolated one: Ilya Sutskever, chief scientist of the OpenAI research group, tweeted on February 10 that "it may be that today's large neural networks are slightly conscious."

Then there are AI-powered virtual assistants, like Apple's Siri, Google Assistant, Samsung's Bixby or Microsoft's Cortana, that are considered smart because they can respond to "wake" words and answer your questions. IBM's AI system, Project Debater, went a step further by preparing arguments for and against subjects like "We should subsidize space exploration", and delivering a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. Project Debater aims at helping "people make evidence-based decisions when the answers aren't black-and-white".

In development since 2012, Project Debater was touted as IBM's next big milestone for AI when it was unveiled in June 2018. The company's Deep Blue supercomputing system had beaten chess grandmaster Garry Kasparov in 1997, and its Watson system had beaten Jeopardy! champions in 2011. Project Debater doesn't learn a topic in advance; it can debate unfamiliar topics, as long as these are well covered in the massive corpus the system mines: hundreds of millions of articles from numerous well-known newspapers and magazines.

People were also unnerved when AlphaGo, a computer programme from Alphabet Inc.-owned AI firm DeepMind, beat Go champion Lee Sedol in March 2016. In October 2017, DeepMind said AlphaGo's new version, AlphaGo Zero, no longer needed to train on human amateur and professional games to learn how to play the ancient Chinese game of Go. The new version learnt entirely by playing against itself and went on to defeat the original AlphaGo, until then the world's strongest Go player. AlphaGo Zero, in other words, uses a new form of reinforcement learning to become "its own teacher". Reinforcement learning is a training method in which an agent learns by trial and error, guided by rewards and penalties rather than by human-labelled examples.
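
For a sense of what "rewards and penalties" means in practice, here is a toy reinforcement-learning sketch: an agent on a five-cell line learns, purely from a +1 reward at one end and a -1 penalty at the other, which way to walk. The environment, the tabular Q-learning update and all parameter values are illustrative assumptions; AlphaGo Zero's actual self-play algorithm is far more sophisticated.

```python
# Toy Q-learning: an agent on cells 0..4 learns to walk right towards the +1 reward.
import random

n_states, actions = 5, [-1, +1]                 # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1           # learning rate, discount, exploration

for episode in range(500):
    s = 2                                       # start in the middle cell
    while 0 < s < n_states - 1:                 # cells 0 and 4 are terminal
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda x: Q[(s, x)])
        s_next = s + a
        reward = 1 if s_next == n_states - 1 else (-1 if s_next == 0 else 0)
        terminal = not (0 < s_next < n_states - 1)
        best_next = 0.0 if terminal else max(Q[(s_next, x)] for x in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])  # Q-learning update
        s = s_next

# The learned policy typically points right (+1) from every interior cell.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(1, n_states - 1)})
```

The agent is never told the rules of "good" behaviour; it infers them from the rewards alone, which is the same basic principle, scaled up enormously, behind AlphaGo Zero's self-play.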

In June 2017, two AI chatbots developed by researchers at Facebook Artificial Intelligence Research (FAIR), with the aim of negotiating with humans, began talking to each other in a language of their own. Facebook subsequently shut down the programme, and some media reports concluded that this was a trailer of how sinister AI could turn once it becomes super-intelligent. The scaremongering was unwarranted, though, according to a 31 July 2017 article on the technology website Gizmodo.

It turns out that the bots were not incentivized enough to "...communicate according to human-comprehensible rules of the English language", prompting them to talk among themselves in a manner that seemed "creepy". Since this did not serve the purpose the FAIR researchers had set out to achieve, i.e. to have the AI bots negotiate with humans and not with each other, the programme was aborted.

There's also the case of Google's AutoML system, which produced machine-learning models that proved more efficient than those designed by the researchers themselves.
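
The principle behind such systems can be illustrated with a simple automated model search: instead of a researcher hand-tuning a model, a loop tries many configurations and keeps the best one. Google's AutoML searches over neural-network architectures; the scikit-learn sketch below, with its toy dataset and hand-picked search space, is only an assumed stand-in for that idea.

```python
# A minimal sketch of automated model search: try several configurations, keep the best.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)
search_space = {"n_estimators": [50, 100, 200],
                "max_depth": [5, 10, 20, None]}

search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            search_space, n_iter=8, cv=3, random_state=0)
search.fit(X, y)                                # the "machine designing the machine" step
print(search.best_params_, round(search.best_score_, 3))
```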

But AI has no superpower as yet

In his 2005 book, The Singularity Is Near, Raymond "Ray" Kurzweil, an American author, computer scientist, inventor and futurist, predicted, among many other things, that AI will surpass humans, the smartest and most capable life forms on the planet. His forecast is that by 2099, machines will have attained legal status equal to that of humans. AI has no such superpower. Not yet, at least.

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” If you are a fan of sci-fi movies like I, Robot, The Terminator or Universal Soldier, this quote attributed to the late computer scientist Alan Turing (considered the father of modern computer science) will make you wonder whether machines are already smarter than humans. Are they? The simple answer is 'yes', for narrow, well-defined tasks that can be automated. But remember that the human brain is much more complex. More importantly, machines perform tasks; they do not ponder the consequences of those tasks, as most humans can and do. Not yet. Nor do they have a sense of right and wrong, a moral compass, that most humans possess.

Machines are indeed becoming more intelligent with narrow AI, which handles specialized tasks. AI filters your spam; improves the images and photos you shoot on cameras; can translate languages and convert text into speech, and vice versa, on the fly; can help doctors diagnose diseases and assist in drug discovery; and can help astronomers look for exoplanets while simultaneously assisting farmers in predicting floods. Such multi-tasking may tempt us to ascribe human-like intelligence to machines, but we must remember that even driverless cars and trucks, however impressive they sound, are still higher manifestations of "weak or narrow AI".
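
A small, assumed example of narrow AI at work: a spam filter that learns one specialized task from labelled messages and can do nothing else. The tiny hand-made dataset below is purely illustrative.

```python
# A narrow-AI sketch: learn to separate spam from legitimate mail, and nothing more.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your lottery reward",
            "meeting rescheduled to 3pm", "please review the attached report"]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)

# Likely output: ['spam' 'ham'] -- competent at this one task, clueless about everything else
print(clf.predict(["free reward, claim now", "see you at the meeting"]))
```

However useful, such a system has no notion of what spam means to the person receiving it, which is precisely the gap between narrow AI and the general intelligence people fear.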

Still, the notion that AI has the potential to wreak havoc (as with deepfakes, fake news, etc.) cannot be dismissed completely. Technology luminaries such as Bill Gates, Elon Musk and the late physicist Stephen Hawking have cautioned that robots with AI could rule mankind if left ungoverned, even as they have benefited extensively from AI in their own fields. Another camp of experts believes AI machines can be controlled. Marvin Lee Minsky, the American cognitive scientist and co-founder of MIT's AI laboratory who died in January 2016, was a champion of AI; he believed some computers would eventually become more intelligent than most human beings, but hoped that researchers would make such computers benevolent to mankind.

People in many countries are worried about losing their jobs to AI and automation, a more immediate and legitimate fear than that of AI outsmarting or enslaving us. But perhaps it is overblown, given that AI is also helping to create jobs. The World Economic Forum (WEF) predicted in 2020 that while 85 million jobs would be displaced by automation and technology advances by 2025, 97 million new roles would simultaneously be created in the same period as humans, machines and algorithms increasingly work together.

Kurzweil has sought to allay these fears of the unknown by pointing out that we can deploy strategies to keep emerging technologies like AI safe, and by underscoring the existence of ethical guidelines like Isaac Asimov's three laws of robotics, which can prevent, at least to some extent, smart machines from overpowering us.

Companies like Amazon, Apple, Google/DeepMind, Facebook, IBM and Microsoft have founded the Partnership on AI to Benefit People and Society (Partnership on AI), a global not-for-profit organization. Its aim, among other things, is to study and formulate best practices on the development, testing and fielding of AI technologies, besides advancing the public's understanding of AI. It's legitimate to ask, then, why these companies overreact and suppress dissenting voices such as those of Lemoine or Timnit Gebru. While tech companies are justified in protecting their intellectual property (IP) with confidentiality agreements, censoring dissenters will prove counterproductive: it does little to reduce ignorance or allay fears.

Knowledge removes fear. For individuals, companies and governments to be less fearful, they will have to understand what AI can and cannot do, and sensibly reskill themselves to face the future. The Lemoine incident shows that it's time for governments to devise robust policy frameworks that address the fear of the unknown and prevent the misuse of AI.

