Thursday, October 26, 2017

Risks of Artificial Intelligence: Real or Imaginary?

Deus Ex Machina from The Matrix Revolutions

For those who grew up watching sci-fi movies, especially those in the apocalyptic or post-apocalyptic genre like The Terminator, The Matrix series and even 2001: A Space Odyssey, the current trend in technological advancement towards Artificial Intelligence can give a sense of déjà vu. But then the celluloid world is quite different from the real one, especially in the apocalyptic/post-apocalyptic genre. However, when one of the greatest theoretical physicists alive, Stephen Hawking, and technologist and space-age entrepreneur Elon Musk raise the alarm that unchecked development of artificial intelligence is a threat to our kind, it is not irrational to pause and wonder if we are on the right track.

So can Skynet become self-aware? Can the Deus Ex Machina conclude that humans can be enslaved and harvested for thermal energy? Can a superintelligent machine come into existence and decide that humans are a burden on the planet? Of course these are fantastical scenarios, or at least scenarios we shouldn't be concerned about immediately, but it is a fact that many nation-states are developing Lethal Autonomous Weapons (LAWs) to be deployed in conflict zones. Apparently the final decision-making authority still rests with human operators, but the natural progression leads to combat robots or drones that can assess the situation and take the decision themselves. If the very idea of developing artificial intelligence is that these machines can perform jobs far more swiftly and efficiently than humans, it is only natural that they would be deemed to perform better on battlefields too.

However, we have already seen combat drones, or unmanned combat aerial vehicles, kill disproportionately large numbers of civilians and destroy property across combat zones even under human control. It is debatable whether lethal autonomous weapons can reliably distinguish between real threats and innocent civilians, especially in complex conflict situations. After all, precision here is not just algorithmic output; it is measured in human lives and property. Then there is the apprehension that once LAWs go mainstream, there is a good probability of them falling into the hands of rogue actors, leading to very dangerous outcomes. ISIS did manage to acquire and use drones to drop bombs on government troops. True, it was a small and rudimentary UAV compared with those deployed by nation-states, and the terror group is nearly finished, but that does not alter the fact that military technology has a way of finding its way into the hands of rogue actors. It is this kind of inherent risk that technologists like Elon Musk have been warning about when they talk of the probability of AI triggering the next World War.

Another aspect of AI that raises my concern is how big businesses adopting machine learning and advanced algorithms are affecting our everyday lives. True, in many fields, such as medical diagnosis, transport services and climate change studies, AI can deliver astounding results, but at what cost? I am ignoring the question of job losses for brevity's sake and instead focusing on how this trend is likely to affect our private lives. Let us not forget that after a pretty long AI winter, "thinking machines" found renewed interest only after corporations developed an interest in them. And corporations tend to show interest in technologies that can give them maximum profit. Then there is the other aspect: data mining and analysis by big Internet companies leaving us far too vulnerable in our public and private lives.

Companies like Google, Facebook and other smaller players are sitting on a treasure trove of user data covering our likes and preferences, interpersonal relationships, purchasing behaviour, movements and other details of our everyday activities. This is exactly the kind of data that machine learning requires to come up with accurate insights and intelligence. No wonder Google and Facebook are aggressively integrating, or rather replacing, existing systems with AI across all their branches. Interestingly, the major focus of these platforms is on "neural nets", models loosely based on our understanding of how the human brain functions. Naturally, perception is of special interest to the designers: with "trained" algorithms now capable of processing natural language, recognising speech and faces, identifying objects in a photo and even reading facial expressions, we could say the makings of machine perception are already in place.
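To make "training" concrete for readers who haven't seen it: below is a minimal sketch of a tiny neural net, nothing like the industrial-scale systems Google or Facebook run, learning the classic XOR function with plain NumPy. The layer sizes, learning rate and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Toy "perception" task: learn XOR, the classic example of a problem
# a single-layer network cannot solve but a two-layer one can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, picked for quick convergence on this toy task
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, propagated back.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] after training
```

The same loop of "predict, measure error, nudge the weights" scales up, with vastly more data and parameters, to the speech and face recognisers mentioned above.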

From an end-user perspective, we are already being influenced, whether consciously or subconsciously, in decisions we make online and offline: who to connect with, what to buy, where to visit and more. As algorithms become more intelligent and autonomous or semi-autonomous, there is a good chance that our online behaviour will be influenced far more by them, eroding our own autonomy in making conscious decisions. In effect, users become something like micro-organisms in the gut of a large artificial creature which knows and suggests what we want, pretty much deciding it for us. Deus Ex Machina?
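The mechanics behind such nudges can be surprisingly simple. Here is a toy sketch, assuming a made-up user-item matrix, of the collaborative-filtering idea behind many recommendation systems; real platforms use far more elaborate models, so treat this purely as illustration.

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are
# items; 1 means the user clicked/liked the item, 0 means no interaction.
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
], dtype=float)

def recommend(user, k=2):
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = ratings @ ratings[user] / np.maximum(norms, 1e-9)
    sims[user] = 0.0  # ignore self-similarity
    # Score each item by how much similar users liked it,
    # then hide items this user has already seen.
    scores = sims @ ratings
    scores[ratings[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # item indices the system would nudge user 0 towards
```

Note what the system never asks: whether the nudge is good for the user. It only surfaces what people like us have already done.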

On the positive side, these advancements can make our lives easier and more productive, as well as enrich our interpersonal relationships. Remember how Google helps us find relevant information in a matter of seconds, and how Facebook has helped us stay connected with friends separated by distance and time? But this new paradigm brings with it its own problems. The idea of losing my autonomy, even subconsciously, scares me, and it should scare everyone who cares about the freedom to make our own choices, even if some agency guarantees to make better choices than we do. Then there is the fact that machine learning depends on data, in this case user-generated data, which is likely to include the same biases and prejudices inherent in our society. The worse part is that these systems are created and maintained by profit-making organisations, so it is difficult to know where "what is best for us" ends and "what is best for the company" begins. Now, of course, we can always avoid using the products of these tech companies (I avoid Facebook, but Google for me is not an option; this very blog is on a Google-owned platform).
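To see how a model inherits bias from its data, consider this toy sketch: a plain logistic regression fitted by hand on synthetic "hiring" data in which a protected attribute was unfairly correlated with the historical outcome. Every number here is made up for illustration.

```python
import numpy as np

# Synthetic biased history: outcomes depended on skill AND on group
# membership, the way prejudiced human decisions often did.
rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)      # protected attribute (0 or 1)
skill = rng.normal(size=n)         # genuinely relevant feature
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1).astype(float)

# Fit a plain logistic regression on (skill, group, intercept).
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n   # gradient descent on log-loss

print("learned weights (skill, group, intercept):", w.round(2))
# The weight on `group` comes out large and positive: the model has
# faithfully learned the historical prejudice, not just the skill signal.
```

The model is not malicious; it simply optimises for fidelity to the past, which is exactly the problem.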

But even for those who can afford to avoid these services or use them securely (anonymous accounts, VPNs, encrypted email), there still remains the issue of governments using their abundant resources to spy on their own citizens and those of countries of interest. Post-WikiLeaks dumps, we are pretty clear about the extent to which US intelligence agencies will go to intercept electronic communications and even use back doors to access computer systems. It is pertinent here to recall the tussle between the US administration and Apple over the latter's encryption. I haven't come across much information on what countries like China, Russia, Israel and Japan are doing with AI in their security apparatus, nor have I read much on India's approach, but I do need to say that the drive to make Aadhaar ubiquitous in every aspect of life is a red flag. I can't even begin to enumerate the dangers; there are too many, and dedicated folks are already doing a great job of highlighting the risks. I would urge you to check out the hashtag #DestroyTheAadhaar or read articles on the web to find out more. In the present context, if the Aadhaar database were somehow accessible to interested agencies, they wouldn't even need machine learning to make sense of the data. For instance, law enforcement agencies in the US and other countries are already using machine learning on CCTV camera footage to, among other things, identify people. Just reflect on what governments, and anyone else with access to a biometric database and machine learning in video surveillance, could do to keep citizens under a magnifying glass.
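To illustrate how little code such identification takes today, here is a minimal sketch using the open-source face_recognition Python library; the file names are hypothetical placeholders, and real surveillance pipelines are of course far more elaborate.

```python
# Minimal face-matching sketch with the open-source `face_recognition`
# library (pip install face_recognition). File names are hypothetical.
import face_recognition

# A "watchlist" photo with a known identity.
known_image = face_recognition.load_image_file("watchlist_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame grabbed from CCTV footage, possibly containing several faces.
frame = face_recognition.load_image_file("cctv_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

for i, encoding in enumerate(frame_encodings):
    # compare_faces returns True when the 128-dimensional face embeddings
    # are within a distance threshold (0.6 by default).
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print(f"Face {i}: {'MATCH' if match else 'no match'}")
```

A dozen lines against one photo; now imagine it run against a national biometric database and every camera feed in a city.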

Finally, from everything I have read in recent times, I don't get the feeling that artificial general intelligence, with or without anthropomorphic features and the capability to become superintelligent, is arriving in the very near future, not because technology is not developing fast enough but because the progress is fragmented. However, unchecked advances in the field across various sectors do raise questions about safety. When Stephen Hawking, Elon Musk and others say that AI needs to be regulated, they are undoubtedly correct, regardless of what Zuckerberg and the "non-Luddites" might say. But the bigger question is: who can be truly trusted to regulate AI advancement? Not governments, nor companies that profit from AI. A global consortium seems a bleak hope, especially after the recent divisiveness seen in the hitherto consensus-driven W3C over the question of DRM. So who? Your thoughts?
