Artificial intelligence –

– Jay Kothari

Artificial intelligence, often abbreviated as AI, is the buzzword of the current decade. Most people are aware of it even if they don't work in the technology industry. From finance to education, AI is becoming an essential part of every field, and in this digital era the growing role of technology in our daily lives is making everyone tech-friendly.

However, there has always been debate about whether it is good or bad. Is it a boon or a threat to humankind?

Artificial intelligence is an evolving field with tremendous potential to change the world. Although in its infancy, it is revolutionizing several industries ranging from cryptography to self-driving cars. Like any promising new technology, it has garnered its share of critics concerned about the threat it poses to the human race.

According to one recent survey, 85% of Americans use at least one of six products with AI elements. Robotic process automation tools such as UiPath are making monotonous, repetitive manual work a thing of the past; this trend is prevalent in banking, information technology, and manufacturing, to name a few. Technology titans like Google and Tesla are pioneering the field of autonomous cars. Language is no longer a barrier with Google Translate. Alexa and Siri have already made their way into American households. AI-driven business intelligence tools such as IBM Watson are widely adopted in large corporations to analyze complex datasets.

Let's go through some of the threats that are most often discussed today.

1. Job Loss

AI can take over many manual jobs across industries. Any job involving repetitive, routine work is prone to automation; a bank teller's job, for example, could be automated completely. This trend could cause large-scale unemployment.


On the other hand, experts claim that AI will create more jobs than it eliminates. According to the World Economic Forum, 58 million new AI-related jobs will be created by 2022. One example is the skilled data technician, a position that involves sorting and labeling the information fed into algorithms while watching for bias, predicts Colin Parris, vice president of software and analytics at GE Software Research. PwC likewise predicts job gains in robotics and information technology, in sectors where new AI technologies boost demand by increasing income and wealth, and in fields that require a human touch, such as health, education, and personal services.

2. Technological Singularity and Misaligned Intelligence

The popular notion is that AI can become exponentially smarter, leading to a technological singularity, at which point a super-intelligent AI could annihilate mankind over the slightest divergence from human goals. AlphaGo, an AI system developed to play the board game Go, defeated the 18-time world champion Lee Sedol. A more advanced version, AlphaGo Zero, went on to surpass AlphaGo after just 40 days of reinforcement learning; its only input was the basic rules of the game, with no historical data.


Although this is a plausible threat, it can be averted with advanced research in AI safety and risk-governance controls. The right balance between risk and innovation must be struck to prevent catastrophe, and the ability to assess risks and to engage workers at all levels in defining and implementing controls will become a new source of competitive advantage. Implementing AI selectively in low-risk domains is another pragmatic measure: a carefully designed and rigorously tested self-driving car is far less risky than an AI robot with access to destructive weapons or a nuclear arsenal. One survey attributes 89% of road accidents to human error, accidents that self-driving cars could largely avert; this alone could change human transportation forever. According to the McKinsey Global Institute, Amazon reduced its 'click-to-ship' time by 225% because of AI.

3. Inevitable Bias in Healthcare and Criminal Justice

A system is only as good as the data it learns from. A BBC report describes a system trained to learn which pneumonia patients had a higher risk of death, so that they could be admitted to hospital. It inadvertently classified patients with asthma as being at lower risk. This was because, in practice, pneumonia patients with a history of asthma go straight to intensive care and therefore receive the kind of treatment that significantly reduces their risk of dying. The machine-learning model took this to mean that asthma plus pneumonia equals a lower risk of death.
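The confounding effect described above can be reproduced with a tiny toy dataset (all values below are invented purely for illustration): because the asthma patients in the data were routed to intensive care, a naive model that looks only at outcomes would "learn" that asthma lowers mortality.

```python
# Hypothetical toy records illustrating the pneumonia/asthma confounder.
# Each record is (has_asthma, received_intensive_care, died).
records = [
    # Asthma patients are routed straight to intensive care...
    (True, True, False), (True, True, False),
    (True, True, True),  (True, True, False),
    # ...while most non-asthma patients receive standard care.
    (False, False, True), (False, False, False),
    (False, False, True), (False, True, False),
]

def death_rate(rows):
    """Fraction of records in `rows` where the patient died."""
    return sum(died for *_, died in rows) / len(rows)

asthma = [r for r in records if r[0]]
no_asthma = [r for r in records if not r[0]]

# A model trained only on outcomes sees asthma associated with a LOWER
# death rate, even though the intensive care is the real cause.
print(death_rate(asthma))     # 0.25
print(death_rate(no_asthma))  # 0.5
```

In this sketch the treatment variable (`received_intensive_care`) explains the difference, not asthma itself; omitting it from the analysis is what produces the misleading conclusion.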


Since so much of the data we feed AI is imperfect, we should not expect perfect answers all the time; recognizing that is the first step in managing the risk. Decision-making processes built on top of AI need to be made more open to scrutiny. Because we are building artificial intelligence in our own image, it is likely to be both as brilliant and as flawed as we are. Streamlining the input data with multiple quality checks should minimize the risk of misjudgment.
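As a sketch of what "multiple quality checks" on input data might look like in practice (the field names and valid ranges here are hypothetical), records can be validated and rejected before they ever reach a model:

```python
# Minimal sketch of pre-training data validation; field names and
# thresholds are invented for illustration.
def validate_record(record):
    """Return a list of problems found in one training record."""
    problems = []
    age = record.get("age")
    if age is None or not (0 <= age <= 120):
        problems.append("age missing or out of range")
    if record.get("outcome") not in ("survived", "died"):
        problems.append("unknown outcome label")
    return problems

clean, rejected = [], []
for rec in [
    {"age": 54, "outcome": "survived"},
    {"age": -3, "outcome": "survived"},   # impossible age
    {"age": 71, "outcome": "unknwon"},    # mislabeled outcome
]:
    (clean if not validate_record(rec) else rejected).append(rec)

print(len(clean), len(rejected))  # 1 2
```

Rejected records can then be logged and reviewed by a human, which is one concrete way to open the data pipeline to the scrutiny the paragraph above calls for.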


To drive home the point, consider two recent incidents. Two Facebook chatbots began communicating in a language of their own; Facebook recognized this quickly and shut the experiment down. By contrast, the Google Translate AI developed its own internal language to translate between pairs of languages it had not been explicitly programmed for, a constructive leap toward perfection that Google proudly announced.

P.S. – Artificial intelligence can be good or bad depending on how we use it. The purpose of a knife is to cut: a cook may produce delectable food with it, whereas a murderer can use it to harm others. The impact of AI is therefore entirely dependent on how we educate it and how we use it.

“We can complain because rose bushes have thorns or rejoice that thorn bushes have roses”.