What is Artificial Intelligence?

Artificial Intelligence has been around for more than 50 years, but the use and impact of Artificial Intelligence and Machine Learning have exploded in recent years. This is mainly due to hardware development, which has provided the necessary performance. This article gives an overview of past and current developments and of some of the benefits and concerns of what is expected to come.

Artificial Intelligence (AI) has become one of the key areas of IT development in recent years. Thanks to AI, computers have beaten humans in various areas, such as chess (IBM’s Deep Blue, 1997), the quiz show Jeopardy! (IBM’s Watson, 2011) and the board game Go (DeepMind’s AlphaGo, 2016).

Perhaps more significant than any of these is that AI surpassed humans in the accuracy of image recognition in 2015.

Computers are not really “intelligent” but have outperformed humans by sheer brute force; the computers are not really “thinking”. However, these cases still show that computer solutions are starting to outperform humans in discrete areas. Computers winning trivial games may not have any significant impact on our day-to-day life, but other areas where computer solutions are improving already have, as we shall see in this article, a major impact on our lives.

Artificial Intelligence is used in areas as diverse as Google search, virtual personal assistants such as Siri, Google Now, Cortana and Alexa, video games, and purchase prediction and recommendations on websites and in music services.

AI is also used for fraud detection, to produce simple news updates such as financial summaries and sports reports, for security surveillance and even in smart home devices. It is further used for stock trading, in medical diagnostics, for various military applications and of course for self-driving cars. In most of these cases, we are not even aware that AI is involved.

Certain applications of AI have been around since the late 1950s, and for decades the real application of the technology seemed, like fusion, to lie in a distant future. However, in the last five years, the pace has accelerated dramatically.

The speed of innovation related to Artificial Intelligence has in recent years outpaced Moore’s law (the observation that the number of transistors per square inch on integrated circuits doubles roughly every two years; Gordon Moore predicted that this trend would continue into the foreseeable future).
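To get a feel for what such doubling means in practice, here is a rough back-of-the-envelope sketch in Python. The starting point (the Intel 4004 of 1971, with about 2,300 transistors) is used only as a well-known reference, and the two-year doubling period is the idealised law rather than exact history:

    # Moore's law as arithmetic: transistor counts double every two years.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        """Projected transistor count under an idealised two-year doubling."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1991, 2011):
        print(year, f"{transistors(year):,.0f}")
    # 1971: 2,300    1991: ~2.4 million    2011: ~2.4 billion

Twenty years of doubling gives a factor of roughly a thousand; forty years gives roughly a million, which is why exponential development so easily outruns our linear intuition.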

Artificial Intelligence is expected to impact our lives even more significantly in the next decades. Expected breakthroughs include autonomous cars and transportation and automated manufacturing, and some reports suggest that half of the job titles known today may be gone within 20 years. Understanding what Artificial Intelligence is, and what consequences the expected development may have, is therefore no longer a subject for an obscure academic community but important for everyone.

Turing’s test (Mark Jensen with permission)

Even though the concept of robots and the foundation of neural networks are older than AI, the idea of Artificial Intelligence is ascribed to Alan Turing, who in 1950 published the now famous paper Computing Machinery and Intelligence, which suggested that it was possible to construct a machine that could think. In this paper, he also described what has later become known as the famous Turing Test: a human interrogator communicates with two other entities (one computer and one human being) via a network. If the interrogator cannot determine which is the human and which is the computer, the computer should be considered intelligent.

The term Artificial Intelligence was suggested in 1956 by John McCarthy. During the next few years, scientists from fields as diverse as mathematics, psychology, engineering, economics and political science discussed the concept of an artificial brain. The real breakthrough of AI happened from 1990 onward, and we are right now in the midst of a formidable revolution driven by machine learning and other concepts. While many applications may not be very visible to the common man, the next years and decades will be disruptive.

Science fiction literature traditionally depicted robots with human-like characteristics. That is no longer the mainstream expectation.

Exponential development of computer technology: transistor count and Moore’s law

The AI community defines three levels of AI:

  • Narrow or weak AI (ANI) means using computers and software to solve discrete tasks such as driving a car; voice, pattern or image recognition; text analysis; search; or playing a game. All current applications of AI, including those described above, belong to this level. Note that narrow or weak AI still generally exceeds human capability within its narrow area.
  • General or strong AI (AGI) will be able to conduct any cognitive task of a generic nature that a human can do. This level refers to an AI solution that is as smart as a human being across the board. It does not yet exist.
  • Artificial Superintelligence (ASI) is a hypothetical level where an AI solution can beat humans in basically any field, including science, innovation, wisdom and social capability. Note that the difference between AGI and ASI is very minor: when computers reach AGI level with access to the entire Internet, they have already passed humans. The moment when computer intelligence passes human capability is often referred to as “the Singularity”.

A lot of technology development has been exponential, in particular computer development based on the doubling of the number of transistors per area every two years. This has driven the performance of computers in a seemingly never-ending exponential development, and AI, which runs on computers, follows the same route. Hence the development is going faster than most of us realise. Human capability has also increased over the centuries and decades, but in a much more linear manner. It is worth remembering a quote from Bill Gates: “we always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”

Types of Artificial Intelligence

Artificial Intelligence is an umbrella term. Various diverse technologies and techniques jointly make up the area of Artificial Intelligence. Here are some examples:

  • Expert systems, which use a knowledge base (a database of previous cases) to infer and present knowledge
  • Fuzzy logic (where degrees of truth are used rather than exact values)
  • Grammatical inference, for which various techniques have been used, such as genetic algorithms (simulating biological modification of genes), tabu search, MDL (Minimum Description Length, a variety of Occam’s Razor where the simplest option is preferred), heuristic greedy state merging, evidence-driven state merging, and graph colouring and constraint satisfaction
  • Handwriting recognition
  • Intelligent agents, independent programs that perform some service such as collecting information. This could be searching the Internet at regular intervals for information you are interested in.
  • Image Recognition
  • Natural Language Processing (recognising, interpreting and synthesising speech)
  • Neural Networks, described in the next section
  • Optical Character Recognition (OCR)
  • Pathfinding, for which techniques such as neural networks, genetic algorithms and reinforcement learning have been used
  • Sentiment analysis, the process of determining whether a piece of writing is positive, negative or neutral. It is also known as opinion mining: deriving the opinion or attitude of a speaker (see the sketch after this list)
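As a concrete illustration of the last item, here is a minimal, lexicon-based sentiment analysis sketch in Python. The tiny word lists are illustrative assumptions; real systems use large lexicons or trained models:

    import string

    # Tiny illustrative word lists, not a real sentiment lexicon.
    POSITIVE = {"good", "great", "excellent", "happy", "love"}
    NEGATIVE = {"bad", "terrible", "awful", "sad", "hate"}

    def sentiment(text):
        """Classify text as positive, negative or neutral by counting lexicon hits."""
        cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
        words = cleaned.split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("I love this great product!"))   # positive
    print(sentiment("What a terrible, sad day."))    # negative
    print(sentiment("The meeting is on Tuesday."))   # neutral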
Artificial neural network

Deep learning and machine learning

In 1957, an algorithm called the perceptron was developed.

The first implementation was developed as a piece of software for the IBM 704 computer by Frank Rosenblatt at the Cornell Aeronautical Laboratory, funded by the US Office of Naval Research. It was later implemented in specialised hardware. The perceptron implemented a very simple artificial variety of brain neurons and had simple image-recognition capabilities.

In the news reports of the day, it was described as if later versions would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence”. While that has indeed not yet happened, the same technology (neural networks) is used in a lot of pattern recognition, and while the algorithms have been developed far beyond the perceptron, it is mainly the enormous improvement in computer performance that has made the developments of the last few years possible.
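To make the idea concrete, here is a minimal perceptron sketch in Python: a weighted sum of inputs passed through a step function, with weights nudged after every mistake. The training data (the logical AND function), learning rate and epoch count are illustrative assumptions, not Rosenblatt’s original setup:

    # A minimal perceptron: weighted sum of inputs through a step function.
    def predict(weights, bias, x):
        return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

    def train(samples, labels, learning_rate=0.1, epochs=20):
        """Perceptron learning rule: adjust weights by the prediction error."""
        weights, bias = [0.0] * len(samples[0]), 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                error = target - predict(weights, bias, x)
                weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
                bias += learning_rate * error
        return weights, bias

    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 0, 0, 1]  # the logical AND function
    weights, bias = train(samples, labels)
    print([predict(weights, bias, x) for x in samples])  # [0, 0, 0, 1]

A single perceptron can only learn patterns that are linearly separable, which is one reason the field later moved to networks of many such units.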

Recent applications of AI

Machine learning using artificial neural networks, so-called deep learning, has driven a formidable revolution over the last few years.
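In essence, a deep network stacks many layers of perceptron-like units, each passing a weighted sum through a non-linear function. Here is a minimal sketch of a forward pass through a tiny two-layer network; the weights and layer sizes are arbitrary illustrative values, whereas a real network learns its weights from data and uses far more layers and units:

    import math

    def layer(weights, biases, inputs):
        """One fully connected layer: weighted sums passed through a sigmoid."""
        return [
            1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)
        ]

    # Two inputs, a hidden layer of two units, and a single output unit.
    hidden = layer([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1], [1.0, 0.0])
    output = layer([[1.2, -0.7]], [0.05], hidden)
    print(output)  # one value between 0 and 1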

Risks with AI

More and more data is collected by government institutions as well as employers, and when analytics and AI are combined, it may become possible to identify risk behaviour. While this may have some positive effects, there are also risks, and many intellectuals are concerned about the potential development of extreme AI. There are a lot of valid reasons to worry. Public profiles such as Stephen Hawking, Steve Wozniak, Bill Gates and Elon Musk have each predicted that strong AI could pose a threat to humanity, and the philosopher Nick Bostrom has raised similar concerns.

While their concerns are probably valid, weaker forms of AI are perhaps already a threat when used by the wrong people. While strong AI may take a long time to materialise, there are already indications that AI and analytics have been used to affect the results of some recent elections, including the Brexit referendum and the US presidential election in 2016. Today’s AI algorithms are not necessarily very intelligent, and there are indications that Google’s search engine and Facebook have been manipulated into showing propaganda ahead of serious news. Elon Musk has raised concerns about AI being used to fight wars. As the development of artificial intelligence progresses and AI implementations get more intelligent, there may not be one single superintelligence in the world, but many. If these were to fight wars with each other, humanity may certainly be at risk. However, even now, AI used for the wrong purposes is already dangerous.

When will AI take your job?

To be honest, so far automation and robots have not taken a lot of jobs. The economist James Bessen looked at the professions listed in the US 1950 census and found that only one of them had disappeared entirely: the elevator operator. Certainly, jobs are disappearing, or perhaps more correctly, the number of people working in certain professions is being reduced.

But so far, new jobs have been created instead, and the threats of unemployment have been severely exaggerated. That may not remain the case, and given the increasing number of applications, we may see more risks in the not so distant future. However, when only certain parts of a process are automated, prices may go down and demand increase, and hence other jobs may be created, reducing the drastic impact that would otherwise be easy to predict.

However, that is only one side of the coin. AI experts themselves are worried: in one survey, they estimated that by 2032 half the driving on motorways would be done by self-driving cars. Job losses may also come in sectors where they are not expected. One article suggests that the legal profession is ripe for automation: since a lot of manual labour is currently spent searching through old cases, such research work can relatively easily be replaced by AI algorithms.

How will society change?

On another level, automation, improved access to information and prices, and the cutting out of intermediaries are likely to have a significant deflationary impact on mature markets, and may well be one of the reasons for the current low interest rates in the Western world. This transformation may not be due to Artificial Intelligence alone, but AI increasingly powers even simple applications such as using Google Maps to find your route from A to B.

Society will change, and policymakers will have to adjust to a different society. The expected AI revolution may well be as dramatic as the industrial revolution was. Elon Musk has suggested that we may need to consider a basic income provided by governments, since the number of unemployed who cannot get traditional jobs will increase dramatically as automation increases over time.

Jobs that are most likely to become obsolete

A few recent studies have estimated the likelihood of certain job categories being replaced by computers.

What can we learn from this? The more streamlined and process-oriented a job is, the more likely it is to become redundant. Among the 5% of jobs most likely to be automated are telemarketing, insurance underwriting, watch repair, data entry, umpires and referees, drivers, bookkeeping, accounting and payroll, and paralegals and legal assistants. At the other end of the scale, the job profile least likely to be replaced was recreational therapist. However, experts assume that Artificial Intelligence will beat us at every task within 45 years.

Conclusions

Due to the exponential development of computer hardware, artificial intelligence solutions have led to significant breakthroughs in the last few years. Experts assume that we are at the beginning of a new industrial revolution, in which automation will make many jobs obsolete. Governments must prepare for a society where not everyone will have a traditional job. While this development may look scary, it also means opportunities for a better life for all of us. There are threats, but if they can be managed, computers will help us all live better lives in the future.

Mikael Gislén is the Managing Director of Gislen Software, a Swedish-owned Indian software development company. Mikael started the company in 1994. Gislen Software provides high-value software development services to clients mainly in Scandinavia and the UK.
