
What is Artificial Intelligence (AI) and How Does it Work?

   

What is Artificial Intelligence?

Artificial intelligence (AI) is an area of computer science concerned with building smart machines capable of performing tasks that usually require human intelligence. Advances in machine learning and deep learning have allowed AI systems to enter almost every sector of the technology industry.


AI and TWI

TWI has been closely involved in a number of projects related to the development of AI solutions for industry – ranging from automotive through to smart factories and Industry 4.0.

This includes the creation of The Artificial Intelligence Innovation Centre alongside the University of Essex, which aims to develop financially sustainable research into AI.

Please contact us for more information on how we can help with the development of AI solutions for industry:

contactus@twi.co.uk

How Does it Work?

AI is a complex subject without a clear, singular definition beyond vague assertions such as ‘machines that are intelligent.’ To understand how AI works, it is important to understand how the term ‘artificial intelligence’ is defined.

The definitions have been broken down into four areas:

  • Thinking humanly
  • Thinking rationally
  • Acting humanly
  • Acting rationally

The first two of these areas relate to thought processes and reasoning, such as the ability to learn and solve problems in a similar manner to the human mind. The last two relate to behaviours and actions. These abstract definitions help to create a blueprint for integrating machine learning programs and other areas of artificial intelligence into machines.

Some AI technology is powered by ongoing machine learning, while other systems rely on more mundane, hand-written sets of rules. Different types of AI work in different ways, so it is necessary to understand the different types in order to see how they differ from one another.
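To make that distinction concrete, below is a minimal, hypothetical sketch of the rule-based end of the spectrum: a toy spam filter whose behaviour comes entirely from hand-written rules rather than from learning. The keywords and threshold are illustrative assumptions, not taken from any real system; contrast this with the machine learning sketch in the next section, where the behaviour is learned from data.

```python
# A toy rule-based "AI": its behaviour is fixed by hand-written rules,
# not learned from data. The keywords and threshold are illustrative only.
SPAM_KEYWORDS = {"winner", "free", "urgent", "prize"}

def looks_like_spam(message: str, threshold: int = 2) -> bool:
    """Flag a message as spam if it contains enough suspicious keywords."""
    words = (word.strip(".,!?:") for word in message.lower().split())
    hits = sum(1 for word in words if word in SPAM_KEYWORDS)
    return hits >= threshold

print(looks_like_spam("URGENT: you are a winner, claim your free prize!"))  # True
print(looks_like_spam("Meeting moved to 3pm tomorrow"))                     # False
```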


Types of AI

AI generally falls into two broad categories – ‘Narrow AI’ (also known as weak AI) and ‘Artificial General Intelligence’ (AGI – also known as strong AI).

1. Narrow AI

This is the most limited form of AI, focusing on performing a single task well. Despite this narrow focus, this form of artificial intelligence has experienced a number of breakthroughs in recent years and includes examples such as Google search, image recognition software, personal assistants such as Siri and Alexa, and self-driving cars. These computer systems all perform specific tasks and are powered by advances in machine learning and deep learning. 

Machine learning takes computer data and uses statistical techniques to allow the AI system to ‘learn’ and get better at performing a task. This learning can take the form of supervised learning (using labelled data sets) or unsupervised learning (using unlabelled data sets). Deep learning uses biologically inspired neural networks with many layers to process data, allowing the system to find deeper patterns and connections in its input.
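As a concrete illustration of the difference between supervised and unsupervised learning, the following is a minimal sketch assuming the scikit-learn library and its built-in iris dataset; the dataset and model choices are purely illustrative, not a description of any particular production system.

```python
# Supervised vs unsupervised learning sketched with scikit-learn
# (assumes scikit-learn is installed; the iris dataset is used for illustration).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on labelled examples (X, y)
# and then evaluated on examples it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Supervised accuracy on unseen data: {classifier.score(X_test, y_test):.2f}")

# Unsupervised learning: no labels are given; the algorithm groups the same
# measurements into clusters based only on the structure of the data.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster assignments for the first ten samples:", clusters[:10])
```

In the supervised case the ‘right answers’ (the labels) guide the learning, while in the unsupervised case the algorithm has to find structure on its own; deep learning applies the same two ideas using multi-layered neural networks.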

2. Artificial General Intelligence (AGI)

This form of AI is the type that has been seen in science fiction books, TV programmes and movies. It is a more intelligent system than narrow AI and uses a general intelligence, like a human being, to solve problems. However, truly achieving this level of artificial intelligence has proven difficult.

AI researchers have struggled to create a system that can learn and act in any environment, with a full set of cognitive abilities, as a human would.

AGI is the type of AI seen in movies such as The Terminator, where super-intelligent robots become an independent danger to humanity. However, experts agree that this is not something we need to worry about any time soon.

History

The notion of intelligent artificial beings dates back as far as ancient Greece, with Aristotle’s development of the concept of syllogism and deductive reasoning; however, AI as we understand it now is less than a century old.

In 1943, Warren McCulloch and Walter Pitts published the paper ‘A Logical Calculus of the Ideas Immanent in Nervous Activity,’ which proposed the first mathematical model for building a neural network. This idea was expanded upon in 1949 with the publication of Donald Hebb’s book, ‘The Organization of Behavior: A Neuropsychological Theory.’ Hebb proposed that neural pathways are created from experience, becoming stronger the more frequently they are used.

These ideas were taken to the realm of machines in 1950, when Alan Turing published ‘Computing Machinery and Intelligence,’ which set forth what is now known as the Turing Test to determine whether a machine is actually intelligent. The same year saw Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer, and Claude Shannon publish the paper ‘Programming a Computer for Playing Chess.’ Science fiction author Isaac Asimov also published his ‘Three Laws of Robotics’ in 1950, setting out a basic blueprint for AI interaction with humanity.

In 1952, Arthur Samuel created a self-learning computer program to play draughts and in 1954 sixty Russian sentences were translated into English by the Georgetown-IBM machine translation experiment.

The term artificial intelligence was coined in 1956 at the ‘Dartmouth Summer Research Project on Artificial Intelligence.’ This conference, led by John McCarthy, defined the scope and goals of AI, and the same year saw Allen Newell and Herbert Simon demonstrate Logic Theorist, the first reasoning program.

John McCarthy continued his work in AI in 1958 by developing the AI programming language Lisp and publishing the paper ‘Programs with Common Sense,’ which proposed a hypothetical complete AI system able to learn from experience as effectively as humans do. This was built upon further in 1959, when Allen Newell, Herbert Simon and J.C. Shaw developed the ‘General Problem Solver,’ a program designed to imitate human problem-solving. 1959 also saw Herbert Gelernter develop the Geometry Theorem Prover program, Arthur Samuel coin the term ‘machine learning’ while at IBM, and John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

John McCarthy went on to found the Stanford University AI lab in 1963; however, there was a setback in 1966 when the US government cancelled all funding for machine translation projects. The setbacks continued in 1973, when the British government also cut funding for AI projects as a result of the ‘Lighthill Report.’ These cuts led to a lack of progress in AI until 1980, when Digital Equipment Corporation developed R1 (also known as XCON), the first successful commercial expert system.

Japan entered the AI arena in 1982 with the Fifth Generation Computer Systems project, leading to the U.S. government restarting funding with the launch of the Strategic Computing Initiative. By 1985, AI development was increasing once more as over a billion dollars were invested in the industry and specialised companies sprang up to build systems based on the Lisp programming language.

However, the Lisp market collapsed in 1987 as cheaper alternatives emerged and computing technology improved. By 1993, many of the initiatives of the 1980s had been cancelled, although the U.S. military successfully deployed DART, an automated logistics planning and scheduling tool, during the Gulf War of 1991, and IBM’s Deep Blue famously beat chess champion Garry Kasparov in 1997.

The new millennium has seen several advances in AI technology, including the self-driving car STANLEY winning the DARPA Grand Challenge in 2005, the same year the U.S. military began investing in autonomous robots like Boston Dynamics’ ‘BigDog’ and iRobot’s ‘PackBot.’ Google made breakthroughs in speech recognition for its iPhone app in 2008, and 2011 saw IBM’s Watson beat the competition on the U.S. quiz show Jeopardy!

Neural networks were further advanced in 2012 when a neural network successfully recognised a cat without being told what it was and, in 2014, Google’s self-driving car was the first to pass a state driving test in the U.S. 2016 saw another advance in AI as Google DeepMind's AlphaGo beat world champion Go player Lee Sedol.

When Did AI Start?

As shown above, AI has seen a large number of developments over the decades since 1950, but modern artificial intelligence is widely accepted as beginning when Alan Turing asked whether machines can think in his paper ‘Computing Machinery and Intelligence.’ This led to the Turing Test, which established the fundamental goals of AI.

Who Invented AI?

While many people have helped advance AI, building upon one another’s research and breakthroughs, it is commonly believed that the British mathematician Alan Turing, often called the ‘father of computer science,’ came up with the first concepts for artificial intelligence.

Advantages of AI

AI offers a range of advantages including:

  • Low error rates compared to humans (provided they are programmed correctly)
  • AI is not impacted by hostile or aggressive environments, meaning that these machines can perform dangerous tasks and work in environments and with substances that could harm or kill people
  • AI doesn’t get bored with tedious or repetitive tasks
  • Able to predict what people will ask, search or type, allowing them to act as assistants and recommend actions, such as with smartphones or personal assistants like Alexa
  • Able to detect fraud in card-based systems
  • Able to quickly and efficiently organise and manage records
  • Can help with loneliness through machines like robotic pets
  • AI is able to make impartial, logical decisions with fewer mistakes
  • Able to simulate medical procedures and achieve a level of precision that is difficult for humans
  • No need for rest means that AI systems can continue working longer than humans
 

Of course, for all of these advantages there are also some disadvantages associated with AI…

Disadvantages of AI

The disadvantages of AI include:

  • Costly to build, repair and develop
  • Ethical questions need to be addressed regarding some applications and, in some instances, the entire notion of human-like robots
  • There are still questions as to how effective AI is when compared to humans, including whether it can assess situations empathetically
  • Unable to work outside the boundaries of their programming
  • Lacking creativity and common sense
  • Using AI to replace human workers could lead to unemployment
  • Dangers of people becoming too dependent on AI, and the notion that artificial general intelligence could supersede people (although this is still not likely any time soon)
 

Where is it Used?

Artificial intelligence is used in a wide variety of applications including autonomous vehicles, medical diagnosis, natural language processing, mathematics, art, gaming, search engines, digital assistants (such as Siri), image recognition, spam filtering, flight delay prediction, targeted online advertising, energy storage and more.

Artificial intelligence is now widely used by social media platforms to determine which stories should be targeted to which sections of the audience to generate more traffic. This can, in itself, create problems, such as presenting a one-sided or biased view of world events, and it also opens up the possibility of ‘deepfakes,’ which present convincing fabricated news about events that never actually occurred.

Examples

Examples of artificial intelligence use include:

  • ‘Conversational’ bots for marketing or customer services
  • Disease mapping and prediction
  • Entertainment recommendations from places such as Spotify and Netflix
  • Joined-up manufacturing – such as that being promoted through Industry 4.0
  • Personalised healthcare recommendations
  • Robo-advisors for stock trading
  • Smart assistants (such as Siri and Alexa)
  • Social media monitoring
  • Spam filters on email

How Artificial Intelligence will Change the Future

Artificial intelligence will impact our lives in a number of different ways as the technology continues to grow and advance. We are already seeing many of these changes beginning to come to fruition and the future will see advances that we may not even have thought of yet. However, here are some of the upcoming ways in which AI will change the world around us.

Driverless Cars and Robots

Advances in AI and robotics have led to growth in areas such as driverless cars and delivery drones. These autonomous transport solutions could revolutionise how we transport goods and people around the world.

Fake News

This negative aspect of AI is one that is already having an impact on society. Whether through voice or image replication, AI-generated content could make it increasingly difficult to trust what we see or hear in the media.

Speech and Language Recognition

Machine learning systems are now able to recognise what people are saying with accuracy approaching 95%. This opens the way for automated transcription of spoken language into the written word, as well as offering options for translation between languages.
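As a hedged illustration of how such transcription can be wired up in practice, the sketch below assumes the open-source Python SpeechRecognition package and a sample audio file named meeting.wav (a placeholder); recognize_google() sends the audio to a free web service, so accuracy and availability will vary.

```python
# A minimal speech-to-text sketch using the SpeechRecognition package
# (pip install SpeechRecognition). "meeting.wav" is a placeholder file name.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    print("Transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The audio could not be understood.")
except sr.RequestError as error:
    print(f"Speech service unavailable: {error}")
```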

Facial Recognition and Surveillance

This is another grey area for AI, as there are many people who are against the idea of using facial recognition for surveillance purposes. The idea of using facial recognition alongside CCTV is already being promoted in China in order to track criminals and follow people who are acting suspiciously. Despite privacy regulations, there is a good chance that artificial intelligence will be used more widely to track people in the future, including technology that is able to accurately recognise emotion.

Healthcare

Healthcare could benefit greatly from AI, whether that is detecting tumours in X-rays, spotting genetic sequences related to disease or identifying molecules that could lead to more effective pharmaceuticals. AI is already being trialled in hospitals for applications like screening patients for cancers and spotting eye abnormalities.

AI FAQs

Why do we need artificial intelligence and what can it do?

Artificial intelligence looks set to provide an array of benefits in the future for a range of applications. These include allowing machines to take on repetitive or menial tasks, assisting us in our everyday lives, and revolutionising manufacturing, transport, travel and healthcare.

Can Artificial Intelligence Replace Human Intelligence?

AI is unlikely to replace human beings, although it will most likely change the roles we play in society. AI is currently seen as an assistant rather than a replacement for human intelligence in most areas.

What is Machine Learning? Is it the same as AI?

Machine learning is a subset of artificial intelligence, based on the idea of giving machines access to data from which they can learn and improve for themselves.

What are Neural Networks?

Neural networks are computer systems inspired by the biological neural networks in our brains. They are made up of units, or nodes, called artificial neurons, which pass signals to one another and underpin deep learning and other artificial intelligence applications.
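To make the idea concrete, here is a minimal, from-scratch sketch of a single artificial neuron; the weights, bias and inputs are arbitrary example values rather than a trained model. Each neuron multiplies its inputs by connection weights, adds a bias and passes the result through an activation function, and networks are built by wiring many such units together in layers.

```python
# A single artificial neuron sketched from scratch with NumPy.
# The weights, bias and inputs below are arbitrary example values.
import numpy as np

def sigmoid(x):
    """Squash any real-valued input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Weighted sum of the inputs plus a bias, passed through the activation."""
    return float(sigmoid(np.dot(inputs, weights) + bias))

inputs = np.array([0.5, 0.8, 0.2])    # example input signals
weights = np.array([0.4, -0.6, 0.9])  # example connection strengths
print(neuron(inputs, weights, bias=0.1))  # an output between 0 and 1
```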

For more information please email:


contactus@twi.co.uk