
Computer vision: Can AI ever truly emulate the human brain?
Mary-Ann Russon | May 19, 2017


Humans have long dreamed of a world in which robots take on menial labour and household chores so we don’t have to. The problem is that it is very difficult to get a robot to move and navigate like a human, because it doesn’t have our muscles and nerves, and more importantly, it doesn’t have our brain.

Even the brain of a human baby is far more advanced than anything a machine is currently capable of, which is why computer scientists all over the world are working on artificial intelligence.

You can’t just build a brain in a day: in a single second, the mind carries out thousands of processes at once. So you have to start small, break down the concepts of understanding and recognition into bite-size processes, and try to make each one work.

In order to know whether we are in danger or even just to comprehend the situation at hand, the human brain derives a great deal of knowledge from what we see and what we hear. So for example, if you are walking across the road and you suddenly see a car coming, your brain makes a split-second decision to jump back onto the pavement and get out of the way.

To give robots this key ability, researchers are working in the field of computer vision – developing algorithms that help computers analyse and understand images and videos, so that they can extract data and use it to make important decisions. Computer vision is key to the development of facial recognition technology and to processing images from medical scans.
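To make that idea concrete, here is a minimal sketch in Python of extracting data from an image and acting on it, using OpenCV’s bundled face detector. It is an illustration only, not code from the systems described in this article, and the file name street.jpg is a placeholder.

```python
# Hedged illustration: pull data (face locations) out of an image and act on it.
# "street.jpg" is a placeholder file name, not something referenced in the article.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("street.jpg")                        # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"found {len(faces)} face(s)")
for (x, y, w, h) in faces:                              # act on the extracted data
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("street_faces.jpg", image)
```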

How can we get a machine to learn?

To get computers to understand images, we need to talk about machine learning, the overarching name for the practice of building algorithms that analyse data and make predictions from it. Machine learning spans a range of approaches, including neural networks and both supervised and unsupervised learning.

Neural networks are large groups of simple, interconnected processing units running on classical computers. They are trained to solve complex problems using a model loosely inspired by the central nervous system in humans, where billions of sensory neurons take information from the eyes, ears, nose, tongue and skin and send it to the brain, which then makes a decision.

In a neural network there are multiple layers of these units, and each layer examines a different part of the problem. At the end, the neural network combines the answers from each layer to produce one final answer.

So for example, let’s say you gave a neural network a picture showing a road, a mountain, a car and a tree. If you ask it to identify the car, it will need to first break down the image into different parts – it will need to understand what a car is, and what all the other objects around it are, in order to pick the car out of the scene.
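As a rough illustration of that layered idea, here is a tiny feed-forward network written in Python with NumPy. It is a sketch, not code from the article: the layer sizes and the four labels are illustrative assumptions, and the weights are random, so the “prediction” is meaningless until the network has been trained, which is exactly what the next section covers.

```python
# Minimal sketch of a layered neural network: each layer transforms its input
# and passes the result on; the final layer combines everything into one answer.
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Pretend the picture has been flattened into 64 pixel values.
image = rng.random(64)

# Layer 1: 64 inputs -> 32 units (low-level features, e.g. edges).
w1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
# Layer 2: 32 -> 16 units (combinations of those features).
w2, b2 = rng.normal(size=(16, 32)), np.zeros(16)
# Output layer: 16 -> 4 classes (illustrative labels from the example above).
w3, b3 = rng.normal(size=(4, 16)), np.zeros(4)

h1 = relu(w1 @ image + b1)
h2 = relu(w2 @ h1 + b2)
scores = softmax(w3 @ h2 + b3)

labels = ["road", "mountain", "car", "tree"]
print(dict(zip(labels, scores.round(3))))   # random weights, so not meaningful yet
```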

[Image: Visualization of machine learning]

Supervised and unsupervised learning

To get the neural network to pick out the car from the picture, you need to teach it. So, in addition to writing the algorithms that instruct each layer, computer scientists also need to train the computer. They do this using large databases of images that have been pre-labelled by a human, an approach known as supervised learning.
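Here is a hedged sketch of what that teaching step can look like in practice. It uses scikit-learn’s small built-in set of pre-labelled handwritten digits to stand in for the large image databases described above; it is an illustration, not the systems the article refers to.

```python
# Supervised learning sketch: train a small neural network on pre-labelled images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 images, each labelled 0-9 by a human
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The network is "taught" by being fed image/label pairs.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# After training, we hope it makes the right decision on images it has never seen.
print("accuracy on unseen images:", model.score(X_test, y_test))
```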

The computer scientists have to painstakingly feed the neural network the database of images, and hope that the network will learn from the data and make the right decision. But supervised learning isn’t enough.

If we want to have truly artificially intelligent robots that are able to perform tasks and be safe, we need them to be able to autonomously make their own decisions without a human having to be involved at every step.

This is known as unsupervised learning, and the point is to give the neural network various types of data and then leave it to find patterns and relationships in the data by itself. That’s what computing giant IBM is trying to do with Watson – getting its cloud-based computing tools to gather insights and detect patterns in data that humans can’t currently see, to help them make better decisions.
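As a contrast with the supervised example above, here is a minimal unsupervised sketch using k-means clustering, one common unsupervised method chosen purely for illustration (the article does not name a specific algorithm). The same digit images are handed over without their labels, and the algorithm is left to find groupings on its own.

```python
# Unsupervised learning sketch: no labels are given; k-means finds groupings itself.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

digits = load_digits()

# Note: we hand over only the images, not their human-written labels.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)

# The algorithm has grouped similar-looking digits together on its own.
print("images assigned to each cluster:",
      [(clusters == k).sum() for k in range(10)])
```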

Can we ever emulate the human brain?

Developing artificial intelligence is an uphill process. It’s not easy, and we have to focus on solving one tiny task at a time. But the outlook is positive.

It will just take a very long time, and as you can see, it will not be possible to give each robot a brain as powerful as ours – a huge number of computers will need to work away together in the background to compute even a fraction of what the human brain can do today.

Neural networks are now being used in a multitude of fields – imaging software developers for healthcare and commercial photo-editing are using neural networks to make software smarter at detecting objects in images.

Law enforcement agencies are using the technology to detect faces from CCTV camera footage, and even to discover identities from censored videos. The CIA is using it to detect potential social unrest five days before problems happen, by analysing data from social media networks and websites.

The finance industry is using neural networks to detect fraudulent online transactions, while car manufacturers want their automobiles to be able to sense an oncoming collision or bad weather conditions and act quickly to save human drivers.

Clearly, AI will be massively important in the years ahead. But its makers have all been very clear about one thing: its intended use is to augment human skills and ingenuity, not to replace them.

About the author
Mary‐Ann Russon

Mary-Ann Russon is Senior Technology Reporter for the International Business Times UK. Her work recently earned her a nomination for Digital Writer of the Year at The Drum Online Media Awards 2016 and she was shortlisted for Technology Journalist of the Year at the UK Tech Awards 2016. Mary-Ann currently focuses on consumer technology — futuristic technologies like drones, 3D printing and robotics (artificial intelligence/machine learning); IT and mobile security; medical and scientific technology breakthroughs; aerospace and defence; quantum computing.
