
How machine vision is changing the manufacturing industry
personSooraj Shah eventJan 20, 2020

The manufacturing industry is becoming more sophisticated as a result of advances in technology.

It should come as no surprise that every aspect of the manufacturing industry is becoming more sophisticated as a result of advances in technology. After all, unlike other sectors, the manufacturing industry has been relying on machines for centuries.

Despite that, some technologies have only recently come to the fore, and they have changed core parts of the manufacturing process – machine vision is one of them.

A good example of the sophisticated work and potential benefits of deploying advanced image recognition technology comes from RS Components, a distributor of electronics and maintenance products.

The company sought out innovative image recognition technology that could identify products across its extensive online catalogue. Hack & Craft (H&C) designed and implemented a multi-level image recognition architecture, running on a Raspberry Pi housed inside a bespoke self-service kiosk with a studio-simulation lightbox. The technology uses TensorFlow-based convolutional neural networks, supplemented with colour extraction, object-size detection and, finally, OCR. These multiple steps increase the capability and accuracy of the system, and because it is designed to be self-learning, the technology will get smarter as time goes on.
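The staged design described above can be sketched in outline. This is a minimal illustration of the multi-stage idea only: the stage functions below are stand-ins for the real CNN, colour, size and OCR models, and all SKU names and scores are invented for the example.

```python
def cnn_classify(image):
    # Stand-in for the TensorFlow CNN stage: candidate SKUs with confidence.
    return [("SKU-123", 0.62), ("SKU-456", 0.31)]

def colour_filter(image, candidates):
    # Stand-in: keep candidates whose catalogue colour matches the image.
    catalogue_colour = {"SKU-123": "red", "SKU-456": "blue"}
    extracted = "red"  # a real system would compute this from the pixels
    return [(s, c) for s, c in candidates if catalogue_colour.get(s) == extracted]

def size_filter(image, candidates):
    # Stand-in: a real system would compare detected size to the catalogue.
    return candidates

def ocr_confirm(image, candidates):
    # Stand-in: a real system would match OCR'd part markings to the SKU.
    return candidates

def identify(image):
    """Run the stages in order; each narrows the candidate list, and a
    stage that would eliminate everything is treated as inconclusive."""
    candidates = cnn_classify(image)
    for stage in (colour_filter, size_filter, ocr_confirm):
        narrowed = stage(image, candidates)
        if narrowed:
            candidates = narrowed
        if len(candidates) == 1:
            break
    return candidates[0][0] if candidates else None
```

The point of chaining cheap, independent checks after the CNN is that each one discards candidates the previous stage could not separate, which is what lifts the overall accuracy.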

The installation of the image recognition service has increased sales by providing faster and more accurate identification of unknown products, allowing customers to easily purchase the products they need through a more efficient, automated process.

Machine vision is the broad title for a number of technologies – the simplest of which is 2D image recognition, and the most complex 3D machine vision. Combined with robotics, 2D image recognition uses a single digital camera and a software analysis tool to support quality control and help manufacturers check for abnormalities; it can swiftly process parts lying in a flat plane relative to the robot. 2D image capture can also be used for grading and sorting. It is most commonly seen in factories handling fruit and vegetables, enabling a company to check whether size, shape, colour and various other attributes are what they are supposed to be.
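The grading step that follows 2D measurement is straightforward to sketch. This is an illustrative example only, not taken from any system in the article; the specification values and field names are assumptions, and a real line would derive the measurements from the camera image.

```python
# Hypothetical produce specification: acceptable size range and colour.
SPEC = {"min_diameter_mm": 60, "max_diameter_mm": 80, "colour": "orange"}

def grade(measured):
    """Pass/fail one item against SPEC, returning the reasons for rejection."""
    reasons = []
    if not SPEC["min_diameter_mm"] <= measured["diameter_mm"] <= SPEC["max_diameter_mm"]:
        reasons.append("size out of tolerance")
    if measured["colour"] != SPEC["colour"]:
        reasons.append("wrong colour")
    return (len(reasons) == 0, reasons)
```

Returning the reasons alongside the verdict is what lets a sorting line route rejects to different bins, rather than just discarding them.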

Last year, car manufacturer Audi trialled an AI-based system that uses smart cameras with image recognition software to detect small cracks in sheet metal. Trained on millions of images, the system can potentially detect the finest of defects, helping to automate quality control. Meanwhile, General Motors deployed a cloud-based image classification tool on nearly 7,000 robots. It detected 72 instances of component failure that could have led to unplanned downtime.

3D machine vision goes a step further, locating parts across all three dimensions. It can also detect objects that are out of position, so robots equipped with the technology have more flexibility. 3D machine vision systems don't require extensive reprogramming because they can account for unexpected variables in the work environment. Lighting, for instance, may affect the images taken: a 2D image recognition system may mistake a reflection for a defect, or miss a defect because of the way light is reflected off a product, whereas a 3D machine vision system can recognise this as a lighting issue and process the image with that taken into account.

Another example is Newbury Electronics, a manufacturer and designer of PCBs (printed circuit boards). The company has customers in almost every industry including autosport and aerospace, and these end users are more demanding than ever, wanting modern PCBs with complex capabilities but also requiring smaller, lightweight boards. In order to meet these manufacturing requirements for multi-layer boards, Newbury Electronics has invested in new production equipment.

John Rowley, MD at the company, explains that it has a design unit, a fabrication unit and an assembly unit.

In the assembly unit, the organisation has robotics everywhere.

“We’ve got machines that will pick up really tiny components and scan them with a static camera, and this camera will check the cross-section or rotational position of those components and analyse them against the library, to make sure they are what they think they should be and then place them with an accuracy of say half a thousandth of an inch, which is mind-blowing really,” Rowley states.

Between 10,000 and 20,000 components are checked in this way every hour for each machine, and Newbury Electronics has 10 machines.
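The check Rowley describes, comparing a scanned component against a library entry before placement, can be sketched in a few lines. This is an assumption-laden illustration, not Newbury Electronics' actual system: the part number, library values and tolerances are all invented for the example.

```python
# Hypothetical component library: expected dimensions and orientation.
LIBRARY = {"C0402": {"width_mm": 0.5, "length_mm": 1.0, "rotation_deg": 0.0}}

def verify(part_number, scan, size_tol_mm=0.05, rot_tol_deg=2.0):
    """True if the scanned measurements match the library entry within
    the given size and rotation tolerances."""
    ref = LIBRARY[part_number]
    return (abs(scan["width_mm"] - ref["width_mm"]) <= size_tol_mm
            and abs(scan["length_mm"] - ref["length_mm"]) <= size_tol_mm
            and abs(scan["rotation_deg"] - ref["rotation_deg"]) <= rot_tol_deg)
```

At 10,000 to 20,000 components per hour per machine, a check this cheap is what makes per-part verification feasible at line speed.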

After this is completed, the first assembled board goes through scanning, which checks whether the PCBs match the anticipated data from an image library. There are also test robots that send electrical signals through bare PCBs to ensure the connections that should be there are present, and that other connections are not. If something is wrong, it is flagged to an operator, who decides whether it is a genuine fault that needs fixing or whether the machine parameters have not been set correctly.

After the initial scans, which use mirror direct imaging, the organisation then uses automated optical inspection (AOI), which Rowley believes contains aspects of 3D machine vision.

“With 3D machine vision you have some robotics and you’ve got a 3D camera taking imagery and working out where failures are coming from in high-volume applications that will happen on the production line. The only difference between that and the AOI machine we use is that we do it offline – and this is only because we’re doing 10 to 15 different custom designs every day,” Rowley states.

“The AOI machine takes very high-definition pictures of all areas of the populated piece of electronics and you can zoom in at any angle to any degree,” he adds.

Newbury Electronics can ensure accuracy in the drilling and profiling stages of manufacturing on boards with features typically 25 microns thick, and with holes measuring only 50 microns in diameter – well below the 150 microns that is typically the smallest hole that can be mechanically drilled.

The benefits of machine vision are clear: without these technologies, such a high quality of engineering would not be achievable. They are not merely making manufacturers more efficient by reducing dependence on people; they are enabling manufacturers to become more accurate and more precise, and allowing them to create more sophisticated products.

About the author
Sooraj Shah

Sooraj Shah is a journalist, editor and copywriter specialising in B2B technology with strong contacts in the IT and technology industry. He is currently contributing editor for New Statesman Tech and Contributor for Forbes. As a freelance journalist he has written for publications including The Guardian, Computer Weekly, Diginomica, CIO UK, The Register, Computing, Infosecurity Magazine, SC Magazine UK, Mobile Europe, Hot Topics and IT Pro.
