What is the Business Case for Computer Vision in the Supply Chain?
In Influencers Roundtable, we bring together voices from consulting firms to share how they perceive various emerging technologies. The answers below have been edited for clarity and length.
When the first Android phone was released, the New York Times did not know what to call it.
“The Google phone is real, and it’s finally here,” an article at the time read, as it explained the difference between a phone’s software, hardware, and maker. The confusion made sense for the time: smartphone, as a word, did not enter the lexicon until about 2010, according to Google Trends.
As new technology emerges, struggling with words is a common story. We faced a similar struggle coming up with the right term for this week’s question — and I’d argue that’s a good thing.
Computer vision, as one of our sources puts it, is behind the artificial intelligence “renaissance” supply chains are living today. It’s so widespread, and has so many use cases, that one term (“Android”) may not be enough to capture the whole technology.
Still, technology without a common name is hard to sell for use in the supply chain. So, we picked a term, and asked the experts:
What is the business case for computer vision in the supply chain?
“Make your vision so clear that your fears become irrelevant.” – Anonymous
Never more true than in manufacturing! Companies are increasingly using vision systems in manufacturing to identify quality issues in supplier parts, to perform in-line quality checks post-assembly, and to prevent quality issues in applications such as robotic path guidance for dispensing.
Applications for machine vision have been quite diverse, ranging from spotting defects in leather for footwear manufacturing, to checking component presence and mounting quality on electronic circuit boards.
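As a rough illustration of what such an in-line check can look like, here is a minimal sketch that flags a part whose silhouette falls outside an expected size range. The image file, threshold, and area limits are hypothetical placeholders that would be tuned to the part geometry and lighting in a real inspection cell.

```python
# Minimal sketch of an in-line visual quality check, assuming a grayscale
# camera frame of a single part. The threshold and expected contour area are
# hypothetical and would be tuned per part and lighting setup.
import cv2


def part_passes_inspection(frame_gray, min_area=5000, max_area=7000):
    """Flag a part whose silhouette deviates from the expected size range."""
    # Separate the part from the background with a fixed threshold.
    _, mask = cv2.threshold(frame_gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False  # nothing detected: fail safe and route to manual review
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    return min_area <= area <= max_area


if __name__ == "__main__":
    # "part_snapshot.png" is a hypothetical image captured at the inspection station.
    frame = cv2.imread("part_snapshot.png", cv2.IMREAD_GRAYSCALE)
    print("PASS" if part_passes_inspection(frame) else "REJECT")
```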
So, what is driving this recent growth spurt? The three biggest drivers have been:
1. Decreasing cost of robotics/automation
2. Increasing labor cost
3. Increasing computational speed
These factors create a perfect, incubatory environment for machine vision technology to flourish.
Advances in Artificial Intelligence (AI) have also given a steroidal boost to the adoption of machine vision. This is because AI algorithms learn just like humans do, and can train their machine-vision eyes on several thousand parts a day, without taking a break or suffering from tired eyes at the end of a 12-hour shift. Given that most plants and warehouses run two shifts a day, the ROI has been healthy, with paybacks typically landing within one to two years of implementation.
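The payback math behind that claim is simple to sketch. The figures below (system cost, labor rate, scrap savings) are purely illustrative assumptions, not data from the roundtable:

```python
# Back-of-the-envelope payback calculation for a vision system.
# All figures are illustrative assumptions, not data from the article.
system_cost = 300_000             # purchase + integration, USD
inspectors_replaced = 1           # per shift
shifts_per_day = 2                # most plants run two shifts
loaded_labor_cost = 35 * 8 * 250  # USD per inspector-year (rate * hrs * days)
scrap_savings = 40_000            # fewer escaped defects per year, USD

annual_savings = inspectors_replaced * shifts_per_day * loaded_labor_cost + scrap_savings
payback_years = system_cost / annual_savings
print(f"Payback: {payback_years:.1f} years")  # ~1.7 years with these assumptions
```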
With the size of the vision system market at $8 billion and growing at a double-digit compound annual growth rate, it is clear to see: more machines are watching, and they’re doing it well!
Computer vision has broad potential in supply chain.
Teaching machines how to “see” the world has so many potential applications that it’s hard to definitively say where the biggest benefits are (even when narrowed down to the supply chain).
A few specific areas that are especially promising include:
• Video and image analysis — This can be used in a number of ways, such as analyzing images for trends to match or recommend fashion items, or for security and biometrics applications such as facial recognition.
• In combination with unmanned aerial vehicles (drones) — This is ideal for situations where the task is dangerous or difficult, or the location is otherwise inaccessible to humans. This can include tasks such as tracking assets in the field (think vehicles and inventory at vast construction sites), mapping (such as creating a representation of the terrain for navigation and other purposes), and site surveys (such as flyovers to get an updated view of locations for development, or an overview of progress).
Another example is within retail and the consumer packaged goods space.
Computer vision can be used in the unified commerce experience in a number of ways. It can track the customer’s journey to the point of sale, create more advanced heat maps to understand where customers are and manage staffing accordingly, optimize space based on customer interest, and evaluate customer responses to products based on emotion detection.
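As a small sketch of the heat-map idea, the snippet below accumulates customer floor positions into a grid. It assumes an upstream detector already provides (x, y) floor positions per frame; the store dimensions and grid size are hypothetical.

```python
# Minimal sketch of a store "heat map" built from person detections.
# Assumes some upstream detector already yields (x, y) floor positions per
# frame; the grid size and store dimensions are hypothetical.
import numpy as np

GRID_W, GRID_H = 40, 30           # heat-map resolution (cells)
STORE_W, STORE_H = 20.0, 15.0     # store footprint in metres (assumed)

heat = np.zeros((GRID_H, GRID_W), dtype=np.int64)

def record_positions(positions):
    """Accumulate one frame's worth of customer floor positions."""
    for x, y in positions:
        col = min(int(x / STORE_W * GRID_W), GRID_W - 1)
        row = min(int(y / STORE_H * GRID_H), GRID_H - 1)
        heat[row, col] += 1

# Example: two customers detected near the entrance and one at a display.
record_positions([(1.2, 0.8), (1.5, 1.1), (12.3, 7.4)])
print("Busiest cell index:", np.unravel_index(heat.argmax(), heat.shape))
```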
In addition, companies can:
• Add computer vision (CV) to robots — This can assist with navigation as well as other tasks such as quality control (the latter is known as machine vision, a subset of CV that has been around for quite some time, but recent technological advancements are democratizing the solution).
• Use it for planogram compliance — This can be automated, for example on a robot or a fixed camera, or used to augment a worker’s ability to check planogram compliance by improving effectiveness and speed while decreasing errors. The CV algorithm identifies things like product positioning, facings, shelf availability, out-of-stocks, and pricing (see the sketch after this list).
• Use it in warehousing and logistics — Tied to the robotics example above, a vision-enabled robot can navigate a warehouse and handle tasks like working through pick and pack lists, especially in high-risk situations where human worker safety is a concern.
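As referenced above, here is a minimal sketch of a planogram-compliance check. It assumes an upstream detector has already mapped each shelf slot to a detected SKU (or to nothing, for an empty facing); the slot names and SKUs are hypothetical.

```python
# Minimal sketch of a planogram-compliance check. The shelf detections are
# assumed to come from an upstream object detector; here they are given as a
# mapping of slot -> detected SKU (or None for an empty facing). All SKUs and
# slot names are hypothetical.
expected_planogram = {
    "shelf1_slot1": "SKU-COLA-330",
    "shelf1_slot2": "SKU-COLA-330",
    "shelf1_slot3": "SKU-WATER-500",
    "shelf2_slot1": "SKU-JUICE-1L",
}

detected = {
    "shelf1_slot1": "SKU-COLA-330",
    "shelf1_slot2": None,             # empty facing -> possible out-of-stock
    "shelf1_slot3": "SKU-JUICE-1L",   # wrong product in the slot
    "shelf2_slot1": "SKU-JUICE-1L",
}

def compliance_report(expected, detected):
    """Compare detections against the planogram and list the issues found."""
    issues = []
    for slot, sku in expected.items():
        found = detected.get(slot)
        if found is None:
            issues.append((slot, "out of stock / empty facing"))
        elif found != sku:
            issues.append((slot, f"expected {sku}, found {found}"))
    compliance = 1 - len(issues) / len(expected)
    return compliance, issues

score, issues = compliance_report(expected_planogram, detected)
print(f"Planogram compliance: {score:.0%}")  # 50% in this toy example
for slot, problem in issues:
    print(f"  {slot}: {problem}")
```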
Computer vision is becoming the new vibration sensor. While in the 2010s we talked about equipment vibration, electrical current and weather, in the 2020s it will be all about what we can see … well, what the computer can see ☺.
But in all seriousness, computer vision together with machine learning is going to add the missing sense to our IoT endeavors, and we are already seeing this at our industrial IoT clients today. Computer vision is the lynchpin of the current AI/machine learning (ML) renaissance. In 2015, deep learning neural nets crossed a historic milestone by achieving human-level accuracy on the ImageNet image classification benchmark, and algorithm sophistication, training-data proliferation, and performance have only accelerated since.
The majority of AI/machine learning powered solutions across the supply chain follow a sense→think→act paradigm. As such, computer vision is critical for all applications where visual information is the key input for sensing.
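A bare-bones sketch of that sense→think→act loop might look like the following. The camera, model, and controller here are stubs standing in for real components, not any particular product's API.

```python
# Skeleton of the sense -> think -> act loop described above. The camera,
# model, and controller below are stubs standing in for real components;
# a deployment would swap in its own capture, detector, and control logic.
import random
import time

class StubCamera:
    def read(self):            # sense: grab the latest frame (stubbed)
        return "frame"

class StubModel:
    def predict(self, frame):  # think: run inference (stubbed)
        return [{"label": "defect"}] if random.random() < 0.1 else []

class StubController:
    def send(self, decision):  # act: trigger the downstream system (stubbed)
        print("action:", decision)

def run_loop(camera, model, controller, cycles=5, hz=10):
    for _ in range(cycles):
        frame = camera.read()                      # sense
        detections = model.predict(frame)          # think
        decision = ("divert_to_rework"
                    if any(d["label"] == "defect" for d in detections)
                    else "pass")
        controller.send(decision)                  # act
        time.sleep(1.0 / hz)

run_loop(StubCamera(), StubModel(), StubController())
```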
It is useful to think about vision-enabled use cases along two dimensions:
1. The first is the learning paradigm, ranging from narrow (supervised) approaches, where the vision solution can only solve problems similar to those it was trained on, to more flexible paradigms (e.g. reinforcement and unsupervised learning), where the vision solution can handle situations it has not necessarily seen before or that require complex sequencing of actions and outcomes.
2. The second is the nature of the vision input required to solve the underlying problem: does the solution take discrete visual inputs and images, or does it need to ingest and respond to a real-time, continuous stream of visual inputs?
In short, we expect to see continued adoption and scaling of supervised and reinforcement-learning-based vision solutions in the near future. While unsupervised-learning-based solutions are still in the proof-of-concept stage, companies should continue to monitor progress, as breakthroughs would significantly lower the current high training-data requirements for solution development.
The business case for imaging technology and computer-based vision in the supply chain, particularly when linked to AI, can be made in three distinct areas:
1. Where you need to keep a constant “eye” on a process;
2. Where you’d like to reduce the cycle time by orders of magnitude; and
3. Where you need to catch errors or damage in a logistics process and quickly rectify them.
Computer-based vision will help reduce bottlenecks, which typically occur when manual processes kick in; it will help reduce or eliminate human error; and it will speed up processes.
Let’s take constant watchfulness to start. At a beverage manufacturer using an automated manufacturing process, a machine breaking down could create a massive bottleneck or disruption in the supply chain, rendering the manufacturer dead in the water. Deploying computer-enabled vision and imaging technology (say, with enhanced infrared capabilities) to observe the machine can provide the manufacturer with a constant “set of eyes” watching over the equipment, monitoring its performance, and anticipating potential failures.
It’s a different story if a human being is trying to do this same job. He or she is subject to fatigue, limited powers of observation, distraction or simply the need to be doing something that’s perceived as a higher-value task. With machine vision, it becomes possible to keep a constant watch.
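A minimal sketch of that constant watchfulness follows, assuming the infrared camera exposes a calibrated temperature array (as many thermal-camera SDKs do); the watched region and alert threshold are hypothetical.

```python
# Minimal sketch of "constant watchfulness" over a machine with an infrared
# camera. The frame source and the alert threshold are assumptions; real
# thermal cameras usually expose calibrated temperature arrays via their SDK.
import numpy as np

HOTSPOT_C = 85.0                              # alert above this temperature (assumed)
REGION = (slice(100, 200), slice(300, 400))   # bearing housing in the frame (assumed)

def check_frame(temps_c: np.ndarray) -> bool:
    """Return True if the watched region shows an abnormal hotspot."""
    region_max = float(temps_c[REGION].max())
    return region_max > HOTSPOT_C

# Example with a synthetic 480x640 temperature map at ~60 C plus one hotspot.
frame = np.full((480, 640), 60.0)
frame[150, 350] = 92.0
if check_frame(frame):
    print("ALERT: hotspot detected, schedule maintenance inspection")
```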
A second area where imaging technology could be deployed is in logistics, where goods are being moved from one place to the next, over several legs of a journey, and could be subject to damage anywhere along the line. At each stage, computer-based vision can help check on the state of the goods and send word back. If an item gets damaged, an early replacement can speedily go out, and better still, we know where the damage happened along the way.
In my third example, one which vastly reduces cycle time, I’ll stretch the definition of supply chain. When your property has been damaged, for example by a storm, in the old model your insurance company would send out an adjuster to view the damage, photograph it, and send details back to the claims processor. Based on the damage, the claims adjuster would come up with a dollar value for the payment, and a check would get cut and mailed. Many insurers are now sending drones with computer-aided vision instead. These drones photograph the damage, the photos are uploaded to the cloud, and the claim is digitally routed to the back office, from where payment goes out to the customer. From the computer-based vision step on, the entire claims process can be automated.
In each of these cases, humans can be redeployed to higher-value tasks, providing a strong business case for the technology’s use in the supply chain.