How Deep Learning's Classification Tool Works
Classifying parts and components is useful for complex assembly verification, when multiple items must be identified and sorted for use along the product line. Classification is also critical for in-line process control and continuous process improvement, providing essential data to catch mistakes before they become entrenched problems.
Despite the incredible advancements in machine vision, detecting objects or components that vary in shape, size, and location has traditionally required the flexibility of the human eye. Even the most powerful computer-powered inspection systems are easily confused by busy, highly patterned backgrounds and image quality issues like specular glare.
These conditions make it very difficult for vision algorithms to locate an object or region of interest with reliable precision. It can be time-consuming and difficult, if not altogether impossible, for an automated system to ignore irrelevant features in order to successfully identify the region of interest.
In these scenarios, when an application demands automated precision to find complex features and objects, it can be useful to turn to deep learning-based tools which, rather than relying on programming, learn from image examples. Using these self-learning algorithms unlocks new abilities to locate and sort parts into classes by their color, texture, material, packaging, or defect type.
Let’s examine how deep learning-based classification tools can help various industries.
Deep Learning Classification for the Automotive Industry
In many automotive applications, identification is performed by barcode reading and optical character recognition (OCR) technology to decode barcodes and serial numbers. Yet in environments that don’t support code reading or alphanumeric text, manufacturers must rely on visual identification. This can be problematic for any identification occurring in unpredictable locations or with visual variation, and it is complicated further when parts must be counted and sorted or classified according to these markers.
Imagine an automotive manufacturer receiving a shipment of spark plugs from a parts manufacturer. These plugs vary in their color and marking because they are designed for several car models. As they arrive on the production line for pre-assembly in differently colored trays, the automated inspection system needs to identify, count, and classify them in order to pass on data to the robotic assembly stage. To do this, the automated system needs not only to differentiate between the differently colored spark plugs, which provides important assembly information to the robots, but also to ignore the background colors of the trays, which are a distraction.
A classification challenge such as this demands a deep learning-based tool which can generalize the normal appearance of the spark plugs based on their shape and dimensions without getting distracted by individual markings, and then classify them by their color. To do this, a deep learning-based image analysis tool like Cognex Deep Learning uses labeled training images to generalize the appearance of a spark plug to count the collection on the tray. Then, it is able to sort them by color and transmit this data to the robot for assembly.
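To make the count-and-sort step concrete, here is a minimal sketch in Python. It is not Cognex’s API: it assumes a deep learning model has already located each spark plug, and uses a simple nearest-centroid rule on the crop’s mean RGB color to stand in for the learned color classifier. All class names and feature values are made-up illustrations.

```python
from collections import Counter

# Labeled training crops: mean (R, G, B) of the plug body per class.
# These values are hypothetical examples, not real measurements.
TRAINING = {
    "blue_plug":  [(30, 60, 200), (35, 55, 190)],
    "green_plug": [(40, 180, 60), (45, 170, 70)],
}

def centroid(samples):
    """Average the labeled examples into one prototype color per class."""
    n = len(samples)
    return tuple(sum(c[i] for c in samples) / n for i in range(3))

CENTROIDS = {label: centroid(s) for label, s in TRAINING.items()}

def classify(rgb):
    """Assign a detected plug crop to the nearest class prototype."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda label: dist(rgb, CENTROIDS[label]))

def count_by_class(detections):
    """Count plugs per class: the data handed to the assembly robot."""
    return Counter(classify(rgb) for rgb in detections)

# Detections from one tray. Tray background colors never reach this
# stage, because only located plug crops are scored.
tray = [(32, 58, 195), (42, 175, 65), (33, 57, 198)]
print(count_by_class(tray))
```

A real system replaces the hand-built color rule with a trained network, but the flow is the same: locate, classify each detection, then aggregate counts per class for the downstream robot.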
Deep Learning Classification for the Electronics Industry
For electronic hardware manufacturers working under extremely tight tolerances, all surface defects on their components must be meticulously detected and catalogued early in the production process. The metal surfaces of these components produce specular glare, which can confuse an inspection system by seemingly changing the part’s appearance to the camera.
Common defects like hits, scratches, or stains on components which occur during assembly are often difficult to discern during early stages of production because of the rough, textured, and reflective surfaces. What’s more, these defects are only visible under certain lighting conditions, often manifesting as local changes in contrast caused by non-uniform lighting.
At the same time, normal variations and naturally occurring but insignificant anomalies in the material’s surface need to be tolerated by the inspection system. Using a deep learning-based tool, electronics manufacturers can detect typical defects in any orientation using standard, non-specialized lighting, tolerate specular glare and insignificant anomalies, and then sort and classify defects by their common attributes.
One common application involves classifying surface defects for quality control. Deep learning can sort each defect type into its own class according to its common characteristics. For example, a model can sort ‘hits’ from ‘stains,’ ‘dents,’ and ‘scratches’ according to how they commonly present on metal surfaces. Because each hit varies slightly from the next, a manufacturer needs to use deep learning to conceptualize and generalize the common characteristics of hits in order to correctly identify them.
To do this, the deep learning-based inspection system incorporates contextual information about the components’ metal surfaces in order to form a reliable model of the shape, dimensions, and texture of surface defects. Consequently, defects like hits and scratches are flagged as anomalies—or failing or “bad” images—because they appear as textured areas that deviate from the normal surface texture. From there, all “bad” images with common characteristics are sorted into classes such as hits, stains, dents, and scratches.
If certain defect types don’t cause functional damage and are considered permissible by the manufacturer, then the system can make the decision to tolerate that class and allow it to pass through to the next stage of production.
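The two-stage flow described above—flag anomalous surface crops, then sort flagged crops into defect classes, letting permissible classes pass—can be sketched as follows. The threshold, class names, and scores are hypothetical illustrations, not values from any real Cognex pipeline.

```python
# Stage-1 anomaly threshold and the classes the manufacturer has
# decided to tolerate are assumed tuning choices for this sketch.
ANOMALY_THRESHOLD = 0.5
TOLERATED_CLASSES = {"stain"}   # e.g. cosmetic stains are permissible

def inspect(crop_score, class_scores):
    """Return 'pass', or the defect class that causes a reject.

    crop_score   -- stage-1 anomaly score in [0, 1] (0 = normal texture)
    class_scores -- stage-2 per-class scores for a flagged crop
    """
    if crop_score < ANOMALY_THRESHOLD:
        return "pass"                  # normal surface variation, tolerated
    defect = max(class_scores, key=class_scores.get)
    if defect in TOLERATED_CLASSES:
        return "pass"                  # permissible defect class passes on
    return defect                      # reject, reporting the defect class

# Three example crops with made-up scores:
print(inspect(0.1, {}))                                          # normal
print(inspect(0.9, {"hit": 0.8, "scratch": 0.1, "stain": 0.1}))  # reject
print(inspect(0.7, {"hit": 0.2, "scratch": 0.1, "stain": 0.7}))  # tolerated
```

The key design point is that tolerance is a per-class policy decision layered on top of detection: the model still sees and classifies every deviation, and the manufacturer decides which classes may pass.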
Deep Learning Classification for the Packaging Industry
Appearance-based packaging identification without the use of a barcode is challenging. In these cases, inspection systems need to be sensitive not only to the normal and expected variations in product or batch appearance but also to the way that packages change in appearance due to local changes in contrast from non-uniform lighting.
In the case of multi-pack food, beverage, and consumer products, where the same packs may be prepared differently in their caddies, the inspection system needs to instantly recognize that the subtle differences in wrapping—which can be hard to detect under certain lights—result in two separate classes of packages.
For example, two identical four-packs of toilet paper rolls labeled with the same barcode may be nested differently in their caddies; some four-packs may nest individually, while others are wrapped up with additional packs for shipping. To get a machine vision inspection system to catch this subtle difference would involve programming with extensive selection criteria and carefully tuned and optimized detection algorithms.
Deep learning-based image analysis instead relies on a human-like approach to learn to distinguish between the two different classes of packages. Based on labeled images of both classes of packs, the system is able to perceive that the additional wrapping is what distinguishes them and sort them accordingly.
Deep Learning Classification for the Life Sciences Industry
Cancerous cells exhibit variable and unpredictable forms. In fact, cells of a single cancer type vary more in size and shape than they share common characteristics. It can be nearly impossible for a pathologist to pinpoint exactly what ‘makes’ a breast cancer cell. The old adage that one “knows it when one sees it” proves frustratingly true when it comes to cell pathology.
Deep learning-based defect detection tools overcome this hurdle by learning the innumerable forms a cancerous cell can take, accurately flagging all those which appear anomalous while still accounting for the natural and normal variations of a healthy cell. And when it comes to grading a cell’s degree of differentiation, a deep learning-based tool can classify all anomalous images according to specific morphologies, a task that traditional machine vision cannot handle due to inherent programming limitations.
For example, prostate cancer cells are graded according to their “Gleason pattern,” or degree of glandular structure, on a scale of 1-5, where “1” is uniform and “5” is irregular and distinct. A deep learning-based tool can help automate the inspection by incorporating a model of what cell tissues graded 1-5 look like based on their degree of glandular differentiation, and based on that appearance, a classification tool can sort all samples accordingly. A tool like Cognex Deep Learning can do this even when the scene involves multiple samples, by focusing on several regions of interest within a single image.
When it comes to classification, “classes” can vary by defect type, size, shape, color, and various morphologies unique to each industry. Deep learning-based industrial image analysis software can not only power up manufacturers’ automated inspections but also do the previously impossible: classify, sort, and grade without programming. This makes it possible to finally automate inspections that involve differentiating between visually similar but different products while tolerating large variations within the same class, as well as distinguishing between tolerable anomalies and true defects.
Deep learning-based solutions like Cognex Deep Learning expand the power of traditional machine vision to not only inspect but also sort and classify parts by their visual characteristics, helping to speed assembly and catch production errors before they impact quality and throughput.