Identify Various Components or Part Configurations With Deep Learning Assembly Verification Tools
Incredible strides have been made in machine vision such that advanced algorithms can distinguish between visually similar parts by the most subtle of features or markings. There is a natural tradeoff between the specificity and accuracy of inspections and the upfront investment in programming and training. Developing rule-based logic to teach a computer to distinguish among the hundreds or thousands of variations possible in a single part is, unsurprisingly, incredibly time-intensive.
And still, the approach isn’t necessarily foolproof.
Unstructured or highly complex scenes that exhibit high-contrast patterns and specular glare can simply become too unwieldy to program, especially for assembly verification applications that need to identify a large number of components, which may vary part to part and appear in numerous configurations. Even when parts are consistent and well manufactured, assembly verification inspections remain some of the toughest to automate. Although machine vision systems can tolerate some variability in a part's appearance due to scale, rotation, or pose distortion, conditions like complex, confusing surface textures and poor lighting pose serious challenges. Machine vision systems struggle to account for variability and deviation between very visually similar parts.
When an assembly or sub-assembly contains many deviations and variations, the burden on the system intensifies until accounting for all of these differences becomes too difficult to program. Ceding these inspections to human inspectors is inefficient, unscalable, and can still result in error due to fatigue and biases that vary from inspector to inspector.
Deep learning-based image analysis tools are an alternative to automate the toughest assembly verification inspections.
Assembly Verification for the Automotive Industry
Many objects and scenes in the automotive manufacturing industry are unpredictable and present differently to the camera during various stages of assembly. Final assembly is a notoriously tricky verification process for finished cars. This is because it challenges the step-by-step filtering and rule-based algorithms which define traditional machine vision development.
As defect libraries grow and configuration changes expand, it becomes too unwieldy to maintain these algorithms. Final assembly verification tests the very limits of programming because it involves multiple changing variables like lighting, color, curvature, and field of view that can be very hard for a computer and camera to isolate. This is why, traditionally, human inspectors continue to perform exterior checks at the final stage of a car’s assembly. Though they may be skilled at identifying a variety of parts and features as different car models move down the line under changing lighting conditions, human inspectors can still be inconsistent.
Instead, deep learning software can reliably build a library of referenceable features, colors and components alike, and locate and identify them within a photo of a fully assembled automobile. From there, automating the final assembly verification check requires just one additional step: once the components have been located and identified, the software provides a 'pass' or 'fail' result.
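The locate-identify-then-judge flow described above can be sketched in a few lines. This is an illustrative stand-in, not a real Cognex API: the `Detection` structure, the component names, and the confidence threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch: turning component detections into a pass/fail
# result for final assembly verification. The detector itself is
# assumed to have already run and produced these detections.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # component name, e.g. "left_mirror" (invented)
    confidence: float  # detector confidence score, 0.0-1.0

# Components that must be present on the finished car (illustrative).
REQUIRED = {"left_mirror", "right_mirror", "badge", "fog_lamp"}

def verify_assembly(detections, threshold=0.8):
    """Pass only if every required component was found confidently."""
    found = {d.label for d in detections if d.confidence >= threshold}
    missing = REQUIRED - found
    return ("pass", missing) if not missing else ("fail", missing)

# Example: one required component (fog_lamp) is absent from the image.
detections = [
    Detection("left_mirror", 0.97),
    Detection("right_mirror", 0.94),
    Detection("badge", 0.91),
]
result, missing = verify_assembly(detections)
# result is "fail"; missing is {"fog_lamp"}
```

In a real deployment the detection list would come from the trained network's output rather than being hand-built, but the final pass/fail decision reduces to a set comparison like this one.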
Assembly Verification for the Electronics Industry
Electronics manufacturers are embracing deep learning for the highly judgment-based decision-making required in their assembly verification applications. It is too time-consuming to train a traditional inspection system to spot and verify the presence and correct placement of multiple components. Inspections involving images with many tiny components close to or touching one another can be nearly impossible, or too complex, to solve with traditional machine vision.
A piece of electronic hardware being assembled, like a fuse box, must be inspected for any defects, contaminants, functional flaws, or other irregularities that could impede performance or compromise safety. These errors need to be caught before the fuse box is assembled into a device or shipped to customers. Thankfully, deep learning-based software is optimized to work under these confusing conditions, including when images are low-contrast or are poorly captured.
To ultimately verify the fuse box’s complete assembly, the deep learning tool first learns to identify the many electronic components based on images labeled with the locations of each part type. From this input, the tool’s neural networks build a reference model of each component: this includes their normal size, shape, and surface features as well as their general location on the box. During runtime, the tool segments all areas of the box containing components to correctly identify whether components are present or absent and are the correct type.
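The runtime check described above can be sketched as a comparison against a per-slot reference model. The reference data below stands in for what the neural network would learn from labeled training images; the slot names, component types, coordinates, and tolerance are all invented for illustration.

```python
# Illustrative sketch: verifying fuse box slots against a learned
# reference model (expected component type and nominal position per
# slot). All values here are assumptions, not real training output.
REFERENCE = {
    "slot_1": {"type": "10A_fuse", "center": (40, 25)},
    "slot_2": {"type": "relay",    "center": (90, 25)},
    "slot_3": {"type": "20A_fuse", "center": (140, 25)},
}

def check_slots(detections, tolerance=10):
    """Compare runtime detections to the reference model, slot by slot.

    detections: mapping of slot -> (component_type, (x, y)) found at
    runtime. Returns slot -> "ok" | "missing" | "wrong_type" | "misplaced".
    """
    report = {}
    for slot, ref in REFERENCE.items():
        found = detections.get(slot)
        if found is None:
            report[slot] = "missing"       # component absent
            continue
        ftype, (x, y) = found
        rx, ry = ref["center"]
        if ftype != ref["type"]:
            report[slot] = "wrong_type"    # present but incorrect part
        elif abs(x - rx) > tolerance or abs(y - ry) > tolerance:
            report[slot] = "misplaced"     # correct part, wrong position
        else:
            report[slot] = "ok"
    return report
```

A tray with a fuse in the relay slot and an empty third slot, for example, would report `{"slot_1": "ok", "slot_2": "wrong_type", "slot_3": "missing"}`.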
Assembly Verification for the Packaging Industry
Consider the task of verifying the correct assembly of a packaged frozen meal. The packaging of several food trays may look similar on the outside but contain a different mix of goods on the inside. Alternatively, the same food components may be present in all packages, but their layout or portion size may change.
The number of food components and various configurations and layouts are difficult and time-consuming to program using traditional machine vision, especially because it is hard to automatically locate and identify multiple features within a single image using just a single tool. The highly complex scenes involved in any final packaging assembly verification application can be too difficult to control as exceptions and defect libraries grow.
Deep learning-based image analysis makes it simple to verify that a food tray is correctly assembled by learning not only the slightly variable appearance of each food component, but also the acceptable layouts. Once trained on the normal appearance of individual components, the software builds a complete database of the various foods to locate. During run-time, the inspection image can be split into different regions so that the software can check for the presence of foods and verify that they’re the correct type.
For situations where packaging layouts vary, the software is flexible enough to allow the user to train multiple configurations. As configurations change, the deep learning software can be adjusted to continue to spot each individual component and confirm that it is of the correct type. In this way, a user can automate the verification of a packaged food tray or frozen meal using just one tool.
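The multi-configuration check described above amounts to matching the observed per-region contents against each trained layout. A minimal sketch, assuming invented region labels, food names, and configuration names:

```python
# Hypothetical sketch: verifying a food tray against several trained
# layout configurations. Each configuration maps a tray region to the
# food type expected there; all names are illustrative.
CONFIGURATIONS = {
    "meal_a": {"main": "chicken", "side": "rice",  "dessert": "brownie"},
    "meal_b": {"main": "pasta",   "side": "beans", "dessert": "cobbler"},
}

def match_configuration(observed):
    """Return the name of the configuration the tray matches, or None."""
    for name, layout in CONFIGURATIONS.items():
        if observed == layout:
            return name
    return None

# Example: the foods identified in each region of the inspection image.
observed = {"main": "pasta", "side": "beans", "dessert": "cobbler"}
# match_configuration(observed) returns "meal_b"; any mismatch or
# missing region returns None, i.e. a failed verification.
```

Adding a new tray layout is then just a matter of training the new configuration, rather than rewriting inspection logic.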
Assembly Verification for the Consumer Electronics Industry
During the assembly of mobile device panels or modules, it is not unheard of for foreign materials like loose screws to fall into the housing of a neighboring module on the line. It is critical to detect any inclusions lest they cause obstruction or damage during final assembly. Debris is typically small, and slight variations in appearance—whether due to subtle lighting contrasts, changes in orientation, or metallic glare—can confuse an automated inspection system.
At the same time, these types of conditions can also make it difficult for the inspection system to tell whether the expected components are in their correct housing. Finally, mobile device panels contain many parts close together, which can be difficult for an inspection system to distinguish as independent components.
Programming all of these variables into a rules-based algorithm is time-consuming, prone to error, and challenging to maintain in the field. Fortunately, deep learning-based image analysis software can learn the correct finished appearance of the panel or module's many components in order to identify improperly placed parts like loose screws. By training on "bad" images of a module, where debris exists or components are missing, as well as known "good" images, where the module is assembled correctly, a tool like Cognex Deep Learning creates a reference model of a mobile device panel that thrives under challenging conditions. It can identify defective panels as accurately as a human inspector, but with the speed and consistency of an automated system.
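The good/bad supervised setup above can be illustrated with a toy classifier. The 1-nearest-neighbor rule on two-dimensional feature vectors stands in for the deep network, and the feature values are invented; the point is only the shape of the training data, labeled images of both correct and defective modules.

```python
# Minimal sketch of supervised good/bad training: a model is fit on
# examples labeled "good" (correctly assembled) and "bad" (debris
# present or parts missing). Feature vectors here are toy stand-ins
# for what a deep network would extract from panel images.
import math

# (feature_vector, label) pairs standing in for labeled training images.
TRAINING = [
    ((0.10, 0.20), "good"),
    ((0.15, 0.25), "good"),
    ((0.90, 0.80), "bad"),   # e.g. loose screw visible in the housing
    ((0.85, 0.90), "bad"),   # e.g. missing component
]

def classify(features):
    """Label a new panel image by its nearest labeled example."""
    _, label = min(TRAINING, key=lambda t: math.dist(t[0], features))
    return label
```

A new image whose features land near the "good" cluster, e.g. `classify((0.12, 0.22))`, is accepted; one near the "bad" cluster is flagged for rework.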
Traditionally, assembly verification applications have been relegated to human inspectors. However, on production lines that need to inspect hundreds or thousands of parts per minute both reliably and repeatedly, the inspection capabilities of humans are insufficient. Deep learning-based tools are now able to fill that void.
Cognex Deep Learning is trained on labeled images—no software development required—to correctly locate and identify parts which vary in size, shape, and surface features. Once this hurdle is overcome, confirming whether the correct components are present and arranged in the right layout or configuration becomes easy and, unlike traditional vision, requires no additional logic-building.