A machine vision system is commonly defined as a system that integrates several technologies to create an imaging-based automation or inspection system for industrial and manufacturing applications. It may include robotic guidance as part of the process. A closely related term is “automated optical inspection.”
Machine vision applications are widely used to improve results in many kinds of manufacturing or industrial environments. In some applications, such as high-speed automated assembly or product handling, a machine vision system might be used to confirm the placement of objects on a conveyor belt. Other applications might include a multi-spectral view of an aircraft airframe to search for structural flaws. Still others might be used to detect the presence of certain chemicals. In all cases, the machine vision system is “seeing” in a way that is beyond the capability of humans.
Let’s look in more detail at some of the elements in an automated optical inspection system.
We’ll assume we have an object that is to be inspected. To inspect it, any machine vision system needs the following basic elements:
- A sensor – often a camera – used to observe the object.
- An illumination source.
- Some sort of data capture element, such as a frame grabber and frame buffer.
- A program that can process and interpret the captured data.
- Some pre-determined output or action based on the interpreted data.
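The elements above can be sketched as a simple processing loop. This is only an illustrative sketch, not a real driver stack: the frames are synthetic, and the function names (`capture_frame`, `interpret`, `act`) and threshold values are hypothetical stand-ins for the sensor, frame grabber, processing program, and output action.

```python
import numpy as np

def capture_frame(width=8, height=8):
    """Stand-in for the sensor + frame grabber: returns one grayscale frame."""
    frame = np.zeros((height, width), dtype=np.uint8)
    frame[2:6, 2:6] = 200  # a bright "object" in the field of view
    return frame

def interpret(frame, threshold=128):
    """Stand-in for the processing program: decide whether an object is present."""
    return bool((frame > threshold).any())

def act(object_present):
    """Stand-in for the predetermined output or action."""
    return "accept" if object_present else "reject"

frame = capture_frame()
decision = act(interpret(frame))
```

In a real system, `capture_frame` would read from a frame buffer filled by the camera hardware, and `act` might drive a reject gate or log an inspection result.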
The sensors in a machine vision application vary depending on the object and the type of information required about it.
In many cases the sensing is done using cameras. Cameras may operate in the visible spectrum, the range the human eye can see, but special cameras are available that operate in the infrared or ultraviolet ranges. Such cameras can “see” well beyond the spectral range of humans and can capture data that can be processed and displayed in human-readable form. Most images from the Hubble Space Telescope are examples of this. Alternatively, we can program a system to respond to certain data automatically and never produce a human-viewable representation of that data.
There are a couple of important distinctions between machine vision and the human vision system. An electronic sensor does not capture data the way the human eye does: it captures data one frame at a time, and while the frame rate can be very high, it is still one frame at a time. The human vision “system” integrates information over time, whereas a camera “freezes” each frame of information in time.

This frame-centric aspect also allows us to program the interpretation of the information by comparing frames to a reference, or by measuring the difference between one frame and another. This interpretation process is complex; while it comes naturally to humans, programming a system to do the same is challenging. Ultimately, the interpretation is limited by the amount of detail provided by the sensing sub-system. The detail collected is a function not only of the sensor but also of the previously mentioned illumination sub-system. By illuminating an object with known patterns, software can determine details about the object.

Recently, machine vision technology has been enhanced by the development of 3D vision. The implications and value of this will be addressed in a subsequent post.

Learn more about how Keynote Photonics can help you apply DLP technology in machine vision applications by downloading the DLP System Design Overview.
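The frame-comparison idea described above can be sketched in a few lines. This is a minimal, hypothetical example: the frames are synthetic NumPy arrays, and the threshold values are illustrative, not taken from any real inspection system.

```python
import numpy as np

def frames_differ(frame, reference, pixel_threshold=30, count_threshold=5):
    """Flag a frame whose per-pixel difference from a reference frame
    exceeds pixel_threshold in more than count_threshold pixels."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    changed = int((diff > pixel_threshold).sum())
    return changed > count_threshold

reference = np.zeros((8, 8), dtype=np.uint8)  # "golden" frame of a good part
good = reference.copy()                       # matches the reference
flawed = reference.copy()
flawed[1:4, 1:4] = 255                        # a 9-pixel defect

frames_differ(good, reference)    # False: no pixels changed
frames_differ(flawed, reference)  # True: defect exceeds both thresholds
```

Casting to a signed type before subtracting avoids unsigned-integer wraparound, and the two thresholds let small sensor noise pass while larger defects are flagged.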