Machine vision looks more like the human eye

Jan. 19, 2022
Continued advancements push vision systems into new applications

Driving home from work, I was treated to a view that only comes this time of year. Four planets hung in the sky to the southwest, with Jupiter, Saturn, Mercury and Venus joining the moon in the waning twilight, all visible from our home planet, Earth.

Some might call me a bit of a geek, but I find that moments like this give us all a chance to recognize that we are but a small ingredient in the cosmic soup of life that makes up the universe. The clarity with which I could pick out the individual planets with the naked eye was totally amazing.

With the sun just barely below the horizon, our atmosphere was still scattering sunlight down upon us, washing out the thousands of points of light that would soon fill the night sky.

In the right moment, the relative positions of Earth and these other celestial bodies make for a mesmerizing view. After a quick search on the Internet, I was happy to learn that two other planets, Uranus and Neptune, and two asteroids were also in near-perfect alignment in that night sky, hidden only because my eyes could not gather the faint light from those more distant bodies.

By this point, you are probably wondering where the heck I could possibly be going with this subject. Well, all of the critical components that came together to give me such an inspirational view are elements that are necessary to capture a photograph and, by extension, bring a machine vision system to life.

The four base elements of photography are light, color, composition and subject. The most fundamental of these is light. Whether the light is natural or artificial, the direction of the source matters most. In my example above, the view was made most memorable because the time of year puts the sun low on the horizon, making the sliver of moon facing the sun much brighter than the rest of its face.

That time of year also adds more color to the sunset because the low sun's light passes through more of Earth's atmosphere, which scatters away the shorter blue wavelengths and leaves the reds and oranges. The alignment of the other members of our solar system at this time of year provided the necessary composition and, of course, the subject. The image is processed by the eye, which is made up, in part, of a pupil, a lens, a retina and an optic nerve.

The pupil controls the amount of light that reaches the retina, and the lens focuses the image in the field of view onto it; the retina converts that light into signals the optic nerve carries to the brain. Imagine trying to look at something on a sunny day. By squinting, we reduce the amount of light entering the eye, and the pupil constricts as well, narrowing the cone of light the retina receives and thereby sharpening the image.

A camera uses principles similar to the human eye, but the image is captured on a photosensitive medium, such as film or, in modern cameras, a digital image sensor that stores the result as a computer file. The main difference is that the retina and optic nerve are replaced by an image sensor, and the processing is done by a computer chip rather than the human brain.

A properly focused subject, an appropriate light source and control of the amount of light, using an aperture instead of a pupil, yield an image that the sensor can capture and hand off to the processor. Upon these principles, a vision system is born.

The main difference between a photograph and a vision system is what is done with the image after it is captured. The major components of a vision system start with lighting, lens, sensor—like a camera—and add processing of the image and communication of the result. The vision system evaluates the image to extract information related to the subject, and that information is used to determine if the process that created the object is operating within acceptable parameters.
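
As a rough illustration of that chain, here is a minimal sketch in Python using the OpenCV library. The camera index, the stand-in brightness check and the printed result are placeholders for the lighting, processing tools and network communication of a real system.

```python
# Illustrative sketch of the capture -> evaluate -> communicate chain,
# assuming Python with OpenCV and a camera at index 0 (both assumptions).
import cv2

def evaluate(frame) -> bool:
    """Stand-in for the image-processing step; a real tool would
    measure features of the subject and compare them to limits."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() > 60  # hypothetical brightness check only

def report(passed: bool) -> None:
    """Stand-in for the communication step; a real system would send
    the result to a controller over an industrial network."""
    print("PASS" if passed else "FAIL")

camera = cv2.VideoCapture(0)   # lighting and lens are handled outside the code
ok, frame = camera.read()      # capture a single image
if ok:
    report(evaluate(frame))
camera.release()
```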

An example of an inspection might be identifying excess plastic, called flashing, on a molded medical tool, or spotting flaws in the construction of a metal fastener. Cracks or imperfect fill during manufacturing can lead to failure of the product while in use.

Vision systems have advanced dramatically over the years. Early systems had few tools—electronic evaluation methods—with which to make decisions. Images were black and white. The most common tool was pixel counting.

Pixels are the base elements in the computer representation of a captured image. They are, at heart, small dots in close proximity to other small dots, ranging from white to black through the many shades of gray in between. Viewed close up, the pixels look like tiny black, gray and white dots; viewed from farther away, the individual dots become less discernible and the image begins to appear. A pixel-counting tool relies on the subject having a predetermined number of black or white pixels.

If a piece of the subject is missing, the number of pixels will vary from the standard, acceptable count. Other tools might use contrast, the ratio of white pixels to black pixels, or look for a particular feature and then use it to locate other features relative to the first.
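
A pixel-counting check of that sort can be sketched in just a few lines. The example below is illustrative only, again assuming Python with OpenCV; the file name, threshold and acceptance limits are invented values, not settings from any particular system.

```python
# Hypothetical pixel-counting inspection: count bright pixels in a
# thresholded image and compare the total against an expected range.
import cv2

EXPECTED_MIN = 45_000   # assumed acceptance limits for a known-good part
EXPECTED_MAX = 55_000

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image file
if image is None:
    raise FileNotFoundError("part.png not found")

_, binary = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)

white = cv2.countNonZero(binary)          # pixel count of the bright subject
black = binary.size - white               # remaining dark background pixels
contrast_ratio = white / max(black, 1)    # simple white-to-black contrast figure

passed = EXPECTED_MIN <= white <= EXPECTED_MAX
print(f"white={white} black={black} ratio={contrast_ratio:.2f} -> "
      f"{'PASS' if passed else 'FAIL'}")
```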

Twenty years ago, I worked for a company that developed its own proprietary vision system. That system used higher-resolution cameras, with more pixels in the field of view, and many software tools to enhance the ability to electronically examine the subject matter.

The system could capture many images during the inspection window, a rate measured in frames per second, and a high-end processor could make very quick decisions based on multiple images of the same object taken in rapid succession.

These early vision systems were expensive, and only a highly trained person could effectively use the system.

Systems today use high-resolution cameras, some capable of 3D-image evaluation. Images can be in color, adding a multitude of tools that make use of color differentiation and texture to better evaluate the subject.

Object detection, edge detection, edge inspection, measurement, pattern matching, 2-D and 3-D identification and optical character recognition (OCR) are also put to use. Once the realm of proprietary technology, a vision system can now be just a component of a larger control system, making decisions independently of the broader system and communicating the results using the latest in network technology.
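
Two of those tools, edge detection and pattern matching, can be approximated with off-the-shelf functions. The sketch below again assumes Python with OpenCV; the image files, Canny thresholds and match limit are hypothetical.

```python
# Edge detection and template (pattern) matching with OpenCV.
import cv2

scene = cv2.imread("inspection_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical capture
pattern = cv2.imread("golden_feature.png", cv2.IMREAD_GRAYSCALE)   # known-good feature

# Edge detection: highlight the outlines used for measurement or edge inspection.
edges = cv2.Canny(scene, 50, 150)
print("edge pixels found:", cv2.countNonZero(edges))

# Pattern matching: find where the known-good feature appears in the scene
# and how closely it matches (1.0 would be a perfect correlation).
result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(result)
print(f"best match {best_score:.2f} at {best_location}",
      "-> PASS" if best_score > 0.8 else "-> FAIL")   # 0.8 is an assumed limit
```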

Like everything else in the field of control design, the systems keep getting smaller and more efficient. Happily, the cost of vision inspection has also come down to the point where a vision system is a reasonable tool in a system design and is no longer restricted to companies with the deepest pockets. Lower cost per unit means more inspection systems can be deployed for the same investment.

The greatest impact that vision has had on our industry is in quality control. As machines produce at greater speeds, the limiting factor has been the ability to control the quality of the product. Traditionally, samples would be removed from the product stream and inspected manually by technicians using what now seem like primitive methods, such as visual inspection and hand measurement with calipers and micrometers.

If a quality check fails, hundreds or thousands of products might have to be scrapped to cover the risk accumulated since the last passing test. Now, vision systems exist that can do all of the quality-assurance testing while the product is still in the stream at normal production rates.

Product safety is a highly visible subject, and there’s a need to assure that only properly sealed products are released to the public. On the top of this list are medical products.

Hermetic package seals are a critical public-safety concern, never more so than in the middle of a pandemic. Vision systems make verifying them possible with a high degree of confidence in the results.

Another area of concern is traceability. Manufacturers print lot codes and other identifying information on packages to provide an accurate way to keep track of where a product was produced, when it was produced and the path it takes after it leaves the production line.

Vision systems are used to verify that the coding equipment is printing the right information on the package. By using OCR, a vision system can compare the printed information against the information the code printer was given and stop the process if a discrepancy is noted. A discrepancy could be unreadable characters, as well as missing or incorrect information.
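
As a rough sketch of that comparison, an open-source OCR engine can read the printed code back from an image and check it against the string sent to the printer. This assumes Python with OpenCV and the pytesseract wrapper around Tesseract; the lot code, file name and settings are invented for illustration and do not represent any vendor's tooling.

```python
# Read a printed lot code from a package image and compare it to what
# the code printer was told to print (both values are hypothetical).
import cv2
import pytesseract   # requires the Tesseract OCR engine to be installed

EXPECTED_CODE = "LOT 2024-117 EXP 2026-05"   # assumed string sent to the printer

image = cv2.imread("package_label.png", cv2.IMREAD_GRAYSCALE)  # hypothetical capture
_, clean = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 7 tells Tesseract to treat the region as a single line of text.
printed = pytesseract.image_to_string(clean, config="--psm 7").strip()

if printed != EXPECTED_CODE:
    print(f"FAIL: printed '{printed}', expected '{EXPECTED_CODE}' -> stop the line")
else:
    print("PASS: lot code verified")
```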

The list of applications for a vision system is ever increasing. As the technology develops, the selection of measurement tools grows with it. 3-D imaging is evolving, making processes such as machine tooling more accurate and faster.

Where earlier systems would take a single image, or multiple instances of the same image, to evaluate features on a single face, 3-D imaging allows not only pattern-quality checks, but also checks of the relationship between multiple faces on a machined piece. The information gathered by the vision system can be used to make minute adjustments to the machining process to produce a higher-quality piece.
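
A small sketch of that kind of multi-face check follows, assuming a depth map already acquired from a 3-D camera and saved as a NumPy array of millimeter distances; the pixel regions, nominal dimension and tolerance are invented for illustration.

```python
# Measure the step height between two faces of a machined part using a
# depth map (distance to the camera per pixel, in millimeters).
import numpy as np

depth_mm = np.load("part_depth.npy")   # hypothetical 3-D camera output

# Assumed pixel regions covering each face of the part.
face_a = depth_mm[100:200, 100:300]
face_b = depth_mm[300:400, 100:300]

step_height = abs(float(np.median(face_a)) - float(np.median(face_b)))
NOMINAL_MM, TOLERANCE_MM = 5.00, 0.05   # assumed drawing dimension and tolerance

deviation = step_height - NOMINAL_MM
print(f"step height {step_height:.3f} mm, deviation {deviation:+.3f} mm ->",
      "PASS" if abs(deviation) <= TOLERANCE_MM else "ADJUST TOOL OFFSET")
```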

As a control designer who develops program algorithms, one of my favorite features of newer vision systems is the availability of module profiles and add-on instructions (AOIs) that are developed by the vision system manufacturer to work with current-generation programmable controllers.

This important feature makes it even easier to add vision to a control system. These programming tools are very user-friendly and further promote the consideration of using vision.
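
The AOIs and module profiles themselves live in the controller's programming software, but as a companion sketch of how result tags exposed by such an integration might be read for logging, here is a snippet using the open-source pycomm3 library; the controller address and tag names are assumptions, not any vendor's actual interface.

```python
# Companion sketch: reading hypothetical result tags that a vision AOI might
# expose in a Logix-style controller, using the open-source pycomm3 library.
from pycomm3 import LogixDriver

PLC_ADDRESS = "192.168.1.10"   # assumed controller IP address

with LogixDriver(PLC_ADDRESS) as plc:
    # Tag names below are invented for illustration; a real AOI defines its own.
    result = plc.read("Camera01.InspectionPassed")
    count = plc.read("Camera01.RejectCount")
    print(f"inspection passed: {result.value}, rejects so far: {count.value}")
```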

Vision systems are here to stay and will become more important in manufacturing processes in the immediate future. Capabilities will increase, and new systems will bring more value to the investment. With vision, there is more than what the eye can see.

About the author

Rick Rice | Contributing Editor

Rick Rice is a controls engineer at Crest Foods, a dry-foods manufacturing and packaging company in Ashton, Illinois. With more than 30 years of experience in the field of automation, Rice has designed and programmed everything from automotive assembly lines and robots to palletizing and depalletizing equipment, conveyors and forming machines for the plastics industry, but most of his career has been spent with OEMs in the packaging machinery industry, with a focus on R&D for custom applications.
