Vision systems can be divided into two separate but related technologies: machine vision and computer vision. Both aim to mimic the function of the human eye but differ in their objectives and in how they collect and interpret information.
Both start with a captured image or images. Machine vision might be described as an inspection tool with a specific goal, while computer vision is better described as extracting as much information as possible from an image and other related sources. Machine vision depends solely on its own images; computer vision can combine images from multiple sources to make a determination.
Machine vision, like other aspects of our industry, has evolved as technology has evolved. Some of my earliest exposure to vision systems was in the manufacture of medical devices. The cameras of that era had much lower resolution and relied on stationary or very slowly moving processes to capture the image, evaluate the information and make a decision.
One such application inspected a device used during childbirth, produced on an injection-molding machine. The inspection system verified that the features were completely formed and that no flash was left over from the molding process. In that early example, we were simply counting pixels to determine whether too little or too much material was present on the inspected device.
Inspection methods are called "tools" in machine-vision systems. The toolsets of those early systems were rudimentary at best. Cameras captured images in black and white, and with relatively low resolution, the tools were limited. We could check for the presence or absence of an object or feature, detect protrusions and look for a definable edge. With a reliable, consistent background, we could binarize the image and count pixels: a known-good test piece established the expected count, and a subject piece passed if its count fell within a certain range of that reference.
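As a rough illustration of that pixel-counting approach, here is a minimal sketch using the open-source OpenCV library. The threshold and the accepted pixel range are hypothetical values; in practice they would come from imaging known-good parts.

```python
import cv2
import numpy as np

# Accepted pixel-count range, established from known-good samples (assumed values).
GOOD_MIN, GOOD_MAX = 41_000, 45_000

def inspect(image_path: str) -> bool:
    """Return True if the part's pixel count falls in the good range."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Separate the part from a consistent, high-contrast background.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    part_pixels = int(np.count_nonzero(binary))
    # Too few pixels suggests a missing feature; too many suggests flash.
    return GOOD_MIN <= part_pixels <= GOOD_MAX

if __name__ == "__main__":
    print("PASS" if inspect("part.png") else "FAIL")
```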
Another limitation of these early inspection tools was the need to accurately and repeatably locate the subject piece within the camera's field of view. Over the years, more sophisticated algorithms have eased this burden by first locating and orienting the object within the field of view and then performing the inspection tasks on that normalized image.
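A minimal sketch of that locate-then-inspect pattern, again with OpenCV, assuming the part is the largest bright blob on a dark background; the threshold value is illustrative only.

```python
import cv2
import numpy as np

def locate_and_normalize(gray: np.ndarray) -> np.ndarray:
    """Find the part, then rotate the image so it is always presented the same way."""
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)        # largest blob = the part
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)  # center, size, rotation
    # Rotate about the part's center so the downstream inspection tools
    # always see the object in a consistent orientation.
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
```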
One of the major improvements in vision systems has been the size of the package. The first systems I worked with consisted of a camera on a mount, an independent lighting system, a controlled environment (essentially a box around the inspection area) and a huge box that housed the smarts of the camera system.
Long cables connected the controller to the camera and lights. The controller was essentially an industrial PC. Specialty daughter boards provided the connection to the camera and peripheral devices. The software was proprietary and required a high degree of training to get the desired results.
While some high-end inspection systems still use a version of this architecture, most vision systems now combine lighting, camera and controller/interface in one package that fits in the palm of your hand. The system communicates via popular industrial protocols, such as EtherNet/IP or Profibus. Setup is accomplished via a laptop running vendor-supplied software. Recently, some vendors have begun offering systems where the setup software resides on the camera itself; a simple browser connection to the camera is all that is needed to configure the system for operation.
One huge improvement in machine vision has been the real-world application of artificial intelligence (AI) to the inspection toolset. Traditionally, rules-based tools are used to make decisions: object location, bead and edge detection, measurement tools, histogram and image-processing tools. More recently, texture- and color-based tools have been developed to complement the base tools.
The introduction of AI tools further extends the rules-based tools by adding the ability to make determinations that would otherwise be far too complicated. These so-called deep-learning tools start with the introduction of many example images of good and bad product. The larger the sample base, the more reliable the results.
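As an illustration only, and not any vendor's actual tool, a good/bad deep-learning classifier can be sketched in a few lines with an off-the-shelf framework. The folder layout, image size and training settings below are all assumptions.

```python
import tensorflow as tf

# Labeled example images in "samples/good/" and "samples/bad/" (assumed layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples/",
    label_mode="binary",
    image_size=(224, 224),
    batch_size=32,
)

# Reuse a pretrained backbone; only the small decision head is trained.
base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3)
)
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects [-1, 1]
    base,
    # Labels are assigned alphabetically (bad=0, good=1), so the
    # sigmoid output is the probability the part is good.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```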
One such application of AI tools for my team was to explore alternate means of checking for open flaps on a carton. We would traditionally use a series of photo eyes located at strategic places on an exit conveyor to capture open major and minor flaps on a carton.
Cartons have minor flaps that fold down first; the major, or longer, flaps then fold down over the minor ones, sealed by a bead of glue applied before the final flap is folded. Periodically, a minor flap gets missed or improperly folded and sticks completely out of the carton. Alternately, the minor flap may be only partially folded, leaving a portion sticking out of the finished package.
Similarly, the final major flap may not receive a good glue application and will not glue down completely to the opposing major flap to seal the package. By applying an array of photo sensors in a tight formation around a carton as it exits the packaging machine, we attempt to catch any of these poorly sealed cartons. This is a tedious setup, and it often misses bad packages.
Our application further complicated the matter by having cartons that were filled and sealed in a vertical orientation. After the final glue station, the cartons transition from vertical to horizontal to continue down the packaging line.
This is accomplished by simply knocking the carton over after it exits the machine. Unlike a horizontal cartoner, where the package's position is absolutely controlled as it exits the machine, the vertical carton's final position on the exit conveyor is somewhat random: it may not be square to the conveyor, so a conventional photo-eye array will not work.
Choosing a vision system with advanced AI tools provided us with an answer. We started with a single camera looking straight down, but this only gave us profiles of the cartons. We could run multiple passes of good and bad cartons, but we were still limited to two-dimensional, profile views of the samples.
Adding more cameras, some mounted at angles rather than straight down, provided a much larger database of good and bad product profiles. The system was taught by passing a package through the field of view and then telling it whether the product was acceptable. We could pass the same product through at different angles of skew, as well as flip the product over to present an opposite view of the same defect. The system learns from each pass, building a larger sample base upon which to make decisions. This deep-learning approach was key to finding a good solution for our application.
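The skewing and flipping we did physically is, in software terms, data augmentation: multiplying the effective sample base from each labeled image. A hedged sketch of the same idea with off-the-shelf augmentation layers, with transform ranges chosen purely for illustration:

```python
import tensorflow as tf

# Each transform mirrors a physical variation we presented to the camera.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),     # opposite view of the same defect
    tf.keras.layers.RandomRotation(0.05),         # carton skewed on the conveyor
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # random final resting position
])

# Applied during training (e.g., to the train_ds from the earlier sketch),
# every epoch effectively sees a fresh set of variations:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```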
Machine vision does more than just parts inspection. The ever-expanding use of robots is further enhanced by vision systems. Packaging machines, for example, use a vision system to indicate not only the presence of a package, but the orientation as well.
This is especially important in applications where we need to pick up the object at its geometric center and also accurately orient it for proper placement in the finished container. Identifying each object precisely as it moves down a conveyor and commanding the robot to pick it up in the correct position and orientation is extremely important, and vision systems are more than up to the task.
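As a rough sketch of how such a pick pose might be computed, assuming a calibrated camera and an already-binarized image, the geometric center can come from image moments and the orientation from a minimum-area rectangle; converting pixels to robot coordinates is assumed to happen elsewhere.

```python
import cv2
import numpy as np

def pick_pose(binary: np.ndarray) -> tuple[float, float, float]:
    """Return (x, y, angle_deg) of the largest object in a binary image."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    obj = max(contours, key=cv2.contourArea)
    m = cv2.moments(obj)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # geometric center, in pixels
    _, _, angle = cv2.minAreaRect(obj)                  # package rotation, in degrees
    return cx, cy, angle
```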
Like any camera application, lighting and background are critical to the overall success of the process. Advances in lighting types and the use of appropriate filters, chosen for the characteristics of the product being inspected, are additional considerations when applying machine vision to a project.
Machine-vision applications will continue to grow, and smaller packages coupled with ease of use will make them even more attractive in the years to come. Costs continue to come down, making this element of a control system even more desirable in future designs. Many manufacturers provide add-on profiles that work with your favorite programmable controllers, making integration into the final control package that much easier.