
Machine Vision System Applications (Metrology)

23.6. Machine vision systems can be used to replace human vision in applications such as welding, machining (to ensure that the correct relationship is maintained between tool and workpiece), and assembly of parts (to analyse the position of parts so that other parts can be correctly aligned for insertion or some other form of mating). Machine vision systems are frequently used for printed circuit board inspection, to verify minimum conductor width, spacing between conductors and many other features. They are also used for weld seam tracking, robot guidance and control, inspection of microelectronic devices and tooling, on-line inspection in machining operations, on-line inspection of assemblies, monitoring of high-speed packaging equipment, etc. The ability of an automated vision system to recognise well-defined patterns, and to determine whether these match those stored in memory, makes it ideal for the inspection of parts and assemblies, both quantitatively (dimensions, number of parts, number of holes, etc.) and qualitatively (defects, missing parts, misaligned parts, etc.). Machine vision systems in place of human inspectors are able to perform 100% on-line inspection of parts at high speeds, particularly in unpleasant environments. Without tiring or becoming bored, they can operate continuously with consistent results over prolonged periods of time. Another important application of machine vision is the recognition of an object from its image (part identification). Such systems are designed to have strong geometric feature interpretation capabilities and are used in conjunction with part-handling equipment such as robots. Machine vision, or Computer Aided Inspection (CAI), is slowly finding its place in on-line inspection, in line with the introduction of Computer Aided Machining (CAM). Machine vision is defined as a technique which allows a sensor to view a scene and derive a numerical or logical decision without further human intervention.
Machine vision systems have the advantages of consistency, high speed and reliability. The fundamental principle on which machine vision is based is that of digital image processing using a TV camera. The output of most TV cameras is a continuous video signal corresponding to a line-by-line scan of the image focused on the face of the camera tube. The signal is sampled at equally spaced intervals along each line, and a digital value is assigned to the magnitude of the signal at those points. Line by line, a sampled image is constructed within the image-processing computer as a two-dimensional array. The sampling is analogous to laying a 2-D grid over the image of the scene and assigning numbers to the intensities within each element of the grid. The number of rows and columns comprising the grid is limited by both the sensor and the processor memory, and the range of numbers used to describe the intensities is likewise dependent on these two factors. The number of picture elements in the grid (called pixels or pels) determines the spatial resolution of the stored image, and the range of numbers used to describe the brightness determines the intensity resolution. Reducing the spatial and intensity resolution decreases the amount of information held within the image. The design of vision systems requires a multi-disciplinary approach, with knowledge of sensors, optics, illumination and mechanical handling, as well as computer image processing techniques.

A machine vision system does not actually have to produce a recognisable image as part of its processing, as all the work is done on an electronic representation of the image. Vision systems can be used for checking dimensions, checking overall shape conformity and checking surface finish. With machine vision systems, it is quite possible to measure dimensions automatically to a fraction of a micron, using an image analyser and conventional microscope optics. Operator judgement about where to define an edge can be completely eliminated, and highly reproducible measurements can be obtained. Checking overall shape conformity is done by checking the component against a master set of information and looking for discrepancies. The master information is a set of geometric measurements such as area, perimeter, number of holes, amount of thinning of the image possible before it breaks up, etc. Thus the component need not be exactly aligned with the master, as in conventional methods, and the handling requirements are also much simplified. For checking surface finish by machine vision, the surface is treated as a kind of mirror (though it may not be a perfect reflector). The vision system looks at an image of a light source reflected in the surface, and changes to that image signify a defective area. It is also possible to look for defects directly, if the defects can be relied upon to have a distinct contrast with the correct surface. The vision sensor can be mounted on a robot arm for inspecting complicated or large surfaces automatically. The absolute resolution of the system depends upon the choice of optical lens system. The various types of image acquisition systems for different resolutions are: thermionic TV cameras, solid-state two-dimensional array cameras, and linear array cameras. Thermionic TV tubes have the problem of drifting, especially during the first half hour or so after switching on, and under the influence of electromagnetic fields.
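The sampling and quantisation described above can be sketched in a few lines of NumPy. This is an illustrative example only: the scene data, grid size and number of grey levels are arbitrary assumptions, not values from the text.

```python
import numpy as np

# Illustrative "analog" scene: an 8 x 8 intensity ramp with values 0.0-1.0,
# standing in for the camera's continuous video signal.
scene = np.linspace(0.0, 1.0, 64).reshape(8, 8)

def digitise(image, levels):
    """Quantise intensities to a fixed number of grey levels,
    as the frame-grabber's A/D converter would."""
    q = np.floor(image * levels).astype(int)
    return np.clip(q, 0, levels - 1)

gray6 = digitise(scene, 64)   # 6-bit intensity resolution: 64 grey levels
binary = digitise(scene, 2)   # 1-bit intensity resolution: black/white only

# Reducing spatial resolution: keep every second row and column of the grid.
coarse = gray6[::2, ::2]

print(gray6.shape, coarse.shape, gray6.max(), binary.max())
```

Dropping from 64 grey levels to 2, or from an 8 x 8 grid to 4 x 4, visibly discards image information, which is exactly the trade-off between resolution and sensor/memory cost described above.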
However, they are better at distinguishing small changes of tone than any solid-state camera. Solid-state sensors are preferred in general because they retain the spatial relationship between pixels (the smallest measurable unit, the picture element) far more accurately. Solid-state sensor systems can be calibrated for linearity of sensor and optics on a one-time basis, allowing automatic correction of all subsequent measurements. The sensors in solid-state cameras consist of an array of light-sensitive cells fabricated on a single silicon chip. These sensors are rugged, small and stable, and are thus usually fixed at the end of a robot arm. Solid-state cameras are available with resolutions approaching those of CRT television camera devices (576 × 384 elements). In many cases, cameras embodying these devices give out digital data directly. Solid-state sensors are also available as linear arrays, in which case much higher resolutions (up to 4096 elements on one chip) are available. These are especially useful for applications requiring high precision, such as dimensional checking. Laser scanners can also be used. They comprise an arrangement of rotating mirrors which scan a laser beam spot rapidly over an object. The beam reflected from the object is then directed onto a single photo-sensor, the output giving an indication of the surface reflectance at that point. Where it is required to inspect a fast-moving part, or to capture a transient event, it is possible to use an image sensor to freeze the motion. This is done by applying a short pulse of light and thereby storing the image as a charge pattern on the sensor. The image may then be read, at a slower rate, from the sensor into the computer.

For measuring the length of a shaft very accurately, linear sensors are used at each end of the shaft with a known dead space in between. If the component has several important dimensions to be checked in between as well (i.e. not only the end dimensions), then the vision system is integrated with a well-engineered stepping table. The specimen is then viewed in a number of steps which can be programmed to produce precisely abutting fields of view. The image-processing computer is usually a conventional mini- or micro-computer with the addition of a frame-grabber and a frame store. The frame-grabber is a fast A/D converter which converts the analogue voltage levels of the camera into the digital words needed by the computer. The frame store has enough memory to hold the entire image. Vision systems as such suffer from a drift problem due to changes in lighting. This is taken care of by compensating for varying ambient light, using a circuit which measures the peak white output of the sensor and adjusts the artificial light to maintain a constant peak white output.

The various vision systems are binary vision, grey-scale vision and three-dimensional vision. The binary vision technique involves location and shape analysis of flat objects. The object is backlighted to produce a high-contrast image, which is then converted to a binary (black-and-white) image by thresholding. Several methods are then available to determine the position, orientation and shape of the object. The binary image can be thinned to produce a skeleton of the object, which may then be analysed to identify links, nodes and end points. The position, size and number of these can be used to identify shape. Often, simple parameters such as area and perimeter can be used as simple shape descriptors.
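The binary-vision steps just described — backlighting, thresholding and simple shape descriptors — can be sketched as below. The image contents, threshold value and object size are all hypothetical; the thresholding and area/perimeter logic is the point.

```python
import numpy as np

# Hypothetical backlit image: bright background (200) with a dark
# rectangular part (30) silhouetted against it.
img = np.full((10, 10), 200, dtype=np.uint8)
img[3:7, 2:8] = 30

# Threshold to a binary image: 1 = object, 0 = background.
binary = (img < 128).astype(int)

# Simple shape descriptors.
area = int(binary.sum())  # number of object pixels

# Perimeter: object pixels with at least one 4-connected background
# neighbour. Pad with background so edge arithmetic stays in bounds.
padded = np.pad(binary, 1)
neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
              padded[1:-1, :-2] + padded[1:-1, 2:])
perimeter = int(((binary == 1) & (neighbours < 4)).sum())

print(area, perimeter)  # 24 object pixels, 16 of them on the boundary
```

Descriptors this cheap to compute are why area and perimeter are attractive for high-speed part checking: comparing two integers against a master set is far faster than aligning and comparing whole images.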
By forming the convex hull of an object (equivalent to stretching a rubber band around it), the concavities or bays in the object may be isolated and labelled. A description of the number and size of these may be used as a crude description of shape. To establish the position and orientation of an object, the centroid may easily be computed, and the axis of least (or greatest) moment of inertia can then define the orientation. A series of concentric circles centred on the centroid of the object may be used to define shape: the intersections of the circles with the object's perimeter provide a number of (r, θ) pairs which can then be correlated with a reference set of (r, θ) pairs.

In grey-scale vision, two-dimensional digital signal processing techniques are used to enhance and then extract the relevant features from the image. Grey-scale operations include contrast enhancement, spatial filtering (to remove spot noise, etc.), edge enhancement (to highlight areas of rapid intensity change) and texture analysis.

The third dimension in a machine vision system can be measured by the structured-light principle, through a process of triangulation. When a light pattern such as dots, bars or grids is projected onto an object, the pattern becomes deformed by the height contours of the object. A standard camera is used to sense these deformations in the projected pattern, and a computer interprets them as height information. A single light strip is projected from a source inclined at an angle to a conveyor belt and spans its width. Objects passing underneath cause, by their height profiles, excursions in the strip when viewed from another direction. A range map is constructed from consecutive pictures of the deformed light line as the object moves along the conveyor and under the light pattern. The intensity in the picture corresponds to height information, and there is enough information held in the map for a surface plot to reconstitute the original shape. Machine vision can also carry out inspection tasks which are not practical for humans to carry out at speed, such as dimensional checking of hot metal.
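The centroid-and-moments method for establishing position and orientation can be sketched as follows. This is a minimal example on a made-up binary image of an elongated bar (the 2 × 6 shape is an assumption for illustration); the first moments give the centroid and the second central moments give the axis of least inertia.

```python
import numpy as np

# Hypothetical binary image containing a horizontal 2 x 6 bar.
img = np.zeros((10, 10), dtype=int)
img[4:6, 2:8] = 1

ys, xs = np.nonzero(img)  # coordinates of all object pixels

# Position: centroid from the first moments.
cx, cy = xs.mean(), ys.mean()

# Orientation: second central moments define the principal axis
# (the axis of least moment of inertia).
mu20 = ((xs - cx) ** 2).sum()
mu02 = ((ys - cy) ** 2).sum()
mu11 = ((xs - cx) * (ys - cy)).sum()
theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # radians

print(cx, cy, np.degrees(theta))  # bar is centred at (4.5, 4.5), horizontal
```

Because the bar here is horizontal and symmetric, the cross moment mu11 vanishes and theta comes out as 0; rotating the part would rotate the computed axis with it, which is what makes the measure useful for guiding a robot gripper.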

Vision Systems.
Machine vision, based on the processing and interpretation of electro-optical (television) images, is being introduced into practice for a number of diverse programmable automation applications such as inspection, material handling and robot assembly. Vision machines are designed to recognise or identify workpieces and their stable states, and to determine their positions and orientations. A vision system consists of a light source, an image sensor, an image digitiser, a feature extractor/data compactor, a system control computer, and output and peripheral devices (refer Fig. 17.33). The image sensor is usually either a vidicon TV camera, a solid-state charge-coupled device camera, or a charge-injection device camera. The image digitiser is usually a very fast, six-to-eight-bit analog-to-digital converter which stores the digitised image in the main memory.

Fig. 17.33. Vision system.

Some systems use an analog comparator and a computer-controlled threshold to convert the video information to a binary image rather than a 6-to-8-bit grey-scale image. In some systems, the grey-scale image (stored earlier) is converted to a binary image before any image processing takes place. The feature extractor/data compactor provides high-speed processing of the input image data. Pattern recognition algorithms are employed to generate a simple feature data set. The system control computer makes the decisions about the part being inspected.

Two basic vision systems in use are edge finders and correlators. Edge finders look for transitions from black to white, white to black, or from grey to non-grey level. When such a transition occurs, the edge-finder system notes where it occurs (by line in the vertical direction, and by clock count in the horizontal direction). When a second transition occurs, it is also noted, along with the number of picture elements (pixels) between the transitions. If a measurement is desired, the system will have been calibrated (the size of a pixel determined) and the physical distance between the two transitions can be computed.

Correlation systems are template-matching systems. A complex image is stored in memory, and the correlator searches the scene for a match to the stored image. The method may be described by imagining a clear plastic template with an image of the object printed on it: to locate the object, the template is moved until the printed image is aligned with the actual object. Using several templates, many objects can be identified and located. In a digital system the template is stored in memory as a two-dimensional matrix, referred to as the reference. When a frame of video is loaded from the camera, the system overlays the reference on the upper left corner of the video and calculates the number of matches. This process, referred to as correlation, continues until the reference has been compared with the video throughout the field of view; the x, y coordinates where the best match occurs are then output. If there is no acceptable match anywhere in the field of view, another reference may be used and the process repeated. This continues until the object is identified and located.

Some vision systems employ light striping. These systems use stripes of light projected onto an object with a laser or lamp. The distance of the object is determined by triangulation, and the object's shape is determined from the distortion of the light stripes.
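The edge-finder measurement described above — noting two transitions along a scan line, counting the pixels between them, and applying a calibration factor — might be sketched like this. The scan-line data, threshold and pixel size are all assumed values chosen for illustration.

```python
# Hypothetical single scan line from a camera: 0 = black, 255 = white.
scan_line = [0, 0, 0, 255, 255, 255, 255, 255, 255, 0, 0]

THRESHOLD = 128        # grey level separating black from white
PIXEL_SIZE_MM = 0.05   # assumed calibration: one pixel spans 0.05 mm

# Record each index where the signal crosses the threshold (an "edge").
edges = [i for i in range(1, len(scan_line))
         if (scan_line[i - 1] < THRESHOLD) != (scan_line[i] < THRESHOLD)]

# Pixel count between the first two transitions, converted to millimetres
# using the calibration factor.
width_px = edges[1] - edges[0]
width_mm = width_px * PIXEL_SIZE_MM
print(edges, width_px, width_mm)  # edges at 3 and 9 -> 6 px -> 0.3 mm
```

Note that the measurement is only as good as the calibration: the computed 0.3 mm follows directly from the assumed pixel size, which in a real system would be established by imaging a reference artefact of known length.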

How Machine Vision System Functions (Metrology)

23.4. Fig. 23.5 shows a general block diagram of a vision system. Lighting and presentation of the object to be evaluated is a very important task in implementing a vision system; it has a great impact on system repeatability, reliability and accuracy. The lighting source and projection should be chosen so as to accentuate the key features of the object and give sharp contrast and detail in the image. Specular reflections should be avoided by using small-angle lighting and other techniques which provide diffused reflection. The image sensor usually comprises a TV camera, which may be a vidicon TV camera, which has greater resolution and is low in cost, or a solid-state camera (charge-coupled device, CCD, or charge-injection device, CID). Solid-state cameras have greater geometric accuracy, no image lag and a longer life. The image digitiser is usually a six-to-eight-bit analog-to-digital (A/D) converter designed to keep up with the flow of video information from the camera and store the digitised image in memory. For simple processing, an analog comparator and a computer-controlled threshold are used to convert the video information to a binary image. Binary images (having only two values for each pixel) are much simpler and facilitate high-speed processing. However, grey-scale images contain a great deal more picture information and must be used for complex images with subtle variations of grey level across the image.

The feature extractor/data compactor employs a high-speed array processor to provide very high-speed processing of the input image data. To generate a relatively simple feature data set, pattern recognition algorithms need to be implemented. The system control computer communicates with the operator and makes decisions about the part being inspected. These decisions are usually based on some simple operations applied to the feature data set representing the original image. The output and peripheral devices operate under the control of the system control computer. The output enables the vision system either to control a process or to provide location and orientation information to a robot, etc.