394 MECHATRONICS

6.12 VISION SYSTEMS

Vision systems, also called computer vision or machine vision, are general-purpose sensors. They are called the "smart sensors" in industry since what is sensed by a vision system depends entirely on the image processing software. A typical sensor is used to measure a single variable, such as temperature, pressure, or length. A vision system can be used to measure shape, orientation, area, defects, differences between parts, and so forth. Vision technology has improved significantly over the past 20 years, to the extent that vision systems are now rather standard "smart sensing components" in most factory automation systems for part inspection and location detection. Their lower cost makes them increasingly attractive for use in automated processes. Furthermore, vision systems are now standard in mobile equipment safety systems to detect obstacles and avoid collisions, especially when used with radar-based obstacle detection systems.

There are three main components of a vision system (Figures 6.67, 6.68):

1. vision camera: the sensor head, made of a photosensitive device array such as a charge coupled device (CCD) and an optical lens,
2. image processing computer hardware (converts the CCD voltage to digital data) and software to process the image,
3. lighting system.

The basic principle of operation of a vision system is shown in Figure 6.67. The vision system forms an image by measuring the reflected light from objects in its field of view. The rays of light from a source (i.e., ambient light or structured light) strike the objects in the field of view of the camera. Part of the light reflected from the objects reaches the sensor head. The reflected light may be passed through an optical lens and then on to the CCD.
The sensor head is made of an array of photosensitive solid-state devices such as photodiodes or charge coupled devices (CCD), where the output voltage at each element is proportional to the time integral of the light intensity received.

FIGURE 6.67: Different hardware packages of vision systems: sensor and DSP at the same physical location (stand-alone vision system), or sensor head and DSP (i.e., a PC bus card in a host PC) at different physical locations, with the digital data transferred from the sensor head to the DSP over a high-speed communication interface.

FIGURE 6.68: Components and functions of a vision system.

The sensor array is a finite number of CCD elements in a line (i.e., 512 elements, 1024 elements, etc.) for the so-called line-scan cameras, or a finite two-dimensional array (i.e., 512 x 512, 640 x 640, 1024 x 1024), as shown in Figure 6.69. A field of view in real-world coordinates with dimensions [x_f, y_f] is mapped to the [n_x, n_y] discrete sensor elements. Each sensor element is called a pixel. The spatial resolution of the camera, that is, the smallest length dimension it can sense in the x and y directions, is determined by the number of pixels in the sensor array and the field of view that the camera is focused on,

    Delta_x_r = x_f / n_x        (6.188)
    Delta_y_r = y_f / n_y        (6.189)

where Delta_x_r, Delta_y_r are the smallest dimensions in the x and y directions that the vision system can measure. Clearly, the larger the number of pixels, the better the resolution of the vision system.
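The spatial-resolution relations (6.188) and (6.189) can be sketched as a short helper; the function name and the example field-of-view numbers below are illustrative, not from the text.

```python
# Spatial resolution of a vision camera per Eqs. (6.188)-(6.189):
# the field of view [x_f, y_f] is divided among [n_x, n_y] pixels.
def spatial_resolution(x_f, y_f, n_x, n_y):
    """Return (dx_r, dy_r): the smallest measurable lengths in x and y."""
    return x_f / n_x, y_f / n_y

# Example: a 640 x 480 sensor focused on a 100 mm x 75 mm field of view.
dx, dy = spatial_resolution(100.0, 75.0, 640, 480)
print(dx, dy)  # 0.15625 mm/pixel in both directions
```

Note that refocusing the same sensor on a larger field of view increases x_f and y_f while n_x, n_y stay fixed, so the resolution gets coarser, which is the trade-off discussed next.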
A camera with a variable-focus lens can be focused on different fields of view by adjusting the lens focus without changing the distance between the camera and the field of view, hence changing the spatial resolution and range of the vision system.

The light source is a very important, but often neglected, part of a successful vision system design. The vision system gathers images using the reflected light from its field of view.

FIGURE 6.69: Vision sensor head types: (a) line-scan camera, where the sensor array is arranged along a line, and (b) two-dimensional camera, where the sensor array is arranged over a rectangular area.

The reflected light is highly dependent on the source of the light. There are four major lighting methods used in vision systems:

1. back lighting, which is very suitable in edge and boundary detection applications,
2. camera-mounted lighting, which is uniformly directed on the field of view and used in surface inspection applications,
3. oblique lighting, which is used in inspection of surface-gloss type applications,
4. co-axial lighting, which is used to inspect relatively small objects, such as threads in holes on small objects.

The image at each individual pixel is sampled by an analog-to-digital converter (ADC). The smallest resolution the ADC can have is 1-bit; that is, the image at the pixel would be considered either white or black. This is called a binary image. If the ADC has 2 bits per pixel, then the image in each pixel can be represented in one of four different levels of gray or color. Similarly, an 8-bit sampling of the pixel signal results in 2^8 = 256 different levels of gray (gray-scale image, or colors in the image). As the sampling resolution of pixel data increases, the gray-scale or color resolution of the vision system increases.
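The relation between ADC bits per pixel and gray levels can be illustrated with a minimal sketch; the function names and the reference voltage below are hypothetical choices, not part of the text.

```python
# Number of distinguishable gray levels for an N-bit per-pixel ADC: 2**N.
def gray_levels(bits):
    return 2 ** bits

# Map an analog pixel voltage in [0, v_max] to a digital gray level,
# as the ADC sampling step does (v_max is an assumed full-scale voltage).
def quantize(voltage, v_max, bits):
    levels = gray_levels(bits)
    return min(int(voltage / v_max * levels), levels - 1)

print(gray_levels(1))          # 2  -> binary image (black or white)
print(gray_levels(8))          # 256 gray levels
print(quantize(2.5, 5.0, 8))   # mid-scale voltage -> gray level 128
```

With 1 bit the quantizer reduces every pixel to black or white, which is exactly the binary-image case described above.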
In gray-scale cameras, each pixel has one CCD element whose analog voltage output is proportional to the gray-scale level. In color sensors, each pixel has three different CCD elements for the three main colors (red, green, blue). By combining different ratios of the three major colors, different colors are obtained.

Unlike a digital camera used to take pictures where the images are viewed later on, the images acquired by a computer vision system must be processed at periodic intervals in an automation environment. For instance, a robotic controller needs to know whether a part has a defect or not before it passes out of its reach on a conveyor. The available processing time is on the order of milliseconds, and even shorter in some applications such as visual servo applications. Therefore, the amount of processing necessary to evaluate an image should be minimized. Let us consider the events involved in image acquisition and processing.

1. A control signal initiates the exposure of the sensor head array (camera) for a period of time called the exposure time. During this time, each array element collects the reflected light and generates an output voltage. This time depends on the available external light and on camera settings such as aperture.
2. Then the image in the sensor array is locked and converted to a digital signal (A to D conversion).
3. The digital data is transferred from the sensor head to the signal processing computer.
4. Image processing software evaluates the data and extracts measurement information.

Notice that as the number of pixels in the camera increases, the computational load, and hence the processing time, increases, since the A/D conversion, data transfer, and processing all increase with the number of pixels and the resolution of each pixel (i.e., 4-bit, 8-bit, 12-bit, 16-bit). The typical frame update rate in commercial two-dimensional vision systems is at least 30 frames/s. Line-scan cameras can easily have frame update rates around 1000 frames/s.
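The scaling of the computational load with pixel count, pixel resolution, and frame rate can be made concrete with a rough data-volume estimate; this is a back-of-the-envelope sketch, and the 1024 x 1024 example numbers are assumptions for illustration.

```python
# Rough per-frame data volume for the acquisition pipeline described above
# (exposure -> A/D conversion -> transfer -> processing).
def frame_data_bytes(n_x, n_y, bits_per_pixel):
    return n_x * n_y * bits_per_pixel / 8

# A 1024 x 1024 sensor at 8 bits/pixel, updated at 30 frames/s:
per_frame = frame_data_bytes(1024, 1024, 8)   # bytes per frame
per_second = per_frame * 30                   # bytes the link must move per second
print(per_frame, per_second)  # 1048576.0 bytes/frame, ~31.5 MB/s
```

Doubling the sensor to 2048 x 2048 quadruples both numbers, which is why the transfer interface and the processing budget must be sized together with the camera.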
The effectiveness of a vision system is largely determined by its software capabilities: that is, what kind of information it can extract from the image, how reliably it can do it, and how fast it can do it. Standard image processing software functions include the following capabilities.

1. Thresholding an image: once an image is acquired in digital form, a threshold value of color or gray scale can be selected, and all pixel values below that value (white value)
