Image processing for the CCD based lateral flow strip detector

Immunochromatography technology has developed rapidly since the 1990s, with the advantages of rapidity, simplicity, economy and single-use testing. It has therefore been used in medical testing, food quality monitoring, poison detection, environmental supervision and so on. Today, immunochromatography is developing towards higher sensitivity, quantitation and multiplex detection, and may become an effective and sensitive mode of early screening and diagnosis for pathogens, malignant tumors and cardiovascular diseases [1-8]. For the detection of the lateral flow strip, we designed a CCD based detector. We chose an LED circular light source and combined a CCD analog camera with an image capture card to acquire the signals, which sharply reduced the bulk of the biological immunochromatography analyzer and laid the foundation for a portable, accurate detector controlled by an SCM. We developed the matching software in Borland Delphi, achieving the following functions: image capture, image processing, rapid automatic diagnosis, adjustment of the detection areas and timely report printing, as well as a thorough patient information database supporting input, query and statistics on patient information. Herein we captured strip images from different concentration gradients of HCG and established the image processing needed to acquire the color signal for quantitation.

For an image with limited information, gray-scale transformation can directly present the main image information and ignore the less important details. Since a bitmap is a lattice image, each pixel is composed of Red (R), Green (G) and Blue (B) components; RGB combinations can produce more than 16 million colors. We repeated tests under the actual conditions and finally adopted the following method. In the YUV color space, the physical meaning of Y is the luminance component; it contains all the grayscale information, so the Y component alone is enough to display a gray-scale version of the image. YUV and RGB have the following correspondence:

[Y]   [ 0.299   0.587   0.114] [R]
[U] = [-0.148  -0.289   0.437] [G]   (3)
[V]   [ 0.615  -0.515  -0.100] [B]

From the first row it follows that:

Y = 0.299R + 0.587G + 0.114B   (4)

From the R, G, B values we can therefore calculate Y; once every pixel's R, G, B values are converted to Y, we can display the gray-scale image, shown in Figure 2.
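Eq. (4) can be sketched in a few lines of pure Python (the representation of the image as nested lists of 8-bit (R, G, B) tuples, and the function names, are our own assumptions; the paper's Delphi implementation is not shown):

```python
def rgb_to_gray(pixel):
    """Luminance Y of an 8-bit (R, G, B) pixel, per Eq. (4)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def grayscale(image):
    """Apply the Y transform to every pixel of a 2-D RGB image."""
    return [[rgb_to_gray(px) for px in row] for row in image]
```

Because the three coefficients sum to 1.0, a neutral pixel such as (255, 255, 255) maps to 255 and (0, 0, 0) maps to 0, so the full 8-bit gray range is preserved.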
To enhance edge signals whose gray levels differ from the adjacent domain, the change of gray level between adjacent points should be highlighted. Differential operations calculate the rate of signal change, strengthen the high-frequency components, and make the image clean-cut. When the contour orientation cannot be determined in advance, we need a linear differential operator that has no preferred spatial direction and is rotation invariant; the convolution algorithm is especially suitable. Convolution is a good way to achieve sharpening and can be seen as a weighted-sum process: it uses an odd-dimensional matrix as a template, multiplies each pixel in the region by the corresponding template element, and the resulting sum becomes the new value of the region's center pixel. We use a 3 × 3 template in the convolution, taking the border and out-of-range issues fully into account.
The histogram reflects the relationship between the gray levels and their probabilities in the image, so it can be treated as a discrete function: P(S_k) = n_k / n. For an image whose information is simple, the median filter is a good choice, and the noise in the early input can be well controlled.
The median filter replaces each point of the digital image with the median value Y of its neighborhood points. For a series of numbers X1, X2, X3, X4, ..., Xn sorted in ascending order, Y is defined as follows:

Y = X_(n+1)/2                  if n is odd
Y = [X_(n/2) + X_(n/2+1)] / 2  if n is even

In the one-dimensional case, the median filter is a sliding window covering an odd number of pixels, and the value of the middle pixel is replaced by the median of the pixels within the window. In the two-dimensional case, a 3 × 3 or 5 × 5 window is generally used for median filtering.
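The odd/even median definition and the 3 × 3 window case above can be sketched as follows (a simple illustration in Python; leaving the one-pixel border unchanged is our simplification, not a choice stated in the paper):

```python
def median(values):
    """Median of a sequence, per the odd/even definition above."""
    s = sorted(values)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

def median_filter_3x3(image):
    """Replace each interior pixel by the median of its 3x3 window;
    border pixels are left unchanged for simplicity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```

A 3 × 3 window always holds nine (odd) values, so the filter never averages; an isolated impulse-noise pixel is simply replaced by the surrounding level.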
Although the median filter can remove noise while preserving image edges, blurring of the target outline is difficult to avoid; it can be caused, for example, by a poorly focused camera system or a narrow-band signal transmission system. In essence, image blurring occurs because the high spatial frequency components are weaker than the low spatial frequency components. To eliminate this kind of blur, we must enhance the high-frequency components of the image, namely sharpen it. Image sharpening is an easy way to improve image quality, since the edges of the image are composed of pixel points whose gray levels differ from the adjacent domain.
Filtering for noise removal would be offset by high-frequency enhancement applied to the whole image. Because of the particular requirements of the images in this work, and based on a close study of the sharpening principle, we first detect the edges with an edge detection algorithm and then apply the high-frequency enhancement only at the detected edges. Experimental results show that this effectively solves the noise problem after image sharpening, as shown in Figure 1.
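The edge-restricted enhancement described above might be sketched as follows. This is only an illustration of the idea under our own assumptions: the paper does not specify its edge detector or threshold, so the simple gradient-magnitude mask and the Laplacian boost used here are stand-ins:

```python
def edge_mask(image, threshold=30):
    """Mark pixels where the forward-difference gradient magnitude
    |dI/dx| + |dI/dy| reaches the threshold (assumed detector)."""
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            g = (abs(image[y][x + 1] - image[y][x]) +
                 abs(image[y + 1][x] - image[y][x]))
            mask[y][x] = g >= threshold
    return mask

def sharpen_edges_only(image, mask, amount=1.0):
    """High-frequency (Laplacian) enhancement applied only where the
    edge mask is set; flat, possibly noisy regions are untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not mask[y][x]:
                continue
            lap = (4 * image[y][x] - image[y - 1][x] - image[y + 1][x]
                   - image[y][x - 1] - image[y][x + 1])
            out[y][x] = min(max(round(image[y][x] + amount * lap), 0), 255)
    return out
```

The key property is that a flat region produces an empty mask, so residual noise there is never amplified, while genuine edges still receive the full high-frequency boost.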

Gray-scale transformation
Gray-scale transformation [9] is the most common image processing method for images with limited information. In the histogram formula above, S_k is the k-th gray level of the image f(x,y), n_k is the number of pixels of f(x,y) whose gray value is S_k, and n is the total number of pixels in the image.
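With S_k, n_k and n defined as above, the histogram and the discrete distribution P(S_k) = n_k / n can be sketched directly (a minimal Python illustration assuming an 8-bit gray image stored as nested lists; function names are ours):

```python
def histogram(gray_image, levels=256):
    """Count n_k, the number of pixels at each gray level S_k."""
    counts = [0] * levels
    for row in gray_image:
        for v in row:
            counts[v] += 1
    return counts

def gray_probabilities(gray_image, levels=256):
    """P(S_k) = n_k / n, the discrete gray-level distribution."""
    counts = histogram(gray_image, levels)
    n = sum(counts)
    return [c / n for c in counts]
```

Since every pixel is counted exactly once, the probabilities sum to 1, which is what makes the histogram an overall description of the image's gray values.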
Because P(S_k) gives the estimated probability of S_k, the histogram provides the distribution of the original gray values, that is, an overall description of the image's gray levels. When drawing the histogram, the x axis generally shows the distribution over the 256 gray levels, and the y axis shows the number of pixels at each gray level. The histogram is shown in Figure 3. Here B_z denotes the translation of B by the vector z, and B̂ the reflection of B. The dilation of A by the structuring element B is defined by:

A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }   (7)

The dilation is commutative and is also given by:

A ⊕ B = B ⊕ A = ∪_{b∈B} A_b   (8)

The so-called contour extraction empties all the internal points of the graphic, retaining only the edges. Specifically, when the original graphic is black, a black point whose eight neighboring points are all black is considered an internal point of the graphic and can be deleted. After this processing, we obtain the contour of the image, shown in Figure 4.
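The internal-point deletion rule for contour extraction can be sketched as follows (an illustration under our own conventions: the binary image is nested lists with 1 for black and 0 for white, and the one-pixel border is left as-is):

```python
def extract_contour(binary, black=1, white=0):
    """Delete every black point whose 8 neighbours are all black;
    what remains is the contour of the black region."""
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if binary[y][x] != black:
                continue
            neighbours = [binary[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0)]
            if all(v == black for v in neighbours):
                out[y][x] = white  # internal point: hollow it out
    return out
```

Note that the test reads from the original image while writing into a copy; modifying in place would let already-deleted points change their neighbors' classification.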

Quantitative
Once we obtain a clear outline, we can extract the effective information. We calculate the average gray value of the points inside the contour, and that is the final information we want.
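The final quantitative step reduces to averaging the gray values over the interior region. A minimal sketch, assuming the interior has already been marked in a boolean mask of the same shape as the gray image (the paper does not describe how the interior is flagged, so the mask representation is our assumption):

```python
def mean_gray_inside(gray_image, inside_mask):
    """Average gray value over the pixels flagged as inside the
    contour; this average is the quantitative signal of the strip."""
    total, count = 0, 0
    for row_g, row_m in zip(gray_image, inside_mask):
        for g, inside in zip(row_g, row_m):
            if inside:
                total += g
                count += 1
    return total / count if count else 0.0
```

Averaging over the region, rather than reading single pixels, makes the extracted signal robust to the residual pixel-level noise left after filtering.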

Figure 1 the strip image
Figure 2 the gray image of the strip

Figure 3 the histogram of the strip image

2.3 Edge detection

Usually, before the contour extraction we need to do some processing of the image details to prevent the generation of false contours. The main methods are erosion and dilation, together with the opening and closing operations, which are combinations of erosion and dilation. The erosion of the binary image A by the structuring element B is defined by:

A ⊖ B = { z | B_z ⊆ A }   (6)
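Erosion and dilation as defined in Eqs. (6)–(8) can be sketched compactly with point sets (an illustration under our own representation: A is a set of (y, x) pixel coordinates and B a set of (dy, dx) offsets; for a symmetric B the reflection B̂ in Eq. (7) equals B, which is why it does not appear explicitly below):

```python
def erode(A, B):
    """A ⊖ B = { z | B_z ⊆ A }: keep z only if B translated
    to z fits entirely inside A (Eq. (6))."""
    return {z for z in A
            if all((z[0] + dy, z[1] + dx) in A for dy, dx in B)}

def dilate(A, B):
    """A ⊕ B as the union of A translated by every b in B
    (the Eq. (8) form); symmetric in A and B."""
    return {(y + dy, x + dx) for y, x in A for dy, dx in B}
```

The set-comprehension form makes the commutativity of dilation immediate: swapping A and B only reorders the two additions.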

Figure 4 the contour of the strip image
Figure 5 A and B