Full Length Article | Volume 28, Issue 1, P32-42, February 2023


# An Optical Approach for Cell Pellet Detection

Open Access | Published: November 25, 2022

## Abstract

Cell-based screening methods are increasingly used in diagnostics and drug development. As a result, various research groups from around the world have been working on this topic to develop methods and algorithms that increase the degree of automation of various measurement techniques. The field of computer vision is becoming increasingly important and therefore has a significant influence on the development of various processes in modern laboratories.
In this work we describe an approach for detecting two pieces of height information, the phase boundary of a cell pellet and the bottom edge of the tube, and thereby a method for determining the highest point of the topology. The starting point for the development of the described method is cells obtained by various procedures and stabilized by a fixative. Centrifugation of the tube causes the cells to settle to the bottom of the tube, resulting in a cell pellet with a clear phase boundary between the cells and the fixative. For further studies, the supernatant fixative has to be removed without reducing the number of cells. The fixative is to be extracted automatically by a liquid handling robot, which is only possible by accurately determining the cell pellet height. Due to centrifugation, an uneven topology is formed, which is why the entire phase boundary must be examined to detect the highest point of the cell pellet.
For this approach, the tube to be examined, which contains the cells and the fixative, is rotated 360° in defined small steps after centrifugation. During rotation, an image is captured in each step, after which a defined image area is separated from the center of the image and merged into a panoramic image. This produces a panoramic image of the cell topology which represents the complete phase boundary as it appears on the outside of the tube. This panoramic image is modified through various image processing steps to extract and detect the phase boundary. Various image processing algorithms from the OpenCV library are used. In the first step, the panoramic image is convolved with a Gaussian blur filter to reduce noise. In the following step, a black and white image is generated by a thresholding process. This black and white image, or binary image, is convolved with a Sobel operator in the x and y directions and the results are superimposed. This overlaid image shows the top edge of the cell pellet and other edges located in the image. A logical exclusion method applied to the obtained boundaries is used for the detection of the phase boundary. To detect the tube bottom, a multilevel model was trained in advance with an appropriate data set. This model can detect and localize the tube bottom in an image in near real time.
By using the two pieces of height information from the different boundaries, the phase boundary and the tube bottom, the highest point of the cell pellet can be detected. This information is then passed on to a higher-level process so that the liquid robot can approach this point with the pipette tip to remove the excess fixative. By determining the highest point, the probability of being able to remove a larger amount of fixative without reducing the number of cells is maximized. This ensures that post-processing studies have the largest possible number of cells available, with complete automation.

## Keywords

#### Abbreviations:

AI (artificial intelligence), AOI (area of interest), AVIS (automated computer vision-based inspection systems), IVIS (intelligent automated computer vision-based inspection systems), CV (computer vision)

## 1. Introduction

Recently, a new field named computer vision (CV) has established itself in measurement technologies [

Marr B. 7 Amazing Examples Of Computer And Machine Vision In Practice. Forbes 2019, 8 April 2019; Available from: https://www.forbes.com/sites/bernardmarr/2019/04/08/7-amazing-examples-of-computer-and-machine-vision-in-practice/?sh=16c6701a1018. [August 01, 2022].

]. CV is a part of artificial intelligence (AI) and is focused on extracting relevant information from digital images, videos, or other visual inputs. This requires a large amount of data that is repeatedly analyzed for different features until a computer algorithm can identify different objects in an image. One of the most popular applications is facial recognition, which is already integrated into many applications in daily life. For example, modern smartphones can be unlocked, or bank transfers authorized, by biometric data []. But CV is not only integrated into daily life; industrial processes are also improving by using the advantages of the new algorithms. Probably the biggest advantage of CV is that it imitates human vision. Therefore, processes can be automated. One example is the automated quality checking of welding procedures [
• Ivanov M
• Ulanov A
• Cherkasov N.
Visual control of weld defects using computer vision system on FANUC robot.
]. This achievement has been made possible by advances in Convolutional Neural Network learning algorithms and by the large amounts of data that have been generated lately. This allows CV to be used in many new areas, including modern laboratories.
Biotechnological processes are now widely used in numerous areas of the life sciences. The underlying methods are increasingly based on the use of cells, cell components, or even microorganisms. The classic applications include cell-based assays, which are used in high-throughput and high-content investigations for the determination of biological activity or the determination of biochemical pathways. But also in medical diagnostics there is the task of examining cell-containing biological samples. It is often necessary to separate the cells from the surrounding medium; usually, centrifugation processes are used. The target of further investigations is often the resulting cell supernatants. In other applications - especially for cell purification or cell staining - the remaining cells are the target of further investigations. The cell pellets can be used for PCR, western blotting, or gene expression profiling. The investigation of cells can further be used for the early detection of diseases. Seemungal et al. reported the detection of rhinovirus in sputum for the detection of chronic obstructive pulmonary disease [
• Seemungal TA
• Harper-Owen R
• Bhowmik A
• Jeffries DJ
• Wedzicha JA.
Detection of rhinovirus in induced sputum at exacerbation of chronic obstructive pulmonary disease.
]. The detection of survivin and the diagnosis of bladder cancer were reported by Smith et al. [
• Smith SD
• Wheeler MA
• Plescia J
• Colberg JW
• Weiss RM
• Altieri DC.
Urine detection of survivin and diagnosis of bladder cancer.
]. The analysis of cells from the lung epithelium included in human sputum enables the detection of early signs of fibrosis or lung carcinoma [
• Wilbur DC
• Meyer MG
• Presley C
• Aye RW
• Zarogoulidis P
• Johnson DW
• et al.
Automated 3-dimensional morphologic analysis of sputum specimens for lung cancer detection: performance characteristics support use in lung cancer screening.
]. After various preparatory steps, the cells are kept in a fixative to stabilize them until the main examination using a cell CT. The procedures described above are widespread and very time-intensive. Performed mostly by highly trained personnel, the procedures tie up a large part of the costs in monotonous tasks. In addition, laboratory staff are unable to focus on research, which adds significantly more value to society. In addition to cost savings and more productive research, automated processes are less error-prone than processes performed by a human, and they allow for continuous utilization of a wide variety of laboratory equipment. Therefore, automation of a wide variety of measurement processes is imperative to support, rather than interfere with, ever faster and better research in a wide variety of fields.
The automation of such processes requires innovative solutions for the detection of the resulting cell pellets or the detection of the interface between cells and cell supernatant. Due to the specific properties of the solutions to be centrifuged, the centrifugation process often does not produce smooth and even surfaces. The exact detection of the fill levels as well as the determination of the volumes / amounts of cells is necessary for the further processing of the samples in order either to achieve a complete suction of the cell supernatants or to determine the required amounts and concentrations of reagents for further processing.
Optical, camera-based methods can be used here. Essential when examining the samples is that the method does not affect or contaminate the sample under investigation. Therefore, non-invasive measurement methods are well suited, as shown in many articles [

Eppel S, Kachman T. Computer vision-based recognition of liquid surfaces and phase boundaries in transparent vessels, with emphasis on chemistry applications. arXiv:1404.7174.

,
• Eppel S.
Tracing the boundaries of materials in transparent vessels using computer vision.
,
• Chakravarthy S
• Sharma R
• Kasturi R.
Noncontact level sensing technique using computer vision.
,
• Yazdi L
• Prabuwono AS
• Golkar E.
Feature extraction algorithm for fill level and cap inspection in bottling machine.
,
• Modi CK
• Chauhan JD.
Comparison of optimal edge detection algorithms for liquid level inspection in bottles.
,
• Ley SV
• Ingham RJ
• O'Brien M
• Browne DL.
Camera-enabled techniques for organic synthesis.
,
• Wang T-H
• Lu M-C
• Hsu C-C
• Chen C-C
• Tan J-D.
Liquid-level measurement using a single digital camera.
,
• Feng F
• Wang L
• Zhang Q
• Lin X
• Tan M.
Liquid surface location of milk bottle based on digital image processing.
,
• Liu X
• Bamberg S
• Bamberg E.
Increasing the accuracy of level-based volume detection of medical liquids in test tubes by including the optical effect of the meniscus.
,
• Yuan Weiqi
• Li Desheng
Measurement of liquid interface based on vision.
]. They show how to measure the height and volume of liquids in different transparent vessels with laser diodes. That is a smart and efficient process, but it cannot detect more complex information, for example the topology of a cell pellet. The measurement equipment needs far more input signals to detect detailed features [
• Liu X
• Bamberg S
• Bamberg E.
Increasing the accuracy of level-based volume detection of medical liquids in test tubes by including the optical effect of the meniscus.
,
• McNeal JD
• Liu Y
Sample Level Detection System: B2(US 6,770,883).
,
• Bryant ST
• Barber CP.
System and method for detection of liquid level in a vessel: B2(US 7,982,201);.
].
New research results in machine learning, a branch of AI, show that well-fitted models can interpret lab samples automatically. This automatic interpretation is used in medical fields such as radiology, pathology, dermatology, and cancer detection [
• Liu X
• Faes L
• Kale AU
• Wagner SK
• Fu DJ
• Bruynseels A
• et al.
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis.
]. MindPeak (Hamburg, Germany) uses AI to extract information from microscopic images of biopsies that supports oncologists in choosing a more suitable therapy []. The combination of AI and CV has huge potential and substantially increases the level of automation.
In addition, computer processing power is improving continuously and has made great progress in the past. For this reason, digital image processing for measurement methods is a new way to determine more complex information, as described above. CV needs one or more images to perform the measurements. CV uses very complex algorithms and hence needs more computing power, but the advantages are evident. For example, a camera system with a processing unit can check the quality of filled bottles in the bottling industry faster and more accurately than an employee [
• Yazdi L
• Prabuwono AS
• Golkar E.
Feature extraction algorithm for fill level and cap inspection in bottling machine.
]. In this process it is important that the illumination quality is good and adapted to the scene and material. For dimensional measurements of components, background illumination is preferable; for surface measurements, as for the bottle quality controls in the bottling industry, incident illumination is the right choice. CV can be used in many branches and for many tasks that assume the ability to see.
In the following section, similar methods that have addressed the detection of phase boundaries are presented. A wide variety of approaches are used to detect the phase boundaries. That section also discusses why already published approaches are not suitable for the problem presented in this publication. In the fourth section, the developed method is presented in separate steps and the procedure is explained. It is discussed in more detail how the panoramic image is formed, image enhancement is performed, the phase boundary of the cell pellet is detected, and how the bottom edge of the tube is localized. In the following fifth section, various test results are presented and discussed. Finally, a summary and an outlook, which includes further developments, follow.

## 2. Methods for phase boundary detection

The patent US 6,770,883 B2 describes a measurement system that uses two diodes with different wavelengths to detect phase boundaries of a buffy coat in clear tubes [
• McNeal JD
• Liu Y
Sample Level Detection System: B2(US 6,770,883).
]. The system contains two diodes with different wavelengths, two detectors to receive the beams, and an actuator that moves the sample tube between the diodes and detectors. The diodes can be mounted at different heights. For optimal results, it is recommended to offset the diodes from one another by 90°. The detectors should be positioned 180° opposite the diodes. The measuring device exploits the physical property of water to absorb light of a certain wavelength, which is described by the absorption coefficient. Light with a wavelength between 750 nm and 1,250 nm is only marginally attenuated and is therefore most suitable for this method. The second diode emits light with a wavelength of 1,550 nm, an infrared wavelength that is used to detect blood cells [
• McNeal JD
• Liu Y
Sample Level Detection System: B2(US 6,770,883).
].
This method considers two different scenarios that differ in the number of phase boundaries and the contents. The test tube includes a buffy coat consisting of serum and erythrocytes, or of serum, gel, and erythrocytes. Based on the spectral properties of the three components as a function of the absorption coefficient, a correlation can be made to detect the phase boundaries. The visible light is hardly attenuated by the air, serum, and gel. The intensity decreases only when the visible light hits the phase of erythrocytes. The absorption coefficients of serum and gel are similar to that of water; thus, no distinction is possible. Unlike visible light, infrared light can only propagate in air and gel and is absorbed by serum and erythrocytes. By moving the test tube vertically, the light intensity can be recorded as a function of height, allowing the position of the interfaces to be detected. A series of tests was performed with 800 test tubes and an error rate of 3% was determined. The test series also shows that the method is robust to applied labels and markings [
• McNeal JD
• Liu Y
Sample Level Detection System: B2(US 6,770,883).
].
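The classification logic of the patent can be sketched as follows. This is a hypothetical illustration of the decision rules described above, not code from the patent; the function names and data layout are invented for the sketch.

```python
def classify_medium(visible_transmitted: bool, infrared_transmitted: bool) -> str:
    """Map the two detector signals at one height to a medium.

    Visible light (~750-1,250 nm) passes air, serum, and gel but is
    absorbed by erythrocytes; infrared light (1,550 nm) passes only
    air and gel and is absorbed by serum and erythrocytes.
    """
    if visible_transmitted and infrared_transmitted:
        return "air or gel"        # both beams pass
    if visible_transmitted and not infrared_transmitted:
        return "serum"             # IR absorbed, visible passes
    return "erythrocytes"          # visible light absorbed


def find_boundaries(profile):
    """Return heights where the classified medium changes.

    `profile` is a list of (height, visible_ok, ir_ok) tuples recorded
    while the actuator moves the tube vertically.
    """
    boundaries = []
    prev = None
    for height, vis, ir in profile:
        medium = classify_medium(vis, ir)
        if prev is not None and medium != prev:
            boundaries.append((height, prev, medium))
        prev = medium
    return boundaries
```

Scanning such a profile from top to bottom yields the phase boundaries as the heights where the classified medium changes, which mirrors the intensity-versus-height correlation in the patent.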
An optical system that detects and analyzes the light transmitted by samples was presented by X. Liu et al. The detection depends on the light's wavelength, the vessel, reflection, and diffusion [
• Liu X
• Bamberg S
• Bamberg E.
Increasing the accuracy of level-based volume detection of medical liquids in test tubes by including the optical effect of the meniscus.
]. In contrast to other approaches, X. Liu et al. especially focused on the surface effects that form at the edge of the vessel [
• McNeal JD
• Liu Y
Sample Level Detection System: B2(US 6,770,883).
,
• Bryant ST
• Barber CP.
System and method for detection of liquid level in a vessel: B2(US 7,982,201);.
]. Due to adhesion forces between different materials, the fluid is pulled up the vessel side walls. The result is the formation of a concave surface, also known as a meniscus [

Wikipedia. Kapillarität. [October 27, 2021]; Available from: https://de.wikipedia.org/w/index.php?title=Kapillarität&oldid=213701740.

]. Because of the physical property described above, the resulting volume difference must be considered when determining the liquid volume in vessels to achieve higher accuracy.
The measurement setup consists of three units: a detection unit, a reference unit, and an actuation unit. The detection and reference units each have a laser diode and an opposing detector. The laser diode and the photodetector are located on the same optical axis. The vessel to be examined is moved vertically between the laser diode and the photodetector by the actuation unit. The detection and reference units are arranged on top of each other and differ only in the wavelength emitted by the laser diode. The detection unit uses a wavelength of 1,550 nm and the reference unit a wavelength of 980 nm. The parallel distance between the diodes is known and is used to align the two data sets. During the measuring process, the vessel is moved vertically by the actuation unit so that the detection unit and the reference unit can scan the section to be examined. During this process, three different scenarios are run through. In the first scenario, the light is only influenced by the vessel and the medium air. The second scenario describes the light attenuation by the vessel and the second medium. In the last scenario, the meniscus deflects the light. As a result, the strongest attenuation occurs here. This effect makes it possible to determine the position and height of the meniscus.
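The three-scenario model can be illustrated with a minimal sketch: given a recorded attenuation profile, the meniscus corresponds to the sample with the strongest attenuation. The function name and data layout are assumptions for illustration, not part of the cited work.

```python
def locate_meniscus(attenuation_profile):
    """Return the (height, attenuation) sample with the strongest
    attenuation, which in the three-scenario model corresponds to the
    beam being deflected by the meniscus."""
    return max(attenuation_profile, key=lambda sample: sample[1])


# example profile: air (weak attenuation), meniscus (strongest),
# liquid (medium attenuation), scanned from top to bottom
profile = [(10.0, 0.1), (9.5, 0.1), (9.0, 0.9), (8.5, 0.4), (8.0, 0.4)]
print(locate_meniscus(profile))  # → (9.0, 0.9)
```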
Patent US 7,982,201 B2 from 2011 describes a system that can detect multiple phase boundaries in a test tube [
• Bryant ST
• Barber CP.
System and method for detection of liquid level in a vessel: B2(US 7,982,201);.
]. For this purpose, the measurement system uses the different refractive indices of the media. By additionally adding the diameter of the tube, the volume of the media can be determined. The information on the liquid surface's position can then be passed to a system that can, for example, separate the liquids. The system consists of two diodes, a camera, and an actuator that can move the test tube translationally and rotationally. One diode projects a strip of light onto the rear of the test tube. The camera is positioned opposite the incoming light beam to detect the outgoing light beam. The second diode is located below the test tube and illuminates the scene. To determine the boundaries, the diode projects a light strip onto the rear of the tube. This light strip is refracted to different degrees depending on the refractive indices of the media present in the vessel. Applied labels do not affect the results. The resulting image is sent to a processing unit to interpret it and determine fill levels.
Visual quality check systems are increasingly used in medical applications, food & beverage industry, chemical industry and others [
• Yazdi L
• Prabuwono AS
• Golkar E.
Feature extraction algorithm for fill level and cap inspection in bottling machine.
]. The main task of CV is to detect and extract features in images to understand different scenes. There are two different types of inspection systems: automated computer vision-based inspection systems (AVIS) and intelligent automated computer vision-based inspection systems (IVIS). AVIS try to check the quality of the product through images. IVIS have more complex software/hardware or use AI to interpret the product quality or the whole scene more accurately. The work of L. Yazdi et al. deals with AVIS to check the product quality of filled bottles in the filling industry and classifies the following errors: underfilling or overfilling of bottles and the cap position [
• Yazdi L
• Prabuwono AS
• Golkar E.
Feature extraction algorithm for fill level and cap inspection in bottling machine.
].
An AVIS consists of an image unit, an illumination unit and a processing unit. The image unit captures the digital image, which is examined in the following steps. Essential is the image quality, which depends on the type, number, and position of the camera, as well as the scene in which the object to be examined is located. In addition, the illumination is crucial to extract the features to be examined, so the illumination unit also has a considerable impact on the image quality and the detection.
The processes described above are suitable for determining liquid surfaces and various features in images. During liquid surface detection, low-viscosity liquids are examined, which has the advantage that the liquid surface has an equal height at every point after a defined time (excluding side effects such as capillary action). Thus, the measurement at one point is sufficient to determine the liquid level. The processes from the bottling industry detect the liquid surface through convolution with an edge detector. Therefore, level height detection at multiple points is possible. However, the vessels are observed from only one angle; thus, details from the side or back are not included in the detection. Another approach, presented by T. Zepel et al. in 2020, uses the Sobel operator as an edge detector for monitoring the liquid level during continuous preferential crystallization. This is a time-consuming process that is usually performed by laboratory staff. This approach fully automatically monitors the liquid level in the vessel through a side-mounted camera that detects the liquid level. The user is provided with a user interface that allows the limits of the liquid level to be set visually. If the software detects a liquid level outside the limits, the software changes the pump parameters accordingly to correct the level error [
• Zepel T
• Lai V
• Yunker LPE
• Hein JE.
Automated liquid-level monitoring and control using computer vision.
]. However, this approach assumes a straight edge, which makes it impossible to detect more complex edges. Similar approaches published by M. Devare and Z. Preston et al. also deal with the detection of phase boundaries. However, here a Canny edge detector is used to detect a liquid level in transparent vessels [
• Devare M.
Parallel image processing for liquid level detection.
,
• Preston Z
• Green R.
The levelshred method: a solution to fluid level detection in partially-obstructed containers.
]. These methods are suitable for liquid surfaces but yield inaccurate measurement results when determining solids that form uneven surfaces, such as cell pellets after centrifugation.
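The straight-edge assumption behind these detectors can be made concrete with a simplified gradient-based stand-in (not the cited implementations, which use Canny or Sobel via OpenCV): the liquid level is taken as the single image row with the strongest mean vertical gradient, which by construction cannot represent an uneven boundary such as a pellet topology.

```python
import numpy as np


def detect_level_row(gray: np.ndarray) -> int:
    """Estimate a straight liquid level as the row with the strongest
    mean vertical intensity gradient. This collapses the whole boundary
    to one height value, which is exactly the limitation discussed in
    the text."""
    grad = np.abs(np.diff(gray.astype(float), axis=0))  # vertical gradient
    row_strength = grad.mean(axis=1)                    # average per row
    return int(np.argmax(row_strength)) + 1             # row below the step


# synthetic tube image: bright "air" above row 40, dark "liquid" below
img = np.full((100, 60), 200, dtype=np.uint8)
img[40:, :] = 60
print(detect_level_row(img))  # → 40
```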
S. Eppel described an approach for the detection of complex phase boundaries in transparent vessels [

Eppel S. Tracing liquid level and material boundaries in transparent vessels using the graph cut computer vision approach; 2016.

]. In this detection method, the foreground and the background are separated by the mathematical operation known as graph-cut optimization. For the procedure, initial starting points, so-called seeds, must be set, which lie in the areas of the different materials. Seeds are set by the user or by assumptions and must be present for each area that is to be separated from another [
Markus Hillebrand
Bildsegmentierung mit Graph Cut.
]. In the approach presented by S. Eppel, phase boundaries between solid-air and liquid-air are detected [

Eppel S. Tracing liquid level and material boundaries in transparent vessels using the graph cut computer vision approach; 2016.

]. A prerequisite for detection is knowledge of the contours of the vessel in the image. For the automatic localization of the seeds, the assumption is made that, within the vessel defined by the contours, the upper 10% belongs to the sink and the lower 10% belongs to the source. Based on this assumption, the individual pixels of the image are assigned to the different seeds by selecting the edge between the two different materials that has the lowest cost. This approach can detect complex phase boundaries between materials, but it requires information about the vessel contour that clearly separates the materials from the rest of the image.
Eppel et al. also reported the detection of liquids and solids in transparent vessels. For the detection, two different neural networks were trained in advance: one network segments the transparent vessel and thus creates a mask for the input image of the second network, which performs the segmentation of different liquids, solids, and foams. The input image of the second network is thus cleaned of superfluous information, and the segmentation achieves a significantly higher accuracy [
• Eppel S
• Xu H
• Bismuth M
• Aspuru-Guzik A.
Computer vision for recognition of materials and vessels in chemistry lab settings and the vector-labpics data set.
,

Eppel S, Xu H, Aspuru-Guzik A. Computer vision for liquid samples in hospitals and medical labs using hierarchical image segmentation and relations prediction; 2021.

]. A condition for the detection of different phase boundaries is a high contrast between the different substances. When detecting a phase boundary from a cell pellet, the contrast is not sufficient to make a clear distinction between, for example, the cells and the tube. Therefore, this method is unsuitable for this problem.
For the graph-cut method, it is additionally essential that the lower 10% of the image represents the source material and the upper 10% the sink. For the process described in this paper, this means that the cell pellet must fill the lower part of the image. However, the cell pellet is not located at the image bottom but in the middle, and furthermore it cannot be ensured that the number of cells in each image fills up the lower 10%.
Additionally, the cell pellet topology does not always satisfy the 10% assumption. The centrifugation results in an uneven surface with hills and valleys. In the case of valleys below the assumed 10%, the phase boundary is not detected correctly, and the detection merely reproduces the assumption. Therefore, another detection method must be used to clearly detect the phase boundary between the cell pellet and the liquid. In order to determine the height of the cell pellet, the lower edge of the tube must also be detected, because the tube bottom can vary in its dimensions due to manufacturing tolerances and positioning inaccuracies. Therefore, a new way to determine the cell pellet height, depending on the phase boundary and the lowest point of the tube, is described in the next chapters. In addition, faulty edge sections caused by the shadows of the grub screws are automatically masked out. Through this process, the highest point of the cell pellet can be determined, and thus a higher-level process can aspirate the fixative without reducing the number of cells. This process can be performed completely autonomously, and no assumptions have to be made, such as the presence of cells in a certain image area.

## 3. Cell pellet detection system

The aim of this work is to develop a suitable method for the automated detection of the phase boundary between a cell pellet and a supernatant. The fixative, made in equal parts of water and ethanol, is used to slow down the dissolving process of the cells. After the measurement, the highest point of the cell pellet topology can be determined. This information can be passed to the higher-level process software for further processing of the sample, such as the aspiration of the supernatant. With the height information, any loss of the cell pellet during the aspiration of the liquid can be avoided.
A suitable cell pellet detection system was developed. The system consists of a camera fixed on a linear rail system to set the focus area individually, a unit for holding and rotating a tube which contains the cells, and a background illumination to light the scene (see Fig. 1.1). The holding unit consists of a tube made of synthetic material, which holds the test tube containing the cells. At the bottom of the transparent synthetic tube, three grub screws center the test tube and lift it up from the ground.
A step motor below the tube holder enables the rotation of the tube. All parts are covered by a black box to prevent side effects from environmental light [
• Karnik Ameya
Entwicklung eines Geräts zur optischen Phasengrenzerkennung [Masterthesis].
]. The system is integrated with a liquid handling system for further processing of the samples, including aspiration of the supernatant above the cell pellet. Sample tubes are provided to the system using a robotic arm. Fig. 2, Table 1 shows the parts of the detection unit in detail.
Fig. 1.2 shows a single image of a tube with cells. In addition to the cell pellet, the image shows shadows on the left and right side of the cone and two of the three grub screws: one on the left side and one on the right side. The shadows are a result of the background illumination and the conical form, but they are outside the area of interest (AOI). In Fig. 2, Table 2, the colored parts are explained in detail. The AOI width depends on the number of images (see also the chapter Panoramic image). The grub screws interfere with the measurement since they generate shadows during the rotation or lie at the same height as the cell pellet boundary.

## 4. Methods

The whole measurement process was developed in LabCV, an in-house developed framework that enables support of data parallelism, hardware support and abstraction, support for changing labware, and device setup routines [
• Ritterbusch K.
A framework for optical inspection applications in life-science automation [Dissertation].
]. Furthermore, OpenCV libraries were used for typical image operations, and cascade classifiers were used to recognize the tube bottom.

### 4.1 Image acquisition

For accurate detection of the pellet height, images must be taken from different angles. Therefore, the step motor of the holding unit must rotate the sample tube in defined steps. The step size could be 1°; in this case, 360 images of the sample tube are taken from different angles. However, the step size must be chosen so that at least 70 images are taken, since fewer images degrade the quality of the panorama image. The angle value is the quotient of the full rotation and the number of images to be captured. The first step of this process is a defined initial state, which is determined by an inductive proximity sensor.
At this position, the first image is captured and added to the vector I. Then the step motor rotates the tube by the calculated angle and the next image is taken and added to vector I. These steps are repeated until a full rotation is complete. Fig. 3.1 shows the program flow chart.
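The acquisition loop described above can be sketched as follows. The camera and step-motor interfaces are not shown in the paper, so `capture_image` and `rotate_to` are hypothetical stand-ins; the 70-image minimum comes from the text.

```python
def acquisition_angles(num_images: int) -> list:
    """Rotation angles for the image acquisition loop.

    The step angle is the quotient of the full rotation (360°) and the
    number of images; at least 70 images are required for a usable
    panorama.
    """
    if num_images < 70:
        raise ValueError("at least 70 images are required for a usable panorama")
    step = 360.0 / num_images
    return [i * step for i in range(num_images)]


def acquire(capture_image, rotate_to, num_images=120):
    """Hypothetical acquisition loop building the image vector I.

    `capture_image()` returns one frame from the camera and
    `rotate_to(angle)` commands the step motor; both are assumed
    interfaces, not part of the publication.
    """
    I = []  # image vector from Section 4.1
    for angle in acquisition_angles(num_images):
        rotate_to(angle)
        I.append(capture_image())
    return I
```

The initial state (angle 0°) corresponds to the position reported by the inductive proximity sensor, after which the loop captures one frame per step.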

### 4.2 Panoramic image

The panoramic image is essential for the cell pellet boundary detection. To create a panoramic image, small subframes from every image in vector I must be stitched next to each other. The image parts are taken from the middle of the images because this point has the lowest perspective distortion and the cell pellet is fully shown. The width of the subframe i (Fig. 1.2, purple rectangle) depends on the number of images and is calculated by:
$i = 2 \cdot \tan\left(\frac{\alpha}{2}\right) \cdot r$
(4.2.1)

Due to the conical form of the tube, the radius r depends on the cell pellet height. However, this fact does not affect the cell pellet boundary detection and can therefore be neglected.
These subframes are stitched together into a panoramic image (see Fig. 3.2, right). The panoramic image height is equal to the single-image height; its width is the product of the number of images and the width of one subframe. Because of this dependency, the quality of the panoramic image depends on the number of images taken. If fewer than 70 images are used, formula (4.2.1) yields a subframe width that is too broad (see Fig. 3.2, left), and the cell pellet boundary is not shown correctly. The panorama in Fig. 3.2 (right) contains three grub screws (yellow) and their shadows (green), the label '5.0' of the test tube with the associated height-marking lines, and the cell pellet, which shows a color gradient from black to brown depending on the number of cells, with its boundary drawn as a red line.
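The stitching step can be sketched with NumPy. The tube radius in pixels (`radius_px`) is assumed to be known from the optical calibration of the setup; Eq. (4.2.1) then gives the subframe width from the step angle:

```python
import math
import numpy as np

def build_panorama(images, radius_px):
    """Stitch the central subframe of each image into one panoramic image.

    images:    list of equally sized grayscale frames (vector I)
    radius_px: tube radius at the pellet height, in pixels (assumed known
               from the optical calibration of the setup)
    """
    alpha = 2 * math.pi / len(images)                 # step angle in radians
    sub_w = max(1, round(2 * math.tan(alpha / 2) * radius_px))  # Eq. (4.2.1)
    strips = []
    for img in images:
        cx = img.shape[1] // 2                        # horizontal image centre
        strips.append(img[:, cx - sub_w // 2: cx - sub_w // 2 + sub_w])
    return np.hstack(strips)  # same height as one frame, n_images * sub_w wide
```

With 90 images the panorama is 90 subframes wide; with too few images the computed subframe becomes too broad, which is exactly the failure mode shown in Fig. 3.2 (left).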

### 4.3 Cell pellet boundary detection

The phase boundary detection is divided into five steps. The starting point is one of the three channels of the RGB image, namely the green channel extracted from the panoramic image. The first step is a convolution with a Gaussian blur filter (OpenCV) to reduce noise; the convolution smooths the cell pellet boundary and suppresses dust and dirt on the tube. The second step converts the image to a binary (black-and-white) image using a threshold operation (OpenCV). The binary image contains the cell pellet boundary, the text label, the grub screws, and their shadows. To detect the cell pellet boundary, the image is convolved in the x and y directions with a Sobel operator and the two results are overlaid in equal parts. The OpenCV function 'findContours' returns all detected edges in the panoramic image as vectors, including edges from the grub screws or the labels on the test tube. Using the logical criterion that the sought edge must start at the left side of the image and end at the right side, one edge is selected from the function's results. All steps are shown in Fig. 3.3. To remove small bumps and smooth the edge, a moving average is calculated. Fig. 3.3 (i) shows the result drawn as a red line, together with the grub-screw shadows that interfere with the edge detection and affect the measurement results.

### 4.4 Detection and elimination of the grub screws shadows

For the height measurement, the grub-screw shadows must be detected and excluded. One method is to specify fixed positions and ranges that are not considered. This method has several disadvantages: the positions of the screw shadows depend on the number of images and are not constant between measurements, the shadow widths differ, and the defined range is excluded even when a shadow does not affect the cell pellet boundary detection.
For stable detection, it is more suitable to detect the shadows from the numerical derivative of the detected phase boundary (red line). In the derivative, a search is made for three or fewer zeros, each of which must lie in one third of the image. A major advantage over the static method is that if the phase boundary detection is not influenced by one or more screw shadows, that area is still considered in the detection of the cell edge. Fig. 3.4 shows that screw shadows 1 and 2 are detected and the faulty area of the edge is omitted (within the blue limits). The shadow of screw 3 is not detected because it does not affect the cell pellet border, so the edge in this area can be considered.
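One plausible reading of the derivative-based idea can be sketched as follows: a screw shadow distorts the boundary, producing abrupt jumps in its numerical derivative, and the columns around each jump are excluded. The jump threshold and padding are assumed tuning parameters for illustration, not values from the paper:

```python
import numpy as np

def find_shadow_regions(boundary_y, jump_thresh=5.0, pad=10):
    """Return [(left, right), ...] column regions to exclude from the edge."""
    dy = np.diff(boundary_y)                    # numerical derivative of the edge
    spikes = np.flatnonzero(np.abs(dy) > jump_thresh)  # abrupt boundary jumps
    regions = []
    for x in spikes:
        lo, hi = max(0, x - pad), min(len(boundary_y) - 1, x + pad)
        if regions and lo <= regions[-1][1]:    # merge overlapping regions
            regions[-1] = (regions[-1][0], hi)
        else:
            regions.append((lo, hi))
    return regions
```

A shadow that does not distort the boundary produces no spike, so its area stays available for the edge detection, matching the behavior for screw 3 in Fig. 3.4.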

### 4.5 Detection of the tube bottom

For the tube bottom detection, a method based on a cascade classifier is used. OpenCV includes a function to train a multi-stage model for this purpose. The training uses positive images containing the object of interest as well as negative images containing only the disturbing background. For the tube bottom detection, 16,932 positive and 10,352 negative images were used. The positive images show the tube bottom; for better detection, the tube bottom should be positioned at the lower edge of each positive image. The negative images show the scene without the tube bottom. Too many images result in overfitting of the model, which decreases the precision of the detection. The OpenCV application 'opencv_traincascade' creates a multi-stage model from the positive and negative images, which can be loaded during the phase boundary detection and runs in near real time. The model detects the tube bottom and determines its position in the image. The result is a rectangle containing the tube bottom, with the rectangle's lower edge at the position of the tube bottom. To determine the real bottom (the bottom inside the tube), an offset is applied to the rectangle's lower edge. The offset is defined by the thickness of the tube bottom, converted into a number of pixels.

## 5. Results

For the proof of functionality, test tubes containing different amounts of centrifuged solutions of human cervix carcinoma cells (HeLa) were used. HeLa cells are widely used and accepted for a variety of testing tasks; the line was extracted from a carcinoma in 1951 and has been cultivated ever since [
• Mink C.
Zusammenhänge von Struktur und Funktion unterschiedlicher membranaktiver Peptide.
]. Due to the different amounts, the pellet height in each tube may differ. In total, 121 image series with 360 images each were acquired, and a panoramic image was created from each series. In these panoramic images, the quality and position of the detected edge, as well as the position and width of the defined screw-shadow areas, were checked.

### 5.1 Quality test of the detected cell boundary

For this quality test, 121 panoramic images (43,560 single images in total) from 14 different tubes were created. These panoramic images were processed by the algorithms described above to detect the cell pellet boundary and the screw shadows. The results are drawn as colored lines in the images (see Fig. 3.4) for visual analysis.
To express the quality numerically, the visual analysis results are scored on a scale of 1 to 6, where 1 is the best and 6 the worst quality. The test assesses the position of the phase boundary, i.e., the upper edge of the cell pellet, and whether the detected line is interrupted or contains false detections. The areas around the screw shadows, which are excluded from the detection, are also excluded from the scoring. The results are presented in Fig. 2 and Table 3.
The average score across all results is 1.12. Two types of detection error occurred: a shift of the detected edge due to an indistinct transition between the liquid phase and the cell pellet, and insufficient pellet formation during centrifugation. All samples were stored at a room temperature of 20 °C and normally processed immediately after preparation. The first error is caused by the unstable state of the cells and their decay over time, which blurs the edge between the cells and the liquid phase and reduces the contrast. This error only occurred with samples stored at 20 °C for 24 hours and was the most frequent error in the measurement series, at 11%. Since the process is not expected to examine older samples, this error can be neglected. The second error occurred in about 5% of the examined images and is attributable to a low number of cells in the sample.

### 5.2 Quality test of the detected screw shadows

As a control, the correctly detected screw shadows and the correctly set left and right boundaries, a total of 726 boundaries for 363 screw shadows, were counted in the 121 panoramic images. The result is shown in Fig. 2.
A major advantage over the static detection of the screw shadows is that if the cell edge is not influenced by a screw shadow, that area is considered in the detection of the cell pellet boundary. Fig. 3.4 shows that screw shadows 1 and 2 (numbered from left to right) are detected and the faulty area of the edge is omitted, while the shadow of screw 3 is not detected and the edge in this area can be considered.

### 5.3 Measured values

To check the accuracy of the surface height detection, five tubes with HeLa cells were each examined 50 times. The five tubes contained 5, 10, 15, 20, and 25 cell units, respectively. Fifty images were taken of each tube to perform a chi-square test [
• Papula L.
Mathematik für Ingenieure und Naturwissenschaftler: Band 3.
]. The phase boundary was calculated by the algorithm described above, and the dynamic algorithm was used to detect the screw shadows. The measurements were taken one after the other, each taking about 3 minutes, so any influence of the measurements on each other can be excluded.
In the first step, the measured values are examined for their statistical distribution by dividing them into classes. A chi-square test was performed to test the samples for a normal distribution. According to Lothar Papula, a sample size of n > 50 is sufficient, and each class should contain at least 5 values [
• Papula L.
Mathematik für Ingenieure und Naturwissenschaftler: Band 3.
]. For each measurement series, 50 measurements are available, but not all classes contain the required minimum of 5 measurements. A mathematical proof of normality is therefore not possible, and a normal distribution of the measured values is only assumed. Fig. 4.1 shows the discrete histogram of the measurement series described above. The class limits are plotted on the x-axis in millimeters; the class frequency is indicated on the y-axis in the range from zero to 30, identically for all figures.
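The class-based normality check can be sketched with NumPy alone. The bin layout (equal-width classes between the extreme values) and the renormalization of the expected frequencies are assumptions for illustration, not necessarily the paper's exact procedure:

```python
import math
import numpy as np

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of the normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi_square_statistic(values, n_bins=5):
    """Chi-square statistic comparing binned measurements to a fitted normal.

    Note: for a valid test, each class should contain at least 5 values.
    """
    values = np.asarray(values, dtype=float)
    mu, s = values.mean(), values.std(ddof=1)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    observed, _ = np.histogram(values, bins=edges)
    cdf = np.array([normal_cdf(e, mu, s) for e in edges])
    expected = np.diff(cdf) / (cdf[-1] - cdf[0]) * len(values)  # renormalized
    return float(np.sum((observed - expected) ** 2 / expected))
```

The statistic would then be compared against the chi-square quantile for the chosen significance level and the appropriate degrees of freedom (classes minus one, minus two fitted parameters).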
To examine the repeatability of the algorithm, the standard deviation of each measurement series is calculated, under the assumption that the measured values $x_1, x_2, \dots, x_n$ are samples from a statistical population and are normally distributed [
• Papula L.
Mathematik für Ingenieure und Naturwissenschaftler: Band 3.
].
The arithmetic mean $\bar{x}$ is the best estimate for the true value $\mu$:
$\bar{x} = \frac{1}{n}\left(x_1 + x_2 + \dots + x_n\right) = \frac{1}{n}\sum_{i=1}^{n} x_i$
(5.3.1)

The standard deviation $s$ is the best estimate for the unknown standard deviation $\sigma$ of the normally distributed population and is calculated by:
$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} v_i^2} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}$
(5.3.2)

Based on the calculated estimates, a density function $f(x)$ of a normally distributed measurand can be established for each measurement series. The formula is:
$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \cdot e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
(5.3.3)

The estimates are calculated for all measurement series and inserted into the density functions $f(x)$. Fig. 4.2 shows the density functions of the five measurement series.
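Under the stated normality assumption, the estimates from Eqs. (5.3.1) to (5.3.3) can be computed directly; a minimal NumPy sketch:

```python
import numpy as np

def estimate_normal(values):
    """Best estimates for mu and sigma, Eqs. (5.3.1) and (5.3.2)."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    x_bar = values.sum() / n                             # arithmetic mean
    s = np.sqrt(((values - x_bar) ** 2).sum() / (n - 1)) # sample std deviation
    return x_bar, s

def density(x, mu, sigma):
    """Normal density f(x) from Eq. (5.3.3)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
```

Evaluating `density` over a common x-range for each measurement series reproduces curves of the kind shown in Fig. 4.2.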
The functions have been normalized to one, and the x-axes of the graphs share the same range and division to make them visually comparable. The standard deviations of measurement series 1 to 4 are at most 0.024 mm and therefore inconspicuous; the estimated absolute heights of measurement series 1 to 5 are 4.91 mm, 6.25 mm, 7.38 mm, 9.20 mm, and 8.67 mm. The standard deviation of measurement series 5 is the highest, at 0.048 mm. However, all calculated standard deviations are smaller than one tenth of a millimeter, which indicates a high measurement accuracy. The contrast in measurement series 3 and 4 is lower than in series 1, 2, and 5. This could influence the measurement accuracy, because the threshold parameter is fixed at a green-channel pixel value of 170, so fluctuations can occur. The brightness difference between the images from the different measurement series can be clearly seen in Fig. 5.1.
Fig. 5.2 shows that the height, i.e., the number of cells, influences the image brightness. The histograms of measurement series 1, 2, and 5 show that these images are brighter than those of the other measurement series. Fig. 5.1 also shows that the illumination decreases as expected.

### 5.4 Detection of the tube bottom by a multi-stage cascade classifier

To test the tube bottom detection, the tube was rotated in 1-degree steps and an image was acquired at each step. First, the repeatability was verified by calculating the standard deviation; the chi-square test was used to test whether the measured values are normally distributed. Fig. 6.2 shows the detection result (green rectangle) of the cascade classifier.
To test the accuracy, the tube bottom position (the lowest point of the tube) was determined manually in each of the 360 images; the multi-stage model then detected the bottom. The tube bottom was set to be at 940 pixels. Fig. 6.2 shows the histogram of the measurements.
The measurements show a systematic deviation from the nominal value of 940 pixels. The average deviation is four pixels, which corresponds to 0.093 mm. This systematic deviation can result from training the cascade classifier with positive images that are not properly aligned (the tube bottom not at the lower edge of the positive images). The green rectangle in Fig. 6.2 shows the image section used to train the cascade classifier. If this image section is not aligned with the bottom of the tube, as in this example, the cascade classifier works inaccurately. This issue can be corrected afterwards by an offset.
Again, a normal distribution is assumed. The measurements show a standard deviation of 0.0690 mm, which indicates a high measurement accuracy. Summing the highest standard deviation of the cell pellet boundary detection and that of the tube bottom detection gives a total standard deviation of 0.12 mm. From the detected cell pellet boundary and the tube bottom position, the height of the cell pellet can be determined.

### 5.5 Move to detected height

To check the detected height, a specific method was developed for a Biomek 3000 liquid handler (Beckman Coulter, California, United States). First, the method described above detects the height of a cell pellet. Next, the Biomek 3000 pipetting head moves above the tube and is then moved to the calculated height of the cell pellet twice; the double movement to the detected point cannot be suppressed, as the framework performs it automatically. After the movement, the pipette tip stops at the boundary, and the developed method creates a new panoramic image. The result is shown in Fig. 7, in which the height can be measured manually.
Fig. 7 shows a panoramic image of a cell pellet composed of 90 single images. The detected height is marked in green and is 5.03 mm; this value is passed to the method that moves the Biomek 3000 to this point. The pipette that has moved to the calculated position is also visible. The pipette tip is 28 pixels ($\approx 0.652$ mm) lower than the cell pellet boundary. Possible causes are a deviation of the edge detection due to changed contrast ratios, the standard deviation of the edge detection, the standard deviation of the tube bottom detection, a positioning error of the Biomek 3000, or a change of the topology caused by the double immersion.
It is highly probable that the first immersion of the pipette influences the shape of the cell pellet boundary and thus the height. When the edge height of a tube is measured repeatedly, the detected height decreases by 0.1 mm compared with the previous measurement. It is therefore not possible to make a statistical statement about the repeatability, because the measured values would not be independent of each other: each measurement influences the subsequent one through the immersion of the pipette.

## 6. Conclusion

The measurement results indicate that the methods developed in this work are robust and highly repeatable. The variations of the measurement results are in the range of less than a tenth of a millimeter, below the positioning accuracy of common robots. The feature to create a panoramic image is a powerful tool and can be used in other projects as well. The detection of the phase boundary with the Sobel operator is solid, and the good results are further improved by the preprocessing steps. Using the multi-stage cascade classifier created with functions from the OpenCV library, a strong detector that reliably identifies the searched object in images can be created in a very short time, provided a suitable number of positive and negative example images is available for training. In further developments, more information could be extracted from the cell pellet: a depth camera could be used to create a three-dimensional model of the cell pellet, providing topology information from the middle of the pellet and allowing the pellet volume to be calculated more exactly.
The system and the application example show that computer vision is a powerful tool that is rightly gaining popularity. The ability to let computers see through cameras gives users a new class of measurement instruments that can capture significantly more than one measurand. Combining computer vision with artificial intelligence holds even greater potential.

## Declaration of Competing Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

## Acknowledgments

The authors wish to thank Dr.-Ing. Steffen Junginger, Heiko Engelhardt as well as B.Sc. Anne Reichelt for valuable input, discussions and technical support.

## References

1. Marr B. 7 amazing examples of computer and machine vision in practice. Forbes, 8 April 2019. Available from: https://www.forbes.com/sites/bernardmarr/2019/04/08/7-amazing-examples-of-computer-and-machine-vision-in-practice/?sh=16c6701a1018 [accessed August 01, 2022].
2. Schneider K. Biometrische Verfahren werden beim Banking beliebter. Handelsblatt. Available from: https://www.handelsblatt.com/technik/sicherheit-im-netz/fingerabdruck-statt-passwort-biometrische-verfahren-werden-beim-banking-beliebter/26093358.html [accessed August 01, 2022].
3. Ivanov M, Ulanov A, Cherkasov N. Visual control of weld defects using computer vision system on FANUC robot. In: 2022 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). IEEE; 2022. p. 859-863.
4. Seemungal TA, Harper-Owen R, Bhowmik A, Jeffries DJ, Wedzicha JA. Detection of rhinovirus in induced sputum at exacerbation of chronic obstructive pulmonary disease. Eur Respir J 2000;16:677-683. https://doi.org/10.1034/j.1399-3003.2000.16d19.x
5. Smith SD, Wheeler MA, Plescia J, Colberg JW, Weiss RM, Altieri DC. Urine detection of survivin and diagnosis of bladder cancer. JAMA 2001;285:324-328. https://doi.org/10.1001/jama.285.3.324
6. Wilbur DC, Meyer MG, Presley C, Aye RW, Zarogoulidis P, Johnson DW, et al. Automated 3-dimensional morphologic analysis of sputum specimens for lung cancer detection: performance characteristics support use in lung cancer screening. Cancer Cytopathol 2015;123:548-556. https://doi.org/10.1002/cncy.21565
7. Eppel S, Kachman T. Computer vision-based recognition of liquid surfaces and phase boundaries in transparent vessels, with emphasis on chemistry applications. arXiv:1404.7174.
8. Eppel S. Tracing the boundaries of materials in transparent vessels using computer vision. 3rd ed.; 2015.
9. Chakravarthy S, Sharma R, Kasturi R. Noncontact level sensing technique using computer vision. IEEE Trans Instrum Meas 2002;51:353-361. https://doi.org/10.1109/19.997837
10. Yazdi L, Prabuwono AS, Golkar E. Feature extraction algorithm for fill level and cap inspection in bottling machine. In: International Conference on Pattern Analysis and Intelligence Robotics; 2011. p. 47-53. https://doi.org/10.1109/ICPAIR.2011.5976910
11. Modi CK, Chauhan JD. Comparison of optimal edge detection algorithms for liquid level inspection in bottles. In: 2009 Second International Conference on Emerging Trends in Engineering & Technology; 2009. p. 447-452. https://doi.org/10.1109/ICETET.2009.55
12. Ley SV, Ingham RJ, O'Brien M, Browne DL. Camera-enabled techniques for organic synthesis. Beilstein J Org Chem 2013;9:1051-1072. https://doi.org/10.3762/bjoc.9.118
13. Wang T-H, Lu M-C, Hsu C-C, Chen C-C, Tan J-D. Liquid-level measurement using a single digital camera. Measurement 2009;42:604-610. https://doi.org/10.1016/j.measurement.2008.10.006
14. Feng F, Wang L, Zhang Q, Lin X, Tan M. Liquid surface location of milk bottle based on digital image processing. Commun Comput Inf Sci 2012;346:232-239. https://doi.org/10.1007/978-3-642-35286-7_30
15. Liu X, Bamberg S, Bamberg E. Increasing the accuracy of level-based volume detection of medical liquids in test tubes by including the optical effect of the meniscus. Measurement 2011;44:750-761. https://doi.org/10.1016/j.measurement.2011.01.001
16. Yuan W, Li D. Measurement of liquid interface based on vision. In: Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No.04EX788); 2004. p. 3709-3713. https://doi.org/10.1109/WCICA.2004.1343291
17. McNeal JD, Liu Y. Sample level detection system. US Patent 6,770,883 B2; 2002.
18. Bryant ST, Barber CP. System and method for detection of liquid level in a vessel. US Patent 7,982,201 B2; 2009.
19. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019;1:271-297. https://doi.org/10.1016/S2589-7500(19)30123-2
20. Künstliche Intelligenz findet Krebszellen. NDR. Available from: https://www.ndr.de/nachrichten/hamburg/Kuenstliche-Intelligenz-findet-Krebszellen,mindpeak100.html [accessed August 03, 2020].
21. Wikipedia. Kapillarität. Available from: https://de.wikipedia.org/w/index.php?title=Kapillarität&oldid=213701740 [accessed October 27, 2021].
22. Zepel T, Lai V, Yunker LPE, Hein JE. Automated liquid-level monitoring and control using computer vision; 2020.
23. Devare M. Parallel image processing for liquid level detection. In: Iyer B, Crick T, Peng S-L, editors. Applied Computational Technologies: Proceedings of ICCET 2022. 1st ed. Singapore: Springer Nature Singapore; 2022. p. 372-382.
24. Preston Z, Green R. The levelshred method: a solution to fluid level detection in partially-obstructed containers. In: 2021 36th International Conference on Image and Vision Computing New Zealand (IVCNZ). IEEE; 2021. p. 1-6.
25. Eppel S. Tracing liquid level and material boundaries in transparent vessels using the graph cut computer vision approach; 2016.
26. Hillebrand M. Bildsegmentierung mit Graph Cut; 2018.
27. Eppel S, Xu H, Bismuth M, Aspuru-Guzik A. Computer vision for recognition of materials and vessels in chemistry lab settings and the Vector-LabPics data set. ACS Cent Sci 2020;6:1743-1752. https://doi.org/10.1021/acscentsci.0c00460
28. Eppel S, Xu H, Aspuru-Guzik A. Computer vision for liquid samples in hospitals and medical labs using hierarchical image segmentation and relations prediction; 2021.
29. Karnik A. Entwicklung eines Geräts zur optischen Phasengrenzerkennung [Master's thesis]. Rostock, Germany: Universität Rostock; 2020.
30. Ritterbusch K. A framework for optical inspection applications in life-science automation [Dissertation]. Rostock, Germany: Universität Rostock; 2012.
31. Mink C. Zusammenhänge von Struktur und Funktion unterschiedlicher membranaktiver Peptide [Dissertation]. Karlsruher Institut für Technologie. Berlin: Logos-Verlag; 2009.
32. Papula L. Mathematik für Ingenieure und Naturwissenschaftler: Band 3.