“Human respiration rate measurement with high-speed digital fringe projection technique” (2023)

A. L. Lorenz and S. Zhang, “Human respiration rate measurement with high-speed digital fringe projection technique,” Sensors 23(21), 9000 (2023)

Abstract

This paper proposes a non-contact continuous respiration monitoring method based on Fringe Projection Profilometry (FPP). The method aims to overcome the limitations of traditional intrusive techniques by providing continuous monitoring without interfering with normal breathing. The FPP sensor captures three-dimensional (3D) respiratory motion from the chest wall and abdomen, and analysis algorithms extract respiratory parameters. The system achieved a high signal-to-noise ratio (SNR) of 37 dB with an ideal sinusoidal respiration signal. Experimental results demonstrated a mean correlation of 0.95 and a mean root-mean-square error (RMSE) of 0.11 breaths per minute (bpm) when compared with a reference signal obtained from a spirometer.
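A common way to turn a 3D respiratory motion signal into a rate in bpm is to track a summary depth value over time and locate the dominant spectral peak. The sketch below illustrates that general idea only; the function name, sampling rate, and frequency band are assumptions for demonstration, not the paper's actual pipeline.

```python
import numpy as np

def respiration_rate_bpm(mean_depth, fs):
    """Estimate a respiration rate (bpm) from a time series of mean
    chest-wall depth values sampled at fs Hz, via the dominant FFT peak.
    Illustrative sketch; not the algorithm from the paper."""
    signal = mean_depth - np.mean(mean_depth)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Restrict to a plausible breathing band (0.1-0.7 Hz, i.e. 6-42 bpm)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic check: a 15 bpm (0.25 Hz) sinusoid sampled at 10 Hz for 60 s
t = np.arange(0, 60, 0.1)
depth = 5.0 + 2.0 * np.sin(2 * np.pi * 0.25 * t)
rate = respiration_rate_bpm(depth, fs=10.0)
```

With the synthetic sinusoid above, the recovered rate matches the 15 bpm ground truth, mirroring the ideal-signal validation described in the abstract.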

“Calibration method for a multi-focus microscopic 3D imaging system” (2023)

L. Chen, X. Wang, and S. Zhang, “Calibration method for a multi-focus microscopic 3D imaging system,” Optics Letters 48(16), 4348-4351 (2023)

Abstract

This Letter presents a novel, to the best of our knowledge, method to calibrate multi-focus microscopic structured-light three-dimensional (3D) imaging systems with an electrically adjustable camera focal length. We first leverage the conventional method to calibrate the system with a reference focal length f0. Then we calibrate the system with other discrete focal lengths fi by determining virtual features on a reconstructed white plane using f0. Finally, we fit a polynomial function model to the discrete calibration results for fi. Experimental results demonstrate that our proposed method can calibrate the system consistently and accurately.
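The final step above, fitting a smooth polynomial model through calibration results obtained at discrete focal settings, can be sketched as follows. The focal settings and parameter values below are invented for illustration; the paper's calibrated quantities and polynomial order may differ.

```python
import numpy as np

# Hypothetical calibration results at discrete focal settings f_i:
# one calibrated parameter (e.g., an effective focal length in pixels)
# per setting. Values are made up for demonstration.
f_i = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
focal_px = 10.0 * f_i**2 + 100.0 * f_i + 500.0   # synthetic "measurements"

# Fit a low-order polynomial model through the discrete results
coeffs = np.polyfit(f_i, focal_px, deg=2)
model = np.poly1d(coeffs)

# The fitted model now predicts the parameter at intermediate settings
predicted = model(13.0)
```

Because the fitted model is continuous in the focal setting, the system can be used at focal lengths between the discretely calibrated ones.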

"Pixel-wise rational model for a structured light system" (2023)

Raúl Vargas, Lenny A. Romero, Song Zhang, and Andres G. Marrugo, “Pixel-wise rational model for a structured light system,” Optics Letters 48(10), 2712-2715 (2023)

Abstract

This Letter presents a novel structured light system model that effectively accounts for local lens distortion through pixel-wise rational functions. We leverage the stereo method for initial calibration and then estimate the rational model for each pixel. Our proposed model can achieve high measurement accuracy both within and outside the calibration volume, demonstrating its robustness and accuracy.
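Evaluating a per-pixel rational model typically means mapping the unwrapped phase to depth with a ratio of low-order polynomials whose coefficients vary pixel by pixel. The sketch below shows one such evaluation; the polynomial orders, function name, and coefficient values are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def rational_depth(phi, num_coeffs, den_coeffs):
    """Element-wise z = (a0 + a1*phi + a2*phi^2) / (1 + b1*phi + b2*phi^2).
    num_coeffs/den_coeffs may be scalars or per-pixel coefficient maps.
    Illustrative form only; the paper's model may differ."""
    a0, a1, a2 = num_coeffs
    b1, b2 = den_coeffs
    return (a0 + a1 * phi + a2 * phi**2) / (1.0 + b1 * phi + b2 * phi**2)

# Tiny 2x2 phase map with constant (hypothetical) coefficients
phi = np.array([[0.0, 1.0], [2.0, 3.0]])
z = rational_depth(phi, (1.0, 2.0, 0.5), (0.1, 0.0))
```

In practice the coefficient arrays would have the same height and width as the phase map, so each pixel carries its own locally calibrated mapping, which is what absorbs local lens distortion.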

"Flexible structured light system calibration method with all digital features" (2023)

S. Zhang, “Flexible structured light system calibration method with all digital features,” Optics Express 31(10), 17076-17086 (2023)

Abstract

We propose an innovative method for single-camera, single-projector structured light system calibration that eliminates the need for calibration targets with physical features. Instead, a digital display, such as a liquid crystal display (LCD) screen, is used to present a digital feature pattern for camera intrinsic calibration, while a flat surface, such as a mirror, is used for projector intrinsic and extrinsic calibration. A secondary camera is required to facilitate the entire process. Because no specially made calibration targets with real physical features are required, our method offers greater flexibility and simplicity in achieving accurate calibration for structured light systems. Experimental results have demonstrated the success of this proposed method.

"Large depth-of-field microscopic structured-light 3D imaging with focus stacking" (2023)

L. Chen and S. Zhang, “Large depth-of-field microscopic structured-light 3D imaging with focus stacking,” Optics and Lasers in Engineering 167, 107623 (2023)

Abstract

State-of-the-art microscopic structured-light (SL) three-dimensional (3D) imaging systems typically use conventional lenses with a fixed focal length, which yields a limited depth of field (DOF). This paper proposes to drastically increase the DOF of microscopic 3D imaging by leveraging the focus stacking technique and developing a novel computational framework. We first capture fringe images at various camera focal lengths using an electrically tunable lens (ETL) and align the recovered phase maps using phase constraints. Then, we extract the focused phase of each pixel using fringe contrast and stitch the results into an all-in-focus phase map via energy minimization. Finally, we reconstruct the 3D shape from the all-in-focus phase map. Experimental results demonstrate that our proposed method can achieve a large DOF of approximately 2 mm and a field of view (FOV) of approximately 4 mm × 3 mm with a pixel size in object space of approximately 2.6 μm. The achieved DOF is approximately 10× that of the system without the proposed method.
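The pixel-selection idea behind focus stacking can be illustrated with a simple winner-take-all rule: for each pixel, keep the phase from whichever focal setting has the highest fringe contrast. The sketch below shows only that rule; the paper additionally regularizes the selection via energy minimization, which is omitted here, and all array names and shapes are assumptions.

```python
import numpy as np

def all_in_focus_phase(phase_stack, contrast_stack):
    """phase_stack, contrast_stack: (num_focus, H, W) arrays of aligned
    phase maps and their fringe-contrast (modulation) maps.
    Returns the per-pixel phase from the best-focused layer
    (winner-take-all; no energy-minimization smoothing)."""
    best = np.argmax(contrast_stack, axis=0)        # (H, W) layer indices
    rows, cols = np.indices(best.shape)
    return phase_stack[best, rows, cols]

# Two focus layers over a 1x2 image: layer 0 is sharp at the left pixel,
# layer 1 at the right pixel (values invented for demonstration)
phases = np.array([[[0.1, 0.2]], [[0.9, 0.8]]])
contrasts = np.array([[[1.0, 0.1]], [[0.2, 2.0]]])
fused = all_in_focus_phase(phases, contrasts)
```

Fringe contrast works as a focus measure here because defocus blurs the projected fringes and lowers their modulation, so the sharpest layer at each pixel is the one with the strongest fringes.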

“Semi-Global Matching Assisted Absolute Phase Unwrapping” (2023)

Y.-H. Liao and S. Zhang, “Semi-Global Matching Assisted Absolute Phase Unwrapping,” Sensors 23(1), 411 (2023); doi: 10.3390/s23010411

Abstract

Measurement speed is a critical factor in reducing motion artifacts for dynamic scene capture. Phase-shifting methods have the advantage of providing high-accuracy, dense 3D point clouds, but the phase unwrapping process limits the measurement speed. This paper presents an absolute phase unwrapping method that uses only three speckle-embedded phase-shifted patterns for high-speed three-dimensional (3D) shape measurement on a single-camera, single-projector structured light system. The proposed method obtains the wrapped phase of the object from the speckle-embedded three-step phase-shifted patterns. Next, it uses the Semi-Global Matching (SGM) algorithm to establish coarse correspondence between the image of the object with the embedded speckle pattern and a pre-obtained image of a flat surface with the same embedded speckle pattern. Then, a computational framework uses the coarse correspondence information to determine the fringe order pixel by pixel. Experimental results demonstrated that the proposed method can achieve high-speed, high-quality 3D measurements of complex scenes.
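The fringe-order step common to this family of methods can be sketched as follows: given the wrapped phase in (-π, π] and a coarse absolute-phase estimate (here derived from the correspondence), the integer fringe order k follows from rounding, and the absolute phase is the wrapped phase plus 2πk. Variable names and the sample values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def unwrap_with_coarse(phi_wrapped, phi_coarse):
    """Recover the absolute phase from a wrapped phase map and a coarse
    (possibly noisy) absolute-phase estimate by rounding to the nearest
    integer fringe order k, then adding 2*pi*k. Illustrative sketch."""
    k = np.round((phi_coarse - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

# Three sample pixels: wrapped phases plus noisy coarse estimates
phi_wrapped = np.array([0.5, -1.0, 2.0])
phi_coarse = np.array([6.9, 11.5, 2.1])
phi_abs = unwrap_with_coarse(phi_wrapped, phi_coarse)
```

Because k is an integer, the coarse estimate only needs to be accurate to within half a fringe period (π in phase) for the rounding to land on the correct order, which is why a coarse SGM correspondence suffices.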