An Automated and Modular Framework for Synthetic and Ex-vivo Larynx Experiments


Objective: Understanding the fundamental mechanisms of human voice production requires accurate experimental data describing the interaction between airflow, acoustics, and vocal fold vibration. The so-called fluid-structure-acoustic interaction (FSAI), which occurs in the phonation process, can be analysed through studies on synthetic and ex-vivo larynx models. However, a consistent challenge in conducting such experiments is the integration and coordination of experimental components (e.g., synchronizing multiple measurement modalities and controlling hardware). Therefore, building upon previous work, we present a refined and fully automated framework that enhances both precision and repeatability.

Methods/Design: The framework integrates sensor control, data acquisition, and motorized adjustment of parameters such as airflow, adduction, and elongation. This allows for systematic exploration of different phonatory conditions whilst reducing operator-dependent variability through a standardized measurement procedure. Furthermore, the framework provides computer-controlled synchronization of various measurement channels, such as high-speed imaging, particle image velocimetry, sub- and supraglottal pressures, and acoustic data. The resulting dataset ensures complete temporal alignment of all FSAI-relevant parameters within individual phonatory cycles. Written in Python, the framework is intended not only as an in-house control platform, but also as an open-source project to promote reproducibility across laboratories. Its modular design allows for easy configuration and adaptation to diverse experimental setups for synthetic, ex-vivo hemi-, and ex-vivo full-larynx studies. We will demonstrate the framework's application and performance across all three of these larynx models.
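To illustrate the modular design and common-trigger synchronization described above, the following minimal Python sketch shows how an experiment might bundle measurement channels with actuator set-points under one software trigger. All class, field, and channel names here are hypothetical placeholders, not the framework's actual API.

```python
from dataclasses import dataclass, field
import time


@dataclass
class Channel:
    """One measurement channel (hypothetical abstraction, e.g. camera or pressure sensor)."""
    name: str
    sample_rate_hz: float

    def arm(self) -> None:
        # In a real setup this would configure the hardware for acquisition.
        print(f"[{self.name}] armed at {self.sample_rate_hz} Hz")


@dataclass
class Experiment:
    """Bundles channels and actuator set-points for one phonatory condition."""
    channels: list[Channel] = field(default_factory=list)
    flow_lpm: float = 0.0        # airflow set-point (illustrative unit: L/min)
    adduction_mm: float = 0.0    # vocal fold adduction
    elongation_mm: float = 0.0   # vocal fold elongation

    def run(self, duration_s: float) -> dict[str, float]:
        # Arm all channels, then issue one common software trigger so that
        # every data stream shares the same time reference.
        for ch in self.channels:
            ch.arm()
        t0 = time.perf_counter()
        print(f"trigger issued; recording {duration_s} s")
        time.sleep(duration_s)
        return {"t0": t0, "duration_s": duration_s}


if __name__ == "__main__":
    exp = Experiment(
        channels=[
            Channel("high_speed_camera", 4000.0),
            Channel("subglottal_pressure", 44_100.0),
            Channel("acoustic_microphone", 44_100.0),
        ],
        flow_lpm=30.0,
        adduction_mm=1.5,
    )
    exp.run(duration_s=0.25)
```

Adding a further measurement modality in this scheme amounts to appending another Channel instance, which reflects the configurability the abstract attributes to the framework's modular architecture.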

Results and Conclusions: Experimental results from both synthetic and ex-vivo larynx models will be presented and discussed. Overall, the results show stable and reliable synchronization across all data channels. By automating and standardizing the measurements, the framework improves experimental accuracy and allows detailed quantitative assessment of all FSAI-relevant parameters. The resulting comprehensive datasets provide a basis for directly studying voice production mechanisms and for the validation and refinement of computational models.

Helena Latečki, Stefan Kniesburges, Boğaç Tur, Ruiqing Wang, Michael Döllinger