PT2 - AI powered Metrology: from Quality Metrics to Sensor Networks
- Event
- SMSI 2025
2025-05-06 - 2025-05-08, Nürnberg
- Volume
- SMSI 2025
- Chapter
- Plenary Talks
- Author(s)
- Dr. A. Rusconi - USound, Vienna (Austria)
- Pages
- 7 - 8
- DOI
- 10.5162/SMSI2025/PT2
- ISBN
- 978-3-910600-06-5
- Price
- free
Abstract
MEMS speaker technology is today enabling radical innovation in next-generation portable consumer products. This is possible thanks to the availability of piezoelectric thin films on silicon and to transducer designs featuring large force, large displacement, and high linearity. USound technology is based on a MEMS actuator with multiple piezoelectric elements connected to maximize mechanical performance; the MEMS chip is packaged and integrated with an acoustic membrane that provides a larger emitting surface and superior damping. This combination makes it possible to design a low-power, omnidirectional transducer spanning from audio to ultrasound frequencies up to 80 kHz.
The largest consumer application for MEMS speakers is TWS (True Wireless Stereo) earphones, which offer best-in-class sound quality, transparency and hearing-enhancement features, Hi-Res Audio, and ultrasound-enabled functionality. The technology has also reached maturity in specialized markets, such as healthcare with MRI-compatible headphones, and in industrial applications, where it is used in test equipment for high-performance MEMS microphones.
USound technology enables substantial innovation in ultrasound functionality: MEMS transducers, in particular the Conamara series, are key enablers for next-generation high-performance biometric sensors (heart rate, blood pressure) and ToF (Time of Flight) systems for automation, robotics, augmented reality, tracking, and smart environments. Their compact size, broadband ultrasound range, and omnidirectional characteristics ensure effective integration, a high signal-to-noise ratio (SNR), robust handling of multi-path propagation and reflections, and spatial resolution in the millimeter range with high temporal resolution.
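To make the ToF principle concrete, the following is a minimal sketch (not from the talk) of pulse-echo ranging: a transmitted ultrasonic burst is cross-correlated with the received echo, and the peak lag gives the round-trip delay, from which distance follows as c·t/2. All parameters (sample rate, burst frequency, target distance) are assumed illustrative values.

```python
import numpy as np

FS = 192_000          # sample rate in Hz (assumed)
C = 343.0             # speed of sound in air, m/s (~20 °C)

def tof_distance(tx: np.ndarray, rx: np.ndarray, fs: float = FS, c: float = C) -> float:
    """Estimate target distance by cross-correlating the received signal
    with the transmitted burst; the echo travels out and back, hence /2."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(tx) - 1)   # delay in samples
    return (lag / fs) * c / 2.0

# Simulate a 40 kHz burst echoed by a target 1 m away (2 m round trip).
t = np.arange(0, 1e-3, 1 / FS)
pulse = np.sin(2 * np.pi * 40_000 * t)
delay = int(round((2 * 1.0 / C) * FS))                   # round-trip delay in samples
rx = np.concatenate([np.zeros(delay), pulse, np.zeros(100)])
print(round(tof_distance(pulse, rx), 3))                 # ≈ 1.0 (meters)
```

The same correlation-peak estimate underlies millimeter-scale spatial resolution: at 192 kHz sampling, one sample of delay corresponds to roughly 0.9 mm of one-way distance.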
A growing enabler for such systems is embedded Machine Learning (Edge-AI), which can handle advanced signal designs such as ZC multi-band sequences and chirp-based encodings; by processing acoustic data locally and detecting meaningful patterns, Edge-AI minimizes latency, reduces system power consumption, and enables real-time decision-making without constant cloud connectivity.
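As a rough illustration of why chirp-based encodings are attractive for such front-ends, the sketch below generates a linear up-chirp spanning the ultrasonic band and recovers its arrival time with a matched filter (correlation against the known template), even with the echo attenuated and buried in noise. This is a generic matched-filter example under assumed parameters, not USound's actual signal design.

```python
import numpy as np

FS = 192_000  # sample rate in Hz (assumed)

def chirp(f0: float, f1: float, dur: float, fs: float = FS) -> np.ndarray:
    """Linear frequency sweep from f0 to f1 over dur seconds."""
    t = np.arange(0, dur, 1 / fs)
    k = (f1 - f0) / dur                      # sweep rate, Hz/s
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

rng = np.random.default_rng(0)
tx = chirp(20_000, 80_000, 2e-3)             # 2 ms ultrasonic up-chirp
# Echo: delayed by 500 samples, attenuated to 30 %, plus additive noise.
rx = np.concatenate([np.zeros(500), 0.3 * tx])
rx = rx + 0.05 * rng.standard_normal(rx.size)
corr = np.correlate(rx, tx, mode="valid")    # matched filter against template
print(int(np.argmax(np.abs(corr))))          # recovered delay, ≈ 500 samples
```

The broadband sweep compresses into a sharp correlation peak, which is what gives chirp encodings their robustness to noise and multi-path; an Edge-AI model can then operate on the correlation profile rather than raw samples.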
Together, advanced signal processing, broadband MEMS transducers, and Edge-AI establish a scalable, energy-efficient framework for next-generation automation solutions across a wide range of mobile and autonomous platforms.