Audio


AFE for MEMS Microphone

  • Our Analog Front End (AFE) delivers best-in-class performance for ultra-low-power MEMS microphones by pairing an extremely low-power preamplifier with a very high-performance sigma-delta ADC (a simplified sketch of the conversion principle follows this list).
  • With parametric performance such as 70 dB SNR and an exceptional 140 dB SPL, the AFE provides the highest accuracy and lowest power consumption on the market.
  • The AFE also includes a first-of-its-kind proprietary Comprehensive Electromechanical Self-Test, which significantly reduces production test cost when the AFE is embedded in the microphone module.
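
The conversion path can be pictured with a toy model. The Python snippet below is an illustrative sketch only, not Chaoyang's implementation; the sample rate, oversampling ratio, and test tone are assumptions made for the example. It simulates a first-order 1-bit sigma-delta modulator of the general kind such an AFE uses, followed by a crude decimation back to a PCM estimate.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator: integrate the difference between
    the input and the fed-back 1-bit output, then quantize to +/-1."""
    bits = np.zeros_like(x)
    integrator, feedback = 0.0, 0.0
    for n, sample in enumerate(x):
        integrator += sample - feedback
        feedback = 1.0 if integrator >= 0.0 else -1.0
        bits[n] = feedback
    return bits

# Assumed numbers for illustration: 48 kHz audio rate, 64x oversampling,
# and a -6 dBFS 1 kHz test tone (not taken from the product specification).
osr, fs_audio = 64, 48_000
fs = osr * fs_audio
t = np.arange(fs // 100) / fs                  # 10 ms of signal
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)

bitstream = sigma_delta_1bit(tone)
# Crude decimation: average blocks of `osr` bits to recover a PCM estimate;
# a real AFE would use a proper CIC/FIR decimation filter instead.
pcm = bitstream[: len(bitstream) // osr * osr].reshape(-1, osr).mean(axis=1)
```

In a real design, the modulator order, oversampling ratio, and decimation filter together determine the achievable SNR.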

Hardware-Implemented Smart Features for AFE

  • Chaoyang’s unique, state-of-the-art AI is integrated into our AFE, creating a first-to-market solution for mobile phone devices. Our Smart AFE offers safety features such as Acoustic Event Detection (AED) and an always-on mode for voice-activated smart products (a simplified sketch follows this list). Unlike the typical cloud-based solutions currently on the market, Chaoyang’s solution keeps all data local, which not only addresses the privacy concerns of cloud-based devices such as Amazon Alexa and Google Home but is also extremely fast-acting. A demo is available upon request.
  • Our forward-looking roadmap builds on this product with unique Full Speech Recognition, as well as coupling our audio detection with complex Image Recognition capabilities.
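
As a rough illustration of the always-on, keep-data-local idea behind AED, the sketch below (hypothetical only, not Chaoyang's algorithm; the frame sizes, threshold, and function names are all assumptions) computes a cheap spectral-energy feature per frame and flags frames that rise sharply above the running background level.

```python
import numpy as np

def frame_energy(frame, n_fft=512):
    """Log spectral energy of one audio frame (a stand-in for a real
    log-mel front end)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
    return np.log(np.sum(spectrum ** 2) + 1e-9)

def detect_events(audio, sr=16_000, frame_ms=32, hop_ms=16, threshold=2.0):
    """Flag frames whose energy rises sharply above the background level --
    a stand-in for the classifier an always-on AED block would run."""
    frame, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    energies = np.array([frame_energy(audio[i:i + frame])
                         for i in range(0, len(audio) - frame, hop)])
    background = np.median(energies)
    return np.where(energies - background > threshold)[0] * hop / sr  # onsets (s)

# Example: 1 s of low-level noise with a loud 1 kHz burst at 0.5 s
sr = 16_000
audio = 0.01 * np.random.randn(sr)
audio[sr // 2: sr // 2 + 800] += 0.5 * np.sin(2 * np.pi * 1_000 * np.arange(800) / sr)
print(detect_events(audio, sr))
```

Because both the feature extraction and the decision run on-device, no audio ever needs to leave the product, which is the privacy and latency argument made above.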

Smart Audio Abilities

With the exponential growth in processing power, edge-based AI algorithms, and specifically neural network architectures, smart audio is gaining huge momentum across many use cases, ranging from speech analysis to audio tagging and event detection. Until recently, most smart audio applications focused on speech processing, including ASR (automatic speech recognition), based on the assumption that human speech is the most informative component of acoustic signals. However, as speech and speech-to-text products become more and more advanced (e.g. real-time translators, command-based control systems, security systems, scene detection, etc.), augmenting speech data with non-speech audio becomes increasingly necessary, because non-speech signals can expose many aspects of a scene or an event that would otherwise be ignored. In addition, non-speech analysis can provide valuable cues for enhancing many speech and scene analysis applications such as ASR, ISE (Informed Speech Enhancement), and Closed Caption Systems, to name a few.

Recently, Chaoyang Semiconductor has been focusing on a series of revolutionary products based on this concept, i.e. ASR combined with AED (acoustic event detection). In particular, access to richer acoustic event classifiers has attracted enormous interest from mobile, wearable, and audio component manufacturers.
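
To make the ASR-plus-AED idea concrete, the sketch below (illustrative only; the segment format, timestamps, and labels are invented for the example) merges a speech transcript with non-speech event tags into one time-ordered caption track of the kind a closed-caption system could consume.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    start: float   # seconds
    end: float
    label: str     # a transcribed phrase or an acoustic event tag

def merge_streams(asr: List[Segment], events: List[Segment]) -> List[Segment]:
    """Interleave ASR phrases and non-speech event tags by start time to
    produce a single enriched caption track."""
    return sorted(asr + events, key=lambda s: s.start)

asr = [Segment(0.0, 1.2, "please open the door"),
       Segment(4.0, 5.1, "did you hear that")]
events = [Segment(2.3, 2.8, "[glass breaking]"),
          Segment(3.1, 3.4, "[dog barking]")]

for seg in merge_streams(asr, events):
    print(f"{seg.start:5.1f}s  {seg.label}")
```

Enriching captions with tags such as [glass breaking] captures exactly the kind of scene information that speech-only analysis would miss.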