The core of our AI engine is a highly optimized implementation of AI cores with advanced learning capabilities, delivering 10x lower area and power consumption than competitors’ hardware, together with sensor-level inference engines and edge-accelerator capabilities. Our smart sensors include AI implemented by our patented algorithms and architectures, enabling the world’s lowest-power smart microphone with advanced event-detection capabilities. Our deep learning accelerator technology provides significant power and area savings while supporting high reconfigurability through our novel data architecture and efficient I/O bandwidth.
Smart Sensor AI
Chaoyang’s approach to smart sensing centers on the design and development of a multifunctional on-sensor AI core (MOSAIC) technology that can be built upon various sensing mechanisms and applied across a myriad of verticals, from smartphones and wearables to the countless sensors used in IoT and IIoT applications. Our key focus is the creation of very small, low-energy, reconfigurable Neural Network (NN) platforms that provide accurate decisions through customization to the application at hand, while using an order of magnitude less power and chip area.
Sensor Fusion at the Edge technology is another unique value proposition: it can augment various sensory mechanisms for powerful, fast event detection in smart homes, buildings, and industrial IoT.
MOSAIC technology can provide countless benefits including sensor self-test, low energy, low latency, high accuracy, less network traffic, and most importantly a better user experience. Our unique testing technology ensures that the sensor data is reliable and can be used by the AI engine for accurate inference.
Benefits of On-Sensor AI Platform
Focusing on the two main challenges of edge computing, namely the approaching “End of Moore’s Law” and the “Memory Wall” problem inherent in the classic von Neumann computation model, we have developed a breakthrough technology designed around the needs of today’s edge computing. Unlike other non-von Neumann technologies, however, our design is based on well-established CMOS technology, offering high yield and a high performance-to-cost ratio.
Our technology is built upon an architectural breakthrough in NN inference engine design called LIFE-DLA® (Liquid Inference Fabric Engine-Deep Learning Accelerator). LIFE-DLA® is a disruptive approach to digital data representation, data movement, data augmentation, RAM requirements, and I/O bandwidth. The “liquidity” of data refers to the ability to change the data representation and word size on the fly, through software, with no hardware cost. These innovations and highly optimized reconfigurable features, together with its massively parallel computation core, make LIFE-DLA® well suited to applications ranging from battery-operated edge devices to high-throughput edge servers.
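As a rough illustration of software-selectable word size (LIFE-DLA®’s internal data format is proprietary; the helper names and the symmetric fixed-point scheme below are purely hypothetical), the same tensor can be re-encoded at different bit widths at run time, trading precision for storage and bandwidth:

```python
def quantize(values, bits):
    """Encode floats as signed fixed-point integers of a chosen word size.

    Illustrative sketch only: LIFE-DLA's actual representation is
    proprietary. Here we use simple symmetric scaling to the largest
    magnitude in the tensor.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(v) for v in values) / qmax  # one scale factor per tensor
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    """Recover approximate float values from the fixed-point encoding."""
    return [x * scale for x in q]

weights = [0.9, -0.31, 0.07, 0.55]

# The same tensor re-encoded at two word sizes, chosen in software:
q8, s8 = quantize(weights, bits=8)
q4, s4 = quantize(weights, bits=4)

err8 = max(abs(a - b) for a, b in zip(dequantize(q8, s8), weights))
err4 = max(abs(a - b) for a, b in zip(dequantize(q4, s4), weights))
print(f"8-bit max error: {err8:.4f}")  # narrower words cost accuracy,
print(f"4-bit max error: {err4:.4f}")  # but halve storage and I/O traffic
```

The 4-bit encoding uses half the storage and I/O bandwidth of the 8-bit one at the cost of a larger reconstruction error, which is the kind of per-layer trade-off a reconfigurable word size lets software make.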
Compared to a standard digital implementation, this vastly optimized DLA core can save up to 20x in area and 10x in power, without any performance penalty. The LIFE-DLA® software cycle is designed to be compatible with popular libraries such as TensorFlow and employs highly optimized user-mode and kernel-mode drivers.
The LIFE-DLA® core is highly scalable to the larger processing loads required by cloud-based servers and server farms. One of the key values of this technology is its very low power consumption compared to competitors’ DLAs, which can yield savings in electricity, HVAC, and electrical-facility costs for server farms.