Building the solutions of tomorrow – First Low-Power AI-Inference Accelerator Vision Processing Unit from Think Silicon debuted last week at Embedded World 2018

Think Silicon® demonstrated a prototype of NEMA®|xNN, the world’s first low-power ‘Inference Accelerator’ Vision Processing Unit for artificial-intelligence workloads based on convolutional neural networks, last week at Embedded World 2018 in Nuremberg, Germany. Supporting demonstrations also showcased ultra-low-power 3D GPU and display-processing features, along with graphics software analysis and development tools.

The world-premiere prototype demonstration of NEMA®|xNN unveiled a power-efficient inference accelerator that solves computer-vision tasks in edge-computing applications using optimized convolutional neural networks. The architecture scales from single-core to multi-core and leverages patented real-time compression algorithms to move data efficiently between on-chip and off-chip memory, while providing 8-bit MAC operations, approximate computation, data-reuse optimizations, and memory-latency hiding.
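To make the 8-bit MAC arithmetic concrete, the sketch below shows how one output element of a 2D convolution can be computed with 8-bit multiply-accumulate operations, with partial sums widened to 32 bits. This is only an illustrative example of the general technique; the function name, data layout, and types are assumptions and do not describe NEMA®|xNN's actual hardware or API.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: one output element of a 2D convolution computed with
 * 8-bit multiply-accumulate (MAC) operations, the kind of arithmetic an
 * 8-bit inference accelerator performs in hardware. Inputs and weights are
 * signed 8-bit; partial sums accumulate in 32 bits to avoid overflow.
 * Names and layout are assumptions, not NEMA|xNN's implementation. */
int32_t conv2d_output_element(const int8_t *input,  /* H x W patch, row-major */
                              const int8_t *kernel, /* K x K weights, row-major */
                              size_t width,         /* W: input row stride */
                              size_t k)             /* K: kernel size */
{
    int32_t acc = 0;
    for (size_t ky = 0; ky < k; ++ky) {
        for (size_t kx = 0; kx < k; ++kx) {
            /* 8-bit x 8-bit multiply, widened and accumulated in 32 bits */
            acc += (int32_t)input[ky * width + kx] * (int32_t)kernel[ky * k + kx];
        }
    }
    return acc; /* typically rescaled/requantized back to 8 bits downstream */
}
```

In a dedicated accelerator, many such MACs run in parallel and the data-reuse and compression features mentioned above exist to keep those units fed without excessive off-chip memory traffic.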

Think Silicon® also exhibited NEMA®|t – the industry’s first ultra-low-power 3D GPU supporting open-standard graphics APIs and vector graphics for system-on-chip (SoC) solutions. Additionally, Think Silicon® showcased supporting software tools, including NEMA®|Power-Model and NEMA®|Profiler (both developed as part of the LPGPU2 project) and NEMA®|SHADER-edit, which assist in the analysis and development of graphics applications and simplify the creation process.

To view the entire press release, please follow this link.
