Neural Network Processor IP Series for AI Vision and AI Voice

VeriSilicon’s Vivante VIP9000 processor family offers programmable, scalable, and extendable solutions for markets that demand real-time, low-power AI devices. The VIP9000 Series’ patented Neural Network engine and Tensor Processing Fabric deliver superb neural network inference performance with industry-leading power efficiency (TOPS/W) and area efficiency (TOPS/mm²). The VIP9000’s scalable architecture, ranging from 0.5 TOPS to 100 TOPS, enables AI capability for a wide range of applications: from wearables, IoT devices, IP cameras, surveillance cameras, smart home appliances, mobile phones, and laptops to automotive (ADAS, autonomous driving) and edge servers. In addition to neural network acceleration, the VIP9000 Series is equipped with Parallel Processing Units (PPUs), which provide full programmability along with conformance to OpenCL 3.0 and OpenVX 1.2.

VIP9000 Series IP supports all popular deep learning frameworks (TensorFlow, TensorFlow Lite, PyTorch, Caffe, DarkNet, ONNX, Keras, etc.) and natively accelerates neural network models through optimization techniques such as quantization, pruning, and model compression. AI applications can be easily ported to VIP9000 platforms through offline conversion with Vivante’s ACUITY™ Tools SDK, or through run-time interpretation with Android NN, NNAPI Delegate, ARMNN, or ONNX Runtime.
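The quantization mentioned above can be illustrated with a minimal, vendor-neutral sketch: affine (asymmetric) INT8 quantization in plain Python. The exact scheme and calibration method ACUITY uses are not shown here; this only demonstrates the general technique of mapping floats to 8-bit integers and back.

```python
# Hedged sketch of affine INT8 quantization (vendor-neutral illustration;
# not ACUITY's actual algorithm or parameters).

def quantize(values, num_bits=8):
    """Map floats to unsigned ints with an affine scale/zero-point."""
    lo, hi = min(values), max(values)
    qmax = (1 << num_bits) - 1              # 255 for INT8
    scale = (hi - lo) / qmax or 1.0         # guard against constant tensors
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from quantized values."""
    return [(x - zero_point) * scale for x in q]

weights = [-1.2, -0.3, 0.0, 0.7, 1.5]       # toy weight tensor
q, s, z = quantize(weights)
restored = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The round-trip error stays within half a quantization step, which is why 8-bit inference can preserve accuracy for well-conditioned layers.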

[Figure: VIP9000 Architecture (VIP9000_Architecture.jpg)]

Programmable Engines (PPU)
128-bit vector processing unit (shader + ext)
OpenCL 3.0 shader instruction set
Enhanced vision instruction set (EVIS)
INT 8/16/32b, Float 16/32b
Tensor Processing Fabric
Non-convolution layers
Multi-lane processing for data shuffling, normalization, pooling/unpooling, LUT, etc.
Network pruning support, zero skipping, compression
On-chip SRAM for DDR BW saving
Accepts INT 8/16b and Float16 (Float16 internal)
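Two of the non-convolution operations listed above, pooling and LUT processing, can be sketched in plain Python purely to show what the fabric computes; the hardware's multi-lane implementation is of course not reproduced here, and the zero-point convention in the LUT example is an assumption for illustration.

```python
# Hedged sketch of two "tensor fabric" style operations named above:
# 2x2 max pooling and a lookup-table (LUT) activation. Pure Python,
# vendor-neutral; not the hardware's actual data path.

def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 on a 2-D list of floats."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [max(feature_map[i][j], feature_map[i][j + 1],
             feature_map[i + 1][j], feature_map[i + 1][j + 1])
         for j in range(0, w - 1, 2)]
        for i in range(0, h - 1, 2)
    ]

def lut_relu(q_values, num_bits=8, zero_point=128):
    """Apply ReLU to quantized values via a precomputed 256-entry table
    (zero_point=128 is an assumed quantized zero, for illustration)."""
    table = [max(zero_point, x) for x in range(1 << num_bits)]
    return [table[x] for x in q_values]

fmap = [[1.0, 2.0, 5.0, 0.0],
        [3.0, 4.0, 1.0, 2.0],
        [0.0, 1.0, 9.0, 8.0],
        [2.0, 0.0, 7.0, 6.0]]
pooled = max_pool_2x2(fmap)
```

A LUT makes any pointwise activation a single table read per element, which is why fixed-function fabrics favor it over evaluating the function directly.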
Unified Programming Model
OpenCL, OpenVX, OpenVX-NN Extensions
Parallel processing between PPU and NN HW accelerators with priority configuration
Supports popular vision and deep learning frameworks: OpenCV, Caffe, TensorFlow, TensorFlow Lite, ONNX, PyTorch, Darknet, Keras
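The priority-configured dispatch between the PPU and the NN accelerator described above can be sketched as a simple priority-queue scheduler. This is illustrative scheduling logic only; the task names and the numeric-priority convention are assumptions, not the driver's actual API.

```python
# Hedged sketch of priority-configured dispatch across two engines
# (illustrative only; not VeriSilicon's driver interface).
import heapq

def dispatch(tasks):
    """tasks: (priority, engine, name) tuples; lower number = higher
    priority. Returns the order in which each engine receives its work."""
    heap = list(tasks)
    heapq.heapify(heap)
    order = {"PPU": [], "NN": []}
    while heap:
        priority, engine, name = heapq.heappop(heap)
        order[engine].append(name)
    return order

order = dispatch([
    (1, "NN", "conv_layers"),       # NN engine runs convolutions
    (2, "PPU", "custom_postproc"),  # PPU runs an OpenCL post-process
    (0, "NN", "nn_preprocessing"),
    (1, "PPU", "argmax"),
])
```

In a real deployment the two engines drain their queues concurrently; the priority configuration only decides what each engine picks up next.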
SW & Tools
ACUITY Tools: End-to-end Neural Network development tools
Eclipse-based IDE for coding, debugging, and profiling
NNRT: Runtime framework supporting Android NN, NNAPI Delegate, ONNX Runtime, and ARMNN
Scalability
Number of PPU and NN cores can be configured independently
Same OpenVX/OpenCL code runs on all processor variants; scalable performance
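The scalability claim can be made concrete with a back-of-envelope estimate: at a fixed per-inference operation count, peak throughput scales linearly with the configured TOPS. The workload figure (~8 GOPs per inference, roughly a ResNet-50-class model) and the idealized 100% utilization are illustrative assumptions, not vendor data.

```python
# Back-of-envelope throughput scaling across VIP9000 configurations.
# The 8 GOPs/inference workload and full utilization are assumptions
# for illustration, not measured or vendor-published numbers.

def ideal_fps(tops, gops_per_inference=8.0, utilization=1.0):
    """Peak inferences per second at a given TOPS rating."""
    return tops * 1000.0 * utilization / gops_per_inference

configs_tops = [0.5, 4, 20, 100]   # points within the 0.5-100 TOPS range
fps = {t: ideal_fps(t) for t in configs_tops}
```

Because the same OpenVX/OpenCL binary runs on every variant, moving between these design points changes only the achievable frame rate, not the application code.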
Extendibility
VIP-Connect™: HW and SW interface protocols to plug in customer HW accelerators and expose their functionality via OpenCL/OpenVX custom kernels
Reconfigurable EVIS allows users to define their own instructions
Easy integration with other VSI IPs
