Processor IP Series for Data Center and Automotive Applications

VeriSilicon’s Vivante VIP9400 processor family offers programmable, scalable, and extendable solutions for markets that demand real-time, high-performance AI devices. The VIP9400 Series’ patented Neural Network engine and Tensor Processing Fabric deliver superb neural network inference performance with industry-leading power efficiency (TOPS/W) and area efficiency (TOPS/mm²). The VIP9400’s scalable architecture provides up to 200 TOPS of compute, enabling AI for data center and automotive applications.

In addition to neural network acceleration, the VIP9400 Series is equipped with Parallel Processing Units (PPUs), which provide full programmability along with conformance to OpenCL 1.2 and OpenVX 1.2.

VIP9400 Series in SEC14LPP

                                 VIP9400             VIP9400-MP4
INT8/INT16/BF16 MACs             24576/6144/12288    98304/24576/49152
PPU core count                   32                  128
Performance (INT8, 1 GHz)        49.16 TOPS          196.64 TOPS
Achievable clock speed (GHz)     1                   1
Synthesis logic gates (MGates)   133                 532
Memory size (MBytes)             11.2                44.8
Programmable Engines (PPU) & NN Engine
128-bit vector processing unit (shader + ext)
OpenCL 1.2 shader instruction set
Enhanced vision instruction set (EVIS)
INT 8/16/32b, Float 16/32b in PPU
Convolution layers
INT8 with INT16, Float16, or BFloat16 variant in NN
Tensor Processing Fabric
Non-convolution layers
Multi-lane processing for data shuffling, normalization, pooling/unpooling, LUT, etc.
Network pruning support, zero skipping, compression
On-chip SRAM for DDR BW saving
Accepts INT 8/16b, BFloat16, Float16
Unified Programming Model
OpenCL, OpenVX
Parallel processing between PPU and NN HW accelerators with priority setting
Supports popular vision and deep learning frameworks: OpenCV, Caffe, Caffe2, TensorFlow, TensorFlow Lite, ONNX, PyTorch, MXNet, Cognitive Toolkit, PaddlePaddle, Keras
SW & Tools
ACUITY: End-to-end Neural Network development tool
Eclipse-based IDE for coding/debugging/profiling
Linux and Android NN API runtime support
Task-specific engines that speed up commonly used AI applications
Scalability
Number of PPU and NN cores can be configured independently
Same OpenVX/OpenCL code runs on all processor variants; scalable performance
Extendibility
FLEXA API: Easy integration with other VSI IPs
Reconfigurable EVIS allowing users to define their own instructions
VIP-Connect™: HW and SW interface protocol to plug in customer HW accelerators and expose their functionality via CL/VX custom kernels
