VeriSilicon’s Vivante VIP9000Pico processor family offers low-power, programmable, scalable, and extendable solutions for markets that demand low-power AI devices. The VIP9000Pico Series’ patented Neural Network engine and Tensor Processing Fabric deliver superb neural network inference performance with industry-leading power efficiency (TOPS/W) and area efficiency (TOPS/mm²). The VIP9000Pico’s scalable architecture brings AI to the wearable and IoT markets. In addition to neural network acceleration, the VIP9000Pico Series is optionally equipped with Parallel Processing Units (PPUs), which provide full programmability along with conformance to OpenCL 3.0 and OpenVX 1.2.
The VIP9000Pico Series IP supports all popular deep learning frameworks (TensorFlow, TensorFlow Lite, PyTorch, Caffe, DarkNet, ONNX, Keras, etc.) and natively accelerates neural network models through optimization techniques such as quantization, pruning, and model compression. AI applications can be easily ported to VIP9000Pico platforms through offline conversion with Vivante’s ACUITY™ Tools SDK or through run-time interpretation with Android NN, the NNAPI Delegate, Arm NN, or ONNX Runtime.
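To make the quantization step concrete, here is a minimal sketch in plain Python of symmetric per-tensor INT8 quantization, the kind of transform an offline conversion tool applies to weights. This is an illustration of the general technique, not VeriSilicon's ACUITY implementation; the function names are hypothetical.

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: w_q = round(w / scale),
    where scale maps the largest-magnitude weight to 127."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale == 0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights: w ~= w_q * scale."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)  # close to the original weights, within one scale step
```

Storing `q` (one byte per weight) plus a single `scale` per tensor is what enables the INT8 datapath listed in the feature set below, at roughly a quarter of the memory traffic of FP32.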
- 48, 96, 192, or 384 MAC configurations
- INT8 or INT16 weights and activations
- Flexible mixed precision inference
- Tensor processor core for low power RNN/LSTM and non-convolutional operations
- TensorFlow, TensorFlow Lite, TensorFlow Lite Micro, PyTorch, ONNX, Arm NN, Caffe
- 50+ built-in operations requiring no CPU processing
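The "flexible mixed precision inference" item above can be illustrated with a short, hypothetical sketch (plain Python, not the actual Vivante toolchain logic): a converter can measure the round-trip quantization error of each tensor at INT8 and keep error-sensitive tensors at INT16 instead.

```python
def quant_error(values, bits):
    """Mean absolute error after a symmetric round-trip quantization."""
    qmax = (1 << (bits - 1)) - 1          # 127 for INT8, 32767 for INT16
    scale = max(abs(v) for v in values) / qmax
    if scale == 0:
        return 0.0
    return sum(abs(v - round(v / scale) * scale) for v in values) / len(values)

def pick_precision(values, budget):
    """Choose INT8 unless its error exceeds the budget; fall back to INT16."""
    return 8 if quant_error(values, 8) <= budget else 16

# An outlier stretches the INT8 scale, so the small values lose precision
activations = [0.001, 0.002, -0.0015, 3.2]
print(pick_precision(activations, budget=1e-4))
```

A real compiler would make this decision per layer against accuracy targets, but the trade-off is the same: INT8 halves bandwidth relative to INT16, while INT16 preserves tensors whose value range INT8 cannot represent well.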