Accelerating AI from the Edge to the Datacenter

Wave Unveils Its TritonAI™ 64 Platform to Accelerate Inferencing at the Edge

New Release of MIPS Open Components Includes RTL Code for MIPS32® microAptiv Cores

About Wave

Wave Computing’s unique dataflow technology and IP accelerate the industry’s broadest set of applications, from real-time edge devices to enterprise datacenter solutions.

AI-Native Dataflow Technology

Wave’s unique approach to accelerating deep learning starts by recognizing that deep learning is a dataflow application that demands a different type of processor. Wave’s Dataflow Processor Units (DPUs) eliminate the need for a host and co-processor, creating a scalable architecture for any application from the Edge to the Datacenter.
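
To make the term concrete, the sketch below is a generic, illustrative model of dataflow execution written in plain Python. The `Node` class, the `run_dataflow` helper, and the toy graph are assumptions introduced here for illustration only and do not describe Wave’s DPU hardware or software stack; they simply show the dataflow idea that each operation fires as soon as its operands arrive, with no host processor orchestrating the computation step by step.

```python
# Illustrative only: a toy dataflow scheduler, not Wave's DPU architecture.
from collections import deque


class Node:
    """One operation in a dataflow graph."""
    def __init__(self, name, fn, inputs):
        self.name = name      # label for this node's result
        self.fn = fn          # operation to run once inputs are ready
        self.inputs = inputs  # names of the values this node waits on


def run_dataflow(nodes, sources):
    """Fire each node when its operands arrive (data-driven, not host-driven)."""
    values = dict(sources)  # externally supplied input values
    pending = {n.name: {i for i in n.inputs if i not in values} for n in nodes}
    by_name = {n.name: n for n in nodes}
    ready = deque(name for name, deps in pending.items() if not deps)
    while ready:
        node = by_name[ready.popleft()]
        values[node.name] = node.fn(*(values[i] for i in node.inputs))
        for consumer, deps in pending.items():   # wake newly satisfied consumers
            if node.name in deps:
                deps.remove(node.name)
                if not deps:
                    ready.append(consumer)
    return values


# Example: y = (a + b) * (a - b), expressed as a three-node dataflow graph.
graph = [
    Node("sum",  lambda a, b: a + b, ["a", "b"]),
    Node("diff", lambda a, b: a - b, ["a", "b"]),
    Node("y",    lambda s, d: s * d, ["sum", "diff"]),
]
print(run_dataflow(graph, {"a": 5, "b": 3})["y"])  # prints 16
```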

16K PEs per DPU

Wave’s DPUs pack 16,000 Processing Elements (PEs) to accelerate even the most complex AI deep learning models.

Zero CPUs/GPUs Required

Wave DPUs eliminate the need for CPU or GPU co-processors to accelerate deep learning neural network models.

Optimal Scalability and Efficiency

Wave’s unique dataflow technology delivers the optimal balance of scalability and efficiency for AI applications.

MIPS is an intellectual property licensing business within Wave Computing, providing processor architecture and core IP. Wave Computing acquired MIPS in June 2018, creating the industry’s first AI systems and embedded solutions powerhouse.