TritonAI™ 64 Customizable IP Platform for AI

Driving Inferencing to the Edge

TritonAI™ 64 Platform for AI-enabled Edge SoCs

Wave Computing’s customizable, AI-enabled platform merges a triad of powerful technologies to efficiently address use case requirements for inferencing at the edge.


A Scalable Triad of Technologies

  • MIPS64 + SIMD Multi-CPU

    Multi-core, multi-threaded, multi-cluster MIPS™ CPUs

  • WaveFlow Technology

    Patented, scalable dataflow platform designed to efficiently execute existing and future algorithms

  • WaveTensor Technology

    Highly-efficient, configurable, multi-dimensional TensorCore processing engines

Combination of Technologies Helps Future-Proof Your Investments in a Rapidly Evolving Industry


MIPS64 CPU Core:

  • 128-bit SIMD/FPU
  • 8/16/32-bit int, 32/64-bit FP datatype support
  • Virtualization extensions
  • Superscalar 9-stage pipeline w/SMT
  • Caches (32KB-64KB), DSPRAM (0-64KB)
  • Advanced branch predict and MMU
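The 128-bit SIMD width above determines how many elements are processed per instruction for each supported datatype. The sketch below is simple arithmetic from the listed widths, not a vendor-published figure:

```python
# Sketch: parallel lanes in a 128-bit SIMD register per datatype.
# Datatype widths follow the list above (8/16/32-bit int, 32/64-bit FP);
# lane counts are straightforward arithmetic, not vendor data.
REGISTER_BITS = 128

def lanes(datatype_bits: int) -> int:
    """Number of parallel lanes for a given element width."""
    return REGISTER_BITS // datatype_bits

for name, bits in [("int8", 8), ("int16", 16), ("int32", 32),
                   ("fp32", 32), ("fp64", 64)]:
    print(f"{name:>5}: {lanes(bits)} lanes")
```

Narrower datatypes thus yield proportionally higher throughput per instruction, which is why int8 is attractive for edge inferencing.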

Multi-Processor Cluster:

  • 1-6 cores
  • Integrated L2 cache (0-8MB, opt ECC)
  • Power management (F/V gating, per CPU)
  • Interrupt control with virtualization
  • 256b native AXI4 or ACE interface

WaveFlow Reconfigurable Architecture

  • 2-1K scalable tiles
  • Flexible 2-D tiling layout
  • Each tile contains 16 CPUs and 8 MACs (8-bit int)
  • 8/16/32-bit int datatype support
  • Compatible datatypes with WaveTensor
  • Dynamically reconfigurable
  • Overlapping compute and I/O
  • Concurrent network execution
  • Future-proof support for evolving AI algorithms
  • Extensible to signal- and vision-processing algorithms
  • Independent operation from CPU
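The per-tile figures above compose directly into aggregate resources for a given WaveFlow configuration. A back-of-the-envelope sketch (using the listed 16 CPUs and 8 MACs per tile; the 2-D tiling layout itself is not modeled):

```python
# Sketch: aggregate processing elements in a WaveFlow configuration,
# from the per-tile figures listed above (16 CPUs, 8 MACs per tile).
CPUS_PER_TILE = 16
MACS_PER_TILE = 8

def waveflow_resources(num_tiles: int) -> dict:
    """Aggregate CPU and MAC counts for a tile count in the 2-1K range."""
    assert 2 <= num_tiles <= 1024, "platform scales from 2 to 1K tiles"
    return {"cpus": num_tiles * CPUS_PER_TILE,
            "macs": num_tiles * MACS_PER_TILE}

print(waveflow_resources(1024))  # maximum configuration
```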

WaveTensor Reconfigurable Architecture

  • Highly efficient for CNN execution
  • >10 TOPS/mm², 8 TOPS/W @ 7nm
  • Configurable 4×4 or 8×8 MAC tiles
  • 8-bit int datatype support
  • Compatible datatypes with WaveFlow
  • Scalable # of tiles per slice
  • Scalable # of slices per core
  • Overlapping compute and I/O
  • Independent operation from CPU
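The headline efficiency figures above translate into a rough silicon budget for a target throughput. The sketch below does only that arithmetic, treating ">10 TOPS/mm²" and "8 TOPS/W" as exactly 10 and 8, so the results are conservative estimates rather than vendor data:

```python
# Sketch: rough area/power budget from the stated WaveTensor efficiency
# figures at 7nm (10 TOPS/mm^2 and 8 TOPS/W, taken from the list above).
TOPS_PER_MM2 = 10.0
TOPS_PER_WATT = 8.0

def budget(target_tops: float):
    """Return (area_mm2, power_watts) needed for a target TOPS figure."""
    return target_tops / TOPS_PER_MM2, target_tops / TOPS_PER_WATT

area, power = budget(40.0)  # e.g. a hypothetical 40-TOPS edge SoC target
print(f"~{area:.1f} mm^2, ~{power:.1f} W")
```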

TritonAI Application Programming Kit (APK)

  • GCC/Binutils toolchain and IDE
  • Debian Linux OS support and updates
  • WaveRT API abstracts framework calls
  • WaveRT optimized AI libraries for CPU/SIMD/WaveFlow/WaveTensor
  • TensorFlow Lite build support and updates
  • TensorFlow build for edge training on the roadmap
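To illustrate why 8-bit integer MAC support pairs naturally with TensorFlow Lite, here is a generic sketch of the affine quantization scheme TensorFlow Lite-style runtimes use for int8 models (real ≈ scale × (q − zero_point)). This is the standard textbook formulation, not Wave-specific code, and the scale/zero-point values are made up for illustration:

```python
# Sketch: affine int8 quantization as used by TensorFlow Lite-style
# runtimes: real_value ≈ scale * (q - zero_point).
# The scale and zero_point below are illustrative, not from any model.
def quantize(x: float, scale: float, zero_point: int) -> int:
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    return scale * (q - zero_point)

scale, zp = 0.5, 3
q = quantize(1.0, scale, zp)
print(q, dequantize(q, scale, zp))  # 5 1.0
```

Running inference in this representation lets the arithmetic stay in 8-bit integer MACs, matching the int8 datatype support listed for WaveFlow and WaveTensor.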


Contact Us for More Details on the TritonAI APK

The combination of Wave’s AI technologies and MIPS embedded, multi-threaded RISC CPU IP enables inferencing at the edge today and training at the edge tomorrow.

AI-Native Platform Offers Significant Benefits

The TritonAI 64 platform provides the inferencing power needed for today’s AI edge applications, with the flexibility and scalability to address the rapidly evolving needs of the innovative AI industry.

  • Highly efficient for today’s AI CNN algorithms
  • Flexible to support tomorrow’s AI algorithms
  • Highly scalable to handle a wide range of data rates
  • Configurable to support diverse AI use cases
  • Focused on edge inferencing use cases
  • Roadmap to address edge learning use cases

Wave Innovation:
A Scalable, Unified Platform to Accelerate AI

By combining Wave’s unique, patented WaveFlow processor technology with its proven, efficient, MIPS core designs, Wave is powering the next generation of AI, with a single platform that scales from the datacenter to the edge.

Contact Us

Want to accelerate your AI project with Wave innovation? Let us help you discover the power of scalable, dataflow technology.


Sign Me Up for Early Access

At Wave, we’re always working on what’s next. Sign up to be the first to learn about, and test, our next generation designs.