Arteris IP and Wave Computing Collaborate on Reference Architecture for Enterprise Dataflow Platform

The Arteris FlexNoC Artificial Intelligence (AI) Package Coupled with Wave Computing’s AI Systems and IP Technology Creates a Unified Platform Optimized for AI Data Processing

CAMPBELL, Calif., May 21, 2019 – Arteris IP, the world’s leading supplier of innovative silicon-proven network-on-chip (NoC) interconnect intellectual property (IP), and Wave Computing®, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, are collaborating to create a blueprint that can help customers overcome compute-to-memory design challenges. Additionally, Wave Computing is licensing Arteris IP’s Ncore Cache Coherent Interconnect, FlexNoC interconnect IP, and the accompanying FlexNoC AI Package for use in the AI-enabled chips that power Wave Computing’s data center systems products. By integrating their respective technologies, Wave Computing and Arteris IP can ensure the seamless flow of information enterprise-wide, helping speed time-to-insight.

“Wave and Arteris have complementary compute and networking technologies that, when packaged together, address some of the key challenges facing system-on-chip designers today, such as shorter product cycles and rapidly increasing product complexity,” said Steve Brightfield, senior director, Strategic AI IP Marketing, Wave Computing. “The world of AI demands greater compute power. Working with Arteris allows us to design a scalable, cost-effective data platform with blazing-fast performance that helps customers accelerate insight from the edge to the data center.”

The key to a successful AI-enabled system-on-chip (SoC) design is effectively managing the flow of information across the chip. By linking Arteris IP’s NoC interconnect and AI package IP technology with Wave Computing’s TritonAI 64 dataflow processing elements and cores, customers can reduce latency and optimize the flow of information across their SoC platforms.

“Arteris IP has developed unique on-chip interconnect capabilities that facilitate the rapid assembly of complex machine learning SoCs with cache coherent, non-coherent and regular AI structures to provide a competitive advantage to engineering teams designing the next generation of AI and machine learning chips,” said K. Charles Janac, President and CEO of Arteris IP. “The combination of the TritonAI 64 IP platform and Arteris IP’s portfolio of interconnect technologies helps customers significantly boost performance and enable the seamless flow of data across a wide variety of compute-intensive, AI-enabled automotive, enterprise and networking applications.”

For more information on Wave Computing’s complete portfolio of IP and systems products, visit www.wavecomp.ai. For additional details on Arteris IP’s line of AI-enabled network computing solutions, visit www.arteris.com.

###