Untether AI Raises $13 Million from Intel Capital and Other Investors to Accelerate AI Innovation

TORONTO, April 1, 2019 — AI chip startup Untether AI (www.untether.ai) came out of stealth today with $13 million in Series A funding from Intel Capital and other investors. Drawing from the team’s computer architecture research and experience in delivering over 1 billion chips to market, Untether AI is developing a powerful new AI chip that eliminates the data movement bottleneck that costs energy and performance in traditional architectures.

“We’re at a historic moment in computing right now. AI is changing the nature of what computers do. They can now interact with the real world based on recognizing patterns instead of following procedures. These advances in AI have opened up tremendous possibilities for business and technological innovation, but traditional processors are not well suited to this new type of computing. Now is the time to build a new kind of ultra-efficient chip for new frontiers in AI applications — from autonomous vehicles and data centers to mobile, vision processing and beyond — that will ultimately change how everything works.” — Martin Snelgrove, CEO of Untether AI

“Less than five months after a small initial seed investment, Untether AI built a successful prototype based on a processing-near-memory architecture that promises dramatically higher efficiency and performance for AI accelerators. This is a team with decades of experience in chip design, and they’re now developing a commercial product for neural net inference, in a market that’s projected to grow to $22 billion by 2022.” – Dave Flanagan, vice president and senior managing director of Intel Capital

By combining the power efficiency of near-memory design with the robustness of digital processing, Untether AI has developed a groundbreaking chip architecture for neural net inference that reduces the distance data must travel to the absolute minimum. Untether AI’s Kensington architecture delivers data to the processors at 2.5 petabits per second, a rate 1,000 times higher than what is possible in traditional architectures, and completely eliminates the cost of moving data to the processors over a bus, which traditionally dominates energy consumption. By dramatically improving inference efficiency, Untether AI enables larger-scale and broader use of AI while consuming fewer resources and less energy, and requiring far less supporting infrastructure.
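To put the data-movement argument in perspective, the sketch below is a rough back-of-envelope comparison, not Untether AI’s published figures, of why fetching operands over a bus from distant memory dominates inference energy compared with computing at or near the memory itself. The per-operation energy values are commonly cited order-of-magnitude estimates and are assumptions made purely for illustration.

```python
# Back-of-envelope comparison of inference energy when operands come over a
# bus from far memory versus from memory sitting next to the compute units.
# The pJ-per-operation figures are rough, commonly cited order-of-magnitude
# values assumed only for illustration; they are not Untether AI's numbers.

PJ_PER_FAR_MEMORY_ACCESS = 640.0   # off-chip DRAM read, ~pJ per 32-bit word
PJ_PER_NEAR_MEMORY_ACCESS = 5.0    # small local SRAM read, ~pJ per 32-bit word
PJ_PER_MAC = 0.2                   # low-precision multiply-accumulate, ~pJ

def inference_energy_mj(num_macs: int, words_moved: int, pj_per_access: float) -> float:
    """Arithmetic energy plus data-movement energy, in millijoules."""
    total_pj = num_macs * PJ_PER_MAC + words_moved * pj_per_access
    return total_pj / 1e9  # 1 mJ = 1e9 pJ

# Hypothetical workload: one billion MACs, one operand word fetched per MAC.
macs = words = 1_000_000_000

bus_bound = inference_energy_mj(macs, words, PJ_PER_FAR_MEMORY_ACCESS)
at_memory = inference_energy_mj(macs, words, PJ_PER_NEAR_MEMORY_ACCESS)

print(f"bus-bound inference: {bus_bound:7.1f} mJ")   # data movement dominates
print(f"at-memory inference: {at_memory:7.1f} mJ")
print(f"energy ratio:        {bus_bound / at_memory:5.0f}x")
```

Even in this crude model, data movement accounts for well over 99 percent of the energy on the bus-bound side, which is precisely the cost a near-memory architecture is designed to remove.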

“The AI inference market is poised for disruption and explosive growth. Cloud AI inference has an efficiency problem; automotive AI chips are underpowered; and sophisticated AI at the edge will flatten batteries in no time. By introducing a completely scalable and ultra-efficient inference architecture, Untether AI is uniquely positioned to address each of those market segments with what will be by far the most energy-efficient AI accelerator.” – Darrick Wiebe, CTO of Untether AI

About Untether AI

Untether AI® provides energy-centric AI inference acceleration from the edge to the cloud, supporting any type of neural network model. With its at-memory compute architecture, Untether AI has solved the data movement bottleneck that costs energy and performance in traditional CPUs and GPUs, resulting in high-performance, low-latency neural network inference acceleration without sacrificing accuracy. Untether AI embodies its technology in runAI® and speedAI™ devices, tsunAImi® acceleration cards, and its imAIgine® Software Development Kit. Founded in Toronto in 2018, Untether AI is funded by CPPIB, GM Ventures, Intel Capital, Radical Ventures, and Tracker Capital. More information can be found at www.untether.ai.

All references to Untether AI trademarks are the property of Untether AI. All other trademarks mentioned herein are the property of their respective owners.

Media Contact for Untether AI:

Michelle Clancy Fuller, Cayenne Global, +1.503.702.4732
michelle.clancy@cayennecom.com

Company Contact:

Robert Beachler, Untether AI, +1.650.793.8219
beach@untether.ai