
OctoML Makes Machine Learning Accessible to Almost Anyone

10/15/2021

The ML deployment platform runs on top of nearly any hardware and takes the manual labor out of accelerating the performance of ML models. No wonder Qualcomm, AMD, and Arm just partnered with the two-year-old company.

Since the rise of machine learning, ML developers and hardware vendors have hit a common wall whenever they adopt new hardware: they have to figure out how to get the most out of the deployment hardware their model will run on. It’s annoying, not to mention time-consuming.

So two years ago, the team that created the open-source Apache TVM project set out to build a way for companies to bring their trained models to production, one that would automatically optimize a model’s performance and deploy it to virtually any device or cloud service without compromising accuracy. It worked: OctoML, founded by a team with deep ties to the University of Washington (CEO Luis Ceze, CTO Tianqi Chen, CPO Jason Knight, Chief Architect Jared Roesch, and VP of Technology Partnerships Thierry Moreau), has raised $47 million to date. Now it’s landing some major partnerships.
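To give a sense of the workflow OctoML automates, here is a minimal sketch using the open-source Apache TVM Python API directly: import a trained model, compile it for a chosen target with TVM’s optimization passes, and run it. The ONNX file name, input name, and input shape below are placeholders for illustration, and OctoML’s SaaS platform wraps steps like these rather than exposing this exact code.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a trained model (hypothetical file) and import it into TVM's Relay IR.
onnx_model = onnx.load("resnet50.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Pick a deployment target; TVM supports CPUs, GPUs, and embedded backends.
target = tvm.target.Target("llvm")

# Compile with TVM's graph- and operator-level optimizations enabled.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on the target device.
dev = tvm.device(str(target), 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
output = module.get_output(0).numpy()
```

Repeating this by hand for every new chip, runtime, and cloud target is exactly the manual tuning work the platform is meant to eliminate.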

OctoML just announced it will collaborate with Qualcomm Technologies Inc. to provide Apache TVM support for Qualcomm’s Snapdragon platforms and SoCs. “OctoML’s ability to fully utilize hardware capabilities for machine learning, combined with the automation and accessibility of our SaaS platform, will greatly simplify the deployment of ML innovation across Qualcomm Technologies’ powerful hardware,” said Ceze.

The team is also partnering with AMD to offer users a standardized interface across the company’s high-performance processors and a wide range of other deployment hardware, along with OctoML’s signature automated process for accelerating ML models. No more manual coding necessary. “This collaboration is a great example of how the industry can work successfully with the open software community,” said Ceze.

And they’re not done making announcements. OctoML has also entered a partnership with Arm, whose chip designs power much of the IoT hardware market, in a collaboration likewise built on the open-source Apache TVM. This is the next step toward making the vision of “TinyML” a reality on embedded devices that don’t have their own full-fledged operating systems. “It’s great to see our collaboration with Arm now extend to our SaaS platform where their customers can both speed up deploying models and also enable new ML-based use cases that were not previously viable,” said Ceze.

These three partnerships may signal just the beginning for OctoML, whose founders hope to “empower enterprises to create a unified deployment lifecycle across all their ML hardware vendors,” said Ceze. “There’s been an explosion in specialized hardware and disparate cloud services—each with its own software stack and set of specifications. That’s a real drain on engineering time and resources. Our mission is to make high performance ML more sustainable and accessible to a wider community.”