WhyLabs Team

WhyLabs: A “Fitbit” for Machine Learning Systems

11/5/2021

The end-to-end AI observability and monitoring solution was just funded. PNW.ai got a chance to test-drive the product first.

WhyLabs announced yesterday that it has secured $10 million in Series A funding, co-led by Defy Partners and Andrew Ng’s AI Fund. Existing investors, such as Madrona Venture Group and Bezos Expeditions, also participated. That’s a pretty stellar list of funders, and makes a strong case that WhyLabs is on to something big.

So, we asked CEO Alessya Visnjic to walk us through a test drive of her company’s new product, which is already keeping an eye on machine learning models across giant platforms like those of online fashion company Stitch Fix, logistics firm Airspace, and Yahoo Japan. “We had two dozen companies join us once we released our product,” says Visnjic. “We are working with companies who automate the time-sensitive delivery of human organs and vaccines, fintechs who use machine learning to forecast and detect fraud, marketing firms who use AI models to automate the user experience, real estate, e-commerce, robotics, even the mining industry.” Now that WhyLabs is funded, expect to see more companies signing on, especially since the AI observability product maintains an open standard for data logging and offers a free, self-serve edition of its tech.

Visnjic likens her product to a Fitbit for your data pipeline. “It’s taking the vitals of your system all the time. It sees trends and predicts when something will change, and notifies ML engineers when it’s time to go in and debug,” she says. It grabs snapshots of a company’s algorithm over time and alerts stakeholders when their model has drifted so far that it no longer represents the real world, or the moment a data pipeline breaks. For data scientists, WhyLabs’ AI observability tool can be invaluable.

Take one example: A logistics company has a model trained on 5-digit ZIP codes, but then a client starts sending 9-digit ZIP codes with a dash, which Python would read as a string. “AI observability would show engineers when this started happening and allow them to go back in time to the origin,” says Visnjic. Problem solved.
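To see how quietly that kind of bug creeps in, here is a minimal pandas sketch of our own (not WhyLabs’ code): a single dashed ZIP flips the column’s inferred type from integer to string, exactly the sort of schema change an observability layer is built to catch.

```python
import io
import pandas as pd

# Two daily snapshots of the same feed; day 2 contains one 9-digit ZIP.
day_1 = pd.read_csv(io.StringIO("zip\n98101\n98052\n98004\n"))
day_2 = pd.read_csv(io.StringIO("zip\n98101\n98052-1234\n98004\n"))

print(day_1["zip"].dtype)  # int64: 5-digit ZIPs parse as integers
print(day_2["zip"].dtype)  # object: one dashed ZIP makes the whole column strings

# A naive schema check of the kind an observability tool automates:
if day_2["zip"].dtype != day_1["zip"].dtype:
    print("Alert: 'zip' changed type between snapshots")
```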

Using a real data set we found on Kaggle, we uploaded seven snapshots of data from the personal-loan company LendingClub. WhyLabs’ intuitive website gave us a login, then a token, and we were off to the races. It took maybe three minutes.
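The open standard Visnjic’s team maintains is the open-source whylogs library, and the token-based flow looks roughly like this. A hedged sketch based on the whylogs API as currently documented (the exact calls at the time of our test drive may have differed), with placeholder credentials and a hypothetical file name:

```python
import os
import pandas as pd
import whylogs as why

# Placeholders: set these with the token and IDs from the WhyLabs UI.
os.environ["WHYLABS_API_KEY"] = "<your-token>"
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "<org-id>"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "<model-id>"

df = pd.read_csv("lending_club_snapshot.csv")  # hypothetical file: one daily snapshot

results = why.log(df)              # profile the data: statistics, not raw rows
results.writer("whylabs").write()  # push the profile up to the dashboard

# The same profile can be inspected locally:
print(results.view().to_pandas().head())
```

Notably, only the statistical profile leaves your machine, which is why customers don’t have to hand over their raw data sets.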

First, the platform ingested our snapshots and profiled them to establish a baseline of health. We configured the monitoring system to look at our data daily (though we could have chosen hourly). One glitch: monitoring doesn’t start until seven hours after launch, though Visnjic says her team is working on making it instantaneous.
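As a rough illustration of what a “baseline of health” means in practice, here is a sketch of our own (with made-up numbers) that compares a day’s feature distribution against a training-time baseline using a two-sample Kolmogorov-Smirnov test from scipy:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(70_000, 20_000, 5_000)         # "Annual Income" at training time
todays_snapshot = rng.normal(85_000, 25_000, 5_000)  # incomes have shifted upward

# KS test: are the two samples plausibly from the same distribution?
stat, p_value = ks_2samp(baseline, todays_snapshot)
if p_value < 0.01:
    print(f"Drift alert: KS statistic {stat:.3f}, p = {p_value:.2e}")
```

A monitoring platform runs checks in this spirit against every feature, every day, so nobody has to remember to do it by hand.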

When we opened our dashboard, we could compare the differences our snapshots had captured across time. For example, our fifth feature, “Annual Income,” was distributed differently across days, and “Annual Income Joint” had an even wider spread over time. Clicking “Input” in the bar across the top showed all of our data visualized in colorful charts, which made what we were seeing easy to understand. In one instance, we could see the “null fraction” line trending away from its baseline, indicating that our data pipeline was missing more and more values over time. Viewing the features over time as shapes quickly alerted us to the problem. MLOps alerted!
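That null-fraction chart boils down to a simple statistic tracked per snapshot. A hypothetical sketch of the check, with simulated daily snapshots standing in for the LendingClub data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
snapshots = {
    f"day_{i}": pd.DataFrame({
        # Simulate a pipeline that silently loses more values each day.
        "annual_income": np.where(rng.random(1000) < 0.02 * i,
                                  np.nan,
                                  rng.normal(70_000, 20_000, 1000)),
    })
    for i in range(7)
}

baseline = snapshots["day_0"]["annual_income"].isna().mean()  # ~0% missing
for day, df in snapshots.items():
    null_fraction = df["annual_income"].isna().mean()
    if null_fraction > baseline + 0.05:  # alert at 5 points over baseline
        print(f"{day}: null fraction {null_fraction:.1%} -- pipeline may be dropping values")
```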

“It looks wonderful, awesome,” says Vu Ha, Technical Director for the Allen Institute for Artificial Intelligence. “I think having insight into the data you are dealing with is absolutely important to get a sense of how the model is doing and if there is any need for adjustment and reacting to changes in the data.”

That’s exactly what WhyLabs’ clients do, says Visnjic. “Our customers start capturing this model health continuously as their model is in production, and this helps identify when their model is deviating from a healthy state. Then they can make changes – maybe something in the pipeline is altering the data before they receive it, so they can go to the provider and have them fix the way they are delivering the data; or maybe the model is drifting, so they need to go back and retrain the model.” Clients can set the platform to automatically turn off their model so they can take time to retrain. Or WhyLabs can just alert data scientists that something is awry.

“If your data scientists are only looking at your model once a week or once a month, you’re going to miss the root cause,” explains Visnjic. “If your company is doing predictions, you want to catch any unhealthy changes before you send anything out to your clients.”

WhyLabs’ observability platform, which doesn’t require customers to alter their data sets or change their programming language, raises an obvious question: Why hasn’t anyone thought of this before?