
Very cool approach to ML testing. I like how you track against commits and help define goals as you define the pipeline. One question - how do you define the "root cause" that you mention when solving failed goals?
Great integrated product for giving visibility into ML models. Highly recommend it to anyone looking for a way to benchmark, evaluate, and iterate on their models (which everyone should!)
This is awesome! Having a great debugging workspace on par with software engineering debugging has always been a pain point for me when working on finance data and autonomous driving. What are some of the use cases you enable today?
Optimizing an ML model at scale requires a bunch of different tools and lots of work by the engineers + data scientists. Love that Openlayer can do all of this for a company (detect errors, suggest new optimizations, etc.), definitely a game-changer for ML teams! 👏🏾
I love Openlayer because it allows not just engineers, but PMs, analysts, and managers to participate in the ML development process. Finally, a way to catch errors before the product gets into the hands of users!