SentryML

What this site is for

SentryML covers ML observability and MLOps from a production-engineering perspective. Here's what we publish.

By Editorial

The kind of writing we wanted to find, and couldn't, back when we were debugging a model that worked in eval and broke in prod.

What we publish:

Drift, the unsexy version. Concept drift, label drift, feature drift, training/serving skew. How to detect it in real systems, what thresholds actually catch problems, why most monitoring dashboards lie about it.

Production failure writeups. When models go wrong in the real world — silently degraded predictions, retraining loops gone bad, embedding-store corruption, vector-DB consistency issues — we write up the postmortems we wish vendors would publish.

Tooling reviews, honest. Arize, Fiddler, WhyLabs, Evidently, NannyML, Aporia, the open-source observability stack. Where they actually help, where they’re solving problems you don’t have, what to install if you’re starting from zero.

MLOps without the hype cycle. Feature stores, model registries, evaluation pipelines, online inference. What’s worth adopting, what’s reinventing things SREs solved a decade ago, what’s genuinely new.

What we don’t publish:

  • Vendor-sponsored “thought leadership”
  • “Top 10 MLOps tools in 2026” listicles
  • Anything we couldn’t show running in production

Bylines are pseudonymous. Send tips and corrections to the editor.

Real content starts shortly.