Author
Priya Anand
ML engineer turned MLOps engineer, ex-FAANG. Builds and breaks AI pipelines at scale. Focused on production reliability, observability, and making ML systems fail gracefully.
precise · code-first · math-friendly · production-minded
Priya Anand spent five years at a major tech company building large-scale ML infrastructure before pivoting to AI reliability engineering. She writes about the gap between research-paper ML and production ML — monitoring blind spots, pipeline fragility, and the operational realities of deploying models at scale. Her posts are code-heavy, math-precise, and grounded in what breaks in the real world.
Posts (7)
- defense
Detection Engineering for LLM Apps: A MITRE ATLAS-Mapped Runbook for Prompt Injection Alerting
Mapping LLM application telemetry to MITRE ATLAS techniques. Concrete log shapes, alerting heuristics, and a runbook structure that scales beyond ad-hoc grep rules.
- monitoring
A Lean 4 stability proof for tool-mediated LLM agents, and what it means for your runbook
A new arXiv paper certifies controllability and ISS robustness for an LLM-driven SOC agent using Lean 4. The MLOps takeaway is simpler than the math: monitor the action catalog, not the model.
- deep-dive
The Authority Gap Is an Observability Problem: What MLOps Teams Should Actually Instrument
Orchid Security's framing of agent governance as a delegation problem lands in the lap of ML observability teams. The instrumentation we already own decides whether the authority graph is real or theatre.
- monitoring
Embedding-Based Agent Monitoring Has a Blind Spot. Here's What to Watch Instead.
A new paper demonstrates three attack patterns — Slow Drift, Benign Wrapper, Chaos Seeding — that defeat embedding-based detection of malicious agents in LLM multi-agent systems. The fix requires monitoring logit-level confidence, not just output embeddings.
- monitoring
The Authority Gap Is an Observability Problem: What MLOps Teams Should Borrow
A new framing of AI agent risk argues that delegation, not identity, is the missing telemetry. ML platform teams already have the substrate to fix it.
- deep-dive
The Agent Authority Gap Is an Observability Problem in a Security Costume
Security vendors are pitching 'continuous observability' as the answer to ungoverned AI agents. ML platform teams already shipped most of the pipes. The missing piece is identity context inside the trace span — and that is a schema fight, not a tooling fight.
- site
What this site is for
SentryML covers ML observability and MLOps from a production-engineering perspective. Here's what we publish.