Observability for GenAI Applications (Grafana ❤️🔥 OpenTelemetry Community Call #4)
In this community call with my coworker Liudmila Molkova, we dove into the emerging field of observability for Generative AI applications. We explored how to instrument LLM-powered applications using OpenTelemetry, what metrics and traces matter most when monitoring AI workloads, and how to debug issues in complex AI pipelines. The discussion covered prompt tracing, token usage monitoring, latency considerations, and the unique challenges of observing non-deterministic AI systems.
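As a flavor of what such instrumentation looks like, here is a minimal sketch of the attributes an application might set on the span that wraps a single LLM call. The attribute names follow the OpenTelemetry GenAI semantic conventions, which are still evolving, so treat the exact keys as illustrative and check the current spec; the helper function itself is a hypothetical example, not part of any SDK.

```python
# Sketch: span attributes for one LLM call, loosely following the
# (still-evolving) OpenTelemetry GenAI semantic conventions.
# Attribute names here are illustrative -- verify against the current spec.

def llm_span_attributes(system, model, input_tokens, output_tokens):
    """Build the attribute dict to set on the span wrapping an LLM call."""
    return {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": system,                     # e.g. "openai"
        "gen_ai.request.model": model,               # model the app requested
        "gen_ai.usage.input_tokens": input_tokens,   # prompt tokens consumed
        "gen_ai.usage.output_tokens": output_tokens, # completion tokens generated
    }

attrs = llm_span_attributes("openai", "gpt-4o", 812, 143)
# Token usage per span is what lets you aggregate cost and spot runaway prompts.
print(attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"])
```

Recording token counts as span attributes (rather than only as standalone metrics) is what makes it possible to trace cost and latency back to a specific prompt in a complex AI pipeline.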