This talk will be held in English. / Dieser Vortrag wird auf Englisch gehalten.
Container orchestration environments produce massive volumes of textual data — logs, metrics, and events — across systems. This talk examines how Large Language Models (LLMs) can integrate with Kubernetes, Prometheus, Loki, and similar tools to accelerate incident response.
We cover two key areas:
- API Integration into Multi-Agent Systems – Using developments like function calling and the Model Context Protocol (MCP) to connect LLMs directly with Kubernetes APIs.
- Operational Impact – Drawing from real-world experience, we show how virtual coworkers reduce cognitive load and shorten incident resolution times.
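To make the first point concrete, here is a minimal sketch of the function-calling pattern the talk refers to: the LLM is given a tool schema, replies with a structured call, and a dispatcher routes that call to local code. The `list_pods` tool, its schema, and the static cluster data are all illustrative assumptions — a real agent would invoke the Kubernetes API (e.g. via the official Python client) instead.

```python
import json

# Hypothetical tool schema in the style used by function-calling LLM APIs:
# the model receives these definitions and replies with a structured call.
TOOLS = [
    {
        "name": "list_pods",
        "description": "List pods in a Kubernetes namespace",
        "parameters": {
            "type": "object",
            "properties": {"namespace": {"type": "string"}},
            "required": ["namespace"],
        },
    }
]

def list_pods(namespace: str) -> list[str]:
    # Placeholder: a real implementation would query the Kubernetes API here.
    # Static data keeps this sketch self-contained.
    fake_cluster = {"default": ["web-7d4f", "worker-1a2b"], "kube-system": ["coredns-x"]}
    return fake_cluster.get(namespace, [])

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    handlers = {"list_pods": list_pods}
    fn = handlers[tool_call["name"]]
    result = fn(**json.loads(tool_call["arguments"]))
    return json.dumps(result)  # fed back to the model as the tool result

# Example: the LLM decided to call list_pods on the "default" namespace.
print(dispatch({"name": "list_pods", "arguments": '{"namespace": "default"}'}))
```

The Model Context Protocol standardizes this same loop — tool discovery, structured invocation, and result return — so that one server can expose such tools to many different LLM clients.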
We conclude by sharing best practices and insights from the Hyground project and early AI adoption in IT operations.
Attendees should have a general understanding of Kubernetes architecture and its core concepts, as well as basic familiarity with observability tools like Prometheus and Loki. Practical experience with LLMs and exposure to incident response workflows in IT operations will be advantageous but not strictly necessary.
Attendees will learn how to connect LLMs with Kubernetes, Prometheus, and Loki, integrate external APIs into multi-agent systems using function calling, and understand the real-world impact of AI on reducing cognitive load and speeding up incident resolution. They will also take away best practices from the Hyground project for adopting AI in IT operations.

