New Relic, the observability platform, has introduced New Relic AI Monitoring (AIM), an application performance monitoring (APM) solution designed for AI-powered applications. The platform gives engineers enhanced visibility and insights into the AI application stack, helping them troubleshoot and optimise for performance, quality, and cost while supporting responsible AI use and compliance with upcoming regulations.
With more than 50 integrations and features such as large language model (LLM) response tracing and model comparison, AIM supports teams in confidently building and running LLM-based applications.
AI-powered applications pose new challenges for organisations, introducing complexity and making components such as LLMs and vector databases opaque to engineers. These applications can yield inaccurate or biased results, introduce security concerns, and generate extensive telemetry data. According to New Relic, AIM addresses these issues by providing end-to-end observability for any AI-powered application.
Engineers can use a unified view to troubleshoot, compare, and optimise LLM prompts and responses, weighing factors such as performance, cost, security, and quality issues like hallucinations, bias, toxicity, and fairness. AIM provides complete visibility across the entire AI stack, services, and infrastructure, giving engineers the data they need for responsible use of AI and compliance with upcoming regulations.
“With every organisation integrating AI into their products and processes, AI workloads are now part of modern organisations’ application architectures. With AI monitoring, we have applied our deep expertise from inventing cloud APM to providing end-to-end visibility into AI-powered applications to help businesses manage performance and costs, while complying with emerging AI regulations and standards,” said New Relic Chief Product Officer Manav Khurana.
New Relic agents come with all AIM capabilities, offering a quick and easy setup with full AI stack visibility, response tracing, model comparison, and more. The platform provides a holistic view across the application, the infrastructure, and the AI layer, incorporating metrics such as response quality and token usage alongside APM golden signals. This comprehensive approach allows engineers to trace the lifecycle of complex LLM responses, enabling the identification and resolution of performance issues and quality problems such as bias, toxicity, and hallucinations.
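Because AIM ships inside the existing APM agents, enabling it is typically a small configuration change rather than a new install. The sketch below is illustrative only: it follows the Python agent's `newrelic.ini` convention, and the exact setting names may differ across agents and versions.

```ini
; newrelic.ini -- illustrative sketch, not authoritative:
; setting names follow the Python agent convention and may
; vary for other language agents and versions
[newrelic]
license_key = YOUR_LICENSE_KEY
app_name = My LLM Application

; Toggle AI Monitoring: adds LLM response tracing, token counts,
; and model metadata to the standard APM telemetry
ai_monitoring.enabled = true
```

In containerised deployments, agents commonly expose the same toggle as an environment variable (for the Python agent, `NEW_RELIC_AI_MONITORING_ENABLED`), so no config file change is needed.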