REF / SOFTWARE

Spring Boot AI Error Analyzer - Open Source Library

The first AI error analysis library for Spring Boot. One annotation turns stack traces into plain-English root-cause analysis. Maven Central, MIT licensed.

Role: Author & Maintainer
Year: 2026
Outcome: Published on Maven Central, MIT licensed, ~$0.001 per analyzed error
Domain: Software
STACK

Tech used.

Java 17 · Spring Boot 3 · AspectJ · OkHttp · JPA · Flyway · Elasticsearch · OpenAI · Anthropic · Gemini · Groq · Ollama

The Problem

Every Java team I've worked with loses hours per week to the same ritual: an exception fires in production, an engineer copies the stack trace into a chat, scrolls past framework noise to find the one line that matters, then walks back through the code to figure out what state actually broke. The information needed to diagnose the bug is usually right there in the stack trace, the failing method's source, and the input parameters; it just takes a human to assemble it.

LLMs are good at exactly this kind of assembly. The catch: every team I saw rolling their own integration was rebuilding the same plumbing: prompt construction, PII redaction, rate limiting, deduplication, a dashboard, multi-provider failover. None of it was the interesting work, and none of it was easy to get right.

I built the Spring Boot AI Error Analyzer to solve this once, properly, and ship it as a free library so the rest of the community doesn't have to. As far as I can tell, it is the first library of this kind published for the Java / Spring Boot ecosystem; there is no equivalent on Maven Central.

What I Built

A drop-in Spring Boot starter that adds AI-powered exception analysis to any Spring Boot 3.x application with one annotation.

One annotation, full pipeline: @AiAnalyze on a method or class wires up an AspectJ aspect that intercepts every thrown exception, builds a structured prompt (exception, stack trace, parameters, source code, HTTP context), sends it to the configured LLM provider, stores the result, and surfaces it in a built-in dashboard at /ai-errors. Zero further configuration is required to get useful output.
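To make the one-annotation claim concrete, here is a minimal usage sketch. @AiAnalyze is the library's annotation, but OrderService and its method are hypothetical, and a local stand-in annotation is declared so the snippet compiles without the starter on the classpath:

```java
import java.lang.annotation.*;

// Stand-in for the starter's @AiAnalyze so this sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface AiAnalyze {}

@AiAnalyze // class-level: every method in this service is intercepted
class OrderService {
    int reserveStock(String sku, int quantity) {
        if (quantity <= 0) {
            // With the real starter, this exception would be captured by the
            // aspect and analyzed asynchronously; the caller still sees it.
            throw new IllegalArgumentException("quantity must be positive: " + quantity);
        }
        return quantity;
    }
}

public class AiAnalyzeDemo {
    public static void main(String[] args) {
        OrderService svc = new OrderService();
        try {
            svc.reserveStock("SKU-1", -3);
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The application's own exception handling is untouched; the annotation only adds observation on top of it.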

Six AI providers, with failover: OpenAI, Anthropic Claude, Google Gemini, Groq, Ollama (fully offline), and any OpenAI-compatible custom endpoint. A configurable fallback chain automatically retries with the next provider if one is unavailable, so a transient OpenAI outage doesn't drop your error pipeline.

PII protection built in: @SensitiveParam and @SensitiveField annotations redact parameter values and object fields before any data leaves the application. HTTP request bodies are never sent. For teams with strict data residency requirements, the Ollama provider keeps every byte of inference on the team's own hardware.
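A sketch of how the redaction annotations are used, with a toy reflective check standing in for the prompt builder's real filter. The annotation names come from the library; PaymentService, the field values, and the "[REDACTED]" placeholder are illustrative assumptions, and local stand-in annotations are declared so the snippet compiles without the starter:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Stand-ins for the starter's annotations, declared locally.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.PARAMETER)
@interface SensitiveParam {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface SensitiveField {}

class PaymentRequest {
    String orderId = "ord-42";
    @SensitiveField String cardNumber = "4111111111111111"; // redacted before any LLM call
}

class PaymentService {
    void charge(String orderId, @SensitiveParam String cvv) { /* business logic */ }
}

public class RedactionDemo {
    // Toy version of the redaction step: substitute a placeholder for any
    // parameter carrying @SensitiveParam before it enters the prompt.
    static String describeParam(Method m, int index, Object value) {
        Parameter p = m.getParameters()[index];
        return p.isAnnotationPresent(SensitiveParam.class) ? "[REDACTED]" : String.valueOf(value);
    }

    public static void main(String[] args) throws Exception {
        Method charge = PaymentService.class.getDeclaredMethod("charge", String.class, String.class);
        System.out.println(describeParam(charge, 0, "ord-42"));
        System.out.println(describeParam(charge, 1, "123"));
    }
}
```

The point of the annotation-based design is visible here: what gets redacted is declared next to the data it protects, so it shows up in code review.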

Production-grade controls: per-minute, per-hour, and per-day rate limits, a daily USD budget cap, deduplication that merges repeated occurrences of the same bug into a single record with an occurrence counter, retention policies that auto-delete old analyses, and three pluggable storage backends (in-memory, JPA on Postgres or MySQL, Elasticsearch).
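The deduplication behavior can be illustrated with a toy model: repeated occurrences of the "same" error collapse into one record with a counter. The fingerprint below (exception class plus top stack frame) is an assumption about how such dedup is typically keyed, not the library's actual algorithm:

```java
import java.util.*;

public class DedupDemo {
    // One entry per distinct error fingerprint; value is the occurrence count.
    static final Map<String, Integer> occurrences = new HashMap<>();

    // Assumed fingerprint: exception class + the frame where it was thrown.
    static String fingerprint(Throwable t) {
        StackTraceElement top = t.getStackTrace()[0];
        return t.getClass().getName() + "@" + top.getClassName() + "." + top.getMethodName();
    }

    // Merges a new occurrence into the existing record instead of storing
    // (and paying to analyze) a duplicate.
    static int record(Throwable t) {
        return occurrences.merge(fingerprint(t), 1, Integer::sum);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            try { Integer.parseInt("not-a-number"); }
            catch (NumberFormatException e) { record(e); }
        }
        System.out.println(occurrences.size() + " record(s): " + occurrences);
    }
}
```

This is also why the per-error cost figure in the Outcome section holds up: the same bug firing ten thousand times costs one analysis, not ten thousand.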

Notifications on severity: email (Spring Mail), Slack (incoming webhook), and arbitrary webhooks, all configurable per severity level, so on-call only gets paged for HIGH and above.

Technical Highlights

The interception layer is an AspectJ aspect, not a BeanPostProcessor proxy, so it works on final classes and self-invoked methods that Spring AOP cannot reach. Analysis runs on a dedicated async thread pool (4 threads by default, configurable) and never blocks the request thread: the user-facing request returns immediately with whatever exception handler the application already has, and the AI call happens out-of-band.
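The out-of-band model can be sketched in a few lines: the caller hands the exception to a small dedicated pool and returns immediately, while the slow provider call runs in the background. The pool size of 4 matches the stated default; everything else here (method names, the placeholder analysis) is illustrative:

```java
import java.util.concurrent.*;

public class AsyncAnalysisDemo {
    // Dedicated pool, separate from the request threads. Daemon threads so
    // background analysis never keeps the JVM alive on its own.
    static final ExecutorService analysisPool = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Returns immediately; the (potentially seconds-long) provider call
    // happens on the pool, never on the caller's thread.
    static Future<String> submitAnalysis(Throwable t) {
        return analysisPool.submit(() ->
            // Placeholder for the actual LLM provider call.
            "analysis of " + t.getClass().getSimpleName());
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = submitAnalysis(new RuntimeException("boom"));
        System.out.println("request thread free; result later: " + f.get(5, TimeUnit.SECONDS));
    }
}
```

A failed or timed-out analysis simply produces a failed Future on the pool; nothing propagates back to the request.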

The prompt builder is a pluggable pipeline: source code resolution (reads the failing method from disk in dev, falls back to bytecode decompilation in production), parameter serialisation (Jackson, with @SensitiveParam filters applied), stack frame trimming (capped at the top N frames to keep token cost bounded), and developer context injection via @AiContext for domain-specific hints the LLM wouldn't otherwise know.
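The stack-frame trimming step is the simplest of these stages and easy to show. This sketch keeps only the top N frames so the prompt's token cost stays bounded; the cap value and output format are assumptions, not the library's actual defaults:

```java
import java.util.*;
import java.util.stream.*;

public class FrameTrimDemo {
    // Keep only the top maxFrames frames: the throw site and its nearest
    // callers carry most of the diagnostic signal.
    static String trimmedTrace(Throwable t, int maxFrames) {
        return Arrays.stream(t.getStackTrace())
                .limit(maxFrames)
                .map(f -> "  at " + f)
                .collect(Collectors.joining("\n"));
    }

    // Deliberately deep call chain so there is something to trim.
    static void recurse(int n) {
        if (n == 0) throw new IllegalStateException("boom");
        recurse(n - 1);
    }

    public static void main(String[] args) {
        try {
            recurse(10);
        } catch (IllegalStateException e) {
            System.out.println(trimmedTrace(e, 3));
        }
    }
}
```

The full trace for the recursion above would be a dozen frames of noise; three frames are enough to point at the throw site.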

Provider abstraction is a single AiProvider interface. Adding a new provider is one class plus a Spring auto-configuration registration; users implement their own and set ai-error-analyzer.provider=their-name to use it.
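A sketch of what a custom provider looks like. AiProvider is the library's interface, but its exact method signatures are not reproduced here, so this snippet declares a plausible stand-in shape; treat it as an assumption, not the real contract. The property name ai-error-analyzer.provider comes from the docs:

```java
// Stand-in for the library's AiProvider interface (assumed shape).
interface AiProvider {
    String name();                 // matched against ai-error-analyzer.provider
    String analyze(String prompt); // returns the model's analysis text
}

// Trivial custom provider; a real one would call an LLM endpoint and would be
// registered as a Spring bean via auto-configuration.
class EchoProvider implements AiProvider {
    public String name() { return "echo"; }
    public String analyze(String prompt) {
        return "analysis-of:" + prompt.length() + "-chars";
    }
}

public class ProviderDemo {
    public static void main(String[] args) {
        AiProvider p = new EchoProvider();
        System.out.println(p.name());
        System.out.println(p.analyze("NullPointerException at OrderService.reserveStock"));
    }
}
```

With this shape, the fallback chain described earlier is just an ordered list of AiProvider instances tried in turn.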

Storage is split between an ErrorAnalysisRepository interface and three implementations. The JPA backend ships its own Flyway migrations under a dedicated schema so it never collides with the host application's schema. Elasticsearch support exists for teams that want to plug it into an existing observability stack.

The dashboard is a server-rendered, dependency-light page (no React, no separate build step) at a configurable path, with optional HTTP Basic auth for production use. Spring Security is not a required dependency; the library refuses to make security decisions for the host application.

The whole project is published on Maven Central as io.github.musamaqamarse:spring-boot-ai-error-analyzer-starter:1.0.0, MIT licensed, with a Docker-Compose example app and a full provider-by-provider configuration reference in the README.

Outcome

The adoption story is still early (the library was published to Maven Central in April 2026), but the engineering goals are met:

Cost: around $0.001 per analyzed error with gpt-4o-mini and deduplication on. Most teams will never see meaningful spend; the daily-budget cap is there for when someone deploys an infinite loop on a Friday.

Performance: zero added latency on the request path. Async thread pool absorbs the AI call; provider timeout defaults to 30s; failure of the AI call never propagates back to the user request.

Privacy: one configuration line (provider: ollama) gives teams complete data locality. PII annotations make redaction explicit and reviewable in code review, not buried in a config file.

Community: free, MIT, Maven Central, no telemetry, no upsell. Issues and PRs are open on GitHub and I'm actively triaging them.

If you build with Spring Boot, give it a try; the goal is to delete the "copy-paste stack trace into chat" loop from the Java community's daily workflow.

<dependency>
    <groupId>io.github.musamaqamarse</groupId>
    <artifactId>spring-boot-ai-error-analyzer-starter</artifactId>
    <version>1.0.0</version>
</dependency>