Posts
All the articles I've posted.
-
Streaming LLM responses in Spring AI for a better user experience
LLMs generate text token by token. Streaming lets your users see that text as it arrives instead of staring at a loading spinner. This post shows how to wire Spring AI's stream() to a Server-Sent Events endpoint.
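As a preview of the approach, here is a minimal sketch of such an endpoint. The controller and route names are illustrative, assuming Spring AI's `ChatClient` and a WebFlux `Flux` return type:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class ChatStreamController {

    private final ChatClient chatClient;

    ChatStreamController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // TEXT_EVENT_STREAM serializes the Flux as Server-Sent Events,
    // so each token reaches the client as soon as the model emits it.
    @GetMapping(value = "/chat/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<String> stream(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .stream()   // reactive token stream instead of one blocking call()
                .content();
    }
}
```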
-
Getting structured JSON responses from LLMs in Spring AI
LLMs return free-form text by default. Spring AI's structured output support maps that text directly into Java records and classes — no manual JSON parsing, no fragile string manipulation.
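The core idea, in a short sketch (the record is a hypothetical example, not from the post): pass a Java type to `entity()` and Spring AI appends format instructions to the prompt, then converts the model's JSON reply into that type.

```java
import org.springframework.ai.chat.client.ChatClient;

// Hypothetical target type for the model's answer.
record BookRecommendation(String title, String author, int year) {}

class RecommendationService {

    private final ChatClient chatClient;

    RecommendationService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    BookRecommendation recommend() {
        return chatClient.prompt()
                .user("Recommend one classic science-fiction novel.")
                .call()
                .entity(BookRecommendation.class); // parsed into the record, no manual JSON handling
    }
}
```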
-
Prompt templates in Spring AI — stop hardcoding your prompts
Hardcoded prompt strings in Java code are hard to review, impossible to change without a redeploy, and a maintenance nightmare at scale. Spring AI's PromptTemplate solves this. Here is how to use it properly.
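A minimal sketch of the pattern, assuming Spring AI's `PromptTemplate` with `{placeholder}` variables (the template text here is illustrative; in practice it can also live in a classpath resource so it changes without a rebuild):

```java
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.chat.prompt.PromptTemplate;
import java.util.Map;

class SummaryPrompts {

    Prompt summarize(String text, int wordCount) {
        // Placeholders are filled at render time, keeping the wording
        // reviewable and editable separately from the Java logic.
        PromptTemplate template = new PromptTemplate("""
                Summarize the following text in at most {wordCount} words:
                {text}
                """);
        return template.create(Map.of("wordCount", wordCount, "text", text));
    }
}
```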
-
Understanding Spring AI's ChatClient — the heart of every AI call
ChatClient is the central abstraction in Spring AI. This post covers the builder API, default system prompts, per-call options, advisors, and the difference between call() and stream() — everything you need to use it effectively.
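To make the builder-plus-fluent-call shape concrete, a small sketch (the system prompt and question are placeholders): defaults are set once on the builder, then each call layers its own user message on top, finishing with either `call()` or `stream()`.

```java
import org.springframework.ai.chat.client.ChatClient;

class AssistantService {

    private final ChatClient chatClient;

    AssistantService(ChatClient.Builder builder) {
        // Defaults configured once; every prompt() below inherits them.
        this.chatClient = builder
                .defaultSystem("You are a concise technical assistant.")
                .build();
    }

    String ask(String question) {
        return chatClient.prompt()
                .user(question)
                .call()      // blocking; .stream() returns a Flux<String> instead
                .content();
    }
}
```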
-
Setting up Spring AI in a Spring Boot project — step by step
Module 2 starts with code. This post walks through adding Spring AI to a Spring Boot project, configuring OpenAI and Ollama, and making your first real LLM API call from Java.
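As a taste of the configuration step, a sketch of the `application.properties` entries involved (model names are examples; the Ollama lines assume a local Ollama install on its default port):

```properties
# OpenAI: key pulled from an environment variable, never hardcoded
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4o-mini

# Ollama: local runtime, no API key required
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3
```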