Agentic AI: Building Systems That Think in Steps
Moving beyond chatbots — how I'm exploring autonomous AI agents that plan, execute, and self-correct.
The AI landscape is shifting from "ask a question, get an answer" to "give a goal, watch it execute." This is the agentic paradigm, and it's where I'm focusing my exploration.
What Makes an Agent?
An AI agent isn't just a chatbot with tools. It's a system that can:
- Plan a multi-step approach toward a goal
- Execute each step, typically by calling tools
- Observe the result of each action
- Adapt the plan based on what it observed
This loop — Plan → Execute → Observe → Adapt — is remarkably similar to the OODA loop (Observe, Orient, Decide, Act) that military strategists use. My naval background makes this feel natural.
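To make the loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `search_tool` function, the `Agent` class, and the fixed two-step plan are stand-ins for what an LLM-driven agent would generate dynamically.

```python
from dataclasses import dataclass, field

def search_tool(query: str) -> str:
    # Hypothetical tool; a real agent would call an API here.
    return f"results for {query!r}"

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan(self) -> list[str]:
        # In a real agent an LLM produces this plan; here it is fixed.
        return [f"search: {self.goal}", "summarize findings"]

    def execute(self, step: str) -> str:
        if step.startswith("search:"):
            return search_tool(step.removeprefix("search:").strip())
        return f"completed {step!r}"

    def run(self) -> list:
        for step in self.plan():           # Plan
            result = self.execute(step)    # Execute
            self.history.append((step, result))  # Observe
            # Adapt: a real agent would revise the remaining plan
            # here based on the observed result.
        return self.history

print(Agent(goal="vector databases").run())
```

The value of the structure is the explicit Observe step: by recording each (step, result) pair, the agent has something to reason over when it adapts.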
RAG: The Agent's Memory
Retrieval-Augmented Generation is the backbone of any useful agent. Without it, you're limited to what the model was trained on.
I've been experimenting with different chunking strategies and embedding setups, and the key insight so far: RAG quality depends more on your chunking and embedding strategy than on the LLM itself.
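A toy end-to-end retrieval pipeline makes the point about chunking and embeddings concrete. This sketch is entirely illustrative: `chunk` uses fixed-size character windows with overlap (real pipelines usually split on sentence or token boundaries), and `embed` is a bag-of-words stand-in for a real embedding model.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    # Fixed-size character chunks with overlap between neighbors.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; swap in a real model in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query; return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = ["agents plan and execute", "rag retrieves context", "tools call apis"]
print(retrieve("retrieves context", chunks, k=1))
```

Notice that retrieval quality here is decided entirely by `chunk` and `embed`; the "LLM" never appears, which is exactly why those two stages dominate RAG quality.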
Where This Is Going
I see agentic AI transforming backend engineering.
The engineers who understand both the AI capabilities and the systems they're being applied to will be the ones who build the most impactful solutions.
That's the intersection I'm positioning myself at.