
We Built an AI the Indian Government Could Actually Trust

RAG for the Ministry of External Affairs. No hallucinations allowed, not as a feature, as a hard architectural constraint. Here's what that actually meant to build.

Shaurya Pratap Singh · Founder & Lead Engineer at LN Tech

Most RAG systems are built with a quiet assumption baked in: the occasional wrong answer is acceptable. Add a disclaimer, tell users to verify, ship it. That's the standard playbook.

We couldn't do that.

Meghdoot is an AI assistant for IFS officers (Indian Foreign Service, the people running India's diplomatic operations abroad). When an officer asks about India's semiconductor trade policy with Australia, they're not casually curious. They might be walking into a negotiation. Getting it wrong isn't a UX problem. It's a different category of problem entirely.

How This Started

An IIT Delhi alum connected us to the right people within the MEA ecosystem. The need was obvious once you understood the day-to-day: officers spending real time manually hunting through government circulars, Ministry policy documents, official communications, cross-referencing everything because the stakes demanded it. Existing tools returned general web results. And in a context where MEA's official policy can differ meaningfully from what a general-purpose AI has absorbed from the internet, "general web results" is not good enough. It's arguably worse than nothing, because it's confidently wrong instead of honestly uncertain.

We had ten weeks.

The Constraint That Shaped Everything

The decision that defined the architecture: Meghdoot does not access the internet. It cannot hallucinate from training data because it doesn't use training data to answer questions. Every response comes from a curated vector database populated exclusively from mea.gov.in and other *.gov.in domains: verified government sources, nothing else.
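The curation boundary is simple to state and worth enforcing in code, not convention. Here's a minimal sketch of the kind of allowlist gate such an ingestion pipeline needs; the function and constant names are illustrative, not Meghdoot's actual code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only *.gov.in hosts are ever ingested.
ALLOWED_SUFFIXES = (".gov.in",)

def is_allowed_source(url: str) -> bool:
    """Reject any document whose host is not a government domain."""
    host = urlparse(url).hostname or ""
    return host == "mea.gov.in" or host.endswith(ALLOWED_SUFFIXES)

# Usage: filter a crawl frontier before anything reaches the vector store.
urls = [
    "https://mea.gov.in/press-releases.htm",
    "https://www.meity.gov.in/semicon-india",
    "https://example.com/blog/india-policy",  # dropped: not a gov.in host
]
ingestible = [u for u in urls if is_allowed_source(u)]
```

The point of gating at ingestion rather than at query time is that nothing untrusted can ever be retrieved, because it was never embedded in the first place.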

Before the system answers anything, it retrieves. That retrieval is mandatory. Not a fallback, not an option. The only path. If relevant documents exist in the knowledge base, the answer is grounded in them with citations. If they don't exist, the system says it doesn't know.

That last part sounds simple. It isn't. Getting an LLM to acknowledge genuine uncertainty, without dressing up "I don't have this information" as a confident-sounding synthesis of vaguely related context, took more tuning than the retrieval pipeline itself. Honest uncertainty is harder to engineer than confident answers.
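The shape of that guarantee can be sketched in a few lines. This is a simplified illustration, not Meghdoot's implementation: `Hit`, the `MIN_RELEVANCE` threshold, and the `retrieve`/`generate` callables are all hypothetical stand-ins for the real pipeline.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    text: str
    source: str   # citation, e.g. a mea.gov.in URL
    score: float  # similarity score from the vector store

# Hypothetical cutoff: below this, retrieval counts as "no relevant docs".
MIN_RELEVANCE = 0.75
REFUSAL = "I don't have this information in the Ministry's knowledge base."

def answer(query: str, retrieve, generate) -> str:
    """Mandatory retrieval gate: the model never answers from its own weights."""
    hits = [h for h in retrieve(query) if h.score >= MIN_RELEVANCE]
    if not hits:
        # Honest uncertainty, enforced in code rather than left to the prompt.
        return REFUSAL
    context = "\n\n".join(f"[{h.source}]\n{h.text}" for h in hits)
    citations = sorted({h.source for h in hits})
    return generate(query, context) + "\n\nSources: " + ", ".join(citations)
```

The structural point: the refusal branch fires before the LLM is ever called, so there is no weakly-grounded context for it to dress up as an answer. Prompt tuning still matters for borderline retrievals, but the hard floor lives outside the model.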

The Demo

Nikhil Mahajan, the IFS officer we were building for, ran a live query during the demo. India's semiconductor policy, recent trade developments with Australia. Real question, no warning, no pre-loaded example.

He was blown away. Not in a "neat tech demo" way. In a "this would have taken me forty minutes of document hunting" way. The system retrieved the relevant policy documents, synthesized a verifiable answer, listed its sources. He wanted it deployed immediately.

The Hardest Week

Connecting the Python RAG backend to the Next.js web application while maintaining meaningful multi-search rounds, that was the week I'd rather not repeat.

A single retrieval pass isn't enough for complex policy queries. You pull, assess what's missing from the retrieved context, refine the query parameters, retrieve again, then synthesize. That loop needs to complete fast enough that it doesn't feel broken from the other side of the screen, while also being rigorous enough that you'd stake diplomatic accuracy on it. Getting both simultaneously, reliably, under real-world query patterns, that took the better part of a week of grinding.

Where It Stands

Meghdoot has been running at the Ministry since January. It's currently on a slower operational cadence, which is how government deployments tend to go, something I genuinely didn't appreciate before this project. The system works. The technical constraints are met. The limitation isn't technical.

We were expecting more traction by now, honestly. But building for government is a different game from building for a startup. Adoption curves are slower, decision chains are longer, and "it works in production" is just the beginning of the conversation, not the end.

That's a lesson worth learning early if you're thinking about this space.

Working on something ambitious?

Let's scope it — free 15-minute call, no commitment.

Book a scoping call