ReclamaAI was born from an observation so obvious it stings: in Colombia, eight out of ten people facing a legal problem don't know what to do. Paying a lawyer for a tutela or a parking-ticket appeal is expensive, slow, or intimidating. The law belongs to them, but access doesn't.
When I started researching the problem, what surprised me wasn't the legal complexity — basic Colombian legal templates are fairly standardized — but the human friction: the fear of calling a government office, the discomfort of writing formally, the feeling of not knowing the right words. ReclamaAI isn't really an AI product. It's a calm product: a place where the user describes what happened in plain language and gets a document ready to file.
Why start here
VantLabs could have started with a thousand things. Why legaltech, and why Colombia? Three reasons, in order:
- The problem is huge, badly served, and current generative models are just good enough to solve it well.
- Colombian legal knowledge is publicly available (codes, jurisprudence, rulings). There's no private-data moat — the moat is the product, the copy, and the curation.
- I have the cultural context to understand why people don't use existing products. Foreign legaltech feels like an IKEA manual: technically correct, emotionally cold.
The architecture, on one page
ReclamaAI is a modest monorepo: a Next.js 16 app that serves the web UI and the API route handlers, plus a BullMQ worker for long jobs (document generation, email sending, recurring billing). Postgres is the source of truth, Redis the queue, S3 the signed-PDF bucket, and Anthropic Claude the brain.
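To make the split concrete, here's a minimal sketch of how the app side might hand a long job to the worker. Everything here is my illustration, not ReclamaAI's actual code: the payload shape and `enqueueGeneration` are invented names, and the in-memory array stands in for what would really be a BullMQ queue backed by Redis.

```typescript
// Hypothetical job payload the generation worker would consume.
interface GenerateDocJob {
  userId: string;
  caseType: "tutela" | "parking_appeal" | "peticion";
  facts: string; // the user's plain-language description
}

// In-memory stand-in for a BullMQ queue; real code would be
// something like `new Queue("documents", { connection: redis })`.
const pending: GenerateDocJob[] = [];

function enqueueGeneration(job: GenerateDocJob): number {
  pending.push(job);
  return pending.length; // position in the queue
}

const pos = enqueueGeneration({
  userId: "u_123",
  caseType: "tutela",
  facts: "My EPS denied an authorized procedure twice.",
});
```

The point of the split is that the web request returns immediately with a queue position, and the slow work (model calls, PDF rendering, S3 upload) happens out of band.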
One of the architecture decisions that took me the longest was which model to use for which step. A product that generates legal documents in under 5 minutes and at a price accessible to Colombian users can't afford to send everything to the largest available model. But it can't sacrifice quality where legal precision matters either. Finding that balance is iteration work, not tutorial work.
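The routing logic that came out of that iteration can be sketched as a plain lookup: cheap models for mechanical steps, the strongest model only where legal precision matters. The step names and the "small"/"large" tiers below are my illustration of the idea, not the actual mapping ReclamaAI uses.

```typescript
// Hypothetical per-step model routing. Mistakes in classification or
// fact extraction are recoverable downstream, so a small model is fine;
// the legal draft itself is where quality cannot be sacrificed.
type Step = "classify_case" | "extract_facts" | "draft_document" | "tone_pass";

function pickModel(step: Step): "small" | "large" {
  switch (step) {
    case "classify_case":
    case "extract_facts":
      return "small"; // fast and cheap
    case "draft_document":
      return "large"; // legal precision lives here
    case "tone_pass":
      return "small"; // style editing tolerates a lighter model
  }
}
```

The balance only shows up in aggregate: shaving the cheap steps is what keeps the per-document cost compatible with a sub-$3 price.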
RAG without ego
The relevant Colombian law lives in an embeddings store. Before every generation, we retrieve the applicable legal context and pass it to the model. But — this matters — retrieval isn't the secret sauce. The secret sauce is editorial prompts: how we ask the model to speak, what tone to use, what to avoid, how to close a petition without sounding like a robot.
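The retrieval step itself is deliberately boring. A toy version, with hand-made three-dimensional vectors standing in for real embeddings (in production the vectors come from an embedding model and live in a store, not an array):

```typescript
// Toy retrieval over pre-computed embeddings: cosine similarity, top-k.
interface Chunk {
  text: string;
  vec: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], corpus: Chunk[], k: number): string[] {
  return corpus
    .map((c) => ({ text: c.text, score: cosine(query, c.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.text);
}

const corpus: Chunk[] = [
  { text: "Art. 86 C.P.: acción de tutela", vec: [1, 0, 0] },
  { text: "Código de Tránsito, multas", vec: [0, 1, 0] },
  { text: "Ley 1437, derecho de petición", vec: [0.9, 0.1, 0] },
];
const top = retrieve([1, 0, 0], corpus, 2);
```

The retrieved chunks get spliced into the prompt as legal context; the editorial instructions around them are where the actual differentiation happens.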
What I learned along the way
Three lessons that changed the product mid-sprint:
- Users don't want an editor. They're afraid to change the document. Afraid to break it. What they do want is to ask someone to tweak it. So we replaced inline editing with a natural-language "tweak" system: "make it more formal", "add that this happened two years ago", "mention the statutory law". The backend takes that instruction and regenerates.
- Price matters more than features. Charging COP $9,900 per document (~$3 USD) removes the mental friction of "will it be worth it?". Charging COP $25,000 would have killed us. The math is tight, but the strategy is scale: at thousands of documents per month, per-document infra costs become negligible.
- Filing is a separate product. We assumed users knew how to file the document with the entity. They didn't. We're building a "what to do next" layer that explains, case by case, where to take it, how to send it by mail, what to ask for at the counter. This will be as important as generation itself.
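The "tweak" flow from the first lesson above is, under the hood, just prompt construction: the user's instruction and the current document go back to the model for regeneration. A sketch under my own assumptions (the wording of the guardrails and the function name are mine, not the product's):

```typescript
// Hypothetical: wrap a natural-language tweak into a regeneration prompt.
// The guardrail lines keep the model from silently altering facts or
// citations while applying a style-level instruction.
function buildTweakPrompt(currentDoc: string, instruction: string): string {
  return [
    "You are editing a Colombian legal document. Apply the user's",
    "instruction without changing facts, parties, or legal citations",
    "unless the instruction explicitly asks for it.",
    "",
    `Instruction: ${instruction}`,
    "",
    "Current document:",
    currentDoc,
  ].join("\n");
}

const prompt = buildTweakPrompt(
  "Señores Secretaría de Movilidad...",
  "make it more formal",
);
```

Regenerating the whole document from this prompt, rather than patching it, is what lets users stay in plain language and never touch an editor.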
What's left before launch
Today ReclamaAI is in closed beta. The generation pipeline works end-to-end: wizard, AI, PDF, DOCX, S3, dashboard, tweaks, credits, Wompi payments. What's left are two screens that carry more weight than they appear to — the checkout when credits run out and the payment-result page — plus a final onboarding polish.
When we open, ReclamaAI won't be perfect. But for many people it will solve a concrete problem they currently solve badly or not at all. That's enough to ship.
If you want early access, write me. If you have a frustrating legal-bureaucracy story, write me too — those stories are the ones that teach the most.