Notes on Building, Leading, and Figuring Things Out

Lessons from 15 years in tax tech, building AI tooling, and leading teams through impossible-sounding projects.

February 2026 · 8 min read

Why I Built a Code Intelligence Platform in 6 Days (And What MCP Servers Actually Solve)

Everyone's talking about AI coding tools. But here's the problem nobody mentions: LLMs don't understand your codebase. They understand code in general. When your company has 63 repositories, a proprietary DSL, and calculation chains that span 15 files — "code in general" isn't enough.

At Taxwell, our tax calculation engine lives across two very different codebases: PowerBASIC (Drake) and MathMaster DSL (TaxAct). When we started using Claude Code for development work, we hit a wall immediately. The AI would confidently generate code using syntax that didn't exist. It would trace a calculation chain halfway and then fabricate the rest. It would suggest changes to functions without understanding that 47 other functions depended on them.

The answer wasn't better prompts. The answer was giving the AI a structured knowledge layer it could query. That's what Model Context Protocol (MCP) servers do — they give LLMs tools to look things up instead of guessing.

I built our code intelligence MCP server over a long weekend that turned into 6 days. The architecture is straightforward: an offline indexer parses all 63 repos and builds a SQLite database with FTS5 full-text search. Function definitions, call relationships, DSL field dependencies, cross-form references, include chains — all pre-computed and queryable. The MCP server exposes 37+ tools that Claude Code can call: search_code, get_call_chain, get_calculation_flow, get_form_dependencies, find_references.
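To make the architecture concrete, here is a minimal sketch of the storage layer described above: a SQLite database with a call-graph table plus an FTS5 virtual table for full-text search. The table names, column names, and sample data (`functions`, `calls`, `code_fts`, `CalcEIC`) are illustrative, not the actual Taxwell schema.

```python
import sqlite3

# Toy version of the indexer's output: relational tables for structure,
# FTS5 for search. All names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE functions (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        repo TEXT NOT NULL,
        file TEXT NOT NULL
    );
    CREATE TABLE calls (
        caller_id INTEGER REFERENCES functions(id),
        callee_id INTEGER REFERENCES functions(id)
    );
    -- FTS5 virtual table over function names and bodies.
    CREATE VIRTUAL TABLE code_fts USING fts5(name, body);
""")

# Index a couple of toy functions.
conn.execute("INSERT INTO functions VALUES (1, 'CalcEIC', 'taxact', 'eic.mm')")
conn.execute("INSERT INTO functions VALUES (2, 'GetAGI', 'taxact', 'agi.mm')")
conn.execute("INSERT INTO calls VALUES (1, 2)")  # CalcEIC calls GetAGI
conn.execute(
    "INSERT INTO code_fts VALUES ('CalcEIC', 'earned income credit phase-out')"
)

def search_code(query: str) -> list[str]:
    """What a search_code tool call resolves to: an FTS5 MATCH query."""
    rows = conn.execute(
        "SELECT name FROM code_fts WHERE code_fts MATCH ?", (query,)
    )
    return [r[0] for r in rows]

print(search_code("earned income"))  # ['CalcEIC']
```

The point is that "pre-computed and queryable" is mostly just tables and indexes; the MCP tools are thin wrappers over queries like this one.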

The result? Questions that used to take hours of manual file-hopping now take seconds. "What breaks if I change this function?" has a real answer. "Trace the Earned Income Credit calculation end to end" returns the complete chain instead of a hallucinated approximation.
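The "what breaks if I change this function?" question above reduces to walking the call graph upward. A sketch of that query, using a recursive CTE over a toy `calls` table (the function names are invented for illustration):

```python
import sqlite3

# Answering "what breaks if I change this?" from a pre-computed call
# graph. Table contents are illustrative, not the production index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (caller TEXT, callee TEXT)")
conn.executemany("INSERT INTO calls VALUES (?, ?)", [
    ("Form1040", "CalcEIC"),
    ("CalcEIC", "GetAGI"),
    ("ScheduleEIC", "CalcEIC"),
    ("GetAGI", "SumWages"),
])

def find_dependents(func: str) -> set[str]:
    """All functions that directly or transitively call `func` --
    i.e. everything that could break if `func` changes."""
    rows = conn.execute("""
        WITH RECURSIVE dependents(name) AS (
            SELECT caller FROM calls WHERE callee = :f
            UNION
            SELECT c.caller FROM calls c
            JOIN dependents d ON c.callee = d.name
        )
        SELECT name FROM dependents
    """, {"f": func})
    return {r[0] for r in rows}

print(sorted(find_dependents("SumWages")))
# ['CalcEIC', 'Form1040', 'GetAGI', 'ScheduleEIC']
```

The same shape of query, run in the other direction (callees instead of callers), is what tracing a calculation chain end to end looks like.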

The lesson for other technical leaders

If your team is using AI coding tools and getting mediocre results, the problem probably isn't the AI. The problem is that the AI doesn't have access to the relationships and context that live in your engineers' heads. An MCP server — even a simple one backed by SQLite — can be the difference between an AI that wastes time and an AI that multiplies your team's output.

You don't need a vector database. You don't need embeddings. You need a well-indexed relational model of your codebase and a way for the AI to query it. Start there.
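The "way for the AI to query it" part is smaller than it sounds. As a toy sketch of the idea, not the actual Model Context Protocol wire format (in production you would use an MCP SDK), a tool layer is just a registry of named functions the model can invoke with JSON arguments:

```python
import json
import sqlite3

# A toy tool registry: named functions invokable with JSON arguments.
# This illustrates the shape of the idea, not the real MCP protocol.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE functions (name TEXT, repo TEXT, file TEXT)")
conn.execute("INSERT INTO functions VALUES ('CalcEIC', 'taxact', 'eic.mm')")

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def find_references(name: str) -> list[dict]:
    """Look up where a function is defined (illustrative query)."""
    rows = conn.execute(
        "SELECT repo, file FROM functions WHERE name = ?", (name,)
    )
    return [{"repo": r, "file": f} for r, f in rows]

def dispatch(request: str) -> str:
    """Handle one tool call expressed as JSON: {"tool": ..., "args": {...}}."""
    req = json.loads(request)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps(result)

print(dispatch('{"tool": "find_references", "args": {"name": "CalcEIC"}}'))
# [{"repo": "taxact", "file": "eic.mm"}]
```

Everything hard lives in the indexer; the tool layer is plumbing.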

January 2026 · 6 min read

What Nobody Tells You About Merging Two Engineering Teams

When Cinven acquired TaxAct and Drake Tax, someone had to figure out how to make two very different tax development teams work as one. Here's what I learned about the human side of M&A integration that no playbook covers.

The tech was the easy part. Different codebases? Fine — we can build abstraction layers. Different deployment processes? Fine — we can standardize. Different testing philosophies? Fine — we can align on quality gates.

The hard part was that people had built their identities around their team. Drake developers were proud of being Drake developers. TaxAct developers had their own culture, their own inside jokes, their own way of doing things. Telling both groups "you're one team now" is about as effective as telling two families "you're one family now" at a forced reunion.

What actually worked

Shared problems, not shared mandates. Instead of reorganizing and hoping for the best, I gave both teams a shared problem to solve together. HappyFox ticket triage was the first one — everyone could see the customer pain, and fixing it required knowledge from both sides. Nothing bonds a team faster than a shared enemy, and "customer suffering" is a pretty compelling enemy.

Transparent career architecture. One of the first things I did was build a unified career ladder — Analyst I through Principal — with written job descriptions for every level. When people can see where they're going and what it takes to get there, a lot of the political anxiety disappears.

Let people grieve the old team. This sounds dramatic, but it's real. People had built careers on Team A or Team B. Acknowledging that something was being lost — not just something being gained — made the whole transition smoother.

Two years in, the team operates as a genuine unit. But it didn't happen because of a reorg slide deck. It happened because we focused on the work, built trust through shared wins, and treated the humans like humans.

November 2025 · 5 min read

From 38% SLA Breach Rate to 5.6%: A Dashboard Is Not a Dashboard

We had zero visibility into our support operations. No one could tell you how many tickets were open, who was handling them, or whether we were meeting our SLAs. Six dashboards later, our breach rate dropped by 85%. Here's the framework.

When I took over federal tax development, I asked a simple question: "How are we doing on support ticket response times?" The answer was a combination of shrugs and someone pulling up an Excel file from three weeks ago. That was the moment I knew dashboards weren't optional — they were the foundation.

But here's the thing about dashboards that most people get wrong: a dashboard that nobody looks at is worse than no dashboard at all, because it gives you the illusion of visibility. I've seen teams build beautiful Power BI reports that get opened twice and then forgotten.

The framework that worked

Start with one question, not one dataset. Our first dashboard answered exactly one question: "Which tickets are about to breach SLA?" Not a comprehensive ticket overview. Not a historical analysis. Just: what's on fire right now? That dashboard got used every single morning because it was immediately actionable.
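The "what's on fire right now?" question is a small computation once you frame it. A sketch of the logic, with an invented 4-hour SLA and a 1-hour warning window rather than our actual HappyFox configuration:

```python
from datetime import datetime, timedelta

# Which open tickets are about to breach SLA? The SLA length, warning
# window, and ticket fields here are illustrative assumptions.
SLA = timedelta(hours=4)
WARN = timedelta(hours=1)   # flag tickets within an hour of breaching

def about_to_breach(tickets, now):
    """IDs of open tickets whose SLA deadline falls inside the warning window."""
    flagged = []
    for t in tickets:
        deadline = t["opened"] + SLA
        if t["status"] == "open" and deadline - WARN <= now < deadline:
            flagged.append(t["id"])
    return flagged

now = datetime(2025, 11, 3, 9, 0)
tickets = [
    {"id": 101, "status": "open",   "opened": now - timedelta(hours=3, minutes=30)},
    {"id": 102, "status": "open",   "opened": now - timedelta(hours=1)},
    {"id": 103, "status": "closed", "opened": now - timedelta(hours=3, minutes=45)},
]
print(about_to_breach(tickets, now))  # [101]
```

One question, one list, refreshed every morning: that is the whole first dashboard.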

Automate the delivery, not just the data. This is where Power Automate changed the game. I built a flow that sends an AI-generated daily recap to our Teams channel every morning before anyone starts work. Nobody has to remember to check the dashboard. The insights come to you.

Make it embarrassingly specific. Our dashboard shows which individual analyst handled which tickets and how long each one took. That felt uncomfortable at first. But when the top 5 analysts were handling 69% of tickets, that specificity led to redistribution that made the entire team healthier.
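The concentration number behind that redistribution is a one-liner to compute. A sketch with made-up analyst names and ticket counts chosen so the top-5 share comes out at 69%:

```python
from collections import Counter

# What share of tickets do the n busiest analysts handle?
# Analyst names and counts below are invented for illustration.
def top_n_share(assignments: list[str], n: int = 5) -> float:
    """Fraction of all tickets handled by the n busiest analysts."""
    counts = Counter(assignments)
    top = sum(c for _, c in counts.most_common(n))
    return top / len(assignments)

assignments = (
    ["ana"] * 20 + ["ben"] * 15 + ["cal"] * 13 + ["dee"] * 11 + ["eli"] * 10
    + ["fay"] * 8 + ["gus"] * 7 + ["hal"] * 6 + ["ivy"] * 5 + ["joe"] * 5
)
print(top_n_share(assignments))  # 0.69
```

Cheap to compute, uncomfortable to look at, and exactly the kind of number that forces a conversation about load balancing.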

The 38% to 5.6% SLA improvement didn't come from working harder. It came from knowing where to look. That's what dashboards actually do when they're built right.