Selected Work

Building things that matter

I don't build demos. Everything here is either live, in production, or actively being used to solve real problems for real people.

Fintech Product

Live

Meridian Money

Personal finance, reimagined

Most budgeting apps are either too simple to be useful or too complex to stick with. Meridian Money sits in the sweet spot: envelope budgeting that actually makes sense, with AI available when you want it and completely out of your way when you don't. Your data is yours — no selling, no training models on it, no lock-in.

  • Envelope budgeting with automatic bank sync via Stripe Financial Connections
  • AI-optional: Plutus AI coach ($10/mo add-on) or bring your own AI via MCP server
  • Your data stays 100% yours — no selling to third parties, no AI training on user data
  • Built with Next.js, React, Prisma, PostgreSQL, and TypeScript
  • $5/month for core budgeting — positioned against YNAB at $14.99
Visit meridianmoney.app
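
The envelope model at the core of the app can be sketched in a few lines. This is an illustrative Python sketch with hypothetical names, not Meridian Money's actual code (which is TypeScript/Prisma): income lands in an unassigned pool, gets assigned to named envelopes, and each transaction draws down exactly one envelope.

```python
# Illustrative envelope-budgeting sketch (hypothetical names, not the
# production TypeScript/Prisma code): every dollar of income is either
# unassigned or allocated to exactly one envelope.
from dataclasses import dataclass, field

@dataclass
class Envelope:
    name: str
    allocated: float = 0.0   # money assigned to this envelope
    spent: float = 0.0       # money drawn against it

    @property
    def remaining(self) -> float:
        return self.allocated - self.spent

@dataclass
class Budget:
    envelopes: dict[str, Envelope] = field(default_factory=dict)
    unassigned: float = 0.0  # income that hasn't been given a job yet

    def deposit(self, amount: float) -> None:
        self.unassigned += amount

    def assign(self, name: str, amount: float) -> None:
        if amount > self.unassigned:
            raise ValueError("cannot assign more than unassigned income")
        env = self.envelopes.setdefault(name, Envelope(name))
        env.allocated += amount
        self.unassigned -= amount

    def spend(self, name: str, amount: float) -> None:
        # Envelopes may go negative, prompting the user to reassign funds.
        self.envelopes[name].spent += amount

budget = Budget()
budget.deposit(2000.0)
budget.assign("Groceries", 500.0)
budget.spend("Groceries", 120.0)
print(budget.envelopes["Groceries"].remaining)  # 380.0
print(budget.unassigned)                        # 1500.0
```

The invariant worth noticing: assigning moves money out of the unassigned pool rather than creating it, which is what keeps the budget honest.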

Enterprise AI

Active Development

SMIS AI Tool

Bid intelligence for forestry and construction

Federal contracting is a $700B market buried in bureaucratic noise. SMIS cuts through it. The platform monitors SAM.gov, analyzes contract opportunities using LLMs, and delivers qualified leads to forestry and construction companies that used to spend hours searching manually.

  • Automated SAM.gov monitoring and opportunity scoring
  • RAG pipeline for matching company capabilities to contracts
  • LLM-powered bid analysis and competitive intelligence
  • Custom relevance scoring for forestry/construction verticals
  • Built with Python, FastAPI, and vector databases
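
The matching idea behind the RAG pipeline can be sketched as follows. This is a hypothetical simplification, not the SMIS implementation: embed a company's capability statement and each SAM.gov opportunity, then rank opportunities by cosine similarity. A real pipeline would use a sentence-embedding model and a vector database; a toy bag-of-words vector keeps the example self-contained.

```python
# Hypothetical capability-to-contract matching sketch: rank opportunity
# texts by cosine similarity against a capability statement. The toy
# word-count "embedding" stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_opportunities(capabilities: str,
                       opportunities: dict[str, str]) -> list[tuple[str, float]]:
    cap_vec = embed(capabilities)
    scored = [(oid, cosine(cap_vec, embed(text)))
              for oid, text in opportunities.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

ranked = rank_opportunities(
    "forestry thinning hazardous fuel reduction timber",
    {
        "OPP-1": "hazardous fuel reduction and forestry thinning services",
        "OPP-2": "office furniture procurement",
    },
)
print(ranked[0][0])  # OPP-1
```

In production the scores would be one signal among several (set-asides, NAICS codes, place of performance) feeding the vertical-specific relevance model.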

Infrastructure

Running

Local AI Infrastructure

Production LLMs, zero cloud dependency

Running large language models shouldn't require sending your data to someone else's servers. I built a local AI infrastructure stack on an NVIDIA DGX Spark with 128GB of unified memory. vLLM for inference, custom orchestration, and the kind of performance that makes cloud APIs look expensive and slow.

  • NVIDIA DGX Spark with Grace CPU and 128GB unified memory
  • vLLM inference server with multiple model support
  • Open WebUI for team access and prompt management
  • k3s cluster for orchestration and scaling
  • Docker-based deployment pipeline
  • Models: Qwen, DeepSeek, Devstral, and custom fine-tunes
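
Because vLLM exposes an OpenAI-compatible HTTP API, any OpenAI-style client can point at the local server instead of the cloud. A minimal sketch, assuming a default local endpoint and a placeholder model id (the host, port, and model name are assumptions, not this cluster's actual config):

```python
# Minimal client for a local vLLM server via its OpenAI-compatible
# chat-completions endpoint. URL and model id below are assumptions.
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

def build_request(model: str, prompt: str) -> dict:
    # Standard OpenAI chat-completions payload shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def ask(prompt: str, model: str = "qwen-local") -> str:  # hypothetical model id
    payload = build_request(model, prompt)
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server to be running):
#   print(ask("Summarize this contract opportunity in one sentence."))
```

Swapping between local and cloud inference is then a one-line URL change, which is exactly the leverage the stack is built for.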