LangGraph agentic orchestration platform with grounded RAG
Hybrid retrieval and reranking over PostgreSQL with pgvector, permissioned tool governance, structured outputs, citations, LangGraph, FastAPI, and Azure OpenAI.
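As a flavor of what "hybrid retrieval" means here: one ranked list comes from vector similarity (pgvector), another from PostgreSQL full-text search, and the two are merged before reranking. A common merge step is reciprocal rank fusion; this is a minimal sketch with illustrative doc IDs, not the actual system's code.

```python
# Reciprocal rank fusion (RRF): merge ranked lists of doc IDs.
# Each document scores sum(1 / (k + rank)) across the lists it appears in,
# so items ranked well by multiple retrievers float to the top.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked doc-id lists into one fused ranking."""
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.__getitem__, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]   # e.g. ORDER BY embedding <=> :query
keyword_hits = ["doc_b", "doc_d", "doc_a"]  # e.g. ts_rank over a tsvector column
print(rrf_fuse([vector_hits, keyword_hits]))  # → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

The fused list then goes to a reranker; `k = 60` is the conventional damping constant from the RRF literature.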
If you are hiring for a senior, staff, or principal-level AI engineering role, this page is the short version. My strongest fit is hands-on work across RAG systems, LangGraph-based orchestration platforms, real-time voice agents, eval infrastructure, and production hardening.
Southern California, remote. Hands-on builder with architecture depth and team leadership experience.
The strongest signal is the work itself: retrieval pipelines, orchestration layers, guarded voice systems, and evaluation loops that hold up outside of demo conditions.
Six-stage pipeline with injection detection, hybrid retrieval, confidence gating, verification, citation validation, and telemetry across PostgreSQL and Langfuse.
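The shape of that staged pipeline can be sketched as a sequence of gates over shared state, where any stage can fail closed and short-circuit to a refusal. Stage names, the stub retrieval, and the threshold below are assumptions for illustration, not the production implementation.

```python
# Staged, fail-closed pipeline sketch: each stage enriches the state or
# blocks the request; a blocked request never reaches generation.
from dataclasses import dataclass, field

@dataclass
class PipelineState:
    query: str
    docs: list = field(default_factory=list)
    confidence: float = 0.0
    blocked: bool = False

def detect_injection(state: PipelineState) -> PipelineState:
    # Stage 1 (toy heuristic): fail closed on suspicious input.
    if "ignore previous instructions" in state.query.lower():
        state.blocked = True
    return state

def retrieve(state: PipelineState) -> PipelineState:
    # Stage 2: hybrid retrieval, stubbed here with a fixed hit + confidence.
    state.docs, state.confidence = ["doc_1"], 0.9
    return state

def gate_confidence(state: PipelineState, threshold: float = 0.5) -> PipelineState:
    # Stage 3: refuse rather than answer from weak evidence.
    if state.confidence < threshold:
        state.blocked = True
    return state

def run(query: str) -> str:
    state = PipelineState(query=query)
    for stage in (detect_injection, retrieve, gate_confidence):
        state = stage(state)
        if state.blocked:          # short-circuit: fail closed
            break
    return "refused" if state.blocked else f"grounded answer from {state.docs}"
```

Verification, citation validation, and telemetry would slot in as further stages on the same state object, which is also what makes the loop easy to trace per-stage in Langfuse.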
Hands-on daily in modern AI tooling, but with enough delivery and leadership depth to reason about production systems end to end.
I am strongest when the role expects both implementation and judgment: building the system, pressure-testing the architecture, and helping a team avoid predictable mistakes in retrieval, evals, releases, and observability.
FastAPI, Python, orchestration, evals, telemetry, provider integration, and production hardening are all normal daily work.
Bias toward fail-closed controls, regression discipline, fallback logic, and instrumentation rather than demo-only optimism.
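"Fail-closed" in the tool-governance sense means a tool call is denied unless a permission is explicitly granted, never allowed by omission. A minimal sketch of that deny-by-default shape, with made-up role and tool names:

```python
# Deny-by-default tool gate: a (role, tool) pair must be explicitly
# allowlisted, otherwise the call fails closed with an exception.

ALLOWED_TOOLS: dict[str, set[str]] = {
    "analyst": {"search_docs"},
    "admin": {"search_docs", "delete_record"},
}

class ToolDenied(Exception):
    """Raised when a role attempts a tool it is not allowlisted for."""

def call_tool(role: str, tool: str, run):
    # Unknown roles get an empty set, so they are denied everything.
    if tool not in ALLOWED_TOOLS.get(role, set()):
        raise ToolDenied(f"{role!r} may not call {tool!r}")
    return run()

print(call_tool("analyst", "search_docs", lambda: "ok"))  # → ok
# call_tool("analyst", "delete_record", lambda: ...) raises ToolDenied
```

The design choice is that forgetting to register a permission produces a visible denial, not a silent capability.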
I have led teams, but the best current fit is still hands-on senior IC or IC-plus work where technical depth matters every week.
For strong fits, the fastest way to start is sharing the job description, the team context, and the kind of system you need built or stabilized.