I architect systems where security isn’t an afterthought; it’s in the DNA. As a founding engineer at Realm, I’ve spent the last year hardening AI deployments for enterprises that can’t afford to gamble with model integrity or data leaks. Think zero-trust pipelines for autonomous agents, adversarial testing frameworks that evolve faster than the threats they target, and embedding-driven access controls that make traditional RBAC feel like a rusty padlock.
My work sits at the messy intersection of ML scalability and paranoia. Before breaking things for a living, I reverse-engineered cloud vulnerabilities at Oracle, optimized quantum circuit visualizations for Google’s Cirq, and contributed to tools like Dask and LLVM’s MLIR, projects where performance gains are measured in orders of magnitude, not percentages. Academia? A sandbox for blending theory with grit: I published research on closing bias loops in GenAI systems at CHI ’24 and mapped AI attack surfaces onto MITRE’s frameworks, because “move fast and break things” only works if you know how to glue them back together.
Open to: engineering roles where “secure by design” isn’t a buzzword, and leaders who treat AI safety as a first-class problem, not a compliance checkbox.