ai researcher, connecting agents to humans
building at the intersection of machine learning research and systems engineering. currently focused on multimodal llms, agentic infrastructure, and retrieval-augmented generation. previously interned at microsoft, working on the spark native execution engine in azure data. open-source contributor to transformers, langchain, and more.
software engineer intern
microsoft · azure data
worked on the azure spark native execution engine (nee) using c++, scala, velox, and gluten. integrated fuzz-testing pipelines, improved operator reliability, and enhanced ci/cd diagnostics for large-scale distributed sql execution.
undergraduate ml researcher
bennett university
researched recommender systems and computer vision using pytorch and tensorflow. improved ndcg/mrr on matrix factorization models and benchmarked cnn/transformer architectures for emotion recognition on fer2013.
privacy-first clipboard history manager with fuzzy search and aes-256 encryption. open-source.
high-throughput log ingestion system handling 50k+ logs/sec using go concurrency and kafka.
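the core of a pipeline like this is go's fan-out pattern: a buffered channel feeding a fixed worker pool. a minimal sketch, with the channel standing in for a kafka partition and the function name `ingest` purely illustrative (not from the actual project):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// ingest fans log lines out to a fixed pool of workers over a buffered
// channel, a simplified stand-in for consuming a kafka partition.
func ingest(logs []string, workers int) int64 {
	lines := make(chan string, 1024) // buffer absorbs producer bursts
	var processed int64
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range lines { // drain until the channel is closed
				// real parsing/indexing work would go here
				atomic.AddInt64(&processed, 1)
			}
		}()
	}

	for _, l := range logs {
		lines <- l
	}
	close(lines)
	wg.Wait()
	return processed
}

func main() {
	logs := make([]string, 50000)
	fmt.Println(ingest(logs, 8))
}
```

bounding the channel applies backpressure: when workers fall behind, the producer blocks instead of the process growing without limit.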
end-to-end workflow system integrating azure openai and ai search for enterprise-grade automation.
generative models (gan/vae) to produce smooth volatility surfaces for option pricing.
fine-tuned pegasus on aeslc for abstractive summarization in low-resource email domains.
optimal path algorithms using dijkstra's and custom regression models for yield prediction.
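for reference, a compact dijkstra's over a weighted adjacency list, the standard priority-queue formulation; the tiny graph in `main` is made up for illustration:

```go
package main

import (
	"container/heap"
	"fmt"
	"math"
)

type edge struct {
	to     int
	weight float64
}

// item is a priority-queue entry: a node plus its tentative distance.
type item struct {
	node int
	dist float64
}

type pq []item

func (p pq) Len() int            { return len(p) }
func (p pq) Less(i, j int) bool  { return p[i].dist < p[j].dist }
func (p pq) Swap(i, j int)       { p[i], p[j] = p[j], p[i] }
func (p *pq) Push(x interface{}) { *p = append(*p, x.(item)) }
func (p *pq) Pop() interface{} {
	old := *p
	n := len(old)
	it := old[n-1]
	*p = old[:n-1]
	return it
}

// dijkstra returns shortest-path distances from src to every node.
func dijkstra(adj [][]edge, src int) []float64 {
	dist := make([]float64, len(adj))
	for i := range dist {
		dist[i] = math.Inf(1)
	}
	dist[src] = 0
	q := &pq{{src, 0}}
	for q.Len() > 0 {
		cur := heap.Pop(q).(item)
		if cur.dist > dist[cur.node] {
			continue // stale entry, already relaxed via a shorter path
		}
		for _, e := range adj[cur.node] {
			if nd := cur.dist + e.weight; nd < dist[e.to] {
				dist[e.to] = nd
				heap.Push(q, item{e.to, nd})
			}
		}
	}
	return dist
}

func main() {
	// edges: 0->1 (4), 0->2 (1), 2->1 (2); shortest 0->1 is 3
	adj := [][]edge{
		{{1, 4}, {2, 1}},
		{},
		{{1, 2}},
	}
	fmt.Println(dijkstra(adj, 0)) // [0 3 1]
}
```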
sequence-to-sequence model using encoder-decoder transformers for document summarization.
formulated memory interference as a failure mode in llm-based multi-agent systems, designing architectural variants for controlled retrieval scoping.
designed a controlled evaluation framework to quantify output stability of llms under stochastic decoding conditions across 1,500+ tasks.
researching methods to improve diversity, relevance, and ranking stability in recommendation pipelines using llm-driven approaches.
benchmarked cnn and transformer architectures for emotion recognition on fer2013 dataset, analysing accuracy-efficiency tradeoffs.
improved ndcg/mrr metrics on matrix factorization models through novel regularization and training strategies.
your brain already solved the problem ai agents are struggling with
multi-agent memory architecture mirrors 500 million years of neural evolution. your brain doesn't blend kitchen and living room memories into "memory soup" — multi-agent ai systems are converging on the same solution.
should models always finish their sentences? the case for explicit hesitation
the industry has spent three years optimizing for fluency, creating the world's most confident liars. hallucination is a structural byproduct of our objective functions, not a bug to patch with more data.