- Built reproducible ML pipelines with Airflow, MLflow, Docker, and Kubernetes, reducing onboarding time for new engineers and teams (see the pipeline sketch after this list).
- Designed cloud-native ML architectures on Azure integrating Cognitive Services, Azure ML, Functions, API Management, and secure VNet deployments.
- Led LLM and GenAI capability-building across teams using Hugging Face, LangChain, and Azure OpenAI.
- Developed end-to-end deployment frameworks with CI/CD and automated validation for model releases.
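A minimal sketch of the kind of Airflow-plus-MLflow pipeline described in the bullets above, assuming Airflow 2.x and a reachable MLflow tracking server; the DAG name, schedule, tracking URI, and logged values are illustrative placeholders, not the production setup.

```python
# Illustrative sketch only: DAG id, schedule, and the training step are hypothetical.
from datetime import datetime

import mlflow
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def training_pipeline():
    @task
    def train():
        # Log parameters and metrics so every scheduled run is reproducible and auditable.
        mlflow.set_tracking_uri("http://mlflow:5000")  # assumed in-cluster service name
        mlflow.set_experiment("daily-training")
        with mlflow.start_run():
            mlflow.log_param("model_type", "random_forest")
            mlflow.log_metric("val_accuracy", 0.93)  # placeholder metric value

    train()


training_pipeline()
```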
Designing ML systems that behave like reliable infrastructure.
MLOps Specialist and AI/ML Engineer building reproducible pipelines, LLM systems, and cloud-native architectures with measurable impact across latency, reliability, and cost.
From aircraft to AI systems
Background in aircraft maintenance and engineering, now designing robust MLOps and GenAI systems with the same discipline for reliability, safety, and traceability.
I build and operate production-grade AI, ML, and MLOps systems end-to-end—from data ingestion and experimentation to deployment, monitoring, and cost optimization.
Currently at Futurense Technologies, enabling teams to ship reliable ML pipelines on Azure with MLflow, Airflow, Kubernetes, and CI/CD. Previously at Boven Technologies, delivering ML and LLM solutions across classification, prediction, and NLP workloads.
Before AI/ML, I maintained Airbus A320 aircraft at Air India, which shaped how I think about safety, observability, and operational excellence—principles I now apply to ML infrastructure.
Owning ML systems end-to-end
Roles focused on building reproducible pipelines, GenAI capabilities, and cloud-native architectures with strong reliability and governance requirements.
- Delivered ML systems using TensorFlow, PyTorch, scikit-learn, and PySpark across classification, prediction, and NLP pipelines.
- Built and fine-tuned LLMs for summarization, Q&A, and internal automation, reducing manual workload for clients.
- Engineered MLflow-based lifecycle management with GitHub Actions and Docker for consistent builds and rollback safety.
- Created FastAPI and Streamlit applications for real-time model access and stakeholder demos.
- Implemented SHAP and LIME analysis pipelines to meet responsible AI requirements (see the interpretability sketch after this list).
- Performed scheduled and unscheduled maintenance on Airbus A320 aircraft, specializing in CFM56-5B engines, APUs, and hydraulic systems.
- Conducted inspections per DGCA and EASA standards, ensuring compliance and airworthiness.
- Authored diagnostic reports and coordinated technical documentation audits to improve operational efficiency.
- Mentored robotics teams, guiding interdisciplinary projects and fostering innovation.
- Served as Campus Recruitment Officer, coordinating industry outreach and student placements.
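A minimal sketch of the SHAP-based interpretability step referenced in the responsible AI bullet above; the model and data are synthetic stand-ins for the actual client workloads.

```python
# Illustrative only: synthetic data stands in for the real classification workload.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy training data with two informative features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer produces per-feature attributions for each prediction,
# which can be logged alongside the model for responsible-AI review.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print(np.array(shap_values).shape)  # attributions across classes, samples, and features
```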
Systems built for production, not demos
Selected projects that demonstrate end-to-end thinking—from data and experimentation to deployment, monitoring, and security in real environments.
Azure Lakehouse: End-to-End Data & AI System
Designed a Medallion-architecture pipeline (Bronze → Silver → Gold) with incremental, metadata-driven ingestion, transformation, and quality enforcement ready for downstream analytics and AI workloads.
→ Automated metadata-driven pipelines delivering ~60% reduction in manual engineering effort.
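A minimal sketch of the metadata-driven, incremental Bronze-to-Silver load described above, assuming PySpark with Delta Lake available; the storage paths, control rows, and column names are hypothetical.

```python
# Illustrative sketch: paths, table names, and the control metadata are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Each row of the control metadata drives one incremental load.
control_rows = [
    {"source_path": "abfss://bronze@lake.dfs.core.windows.net/sales",
     "target_table": "silver.sales",
     "watermark_col": "ingested_at"},
]

for meta in control_rows:
    df = spark.read.format("delta").load(meta["source_path"])

    # Incremental filter: only rows newer than the last processed watermark.
    # Assumes the Silver table already exists; first-run bootstrap is omitted here.
    last_watermark = spark.table(meta["target_table"]) \
        .agg(F.max(meta["watermark_col"])).first()[0]
    if last_watermark is not None:
        df = df.filter(F.col(meta["watermark_col"]) > F.lit(last_watermark))

    # Basic quality enforcement before promotion to Silver ("order_id" is a hypothetical key).
    df = df.dropDuplicates().filter(F.col("order_id").isNotNull())

    df.write.format("delta").mode("append").saveAsTable(meta["target_table"])
```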
Full MLOps Pipeline: Azure · Airflow · MLflow · AKS
Architected a production-grade MLOps workflow with experiment tracking, data versioning, orchestrated training, containerized deployment, and metrics-driven monitoring across environments.
→ Blue-green deployments, automated model promotion, and monitoring for drift, latency, and throughput.
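A minimal sketch of an automated model-promotion gate in the spirit of the blue-green workflow above, assuming an MLflow Model Registry (2.3+ for aliases); the model name, metric, threshold, and alias are placeholders rather than the project's actual policy.

```python
# Illustrative sketch: model name, metric, threshold, and alias are placeholders.
from mlflow.tracking import MlflowClient

client = MlflowClient()
MODEL_NAME = "churn-classifier"   # hypothetical registered model
CANDIDATE_VERSION = "3"           # version produced by the latest training run

# Read the candidate's logged validation metric from its source run.
version = client.get_model_version(MODEL_NAME, CANDIDATE_VERSION)
run = client.get_run(version.run_id)
val_accuracy = run.data.metrics.get("val_accuracy", 0.0)

# Promote only if the candidate clears the quality gate; the alias swap is the
# blue-green cutover point that serving infrastructure resolves at load time.
if val_accuracy >= 0.90:
    client.set_registered_model_alias(MODEL_NAME, "production", CANDIDATE_VERSION)
else:
    print(f"Candidate rejected: val_accuracy={val_accuracy:.3f} below threshold")
```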
Enterprise-Grade RAG System on Azure
Built a Retrieval-Augmented Generation system using Azure OpenAI, embeddings, Azure AI Search, and LangChain, with Redis caching and Azure AD-based access control for enterprise document corpora.
→ Low-latency retrieval with secure role-based access and reranking for higher answer relevance.
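A minimal sketch of the retrieval-and-generation path with a Redis answer cache, using the Azure OpenAI client directly; the deployment name, the search_documents retriever, and the cache policy are hypothetical stand-ins for the Azure AI Search and access-control pieces.

```python
# Illustrative sketch: deployment name, retriever, and cache settings are hypothetical.
import hashlib
import os

import redis
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def search_documents(question: str) -> list[str]:
    """Hypothetical retriever: in the real system this queries Azure AI Search."""
    return ["<retrieved chunk 1>", "<retrieved chunk 2>"]


def answer(question: str) -> str:
    key = "rag:" + hashlib.sha256(question.encode()).hexdigest()
    if (cached := cache.get(key)) is not None:
        return cached  # cache hit avoids a round trip to the LLM

    context = "\n\n".join(search_documents(question))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder Azure deployment name
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    text = response.choices[0].message.content
    cache.set(key, text, ex=3600)  # cache answers for one hour
    return text
```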
Model Deployment & Monitoring Framework
Implemented a framework for deploying ML models via containerized APIs with CI/CD, automated tests, and responsible AI components for interpretability and monitoring.
→ Added explainability, regression checks, and visibility into performance for every deployed model.
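A minimal sketch of a containerized prediction endpoint with a health probe and per-request latency logging, in the spirit of the framework above; the model artifact, input schema, and route names are illustrative assumptions.

```python
# Illustrative sketch: the model path, input schema, and logging are placeholders.
import logging
import time

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

logger = logging.getLogger("model-api")
app = FastAPI(title="model-serving")
model = joblib.load("model.pkl")  # hypothetical artifact baked into the container image


class PredictRequest(BaseModel):
    features: list[float]


@app.get("/healthz")
def healthz() -> dict:
    # Liveness probe target for Kubernetes.
    return {"status": "ok"}


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    start = time.perf_counter()
    prediction = model.predict([req.features])[0]
    latency_ms = (time.perf_counter() - start) * 1000
    # Emit latency so the monitoring stack can track it per request.
    logger.info("prediction latency_ms=%.2f", latency_ms)
    return {"prediction": float(prediction), "latency_ms": round(latency_ms, 2)}
```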
Tools used in real systems
Focused on technologies that ship reliably in production: MLOps tooling, cloud services, LLM frameworks, and observability platforms.
Let’s talk about ML systems
Open to senior engineering roles focused on MLOps, GenAI, and production ML systems, especially where reliability, scale, and clarity of ownership matter.