Data + infrastructure

Data and infrastructure work inside real deployments, with real operating constraints

This is the delivery layer behind the application work: platform architecture, lakehouse modeling, orchestration, observability, cloud boundaries, and migration work that has to stay reliable once people actually depend on it.

Production systems, not side work

Data, orchestration, observability, and infrastructure

Built for operators, not demo screenshots

What I usually help with

Architecture decisions that survive rollout and production use

I am usually brought in where deployments need stronger shape: unclear ingestion boundaries, fragile orchestration, weak observability, shaky cloud interfaces, or legacy pipelines that are too risky to replace without a plan.

Talk about a deployment review

Streaming + ingestion systems

Design event-driven ingestion that keeps real-time and scheduled sources inside one platform contract.

Kafka, ingestion contracts, replay safety
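An ingestion contract, concretely, is a set of checks an event must pass before the platform accepts it. A minimal sketch of that idea in plain Python — the field names, types, and range rule here are hypothetical, not taken from any real client schema:

```python
from dataclasses import dataclass, field

# Hypothetical contract for a wearable telemetry event.
# Field names, types, and thresholds are illustrative only.
REQUIRED_FIELDS = {"device_id": str, "event_ts": int, "heart_rate": int}


@dataclass
class ContractResult:
    ok: bool
    errors: list = field(default_factory=list)


def validate_event(event: dict) -> ContractResult:
    """Check an incoming event against the ingestion contract
    before it is allowed into the platform."""
    errors = []
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in event:
            errors.append(f"missing field: {name}")
        elif not isinstance(event[name], expected_type):
            errors.append(f"bad type for {name}")
    # Range check: reject obviously corrupt readings instead of
    # letting them poison downstream tables.
    hr = event.get("heart_rate")
    if isinstance(hr, int) and not 20 <= hr <= 250:
        errors.append("heart_rate out of range")
    return ContractResult(ok=not errors, errors=errors)
```

The same shape applies whether the event arrives from a Kafka topic or a scheduled batch load — one contract, two entry points.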

Lakehouse + modeling

Shape Bronze/Silver/Gold and Delta-backed pipelines so raw inputs become trusted analytical tables.

Delta Lake, Spark, governed promotion
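In Delta/Spark terms a promotion is a job, but the gate itself is simple logic. A minimal stand-in (plain Python, no Spark dependency) for deciding whether a table may move up a medallion layer — the thresholds are illustrative defaults, not production values:

```python
def can_promote(row_count: int, null_rate: float, age_hours: float,
                min_rows: int = 1, max_null_rate: float = 0.05,
                freshness_hours: float = 24.0) -> tuple[bool, list]:
    """Governed promotion gate: a dataset moves up a layer
    (e.g. Silver -> Gold) only if it passes volume, quality,
    and freshness checks."""
    failures = []
    if row_count < min_rows:
        failures.append("empty or truncated load")
    if null_rate > max_null_rate:
        failures.append("null rate above threshold")
    if age_hours > freshness_hours:
        failures.append("data too stale to promote")
    return (not failures, failures)
```

In a Delta-backed pipeline the same checks run against table metrics before the promotion job writes to the next layer.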

Orchestration + observability

Build for freshness, backfills, dependencies, and on-call clarity instead of shipping opaque DAGs.

Airflow, SLAs, monitoring, runbooks

Governance + compliance

Embed RBAC, lineage, auditability, and publishing gates where data changes state.

Healthcare-sensitive workflows
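A publishing gate in this sense is a choke point where role and audit checks run before data changes state. A minimal hypothetical version — the role names are illustrative, and in practice the role table comes from the platform's IAM, not a dict:

```python
# Hypothetical role table; illustrative only.
ROLE_CAN_PUBLISH = {"data_steward": True, "analyst": False}

audit_log: list[dict] = []


def publish(dataset: str, user: str, role: str) -> bool:
    """Allow a dataset state change only for authorized roles,
    and record an audit entry either way."""
    allowed = ROLE_CAN_PUBLISH.get(role, False)
    audit_log.append({"dataset": dataset, "user": user,
                      "role": role, "allowed": allowed})
    return allowed
```

The point is that denial is also audited: a compliance review needs the attempts, not just the successes.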

Modernization + migration

Move legacy pipelines without breaking reporting by using hybrid cutovers and validation paths.

Cloud migration, parallel runs, API modernization
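A parallel run comes down to reconciling legacy and modern outputs before cutover. A hypothetical sketch of that validation step, comparing keyed metric outputs from both pipelines:

```python
def reconcile(legacy_rows: dict, modern_rows: dict,
              tolerance: float = 0.0) -> dict:
    """Compare keyed outputs of the legacy and modern pipelines.
    Cutover proceeds only when nothing is missing, extra,
    or mismatched beyond tolerance."""
    missing = sorted(set(legacy_rows) - set(modern_rows))
    extra = sorted(set(modern_rows) - set(legacy_rows))
    mismatched = sorted(
        k for k in set(legacy_rows) & set(modern_rows)
        if abs(legacy_rows[k] - modern_rows[k]) > tolerance
    )
    return {"missing": missing, "extra": extra, "mismatched": mismatched,
            "safe_to_cut_over": not (missing or extra or mismatched)}
```

Running this per reporting table, per day, is what makes "without breaking reporting" a verified claim rather than a hope.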

Case studies

Systems broken down by ownership, tradeoffs, and operational reality

Each case study is structured around the business problem, what I owned, the architectural choices that mattered, and what changed after rollout.

Sanius Health · Nov 2023 — Present

Healthcare cloud data platform

Unified wearable event streams and scheduled operational feeds into one governed platform for analytics and product reporting.

Role

Senior data engineer

Scope

Wearable telemetry plus operational batch sources

Ownership

Platform architecture, ingestion contracts, transformation design, and operational standards.

Reliability

Freshness gates, contract validation, and shared incident runbooks

Signals

  • Platform footprint: One shared platform contract for streaming wearable and scheduled operational data.
  • Delivery model: Cross-functional standards coordinated across a four-engineer team.

Streaming ingestion architecture · Lakehouse platform design · Cross-functional platform standards
View case study
Sanius Health · Nov 2023 — Present

Clinical + wearable medallion pipeline

Built a Bronze/Silver/Gold pipeline for clinical and wearable data so sensitive analytics could move from raw capture to decision-ready models with governance built in.

Role

Data engineer

Scope

Sensitive clinical and wearable datasets

Ownership

Medallion modeling strategy, governance controls, and promotion criteria for sensitive analytics datasets.

Reliability

Layer freshness checks, lineage visibility, RBAC-aligned access

Signals

  • Trust model: Bronze/Silver/Gold became the standard promotion path for sensitive analytics data.
  • Compliance posture: RBAC, lineage, and audit controls aligned with HIPAA/GDPR-oriented operating requirements.
Medallion architecture · Data quality gates · Governance + compliance controls
View case study
Experience Flow Software Tech Pvt Ltd · Apr 2021 — Nov 2023

Legacy-to-cloud data modernization

Modernized fragmented legacy pipelines into a cloud-oriented data platform without breaking reporting during the transition.

Role

Data engineer

Scope

Legacy services, scheduled pipelines, and event-driven workloads

Ownership

Orchestration modernization, migration safety, and interoperability between legacy, batch, and real-time systems.

Reliability

Dependency-aware runs, migration validation, and SLA monitoring

Signals

  • Migration safety: Legacy and modern flows ran in parallel during cutover so reporting continuity was protected.
  • Operational resilience: Airflow-based backfill, SLA, and dependency controls formalized production operations.
Legacy modernization · Airflow orchestration · Batch + streaming interoperability
View case study