I'm a passionate Data Engineer with extensive experience building and optimizing data workflows and pipelines. With a background in computer science, I specialize in creating efficient data solutions that deliver measurable results.
- Cost Savings: Developed a custom Python-based DAG orchestration framework, resulting in $200k annual savings.
- Efficiency in CI/CD: Established a comprehensive CI/CD pipeline for Snowflake in Azure DevOps, ensuring robust SQL syntax and metadata validation.
- Manual Effort Reduction: Created an XML parser in Azure Databricks (PySpark), significantly reducing the manual effort of schema inference and data flattening.
- Performance Improvement: Optimized the existing ELT framework for Snowflake, cutting data load time by 70%.
- Processing Speed: Built a Python library that uses multiprocessing to parse 2,300 files in roughly 180 seconds (a minimal sketch follows this list).
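For flavor, here's the general fan-out pattern behind that library, as a minimal sketch. Everything in it is illustrative: `parse_file` is a placeholder, and the `data/` directory and `*.xml` glob are hypothetical inputs, not the actual parsing logic.

```python
from multiprocessing import Pool
from pathlib import Path

def parse_file(path):
    # Placeholder parser: the real library does the XML parsing here.
    return path.name, len(path.read_text(errors="ignore"))

if __name__ == "__main__":
    files = list(Path("data").glob("*.xml"))  # hypothetical input directory
    with Pool() as pool:                      # defaults to one worker per CPU core
        results = pool.map(parse_file, files)
    print(f"parsed {len(results)} files")
```

Because parsing is CPU-bound, `Pool` spreads the files across one worker process per core, which is where most of the speed-up comes from.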
- Programming Languages: Python, Snowflake SQL, Bash Scripting
- Tools & Technologies: Azure DevOps, Dask, Control-M, Pandas, PySpark, Snowpark
- Data Engineering: ETL, ELT, Databricks, CI/CD, XML Parsing
B.Tech. in Computer Science
Institute of Engineering & Management, Kolkata
Jul 2016 – Jul 2020 | GPA: 7.97/10
Data Engineer (Senior Analyst)
EY | Kolkata, India
Jul 2022 – Present
- Developed a Python-based custom DAG orchestration framework with runtime restructuring, saving $200k annually (see the orchestration sketch after this section).
- Established a comprehensive CI/CD pipeline for Snowflake in Azure DevOps.
- Developed an XML parser in Azure Databricks (PySpark) to reduce manual schema-inference effort (see the flattening sketch after this section).
- Optimized the ELT framework for parallel, deadlock-free data loads in Snowflake, reducing load time by 70%.
- Created a Python library that parses 2,300 files in around 180 seconds using multiprocessing.
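The orchestration and XML-parser bullets above reference internal tooling, so the two sketches below show only the general shape of each idea, not the production code. Dependency-ordered execution is the core of any DAG orchestrator; this minimal sketch uses Kahn's topological sort, with `Task` and `run_dag` as hypothetical names. The runtime-restructuring layer of the real framework is omitted.

```python
from collections import deque

class Task:
    """One node in the task graph (hypothetical; not the production class)."""
    def __init__(self, name, func, depends_on=()):
        self.name = name
        self.func = func
        self.depends_on = list(depends_on)

def run_dag(tasks):
    """Run tasks in dependency order using Kahn's topological sort."""
    by_name = {t.name: t for t in tasks}
    indegree = {t.name: len(t.depends_on) for t in tasks}
    children = {t.name: [] for t in tasks}
    for t in tasks:
        for dep in t.depends_on:
            children[dep].append(t.name)
    ready = deque(name for name, deg in indegree.items() if deg == 0)
    while ready:
        name = ready.popleft()
        by_name[name].func()  # run the task once all its dependencies are done
        for child in children[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if any(deg > 0 for deg in indegree.values()):
        raise ValueError("cycle detected in task graph")

# Usage: a three-stage extract -> transform -> load chain.
run_dag([
    Task("extract", lambda: print("extract")),
    Task("transform", lambda: print("transform"), depends_on=["extract"]),
    Task("load", lambda: print("load"), depends_on=["transform"]),
])
```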
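The XML-parser work involves flattening the nested structs that Spark's schema inference produces. Here is a minimal sketch of that step, assuming the DataFrame has already been loaded (for example via the spark-xml reader); array columns, which would need `explode` first, are left out, and `flatten_structs` is a hypothetical name.

```python
from pyspark.sql import DataFrame, functions as F
from pyspark.sql.types import StructType

def flatten_structs(df: DataFrame) -> DataFrame:
    """Promote nested struct fields to top-level columns, repeating until flat."""
    while True:
        struct_cols = [f.name for f in df.schema.fields
                       if isinstance(f.dataType, StructType)]
        if not struct_cols:
            return df
        kept = [c for c in df.columns if c not in struct_cols]
        promoted = [
            F.col(f"{sc}.{child.name}").alias(f"{sc}_{child.name}")
            for sc in struct_cols
            for child in df.schema[sc].dataType.fields
        ]
        # Replace each struct column with its promoted children and loop,
        # since the children may themselves be structs.
        df = df.select(kept + promoted)
```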
I'm always open to discussing new projects, ideas, or opportunities to work together. Feel free to reach out!