
Data Pipelines


Large Dataset Processing and Data Insights



Extracting insights from large datasets posed challenges in parallelization, scalability, and data quality assurance. Transforming raw data into refined aggregations at this scale demanded carefully optimized processing strategies.


We employed the Spark Scala DataFrame API for efficient data processing, orchestrated the jobs through Apache Airflow, and ran them on Amazon EMR and EMR on EKS. Our optimization strategies included tuning partition sizes and shuffle parameters and analyzing query execution plans. Workload analysis guided the selection of optimal cluster configurations. Data quality was ensured through rigorous unit testing, row-level checks, and the Deequ framework.
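A minimal Scala sketch of how such a job might combine shuffle tuning, plan inspection, and a Deequ quality gate. The session settings, S3 path, and column names (`customer_id`, `amount`) are illustrative assumptions, not the tuned production values:

```scala
import org.apache.spark.sql.SparkSession
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}

// Hypothetical session setup; partition counts are workload-dependent
// and would be tuned per job, not fixed at these values.
val spark = SparkSession.builder()
  .appName("aggregation-job")
  .config("spark.sql.shuffle.partitions", "400")  // match shuffle width to data volume
  .config("spark.sql.adaptive.enabled", "true")   // let AQE coalesce small partitions
  .getOrCreate()

val raw = spark.read.parquet("s3://example-bucket/raw/events/")  // hypothetical path

// Inspect the physical plan before committing to a configuration.
val aggregated = raw.groupBy("customer_id").count()
aggregated.explain(true)

// Row-level quality gate with Deequ before publishing the aggregation.
val verification = VerificationSuite()
  .onData(raw)
  .addCheck(
    Check(CheckLevel.Error, "raw event integrity")
      .isComplete("customer_id")   // no null keys
      .isNonNegative("amount"))    // no negative amounts
  .run()

if (verification.status != CheckStatus.Success) {
  throw new RuntimeException("Data quality checks failed; aborting publish")
}
```

Failing the job on a check failure keeps bad data out of downstream aggregations, which is the usual pattern when Deequ runs inside an Airflow-orchestrated pipeline.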


The implementation led to significantly enhanced data processing performance and scalability. Optimized cluster management and data quality assurance resulted in reliable, accurate data insights, proving crucial for effective decision-making and analytics operations. This comprehensive approach streamlined data transformation, ensuring high-quality outputs and efficient resource utilization: one job that previously required 1,000 cores for 20 minutes now completes on 35 cores in 5 minutes.
