Data Engineer - Technical Lead
EXP: 8 to 12 Years
Location: Bangalore/Hyderabad
Job description:
5+ years of experience with Databricks, including PySpark, Delta Lake, notebooks, and workflow orchestration.
3+ years of experience in SQL, covering advanced query optimization, data modeling, and analytical querying.
2+ years of experience in designing and building Power BI dashboards, semantic data models, and DAX calculations.
2+ years of experience developing Azure Data Factory (ADF) pipelines, data flows, and integration solutions.
3+ years of experience architecting scalable data pipelines, ETL/ELT processes, and lakehouse‑based data solutions.
3+ years of expertise in data quality frameworks, metadata management, and performance tuning for large‑scale workloads.
3+ years of experience working in Agile environments with Git, CI/CD, and collaborative development practices.
Responsibilities:
Lead the design and implementation of end‑to‑end data engineering and analytics solutions using Databricks, SQL, ADF, and Power BI.
Architect and optimize data pipelines, ETL/ELT workflows, and lakehouse structures for scalability and performance.
Guide the development team with best practices in coding, data modeling, version control, DevOps, and testing.
Review technical designs, code, and deployment processes to ensure enterprise-quality standards.
Work with business teams to translate analytical needs into data pipelines, models, and visualization layers.
Lead the development of Power BI dashboards, ensuring accuracy, performance, and effective user experience.
Troubleshoot issues across ingestion, transformation, and reporting layers; ensure stable production operations.
Collaborate with architects, business SMEs, and data governance teams for solution alignment.
Mentor junior engineers and analysts, supporting technical growth and delivery excellence.