IN-Manager_ Azure DE + GCP _Data and Analytics_Advisory
PwC Service Delivery Center
Location: Bengaluru
Experience: 5-9 years
Salary: Not Disclosed
Employment Type: Full time
Date: 01 May 2026
Top Skills:
Data Quality, Data Management, Machine Learning, DBMS, Data Modeling, Assurance, Business Intelligence, Data Processing, Analytical, Database Administration, Access Control, AI, Apache Airflow, Azure, BigQuery, CI/CD Pipeline, Cloud, Collection, Cost Optimization, Data Access, Data Analytics, Data Pipeline, Data Warehousing, Databricks, Dataflow, Enterprise, GCP, Governance, Metadata Management, Performance Tuning, Pipeline, Production Deployment, PySpark, Python, Scala, Spark

Job Description
Line of Service
Advisory
Industry/Sector
Not Applicable
Specialism
Data, Analytics & AI
Management Level
Manager
Job Description Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines using Python, PySpark, and Spark (Scala/PySpark) across Azure and GCP platforms.
  • Build and manage data workflows using Databricks Workflows and Apache Airflow / Cloud Composer DAGs.
  • Develop and optimize Delta Lake tables on Databricks, ensuring data reliability, performance, and governance.
  • Implement and manage Databricks Unity Catalog for data access control and metadata management.
  • Work with BigQuery for large-scale data warehousing and analytics.
  • Develop event-driven and batch data processing solutions using Pub/Sub, Cloud Dataflow, and Cloud Functions.
  • Implement ML pipelines on Databricks, including experimentation tracking using MLflow.
  • Collaborate with data scientists, analytics teams, and business stakeholders to deliver end-to-end data solutions.
  • Ensure best practices for data quality, security, cost optimization, and performance tuning.
  • Support CI/CD pipelines and production deployments for data engineering workloads.
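As a rough illustration of the data-quality responsibilities above, here is a minimal, framework-agnostic sketch in plain Python. The schema, field names, and rejection reasons are hypothetical; in a real deployment this kind of gate would typically run as a PySpark transformation before writing to a curated Delta Lake table.

```python
REQUIRED_FIELDS = ("event_id", "event_time", "amount")  # hypothetical schema


def validate_batch(rows):
    """Split a batch of records into valid and rejected lists.

    A record is rejected if any required field is missing/None, or if its
    event_id has already been seen in this batch (deduplication). Rejected
    records are annotated with a "_reason" field so they can be routed to
    a quarantine table for inspection.
    """
    valid, rejected = [], []
    seen_ids = set()
    for row in rows:
        if any(row.get(f) is None for f in REQUIRED_FIELDS):
            rejected.append({**row, "_reason": "missing_field"})
        elif row["event_id"] in seen_ids:
            rejected.append({**row, "_reason": "duplicate"})
        else:
            seen_ids.add(row["event_id"])
            valid.append(row)
    return valid, rejected


batch = [
    {"event_id": 1, "event_time": "2025-01-01T00:00:00Z", "amount": 10.0},
    {"event_id": 1, "event_time": "2025-01-01T00:01:00Z", "amount": 12.5},
    {"event_id": 2, "event_time": None, "amount": 7.0},
]
valid, rejected = validate_batch(batch)
print(len(valid), len(rejected))  # 1 valid, 2 rejected
```

Keeping validation as a pure function like this makes the quality gate easy to unit-test in CI/CD before it is wired into a Databricks workflow or Airflow DAG.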
Mandatory skill sets:
  • Azure Databricks
  • Python, PySpark
  • Advanced Spark (PySpark/Scala)
  • Databricks Delta Tables
  • Databricks Workflows


Disclaimer: This job posting has been aggregated from an external source. Role details, content, and availability are subject to change. Applicants are advised to confirm the latest information directly on the company website before applying.