Data Engineer - Data Platforms - AWS
IBM
naukri
Pune
6-8 years
Not Disclosed
Full time
01 May 2026
Top Skills:
Redshift, Scala, Data Warehouse, AWS, Data Quality, Data Engineering, Lambda, Python, Apache Airflow, Spark, Analytics, Data Analytics, Data Platforms, Data Pipelines, Data Services, dbt, Data Warehousing, Apache Kafka, Aurora, AWS Glue, AWS Kinesis, Cloud, Data Preparation, Data Processing, Data Transformation, DMS, DynamoDB, EMR, Scheduling, Service Management



Job Description
As a Data Engineer specializing in Data Platforms on AWS, you will advise on, develop, and maintain data engineering solutions on the AWS Cloud ecosystem. You will design, build, and operate batch and real-time data pipelines using various AWS services.

Your primary responsibilities will include:

- Design and Develop Data Pipelines: Design, build, and operate batch and real-time data pipelines using AWS services such as AWS EMR, AWS Glue, Glue Catalog, and Kinesis, ensuring seamless integration and operation of data engineering solutions.
- Create Data Layers: Create data layers on Amazon Redshift, Aurora, and DynamoDB, and migrate data using AWS DMS.
- Manage Data Services: Schedule and manage data services on the AWS platform, ensuring efficient operation of data engineering solutions.
- Develop Batch and Real-time Pipelines: Develop batch and real-time data pipelines for the Data Warehouse and Data Lake, using AWS Kinesis and Amazon Managed Streaming for Apache Kafka.
- Utilize Open Source Technologies: Use open source technologies such as Apache Airflow and dbt, with Spark/Python or Spark/Scala, on the AWS platform to support data engineering solutions.
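To illustrate the data-quality and transformation work these pipeline responsibilities involve, here is a minimal plain-Python sketch of a record-level validation and staging step. The field names (event_id, user_id, event_ts) and the schema are hypothetical; in the role described, equivalent logic would typically run as a Spark job on AWS Glue or EMR before loading into Redshift.

```python
from datetime import datetime, timezone

# Hypothetical required fields for an incoming event record.
REQUIRED_FIELDS = {"event_id", "user_id", "event_ts"}

def validate(record: dict) -> bool:
    """Data-quality gate: all required fields present and non-empty."""
    return REQUIRED_FIELDS.issubset(record) and all(record[f] for f in REQUIRED_FIELDS)

def transform(record: dict) -> dict:
    """Normalize a raw event into a warehouse staging row."""
    return {
        "event_id": str(record["event_id"]),
        "user_id": str(record["user_id"]),
        "event_ts": record["event_ts"],
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }

def run_batch(raw_records: list[dict]) -> tuple[list[dict], int]:
    """Validate and transform a batch; return (clean rows, rejected count)."""
    clean = [transform(r) for r in raw_records if validate(r)]
    return clean, len(raw_records) - len(clean)
```

In a real pipeline the rejected count would feed a data-quality metric, and the clean rows would be written to a staging layer for the warehouse load.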

Required education
Bachelor's Degree
Preferred education
Master's Degree

Required technical and professional expertise
- AWS Toolset: Experience working with AWS services such as AWS EMR, AWS Glue, Glue Catalog, and Kinesis to design, build, and operate batch and real-time data pipelines.
- Data Pipeline Development: Exposure to developing batch and real-time data pipelines for the Data Warehouse and Data Lake, using AWS Kinesis and Amazon Managed Streaming for Apache Kafka.
- Data Layer Creation: Experience working with Amazon Redshift, Aurora, and DynamoDB to create data layers and migrate data using AWS DMS.
- Open Source Technologies: Exposure to open source technologies such as Apache Airflow and dbt, with Spark/Python or Spark/Scala, on the AWS platform.
- Data Service Management: Experience scheduling and managing data services on the AWS platform, ensuring efficient operation of data engineering solutions.

Preferred technical and professional experience
- Proficiency with AWS Glue DataBrew: Experience working with AWS Glue DataBrew to support data engineering solutions, including data preparation and data quality tasks.
- Knowledge of Lambda Functions: Exposure to using Lambda functions with Python to support data engineering solutions, including data processing and data transformation tasks.
- Familiarity with Redshift Spectrum: Experience working with Redshift Spectrum to support data engineering solutions, including data warehousing and data analytics tasks.
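As a sketch of the "Lambda functions with Python" item, the handler below consumes a standard Kinesis Data Streams event (records arrive base64-encoded under Records[i].kinesis.data) and applies a simple transformation. The payload fields (id, temp_c) and the Celsius-to-Fahrenheit rule are purely illustrative, not part of the posting.

```python
import base64
import json

def handler(event, context):
    """AWS Lambda handler sketch for a Kinesis-triggered transformation.

    Decodes each Kinesis record's base64 payload, applies a hypothetical
    unit conversion, and returns the transformed records.
    """
    out = []
    for rec in event["Records"]:
        payload = json.loads(base64.b64decode(rec["kinesis"]["data"]))
        # Illustrative transformation: Celsius -> Fahrenheit.
        payload["temp_f"] = payload.pop("temp_c") * 9 / 5 + 32
        out.append(payload)
    return {"transformed": out}
```

In practice such a handler would write its output onward (e.g. to a downstream stream, S3, or DynamoDB) rather than return it, but the event-decoding pattern is the same.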
Years of Experience:
6 - 8