Software Engineer III - AWS Data Engineer
JPMorganChase
Hyderabad, Telangana, India
8-10 years
Not Disclosed
Full time
05 May 2026
Top Skills:
Agile, AI, Airflow, Architecture, Automation, AWS, CI/CD, Continuous Improvement, Data Analytics, Data Processing, Data Quality, Databricks, Development Life Cycle, Enterprise, Maintaining Code, Pipeline, PySpark, Python, Spark, System Architecture, System Design, Terraform, Tooling, Triage

Job Description

We have an exciting and rewarding opportunity for you to advance your software engineering career. Join us to build innovative data solutions and accelerate engineering productivity.

As a Senior Data Engineer at JPMorgan Chase within the Corporate Technology team, you will design and deliver robust, scalable data products using Databricks/Spark on AWS. You will help modernize data processing platforms and pioneer LLM-assisted development, contributing to secure and reliable solutions that drive business impact.

Job Responsibilities

  • Design and deliver Databricks/Spark pipelines on AWS using Delta Lake/Lakehouse patterns
  • Build and maintain secure, high-quality production code in Python/PySpark aligned to enterprise security best practices
  • Implement CI/CD-first delivery using infrastructure-as-code (Terraform/CloudFormation) and automated testing for repeatable deployments
  • Produce architecture and design artifacts for complex applications, ensuring performance, resiliency, and security constraints are met
  • Gather, analyze, and synthesize data to drive continuous improvement of software applications and systems
  • Proactively identify hidden problems and patterns in data to improve coding hygiene and system architecture
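The responsibilities above centre on pipeline quality and automation. As a rough illustration only (none of this code appears in the posting, and the field names are invented for the example), a minimal row-level data-quality gate of the kind such a pipeline might enforce before writing to a Delta table could look like the following; in a real Databricks pipeline this logic would typically be expressed as PySpark filters or Delta Live Tables expectations rather than plain Python:

```python
# Illustrative sketch: split a batch of records into valid and rejected rows
# based on required non-null fields, so bad records can be quarantined
# instead of silently landing in the target table.

def quality_check(rows, required_fields):
    """Return (valid, rejected) partitions of `rows`.

    A row is rejected if any required field is missing or None.
    """
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) is None]
        (rejected if missing else valid).append(row)
    return valid, rejected

# Hypothetical batch; `trade_id` and `amount` are made-up field names.
batch = [
    {"trade_id": 1, "amount": 100.0},
    {"trade_id": 2, "amount": None},  # fails the non-null check
]
valid, rejected = quality_check(batch, required_fields=["trade_id", "amount"])
```

Running the gate as a pre-write step keeps the quarantine logic testable in plain unit tests, independent of the Spark runtime.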

Required Qualifications, Capabilities, And Skills

  • Formal training or certification on software engineering concepts and 8+ years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Proficient in coding in Python and PySpark
  • Experience in developing, debugging, and maintaining code in a large corporate environment with Databricks/Spark and database querying languages
  • Overall knowledge of the Software Development Life Cycle
  • Solid understanding of agile methodologies, CI/CD, application resiliency, and security
  • Demonstrated ability to use Copilot/LLM-assisted workflows effectively while maintaining quality and security through review and automation

Preferred Qualifications, Capabilities, And Skills

  • Experience with agentic automation or LLM tooling patterns (e.g., task-oriented agents for incident triage, deployment validation, data quality checks, or developer enablement)
  • Familiarity with Data Mesh, Airflow, and/or ThoughtSpot
  • AWS and/or Databricks certifications (e.g., AWS SAA / Developer Associate / Data Analytics Specialty, Databricks certification)
  • Experience designing, building, and operating scalable Databricks/Spark data products on AWS, with a strong emphasis on Agentic AI patterns, LLM-enabled developer workflows, and Copilot-assisted delivery

ABOUT US