AI-Enabled Data engineer
Intellias
Columbus, Ohio Metropolitan Area
5-10 years
Not Disclosed
Full time
05 May 2026
Top Skills:
AI, Airflow, Architecture, AWS, Azure, CI/CD, Cloud, Cloud Storage, Data Integrity, Data Lake, Data Warehouse, Databricks, Docker, EC2, Enterprise, ETL, ETL Processes, GCP, Lambda, Python, S3, SQL, Verbal Communication Skills



See the impact you make as an AI‑Enabled Data Engineer delivering production‑ready solutions from day one.


Our client is looking for a Data Engineer who works within a development team to design, develop, and deliver software solutions that incorporate applied AI capabilities. This role focuses on execution and delivery of production‑ready solutions rather than research, requiring immediate contribution with minimal onboarding. The engineer will support enterprise projects that integrate AI tools, libraries, or model APIs, working closely with cross‑functional stakeholders in a full‑time, onsite environment to ensure quality, reliability, and timely delivery.

This is a full‑time, fully onsite position (5 days per week during standard business hours).


Requirements:

  • Bachelor’s degree in Computer Science, Computer Engineering, Software Engineering, or equivalent practical experience.
  • Typically 1–2 years of relevant professional experience in software engineering and applied AI.
  • Proficiency in Python, including writing, modifying, and maintaining production‑quality code.
  • Demonstrated experience integrating AI tools, libraries, or model APIs into real‑world projects.
  • Ability to work productively with minimal onboarding and follow established architecture and coding standards.
  • Strong written and verbal communication skills to interact with cross‑functional teams.
  • Experience working in structured, production development environments.
  • 1+ years of experience with cloud vendors (AWS, Azure, or GCP), data warehouse services (e.g., Redshift, Databricks), and cloud storage (e.g., Azure Storage, S3).
  • Experience with ETL/ELT and orchestration tools (e.g., Airflow, ADF, Glue, NiFi).
  • Ability to contribute to the implementation of data warehouses and data lakes.
  • Familiarity with AWS services (EC2, Lambda, S3, Bedrock) preferred but not required.
  • Exposure to Docker, CI/CD, MLOps, enterprise data environments, or responsible AI practices is a plus.
  • Ability to work full‑time (40 hours/week) and onsite (Columbus, OH) 5 days per week.


Responsibilities:

  • Support the design and implementation of data models and database structures
  • Assist in identifying and optimizing performance bottlenecks in the database system
  • Participate in implementing ETL processes to extract, transform, and load data into the data warehouse
  • Contribute to ensuring data integrity, consistency, and accuracy
  • Support team members with SQL-related tasks
  • Perform assigned duties independently and in a timely, accurate, and professional manner
  • Adhere to organizational policies, procedures, and applicable standards, including quality and safety requirements.
  • Communicate effectively, collaborate with colleagues, and maintain required records and documentation.
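
To make the ETL responsibility above concrete, here is a minimal sketch of the kind of extract-transform-load work the role describes. The data, table name, and use of sqlite3 (standing in for a warehouse such as Redshift or Databricks) are illustrative assumptions, not part of the posting.

```python
import sqlite3

def extract():
    # Hypothetical source records; in practice these would come from an
    # API, S3 object, or upstream database.
    return [
        {"id": 1, "amount": "19.99", "region": " East "},
        {"id": 2, "amount": "5.00", "region": "WEST"},
        {"id": 2, "amount": "5.00", "region": "WEST"},  # duplicate row
    ]

def transform(rows):
    # Enforce data integrity: cast types, normalize text, and
    # de-duplicate on the primary key.
    seen, clean = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        clean.append((r["id"], float(r["amount"]), r["region"].strip().lower()))
    return clean

def load(rows, conn):
    # Load the cleaned rows into the warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
```

In a production setting each step would typically be a task in an orchestrator such as Airflow, with the transform step owning the integrity checks listed in the responsibilities.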