Job Requirements
At Quest Global, it’s not just what we do but how and why we do it that makes us different. With over 25 years as an engineering services provider, we believe in the power of doing things differently to make the impossible possible. Our people are driven by the desire to make the world a better place—to make a positive difference that contributes to a brighter future. We bring together technologies and industries, alongside the contributions of diverse individuals who are empowered by an intentional workplace culture, to solve problems better and faster.
Key Responsibilities
- Collaborate with both Quest Global teams and Quest Global customer teams within a shared development framework
- Write efficient and reusable code in Scala and/or Python (PySpark)
- Process large-scale structured and unstructured datasets
- Perform data transformations, aggregations, and joins across multiple sources
- Optimize Spark jobs for performance and resource utilization
- Work with distributed storage systems like S3 / HDFS
- Debug and troubleshoot production data issues
- Collaborate with data engineers, analysts, and stakeholders
- Ensure data quality, consistency, and reliability
- Collaborate with cross-functional teams to analyze, design, and implement new applications
- Ensure optimal performance, quality, and responsiveness of applications and services
We are known for our extraordinary people who make the impossible possible every day. Questians are driven by hunger, humility, and aspiration. We believe that our company culture is the key to our ability to make a true difference in every industry we reach. Our teams regularly invest time and dedicated effort into internal culture work, ensuring that all voices are heard.
We wholeheartedly believe in the diversity of thought that comes with fostering a culture rooted in respect, where everyone belongs, is valued, and feels inspired to share their ideas. We know embracing our unique differences makes us better, and that solving the world's hardest engineering problems requires diverse ideas, perspectives, and backgrounds. We shine the brightest when we tap into the many dimensions that thrive across over 21,000 difference-makers in our workplace.
Work Experience
Required Skills
- Strong hands-on experience with Apache Spark
- Proficiency in Scala and/or Python (PySpark)
- Good understanding of Spark internals (RDD, DataFrame, Dataset APIs)
- Experience with data formats like Parquet, Avro, JSON
- Familiarity with distributed systems and big data concepts
- Strong SQL skills
- Experience with cloud platforms (AWS preferred – S3, EMR, Glue, Kinesis, Firehose, Hive)
- Knowledge of performance tuning and optimization techniques
- Experience with CI/CD pipelines
- Exposure to streaming frameworks (Spark Streaming)
- Familiarity with workflow orchestration tools like Apache Airflow
Preferred Skills
- Familiarity with Agile development practices
- Strong analytical and problem-solving skills
- Good communication and stakeholder collaboration abilities
- Ability to work independently in a fast-paced environment
- Ownership mindset with attention to data quality and reliability
Additional Information
- Team player
- Effective communication skills
- Willingness to broaden or deepen skills in Go (GoLang) or other technologies related to Java/J2EE
- This role requires working on-site from the customer office in Pune