Data Engineer (AWS and Python)
Bangalore, India
Contracted to Full Time
Mid Level
Job Title: Data Engineer (AWS and Python)
Duration: 6-Month Contract to Hire
Location: Bangalore (Hybrid)
Job Description:
We are seeking a skilled Data Engineer to join our dynamic team. In this role, you will design, develop, and maintain scalable data pipelines and infrastructure. Your expertise in Python and PySpark will be essential for processing and analyzing large datasets. Experience with AWS services will be crucial for deploying and managing data solutions in the cloud. The ideal candidate will be detail-oriented, proactive, and able to collaborate effectively with cross-functional teams.
Key Responsibilities:
· Design, build, and maintain robust data pipelines and ETL processes.
· Develop data processing workflows using Python and PySpark (an illustrative sketch appears at the end of this posting).
· Implement and manage data solutions on AWS platforms.
· Optimize data systems for performance and scalability.
· Collaborate with data scientists and analysts to support data-driven decision-making.
Skills Required:
· Proficiency in Python and PySpark.
· Experience with AWS services such as S3, Redshift, and Lambda.
· Strong problem-solving and analytical skills.
· Ability to work effectively in a collaborative environment.
Qualifications:
· Bachelor’s degree in Computer Science, Engineering, or a related field.
· Previous experience in a data engineering role is preferred.
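For illustration only, here is a minimal sketch of the kind of PySpark ETL step this role involves: reading raw data from S3, aggregating it, and writing partitioned Parquet back to S3. The bucket, paths, and column names (order_timestamp, customer_id, amount) are hypothetical placeholders, not a real schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session (cluster settings would come from the deployment, e.g. EMR or Glue).
spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read raw CSV data from S3; bucket and path are hypothetical.
orders = spark.read.csv("s3a://example-bucket/raw/orders/", header=True, inferSchema=True)

# A simple transformation: daily order totals per customer.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write partitioned Parquet back to S3, ready for downstream consumption.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_totals/"
)

spark.stop()

In a production setup, a step like this might be triggered by an AWS Lambda function or a scheduler, with the curated output loaded into Redshift (for example, via a COPY command).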