Data Engineer (Backend & AWS)

Location: Charlotte, NC (Hybrid)
Duration: 12 Months
Employment Type: Contract
Experience Level: Experienced


Role Overview

We are looking for a highly skilled Data Engineer with strong backend development expertise and deep knowledge of AWS cloud technologies. This role focuses on designing, building, and optimizing scalable data pipelines and backend systems that power data-driven applications, analytics, and business intelligence initiatives.


Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes using modern cloud and big data technologies.
  • Build and enhance backend services and APIs to support data-driven applications and integrations.
  • Leverage AWS services to architect and deploy reliable, secure, and high-performance data solutions.
  • Collaborate with data scientists, analysts, and business stakeholders to translate data requirements into technical solutions.
  • Optimize data workflows for performance, scalability, and cost efficiency.
  • Implement and enforce data quality, governance, and security best practices.
  • Work with cross-functional teams in an Agile/Scrum environment to deliver high-quality solutions.
  • Monitor, troubleshoot, and resolve issues in data pipelines and backend systems.
  • Contribute to architecture decisions, including data modeling and system design.

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience in Data Engineering and/or Backend Development.
  • Strong programming skills in Python, Java, or Scala.
  • Hands-on experience with AWS services such as S3, Lambda, Glue, Redshift, EMR, or Athena.
  • Proven experience building data pipelines, ETL/ELT processes, and data integration workflows.
  • Solid experience with backend frameworks such as Spring Boot or Node.js.
  • Strong knowledge of SQL databases (PostgreSQL, MySQL) and NoSQL databases (DynamoDB, MongoDB).
  • Understanding of data modeling, data warehousing, and distributed systems.

Preferred Qualifications

  • Experience with big data technologies such as Apache Spark and Hadoop.
  • Familiarity with containerization and orchestration tools like Docker and Kubernetes.
  • Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
  • Exposure to real-time/streaming data pipelines (Kafka, Kinesis, etc.).
  • Knowledge of CI/CD pipelines and DevOps practices.

Nice to Have

  • Experience in energy/utilities domain or working with large enterprise clients.
  • Exposure to data governance and compliance frameworks.