Role Overview
We are looking for an experienced Cloud Data Engineer with expertise in AWS, Databricks, and data integration platforms to design, build, and maintain scalable data pipelines and infrastructure. The role involves ensuring data quality, security, governance, and compliance while collaborating with data scientists, analysts, and engineering teams.
Key Responsibilities
* Build and maintain data pipelines for ingestion, processing, and transformation.
* Manage and optimize databases, data lakes, and warehouses.
* Develop and maintain ETL processes for analytics and reporting.
* Integrate data from multiple sources, ensuring accuracy and consistency.
* Monitor and optimize data processing and query performance.
* Implement data security, governance, and compliance practices.
* Automate workflows and maintain clear documentation.
* Collaborate with cross-functional teams to deliver technical solutions.
* Troubleshoot data-related issues and optimize cloud resource usage.
Technical Skills
* AWS: S3, RDS, Redshift, Lambda, Glue
* Databricks for large-scale data processing
* Informatica IDMC for data integration and governance
* Programming: Python, Java, or Scala
* SQL & NoSQL databases, data modeling, and schema design
Requirements
* Minimum 3 years of experience in data engineering/cloud platforms
* Strong analytical, problem-solving, and communication skills
Preferred Skills
* Big data technologies: Apache Spark, Hadoop
* Containerization: Docker, Kubernetes
* Data visualization: Tableau, Power BI
* DevOps & CI/CD pipelines
* Version control (Git)
* Relevant AWS, Databricks, or Informatica certifications