Hiring for a USA-based multinational company.
We are looking for a skilled and motivated Data Engineer to design, build, and maintain scalable data pipelines and architectures that power data-driven decision-making across the organization. As a Data Engineer, you will work closely with data scientists, analysts, and other engineering teams to ensure the availability, reliability, and integrity of our data infrastructure.
Responsibilities:
* Design, develop, and maintain robust and scalable ETL/ELT data pipelines.
* Build and optimize data architectures and databases for analytics and reporting.
* Integrate data from various internal and external sources, ensuring data quality and consistency.
* Collaborate with data scientists, analysts, and stakeholders to understand data needs and deliver high-quality datasets.
* Automate data processing and improve data reliability, efficiency, and scalability.
* Monitor and troubleshoot performance issues in data systems.
* Implement data governance and security best practices.
* Maintain documentation for data systems, schemas, and processes.
Requirements:
* Strong programming skills in Python, SQL, or Scala.
* Experience with workflow orchestration tools such as Apache Airflow, Luigi, or Prefect.
* Experience with data warehouses (Snowflake, BigQuery, Redshift) and relational databases.
* Familiarity with distributed data processing frameworks (Apache Spark, Hadoop).
* Proficiency with cloud platforms (AWS, Azure, GCP).
* Understanding of data modeling, data lakes, and best practices for handling structured and unstructured data.
* Experience with real-time data streaming tools (Kafka, Kinesis).
* Familiarity with containerization and orchestration tools (Docker, Kubernetes).
* Knowledge of data governance frameworks, data cataloging, and security standards.
* Experience supporting data science or machine learning workflows.
* Excellent problem-solving and troubleshooting skills.
* Strong communication and collaboration abilities.
* Attention to detail and commitment to high-quality deliverables.