We're seeking an experienced Data Engineer to design, build, and scale robust data systems and pipelines for an innovative AI-based startup. You'll shape the data infrastructure from the ground up, driving innovation in how data is collected, processed, and used for cybersecurity solutions.
Responsibilities:
* Design and maintain scalable, secure data pipelines and architectures.
* Own the full data lifecycle, from ingestion and storage to processing and visualization.
* Collaborate with engineers, data scientists, and product teams to support product and analytical needs.
* Monitor and optimize data performance, scalability, and reliability.
* Define and enforce data quality standards and best practices.
* Rapidly prototype and iterate on new data solutions.
* Mentor junior engineers and contribute to technical reviews.
Requirements:
* 7+ years in software engineering, including 4+ years in data engineering.
* Strong experience with data frameworks (e.g., Spark, Kafka, Airflow) and ETL workflows.
* Proficiency with SQL and NoSQL databases, including query optimization.
* Experience with cloud data services (e.g., AWS Redshift, S3, Glue, EMR) and CI/CD for data pipelines.
* Strong programming skills in Python, Java, or Scala.
* Excellent problem-solving and collaboration skills.
* Ability to thrive in a fast-paced, dynamic environment.
Why You'll Love This Role:
* Tackle complex, large-scale data challenges in cybersecurity.
* Work with a team of experienced engineers and technical leaders.
* Make a real impact by enabling proactive threat detection and risk mitigation.