DATA ENGINEER (Real-Time)
About the client:
At RemoteStar, we are hiring for a client who is a world-class iGaming operator offering various online gaming products across multiple markets through proprietary gaming sites and partner brands.
Their iGaming platform supports over 25 online brands and is used by hundreds of thousands of users worldwide. The company embraces a hybrid work-from-home model, with the flexibility of working three days in the office and two days from home.
About the Data Engineer role:
You will contribute to designing and developing Real-Time Data Processing applications to meet business needs. This environment offers an excellent opportunity for technical data professionals to build a consolidated Data Platform with innovative features while working with a talented and fun team.
Responsibilities include:
* Development and maintenance of Real-Time Data Processing applications using frameworks like Spark Streaming, Spark Structured Streaming, Kafka Streams, and Kafka Connect.
* Manipulation of streaming data, including ingestion, transformation, and aggregation (see the illustrative sketch after this list).
* Researching and developing new technologies and techniques to enhance applications.
* Collaborating with the Data DevOps and Data Streams teams, as well as other disciplines.
* Working in an Agile environment following SDLC processes.
* Managing change and release processes.
* Troubleshooting and incident management with an investigative mindset.
* Owning projects and tasks, and working effectively within a team.
* Documenting processes and sharing knowledge with the team.
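To give a flavour of the kind of work involved, below is a minimal Spark Structured Streaming sketch in Scala covering ingestion, transformation, and aggregation. The topic name, broker address, and JSON schema are illustrative assumptions, not details of the client's platform, and the job would need the spark-sql-kafka connector on the classpath.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: read game events from a Kafka topic, aggregate bet amounts
// per brand over 1-minute event-time windows, and print results to the console.
// Topic, broker, and schema are hypothetical placeholders.
object BetAggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bet-aggregation-sketch")
      .getOrCreate()

    import spark.implicits._

    // Ingestion: subscribe to an assumed "bets" topic.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "bets")
      .load()

    // Transformation: parse the Kafka message value as JSON with an assumed schema.
    val bets = raw
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")
      .select(
        get_json_object($"json", "$.brand").as("brand"),
        get_json_object($"json", "$.amount").cast("double").as("amount"),
        $"timestamp"
      )

    // Aggregation: total bet amount per brand in 1-minute windows with a watermark.
    val totals = bets
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"brand")
      .agg(sum($"amount").as("total_amount"))

    // Sink: write each updated aggregate to the console for demonstration purposes.
    totals.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```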
Preferred skills:
* Strong knowledge of Scala.
* Familiarity with distributed computing frameworks such as Spark and Kafka Streams.
* Experience with Kafka and other streaming frameworks.
* Understanding of monolithic vs. microservice architectures.
* Familiarity with Apache ecosystem including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi.
* Experience with containerization and orchestration tools like Docker and Kubernetes.
* Knowledge of time-series or analytics databases such as Elasticsearch.
* Experience with AWS services such as S3, EC2, EMR, and Redshift.
* Familiarity with data monitoring and visualization tools such as Prometheus and Grafana.
* Experience with version control tools like Git.
* Understanding of Data Warehouse and ETL concepts; familiarity with Snowflake is a plus.
* Strong analytical and problem-solving skills.
* Good learning mindset and ability to prioritize tasks effectively.