Job Description

Anticipated Contract End Date/Length: November 30, 2026
Work setup: Hybrid (3 days per week in office)

Our client in the Information Technology and Services industry is looking for a Data Engineer to design, build, and optimize streaming data pipelines focused on OpenShift telemetry, observability, and proactive monitoring capabilities. This role plays a key part in enabling multi-tenant observability, ensuring data quality, lineage, and reliability across streaming platforms, while integrating telemetry into Splunk for advanced dashboards, alerting, and analytics. The position requires strong expertise in Kafka-based pipelines, OpenTelemetry instrumentation, and schema governance, along with close collaboration with platform, SRE, and application teams to achieve Observability Level 4 maturity.

What you will do:
- Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry, including metrics, logs, and traces, at scale.
- Stream OpenShift telemetry via Kafka and build resilient consumer services for transformation and enrichment.
- Engineer data models and routing for multi-tenant observability, and ensure lineage, quality, and SLAs across the streaming layer.
- Integrate processed telemetry into Splunk for dashboards, alerting, analytics, and proactive insights.
- Implement schema management using Avro or Protobuf, including governance and versioning for telemetry events.
- Build automated validation, replay, and backfill mechanisms to ensure data reliability and recovery.
- Instrument services with OpenTelemetry and standardize tracing, metrics, and structured logging across platforms.
- Use LLM capabilities to enhance observability through query assistance, anomaly summarization, and runbook generation.
- Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
- Ensure security, compliance, and best practices across data pipelines and observability platforms.
- Document data flows, schemas, dashboards, and operational runbooks.
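To give candidates a concrete feel for the transform-and-enrich work described above, here is a minimal sketch of the kind of logic a Kafka consumer service in this role might apply to each telemetry record: validating required fields (events failing validation would typically be routed to a dead-letter topic for replay) and routing events to a tenant-specific Splunk index. All names here (`enrich_event`, `TENANT_INDEX`, the field names) are illustrative assumptions, not details from the posting.

```python
import json
import time

# Assumed mapping of OpenShift namespace -> tenant-specific Splunk index
# (hypothetical tenant names for illustration only).
TENANT_INDEX = {
    "payments": "obs_payments",
    "checkout": "obs_checkout",
}
DEFAULT_INDEX = "obs_shared"

# Fields every telemetry event is assumed to carry after schema validation.
REQUIRED_FIELDS = {"namespace", "pod", "metric", "value", "timestamp"}

def enrich_event(raw: bytes) -> dict:
    """Validate a raw Kafka record, enrich it, and pick a Splunk index.

    Raises ValueError on malformed events; in a real pipeline such
    records would go to a dead-letter topic to support replay/backfill.
    """
    event = json.loads(raw)
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Multi-tenant routing: unknown namespaces fall back to a shared index.
    event["index"] = TENANT_INDEX.get(event["namespace"], DEFAULT_INDEX)
    # Record ingestion time to support lineage and latency SLA tracking.
    event["ingested_at"] = time.time()
    return event
```

In a production service this function would sit between a Kafka consumer loop (e.g. confluent-kafka or kafka-python) and a Splunk HTTP Event Collector client, with schema validation delegated to an Avro or Protobuf schema registry rather than a hand-rolled field check.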