* Design and implement enterprise observability platforms using OpenTelemetry, covering logs, metrics, and distributed traces.
* Deploy, configure, and operate the OpenTelemetry Operator on OpenShift, including collector configuration, CRDs, and multi‑pipeline telemetry flows.
* Architect and operate Kafka‑based streaming platforms, applying deep expertise in brokers, partitions, replication, consumer groups, offset management, and security (SSL/SASL).
* Build and optimise high‑throughput Kafka‑backed telemetry and data pipelines, integrating with downstream systems such as Splunk or similar analytics platforms.
* Design, develop, and maintain scalable ETL/ELT data pipelines in Python and SQL, drawing on hands‑on experience with Kafka, Spark, and modern data processing frameworks.
* Develop and apply robust data models that support both analytical and operational use cases.
* Orchestrate data and telemetry workflows using tools such as Apache Airflow.
* Work across cloud data platforms (AWS, Azure, and/or GCP), ensuring production readiness, security, and performance.
* Perform advanced performance tuning, capacity planning, and production troubleshooting across observability, streaming, and data platforms.
* Troubleshoot complex issues across Kubernetes/OpenShift, Kafka, OpenTelemetry Collectors, and cloud infrastructure.
* Collaborate with platform, SRE, and engineering teams to embed best practices for observability, reliability, and data governance.
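The multi‑pipeline collector flows described above can be sketched as a minimal OpenTelemetry Collector configuration. This is an illustrative fragment only: the broker address, topic name, and choice of exporters are assumptions, not a prescribed setup.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}            # batch telemetry before export for throughput

exporters:
  kafka:
    brokers: ["kafka-broker:9092"]   # illustrative broker address
    topic: otlp-traces               # assumed topic name
  debug: {}                          # stdout exporter for local inspection

service:
  pipelines:
    traces:                          # traces flow to Kafka
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]
    metrics:                         # metrics flow to the debug exporter
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

On OpenShift, a fragment like this would typically be embedded in an `OpenTelemetryCollector` custom resource managed by the OpenTelemetry Operator rather than deployed as a raw config file.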
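The ETL/ELT responsibility can be illustrated with a minimal, stdlib‑only Python sketch. The record shape, table name, and in‑memory source are hypothetical; a production pipeline would extract from Kafka or object storage and load into a warehouse rather than SQLite.

```python
import sqlite3

# Extract: stand-in for records pulled from Kafka, S3, etc. (assumed shape)
raw_events = [
    {"service": "checkout", "latency_ms": 120},
    {"service": "checkout", "latency_ms": 180},
    {"service": "search",   "latency_ms": 40},
]

def transform(events):
    """Keep only well-formed records and normalise field types."""
    return [
        (e["service"], float(e["latency_ms"]))
        for e in events
        if "service" in e and "latency_ms" in e
    ]

def load(rows, conn):
    """Load transformed rows and compute a per-service aggregate in SQL."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (service TEXT, latency_ms REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    return conn.execute(
        "SELECT service, AVG(latency_ms) FROM events GROUP BY service ORDER BY service"
    ).fetchall()

conn = sqlite3.connect(":memory:")
summary = load(transform(raw_events), conn)
print(summary)  # [('checkout', 150.0), ('search', 40.0)]
```

The same extract/transform/load split maps directly onto orchestrated tasks in a tool such as Airflow, with each function becoming a task in the DAG.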