We are seeking an experienced AWS Data Engineer with strong expertise in ETL pipelines, Redshift, Iceberg, Athena, and S3 to support large-scale data processing and analytics initiatives in the telecom domain. The candidate will work closely with data architects, business analysts, and cross‑functional teams to build scalable and efficient data solutions supporting network analytics, customer insights, billing systems, and telecom OSS/BSS workflows.
Key Responsibilities
* Design, develop, and maintain ETL/ELT pipelines using AWS‑native services (Glue, Lambda, EMR, Step Functions).
* Implement data ingestion from telecom systems such as OSS/BSS, CDRs, mediation systems, CRM, billing, and network logs.
* Optimize ETL workflows for large‑scale telecom datasets (high volume, high velocity).
Data Warehousing (Redshift)
* Build and manage scalable Amazon Redshift clusters for reporting and analytics.
* Create and optimize schemas, tables, distribution keys, sort keys, and workload management.
* Implement Redshift Spectrum to query data in S3 using external tables.
Data Lake & Iceberg
* Implement and maintain Apache Iceberg tables on AWS for schema evolution and ACID operations.
* Build Iceberg‑based ingestion and transformation pipelines using Glue, EMR, or Spark.
* Ensure high performance for petabyte‑scale telecom datasets (CDRs, tower logs, subscriber activity).
Querying & Analytics (Athena)
* Develop and optimize Athena queries for operational and analytical reporting.
* Integrate Athena with S3/Iceberg for low‑cost, serverless analytics.
* Manage Glue Data Catalog integrations and table schema management.
Storage (S3) & Data Lake Architecture
* Implement data lifecycle policies, versioning, and partitioning strategies.
* Ensure data governance, metadata quality, and security (IAM, Lake Formation).
Telecom Domain Expertise
* Understand telecom‑specific datasets such as:
o Network KPIs (4G/5G tower logs)
o Billing & revenue assurance
* Build models and pipelines to support network analytics, customer 360, churn prediction, fraud detection, etc.
Performance Optimization & Monitoring
* Tune Spark/Glue jobs for performance and cost.
* Monitor Redshift/Athena/S3 efficiency and implement best practices.
* Perform data quality checks and validation across pipelines.
DevOps & CI/CD (Preferred)
* Use Git, CodePipeline, and Terraform/CloudFormation for infrastructure and deployments.
* Automate pipeline deployment and monitoring.
Required Skills
* 3–10 years of experience in data engineering.
* Strong hands‑on experience with:
o Python and SQL
o Telecom data pipelines and large‑scale structured/semi‑structured data
* Strong problem‑solving, optimization, and debugging skills.
Good to Have Skills
* Knowledge of AWS Lake Formation, Kafka/Kinesis, Airflow, or Delta Lake/Apache Hudi.
* Experience with ML workflows in telecom (churn, network prediction).
Seniority level
* Mid‑Senior level
Employment type
* Full‑time
Job function
* Information Technology