GenAI Data Engineer

Slough
Gazelle Global
Data engineer
Posted: 29 April
Offer description

Your responsibilities:

* Design and maintain scalable data pipelines using PySpark, Python, and distributed computing frameworks to support high‑volume data processing.
* Architect and optimize AWS-based data and AI infrastructure, ensuring secure, performant, and cost‑efficient ingestion, transformation, and storage.
* Develop, fine-tune, benchmark, and evaluate GenAI/LLM models, including custom training and inference optimization.
* Implement and maintain RAG pipelines, vector databases, and document-processing workflows for enterprise GenAI applications.
* Build reusable frameworks for prompt management, evaluation, and GenAI operations.
* Collaborate with cross-functional teams to integrate GenAI capabilities into production systems and ensure high-quality data, governance, and operational reliability.
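To make the RAG-pipeline responsibility concrete, here is a toy retrieval-and-prompt sketch. The bag-of-words "embedding" and the sample corpus are purely illustrative; a production pipeline would use a learned embedding model and a vector database rather than in-memory cosine search.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (real pipelines use a learned model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Compose a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus for demonstration only.
corpus = [
    "Invoices are processed nightly by the finance pipeline.",
    "The data lake stores raw events as Parquet files on S3.",
    "Employee onboarding documents live in the HR portal.",
]
print(build_prompt("Where are raw events stored?", corpus))
```

The same shape (embed, retrieve, assemble context, prompt) carries over when the components are swapped for real embedding models and vector stores.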


Your Profile

Essential skills/knowledge/experience:

* Strong experience with PySpark, distributed data processing, and large-scale ETL/ELT pipelines.
* Strong SQL expertise including star/snowflake schema design, indexing strategies, writing optimized queries, and implementing CDC, SCD Type 1/2/3 patterns for reliable data warehousing.
* Advanced proficiency in Python for data engineering, automation, and ML/GenAI integration.
* Hands‑on expertise with AWS services (S3, Glue, Lambda, EMR, Bedrock / custom model hosting).
* Practical experience with GenAI/LLM model creation, fine-tuning, benchmarking, and evaluation.
* Solid understanding of RAG architectures, embeddings, vector stores, and LLM evaluation methods.
* Experience working with structured and unstructured datasets (documents, logs, text, images).
* Familiarity with scalable data storage solutions (Delta Lake, Parquet, Redshift, DynamoDB).
* Understanding of model optimization techniques (quantization, distillation, inference optimization).
* Strong capability to debug, tune, and optimize distributed systems and AI pipelines.
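As a miniature of the SCD Type 2 pattern mentioned above: when a tracked attribute changes, the current dimension row is closed out and a new versioned row is appended. This is a plain-Python sketch with illustrative column names; in a warehouse this would typically be a SQL `MERGE` or a Delta Lake merge.

```python
from datetime import date

def apply_scd2(dim_rows, incoming, today):
    """SCD Type 2 apply step. dim_rows: dicts with key, address,
    valid_from, valid_to, is_current. Column names are illustrative."""
    out = list(dim_rows)
    for rec in incoming:
        current = next(
            (r for r in out if r["key"] == rec["key"] and r["is_current"]), None
        )
        if current is None:
            # Brand-new key: insert as the first current version.
            out.append({**rec, "valid_from": today, "valid_to": None,
                        "is_current": True})
        elif current["address"] != rec["address"]:  # tracked attribute changed
            # Close the old version and append the new one.
            current["valid_to"] = today
            current["is_current"] = False
            out.append({**rec, "valid_from": today, "valid_to": None,
                        "is_current": True})
    return out

dim = [{"key": 1, "address": "Slough", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
dim = apply_scd2(dim, [{"key": 1, "address": "Reading"}], date(2025, 6, 1))
# dim now holds two rows: the closed-out history row and the new current one.
```

Types 1 and 3 differ only in the apply step: Type 1 overwrites in place (no history), while Type 3 keeps the prior value in a dedicated column.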


Desirable skills/knowledge/experience:

* PySpark, Python, SQL, AWS, GenAI
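One of the optimization techniques the role calls for, post-training quantization, can be shown in miniature: symmetric int8 quantization maps float weights onto `[-127, 127]` with a single scale factor, trading a small reconstruction error for a 4x smaller representation. This is a toy sketch, not how any particular framework implements it.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: one scale for the whole tensor.
    Assumes at least one non-zero weight (no zero-scale handling here)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # ints in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 0.9]
q, s = quantize_int8(w)
restored = dequantize(q, s)
# Max reconstruction error is bounded by half the quantization step.
err = max(abs(a - b) for a, b in zip(w, restored))
```

Distillation and inference-time optimizations (batching, KV-cache management) attack the same cost problem from different angles, but quantization is the most self-contained to illustrate.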



© 2026 Jobijoba - All Rights Reserved
