AI & Data Engineer - Must have an Active DV Clearance - Hybrid
Location: Bristol, London, Manchester or Cambridge
Salary: £55,000 - £75,000 (depending on experience and clearance)
Our client is looking for an AI & Data Engineer to join their team.
As a practitioner, you will bridge the gap between our customers' needs and technical solutions. Your primary responsibility is to make data valuable for our clients by developing pipelines that ingest, transform, and enrich high-volume, high-variety data into accessible, trusted information assets that can be used to derive actionable insights.
In this role, you will be responsible for deliverables and client stakeholder relationships, delivering solutions for our clients using agile methodologies. You will often work in multi-disciplinary teams across a range of industries, subject matters, and locations.
Our projects vary greatly, and your responsibilities as a consultant will differ based on the focus of the client engagement and your skillset, but may require you to:
Apply data engineering tools, integration frameworks, and query engines to create high quality, standardised data for downstream use cases, such as for AI and reporting.
Design and implement high quality data pipelines and data stores, coordinating efforts with other developers and engineers.
Bring innovation and novel approaches to solve challenging data engineering problems.
Architect and implement solutions for scale and complexity that provide value across many teams and consumers.
Develop logical and physical data models and a governance strategy, establishing standards across teams.
Candidates will have hands-on experience with one or more technologies relevant to these areas:
Ingesting, transforming, enriching, and integrating data from diverse sources into well-structured information assets.
Distributed computing techniques such as parallel processing, streaming, and batch workflow orchestration that enable handling large data volumes.
ETL, data pipelines, and automated workflows for moving and processing data.
Optimising data systems for performance, scalability, reliability, and monitoring.
Information security, including access controls, encryption, and anonymisation for sensitive data assets.
Data governance, including metadata management, data quality, and lineage tracking.
Required Skills:
We are specifically looking for candidates with both technical and business-focused skills, who can articulate the outcomes and value of their work, and who have working experience in some of the following:
End-to-end development experience with data pipelines, ETL processes, and workflow orchestration - using core concepts that apply across tech stacks.
Working with diverse data sources and types - batch, streaming, real-time, and unstructured.
Systems thinking and architectural design skills for building scalable, high-performance data solutions.
Data modelling, warehouse design, and database optimisation knowledge - with samples of logical/physical models that reflect proficiency.
Deploying and managing distributed data systems.
Ability to monitor, troubleshoot, and tune these systems for reliability and performance.
Coding experience that demonstrates modularity, reusability, and efficiency - across languages.
Understanding of the full development lifecycle, SDLC concepts, version control, and CI/CD pipelines.
Knowledge of data security, governance, metadata management, and master data principles.
Strong communication skills, with the ability to understand business requirements and translate them into technical data solutions.
If this is the role for you, please submit your CV at your earliest convenience.