Trading Data Engineer

We're looking for a talented Data Engineer to join our Trading team at Conrad Energy. This is a hands-on engineering role responsible for building and operating the data infrastructure that supports energy trading across UK Gas and Power markets. You'll work at the intersection of data engineering and live trading, ensuring traders have reliable, timely, and explainable access to market data, asset schedules, and trading positions. Data availability and correctness are business-critical, and this role has clear ownership of production pipelines.

This is a hybrid role, with a requirement to be office-based 1-2 times per week. Our Trading Desk is in Abingdon, just outside Oxford.

About Conrad Energy Ltd

Conrad Energy is a fast-growing UK energy company. We're powering the move towards renewables through innovation and technology. We generate power to support the National Grid when renewables can't meet demand, and we buy, sell and manage energy for businesses nationally. With a portfolio including gas, batteries, solar, wind and hydrogen, our 83 sites, operational or in construction, have the potential to generate 983MW of power, making us one of the leading flexible energy providers in the country. Optimised and operated using our market-leading software, iON, we're at the forefront of shaping a more efficient energy sector that is both reliable and sustainable.

Over the last few years, we've planned and developed some of the largest energy infrastructure projects in Europe, as well as rapidly expanding the number of business customers working with us. We're proud to power a changing world, building a better future for us all.

Key Responsibilities

- Build and maintain high-availability data pipelines for energy market data (e.g. EPEX, BMRS, NESO) using Python, SQL, and Spark
- Support data flows that ensure our Power Generation and Storage assets deliver their contracted schedules
- Design and manage Delta Lake tables and SQL databases, applying appropriate partitioning, optimisation, and data modelling strategies
- Develop monitoring dashboards and alerting to provide transparency on pipeline health, data freshness, and failure states
- Implement data quality checks, validation logic, and reconciliation across multiple market data sources
- Optimise pipeline performance and manage compute cost through efficient Spark and SQL design
- Produce and maintain clear technical documentation for data schemas, pipelines, and operational processes
- Contribute to structured Git-based development workflows and CI/CD pipelines (Azure DevOps)
- Support other business areas (Back and Middle Office) by ensuring the latest settlement and industry data is delivered to them

What we're looking for

Essential

- 2-5 years' experience in data engineering or a similar production-focused role
- Strong Python (Pandas) and SQL skills, with experience building and operating data pipelines
- Hands-on experience with Spark, Delta Lake, and modern Lakehouse architectures
- Familiarity with cloud PaaS data platforms such as Microsoft Fabric or Databricks
- Solid understanding of data modelling, storage formats, and performance principles
- Experience using Git and working within CI/CD pipelines
- Comfortable working in an environment where data reliability directly impacts the business

Desirable

- Exposure to energy markets, trading systems, or financial data
- Knowledge of Microsoft Azure cloud services (e.g. Function Apps)
- Experience integrating REST APIs or working with event-driven data
- Familiarity with orchestration tools (e.g. Azure Data Factory)
- Understanding of warehouse and lakehouse design patterns
- Basic awareness of trading concepts such as positions, P&L, and market schedules

Why Join us

- Work on production systems used directly by a live trading desk
- Opportunity to build deep expertise in energy markets and trading technology
- SimplyHealth health cashback plan
- 24/7 Private GP access
- Salary Sacrifice EV Scheme
- Discretionary performance-based bonus