Junior Data Engineer - Growing Data & Analytics Consultancy
Location: Greater Manchester (Hybrid)
Salary: £40,000 - £45,000 with excellent benefits (dependent on level)

About the Company:
The company is an ambitious, innovative, and rapidly expanding data consultancy dedicated to delivering impactful data solutions for a diverse client portfolio. The culture is built on collaboration, continuous learning, and making a real difference through data-driven insights. The organisation prioritises a supportive environment that encourages growth, practical problem-solving, and technical excellence.

Why Join the Company:
- Work on high-impact, diverse projects for impressive clients across the finance, retail, manufacturing, and technology sectors.
- Be part of a supportive team that values innovation, expertise, and professional development.
- Benefit from a flexible hybrid working model with remote and on-site options.
- Enjoy a comprehensive package including 35 days holiday plus bank holidays, a pension scheme, and ongoing training opportunities.

Data Engineer (1.5 Years of Experience)
The company is looking for a talented Data Engineer to join the technical team. The successful candidate will be responsible for building scalable, reliable data pipelines, managing data infrastructure, and supporting data products across various cloud environments, primarily Azure.

Key Responsibilities:
- Develop end-to-end data pipelines using Python, Databricks, PySpark, and SQL.
- Integrate data from a variety of sources, including APIs, Excel, CSV, JSON, and databases.
- Manage data lakes, warehouses, and lakehouses within Azure cloud environments.
- Apply data modelling techniques such as Kimball methodologies, star schemas, and data warehouse design principles.
- Build and support ETL workflows using tools such as Azure Data Factory, Synapse, Delta Live Tables, dbt, and SSIS.
- Automate infrastructure deployment with Terraform, ARM, or Bicep.
- Collaborate on report development and visualisation with Power BI.
- Manage version control and deployment pipelines using Git, Azure DevOps, or GitHub.
- Use scripting (PowerShell/Bash) for automation tasks.
- Contribute to the development of applications and APIs that support data workflows.
- Promote best practices for code quality, testing, observability, and operational stability.

Ideal Candidate:
- Has 1.5 years of practical data engineering experience.
- Has strong skills in Python, SQL, and PySpark.
- Has experience working with data lakes, warehouses, lakehouses, and cloud platforms, preferably Azure.
- Is knowledgeable in data modelling, including Kimball methodologies and star schemas.
- Is familiar with ETL tools such as Azure Data Factory, Synapse, Delta Live Tables, dbt, and SSIS.
- Has experience with Infrastructure as Code (Terraform, ARM, or Bicep).
- Is skilled in Power BI report development.
- Is proficient in version control and CI/CD practices.
- Can script automation tasks with PowerShell or Bash.
- Knowledge of DevOps, MLOps, and LLMOps is desirable.
- Experience in application and API development is an advantage.

What They Are Looking For:
- A pragmatic problem solver who balances technical solutions with business requirements.
- Someone confident in owning full data pipelines from ingestion to deployment.
- An excellent communicator capable of explaining complex concepts clearly.
- Someone passionate about maintaining high standards for code quality, testing, and operational observability.

What’s on Offer:
- Hybrid working model (Greater Manchester-based office).
- 35 days holiday plus bank holidays.
- Opportunity to work on high-impact projects with an impressive client base.
- A supportive and collaborative culture with a focus on innovation and growth.

Mirai believes in the power of diversity and the importance of an inclusive culture. They welcome applications from individuals of all backgrounds, understanding that a range of perspectives strengthens both their team and their partners' teams. This is just one of the ways that they’re taking positive action to shape a collaborative and diverse future in the workplace.