Data Engineer at JLL
Role summary
As a Data Engineer, you will lead the development of scalable data pipelines and the integration of diverse data sources into the JLL Asset Beacon platform. Your work will focus on consolidating financial, operational, and leasing data into a unified platform that delivers accurate insights for commercial real estate asset management. Collaborating closely with internal developers and stakeholders, you will gather requirements, solve integration challenges, and ensure seamless data flows to support informed decision‑making. In addition to technical responsibilities, you will mentor junior engineers, promote best practices in data engineering, and maintain high‑quality standards through code reviews. Leveraging tools like Spark, Airflow, Kubernetes, and Azure, you will enhance the platform's performance, reliability, and scalability.
Company bio
JLL Asset Beacon is transforming commercial real estate asset management through data integration and innovation. Our SaaS platform consolidates and reconciles data across financial, operational, and leasing functions, creating a single source of truth. By providing real‑time, end‑to‑end visibility into asset, fund, and portfolio performance, we empower real estate professionals to make faster, more informed decisions. With robust data visualization and reporting capabilities, our platform simplifies complex data ecosystems, enabling seamless collaboration and unlocking opportunities for value creation.
Responsibilities
* Data pipeline development: design and implement scalable, efficient, and robust data pipelines
* Data platform management: support and maintain the data platform to ensure reliability, security, and scalability
* Collaborate with internal developers and stakeholders to gather requirements, deliver insights, and align project goals
* Mentor junior engineers, fostering their growth through knowledge sharing and guidance
* Conduct code reviews to maintain quality and consistency
Our technologies
* Data Processing: Spark
* Workflow Orchestration: Airflow
* Containerization: Kubernetes
* Cloud: Azure
* Data APIs and Semantic Layer: CubeJS
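To give a concrete, purely illustrative feel for how this stack fits together, here is a minimal PySpark sketch of the kind of consolidation work described above. Every path, column name, and storage account in it is an invented example for this posting, not part of the actual platform.

```python
# Illustrative sketch only: consolidating hypothetical financial and
# leasing extracts into a single curated table with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("asset-consolidation-example").getOrCreate()

# Read two hypothetical source extracts landed in Azure storage
# (the containers, account, and paths below are made up).
financials = spark.read.parquet("abfss://raw@example.dfs.core.windows.net/financials/")
leases = spark.read.parquet("abfss://raw@example.dfs.core.windows.net/leases/")

# Reconcile the sources on a shared asset identifier and derive a
# simple per-square-foot metric for downstream reporting.
consolidated = (
    financials.join(leases, on="asset_id", how="inner")
    .withColumn("noi_per_sqft", F.col("net_operating_income") / F.col("leased_area_sqft"))
)

# Write the unified view back to a curated zone.
consolidated.write.mode("overwrite").parquet(
    "abfss://curated@example.dfs.core.windows.net/asset_performance/"
)
```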
The candidate
* Educational background: A STEM degree, preferably in Computer Science or a related computing discipline.
* Professional experience: At least 2 years of experience in data engineering, data warehousing, or a related field.
Technical proficiency
* Strong Python and PySpark experience.
* SQL skills are essential.
* Experience with data orchestration tools such as Airflow, ADF, or SSIS (see the sketch below).
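For context on what day‑to‑day orchestration work can look like, the sketch below shows a bare‑bones Airflow DAG. The DAG id, schedule, and task are hypothetical examples, not taken from any real codebase.

```python
# Illustrative sketch only: a minimal daily Airflow DAG with one task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for a real extract/load step (e.g. pull source data
    # and land it in cloud storage).
    print("pulling source data and landing it in storage")


with DAG(
    dag_id="example_asset_data_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```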
Data expertise
* Solid understanding of data modeling principles and data warehousing concepts.
Domain knowledge
* Financial or real estate experience is advantageous but not required.
Seniority level: Associate
Employment type: Full‑time
Job function: Information Technology