Join us at GE Vernova Grid Software to be part of the team leading the digital transformation of the energy market. As the world’s energy sector moves away from fossil fuels toward renewable energy sources, industrial companies are challenged with addressing this transition in transformative ways. Digitization will be key to making power-generating assets more efficient and the electric grid more secure and resilient. Our Geospatial products play a critical role in this transformation by supporting the design, modelling and maintenance of electric, gas and telecommunication networks. For more information on our strategy, check out GridOS overview (https://www.gevernova.com/software/products/gridos).
You will be a part of our Grid Software Engineering team, an Agile organization with a flexible working environment, where we are always looking to innovate our products and the processes and technologies we use. Our current focus is on leveraging our long history of Geospatial experience and expertise building client-server products, and evolving those products and tech stacks into modern cloud-based mapping and analytics microservices. We are seeking to hire people who are passionate about technology, enjoy solving challenging problems, and value the positive impact their work has on our customers. We are growing our team to meet these customer needs and will rely on your technical expertise and problem-solving abilities to deliver innovative solutions to complex problems.
As a Data Architect with a focus on building a backend data product, you will work closely with your product development peers in fast-paced Agile development teams, responsible for designing, developing, and delivering a data product that integrates into the broader GridOS Data Fabric. You will focus on managing the data ingestion process, ensuring efficient data flow into the data product. Your expertise in schema design and query optimization will ensure data is structured efficiently and queried with optimal performance.
Roles and Responsibilities
* Architect the data product to be scalable, performant, and well-integrated with the GridOS Data Fabric.
* Lead the design and implementation of data ingestion pipelines for real-time and batch data.
* Design and implement data models and schemas that support optimal data organization, consistency, and performance.
* Ensure that schema design and query performance are optimized to handle increasing data volumes and complexity.
* Ensure data governance, security, and quality standards are met.
* Monitor the performance of data pipelines, APIs, and queries, and optimize for scalability and reliability.
* Collaborate with cross-functional teams to ensure the data product meets business and technical requirements.
* Design APIs (REST, GraphQL, etc.) for easy, secure access to the data.
* Participate in data domain technical and business discussions regarding future architectural directions.
* Gather and analyze data and develop architectural requirements at the project level.
* Research and evaluate emerging data technologies and industry and market trends to assist in project development activities.
* Coach and mentor team members.
Education Qualification
Bachelor's degree in Computer Science or another STEM field (Science, Technology, Engineering, or Math), with advanced experience.
Desired Characteristics
* Proven experience as a Data Product Architect or Data Engineer focusing on building data products and APIs.
* Strong experience in designing and implementing data ingestion pipelines using technologies like Kafka or ETL frameworks.
* Hands-on experience in designing and exposing APIs (REST, GraphQL, gRPC, etc.) for data access and consumption.
* Expertise in data modeling, schema design, and data organization to ensure data consistency, integrity, and scalability.
* Experience with query optimization techniques to ensure fast and efficient data retrieval while balancing performance with data complexity.
* Strong knowledge of data governance practices, including metadata management, data lineage, and compliance with standards (e.g., GDPR).
* Familiarity with cloud platforms (AWS, Google Cloud, Azure) and cloud-native data services (S3, Redshift, BigQuery, Azure Data Lake).
* In-depth knowledge of data security practices (RBAC, ABAC, encryption, authentication).
* Experience working with data catalogs, data quality practices, and data validation techniques.
* Familiarity with data orchestration tools (Apache Airflow, NiFi).
* Expertise in optimizing and maintaining high-performance APIs and data pipelines at scale.
* Strong understanding of data federation and virtualization principles for seamless data integration across systems.
* Familiarity with microservices architecture and API design for distributed systems.
* Excellent communication skills for effective collaboration with cross-functional teams.
* Ability to consult with customers on technical solutions at an enterprise level.
* Ability to analyze, design, and develop software solution roadmaps and implementation plans based on current and future business needs.
Additional Information
Relocation Assistance Provided: No
Seniority level
* Mid-Senior level
Employment type
* Full-time
Job function
* Engineering and Information Technology
Industries
* Electric Power Generation