About the role:
Join Samsara as a Staff Software Engineer focused on our Developer Ecosystem and its critical role in the company's broader data and AI strategy. You will be a key technical leader, responsible for architecting and guiding high-impact, cross-functional initiatives that shape the future of our external integrations. This role is pivotal in connecting our core developer products (API, Eventing, Lambdas, Marketplace) to the company's next-generation AI and data platforms. You will be a force multiplier, empowering our largest enterprise customers and our internal engineering teams to build scalable, intelligent, and deeply integrated solutions on the Samsara platform.

This is a remote position open to candidates residing in Poland or the United Kingdom.

You should apply if:
- You want to impact the industries that run our world: The software, firmware, and hardware you build will have real-world impact, helping to keep the lights on, get food into grocery stores, and, most importantly, ensure workers return home safely.
- You want to build for scale: With over 2.3 million IoT devices deployed to our global customers, you will work on a range of new and mature technologies, driving scalable innovation for customers across the industries that power the world's physical operations.
- You are a life-long learner: We have ambitious goals. Every Samsarian has a growth mindset as we work with a wide range of technologies, challenges, and customers that push us to learn on the go.
- You believe customers are more than a number: Samsara engineers enjoy a rare closeness to the end user. You will have the opportunity to participate in customer interviews, collaborate with customer success and product managers, and use metrics to ensure our work translates into better customer outcomes.
- You are a team player: Working on our Samsara Engineering teams requires a mix of independent effort and collaboration. Motivated by our mission, we're all racing toward our connected operations vision, and we intend to win – together.

In this role, you will:
- Architect the end-to-end data integration strategy, evolving our high-throughput Eventing platform (Kafka) into a comprehensive data pipeline.
- Drive the strategy and architecture for our serverless compute (Lambda) capabilities, enabling developers to run complex logic.
- Establish and champion the technical vision and architectural governance for our external API framework, ensuring consistency, scalability, and a world-class developer experience across all of Samsara's product lines.
- Act as a technical leader across the organization, mentoring senior engineers, leading design reviews for complex systems, and aligning stakeholders toward a unified technical direction.
- Partner with senior leadership (Product, Engineering) and our largest enterprise customers to define and execute a multi-year technical vision for Samsara's developer and data platform.
- Champion, role model, and embed Samsara's cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices.

Minimum requirements for the role:
- Deep, systems-level expertise in designing, building, and operating high-throughput, large-scale data streaming systems (e.g., Kafka).
- Proven experience architecting and implementing AI-first data strategies, with a strong understanding of how event streams (Kafka) integrate with data lakes and data warehouses in a modern lakehouse architecture.
- Demonstrable experience with serverless compute architectures (e.g., AWS Lambda) and their associated design patterns and trade-offs.
- Extensive experience designing, building, and providing governance for large-scale, public-facing APIs (GraphQL preferred).
- A track record of leading complex, multi-team, multi-quarter technical initiatives from inception to delivery.

An ideal candidate also has:
- Experience with managed event streaming platforms (e.g., Amazon MSK, Azure Event Hubs, Google Cloud Pub/Sub).
- Deep familiarity with cloud data warehousing and data lake solutions (e.g., Snowflake, Databricks).
- Experience with ML/AI model deployment and MLOps infrastructure.