Role
This role will give you the opportunity to enable the safe and responsible uptake of robotics and autonomous systems across a range of application sectors, including autonomous driving, maritime, and healthcare. Building on previous research undertaken in the AAIP, such as SACE and AMLAS (https://www.york.ac.uk/assuring-autonomy/guidance/), you will develop methods that lead to the creation of convincing through-life safety assurance arguments for AI-based autonomous systems and demonstrate how these methods can be applied in an industrial context. This post provides a unique opportunity to develop and validate innovative approaches to safety assurance on real autonomous robotic systems, grounded in a solid foundation of systems safety engineering and computer science. The role will require you to work closely and effectively with other team members, including experienced engineers and researchers, and to explain your research clearly and precisely to a range of different audiences.
Skills, Experience & Qualifications needed
You must have a first degree in Computer Science or a cognate discipline, and a PhD in computer science or autonomous systems, or equivalent experience. You should have knowledge of systems safety engineering, robotics, autonomous systems, artificial intelligence, or a related discipline. Experience of developing machine learning models for embedded systems is desirable. You must have experience of undertaking high-quality research. In addition, for Grade 7, you must demonstrate a proven ability to attract research funding and to take responsibility for managing a research project, including the supervision of the work of others.
Interview dates: 18th/19th December