We are seeking a highly motivated postdoc to conduct research in this fast-moving area. Directions may include quality-evaluation methods for multi-agent systems, attack surfaces, defensive mechanisms, and other topics related to the safe deployment of systems containing multiple LLM- and VLM-powered models. You will be responsible for developing and implementing capability evaluations, attacks, and defensive mechanisms for safe multi-agent systems powered by LLMs and VLMs.

Candidates should hold a PhD (or be near completion) in Machine Learning or a closely related discipline, have knowledge of approaches relevant to agentic systems powered by LLMs or VLMs, and be able to manage their own academic research and associated activities.

Only online applications received before midday on 23rd May 2025 can be considered.