Real-world datasets are often plagued by label and input noise arising from variability in data collection, annotation errors, and incomplete records. Fully curating such datasets is costly and impractical, and models trained only on idealised, clean data often fail to generalise to the unseen test sets encountered in deployment. This project develops noise-resilient AI models by jointly learning low-dimensional representations of both the data and the model parameters during training, so that the resulting models account for both aleatoric and epistemic uncertainty and become robust and generalisable. The project is run in close collaboration with the National Physical Laboratory and benefits from the scientific environment and resources of the Centre for Vision, Speech and Signal Processing (CVSSP) and the Institute for People-Centred AI at the University of Surrey.
A generous stipend is offered in addition to funding for UK-level tuition fees and research training.
We are looking for applicants with a degree in Computer Science, Mathematics, Physics, or Engineering. Prior experience in AI is essential. Prior experience in tomographic imaging or medical physics would be advantageous but is not required.