Part time, fixed term contract
Closing date: 23:59, 31st August 2025

We are pleased to announce a fantastic opportunity for ambitious computer scientists to join our Computer Science Graduate Teaching Assistant (GTA) Programme!

How does it work?

Candidates will study for a four-year, full-time funded PhD (three quarters of your time) whilst working and receiving a salary to gain valuable teaching experience (one quarter of your time). Candidates will receive a salary and stipend package that exceeds the standard UKRI stipend for a full-time PhD. Home/RoI students will have their PhD fees waived; international students will receive a fee waiver equivalent to the Home/RoI fee and will be expected to fund the difference between the international fee and the Home/RoI fee. There will be a package of support to enable you to develop a research career in this exciting field.

PhD Topic: Security and Privacy for Trustworthy Large Language Models

The rise of Large Language Models (LLMs), such as ChatGPT, marks a significant milestone in the evolution of AI, transforming the way we perceive, communicate, and collaborate with intelligent machines. LLMs have demonstrated remarkable capabilities in automating tasks, enhancing productivity, and improving user experiences across multiple domains. However, the increasing penetration of LLMs into our daily lives has raised major security and privacy concerns. For instance:

- LLMs are typically trained on vast and diverse datasets, many of which contain personal information about thousands or even millions of individuals; the large repositories of data used to train and fine-tune LLMs thus become targets for data breaches.
- Pre-trained LLMs have been observed to memorise training data to some extent.
- LLMs are susceptible to adversarial attacks, where input text is manipulated to produce unintended or harmful outputs.
- LLMs can also infer users' personal details from query text at inference time.
Addressing these security and privacy issues is crucial to ensure the responsible and ethical use of LLMs and to safeguard security and privacy in trustworthy AI interactions. This project aims to investigate the security and privacy vulnerabilities associated with the design and implementation of Large Language Models in various applications, such as healthcare and 6G networks; to understand the underlying model- and system-level causes of the observed risks; and to propose Security and Privacy Enhancing Technologies (S&PETs) that protect individual safety and privacy rights while maintaining the performance and benefits LLMs can offer. The PhD candidate will analyse the relevant security and privacy attacks targeting LLMs and the corresponding defence strategies; explore state-of-the-art security and privacy enhancing techniques, such as adversarial training, model ensembling, cryptographic primitives, differential privacy, multi-party computation, and federated learning; and design, develop, and validate LLM-specific S&PETs based on the discovered vulnerabilities.

During the 4-year study, the candidate will conduct research secondments for 8 months at highly reputed universities renowned for research excellence. The partner universities include:

- Delft University of Technology, Netherlands
- Eindhoven University of Technology, Netherlands
- University of Göttingen, Germany
- KTH Royal Institute of Technology, Sweden
- University of Alberta, Canada

You will need to demonstrate that you:

- meet the academic requirements for a PhD offer from the University of Reading
- have an MSc degree in Computer Science or a related discipline
- are able to effectively organise your time and prioritise tasks to balance PhD studies with GTA responsibilities
- are able to demonstrate scholarship in developing a publication record in your area of specialist expertise and conduct high-quality PhD research
- are able to communicate scientific concepts clearly and with enthusiasm, in a way that engages students
- have good interpersonal skills and are able to work as part of a team

See the candidate pack attached for more details about funding/salary.

Candidates will be provided with training to develop teaching and pedagogical skills; no prior experience of teaching is necessary. On the research side, our package of support includes access to MSc courses and bespoke training through our Postgraduate and Researcher College, which will help you develop your professional skills as a researcher.

Working hours for the teaching portion will be variable during the academic year but will be no more than 20 hours per week. The terms of the offer of funding for the PhD and the offer of employment will rely upon the postholder being registered as a full-time doctoral student. Successful candidates will be paid an annual salary (£8,745) and stipend (£15,585 per annum) over the 4-year period and will have PhD fees waived at the Home level (please note that students liable for international fees will need to pay the difference between these and the Home fee rate). Fees for 2025/26 (amount payable each year) can be found here.

How do I apply?

Candidates must submit an application for a GTA post via this link. You must upload a combined CV and proposal (max size 1 MB) and complete the supporting statement. We look forward to hearing from you!

Contact Name: Dr Xiaomin Chen
Contact Job Title: Associate Professor in Computer Science
Contact Email address: xiaomin.chen@reading.ac.uk

The University is committed to having a diverse and inclusive workforce, supports the gender equality Athena SWAN Charter and the Race Equality Charter, and champions LGBT equality. We are a Disability Confident Employer (Level 2). Applications for job-share, part-time and flexible working arrangements are welcomed and will be considered in line with business needs.