Senior Manager, Responsible AI Solution Architect
Location: London, United Kingdom. Salary: Competitive. Closing date: 19 Apr 2026.
Job Details
About the role:
PwC is rapidly expanding its market-leading Responsible AI (RAI) practice in the UK to meet fast-growing client demand across all sectors. As a Senior Manager and Responsible AI Solution Architect, you will shape and deliver PwC's Responsible AI, Ethics, Security and Trust agenda. You will lead the architecture and delivery of trusted AI solutions across GenAI and agentic platforms, working at the intersection of engineering, cyber, data, model risk and regulatory compliance. You will help clients move from principles to production by designing secure, governed and observable AI systems. This role offers the opportunity to influence how some of the UK's largest organisations adopt AI responsibly through technical design leadership, assurance-by-design, and scalable governance patterns.
What your days will look like:
As a Senior Manager and Responsible AI Solution Architect, you will lead the design and delivery of trusted, secure and compliant AI systems across GenAI and emerging agentic platforms. Working across engineering, cyber, data governance and model risk, you will translate Responsible AI principles and regulatory requirements into production-ready architectures. This is a hands-on technical leadership role shaping how major organisations build, validate and scale AI safely.
1. Architect end-to-end AI / GenAI / agentic solutions (from data and identity through model integration, orchestration, deployment, monitoring and incident response), embedding PwC's "Trust by Design" architecture patterns to ensure systems are secure, governed, transparent and resilient.
2. Develop and mature PwC's "Trust by Design" reference architectures and patterns for GenAI and agentic AI, embedding: safety and policy controls (guardrails, content safety, prompt-hardening); transparency and auditability (logging, traceability, lineage); privacy and security controls (data minimisation, encryption, key management); and human-in-the-loop and escalation mechanisms.
3. Translate regulatory, policy, privacy and ethical requirements into concrete technical controls embedded across the SDLC and secure-by-design practices, the data lifecycle and data governance, and the model lifecycle (training, fine-tuning, evaluation, release, monitoring).
4. Partner with cyber and resilience specialists to advance AI threat modelling, prompt-injection and data-exfiltration mitigations, adversarial testing, and model assurance approaches.
5. Define and lead AI assurance strategies spanning testing, validation, monitoring and control effectiveness for classical ML and GenAI.
This role is for you if you have:
1. Deep expertise in Responsible AI principles and operating models, including design or assessment of governance/control frameworks aligned to recognised standards (e.g. NIST AI RMF, ISO/IEC 42001).
2. Strong understanding of the UK/EU AI regulatory landscape (including the EU AI Act), data protection, model risk concepts, and AI ethics.
3. Proven experience as a solution architect / technical architect designing and implementing enterprise-grade AI systems.
4. Strong understanding of GenAI architectures (RAG, tool use, function calling, agents, orchestration patterns), including failure modes and risk controls.
5. Hands-on or architecture-level experience with cloud AI platforms and services, ideally across Azure (e.g. Azure AI / Azure OpenAI, Prompt Flow, AML, Purview, Sentinel), AWS (e.g. Bedrock, SageMaker, IAM/KMS, CloudWatch), and other common data platforms, API management, and identity/access patterns.
6. Familiarity with AI-enabling technologies and ecosystems: vector databases, embedding pipelines, feature stores, model registries, prompt/trace observability, CI/CD for ML/LLM systems.
7. Demonstrated ability to design and lead testing and validation approaches for AI systems, including GenAI safety testing, adversarial testing/red teaming, monitoring and incident management.
8. Working knowledge of emerging methods such as fine-tuning (e.g. parameter-efficient approaches), evaluation harnesses, and measurement of risk/quality.
What you'll receive from us:
No matter where you may be in your career or personal life, our benefits are designed to add value and support, recognising and rewarding you fairly for your contributions.
We offer a range of benefits including empowered flexibility and a working week split between office, home and client site; private medical cover and 24/7 access to a qualified virtual GP; six volunteering days a year and much more.
Company
To be the leading professional services firm, it's important we have the right values, culture and behaviours embedded throughout our organisation, so our work reflects our purpose and we can successfully deliver our strategy.
Our values set the expectations for the way we interact with each other, our clients, and the communities in which we operate. These values, and the behaviours they require from us, are relevant to all our people regardless of grade.
They support a culture that empowers our people to be the best they can be, through challenging experiences and encouraging our people to speak up to make the firm a better place. We want all our people to understand and embrace the culture and personally feel part of the legacy this will create for our future employees.
We’re a hugely diverse business, bound together by our purpose - to build trust in society and solve important problems for our clients and the communities in which we operate. We believe we can make the biggest impact when our purpose is embedded within everything we do.