How this assessment was built
This page describes the framework, data sources, rating methodology, sample design, and the reasoning behind the approach. It is written for anyone who wants to understand exactly what was measured and how, or who wants to challenge or build on the framework.
Why we reframed the question
The commonly asked question — “How are UK CS departments responding to AI?” — is the wrong question, or at least an incomplete one. It conflates three very different things that are all being called “AI in CS education”:
1. Teaching AI as a subject
ML algorithms, neural network architectures, training approaches. This is what research-intensive departments are good at and proud of. Most are doing it well.
2. Teaching with AI
Using AI tools in pedagogy, dealing with academic integrity. This is what QAA, Jisc, and HEPI are focused on, and it is a sector-wide concern, not specific to CS.
3. Teaching how to build software in an AI-transformed world
How to architect systems that incorporate AI components. How to validate AI-generated code. How to manage AI-augmented development teams. How to handle the lifecycle of software that is partly written by machines. This is the gap nobody is tracking systematically.
Our assessment focuses on the third question. The reframing centres software engineering rather than treating it as a secondary concern, invites assessment of practical readiness rather than academic prestige, and connects directly to the workforce question that policy-makers and industry leaders care about most.
The five-dimension framework
Each institution is assessed against five dimensions, each capturing a different aspect of readiness for AI-transformed software engineering practice. Within each dimension, five binary sub-indicators provide specific, verifiable evidence points.
Dimension 1: AI-augmented development skills
Are students learning to work with AI coding tools as part of their normal development practice? This includes using AI code generation and completion tools, validating and reviewing AI-generated code, using AI for testing and debugging, and understanding when AI assistance is helpful and when it introduces risk.
Sub-indicators: AI tools in assessed coursework • AI tools permitted with guidance • Prompt engineering taught • AI code review skills • AI referenced in SE module descriptions
Dimension 2: Systems thinking for AI-era architectures
Are software engineering modules teaching how to design systems that incorporate AI components? This means understanding how to integrate AI/ML services into larger software architectures, managing the lifecycle of machine learning models in production, handling the data pipelines and feedback loops that AI systems require, and addressing the governance and versioning challenges that AI components introduce.
Sub-indicators: Cloud/distributed systems taught • MLOps or model lifecycle management • AI service integration patterns • Data pipeline architecture • Technical AI governance
Dimension 3: Professional practice update
Have assessment methods and teaching approaches changed to reflect current professional practice? Indicators include whether assessment has moved beyond traditional code-submission-and-test-case models, whether industry placements reflect AI-transformed workplaces, and whether there is evidence of engagement with current professional practice.
Sub-indicators: Assessment beyond code-submit • Team/collaborative projects • Industry placement available • Code review in curriculum • Agile/DevOps practices taught
Dimension 4: Theory–practice balance
Where on the spectrum from pure CS theory to applied SE practice does the department sit, and how is AI being integrated across that spectrum? A department might score highly on AI research output while doing little to update its SE teaching. Conversely, a teaching-focused institution might have limited research output but be highly responsive to industry needs. Both are valid positions.
Sub-indicators: Named SE degree programme • Active industry partnerships • Professional skills embedded • AI research feeds into teaching • Practice-oriented curriculum
Dimension 5: Access and equity
Do students have equitable access to the AI tools and environments they will encounter in professional practice? This includes whether institutions provide enterprise-level access to AI development tools, whether there is structured training in AI tool use, and whether the digital divide is being actively addressed.
Sub-indicators: Institutional AI tool licence • CS-specific AI tool training • GitHub Copilot for students • AI use policy published • Structured AI training available
Rating scale
The rating scale is deliberately qualitative rather than numerical. A numerical score would imply false precision for what is fundamentally a public evidence review.
Strong — Clear, multiple pieces of evidence of engagement with this dimension. Visible in published programme materials, module descriptions, and institutional policies.
Partial — Some evidence of engagement, but incomplete or emerging. May show strength in one aspect of the dimension while lacking in others.
Limited — Minimal evidence from public sources. What exists is generic or indirect rather than specific to AI-transformed SE practice.
No evidence — Nothing visible in public materials. This does not mean nothing is happening — it means nothing is visible, which is itself a finding. A department that has no public evidence of AI integration in its SE teaching is, at minimum, failing to signal it.
Each dimension also carries a numerical score (0–100) for visualisation purposes in the dashboard. These scores are indicative and reflect the weight of evidence rather than a precise measurement. They enable visual comparison but should not be treated as definitive rankings.
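To make the relationship between sub-indicators, qualitative ratings, and indicative scores concrete, here is a minimal sketch of how a dimension assessment could be represented. The count-based thresholds and the 20-points-per-indicator scoring are illustrative assumptions for this sketch only; the published methodology assigns ratings qualitatively from the weight of evidence, not by a fixed formula.

```python
from dataclasses import dataclass

# Illustrative mapping from evidence count to rating band.
# ASSUMPTION: the real assessment is a qualitative judgement, not a count.
RATING_BY_COUNT = {0: "No evidence", 1: "Limited", 2: "Partial",
                   3: "Partial", 4: "Strong", 5: "Strong"}

@dataclass
class Dimension:
    name: str
    sub_indicators: dict[str, bool]  # five binary, verifiable evidence points

    def rating(self) -> str:
        """Qualitative rating band from the number of evidenced indicators."""
        return RATING_BY_COUNT[sum(self.sub_indicators.values())]

    def indicative_score(self) -> int:
        """0-100 score used only for dashboard visualisation, not ranking."""
        return 20 * sum(self.sub_indicators.values())

# Example using the Dimension 5 sub-indicators (evidence values invented).
dim = Dimension(
    name="Access and equity",
    sub_indicators={
        "Institutional AI tool licence": True,
        "CS-specific AI tool training": False,
        "GitHub Copilot for students": True,
        "AI use policy published": True,
        "Structured AI training available": False,
    },
)
print(dim.rating(), dim.indicative_score())  # → Partial 60
```

The point of the sketch is the separation of concerns: the binary sub-indicators are the verifiable evidence, the rating band is the headline judgement, and the numerical score exists only to drive the dashboard visuals.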
Sample design
The sample of 23 institutions is purposive rather than random, covering the range of UK computing provision across three tiers:
Research-intensive (9 institutions)
Oxford, Cambridge, Edinburgh, UCL, Imperial, Southampton, Bristol, Manchester, Warwick. These dominate research funding, industry partnerships, and public attention. The relevant question for this tier: is your world-class AI research informing how you teach software engineering?
Mid-tier established (10 institutions)
Sheffield, Loughborough, Exeter, Leeds, York, Swansea, Queen’s Belfast, Heriot-Watt, Cardiff, Newcastle. Established departments with good research profiles and strong teaching. Many have named SE degrees. The relevant question: are you equipping graduates with practical skills for AI-augmented development environments?
Teaching-focused (4 institutions)
Coventry, Sheffield Hallam, UWE Bristol, Ulster. Post-92 universities focused on producing employable graduates. Their students are more likely to move straight into practitioner roles. The relevant question: are your practice-oriented programmes keeping pace with how practice is actually changing?
Geographic coverage includes England (17), Scotland (2), Wales (2), and Northern Ireland (2). All four UK nations are represented. The sample deliberately includes three same-city pairs (Sheffield/Sheffield Hallam, Bristol/UWE Bristol, Queen’s Belfast/Ulster) to illuminate tier differences in the same local context.
Data sources
All assessments are based on publicly available evidence. The primary sources are:
University websites — Programme pages, module catalogues, departmental pages, and news announcements. This is where the majority of evidence comes from.
BCS accreditation records — Which programmes are accredited, at what level, and when last reviewed.
Institutional AI policies — Published guidance on AI use in teaching, learning, and assessment. Where available, department-specific policies (such as Oxford CS’s departmental AI policy).
UCAS programme descriptions — Standardised descriptions that sometimes contain detail not on the university’s own site.
Sector data — HEPI and Jisc survey data on student and staff AI usage. BCS/UCAS enrolment analysis. QAA benchmark statements. ACM/IEEE/AAAI CS2023 curriculum guidelines. REF 2021 results.
Each institution assessment includes source URLs so that every claim can be traced to its evidence. Assessment dates are recorded; programmes and policies change, and this is a snapshot.
Limitations and caveats
Public evidence only
This assessment can only measure what is publicly visible. Internal documents — actual assignment briefs, staff guidance, timetables, VLE content — are not accessible. Some institutions may be doing excellent work that simply isn’t reflected in their public materials. The “no evidence” rating is not a judgement that nothing is happening; it is an observation that nothing is visible.
AI-assisted research
The evidence gathering was substantially AI-assisted: AI tools were used to search university websites, read module descriptions, and produce structured draft assessments. Each assessment was reviewed for accuracy, but the process inevitably reflects the biases and limitations of web-available information. Institutions with better-structured, more detailed public websites will tend to score higher simply because more evidence is available.
Rapidly changing landscape
The AI in education landscape is changing fast. Several institutions in the sample are undergoing curriculum reviews (Warwick, Edinburgh, Leeds). Policies are being updated. New modules are being introduced. This assessment represents a snapshot as of March 2026 and will need regular updating to remain useful.
Not a league table
This is not a ranking. The dimensions capture different things, and an institution that scores “limited” on AI development skills but “strong” on theory–practice balance (like Sheffield) is not worse than one that scores “partial” on both. The assessment is designed to surface patterns and gaps, not to rank institutions against each other.
Next steps
This is a pilot assessment designed for validation and discussion. The immediate next steps are to seek feedback from colleagues within the assessed institutions, to refine the framework based on that feedback, and to extend the sample — particularly with more teaching-focused institutions that produce the bulk of the UK’s working software engineers.
The assessment connects to the broader argument in Making AI Work for Britain (London Publishing Partnership, forthcoming April 2026): that the UK’s AI ambitions depend not just on research excellence but on the practical readiness of the workforce that builds and delivers AI-incorporating systems.
Contact: Alan W. Brown, Professor of Digital Economy, University of Exeter Business School. Research Director, Digital Policy Alliance. AI Director, Digital Leaders Network.