What the data reveals

A detailed analysis of AI readiness across 23 UK software engineering and computer science programmes. The findings are organised around the patterns that emerged from the evidence — not around individual institutions, but around the structural issues that shape how the sector is responding.

1. The AI development skills gap

This is the starkest finding in the entire assessment. Despite HEPI reporting that 88% of UK students use AI tools in their assessments, and despite industry evidence that AI coding assistants are now used in the majority of professional development teams, not a single institution in our 23-university sample formally requires AI-augmented development skills as part of assessed software engineering coursework.

The best-performing institutions permit and acknowledge AI tool use. Oxford’s CS department published explicit guidance in October 2025 covering AI use for coding diagnostics and code review, with example declarations for AI-assisted coding. UCL’s three-category GenAI assessment framework (Category 1: no GenAI; Category 2: assistive role; Category 3: integral role) gives departments explicit permission to design assessments where GenAI is a core requirement. But even at these institutions, AI tool use is being managed rather than taught as a professional skill.

The UCL standout

UCL’s COMP0237 “Automated Software Engineering” module is the only module in the entire 23-institution sample that approaches teaching AI as a software engineering tool. It explicitly introduces “search-based and other artificial intelligence methods, like large language models, which are used to automate software engineering tasks, such as software testing, program repair, or code completion.” This is available on the MEng and MSc Software Systems Engineering — but as an advanced optional module, not a core requirement.

At the other end of the spectrum, the majority of institutions are entirely silent on AI development tools. Module descriptions reference traditional programming environments — Eclipse, IntelliJ, Python, C, Java, UNIX tools. AI appears as a subject area (machine learning, neural networks, intelligent agents) but not as a tool for building software. Students are almost certainly using AI coding assistants on their own — 92% of students use AI tools according to Jisc — but this is student-led, not curriculum-driven.

Coventry is the only institution in the sample to explicitly mention “generative AI” in its programme-level descriptions. Even this is framed as a topic to study rather than a tool to master.

2. The two-speed system

Jisc’s 2025 research identified a “two-speed system” in UK higher education, with college and university staff often relying on free tools, workarounds, or personal accounts to engage with AI. Our data confirms this and makes it concrete.

Oxford provides ChatGPT Edu for all staff and students (the first UK university to do so, from September 2025), plus Microsoft Copilot Chat and Google Gemini through institutional accounts. It has published clear AI use policies at both university and departmental level, and runs an expanded AI training programme.

At none of the four teaching-focused institutions in our sample — Coventry, Sheffield Hallam, UWE Bristol, and Ulster — could we find public evidence of equivalent institutional AI tool provision. These are the institutions whose graduates will enter the same AI-transformed workplaces as Oxford’s. The playing field is not level.

The pattern holds across the full sample. Research-intensive universities consistently score higher on access and equity, not because they have better teaching (Sheffield Hallam has a Gold TEF, the highest possible rating), but because they have more resources to invest in institutional AI tool licences, training programmes, and published policies.

The equity question

Teaching-focused institutions score consistently higher on practice orientation — they are explicitly focused on producing employable graduates. But they score consistently lower on access to AI tools. This creates a paradox: the institutions most focused on practical readiness are the least equipped with the tools their graduates will need to use. The gap is not about teaching quality. It is about resources.

3. Same-city comparisons

Three pairs of institutions in our sample share a city, which provides natural comparisons that illuminate the tier divide.

Sheffield vs Sheffield Hallam

The University of Sheffield (Russell Group) offers named BEng and MEng Software Engineering degrees with the Genesys student-led software organisation, IBM and ARM partnerships, and strong placement programmes. Sheffield Hallam (post-92, Gold TEF) has no named SE degree, but its “Home of Digital Technologies”, with industry-mimicking labs, design thinking workshops, and progression from simulated to real-world client briefs, gives it a clear practice-oriented strength. Hallam scores higher on practice orientation; Sheffield has stronger research feeding into teaching and better institutional tool access. Neither shows evidence of AI development tool integration in assessed work.

Bristol vs UWE Bristol

The University of Bristol has Isambard-AI (a £225M supercomputer, the UK’s fastest for AI research), “AI University of the Year” recognition, and the Temple Quarter Enterprise Campus launching in 2026. UWE Bristol has something Bristol doesn’t: a separate BSc Software Engineering and a BSc Software Engineering for Business — a rare cross-disciplinary SE offering. UWE also sits in Bristol’s tech “super-cluster” of 50,000+ workers with placements at Microsoft, IBM, and GCHQ. Bristol has prestige and infrastructure; UWE has named SE identity and practice focus.

Queen’s Belfast vs Ulster

Both serve Northern Ireland’s strong tech sector (Kainos, Liberty IT, Citi, BT). Queen’s has the strongest practice commitment in the sample: mandatory paid year-long placement with 500+ employers and an explicit aim to produce “professional software engineers.” Ulster’s distinctive strength is at postgraduate level, where its MSc Computer Science module explicitly covers “engineering, deploying, testing and orchestration of intelligence across modern computing” including “production pipelines, automated testing and automated deployment” — the most explicit MLOps content in the entire 23-institution sample, driven by the BT Innovation Centre and PwC Advanced Research and Engineering Centre partnerships.

4. Named SE degrees correlate with practice strength

Institutions with named Software Engineering degrees consistently score higher on the theory-practice balance dimension. This is not coincidental — a named SE degree signals institutional commitment to software engineering as a distinct discipline, not just an applied subset of computer science.

The institutions with the strongest SE identity in our sample include Sheffield (BEng/MEng), Southampton (BEng/MEng with BCS and IET dual accreditation), Edinburgh (BEng), Swansea (BSc), Imperial (MEng), Heriot-Watt (BSc CS - Software Engineering), Cardiff’s National Software Academy (BSc with SFIA mapping and BCS RIITech registration), UWE Bristol (BSc SE and BSc SE for Business), Coventry (BSc SE), and Newcastle (BSc/MEng).

At the other extreme, Cambridge has the highest proportion of world-leading research in the UK (45% 4* in REF 2021) but the lowest theory-practice score in our sample. Its Part III project explicitly focuses “more on research or research preparation rather than Software Engineering aspects.” No industrial placement option exists. This is a valid institutional choice — Cambridge produces brilliant researchers — but it represents the extreme end of a spectrum where SE is treated as secondary.

5. Systems architecture: the missing dimension

The gap between knowing AI algorithms and knowing how to architect systems that incorporate AI services remains largely unaddressed across the sector. MLOps, model lifecycle management, AI service integration patterns, and data pipeline architecture are rarely visible in undergraduate module descriptions.
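These patterns sound abstract in a module catalogue, but they are small in code. As a purely illustrative sketch — not drawn from any institution’s actual coursework, with all names hypothetical — a first exercise in “AI service integration” might wrap a hosted model call with retries and a deterministic fallback, so the surrounding system degrades gracefully when the model endpoint is unavailable:

```python
import time


class ModelServiceError(Exception):
    """Raised when the (simulated) hosted model endpoint fails."""


def call_with_fallback(prompt, service, fallback, retries=2, delay=0.0):
    """Call an AI service, retrying on transient failure and degrading
    to a deterministic fallback if the service stays down.

    `service` and `fallback` are plain callables, so the pattern can be
    unit-tested without real API credentials or network access.
    """
    for attempt in range(retries + 1):
        try:
            return service(prompt)
        except ModelServiceError:
            if attempt < retries:
                time.sleep(delay)  # back off before the next attempt
    return fallback(prompt)


def flaky_service_factory(failures):
    """Simulated endpoint that raises `failures` times, then succeeds."""
    state = {"calls": 0}

    def service(prompt):
        state["calls"] += 1
        if state["calls"] <= failures:
            raise ModelServiceError("transient outage")
        return f"summary of: {prompt}"

    return service


if __name__ == "__main__":
    svc = flaky_service_factory(failures=2)
    # Service fails twice, succeeds on the third attempt.
    print(call_with_fallback("release notes", svc,
                             fallback=lambda p: p[:10], retries=2))
```

Nothing here requires a GPU or a live model; the exercise is about the systems thinking around the model call, which is exactly the layer most module descriptions leave invisible.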

The strongest evidence comes from unexpected places. Ulster’s MSc module on AI engineering covers “machine learning, federated operation of activities, data engineering, production of tailored computational artefacts (such as models tailored for a range of device types), production pipelines, automated testing and automated deployment.” This is precisely the AI-era systems thinking the framework is looking for — and it is at a post-92 institution, not a Russell Group one.

Edinburgh and Manchester show systems-level strength through their research depth (Edinburgh’s Informatics department is Europe’s largest; Manchester built the first stored-program computer), but the evidence that this research feeds into undergraduate SE teaching is indirect rather than explicit.

Bristol’s Isambard-AI supercomputer provides world-class infrastructure, and its new MSc Engineering with AI explicitly covers “industry standard AI infrastructure.” But at undergraduate level, the systems architecture for AI-incorporating systems is less visible.

6. Hidden gems and distinctive approaches

Cardiff — National Software Academy

A distinctive Welsh Government partnership delivering BSc Computer Science through a dedicated National Software Academy. Unique features include SFIA professional skills framework mapping (the only institution in the sample to use SFIA), BCS RIITech registration on graduation, and a three-day industry taster event led by employers. The programme explicitly aims for graduates to “hit the ground running.”

Imperial — Engineering-first approach

Rare among research-intensive institutions in offering a named MEng Software Engineering alongside CS. The engineering framing — with Y3 industrial placement, mathematical foundations, and systems-level thinking — provides a different orientation from the pure CS departments. Partnerships with Nvidia and Graphcore feed industry practice into teaching.

Coventry — The only institution to name generative AI

Coventry is the only institution in the sample to explicitly mention “generative AI” in its programme-level descriptions and is one of only three in the teaching-focused tier to offer a named BSc Software Engineering. The “Add+vantage” employability scheme provides portfolio-building beyond the core curriculum. For a post-92 institution, the programme breadth is notable.

Warwick — Curriculum overhaul in progress

Currently undergoing a major curriculum review with a “year zero” redesign of first-year CS modules — an explicit response to the AI disruption of CS education. The departmental motivation note acknowledges that “what students need to learn is rapidly changing.” If this review integrates AI development tools into core SE teaching, Warwick could move quickly.

Newcastle — Named SE with professional accreditation

Named BSc and MEng Software Engineering with BCS, IET, and Engineering Council accreditation. The School of Computing explicitly targets the “world-class application of computer science, data science, and cyber security.” Strong industry partnerships with local employers and a practice-oriented ethos visible in programme descriptions.

7. The sector context

Our findings sit within a rapidly shifting landscape. Key contextual data points:

AI degree applications up 15% in 2025 (BCS/UCAS data), while overall computing applications fell 10%. AI intake among 18-year-olds up 39%. Software engineering intake down 7%.

88% of UK students use AI in assessments (HEPI 2025), up from 53% in 2024. 92% use AI tools in some form (Jisc). Only 36% report receiving formal AI training.

QAA computing benchmark statements last updated in 2022, before the current AI wave. Statements from 2025 onwards will address GenAI as a cross-cutting theme, but computing-specific updates have not yet appeared.

ACM/IEEE-CS/AAAI CS2023 significantly increased recommended AI core hours but framed AI primarily as a knowledge area within CS — how AI changes SE practice receives comparatively little attention.

120 UK providers now offer undergraduate AI programmes for 2025-26, up from 47 the previous year. New programmes are proliferating, but existing core CS and SE programmes are adapting more slowly.

8. What this means

The AI transformation of software development is not waiting for universities to catch up. Companies are already reorganising their development teams, changing their hiring criteria, and expecting graduates to be proficient with AI-augmented development workflows. The graduates entering the workforce from UK universities in 2026 and 2027 were largely taught curricula designed before ChatGPT existed.

This is not a criticism of universities, which face genuine constraints: slow accreditation cycles, financial pressures, difficulty recruiting AI-skilled academics, and the inherent challenge of updating curricula for a technology that is itself changing rapidly. But it is an observation that the gap between what industry needs and what universities are producing is widening — and that nobody was measuring it in a way that could inform action.

That is what this assessment attempts to do. It is a starting point, not a final answer. Each institution assessment needs validation by someone with inside knowledge. The framework itself is open to challenge and refinement. But the patterns are clear enough to warrant attention.

The profession has moved. The curriculum is still deciding what to do about it.

Explore the interactive dashboard →  •  Read the methodology →