About this assessment

This pilot dashboard assesses how well UK university computer science and software engineering programmes are preparing graduates for an AI-transformed software development profession. It is not a league table; it is a structured evidence review asking a specific question that no one else is examining systematically.

The assessment framework uses five dimensions, each with binary sub-indicators derived from publicly available evidence. Ratings reflect what can be observed from outside the institution: published module descriptions, university AI policies, BCS accreditation records, and programme specifications. "No evidence" does not mean nothing is happening; it means nothing is visible, which is itself a finding.

This is a draft for validation. Each institution assessment needs review by someone with inside knowledge. The framework itself is open to challenge and refinement.

The five dimensions

Methodology

Data sources: University websites, published module catalogues, BCS accreditation records, institutional AI policies, UCAS programme descriptions, QAA benchmark statements, HEPI and Jisc survey data, BCS enrolment analysis.

Rating scale: Strong (clear evidence of engagement from multiple sources) • Partial (some evidence, but incomplete or emerging) • Limited (minimal evidence in public sources) • No evidence (nothing visible in public materials).

Assessment date: March 2026. Programmes and policies change; this is a snapshot.

Contact: Alan W. Brown, Professor of Digital Economy, University of Exeter Business School. Research Director, Digital Policy Alliance.