University of Melbourne
AI Adoption Report

The state of AI adoption at the University of Melbourne

By Gary Liang, Founder & CEO

The University of Melbourne governs generative AI through a central Generative AI Taskforce and operationalises it for teaching through a secure-versus-open assessment framework, with a target of making 50% of each subject's assessments 'secure' (observed exam, oral, performance, or supervised placement) by 2025.

Melbourne has built two in-house GenAI tools: AILA, a student-facing AI Learning Assistant embedded in the Canvas LMS; and Spark AI, a secure staff platform that exposes OpenAI and Anthropic models under the institution's data-handling controls. Microsoft 365 Copilot Chat is available to all students at no extra cost.

Institutional position

Melbourne governs generative AI centrally through a Generative Artificial Intelligence Taskforce, with operational policy owned by the Centre for the Study of Higher Education and Teaching, Learning and Innovation. The institution-wide assessment reform program for 2025 requires each subject to make 50% of its assessments 'secure', meaning observed exams, interactive orals, performances, or supervised placements where AI use can be reliably controlled.[1][3]

Unlike Sydney's two-lane model, UNSW's seven-level scale, or UWA's three tiers, Melbourne does not operate a numbered AI assessment scale. The framework is a binary 'secure or open' split with a 50% secure target per subject, leaving the remaining 50% as open assessments where AI use is permitted with acknowledgement.[3]

Melbourne's Student Academic Integrity Policy (MPF1310) governs misuse of AI tools, and the institution's public position on AI in assessment requires students to acknowledge any use of GenAI, specifying the tool name, prompts, and how output was used.[2]

On AI detection tools, Melbourne explicitly cautions against using Turnitin's AI writing detector as standalone evidence: 'A high AI score in Turnitin's writing detection report is not proof that academic misconduct has taken place and does not, on its own, constitute grounds for making an allegation of academic misconduct.'[4]

Bloom at Melbourne

Bloom runs on Microsoft Azure OpenAI and Google Vertex AI under enterprise data handling, the same model-provider posture Melbourne already operates internally through Spark AI. Prompts and responses are not used to train base models, and Bloom satisfies the same data-handling controls Melbourne applies to its in-house GenAI tooling.[5]

Bloom can be deployed at Melbourne in a day, with subject-specific material ingestion and convenor-controlled student access. No new procurement, vendor security review, or engineering integration is required.

AI tools at Melbourne

Melbourne has rolled out three categories of generative AI tooling: a free vendor product (Microsoft Copilot Chat) for students, an internal staff platform (Spark AI), and an in-house student-facing tutor embedded in Canvas (AILA).

Microsoft 365 Copilot Chat

All Melbourne students have free access for the duration of their enrolment. Note that this is the Copilot Chat entitlement, not the full Microsoft 365 Copilot productivity-suite licence: 'this does not include a license to Microsoft 365 Copilot.'[7]

Spark AI (staff)

An internal generative AI platform for Melbourne staff. Spark AI exposes OpenAI GPT-4 and GPT-3.5, and Anthropic Claude 3 Haiku and Claude 3 Sonnet (200k context), under Melbourne's institutional data-handling controls. Staff may input restricted or confidential University data subject to policy.[5]

AILA (AI Learning Assistant)

Melbourne's in-house student tutor, built on Spark AI infrastructure and embedded in the Canvas LMS. AILA operates in two modes: a subject-grounded Q&A mode limited to LMS materials, and a Socratic tutoring mode that avoids giving direct answers to assessment questions. Released in beta with broader student testing through 2025.[6]

AI research and ethics

Melbourne's institutional research footprint on AI ethics is anchored by the Centre for AI and Digital Ethics (CAIDE), founded in 2020 as a cross-faculty initiative with Engineering & IT and Melbourne Law School as founding faculties, joined by Arts, Education, and Medicine, Dentistry and Health Sciences. CAIDE is co-directed by Professor Jeannie Marie Paterson (Melbourne Law School) and Associate Professor Tim Miller (School of Computing and Information Systems), with Simon Coghlan as Deputy Director.[8]

Professor Gregor Kennedy, Deputy Vice-Chancellor (Academic), has framed the institutional shift in posture: 'When generative AI emerged a couple of years ago, most institutions, including mine, ran pretty hard at the risk. We worked closely with regulators to understand the implications, before gradually shifting to exploring opportunities.' He has separately characterised AI literacy as becoming 'as essential as digital literacy was in the early 2000s'.[9]

For Melbourne staff

If you want to evaluate Bloom for a subject, get in touch.

Get in touch

This page reflects publicly available information as of 1 May 2026. If you work at Melbourne and there is something we should correct, please get in touch.