
Opening Keynote: Productive and ethical use of AI - Professor Rowena Harper, Deputy Vice-Chancellor (Education), Edith Cowan University

Tuesday, July 8, 2025
9:00 AM - 10:00 AM
BelleVue Ballroom 2 - Level 3



Details

Opening Keynote: Professor Rowena Harper, Deputy Vice-Chancellor (Education), Edith Cowan University

For as long as they have existed, universities have, in varying measures, advanced three broad purposes: 1) preparing students to contribute to the world of work; 2) socialising students into the local and global contexts in which they’re situated; and 3) equipping students with the dispositions and capabilities to make the world a better place. The first is practical, the second is social, and the third is critical. As Gert Biesta highlights, while “synergy” between the three purposes is possible, “there is also potential for conflict” (p.22). This keynote suggests that, in the higher education present, remembering these three broad purposes can help universities and their staff grapple with their responsibilities for engaging with Artificial Intelligence (AI).

AI will replace some of the jobs and tasks for which we have been educating students (in fact, it already is). Universities therefore have a practical responsibility to elevate program learning outcomes to reflect capabilities well beyond those of AI. In addition, AI offers the world what Marcuse (1941) would call an efficiency machine, “the embodiment of rationality and expediency” (p.46), which promises to make all manner of tasks faster, including learning. As ‘efficiency’ is a core logic of the dominant economic order, universities have a socialising responsibility to teach students to leverage AI appropriately – that is, to teach students how to engage with AI to produce effective outputs and to evaluate them.

AI is also, however, a deeply problematic technology, with widely acknowledged ethical issues across all stages of its production and deployment cycle. If universities limit their critical responsibilities to teaching students how to evaluate the outputs of AI, they will fail to achieve their full range of purposes. Universities will do less to develop graduates who can make the world a better place, and more to produce graduates who, through their prompts and user feedback, merely labour to make better AI products for those who own them. Universities and their students must therefore develop and retain a critical disposition: one that is critical of AI itself, not just of the effectiveness of its outputs.

Returning to Biesta, this keynote argues that universities, in their engagement with and responses to AI, need to consciously sit in the discomforting place of the “conflict” between their practical, social, and critical purposes. They cannot disavow the impact that AI will have on the sector, ignore the responsibility to equip students and ourselves to work with it effectively, or shirk the responsibility to subject its production, biases, harms, and ideologies to sustained critique. This keynote examines how we may sit in this discomforting place of conflict in relation to AI, in the context of the everyday work that our teaching teams do: deciding what students should learn, how they will best learn it, and assuring that learning has occurred. In a landscape shifting towards programmatic approaches to assessment, I explore how program leads, subject coordinators, and tutors can use this notion of discomforting conflict to help staff and students navigate both the productive and ethical use of AI.


Speaker

Prof Rowena Harper
Deputy Vice-Chancellor (Education)
Edith Cowan University

Productive and ethical use of AI

9:00 AM - 10:00 AM


Biography

Professor Rowena Harper is Deputy Vice-Chancellor (Education) at Edith Cowan University, with a portfolio that includes the Library, the Centre for Learning and Teaching, Student Administration, and Employability. Her experience spans over 20 years of practice, research, and professional service in higher education. She has taught in arts and humanities, enabling education, and academic language and learning, and has led services in learning support, staff development, curriculum innovation, and learning technologies. She is a former President of the Association for Academic Language and Learning (AALL) and a co-founder of the International Consortium of Academic Language and Learning Developers (ICALLD). An active researcher, Professor Harper is perhaps best known for her work in academic integrity; she has also researched educational development in digital learning environments and English language and communication development. Most recently, she was an invited contributor to the TEQSA guidance Assessment reform for the age of artificial intelligence (Lodge et al. 2023).