3A -

Tracks
Track 1
Wednesday, July 10, 2024
10:30 AM - 12:25 PM
Hall C

Speaker

Dr Ann Parkinson
Senior Lecturer in Physiology and Anatomy
University of the Sunshine Coast

10:30am - 10:55am Artificial intelligence and academic integrity: can they be friends?

Final abstract

Focus
We present outcomes of an in-class activity designed to highlight students' perceptions of the ethical and responsible use of generative artificial intelligence (GenAI) tools.
Background/context
At the start of 2023, higher education was abuzz with fears that students would cheat using the newly available conversational artificial intelligence tool ChatGPT (Cotton et al., 2023). We designed a learning activity for physiology students to consider ChatGPT's use in group assessment tasks.
Description
In tutorial classes (March 2023), we asked students (n=124) to critique a sample exam answer as a form of content revision. The answer had been covertly generated by ChatGPT; it lacked depth and contained some hallucinated facts. Students then engaged in an open discussion about the use of GenAI for university study and assessment. Five question prompts stimulated discussion, including how students felt about using GenAI for their group written assignment. Student responses were collected anonymously via Padlet. Finally, it was revealed that ChatGPT had written the answer.
Method
The 225 separate student responses underwent thematic analysis. Group assignment scripts were checked for GenAI use declarations, and text was analysed using the AI detection tool in Turnitin.
Evidence
Students were concerned about breaching academic integrity guidelines, about the accuracy and sources of information, and that GenAI use could be detrimental to their learning. In general, students agreed that using GenAI for background research or inspiration was a responsible use. Only four groups (of 70) acknowledged the use of GenAI tools in the preparation of their assignment, and no instances of inappropriate GenAI use were suspected.
Contribution
This activity showcases how ChatGPT was used to engage students in critical analysis and to stimulate thought about how and when it is appropriate to use GenAI.
Engagement
The audience will be asked to reflect and share if/how they have engaged students in discussion about the use of GenAI tools in learning and assessment.

Biography

Ann completed a PhD in Physiology at the University of New South Wales (2001) and is currently a Senior Lecturer in Physiology and Anatomy at the University of the Sunshine Coast, Australia. She has over 25 years' experience in developing and delivering curriculum across biology and physiology. Ann has received several teaching awards, including the Vice Chancellor's Medal for Outstanding University Teacher (USC, 2008) and an Australian Learning and Teaching Citation for excellence in learning and teaching (2009), and was recognised with an HEA Senior Fellowship in 2019. Her research areas are primarily in biomedical science education, in particular academic integrity and ethics, and visualisation strategies.
Dr Nicole Reinke
University of the Sunshine Coast

Co-presenter

Biography

Nicole completed a PhD in Physiology at the University of New England, Australia (2006), a Graduate Certificate in Education (2011) and a Master of Education (2013) at James Cook University, Australia. She has taught biology and physiology for over 20 years at universities in Australia and Canada, and is currently based at the University of the Sunshine Coast, Australia. Nicole received an Award for Advancing the Blended Learning Environment in 2017 and was recognised as a Senior Fellow of the Higher Education Academy (SFHEA) in 2019. Her research interests include the development, implementation, and evaluation of learning technologies such as 3D immersive animation, as well as academic integrity and physiology projects focussed on cellular metabolism.
Miss Wenjie Hu
The University of Hong Kong

11:00am - 11:25am Bridging the gap between students and generative AI: A pedagogical framework for AI literacy in translation education

Final abstract

Focus
Presentation of a validated framework for enhancing AI literacy among translation students
Background/context
The emergence of Generative AI has profoundly impacted industry practices and higher education, notably revolutionizing the translation sector. It has boosted productivity while triggering concerns over job security for translators (Sahari et al., 2023). This shift underscores the necessity for translation students to acquire AI literacy to maintain their competitive edge (Lee, 2023).
Description
AI literacy refers to the ability to critically analyze artificial intelligence technology, effectively communicate and collaborate with AI systems, and correctly utilize AI tools (Laupichler et al., 2022). This study aims to construct a comprehensive framework for enhancing AI literacy among translation students to provide practical guidance for educators.
Method
This study involves one-on-one interviews, on how to develop AI literacy among translation students, with 10 professional translators (each with over 5 years of industry experience) and 9 university translation instructors (each with more than 5 years of teaching experience). The resulting competency framework will be refined using the Delphi method, seeking expert consensus through iterative feedback.
Evidence
Our initial framework, shaped by the expert interviews, is structured around three key pillars: attitudes, content, and methodology. Attitudinally, it cultivates an adaptive mindset towards AI, focusing on collaboration and critical oversight. The content pillar covers GenAI knowledge, tool proficiency, GenAI translation assistance, ethical considerations, and ongoing learning. Methodologically, it incorporates courses, workshops, and practical applications, merging theory with practice. Following the initial interviews, the framework will be refined using the Delphi method by March, ensuring it reflects both academic and industry consensus.
Contribution
This study presents a validated framework for AI literacy development that connects translator education with professional practice, delivering targeted guidelines for pedagogical implementation.
Engagement
The presentation will open with an intriguing question, support key points with visuals, and engage the audience through interactive activities.

Biography

Ms Wenjie Hu is a PhD student at the Faculty of Education, The University of Hong Kong. Her research interests lie in the conceptualization and enhancement of AI literacy, as well as the application of AI technologies in higher education, particularly their use in developing holistic competencies.
Dr Peter Matheis
Lead Learning Designer
Navitas

11:30am - 11:55am Unveiling the impact: Generative AI's role in assessing academic misconduct with rubric design

Final abstract

Focus: This paper presents a comparative analysis of holistic and analytic rubrics, examining the efficacy of AI detection tools in identifying academic misconduct in assessments generated with AI. Scott (2023) explores AI's dual impact on higher education, potentially reshaping educational outcomes or exacerbating integrity breaches, particularly online. By investigating generative AI's practical implications for academic integrity, the research aims to shed light on detection mechanisms within nuanced rubric frameworks.
Background/Context: Amid rising academic misconduct challenges, institutions seek innovative assessment solutions. The study contextualises within academic integrity literature, emphasising the importance of comprehensive rubrics and generative AI in detecting and preventing dishonest behaviour.
Description: The research adopts a systematic approach. It begins by applying holistic and analytic rubrics from diverse academic disciplines. Then, generative AI tools are used to evaluate student submissions, providing a methodical exploration of the AI's ability to discern academic misconduct within the context of varying rubric criteria. This method aligns with Anderson et al. (2023), advocating for varied assessment formats to bolster AI detection capabilities amidst evolving educational technologies.
Method: Using a rigorous experimental design, the study collects evaluative data through generative AI-assisted assessments, analysing AI tools' accuracy, sensitivity, and specificity across different rubric criteria.
Evidence: Findings reveal insights into AI tools' performance in discerning varied rubric designs, highlighting challenges and nuances in detecting academic misconduct. This contributes to the development of more sophisticated evaluation tools, echoing Ifelebuegu (2023) and Barrett (2007) on the complexity of assessments AI may struggle to replicate.
Contribution: This research advances scholarship by offering empirical evidence on AI detection challenges with holistic and analytic rubrics, with practical implications for educational practice. It informs the development of strategies to uphold academic integrity amidst evolving AI technologies.
Engagement: Strategies include interactive activities showcasing generative AI tools and real-time assessment simulations, fostering dynamic learning environments.

Biography

Dr Peter Matheis has over 10 years of experience working in the Higher Education sector. He has held several educational and academic roles at a number of Australian universities and colleges, and currently works as the Lead Learning Designer at Navitas. Peter has participated in a variety of large-scale innovative educational projects involving the design and application of learning management systems, learning design, data analytics, technology-enhanced learning and the implementation of diverse educational pedagogies. He has proficient knowledge of contemporary developments in learning pedagogies, as well as the design of blended, flipped, and hybrid teaching and learning approaches. Peter has a keen interest in investigating new developments in pedagogical strategies, the design and delivery of new technologies, and new approaches to digital learning environments.
Assoc Prof Priya Khanna
University of New South Wales

12:00pm - 12:25pm Applying systems-thinking perspectives in large-scale curricular reforms

Final abstract

Focus: Presentation of a systems-thinking based curricular design toolkit and its implications for program reforms.
Background: Several higher education programs are undergoing curricular renewal using non-traditional approaches to teaching, learning and assessment. Curriculum renewal is a complex process: without overarching principles, the design can be fragmented, implementation difficult, and evaluation non-actionable. Using an example from a large-scale curriculum renewal, we describe the design and evaluation of a systems-thinking curricular toolkit.
Description: Underpinned by a soft systems thinking approach, we developed a conceptual toolkit (3P-6C) (Khanna et al., 2021). The toolkit captures students' learning journey through logical linkages across three subsystems: the personal, the program, and the practice. It aligns with the competency framework of the accreditation body (Australian Medical Council).
Method: Drawing on the literature on systems thinking and curricular design in medical education, the toolkit was validated with feedback from educators, students, and education designers. The refined toolkit was then used to design a micro-curriculum for interprofessional placements in a medical program (Taoube et al., 2023).
Evidence: Focus groups with students and educators showed that the systems-thinking based toolkit allowed us to unpack the conditions (constraints and enablers) and the conditioning (acceptance or rejection of new 'non-traditional' reforms) that shape how agents (students) exercise their learning choices. The systems thinking approach provided a deeper understanding of how various parts of a complex curriculum interact and evolve (Khanna et al., 2023).
Contribution: A systems thinking framework for designing and evaluating curricular reforms can provide educators and students with a deeper understanding of how the various parts of learning systems and subsystems interact and integrate. Furthermore, such an overarching toolkit can uncover tensions that could be addressed in future enhancements.
Engagement: The audience will be invited to discuss the utility of the 3P-6C toolkit for the redesign of their own curricular components.

Biography

Priya Khanna is an academic, educator, and researcher with a background in science and medical education, and work experience in India and Australia. Her research focuses on curriculum development and complex educational interventions using systems thinking and critical realist perspectives in health professional education. She works as a Nexus Fellow at the School of Clinical Medicine, University of New South Wales.
Dr Daniela Castro de Jong
University of New South Wales

Co-presenter

Biography

Daniela is an occupational therapist from Chile who completed her doctoral studies in Sweden before moving to Australia to work as an academic. She works as a Nexus Fellow (School of Health Sciences) in the Faculty of Medicine and Health at UNSW Sydney. In this role, Daniela contributes to academic mentoring and teaching and learning innovation through involvement in projects at the School, Faculty and University-wide levels.

Chair

Jay Cohen
Academic Director - Online Transition
The University of Adelaide / HERDSA Executive
