Poster session 4A (sub-theme 2.1) 3:30 - 3:45 PM
Tracks
Track 5
Wednesday, July 9, 2025 |
3:30 PM - 3:45 PM |
Level 1 Foyer & Cockle Bay Room, PARKROYAL Darling Harbour |
Overview
Poster session (sub-theme 2.1) - 2-minute oral presentations
Speaker
Mr Jack Cullen
Pharmacy Student
Monash University
Bringing together the HOW and WHY of GenAI in OSCE preparation through Unified Theory of Acceptance and Use of Technology
3:30 PM - 3:32 PM
Abstract
Introduction: Generative Artificial Intelligence (GenAI) use in Objective Structured Clinical Examination (OSCE) preparation has not yet been deeply explored. This study investigated how and why pharmacy students use, or choose not to use, GenAI in OSCE preparation.
Methods: A retrospective cohort study was conducted.
The study retrospectively collected OSCE scores, the seven criteria of the OSCE communication rubric for the primary care (community) station, and responses regarding student AI use from a mandatory written self-reflection; students were unaware of the research project when writing their responses. Second- to fourth-year undergraduate pharmacy students (2023-2024) at Monash University's Australian and Malaysian campuses were included.
A summative content analysis converted qualitative data from the OSCE reflections to quantitative data. Qualitative data were deductively coded to the Unified Theory of Acceptance and Use of Technology (UTAUT) model to understand reasons for use. Wilcoxon rank-sum tests and t-tests compared quantitative data from the reflections with OSCE scores, whilst Fisher exact tests compared categorical themes.
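As an illustration of the kinds of comparisons described above, the following Python sketch runs a Wilcoxon rank-sum test, an independent-samples t-test, and a Fisher exact test with SciPy. All data values and variable names (e.g. ai_scores, the 2x2 theme table) are hypothetical placeholders, not figures from the study.

```python
# Illustrative sketch only: hypothetical data, not the study's dataset.
from scipy import stats

# Hypothetical OSCE scores (%) for students who did and did not use GenAI
ai_scores = [72, 68, 81, 75, 70, 79, 66, 74]
non_ai_scores = [70, 73, 77, 69, 82, 71, 68, 76]

# Non-parametric comparison of score distributions (Wilcoxon rank-sum)
rank_stat, rank_p = stats.ranksums(ai_scores, non_ai_scores)

# Parametric comparison of means (independent-samples t-test)
t_stat, t_p = stats.ttest_ind(ai_scores, non_ai_scores)

# Fisher exact test on a hypothetical 2x2 table:
# rows = AI user / non-user, columns = theme present / absent
table = [[30, 133], [90, 744]]
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"Wilcoxon rank-sum p = {rank_p:.3f}")
print(f"t-test p = {t_p:.3f}")
print(f"Fisher exact p = {fisher_p:.3f}, OR = {odds_ratio:.2f}")
```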
Results: Of 997 student reflections, 163 reported using GenAI for OSCE preparation. Non-AI and AI users on the Australian and Malaysian campuses performed similarly overall; however, non-AI users outperformed AI users in four of seven rubric criteria. Six themes were identified: themes 1-3 focused on using AI to create OSCE simulations and receive instant feedback, while themes 4-6 described reasons for not using AI. Mapping students who used AI to the UTAUT model showed strong alignment with the belief that AI use would consequently improve their OSCE grades.
Conclusion: The study showed that using GenAI for OSCE preparation was associated with similar overall academic success in OSCEs, suggesting educators still have a role in guiding students to optimise AI use. Further research is needed into how students use AI for OSCEs.
Biography
Jack Cullen is a final-year student in the vertical double degree Bachelor of Pharmacy (Honours)/Master of Pharmacy at Monash University, and also holds Bachelor's degrees in Biomedical Science and Science. His diverse roles, including research at the Murdoch Children’s Research Institute and work as a pharmacy technician with Ramsay pharmacy, have fostered his adaptability and problem-solving skills in healthcare. Outside of pharmacy, he co-hosts Dose of Pharma, mentors prospective students at Monash’s Parkville campus, and serves as an ambassador to inspire future students. Recently awarded the Green Steps Sustainability Leadership certificate, Jack plans on integrating sustainability into his future roles. He is dedicated to using his clinical and academic background to improve personalised health outcomes, and aims to further advance patient-centred care by pursuing his interests spanning artificial intelligence, autoimmune diseases, transplant medicine and haematology/oncology.
Dr Deanne Johnston
Senior Lecturer
University of KwaZulu-Natal
Pharmacy licensure examinations for pharmacist interns: Evaluating AI Chatbots’ Performance on High-Stakes Assessments
3:32 PM - 3:34 PM
Abstract
Introduction: Artificial intelligence (AI) with natural language processing is believed to have a major impact on higher education, providing innovative tools and approaches to learning. Despite the potential uses of AI chatbots, concerns remain regarding the accuracy of the information provided and ethical considerations. Limited information exists on the use of AI chatbots in pharmacy assessments, including licensure examinations. With this in mind, the aim of this study was to compare the performance of various AI chatbots across licensure examinations.
Methods: Four sample licensure practice examinations, available on the websites of the Australian Pharmacy Council, Pharmacy Examining Board of Canada, Singapore Pharmacy Council, and South African Pharmacy Council, were evaluated. These examinations use multiple-choice questions (MCQs) which were answered by ChatGPT 4.0, MetaAI, and Perplexity. The answers from each chatbot were recorded and compared to the model answer. The results were descriptively analysed.
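To make the descriptive analysis concrete, here is a minimal Python sketch of how per-chatbot accuracy (overall and per exam) could be tallied against model answers. The exam labels, answers and counts below are hypothetical placeholders, not the study's records.

```python
# Illustrative sketch only: hypothetical answers, not the actual exam data.
from collections import defaultdict

# Each record: (exam, model_answer, chatbot_answer)
responses = {
    "ChatGPT": [("APC", "B", "B"), ("PEBC", "C", "A"), ("SPC", "D", "D")],
    "Perplexity": [("APC", "B", "B"), ("PEBC", "C", "C"), ("SPC", "D", "A")],
    "MetaAI": [("APC", "B", "A"), ("PEBC", "C", "C"), ("SPC", "D", "D")],
}

for chatbot, records in responses.items():
    per_exam = defaultdict(lambda: [0, 0])  # exam -> [correct, total]
    for exam, model, answer in records:
        per_exam[exam][0] += int(answer == model)
        per_exam[exam][1] += 1
    correct = sum(c for c, _ in per_exam.values())
    total = sum(t for _, t in per_exam.values())
    print(f"{chatbot}: {100 * correct / total:.2f}% correct overall (n={correct})")
    for exam, (c, t) in per_exam.items():
        print(f"  {exam}: {100 * c / t:.2f}%")
```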
Results: In total, 297 MCQs were answered by each chatbot. Overall, ChatGPT (86.20%; n=256) performed better than Perplexity (80.47%; n=239) and MetaAI (75.76%; n=225). A similar trend was seen when reviewing the performance of the chatbots within each exam, where ChatGPT answered a higher percentage of questions correctly (79.13–94.12%), followed by Perplexity (72.00–84.35%) and MetaAI (70.67–88.24%). For 20 questions, all three chatbots differed from the model answer provided; of these, the three models gave the same “incorrect” answer for eleven questions.
Conclusion: Overall, the three AI-powered chatbots fared well on MCQs used in pharmacy licensure examinations. Given this, it is critical that regulators take measures to restrict access to AI-powered tools during high-stakes examinations such as licensure examinations. Conversely, the role of chatbots in preparing pharmacist interns for the examinations, as well as in the setting and moderation of questions, should be explored.
Biography
Dr. Deanne Johnston is a senior lecturer in Pharmacy Practice at the University of KwaZulu-Natal. With more than a decade of experience in academia and having worked at the regulator, she is passionate about advancing pharmaceutical education in South Africa. Her hard work and dedication were recognized when she was awarded the Distinguished Teacher of the Year Award in 2019 at the 40th Academy of Pharmaceutical Society of South Africa Conference. With a wealth of experience in teaching pharmaceutics, clinical pharmacy, pharmacotherapy and pharmacy practice, Dr Johnston has supervised and mentored numerous students in achieving their postgraduate qualifications. She has served on numerous local, national and international task teams/committees.
Mr Tarik Al-Diery
PhD Candidate
University of South Australia
Unlocking outcomes of entrustable professional activities for workplace-learning: Development, entrustment, and practice-readiness in pre-registration pharmacy training – a systematic review
3:34 PM - 3:36 PM
Abstract
Introduction: Entrustable professional activities (EPAs) are readily used in workplace-based learning to support competency development in pre-registration pharmacy training. To date, no systematic review has brought together outcome data on EPA use in pre-registration pharmacy training.
Methods: English-language searches of primary literature describing outcome evaluations of EPAs in pre-registration pharmacy training were conducted in five databases (MEDLINE, Embase, CINAHL, Scopus, Emcare), from the inception of EPAs in pharmacy education until August 2024. A manual search of the included references was also conducted. Two reviewers independently screened articles for eligibility, including studies reporting on entrustment, development, and practice readiness, reached consensus on all included articles, and subsequently extracted the data. The Mixed-Methods Appraisal Tool (MMAT) Version 2018 was used to conduct the quality assessment of the studies.
Results: A total of 577 articles were screened, of which 28 met the inclusion criteria. Of these, 19 were quantitative-descriptive, 6 were mixed-methods, and 3 were qualitative studies. Three major outcomes were evident in the literature: EPAs support self-regulation and self-development in pre-registration pharmacy training, entrustment progresses throughout defined periods in training, and factors influencing entrustment decisions vary amongst pharmacy preceptors. The quality assessment showed most studies were of medium to high quality in terms of study design.
Conclusions: EPAs serve as an effective tool in pre-registration pharmacy training to help learners develop the self-evaluation and self-development skills necessary for lifelong learning. Nonetheless, there is variability in how pharmacy preceptors make entrustment decisions and in the factors they consider when doing so. EPAs are an effective mechanism for supporting competency development in pre-registration pharmacy students, enabling them to work unsupervised upon registration and to develop the self-awareness skills required for advancing practice.
Biography
Tarik completed his Bachelor of Pharmacy at the University of Auckland in 2014 and went on to become a clinical pharmacist at Middlemore Hospital in Auckland, the busiest hospital in New Zealand. In 2017 and 2018, Tarik completed his postgraduate certificate in pharmacy and postgraduate diploma in clinical pharmacy, respectively, at the University of Otago. Tarik then joined Alfred Health in Melbourne, Australia as a lung transplant and ICU pharmacist, and the undergraduate experiential education coordinator. In February 2022, Tarik completed his Master of Clinical Pharmacy under the supervision of A/Prof Kyle Wilby, where he evaluated how pharmacy residency programs support competency development in early-career pharmacists. Tarik is now a PhD candidate at the University of South Australia under the supervision of Dr Jacinta Johnson, Sally Marotti, and Prof Debra Rowett, where he is evaluating how entrustable professional activities support competency development in pre-registration pharmacy students.
Dr Jessica Pace
Lecturer
The University of Sydney
Does artificial intelligence use in higher education impact healthcare students' motivation and self-efficacy? A narrative literature review
3:36 PM - 3:38 PM
Abstract
Introduction: AI is being increasingly used in healthcare and healthcare education. Clinical applications include detecting drug-drug interactions, making dose recommendations, medication counselling, diagnosis and taking a clinical history. Meanwhile, educational applications include providing personalised feedback, summarising relevant literature, assisting with writing through spelling and grammar checks, and helping students prepare for exams by generating practice questions. This study aimed to understand the impact of AI on self-efficacy and intrinsic motivation in healthcare students.
Methods: Literature was searched across databases including PubMed, MEDLINE, Scopus and ERIC using keywords including “self-efficacy”, “artificial intelligence”, “intrinsic motivation”, “healthcare students” and “learning”.
Results: Six studies were identified as relevant to AI and self-efficacy in higher education students, with populations ranging from university students in general to nursing, pharmacy and medicine students. These studies showed that several AI tools increased self-efficacy through different means, including fostering a positive attitude, increasing students’ confidence and lowering anxiety levels. Five studies were identified as relevant to AI and intrinsic motivation in higher education students. AI helped students to understand their learning content, provided feedback and helped them engage with their peers, which built confidence and interest in learning.
Conclusion: AI increased students’ self-efficacy and intrinsic motivation for academic achievement and should be utilised in student learning.
Biography
Dr Jessica Pace is an associate lecturer in the Sydney Pharmacy School, University of Sydney, a registered pharmacist with experience in both hospital and community practice and a Pharmacy Board of Australia oral examiner and exams subject matter expert for the Australian Pharmacy Council. Her research interests are in pharmacy education, learning and assessment and health policy (using empirical bioethics to find practical solutions to morally complex problems relating to medicines access and regulation).
