Dr Catherine Gunn
University of Auckland
Automated text analysis tools are an innovation at the tipping point in the use of information technology in teaching and learning. Research and development that was once the domain of specialists has begun to spread as demand for these tools extends to common teaching and learning activities, e.g. assessment of students’ written work and discussion contributions. As class sizes grow and diversify, higher education takes place online as much as, and sometimes more than, in physical spaces, and instructors are challenged to provide timely feedback on students’ written work. Yet feedback closes a critical loop between teachers and learners in Laurillard’s (1993) Conversational Framework. Automated feedback from multiple-choice quizzes is one way to meet the challenge of scale, though it does not reveal depth of understanding or clarity of expression the way students’ free-text written answers do. Before the development of easy-to-use text analysis tools, instructors had few time-efficient options for using this type of assessment with large groups.
One application of text analysis tools supports rapid evaluation of online discussion posts so feedback can be tailored to learner needs. The challenge of managing online discussions with large groups of students has never been fully resolved. The development of MOOCs (massive open online courses) raised this challenge to a new level, as thousands rather than hundreds of students enrol in courses where connectivist and constructivist learning models use discussion as a key method of learning. This presentation will describe the use of Quantext, a text analysis tool designed for teachers, to analyse discussions from a MOOC with an initial enrolment of around 19,000 (Donald et al. 2017). The presenter will explain how instructors can gain insights into student learning through a rapid analysis process that could support timely responses and targeted feedback in future iterations of the course.
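To make the kind of rapid analysis described above concrete, the following is a minimal illustrative sketch, not Quantext’s actual algorithm: a simple keyword-frequency pass over discussion posts, of the sort such tools automate at scale. The sample posts and stopword list are invented for illustration.

```python
# Hypothetical illustration only: the sample posts and stopword list are
# invented, and this is not a description of Quantext's actual method.
from collections import Counter
import re

# A tiny stopword list; real tools use much larger curated lists.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "this", "i", "we", "for", "on", "are", "be"}

def keyword_counts(posts, top_n=5):
    """Return the most frequent non-stopword terms across discussion posts."""
    words = []
    for post in posts:
        # Lowercase, split into word tokens, and drop stopwords.
        words.extend(w for w in re.findall(r"[a-z']+", post.lower())
                     if w not in STOPWORDS)
    return Counter(words).most_common(top_n)

posts = [
    "Feedback helps learners understand assessment criteria.",
    "Timely feedback on written work supports deeper learning.",
]
print(keyword_counts(posts, top_n=3))
```

Even a summary this simple can surface recurring themes across thousands of posts, giving an instructor a starting point for targeted feedback.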