Legal: The legal challenges of AI
Queensland Teachers' Journal, Vol 129 No 1, 16 February, page no. 23
AI’s leap from the science fiction genre to an uncomfortably confronting reality at work, home and everywhere in between is still difficult to comprehend. Teachers will continue to be at the forefront of determining how to confront the challenges it poses.
So where is Australia’s legal response up to in that race? Internationally, Australia is already behind the front runners. In 2023 the European Union and the USA, in different ways, introduced the first stage of requirements for testing standards to be met by AI providers before the release of new products, as well as requirements to develop watermarking to improve the identification of and accountability for AI-produced materials.
Australia’s comparable framework is still a work in progress, so we have to hope that we can learn from the experiences of the first wave.
The Commonwealth Government’s response to the 2023 national level discussion paper Safe and Responsible AI in Australia was released in December, without a lot of fanfare; at almost the same time, Education Ministers released the Australian Framework for Generative Artificial Intelligence in Schools.
The framework attempts to address the role of AI in relation to education outcomes, ethical practices, and equity and inclusion. It highlights the challenges of providing security and safety, along with fairness, transparency and accountability in a field that is moving far too quickly for regulatory systems to keep pace with.
A couple of elements of the framework stand out as identifying (or potentially generating) legal issues for teachers and principals. The framework centres accountability on the human actors responsible for the use of AI tools. The nature of generative AI, however, is that while teachers are at the centre of the classroom when using AI tools, it is unclear how they will be able to control the technology. It will be teachers who have to identify and justify how and why the technology is producing results for individual students based on inputs from the student, the teacher, and the AI’s source material.
In 2023, a trial commenced in some Queensland schools of the Cerego platform, which relies on the Queensland Curriculum as its source material rather than being an open generative system like ChatGPT. This may provide a good test of how teachers can use the platform to improve outcomes for individual students without creating a whole new set of record keeping responsibilities.
The framework also places a responsibility on teachers to understand AI tools before they are used. Any systems introduced through the Department of Education will inevitably come with a raft of training materials. But teachers and principals will come under increasing pressure – from parents, students and from themselves in their quest to improve teaching outcomes – to use tools that have not been provided by the department.
Despite those pressures, the risks to educators of using tools that have not been approved are significant, and feed directly into the legal risks identified in other central elements of the framework concerning security, privacy, and contestability.
- What sources of information does the AI tool use – do they generate copyright infringement risks for the teacher, for the school, or for the department?
- Can you clearly understand how the tool is processing the information provided to it, so that you can justify the outputs it is generating and the actions you take relying on those outputs?
- Do you know how the tool will use the information you and students input to it, and where it will store and potentially share that information?
- Can personal information be gathered through the tool in ways that can be aggregated to create a profile that can be shared or misused?
- How can you be sure students cannot “game the system”?
The framework evidently assumes that extensive monitoring, at a student, cohort, and school level, will be central to the safe and effective use of AI. Two important risk issues for teachers arise from this. Firstly, the risks to teachers from using AI tools in the classroom without clear authorisation are significant because so much is unresolved.
Teachers should resist the pressures to use AI tools unless they have explicit authority to do so. Secondly, principals will be expected to take responsibility for dealing with all the challenges that come with reliance on AI tools, with the real risk that they will not be provided with the resources needed to address a field that is moving so fast.
While the potential benefits of AI in education to improve outcomes for students and teachers are immense, it is a field that will require careful management to minimise the risks to all involved.