Statement on Artificial Intelligence in Writing Flag Classes

The Faculty Writing Committee has been reviewing ChatGPT in light of other writing-related AI tools, such as grammar checkers, plagiarism detection services, and automated citation systems. Below, we offer a summary of our guidance, followed by a fuller discussion of these key points. Our suggestions for the use of AI, including ChatGPT, in the writing classroom are grounded in essential writing pedagogy.

  • Decisions about whether and how AI may be incorporated into a class must be made on a careful, case-by-case basis, and should be revisited as the technology continues to evolve.
  • At present, we do not see a need to fundamentally alter courses or pedagogy in reaction to AI, and we will monitor further developments around these tools.
  • Students and faculty can and should critically engage the limitations of AI, including evidence of bias in AI and a lack of accountability to standards of scholarly and academic integrity.
  • In the end, human input and human feedback are essential parts of the writing process that AI cannot and should not replace. AI may assist students and instructors as they engage in the writing process, but it should not substitute for their contributions.

We strongly encourage instructors to read the extended commentary below as time permits.

Please note that the Center for Teaching and Learning has created a basic informational page for faculty on ChatGPT, including some specific approaches to the tool that may be incorporated into classrooms.

The University Writing Center has created a handout with guidance for students on ChatGPT and other Large Language Models, including student obligations regarding these tools.

1. The Faculty Writing Committee concurs with the Association for Writing Across the Curriculum that there is room for AI tools in the writing and educational experience, but that decisions about these tools must be made locally:

Some learning communities might reject these technologies outright, including them, for example, in campus policies about plagiarism. Other communities might find productive pedagogical roles for this technology; indeed, some writing teachers are having students explore and experiment, in a critical fashion, with AI writing: its potential for aspects of the writing process, its limitations, its ethics, its costs. Furthermore, in some professional fields, AI tools have been available for years, and professors in those fields have incorporated attention to them in teaching. (AWAC Statement on Artificial Intelligence Writing Tools in Writing Across the Curriculum Settings, Jan. 30, 2023)

2. While it is important for individual faculty to review AI tools critically before deciding whether and how to adopt them, it is premature to expect definitive answers on the impact of tools like ChatGPT. As instructors at a Research I institution, UT-Austin faculty have a special interest in promoting thorough, scholarly investigation of AI’s functionality, limitations, and ethical implications. Writing scholars have already begun delineating the inquiry process. We look forward to the results of this research and hope to see appropriate resources devoted to it.

3. As instructors make decisions about AI use, they should bear in mind that AI shares the biases of its creators. In the case of AI built on Large Language Models (like ChatGPT), this bias is a well-known, serious problem that shows no signs of abating. Bias is likewise evident in the structures and applied uses of grammar checkers, assessment systems, and plagiarism detection tools. Best practices preclude the use of systems that perpetuate linguistic and literacy bias—though, again, critical analysis of these tools and their biases can be a productive learning activity.

4. The Writing Flag’s drafting, feedback, and revision requirement offers a useful map for navigating questions about ChatGPT and other AI. Writing is an iterative process requiring drafting, feedback, and revision. Human feedback, specifically, is required for the development of writing skills, because writing is a process of communication between human subjects, with social and emotional components. AI might be used to augment some of these processes in more formulaic writing, serving as something like a template for a human writer (for example, generating a draft of a letter that indicates the type of content that might appear but requires human discernment to evaluate those choices, add specific ideas and detail, and confirm the letter’s accuracy), but it should never replace them. Because good writing requires human input, writing that lacks such input will usually fall short of college-level standards for success. Centering a required process of drafting, feedback, and revision, as all Writing Flag classes should do, protects against the possibility that AI-generated text will be submitted as a student’s own work. In addition, emphasizing learning objectives focused on developing skills and practices as writers can reduce the incentive to submit AI-generated work. The incentives to plagiarize and cheat (with or without AI) increase when students perceive that performance on a final submission is the only thing rewarded in grading.

5. Instructors should avoid overreacting to fears about ChatGPT. ChatGPT can convincingly mimic student writing: its output is often underdeveloped, lacks strong internal cohesion, and deploys vocabulary without nuance. It also has trouble contextualizing sources. While these are qualities of student writing that we often lament, they are developmentally appropriate for novice writers; indeed, such writing is a necessary stage through which all learners progress. If we penalize students for “sounding like ChatGPT,” or accuse students of academic dishonesty merely because their prose sounds awkward and artificial, we will be punishing many students for perfectly normal writing activity.

6. Machines cannot take responsibility for what they write, but academic writers are obliged to do so. Scholarly journals are already setting standards for the use of AI in published work—for example, that ChatGPT cannot be listed as an “author,” and that any use of the tool in the writing process must be fully disclosed. A similar approach should be adopted in the classroom: any use of AI by students or instructors should always be fully disclosed, including which tool was used and at which stages of the writing process.

7. While critical interrogation of AI can be a productive learning activity, instructors should not feel obligated to restructure their courses or assignments around testing these products. Disciplinary writing classes already tend to be very tightly scheduled, with little room to add new activities. Creating new assignments and activities that incorporate ChatGPT taxes instructors’ already limited time and energy. Furthermore, ChatGPT (like many similar products) requires users to provide personal information before they receive access to it. Forcing students to divulge their information to a third party may not be safe or ethical.

8. Finally, we have noted that AI typically insinuates itself into writing courses when instructors have too many students. Instructors may turn to aggressively marketed products like “plagiarism detection services” when they lack the time or energy to read students’ writing carefully. Students resort to chatbots and paper mills when they suspect their work won’t be carefully read by overburdened instructors. Programs should ensure that writing-intensive classes are appropriately staffed to allow the individualized feedback and revision processes that promote learning and discourage the misuse of AI.

For Writing Flag instructors, and faculty seeking to add a Writing Flag to their courses, the Faculty Writing Committee offers the following AI-specific guidance for meeting the Flag criteria:

  • No work assessed by any form of AI should be included in the “percent of grade” criterion for the Writing Flag. To qualify for the Flag, a class must offer students grades and revision-oriented feedback from a human reader (the instructor or a graduate TA). Please see our Writing Flag Criteria and Interpretation.
  • Automated plagiarism detection tools should not be used without a full understanding of their limitations, and of students’ rights in regard to the software. Please see the Faculty Writing Committee’s Statement on Plagiarism Detection Software.
  • To ensure that all Writing Flag classes can provide the individualized, revision-oriented human feedback required, the Faculty Writing Committee recommends that departments staff Writing Flag classes at no more than a 25:1 student-instructor ratio. Please see our class size statement.

Approved by the Faculty Writing Committee, March 2023