From Freakout to Trusted Feedback Companion: One Academic’s Cautious Journey Toward AI Adoption

Written by Desi Richter, Ph.D.

When ChatGPT was released in 2022, I barely blinked. I was busy advising doctoral students, helping them shape literature reviews into the type of scholarly discourse that would help them conduct their dissertation research. I was also working at an educational nonprofit in my adopted city, New Orleans, designing programming that was meant to meet the needs of local K-12 teachers. 

Of course, I saw the articles that both heralded and hurled insults at the technology.

As far as I was concerned, AI wasn’t for me, but I didn’t really give it much thought.

And then, I received my first student paper that had been flagged for being written under the influence . . . of AI, that is. I was more annoyed than anything. I contacted my university supervisor and was given instructions to give the student an incomplete until we could figure out the best course of action. I did, and then, in an effort to distinguish between the student’s authentic writing voice and ChatGPT-assisted writing, I spent the better part of an hour comparing discussion posts to that final paper. Finally, I made the call that I would grade the final paper, as it did bear the trademark “voice” of the student.

At this point, AI felt like yet another thing that took up my precious planning and teaching time. Every now and again, I would hear from an early adopter about a “tool” that would help students improve their writing. Still, I didn’t consider using it. Large language model? Machine learning? These terms were as strange to me as if I were encountering a new language. Turns out, I was. So, I let the matter drop until I discussed it with an early adopter of AI. When they talked about using AI in their courses, I was dumbfounded. At first, it felt like learning AI was just one more obligation on top of my teaching and consulting work. To me, it seemed like an internet sensation rather than a viable tool to help my students write better. Honestly, with all the bad press, it seemed like it would likely do the opposite.

As a research-writing faculty member, there is only one thing that I care about when it comes to AI usage: Can I ethically use this tool to help my students actually learn? I am sure that I am not alone in my desire to better understand both the capabilities of AI and the ethical conundrum of using it in teaching.

So, when it became apparent that AI wasn’t going anywhere, I decided to start learning. I went into research mode, beginning with this question: If I were to embrace AI, what concerns would need to be addressed? While I never had a full-on AI freakout, I did have questions about the ethics, the learning curve, and the benefit to both me and my students.

Is AI Use Ethical?

Even with my head buried, I couldn’t miss the discussion about students using AI to cheat. Yet, focusing solely on this concern seemed shortsighted. As a teacher, I have seen educators attempt to ban rather than utilize technology in their classrooms.

The tech always wins. Remember those cell phone bags from back in the day? Yeah, me neither.

In considering the ethics of AI, the myopic focus on cheating only keeps educators from asking other important ethical questions.

When I taught for the Greater New Orleans Writing Institute, we used to play a game called pedagogy potluck. We would write out our best writing ideas on paper plates and ask everyone to take a “bite.” Inherent in that game was the question, “How can I use this in my classroom?” Teachers are, by nature, a scrappy lot. And ChatGPT is a powerhouse of a tool. I started to wonder, “What if, rather than hand-waving around this and continuing to do things as I always have (and spending a lot of time doing it), I could harness some of this potential? Is it necessarily a slippery slope from tool utilization to the full-blown dumbing down of writing pedagogy?”

Is the Learning Curve Steep?

Even if AI could be allowed into my writing pedagogy practice, did I care to take the time to learn it? I have a lot of things I enjoy doing outside of my consultation work: parenting, memoir writing, and songwriting. We can talk about augmenting practice all day long, but whenever we implement something new as educators, we exist for a time in some transitional form.

Long before we become adept users of any new tech, we are often clunky wielders of a turbo-charged tool. The irony is that as educators, we preach all day long about “productive struggle,” scaffolding, and having a “growth mindset.” But we often hate to sit in the discomfort of being a novice again. I wondered about the learning curve not just for myself but for the new teachers I have trained. How feasible is it for an instructor, already trying to get classroom structures and routines in place while navigating high-stakes testing, trauma-informed approaches, and issues of equity, to learn one more dang thing? And even if I learned this new technology, how would I embed it into my practice in ways that are viable?

Can AI Really Benefit Both My Students and Me?

In order to embrace AI for educational use, I needed to trust that at least some version of it could benefit both my students and me. This idea hearkens back to the student-teacher relationship. At the end of the day, I am a writing pedagogue. I hail from the process-based writing camp and have helped numerous students overcome the roadblocks that keep them from getting out of their heads and onto the page. Name a writing issue, and I have seen it. I know that students need feedback that is differentiated to their needs. And I also know that those students need a lot of care.

Good teaching is always built upon relationships. I have seen students who I thought would never progress make strides in learning the moves of academic writing via a combination of teaching, feedback, and good old-fashioned care. If I were to embrace any form of AI, it would need to serve both my students and me, not the other way around.

Like for real. 

For real, for real. 

I am still answering the above questions. Like most questions involving best practice, the answers are not binary, and they are certainly not one-and-done. They involve engaging in critical conversation. They involve the willingness to accept what is: AI educational use is not the future. It is the present.

So, in the present, with regard to the question of ethics, the answer is yes, AI can be implemented in ethical ways. And yes, AI can also be leveraged to create mediocre work that does not come from students’ minds; students have always found workarounds in order to cheat. The upshot of this by-now-unsurprising revelation is this: The time has come for those with pedagogical knowledge and subject matter expertise to step in and guide the narrative about AI in education. Rather than leaving that framing to the next fear-inducing, clickbait article about “the robots taking over,” I am interested in learning from those who, like me, are testing the waters of ethical AI use, the learning curve, and efficacy.

With regard to the learning curve, I lean on my research background. When designing a study or helping students write, knowing what lies outside the scope of the research and what lies in bounds drives learning and gives the writing a vector. The same is true of the learning curve for AI. Many writers are talking about the need to scale AI within organizations, and I appreciate that, but sometimes these articles come across as a bit tone-deaf.

The educator in me asks, “Where is the modeling? Where is the gradual release? Why are we sending people into an ocean of knowledge with nothing but a life raft?”

Much can be learned at a “low lift” by navigating the tributaries. While programs in higher education scramble to put policies in place, much rich data about use cases can be examined. It is in these tributaries that evidence of learning is demonstrated. 

Approaching AI through these tributaries of use honors how humans actually embrace change. I’m pretty sure “the robots are not going to take over” if I use a single tool to help my students get feedback on the coherence of their articles. For me, treating AI as a trusty feedback companion feels comfortable. Can AI do more than that? Can it tutor students? Sure. But I can adopt this technology in ways that feel aligned with the pace of change both my students and I need in order to learn. I still begin with, “What does this student need?” Then, I allow AI to assist by generating feedback and offering suggestions.

AI applications evolve so quickly that I almost feel the field deserves its own unit of measurement. AI has been out and about since 2022, but that feels like about 175 AI years. Yet, I fear that the voices of those doing the important work of researching and documenting its capabilities are getting lost in the mix. How many times have we all seen this article in The Atlantic?

Yet, who has read this one, about a professor who is on the ground actually testing AI and offering his honest, well-reasoned opinion about it? In this regard, AI implementation has fallen prey to the “only your mom reads your research” phenomenon. When I was getting my doctorate, I remember very much wanting to write a dissertation that more than 5 people would read. I succeeded by landing in the innovative land of arts-based research. After I wrote and performed my dissertation in front of about 70 people, we engaged in critical conversation around its themes. Similarly, we need platforms where critical conversations can take place both within and outside the university proper.

I believe that there is a space for educators like me who are mid-adopters — that is, we stuck our toe in the water to check the temperature — to weigh in.

We may be early(ish) adopters, but we didn’t necessarily dive in headfirst. And that is okay. This is why I am happy to be working as a research director at Moxie. I get to test tools. I get to ask questions about their efficacy and ethics to my heart’s content, and I get to partner with faculty who are, like me, cautiously moving from freakout, to questioning, to the optimistic testing of these tools.
