Here are some general ideas about ways to use AI text generators as writing assistants. Helping your students learn how to use AI to improve their writing and learning processes -- without, obviously, automating the writing itself -- could make giving feedback more efficient and more effective. Learning how to use these tools for your own research and writing could make you a more efficient and effective writer as well.
With each of the following examples, indeed as a general principle, don't assume the machine is right. You may find that thinking about the ways it isn't quite right will give you insight into what you are trying to say, how best to say it, or how what you are saying might be misunderstood by a given audience. If you ask a vague question, you will get a superficial answer at best. You may also get something that sounds perfectly convincing but is inaccurate, misleading, or even flatly wrong.
Prompt-crafting is a rhetorical art form akin to technical and professional writing. Your prompts need to be precise, specific, and contextualized. AIs are very literal. They can't read between the lines, although if you ask for irony they can oblige.
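To make that concrete, here is a minimal sketch of a vague prompt versus a contextualized one, assuming the OpenAI Python SDK. The model name and the sample prompts are placeholders of my own, not anything the tool dictates.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A deliberately vague prompt -- expect a superficial answer at best.
    vague = "Help me with my essay."

    # The same request made precise, specific, and contextualized.
    contextualized = (
        "You are an editor at a campus newspaper. I am drafting a 600-word "
        "op-ed for first-year students arguing that the library should stay "
        "open later during finals. List the objections such readers are "
        "likely to raise, in the order they are likely to raise them."
    )

    for prompt in (vague, contextualized):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)
        print("-" * 40)

The point of running both is to see the difference for yourself: one produces boilerplate, the other produces something you can argue with.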
Writer's block is real. Blank screens are intimidating. Sometimes just writing down a few ideas quickly -- a list of questions, key words, potential issues, to say nothing of an outline -- requires overcoming inertia. Googling to generate ideas has been very useful and will continue to be so, but now we also have tools that can summarize, organize, and present ideas in a way that saves us some initial time reading and sorting raw information. AI can generate anything from a list of key words to an outline to an entire draft formatted for a specific genre, audience, and context. While people who write for a living in many business contexts might be able to work from an AI-generated draft, we teachers need to decide, perhaps on a class-by-class basis, how complete a draft we want our students to have generated for them. One might begin by having students critique an AI draft and then develop a new draft part by part, generating and organizing ideas and then writing a draft from that material. I think the learning outcome of knowing how to craft prompts might be nearly as important as the pre-AI learning outcomes for a writing assignment.
These suggestions assume the writer knows the subject matter they want to write about, but that's about all they know at this point.
These prompts amount to a semi-structured warm-up to actual drafting.
Sometimes getting started with a piece of writing is hampered by an unclear sense of the genre one needs to produce. In such cases a prompt like "What are the topics typically covered in X kind of speech for Y kind of audience?" might be helpful. Or, "Please give me some examples of X kind of speech."
If you know your topic but have no idea where to start, ask an AI to help you generate ideas.
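The original example isn't reproduced here, but the following sketch shows the kind of request I mean, again assuming the OpenAI Python SDK; the topic, prompt, and model name are mine, not prescriptions.

    from openai import OpenAI

    client = OpenAI()

    topic = "food waste on college campuses"  # substitute your own topic

    prompt = (
        f"I am preparing to write about {topic} but have not started "
        "drafting. Give me ten questions the piece should answer, ten key "
        "terms I should know, and three potential counterarguments, each "
        "as a numbered list."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)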
Remember: these AIs write even nonsense with clarity and conviction.
In the not-so-distant past, writers used libraries to access reference texts, thesauri, dictionaries, rhyming dictionaries, encyclopedias, books of quotable quotations, citation indexes, summaries, and so on to enhance their word choices, check facts, find supplemental material, test their ideas against ideas others have had, and so on. When Internet search arrived, we did the same things but with digital resources at our fingertips, without having to leave our desktops -- Wikipedia as well as digital versions of the hide-bound references we used pre-Google. Now we have AIs to do these same things and others, more efficiently and more conveniently. AI is more efficient because you don't need to produce your own summaries, nor do you need to know which resource to consult. It is more convenient because it's all there in your browser. You do, of course, have to cross-check. But that's not new. Even the Encyclopaedia Britannica got things wrong sometimes. One fast way to cross-check is to ask the same question of each of the most commonly available AIs -- Bard, ChatGPT, and Claude. If you get wildly different responses, you need to investigate using other resources.
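Rather than pasting the same question into three browser tabs, you could script the comparison. Here is a minimal sketch assuming the OpenAI and Anthropic Python SDKs; the model names are placeholders, and Bard is omitted only because it lacked a comparably simple public SDK.

    from openai import OpenAI
    import anthropic

    question = "When was the Encyclopaedia Britannica first published?"

    # Ask ChatGPT.
    gpt = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Ask Claude the identical question.
    claude = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=300,
        messages=[{"role": "user", "content": question}],
    ).content[0].text

    print("ChatGPT:", gpt)
    print("Claude:", claude)
    # Wildly different answers are the signal to consult other resources.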
Once you have a topic, a perspective, and a plan or outline, developing effective sentences often requires quick historical or biographical or anecdotal lookups, short branches of thought, some specific word or phrase that will help you keep writing. Here are a few suggestions along these lines.
Once you have a draft that you think is fairly complete though not yet finished, you can use AI to help you tighten sentences, organize a meandering paragraph, and catch grammar and punctuation errors.
As the example for item 9 indicates, you might consider telling it what kind of editor you need it to be -- audience, genre, length, level of development (does it notice any missing arguments or problematic assumptions?) -- as well as the specific faults you want it to identify -- grammar, punctuation, whatever. Then tweak your request based on the feedback the AI returns. You might ask it for a bullet-point list of the issues it has identified so you can go back over your text to see what it missed. Experimentation is key here. Try different approaches, the same approach on different kinds of writing, a one-shot versus an iterative approach, and whatever else you can think of to learn how to make the best use of these tools.
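Here is one way such an editorial brief might look in code, as a sketch assuming the OpenAI Python SDK; the brief, the file name, and the model are placeholders to adapt, not a recipe.

    from openai import OpenAI

    client = OpenAI()

    draft = open("draft.txt").read()  # hypothetical file holding your draft

    # Tell it what kind of editor to be and what faults to identify,
    # and ask for a bullet-point list rather than a rewrite.
    editor_brief = (
        "Act as a developmental editor for a general-interest magazine. "
        "The piece below is a 1,000-word essay for educated "
        "non-specialists. Do NOT rewrite it. Instead, return a bulleted "
        "list of (1) grammar and punctuation errors, (2) sentences that "
        "could be tightened, and (3) any missing arguments or problematic "
        "assumptions you notice."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": editor_brief + "\n\n" + draft}],
    )
    print(response.choices[0].message.content)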
The idea here is to find a prompt or a series of prompts that would help a writer improve an already fully drafted piece without "improving" it for them: AI as tutor rather than ghost writer. Alternatively, you could, as a composition instructor, develop a prompt plan and use it to semi-automate giving your students feedback. They turn in a draft. You run it through your protocol. The AI marks up their draft. You check it over and, if all is right, send the marked-up version back to the student. You might be able to go through several iterations before grading, resulting in better grades for them and less tedious work of debatable utility for you.
Editing students' writing has never been proven to help them write the next piece better. If the machine could show a budding writer, one or two items at a time, how better to phrase something, what a given transition signals, or how to say the same thing in fewer words, then that student might start to need less feedback over time. But not all students are budding writers, and many will accept AI advice without reflection and learn nothing.
You might also consider taking examples from drafts your students have given you -- a week or at least a few days before the due date -- and, in class, asking one or more of the AIs to identify the problem or problems you noticed in each example. That way you would model for your students the AI prompting process you want them to follow before turning anything in to you.
I haven't verified this hunch yet, but I bet that if you gave Claude or ChatGPT a precise rubric and asked it to grade and provide feedback on a student paper using that rubric, it could do a great job. You might need a few iterations. But I bet it can. Would you want it to? Presumably you would review each assessment and commentary before passing it along to the student.
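A minimal sketch of the experiment, assuming the Anthropic Python SDK; the rubric and paper files are hypothetical, and you would of course anonymize anything you send.

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    rubric = open("rubric.txt").read()         # hypothetical rubric file
    paper = open("student_paper.txt").read()   # hypothetical, anonymized

    prompt = (
        "You are a writing instructor. Grade the student paper below "
        "using only the rubric provided. For each criterion, give a "
        "score, quote the passage that justifies it, and suggest one "
        "concrete revision. End with an overall grade and a two-sentence "
        "summary.\n\n"
        f"RUBRIC:\n{rubric}\n\nPAPER:\n{paper}"
    )

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text)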
This is a very partial list of questions that we, teachers and students, ought to be answering for ourselves and sharing with each other about the impact AI will have on our intellectual lives. If you are concerned about the social implications in particular, you might want to read Kate Crawford's Atlas of AI.
The more specialized the knowledge domain, the less useful the output. These generally available AI tools were not trained on highly specialized content. This is why they can't write a real academic article and can produce only a bland facsimile of a college essay. If there were an AI trained on an academic corpus -- say, all of College English and MLA and a few others -- then it might be able to say more detailed things, but it still couldn't generate new knowledge or provide insights, other than something latent in the corpus but until then unnoticed by a human being.
Think of a prompt you suspect will stump or mislead an AI generator and see how it handles the situation. Does the failure identify a limitation of the machine or does it suggest the prompt was imperfectly phrased?
I doubt, for example, that Claude would understand if I asked a question ironically, but does it understand what irony is? I was reading a chapter of a dissertation about identity performance and con-artistry when it occurred to me that an apt title for the chapter might be Fake It Until You Make It. And then I thought, a good rhetorical twist would be Fake It Until You Get Caught, because it would subvert the obvious with something more apt. Given that AI generators fill in blanks with what most probably comes next, and given the ubiquity of "fake it until you make it," I wondered how it would fill in that blank:
______
A decent gloss. Then I asked it for an ironic completion to "fake it until":
_____
Clever, machine. I can forgive it for assuming Alanis Morissette's definition of irony -- an unexpected reversal of fortune. Only a poorly trained historian of rhetoric would insist on the original definition: clearly meaning the opposite of what you say. I doubt Claude or ChatGPT or any of them could get such high-context jokes. Perhaps given enough context one could seem to infer you were being ironic, but generally computers are hyper-literal. There are people, perhaps, who would be willing to spend time crafting facsimiles of human interaction, and I'm certain there are many trolls itching for a chance to fool the machine into saying inappropriate or irrelevant things, but both of those strike me as human rather than machine failures.
In 2016, Microsoft created a chatbot they called Tay, for "Thinking About You." They gave it a Twitter identity -- "The AI with zero chill" -- and a Twitter handle, @TayandYou, and then they gave it to Twitter. People would respond to Tay's tweets, and Tay would use that information to tweet anew. Sixteen hours later, Microsoft had to shut Tay down because it was spewing racist, sexist vitriol. The humans of Twitter had trained it to be a troll after their own image. A similar thing happened to one of Meta's AIs in November 2022. The Galactica language model was built to "store, combine, and reason about scientific knowledge" (Edwards). According to Edwards, Galactica's training data included "48 million papers, texts and lecture notes, scientific websites, and encyclopedias." The goal was to facilitate and accelerate the composition of literature reviews, wiki articles, and the like. When Meta offered it to the world to beta test, some people found it promising, but others found it problematic, and some set out to vividly demonstrate its problems by feeding it prompts that led it to articulate nonsense, in some cases offensive nonsense, as though it were fact. Yann LeCun announced the off-lining of Galactica with a tweet -- "Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy?" (link) -- suggesting that malicious human interaction, rather than AI defects, led to the problems Galactica exhibited.
A more general version of the same experiment is simply to ask it questions you know the answer to. How does it do? What did it get right and what did it get wrong? You might want to rephrase a prompt and ask again to ensure the failure wasn't the prompt's fault.
People have been recognizing that these AI generators make excellent study guides, and I concur. When I've asked them to write code for me, they not only provide the code, they explain it as well. This can be a very direct way of learning something. You can also ask for a step-by-step process to achieve a learning objective (or any goal, really), and what you get back might be a great way to get started. If you try this experiment a few times, I think you will discover that the less you know, the more satisfied you will be with the advice. In other words, these AIs are generalizers. You can ask them to act as a specific kind of expert and you will get less generic results, but the more you know, the more gaps you may notice. Just because it made sense doesn't mean it got it right.
Claude can receive TXT, PDF, and CSV files (up to five at a time), read them, and perform various tasks. I uploaded an academic article and asked it to create a five-item multiple-choice test. I knew as I hit submit that my request was vague and hopelessly optimistic. The result: [output not reproduced].
Given that information, it might well be possible to upload a more lecture-like text, accompany that with a rubric and a set of important topics, and you might get a multiple choice test you could tweak, saving, perhaps, some considerable time. Or not, but worth playing around with, I think.
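Over the API, "uploading" a text amounts to pasting it into the prompt, so a sketch of that experiment might look like the following; the file name, topic list, and model are placeholders.

    import anthropic

    notes = open("lecture_notes.txt").read()  # hypothetical lecture-like text

    prompt = (
        "Below are lecture notes and a list of the topics I consider most "
        "important. Write a five-item multiple-choice test: one question "
        "per topic, four options each, exactly one correct answer, and an "
        "answer key at the end.\n\n"
        "IMPORTANT TOPICS: thesis statements; audience analysis; "
        "transitions; paragraph unity; revision strategies.\n\n"
        f"NOTES:\n{notes}"
    )

    message = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text)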
The fact that it can read CSV files suggests it can analyze data, which means that it might be able to offer an outline, or perhaps even a kind of rough draft, based on data and some specific parameters about discipline, audience, reading-comprehension level, and so on. Since uploading files to an AI gives those files to the AI, one would need to think carefully before doing something like that. But an experiment or two with safe data might be very instructive. If your prompt isn't specific enough, you will be prompted for more information, and if you go through that process a few times, you may get what you want.
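One privacy-conscious way to run such an experiment is to send only the header and a few rows of the data rather than the whole file. A sketch, assuming the Anthropic Python SDK, with a hypothetical CSV and a placeholder model name:

    import csv
    import io
    import anthropic

    # Read a (safe, shareable) CSV and send only its header and first few
    # rows, to limit what leaves your machine.
    with open("survey_results.csv", newline="") as f:  # hypothetical file
        rows = list(csv.reader(f))

    sample = io.StringIO()
    csv.writer(sample).writerows(rows[:6])  # header plus five rows

    prompt = (
        "Here is the header and a five-row sample of a survey data set "
        "from a first-year composition program. Propose an outline for a "
        "short report aimed at writing instructors, written at a "
        "general-readership comprehension level. Ask me for anything else "
        "you need.\n\n" + sample.getvalue()
    )

    message = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text)

Sending a sample instead of the full file doesn't eliminate the privacy concern, but it shrinks it considerably.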
I asked it to summarize the content it would find at a URL I gave it. The result: [output not reproduced].
The warning was amusing and the content bland and irrelevant, but "hallucination" seems a bit melodramatic. Still, ask a human being a question they can't know the answer to, and more often than not they will make up some nonsense on the spot, often with conviction. I think psychologists call that phenomenon confabulation. That the machine fails in such an all-too-human way is more impressive than it would be if it merely acknowledged its ignorance. Such reticence would be better than human.
At any rate, I copied the content from the same URL (which Claude automatically turned into a TXT file when I typed Ctrl+V) and asked it to identify the five key points. Not bad at all.
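For anyone who would rather script that request than paste into a chat window, a minimal sketch assuming the Anthropic Python SDK; the file and model names are placeholders.

    import anthropic

    page_text = open("page.txt").read()  # hypothetical pasted page content

    message = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": "Identify the five key points of the following "
                       "text, one sentence each:\n\n" + page_text,
        }],
    )
    print(message.content[0].text)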
And then, channeling Erasmus's De copia: The bottom line -- I should ask an AI to offer alternatives to that cliché -- the bottom line is that, if properly prompted, AI text generators can provide helpful feedback and potentially useful insights into what you are trying to write and with whom you are trying to communicate. They can also act as very effective editors, given the right prompts. While we could ask them to write mundane texts for us, they still need careful supervision. They are too slickly rhetorical (without being intentionally deceptive) to be given direct access to public spaces. Send an AI-generated note of condolence and you will offend the universe. Maybe in the not-too-distant future we will be inured to robotic communications. Perhaps we will get more adept at prompting human-sounding texts from our AI assistants. And the machines themselves may evolve a more personal style. Given the competition, I suspect we humans will start writing in a more distinctly personal voice, to display our humanity, eloquence over mere competence. Or maybe we will just off-load all bureaucratic discourse to the machines so we can focus on less transactional forms of writing.
How to use AI for generating ideas
How to use AI as a co-pilot while drafting
How to use AI for revising and editing
How to use AI as a writing tutor
Claude as editor
Claude as a grammar checker
ChatGPT as a grammar checker
How to use AI to grade papers
How to use AI as a facilitator of conversations about traditional rhetorical practices
Speculations about AIs' cognitive and social implications
Make a point of testing AIs' limits
Other experiments
In conclusion
He said with the tired, mocking tone of a person who has read undergraduate essays for 30 years and counting.