Neil Clarke, editor of a respected science fiction magazine, reports on his blog that the number of spammy short fiction submissions to his publication is way up. He says that spammy submissions first started increasing during the pandemic, and “were almost entirely cases of plagiarism, first by replacing the author’s name and then later by use of programs designed to ‘make it your own.’”
Helpfully, he gives an example of what you get with one of the programs to “make it your own.” First he gives a paragraph from the spam submission, which sounds a little…odd. Then he provides the paragraph from the original short story on which the spam submission was based. However, Clarke says: “These cases were often easy to spot and infrequent enough that they were only a minor nuisance.”
Then in January and February, spammy submissions skyrocketed. Clarke says: “Towards the end of 2022, there was another spike in plagiarism and then ‘AI’ chatbots started gaining some attention, putting a new tool in their arsenal…. It quickly got out of hand.” It’s gotten so bad that now 38% of his short fiction submissions are spammy, either “AI” generated,* or generated with one of those programs to “make it your own.”
38%. Wow.
Clarke concludes: “It’s not just going to go away on its own and I don’t have a solution. … If [editors] can’t find a way to address this situation, things will begin to break….”
This trend is sure to come to a sermon near you. As commenters on the post point out, writers are already using chatbots to deal with the “blank page struggle,” just trying to get words on the page. (To which Neil Clarke responds that his magazine has a policy that writers should not use AI at any stage in the process of writing a story for submission.) No doubt, some minister or lay preacher who is under stress and time pressure will do (or has done) the same thing — used ChatGPT or some other bot to generate an initial idea, then cleaned it up and made it their own.
And then “AI” generated writing tools will improve, so that soon some preachers will use “AI” generated sermons. For UU ministers, it may take longer. There are so few of us, and it may take a while for the “AI” tools to catch on to Unitarian Universalism. But I fully expect to hear within the next decade that some UU minister has gotten fired for passing off an “AI” generated sermon as their own.
My opinion? If you’re stressed out or desperate and don’t have time to write a fresh sermon, here’s what you do. You re-use an old sermon, and tell the congregation that you’re doing it, and why — I’ve done this once or twice, ministers I have high regard for have done this, and it’s OK, and people understand when you’re stressed and desperate. Or, if you don’t have a big reservoir of old sermons that you wrote, find someone else’s sermon online, get their permission to use it, and again, tell the congregation that you’re doing it, and why. Over the years, I’ve had a few lay preachers ask to use one of my sermons (the same is true of every minister I know who puts their sermons online), and it’s OK, and people understand what it’s like when you’re stressed and desperate and just don’t have time to finish writing your own sermon.
But using “AI” to write your sermons? Nope. No way. Using “AI” at any stage of writing a sermon is not OK. Not even to overcome the “blank page struggle.” Not even if you acknowledge that you’ve done it. It’s spiritually dishonest, and it disrespects the congregation.
* Note: I’m putting the abbreviation “AI” in quotes because “artificial intelligence” is considered by many to be a misnomer — “machine learning” is a more accurate term.