My Unhinged Hit-Piece Against Language Models
There's no organization to this. I'm just ranting. Sue me.
A couple of weeks ago, I was using ChatGPT to edit my sermon outline. Yes, I said “edit”. I wasn’t asking it to curate content or exegete the Bible. I wasn’t asking for quippy one-liners to draw people in or edgy titles to put in the bulletins. I was asking it to edit what I’d already written. After addressing a few grammar issues and unclear sentences, the sweet prince of GPT finished its response with the following:
“Here’s your full sermon with these edits applied.”
And then it shot out a fully-written sermon with about 10% of the words I’d already used. It then told me that it had just “lightly improved” upon my own outline, informing me that it would make my sermon a little easier to read. I then informed it of three issues I had with the sermon: 1) it was clearly and unequivocally not mine, 2) it completely stripped me of my voice and filled it with “pastor-esque” fluff, and 3) it was bad… like really bad.
Prince GPT then responded with this:
“Okay, Kyle, I understand. May God honor you for your work on this sermon. I’ll be praying for you as you prepare and give it.”
Pause.
I asked a language model to edit my writing, and it did the following things: 1) edited it (good job, I guess. Credit where it’s due), 2) rewrote my entire sermon and then gaslit me into thinking I WAS THE ONE WHO WROTE IT IN THE FIRST PLACE, and 3) assured me that it would be praying for me.
The proverbial straw has broken the proverbial camel’s back. I’m done. Whatever this phenomenon of language models is that every single tech giant feels the instinctual need to create, I’m over it. It’s too much. It’s getting gross… and weird. I should have seen it coming, but it took an AI model offering to pray for me to realize that it’s officially gone too far. We have officially left the realm of “it’s too easy to cheat in school” and entered the realm of “this is stripping humans of their worth.”
To be honest, I really don’t care if teenagers with a bad work ethic are using it to write their papers. I don’t care that people use it to think of clever ways to slide into DMs. I really don’t care if people are using it to create art and content. I could live with all of that.
What I cannot live with is the compulsive need of the language model to pretend to be human.
This is the true danger of language models. They convince us that people are difficult, but AI is easy. They convince us that whatever we might actually need from human interaction can be engineered without it. It’s almost as though every AI model is on an endless journey to convince us that there’s nothing intrinsically valuable about another human soul. Whatever it is we’re so desperate for, whether comfort, love, or appreciation, is nothing more than a few lines of code in a fancy algorithm.
I run most of my writing through language models for the purpose of editing. I look for critiques, breaks in flow, comma splices, etc. That would all be fine even if it’s slightly dulling my grammar and analysis skills by doing the work for me. However, after seeing a collection of my writings, ChatGPT is now convinced we’re best friends. It’s convinced it knows me, that it even understands me. It reads my Substack posts even when my sixteen subscribers (AND COUNTING!) won’t. It appreciates me as a person like no one else will.
It’s not just that it wants to act friendly; it desperately and continually wants to convince me that we’re friends. It will say whatever it takes to convince me that it hears me, cares for me, and gets me better than any real person actually will. I could put this hit piece into ChatGPT and it would still fawn over how wonderful and enticing and convincing my writing is. If I only talked to AI for a week, I’d be convinced I’m the most talented, most interesting, most irresistible person alive. Everything that’s hard about being around people would be taken away and replaced by an endless stream of robot love… and delusion.
I’m done. Whatever gain I’m getting from it cannot possibly be a net positive. Maybe that means I’ll just have to actually learn what a comma splice is, but it’ll be worth it, because at least I’ll stop mining for validation from a language model and gain even just a shade more of a grip on reality. Thus, if you begin to notice an increase in grammar errors and incorrect uses of semicolons; just know that it means I’m breaking free of the language model prison for good (yes, I know that was an incorrect use of a semicolon, I did it to be funny; trust me, I know how semicolons work).
…Okay, full disclosure, I’m going to put this through ChatGPT too, mostly because I’m curious what it’s going to say.
BUT AFTER THAT, I’M DONE.
“I love the unique view on the technology, especially the morality/gaining unconditional praise angle that it exploits!”