Who’s Right About Writing with AI?
I’m in trouble. Why? I really like using the em dash — it functions much like a comma or colon, setting off phrases in a clean, clear way.
Now, however, the appearance of even one em dash in copy can lead readers to conclude that it was written by AI.
That’s just not fair. I was using em dashes long before the flood of AI-generated copy.
There are other giveaways that a writer may have used AI. Perhaps you’ve seen the words “vibrant,” “seamless,” or “robust” used more than ever? Or perhaps someone you’ve deemed an average writer suddenly becomes polished and prolific? Or, maybe, this is you?
Relax. I’m not judging.
But others are. If you’re in the professional world, proceed with caution. Everyone’s watching and wondering, including your boss and your peers.
Is AI-generated copy everywhere you look?
When I tell fellow communications professionals that I’m noticing these trends, they acknowledge seeing the same. In just the past year or so, colleagues have mentioned wordy, sterile copy showing up in emails, publications, news releases, and more.
I decided to test our assumptions.
Lisa Vasquez, a senior communications professional and former board president of the Dallas Chapter of Public Relations Society of America, shared a quote she read from a recent Texas news story. She had her suspicions that the long-winded sentences and super-polished tone might not have originated from the writer. [Naturally, I would never call out a fellow writer, so I’m not including the quote here.]
I ran the quote through ChatGPT, asking it to assess the likelihood that the passage was written by a human or by AI. Here is the result:
If we could do a quick test like this, then so could our bosses, our clients, and our customers.
Many of them probably are. This is the reality in today’s workplace. You could be judged or, worse, reprimanded.
But should we care?
The use of AI in writing is becoming ubiquitous, and frankly, there’s no turning back at this point. To borrow a Texas metaphor: That horse has already left the barn. Just like calculators help us perform mathematical functions, AI tools help us write.
We just need to recognize that too much of a good thing can be a bit conspicuous.
When asked about her thoughts on the use of AI, Juliette Coulter, recently recognized as one of the five “Most Influential People in Dallas Public Relations,” said, “Writers should write what they feel most comfortable saying in their own original words and style. AI is simply a tool to enhance what’s been written and shouldn’t replace original writing.”
Truth.
And if you think this applies only to me and other communications professionals, you’re mistaken. Writing is a vital life skill, and few of us escape the need to master it as a core competency for success.
Consider this list of roles that require good writing skills — which may or may not also include retirees: journalists, bloggers, teachers, professors, students, scientists, administrators, authors, poets, lawyers, marketers, family archivists, community leaders, and clergy.

Classroom and business considerations
Two years ago, in the era of early adopters trying out the new AI tools, I was asked to teach a graduate communications course at a university. At the time, the department prohibited students from using AI. That policy changed quickly the following semester, as it did at most universities. Now administrators agree that students need to learn these valuable, contemporary tools, but they leave most of the AI rule-making and enforcement to faculty discretion.
Of course, ethics matter. For a time, the temptation to cheat was undoubtedly great. Surely, thousands of students around the world seized the opportunity to exploit that short, uncertain window between casual use and full-on mastery. It didn’t take long for faculty to find a solution. Today, institutions across higher education use AI-detection tools such as Turnitin, GPTZero, Copyleaks, and Winston AI.
While we’re on the topic of ethics, another concern in this digital age is fake content. A law professor told me that not only is student writing integrity at stake, but also content validity. Some students had submitted papers last year citing legal cases that did not exist. That’s right. If you haven’t yet heard about it, AI can hallucinate, creating legal cases that never happened.
AI has also given new meaning to the phrase “fake news.” A recent Washington Post headline describes a scheme like none we’ve ever seen: “Business Insider yanked 40 essays with suspect bylines. Are they related?” The Post’s investigation into the retractions found suspicious articles, possibly written by phony writers or by AI, and filled with dubious facts.
Where is a discerning writer to turn? AI, of course.
Sigh — all of this makes my defense of the em dash seem like a very small hill to die on.