Most people don't know this, but I went back to school about a year ago to pursue a degree in medical lab sciences. I kept it to myself until now, mainly because I had no idea how it would go or how I would adapt to the study, paper, and exam routine. It turns out, I have adapted surprisingly well. After a year, my GPA is at 3.9. It would be 4.0, but I only got an 89% in precalculus. So far, the classes I've enjoyed the most are statistics, chemistry, and political science. I even enjoyed precalculus a lot more than I imagined. It was the class I was dreading, and while I had my moments of wanting to throw in the towel, I stuck with it and ended with a grade I am more than happy with. I got 107% in phlebotomy and aced my clinical rotation at the Comprehensive Cancer Center. I am only half-joking when I say I panic at any grade below 98%. A few of my professors have told me to relax and reminded me that I only need to pass. That's great for some, but I need to pass with an A. I was not a good student when I was younger, and the revelation that I can be "that" student has been beyond incredible.
While I prefer in-person classes, I am taking two
online ones at the moment. I had clinical rotations and needed the flexibility,
since I didn't know what my schedule would be. I'm in my last few
weeks of statistics and English 102. I was dreading statistics since it is
quite mathy, but I have a very open schedule, which gives me tons of time to
read, review, and practice. Anyone who knows me knows that I love to write. It's
something I do to relax. So I was looking forward to a class that is all about
one of my favorite hobbies. For the class, I have to write three papers,
between three and five pages in length. One is an evaluation essay—I chose a
website design product I have worked with since 2018. The second is a
problem-solving essay, in which we had to choose a specific, narrow real-world
issue and propose a couple of solutions. The paper's audience would be the
person or persons who could implement the solution. I chose the multi-use trail
in my neighborhood and the danger of e-bikes on the winding path. The last
paper is one with a bit of research. We have to choose a topic that is commonly
misunderstood, state the misconceptions, find sources that challenge those
misconceptions, and weave in a personal example. All easy-peasy papers. Or so I
thought.
My English professor requires that all papers be
submitted using Turnitin, the software schools have been using for ages to
detect plagiarism. Now, Turnitin has added AI detection, and this is where
things get dicey. Turnitin AI will analyze a paper and decide if it is "likely"
to have been written, in whole or in part, by AI. Keep that word "likely"
in mind. Anything under 20% gets an asterisk, and most teachers leave it at
that. In my class, anything above 20% is ineligible for grading. The options?
Take a failing grade or redo the assignment on a different topic. What gets
flagged as 80% in one detector gets flagged as 0% AI in another. Which one is correct?
Who decides? Who decides who gets to decide?
My writing has ranged from an asterisk to 85%, and I
seem to be clocking in closer to 30% as I do more papers. When I received the
first email saying my paper came back flagged as 85% AI, I was stunned. And
angry. And confused. Even writing about it now, those feelings rush right back
in. I emailed my teacher to dispute the accusation of cheating, and he told me
he wasn't accusing me of cheating or wrongdoing; he said the paper was flagged
and that I needed to redo the assignment. That made no sense to me. If I wasn't
cheating, and if I hadn't done anything wrong, then why was I being punished
with another assignment?
I rewrote it anyway. I whipped out a new paper on a
new topic, EMS Fitness, in about an hour. That one got a 95% grade. The
original paper clocked in at 41%, and after reworking and resubmitting, it came
in between the high 20s and the low 40s. That's likelihood, not fact. And that
likelihood takes none of my previous writing into account. I've written
professionally since my early 20s. This blog started after years of writing
funny stories to friends who begged me to blog. I was the person my friends
came to when they needed their papers proofread and edited.
The problem with AI detection is that there is no
proof. I can't prove I wrote it, and Turnitin can't prove I didn't. It becomes
a he-said / it-said situation. With plagiarism, at least there's something
concrete to point to. I spent the night looking up information and statistics
on Turnitin AI. Turnitin has been trained on over a billion student
papers—papers they scanned to identify potential cheaters, but then kept
forever so they could evaluate new submissions. It looks for patterns,
vocabulary, polished grammar, all the things that were drilled into me when I
was younger, and many things I have refined over the years. Oh, and that long dash
"—" is called an em dash. It has been around since the 17th
century and is used to set off an explanatory remark, not unlike parentheses.
Shakespeare was a big fan. So is ChatGPT, but it is not proof of AI writing.
Turnitin claims a false-positive rate of around 1%. I have
several issues with that number. Independent testing, including from outlets
like the Washington Post, has shown a false-positive rate closer to 50%. And if
no one can prove whether AI did or didn't write something, then where exactly
does the "1%" come from? I could not find any feedback loop. That's
disturbing, because a feedback loop tells you what's working and what isn't, like
when a product stops selling, or when raising a price causes sales to fall. But
when AI flags a paper as being written by AI, something neither party can
prove, how does a false positive ever get reported? Turnitin is a
billion-dollar company that makes money by identifying "cheats," for
lack of a better word. It is in their best interest to flag as much as
possible. The more "cheats" they find, the more institutions will be
keen to keep paying for it.
I also discovered that universities such as Yale, UC
Berkeley, UCLA, UCI, Vanderbilt, Notre Dame, Georgetown, the University of
Edinburgh, and others have banned Turnitin's AI detection or strongly
recommend against using it. That alone says a lot. Institutions like MIT have also debunked
AI detectors.
So now I'm in the awkward position of having two
outstanding papers being held up by AI detection. My professor and I have
exchanged dozens of emails. At one point, he asked if he could run my long
email—over two pages—through Turnitin. It came back at 0% AI. Now he argues
that I can write without being flagged. And yes, technically he's right. But what is
the difference between my papers and that email? Anger? Emotion? Stress? A lack
of transitional sentences? No neat structural phrases like "in conclusion"?
I honestly don't know.
On Tuesday morning, I decided to start over and write
a new problem-solving essay on a new topic. It took me about an hour. I
submitted it, and it came back over 30% AI. I asked my instructor whether there
was a difference between the first page, which did not flag, and the second
page, which did. He didn't look. He only looks at the score. I asked if, after
reading my emails and the first page of my paper, it seemed unreasonable that I
would have written it without AI. Again, he doesn't look at them if they don't qualify
for grading.
Here is one of the paragraphs in question. According
to Turnitin, I used AI to write this:
Installing convex mirrors that provide visibility
around otherwise blind curves is another practical solution. The same types of
mirrors are used in residential communities, parking garages, and hospital
hallways. A couple of well-placed mirrors at the S-curve and recreation center
would help everyone see who is coming when navigating those areas. The
visibility would give both cyclists and pedestrians an extra few seconds to
react and plan. Once installed, the mirrors require minimal maintenance. They
are a cost-effective way to make long-term safety improvements.
Is it unreasonable to believe that I wrote that
paragraph without help?
And that raises the question: if a professor refuses to
apply the tiniest bit of human judgment, then what is the point of having a
professor? If AI can deem my paper AI-written, then let it grade the paper,
stop paying the professors, and make college more affordable.
With my grade and degree hanging in the balance, I've
offered to write my papers under supervision—in his office or anywhere on
campus. He can choose the topic on the spot, so I have no time to prepare, lest
he think I memorized something from ChatGPT. Since the paper has to be on
something we are knowledgeable about, I only ask that he limit it to marketing
or small business. I haven't heard back. I figure there are only two outcomes:
I pass the AI detection and am eligible for grading, or I flag as AI despite
incontrovertible proof of authorship, which I assume would still make my paper
eligible for grading.
This whole dilemma is getting in the way of my other
classes. My statistics class, which I really enjoy, has an extra credit paper
we can write. My current grade is in the high 90s, and as much as I love a
triple-digit grade, I'm going to sit this one out. I don't want to deal with
the stress, and I can't risk my degree or everything I've worked for this past
year.
It makes me angry and sad, and honestly, it makes me
wonder how many other students are in the same situation. Even at a 1% error
rate, that's 300 students on my campus alone. If the error rate is closer to the
Washington Post numbers, that jumps to 15,000. I try not to let fear hold me
back. I will often do something just because it scares me. But this time, fear
wins out. I can't risk it. Since when did going to college mean holding back
and dumbing down? Keep in mind, we are talking likelihood, not proof.
What if you were driving down the street, and you got
pulled over and fined for speeding? You weren't speeding, but the officer thinks
it's 35% likely that you were, given that you drive a sports car. No ticket, no
report to your insurance, no accusation, no wrongdoing, just a fine based on
likelihood. But
you can't leave without paying the fine. If you don't pay, your car gets
impounded. Would you quietly pay the fine? What if instead of a fine, you had
to forfeit your license? Would you?
When people talk about how AI writes, they seem to think
that AI has created a new way of writing. It didn't. It learned from billions
of samples across centuries. AI has supposedly read every book ever published.
It was taught how to write by humans. Is it really such a stretch to think that
maybe some of us, legitimately and without malice, write similarly? Here's a fun
fact: the first time I ever used ChatGPT, a few days after it was publicly
released, I did have a bit of a "That sounds like something I would say"
moment. At the time, I thought it was funny. Now, not so much.
I've heard stories of students sacrificing a better
grade to avoid being flagged by AI. What doesn't flag? Typos. Bad grammar. Poor
structure. And what brings down a grade? Typos. Bad grammar. Poor structure. Sounds
like a win-win to me! Alex, I'll take Impossible Situations for $100! Turnitin
doesn't read papers. I could say that I was hanging out with George Washington
while he was writing Harry Potter, and that wouldn't flag as AI, even though AI
is famous for hallucinating. But if my punctuation and grammar are perfect?
That's another story.
When I was a teenager, we didn't have spellcheck; we
had to know how to spell or look it up in a dictionary. In the library. We
typed our papers on old typewriters, and for term papers, we weren't allowed to
use correction tape or fluid. If we made a mistake in the last word of a
footnote, we tore out the paper and started over. No copy/paste, no undo
button. We were taught to be on our game when it came to writing.
And now, apparently, that's suspicious, at least to an
algorithm.

