
Chances are that in the past year you've read or seen something that was not created by a human being. Artificial intelligence (AI) has become increasingly prevalent in all walks of life, including the creation of text, images and videos. The use of AI has increased drastically since the launch of OpenAI's ChatGPT in November 2022 and shows no signs of slowing down. According to a report from Europol, a European Union law enforcement agency, as much as 90 percent of online content may be synthetically generated by 2026.
With such a staggering projected increase in AI-generated content, it is more important than ever to be aware of what content is synthetically created, what that means and what's next.
How to identify text created by AI
Ohio University Assistant Professor of English and AI expert Paul Shovlin says detecting text generated by AI can be tricky, especially across different kinds of writing. Faculty might identify a student's writing assignment as AI-generated because it doesn't exhibit the kinds of specificity and word choice they are accustomed to from that particular student. Identification becomes harder, however, when the writing is less personalized and carries less of an individual voice.
"The issue is that the characteristics [a professor] may be using to intuit aren't necessarily stable in different kinds of writing," said Shovlin. "A scientific report isn't going to have an identifiable, eccentric personal voice in it, for example."
At the same time, there are instances of writers who did not use AI having their work flagged anyway.
"There have been reports of the writing of some neurodivergent writers being flagged as likely AI-generated when these individuals did not use any AI assistance at all," emphasized Shovlin.

[Photo caption] Paul Shovlin is an assistant professor of English in Ohio University's College of Arts and Sciences, specializing in AI and digital rhetorics.
Large language models (LLMs), the AI systems that specialize in analyzing, generating and understanding text, can have a "tell" at times. LLMs often function by predicting the best next word to use. This can result in certain "tell" words that are overrepresented in the training data but not used in colloquial speech, says Chad Mourning, an Ohio University assistant professor of computer science and expert in AI and machine learning.
"One that shows up a lot, particularly in the academic setting, is 'delve,'" explained Mourning. "I see many student papers using that word, but they don't say that out loud. Makes one suspicious."
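Mourning's frequency intuition can be turned into a rough heuristic: compare how often each word appears in a suspect text against a baseline corpus of known-human writing, and flag words that are heavily overrepresented. The sketch below is illustrative only, not a real detector; the function name, smoothing choice and scoring formula are my own assumptions.

```python
from collections import Counter
import math
import re

def tell_word_scores(suspect_text, baseline_text, min_count=2):
    """Rank words in `suspect_text` by how overrepresented they are
    relative to a baseline corpus of known-human writing.

    High scores flag candidate "tell" words such as "delve"."""
    tokenize = lambda t: re.findall(r"[a-z']+", t.lower())
    suspect = Counter(tokenize(suspect_text))
    baseline = Counter(tokenize(baseline_text))
    s_total = sum(suspect.values())
    b_total = sum(baseline.values())
    scores = {}
    for word, count in suspect.items():
        if count < min_count:
            continue
        # Add-one smoothing so words absent from the baseline
        # don't cause division by zero.
        s_rate = count / s_total
        b_rate = (baseline[word] + 1) / (b_total + len(baseline))
        scores[word] = math.log(s_rate / b_rate)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

On a toy pair of texts where "delve" appears only in the suspect sample, it rises to the top of the ranking; a real detector would need a large baseline corpus and far more careful statistics.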
Mourning added that earlier LLM models tended to ramble and didn't seem to know when they were done. Newer models, however, can add to the confusion, as they tend to do a better job of replicating organically created text.
"Advanced prompt engineering and bot programming can lead to AI-generated writing that looks more like 'organically created text' than the general model many people use as a go-to solution for AI-generated text," said Shovlin.
How to identify images created by AI
When it comes to images, AI often struggles to generate uniquely human features like faces and fingers. A quick method for identifying images that may have been synthetically created is counting the fingers of the people pictured or checking whether their faces appear distorted.

[Photo caption] Chad Mourning is an assistant professor of computer science in Ohio University's Russ College of Engineering and Technology. Mourning is an expert in aviation safety, artificial intelligence and machine learning, cybersecurity and advanced air mobility.
Even when an image does include people, additional steps may be needed to confirm it as AI-generated. Any sort of distortion, or proportions that look extremely out of place, can be red flags. For a more objective approach, applications and even AI itself can be used to detect images created by AI.
"In theory, any image generated with an AI can be detected by an AI, but there's a lot more effort going into generation than detection," said Mourning. "In fact, this task is, itself, a type of technique we call a generative adversarial network (GAN). You train a generator, then tell it which ones are fake to make a discriminator, then exclude the ones the discriminator detects to train a better generator, which can be used to train a better detector."
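Mourning is describing the adversarial loop behind generative adversarial networks: a generator and a discriminator take turns improving against each other. The toy below captures only that loop structure, with a one-parameter "generator" and a simple threshold "discriminator" standing in for the neural networks a real GAN would use; all names and numbers are illustrative assumptions.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real data" distribution the generator must imitate

def sample_real(n):
    return [random.gauss(REAL_MEAN, 0.5) for _ in range(n)]

def sample_fake(mu, n):
    # Toy generator: produces samples centered on its single parameter mu.
    return [random.gauss(mu, 0.5) for _ in range(n)]

def fit_discriminator(real, fake):
    """Toy discriminator: a threshold halfway between the sample means;
    it labels a point "real" if it lies on the real data's side."""
    r_mean = sum(real) / len(real)
    f_mean = sum(fake) / len(fake)
    return (r_mean + f_mean) / 2, r_mean > f_mean

def update_generator(mu, real_is_above, lr=0.5):
    # Generator step: shift output toward the side the discriminator
    # currently labels "real", i.e. try to fool it.
    return mu + (lr if real_is_above else -lr)

mu = 0.0  # generator starts far from the real distribution
for _ in range(50):
    real, fake = sample_real(64), sample_fake(mu, 64)
    _, real_is_above = fit_discriminator(real, fake)
    mu = update_generator(mu, real_is_above)
```

After a few dozen rounds the generator's output distribution settles near the real one, which is exactly why Mourning notes that a good generator and a good detector tend to improve together.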
How data and the internet influence what AI generates
Artificial intelligence and LLMs are strongly influenced by the content they are trained with. Mourning says much of the growth we have seen in AI is based on training data.
"Most of these generational algorithms are basically weighted combinations of things from the training data, a millionth of this, a millionth of that," explained Mourning. "If every picture labeled 'butterfly' had a certain kind of symmetry, it will ensure that the generated image of a butterfly does too."
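That "weighted combination of the training data" idea can be seen in miniature in a bigram text model, where every generated word is literally drawn from the words that followed it in training, in proportion to how often they did. This is a drastically simplified stand-in for an LLM, not how ChatGPT is actually implemented.

```python
import random
from collections import defaultdict

def train_bigram(corpus_words):
    # Record, for every word, the words that followed it in training.
    following = defaultdict(list)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        following[prev].append(nxt)
    return following

def generate(following, start, length, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = following.get(out[-1])
        if not options:
            break
        # Each next word is drawn in proportion to how often it followed
        # the current word in training: a weighted combination of the
        # training data, nothing more.
        out.append(random.choice(options))
    return out
```

Every word such a model emits traces directly back to its training text, which is why biased or poisoned training data shows up in the output, as the next section discusses.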
Since an LLM like ChatGPT is reliant on the data it is trained with, if the training content is biased or problematic, the resulting content will likely be the same. User-generated content that contains accidental misinformation or intentional disinformation can also pose an issue.
"If there is enough deliberate disinformation that makes its way into the training models, it will show up in the output," emphasized Mourning. "There have been AI-generated search result suggestions telling people they should chew rocks to cure some ailment, based on a humorous response in a Reddit thread. I don't think it was a real danger, but there might be some cases that weren't so obvious."
Shovlin says there are ways to avoid some of this disinformation and misinformation when utilizing AI to generate content.
"You can prompt ChatGPT and other AI tools to focus on specific texts you feed into them and only those texts," he said. "In the case of a programmed bot with rules not to access the greater web, you may be reasonably assured that the responses it generates are from the specific sources you loaded into it."
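Shovlin's suggestion amounts to grounding the model in supplied sources. A minimal sketch of how such a prompt might be assembled, with the helper name and instruction wording being my own assumptions rather than any specific chatbot's API:

```python
def build_grounded_prompt(question, sources):
    """Assemble a prompt instructing a chat model to answer only from
    the supplied source texts, a common way to limit fabrication.
    The exact wording is illustrative, not an official API feature."""
    numbered = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer using ONLY the sources below. If the answer is not in "
        "the sources, say you cannot find it. Do not use outside "
        "knowledge or the web.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )
```

The resulting string would be sent to the model as its prompt; the model may still ignore such instructions, so this reduces rather than eliminates the risk Shovlin describes.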
Is it ethical to use AI?
The short answer is that it entirely depends on the context. Mourning and Shovlin agree that there is nothing inherently unethical about using generative AI, but aspects of deception and privacy can present more of a complex grey area. Shovlin encourages users of generative AI to use rhetorical awareness: critical thinking related to the text they are composing and the audience they are composing it for.
"One question to ask oneself is: 'What would my audience think if they knew I was generating this text with AI?'" said Shovlin. "Another question is: 'What are the expectations of my organization regarding privacy, copyright, and artificially generated vs. human-generated text?'"
How is AI regulated?
Mourning believes that the big ethical questions are related to deception and the unauthorized use of training data. The deception aspect could be easily remedied by adding disclosures; the data portion is a bit more complex. Some LLMs have been trained using YouTube transcripts, something creators didn't necessarily sign off on.
If companies were made to disclose all of their training data, their methods would become public knowledge; requiring them to disclose only where they gathered data, however, could be a good compromise.
"If you make people disclose the actual training data, that's like forcing disclosure of trade secrets," Mourning explained. "But in aggregate, if you have to list where you got the data from, people can at least see if their rights were violated, whether it's an artist's copyright or YouTube's terms of service."
Shovlin is more pessimistic about AI regulation and doesn't think there will be meaningful regulation of generative tools.
"The companies are very powerful, the technology prolific and profuse, and politicians seem to be generally technologically ignorant, based on their responses, for example, to social media controversies," Shovlin emphasized. "There is a powerful point of view that AI regulation gets in the way of innovation, and given the extreme potential of AI, politicians may be hesitant to develop guidelines for it."
Will AI replace writers, other creative industries?
AI is already replacing some writers to an extent, says Shovlin. Some news organizations have used AI to generate coverage of "underserved" sports that would otherwise have been reported on by human journalists.
"While times change and jobs change, it's important that we carefully consider how AI is affecting the workforce and remember that we have a voice and can use it when it's merited," he said.
Creatives are already being replaced, and AI is only going to get better, but some creatives may be able to leverage the new technology, according to Mourning.
"There will always be room for some creatives, but it's going to be fewer of them," he said. "Existing writers may make the best of the inaugural class of 'prompt engineers,' though. It's a transition, not an extinction."