AI and Academics at Canisius University


Was it Written by AI?

LLM AIs are designed to simulate people’s writing, but there are often signs that a text was written by AI:

  • AIs do not adhere to standards of accuracy or truth, and so will invent events, people, or other details as needed to plausibly simulate a person writing about a topic.  This can even include sources cited within the text.  AIs might also misattribute real quotes to the wrong author or speaker.  So if something seems untrue, it might be an AI at work.  Check sources and quotes for accuracy.
  • AIs may become vague or evasive if they do not have access to sources sufficient to respond properly to a prompt.   
  • AIs may struggle to understand a prompt, rather more than most real people.  So they may in effect answer the wrong question, in whole or in part.
  • AIs are supposed to supply unique answers, and a user willing to put time and effort into prompting may receive very different prose than a user who writes a simple, crude prompt. In the latter case, however, AIs will often reuse language (phrases or even whole sentences) when asked to regenerate a response, or when prompted by different users within a short time frame (perhaps days or weeks).

AI Detection: Mechanics

The popular plagiarism prevention and detection service Turnitin has a toolset for detecting AI-composed writing within student submissions.  Additionally, other free or paid services offer AI detection capabilities.

These AI detection tools may themselves be powered by AI. The more sophisticated tools provide users with a judgment on whether a particular text was written by AI. For example, a professor may activate Turnitin within their course dropboxes, and Turnitin’s AI detector will thereafter attempt to determine whether student work submitted to the dropbox is AI-generated.

But, as Turnitin itself advises, we strongly recommend that faculty follow up on any suspected unauthorized AI use with further steps.  Check citations and quotes.  Does the student’s submission properly address the assignment prompt?  Does it answer the right question or perform the correct procedure?  Are quotes and citations properly attributed to real sources?  Is there enough detail or depth of argument to satisfy the prompt?  Discuss the context of the assignment and the submission with the student.

Evolving Pedagogies

Ultimately, the best assurance that students do not misuse AI for academically dishonest practices is to design coursework that offers little opportunity for AI misuse. This isn’t simply a way to thwart potential student dishonesty; it moves our academic disciplines to better teach and assess the kinds of skills students will need in professional worlds shaped by AI.

Simply put: are there things AIs do not do well or accurately that professionals and engaged citizens must do? Your discipline’s fundamental learning goals and objectives probably outline several of these skills or abilities. Can you develop assignments and activities where students must practice, build, and demonstrate mastery of those skills? Can you demonstrate to your students where AIs fall short of acceptable outcomes in these tasks? The methods discussed on this site can help faculty get started on revising their courses to better serve learning goals and objectives, regardless of LLM AIs.

