Large Language Model Generative Artificial Intelligence
Beginning in late 2022, chatbot tools powered by generative transformer or “large language model” (LLM) Artificial Intelligence (AI) technology became widely available on the internet. In the last two years, generative and LLM AIs have developed rapidly. In the Center for Online Learning and Innovation (COLI), we have observed these changes in our testing, and have listened to colleagues at Canisius University and across higher education and K-12 as they developed ideas, methods, and frameworks for understanding LLM AIs.
Here are COLI’s working notes on the relationship between LLM AIs and college-level pedagogy. Our recommendations are necessarily tentative and subject to change, as experts in various fields come to better understand how AIs operate, and as AIs frequently gain or improve capabilities.
We present some possibilities for mitigating student misuse of AI, including how to detect or prevent cheating. We also suggest ways of having students use AI for certain learning tasks and even assignments. University education should not be mere knowledge transfer, but should guide students as they determine who they want to become and cultivate learning skills and habits of mind. It is likely that they will use AI along that path; if they do not learn to use AI safely, ethically, and responsibly from academic intellectuals and practitioners, they will learn to use it elsewhere.
AI and Our Curriculum
Each discipline, and each faculty member, will determine the extent to which LLM AIs compel adaptation or alteration of curriculum and teaching methods. Here we offer generalizations that may assist our faculty in considering AI’s relationship to their courses, and in adapting curriculum and pedagogy to the generative AI era. How should we incorporate AI into our pedagogy, given that what we teach should mirror professional or civic life, where AI use may proliferate? In what ways can AI interactions oblige students to learn disciplinary knowledge and skills, even if only by identifying AI’s shortcomings in those areas?
This affords several things to consider when devising AI-era curriculum:
- Which of the tasks, methods, or ways of thinking that constitute our learning goals and objectives can AIs do well, and which can they not?
- Do we still need to assess our students’ abilities to perform tasks, methods, or ways of thinking that AIs can do reasonably well? If so, how can we do that without AI interference?
- Where might we incorporate AI into curriculum and assessment, in ways that reflect probable real-world use of AI? How might our students be required to use AI in their professional lives, and how might we prepare them for those tasks?
Ignoring AI is increasingly not an option for university faculty. It is widely available as standalone websites and as “companions” or features within other popular software. Our students will use AI in a variety of ways, only some of which we may anticipate. These tools may replace, but also augment, certain skills in composition, analysis, and calculation. We may seek ways to detect inappropriate AI use, both to prevent and, hopefully, to deter cheating. But as Cath Ellis and Jason Lodge remind us, the goal is to assess student learning, not student cheating.
What Can AIs Do?
AI capabilities vary by discipline and change with each AI chatbot update. Faculty across disciplines have sometimes generalized that AI can perform in the C to low-B range on writing assignments, but this is admittedly subjective.
It is worth understanding some basics of how AI chatbots work.
Each faculty member should do two things to gauge their existing assessments’ vulnerability to unauthorized AI use:
- Ask an AI to complete (examples of) your assignments, and assess the results. Can the AIs achieve a passing, or even a high, grade? In what ways do they fall short of what you expect from students? You will likely need to repeat this periodically as AI capabilities change.
- Discuss with colleagues their impressions and experiences.
We can only offer some generalizations that might help put your own experiments in perspective. The list below assumes a conversation consisting of one to three human prompts and AI replies.
- Free AIs can produce text and images, and plausible (if not always truthful) narratives, descriptions, or arguments that are suitable submissions for many undergraduate writing assignments. AIs requiring a paid subscription can also produce video animations and slide-deck presentations (including slides and scripts), and they generally produce higher-quality images and text as well.
- The AIs are trained on information openly available on the internet, and increasingly some that is limited-access (requiring a subscription). The chatbots wield encyclopedic knowledge of historical events, social, political, and philosophical concepts or issues, economics, and culture.
- AI excels at foundational tasks: recalling facts and definitions (available on the web), comparisons, summarizing theories or specific texts, and explaining procedures, laws or principles.
- The chatbots might have considerable knowledge of literature that is not available on the web, if there are many secondary sources or conversations about it on the internet. For example, ChatGPT might know themes, events, timelines, and even details of Eugene Sledge’s book With the Old Breed, although that book is in copyright. This is because the book is a popular memoir that has informed popular media such as the miniseries The Pacific. However, the AIs might have only vague knowledge of a novel such as Marcus Goodrich’s Delilah, which was not digitized.
- AIs may be aware of, but not (yet) have good command of, individual sources in archives that have not been digitized, or articles behind paywalls, such as those in subscription publications or academic databases.
- AIs have become quite good at answering multiple-choice questions, even focused on specialized or arcane content.
- AIs can write fictional accounts that are solid, if not usually very original. They can often, but not always, create hypothetical examples of theoretical concepts.
- AIs are less capable of analyzing complex, real-world circumstances in the past or present. They can struggle to identify important arguments or to build detailed interpretations around particular pieces of evidence, especially if the latter are not in their training data.
- When AIs lack information essential to answering a prompt, they may indicate that, or they may fabricate (hallucinate) details. They are designed to simulate conversation, which is not the same as being an authority committed to truth.
- AIs occasionally botch calculations and computer programming.
- In response to single or simple prompts, AIs frequently answer with vague or obvious observations that lack depth or detail.
- AIs can read and summarize documents (such as .pdfs) provided in a prompt, but may struggle with details in the documents.
- AIs excel at writing boilerplate or formulaic content, for example proposals, business plans, or courtesy correspondence.
Jason Tangen at the University of Queensland summarizes AI’s ability to produce media that could plausibly earn high grades on many pre-generative-AI undergraduate university assignments.
Course-Level Policies
You should add an AI clause to your syllabus. This indicates what role AI may or may not play in your course. Canisius University’s Academic Integrity Code establishes two things regarding AI in course work:
- In the absence of specific instructions from the course instructor, use of AI is “unauthorized assistance” and therefore prohibited.
- Exceptions to the above occur when instructors permit or direct students to use generative AI. Instructors choose when, where, and how much students may use AI for any specific task, assignment, activity, or procedure in a course.
Academic freedom resides in the second point: you determine the role of AI in your course. Your overarching course policy will in large part be determined by how you approach AI within lessons and assignments. For example, you may generalize that AI use is permissible with Assignment Type A, but not Type B. Or you may say that in assignments where AI use is permitted or encouraged, students should always be transparent, describing how they used AI and perhaps even linking to the chat as a citation.
This guide supplies a series of guidelines you may copy, modify, and otherwise use. If they do not supply a complete policy, some may inspire a policy crafted specifically for your course.
AI Use By Students
How may students use AI in your course? This really involves three questions:
- What is permitted by the university and by the course syllabus (that is, by you, the instructor)?
- Practically, where and how may a student use an AI to advance either their learning or their grade in the course?
- How can you, the instructor, know whether a student has or has not used AI for work related to your course?
Leon Furze, Mike Perkins, Jasper Roe, and Jason McVaugh have crafted an AI Assessment Scale (AIAS) that helps us consider how we may (or may not) permit or encourage AI use in our course assignments.

The AIAS is very general, but it does help us organize our thinking as we experiment with incorporating AI in assignments for students. It is well worth visiting Furze’s blog to read about how this tool was developed, and how to understand AI’s implications for pedagogy.
How to Proceed?
Below are additional pages that offer tips and insights for planning your pedagogical approach to AI:
Assignments without AI
There are reasons and methods to keep AI separate from assignments. In some cases, these can still be part of our assessment strategy.
Assignments with AI
There are compelling reasons to begin incorporating AI into our assessment methods. Already, faculty in various disciplines are doing this, with promising results.
Detecting AI use by Students
In many cases, it is possible to determine whether or not students have employed AI in their work. This can be part of preserving academic integrity, although these methods should be employed alongside instructional and assessment designs that incorporate AI, or that assess concepts and practices where AI use is impractical.
AI and Academics: Sources
Sources for further learning about AI, teaching and learning.
AI and Academics Resource Change Log
Check back here for updates to this resource.
