AI and Academics at Canisius University


Methods for AIs Within CourseWork

Each discipline will determine the extent to which LLM AIs compel adaptation or alteration of its curriculum.  Each faculty member will also need to consider the relationship between their teaching style and methods and LLM AIs.  The list below, however, may spur some inspiration.

At the Top of Bloom’s Taxonomy

Assignments that require creation or evaluation are particularly suited to humans rather than AIs.  Have students make arguments based on (original or primary) evidence, or have students provide an interpretation, or an assessment of quality, of a particular composition or source.

AIs can simulate originality, and the simulation can, to an extent, pass for simple variations on existing ideas or novel arrangements. But students are more capable of making arguments or developing nuanced and detailed perspectives, for example by comparing several different media using analytical tools or concepts. “The biggest thing [AIs are] missing,” explained OpenAI President Greg Brockman in summer 2023, “is coming up with new ideas.”

Brainstorming

AIs might assist students and faculty at the very beginning of thinking about a topic.  This is akin to consulting an encyclopedia, and students may get some basic search terms from a helpful LLM AI.  Moreover, the ability to have a conversation with an AI might spur additional inspiration, or paths of exploration.

For faculty, having an AI craft a lesson plan, or portions of a syllabus, may be a starting point for designing a course.  Even if the finished product departs heavily from the LLM AI’s initial suggestions, it can still be beneficial for getting the project started, as well as for considering possibilities for content, activities, or even learning goals and objectives that faculty may not otherwise consider.

In each case, the AI gets the user past the blank page.

Mundane Writing Chores

We might use AI to write the boilerplate text that professionals use every day at work.  We might permit students to do the same.  We might also ask students to analyze the boilerplate responses generated by AIs, looking for cultural context: in what ways does an AI write a thank-you letter, or a job application cover letter, that reflects certain social, economic, or cultural status?  Whose voices or ways of expression are not represented by the AIs? Why might that be?

You or your students may use AI to develop sample text for projects or processes.  For example, could the AIs write simple scenarios or case studies that students can work through using skills or abilities learned in a course?  AIs can produce data in columns or .csv, useful for learning statistics or data processing.
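As an illustration, the kind of sample .csv an AI might be asked to produce can also be generated with a short script, which students could then load into a statistics exercise. The column names and value ranges below are hypothetical, chosen purely for illustration:

```python
import csv
import random

# Generate a small hypothetical dataset for practicing descriptive
# statistics: 20 students, with hours studied and a noisy exam score.
random.seed(42)  # fixed seed so every student gets the same data

rows = [["student_id", "hours_studied", "exam_score"]]
for i in range(1, 21):
    hours = round(random.uniform(0, 12), 1)
    # Score rises with study time, plus some random noise, capped at 100.
    score = min(100, round(50 + 4 * hours + random.gauss(0, 5)))
    rows.append([i, hours, score])

with open("sample_scores.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

Students can then compute means, correlations, or regressions on the file, with the instructor knowing exactly what relationship was baked into the data.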

Getting Started: Writing

“The blank page can be terrifying for artistic creation,” William Lowry, theater professor at Lehigh University, points out. He and professor Lyam Gabel have students use AIs to get started playwriting. Canisius Writing Center Director Graham Stowe suggests that LLM AIs might help “people struggling with writers’ block, giving writers just the spark they need to get a project going.”

Students might employ AIs to produce draft outlines, checklists, or just lists to help get organized early in a more complex assignment. These scratchpad notes resemble what students might previously have developed with help from friends or family; they organize the work ahead, and usually would not be considered academic dishonesty. Even parts of a draft that require extensive revision and reconsideration may not be inappropriate, even in an assignment that requires students to produce substantive content. If AIs develop text that is totally unsuitable – bad arguments, ponderous dialog, and so on – this gives students the opportunity to evaluate and modify it. It stimulates authentic student creativity in response.

Assignment


Ask students to prompt an AI for a draft project design. This project might be a marketing campaign, a film, an experiment, or a website.

Indicate in your instructions that students will submit both the AI’s draft and their revision of it, together with annotations explaining their rationale for what they kept, revised, or replaced.

Getting Started: Research

A professor might reasonably suggest that a student request specific kinds of help from an AI when beginning a research project. This might include narrowing down a topic through conversation, as well as learning about different kinds of possible sources.

A student approaching a research project, especially one outside their major, may do well to ask an AI for sources. They may need to prompt the AI with specifics: for example, scholarly sources on polio in the United States since 1900. With a long enough list, the AI will likely hallucinate, but if students are prepared for this, weeding out bogus sources isn’t a problem.

More than just a list of sources, an AI might provide good advice on types of sources, especially when the latter aren’t immediately obvious or specified in assignment instructions. For example, let’s say a student is researching public health measures surrounding polio in her home state, and needs suggestions for primary sources. An undergraduate student in biology or pre-medical studies might not reasonably know about the varied kinds of sources historians use in medical history: CDC, state, and municipal bureau reports, oral histories in historical society or university archives, or meeting minutes and public hearing transcripts. If an AI suggested any of these, it might help our student learn more about the structure of government and non-government institutions concerned with polio in the past.

If the student asked for suggestions for types of secondary sources, an AI might suggest theses and dissertations. Students are typically aware of books and articles, but may not be aware of university libraries’ dissertation or thesis collections.

A student should still consult their university librarian for assistance. But suggestions gathered from the AI might make a conversation with a librarian more productive: the librarian can more easily suggest sources the AI failed to mention, as well as direct students to the sources the AI did suggest.

ChatGPT 4 responds to a request for types or kinds of sources available for historical research in public health.

Programming or Spreadsheet Chores

LLM AIs have some success in writing computer code, Excel formulae, and other digital coding for calculation or functions. Dr. Justin Del Vecchio, in Canisius University’s Cybersecurity program, conducted a series of tests and found that ChatGPT provided tremendous time efficiency in code-crafting. A professor may instruct students to experiment with AIs to move through processes more quickly, in support of activities that directly serve or assess other learning objectives. The point of the assignment may not be to develop a particular module in Python, or an array formula in Excel, but these may be necessary steps in completing that assignment. Why not have an AI do that step, while teaching students to incorporate the AI’s more or less generic (or flawed) suggestion into their own work with prudent evaluation or necessary modifications?
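As a sketch of that workflow, suppose a student asks an AI for “a Python function that averages a list of numbers.” Both functions below are hypothetical: the first is the kind of generic draft a chatbot might produce, and the second is a student’s revision after prudent evaluation:

```python
# Hypothetical AI draft: works for the happy path, but crashes on an
# empty list and on non-numeric entries.
def average(values):
    return sum(values) / len(values)

# Student revision after evaluating the draft: filter to numeric values
# and fail with a clear error instead of a ZeroDivisionError.
def average_checked(values):
    numbers = [v for v in values if isinstance(v, (int, float))]
    if not numbers:
        raise ValueError("no numeric values to average")
    return sum(numbers) / len(numbers)
```

The point of the assignment need not be the averaging function itself; the assessable skill is the evaluation and hardening step the student performs on the AI’s generic suggestion.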

Students may also be asked to reverse engineer AI-generated code, both to understand coding by way of demonstration and to debug or assess problems with the code. AIs may generate code that simply fails to execute. Or, AIs may generate executable code that nevertheless incorporates bad practices or inelegant processes. Students can become stronger in their coding skills by evaluating AI code, akin to peer review.
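A hypothetical exercise along these lines: the first function below is the sort of plausible-looking code an AI might produce. It executes, but an off-by-one error in the loop bound makes it skip the final character. Students who trace the loop can spot the bug and write a corrected, more idiomatic version:

```python
# "AI-generated" draft (hypothetical): runs without error, but the loop
# bound is wrong, so the last character of the string is never checked.
def count_vowels_buggy(text):
    count = 0
    for i in range(len(text) - 1):  # bug: should be range(len(text))
        if text[i] in "aeiouAEIOU":
            count += 1
    return count

# Student's corrected, more idiomatic version.
def count_vowels(text):
    return sum(1 for ch in text if ch in "aeiouAEIOU")
```

Comparing the two on a word ending in a vowel (for example, "idea") exposes the bug immediately, which is exactly the kind of testing habit the exercise is meant to build.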

These ideas offer the potential, as Dr. Del Vecchio points out, that computer science students “will learn a new skill; how to create a proper set of instructions or requirements that AI agents, like ChatGPT, use to autogenerate code.” Programmers will combine this skill, and the AIs, with their own higher-order software development skills to create new kinds of software.

Fictional (but Plausible) Examples

LLM AIs are designed to simulate human beings, and so may be sources for learning simulations.  For example, could ChatGPT write essays in the style of an adolescent student?  It might provide teacher education students with examples on which to practice assessment and feedback skills.  Can Bing Chat suggest problems for mathematics students to solve?  Can Google Bard write business plans, or military Warning Orders, that finance or ROTC students can analyze for quality and appropriateness?

Can the LLM AI provide examples of texts for various sorts of analysis? Can the AI pretend to be a historical figure, writing a letter? Can it (as above) produce writing in a period style or genre, useful for student critique? These may require careful prompt writing to be useful, and in many cases, the AI may be limited to providing examples for early students and not upper-level courses. Nonetheless, this can be a time-saver for faculty who otherwise must continually identify fresh texts for student assignments.

Source Analysis

LLM AIs might provide handy source content on which students can practice critical thinking skills.  Can students spot certain arguments or descriptions that have political implications, and so are not as “objective” as the AI’s tone might suggest?  Can students spot errors or falsehoods?  Can students employ web literacy and fact-checking skills to assess veracity, or just cultural nuances, within an AI’s version of a story?  Are students able to spot writing styles that reflect class, race, ethnic, or other social identities in the chatbot’s default mode? A more complex question is how well different language styles and dialects fare in communicating with the chatbots via prompts, reflecting perhaps the cultural and social limits of the underlying technology. Graham Stowe offers a basis for collective inquiry in humanities and social sciences classrooms: “Hegemonic and dominant linguistic systems are bound to be embedded in the systems that make the bots function.”

Hypothes.is has suggested prompts for students to analyze ChatGPT-composed essays. These prompts might work for, or inspire, equally applicable assignments based around Claude 2.

Prompt Engineering

At least for the present, how the user crafts a question or command for an LLM AI, called prompt engineering, determines the AI’s product.  Even subtle rewording of questions that are (to humans) the same can produce radically different results from the AI.  It seems that, for the time being, prompt engineering might be a useful skill to cultivate in students.  Which specific practices or procedures produce different kinds of outcomes may vary by discipline.

Sources

LLM AIs have trained primarily on openly available content, whether on the internet or in books that have been digitized.  But much of what we assign is copyrighted content, out of necessity, since that is where specialized disciplinary knowledge is found. This may not be available for training AIs. Lots of other text, even on the internet, has not (yet) become part of their training corpus. Some text may be available on the open web, but practically accessible to AIs only for users who pay a subscription fee. It may be that assignments focused on these sources are “AI-proof,” but they may also be practically what we would assign anyway, given our discipline and course goals.

Similarly, having students do primary research is both pedagogically sound and irrelevant to AIs.  If students must do the lab work, or labor in the archives, they acquire familiarity with the foundations of knowledge.  ChatGPT itself points to “original research” as something it cannot perform or simulate.

ChatGPT remarks that it cannot conduct original research as historians do it. February 4th, 2023.

Micro Examples

LLM AIs will not have extensive access to specific examples that illustrate larger trends.  For example, asking students to read testimonies, letters, or documents that were written in the past but are not particularly famous can help them connect greater ideas to specific people or events.  Aside from the issue of LLM AIs, this often generates greater interest among students.  For example, having students read a letter written by a nurse during the 1918 influenza epidemic, or a Treasury Department report about a specific corporate fraud case, can help students understand larger arguments or legal concepts within the structure of a compelling story.  LLM AIs may not be able to write with authority about these cases, since they are not published on the open internet, and this gives students the opportunity to draw their own conclusions.

Some (Famous) Texts: Where AIs Can Help

As noted above, LLM AIs can easily slip into providing falsehoods in response to questions about specific things. And they may not be able to access or properly analyze most text written by humans over the past several thousand years. However, many famous works of literature, public policy, and other texts are in their training corpus, along with secondary or tertiary sources discussing those famous works.

A student might, for example, ask Google Bard or ChatGPT for help in understanding specific concepts within a challenging text, such as Karl Marx’s Capital. The more a text, and a specific concept, has had widespread circulation, the more likely an AI can help a student.

The LLM AIs might provide somewhat more flexible and responsive assistance than an encyclopedia or other research reference. On the other hand, AIs may do a better job with individual concepts discussed in a few pages than with more complex threads incorporating separate concepts or discussions throughout a text. They may readily provide good hypothetical examples, but may struggle with historical examples.

ChatGPT, May 30th, 2023

Beyond a single work, AIs might be able to quickly compare two or more sources that are commonly available on the web and in their training content. A beneficial lesson in prompt engineering may ask a student to experiment with this, since a plausible use of AI in the future might be high-level review, or distant reading and analysis, of large and diverse collections of texts.

Other “Texts:” Where AIs May Not Help

Depending on how you wish students to reflect on, react to, or analyze a text, you may find your students can develop particular skills where AIs are less helpful. If you have a document available on a web page or in a PDF file, or a YouTube Video you’d like students to discuss together, you might use a toolset like Hypothes.is to have them engage via collective marginalia.

Hypothes.is can now enable student conversations in the “margins” of a YouTube-based video. For example, students can compare visual elements in musical performances to surrounding social, cultural, and political phenomena of the era.

You may assess students’ skills by asking them to make observations or comparisons to specific passages in a text, or point out the significance of scenes or visual elements in a video production. Moreover, they may be obliged to respond to fellow students’ comments on a text or video. In each case AIs might not supply quick or easy answers. Even if the student can supply a .pdf file or transcript to the AI, the AI may not supply analysis that is sufficiently relevant or focused. Students will need to read, watch, reflect, and compose an adequate reply to your assignment prompt.

Audio, too, may be content that AIs are less able to analyze. For example, the Buffalo History Museum’s podcasts cover a range of Western New York history topics that provide content for various disciplines. Many other disciplines have podcasts as well.

Classroom Analysis Writing

An old but good teaching method is the “exit ticket”: at the conclusion of a class, students write a short reflection on the class’s activity, be it a lecture, discussion, or some other exercise. This can be just a single-sentence response, such as the Main Point or Muddiest Point (e.g., what they still don’t understand), or it can be a lengthier writing assignment, perhaps due after several hours or days. If students are asked to reflect on their classroom learning and review their own notes, this is a beneficial activity regardless of AIs. But since the AI did not attend the class, students must compose their own response.

Scaffolded Work

“One-and-Done” assignments are where LLM AIs shine.  If you instead require students to complete a project in stages, and provide formative feedback at each stage, students are more likely to learn research, writing, computational, and other skills, and to acquire more confidence along the way.  This isn’t something they can hand off to AIs.

Metacognitive or Reflective Writing

Have students write reflections on course concepts or their learning.  For example, have a student describe how they arrive at a (perhaps tentative) conclusion based on available evidence.  Have a student describe how they arrived at their method for coding a program.  The AIs can simulate this only to a limited extent, and besides, might get things wrong.

Image Analysis

Several AIs produce images in response to a text prompt. Students can experiment with these to produce images of various kinds and qualities. These can be useful for augmenting student creations such as websites or videos, but can also be visual materials students can analyze using various disciplinary techniques.

(Many of the images on this site were produced using Bing Image Creator, a Microsoft product powered by OpenAI’s DALL-E engine.)

Creative Production That Isn’t Text or Flat Images

Have students create narrated videos: documentaries, tutorials, explainers, and so on.  While these could in theory be scripted by an AI, you may reasonably require composition that is more closely tied to visuals on screen, which might make AI-generated information less useful.  Even audio, such as podcasts, can require students to consider tone and delivery as part of the assignment, and it is less likely they can plausibly deliver an AI-generated script with warmth and a convincing tone. As with all above, this is a solid pedagogy regardless of AIs, since students are compelled to think critically about media that they are more likely to encounter than a traditional college essay. 

