AI and Academics at Canisius University


Helpful Steps for Faculty

As Dr. Del Vecchio points out, AIs can perform a variety of tasks to support faculty in teaching, research, and scholarly or artistic creation. This can include suggesting everything from course content to assignment prompts. Since it only takes a few minutes to ask an AI’s opinion, you have little to lose if you aren’t impressed by its replies. And it might provide you with ideas for teaching your next class.

Update Your Course Policies

Mention generative AIs in your syllabus.  Should students avoid them altogether, for some or all assignments?  Can students use them for certain purposes?  This policy may be imperfect at first, until you acquire greater familiarity with LLM AI capabilities.  But it is better than nothing.

Periodically reflect on what exactly your position on AIs is from a curricular or pedagogical standpoint.  Do they have no place in your classroom or activities?  Could they reasonably assist students in some parts of their work, so that students can better focus their efforts on other, more important things?  Would you like students to experiment with AIs, to determine for themselves a sense of what LLM AIs are or are not capable of in your discipline?

This guide supplies a series of guidelines you may copy, modify, and otherwise use. If they do not amount to a complete policy, some may inspire one crafted specifically for your course.

Experiment With AI

Try having conversations with the LLM AIs on topics within your professional discipline, or on just about anything else.  Get a sense of how they respond.  Ask the AIs for tips on writing prompts, so you can get answers that help you better establish expectations for students. More specifically, can the AIs provide you with information that augments your lesson content, by pointing out new sources, arguments, or processes you previously did not discuss with students?

Beyond that:

What Can It Do?

Obviously, you may be curious whether an AI can complete the assignments in your current courses. You can prompt the AIs in various ways, to determine which assignments you may need to modify or scrap. If you ask an AI to answer a prompt several times, you will get somewhat different replies, but you will likely notice patterns in them, including turns of phrase, sources, and style. Moreover, be on the lookout for falsehoods, since they can easily creep into replies to very specialized or discipline-specific requests.

But beyond your current assignment prompts, ask the AIs to perform the types of analyses that are core skillsets for your discipline.  Can it accurately perform calculations of a sort?  Can it interpret types of evidence commonly used by professionals?  Can it identify important elements in a given text, according to certain scholarly or professional priorities?  Can it discuss relevant literature on a particular topic, or describe debates within a discipline?  And with all of the above, can it provide depth, detail, or precision that you expect students to exhibit when completing assignments?  

You’ll likely discover limitations.  Either there are procedures or activities you have students do that AIs don’t do very well, or the AIs’ penchant for falsehood will surface.  For example, Dr. Mark Gallimore found that ChatGPT invented details, and even whole simulated accounts by World War II nurses, when prompted to summarize examples from an anthology of primary sources. Even ChatGPT 4, which requires a paid subscription, was unable to provide good examples of historical concepts, analysis of a novel’s characters, or a good summary of a historical argument.

Whether you find that the AIs do a good or a bad job, document these examples to share with students.  Even just asking AIs to summarize lengthy texts may not be as AI-friendly as it first sounds, if those texts are less likely to be the topic of analysis (good, bad, and everything in between) on the internet.  For example, the AIs know Moby Dick or even Lord Jim very well, but they are less likely to be able to summarize arguments in scholarly articles or monographs.

With Your Students

When we say that AIs are likely a part of the professional future for many of our students, and that we need to prepare them to work with or around AIs, what we might mean is simply making students aware of AIs’ likely current behaviors, and developing in students a habit of remaining aware of trends in AI.  For example, if you are teaching economics or anthropology, you might periodically prompt AIs to discuss the day’s class subject, content, or activity, and then discuss with students how the AI responds. Likely, much written about AIs will prove to be hype, but if LLM AIs become critical parts of knowledge work infrastructure, we must help students understand them. “Students can easily get lots of information from the internet,” English professor Laura Dumin reasons, “but do we really want them to learn about AI programs from TikTok?” Dumin provides an excellent rationale, as well as tips for engaging candidly and productively with your students around generative AIs.

Various critical perspectives abound, many tailored to specific fields. You can find blog posts, essays, and books that deal specifically with the social and ethical implications of AI, or of automation generally. Even brief essays that deal in generalizations can be the basis for classroom discussions. For example, Danah Boyd presents two questions for considering AIs: however AIs are used, what is the (remaining) role of humans in these systems? And how do we define “efficient” when we speak of AIs as efficiency tools?

Companies like OpenAI and Google are sensitive to charges that AIs inherit bias and discrimination present in their human creators or training data.  They have taken steps to prevent this, but their products remain controversial.  This might be an important conversation to have with students, based on various perspectives presented in sources.  However, one should be cautious about in-class or assigned experimentation with, for example, engineering prompts designed to provoke racist replies.

Another concern is that students must create accounts at OpenAI, Google, and Microsoft to experiment with these AIs.  Requiring students to provide these companies with personally identifiable information (PII), or just to create yet another set of credentials, may be problematic.  Students should be encouraged to consult and understand the terms of service, even if creating an account is optional.

But classroom experimentation with AIs might be beneficial, to determine how forms of knowledge and ways of thinking in your discipline interact with AIs.  Use a single account, and project it on the big screen in class.  Work together as a class to generate or modify prompts.  Students might see how AIs stumble with certain questions, or provide simulated but incorrect answers.  If an AI cannot perform the kinds of analyses, creativity, or other skills you hope students learn in the course and in the process of assignments, it is good for students to see that for themselves while you are present to answer questions.  In engineering prompts and discussing AI outcomes, you also have an opportunity to demonstrate ways of thinking, habits, practices, and procedures that are the substance of your course learning objectives.

Lastly, it is reasonable to warn students that, apart from the serious moral, ethical, and social implications of academic dishonesty, there is the practical problem of a data footprint. Even if students take steps to anonymize their use of AIs (throwaway email accounts, private browser sessions, and so on), they might generate a digital trail of evidence.  Under various scenarios, that evidence may be exposed or fall into the hands of extortionists, which could threaten their future careers and professional lives.  This problem already exists with essay mills and contract cheating, and another form of it might appear with AIs, depending on whether AI providers change hands, or equip themselves with sufficient cybersecurity resources.

Ask the AI

As a starting point, it can be helpful to ask the AIs directly what they cannot do that nonetheless represents typical learning outcomes of a discipline.  They might answer with good insights.

This past Spring (April 5th, 2023), ChatGPT had these suggestions for creating activities and assignments:

History

  1. Primary source analysis: Assign students to analyze and interpret primary sources, such as letters, diaries, or government documents. This task requires critical thinking skills and the ability to contextualize information, which LLM AI models cannot fully replicate.
  2. Historiographical essay: Have students write an essay that evaluates the historiography of a specific historical event or period. This assignment requires students to synthesize and critically analyze a variety of sources, which LLM AI models may not be able to fully replicate.
  3. Historical interpretation: Assign students to write a short essay that offers an original interpretation of a specific historical event or figure. This task requires creativity, critical thinking, and the ability to make connections between different sources and information, which LLM AI models may not be able to fully replicate.
  4. Group research project: Assign students to work in groups to research and present on a specific historical topic. This task requires collaboration, communication, and the ability to synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.
  5. Oral presentation: Ask students to give an oral presentation on a historical topic or event, and to answer questions from the class. This task requires public speaking skills, the ability to think on one’s feet, and the ability to explain complex ideas in a clear and concise manner, which LLM AI models may not be able to fully replicate.

Biology

  1. Laboratory experiments: Assign students to conduct experiments in a laboratory setting, where they can practice hands-on skills such as pipetting, culturing microorganisms, or using microscopes. LLM AI models cannot fully replicate the experience of conducting experiments in a physical laboratory environment.
  2. Data analysis and interpretation: Have students analyze and interpret data from a scientific study or experiment. This assignment requires critical thinking skills and the ability to draw conclusions from complex data, which LLM AI models may not be able to fully replicate.
  3. Literature review: Assign students to write a literature review that synthesizes and evaluates research on a specific biological topic. This task requires the ability to critically analyze and synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.
  4. Field study: Ask students to conduct a field study of a specific biological ecosystem, and to write a report that details their observations and conclusions. This task requires observation skills, the ability to identify different species, and the ability to make connections between different aspects of an ecosystem, which LLM AI models may not be able to fully replicate.
  5. Oral presentation: Have students give an oral presentation on a specific biological topic or research study, and to answer questions from the class. This task requires public speaking skills, the ability to explain complex ideas in a clear and concise manner, and the ability to think on one’s feet, which LLM AI models may not be able to fully replicate.

Management

  1. Case study analysis: Assign students to analyze and solve a real-world business case study. This task requires critical thinking skills, the ability to identify and evaluate different solutions, and the ability to make decisions based on incomplete information, which LLM AI models may not be able to fully replicate.
  2. Group project: Have students work in groups to develop a business plan or strategy for a hypothetical or real-world business. This task requires collaboration, communication, and the ability to synthesize information from multiple sources, which LLM AI models may not be able to fully replicate.
  3. Leadership reflection: Ask students to write a reflection paper on their own leadership style and how it has evolved throughout the course. This task requires self-awareness, the ability to identify and evaluate one’s own strengths and weaknesses, and the ability to reflect on personal experiences, which LLM AI models may not be able to fully replicate.
  4. Role-playing exercise: Assign students to role-play different scenarios related to management and leadership, such as handling a difficult employee or negotiating a contract. This task requires interpersonal skills, the ability to read and respond to social cues, and the ability to think on one’s feet, which LLM AI models may not be able to fully replicate.
  5. Oral presentation: Have students give an oral presentation on a specific business topic or strategy, and to answer questions from the class. This task requires public speaking skills, the ability to explain complex ideas in a clear and concise manner, and the ability to think on one’s feet, which LLM AI models may not be able to fully replicate.

To generate answers similar to these, here’s the prompt: What are some assignments for an undergraduate university [discipline] course that have students practice or demonstrate things LLM AIs cannot do for them?

Like many responses from LLM AIs, these suggestions are vague.  And an LLM AI (ChatGPT, for example) may attempt to simulate the very things it claims it cannot do well, if prompted by you or a student.  Therefore, the faculty member needs a strong command of any disciplinary knowledge involved in the assignment, in order to assess student work for accuracy or integrity.  But these suggestions can be a good starting line for thinking about assignments that are “AI proof.”
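For faculty comfortable with a little scripting, the prompt above can also be run programmatically across several disciplines at once. This is a minimal sketch, assuming the official `openai` Python package (v1 client), an `OPENAI_API_KEY` environment variable, and an illustrative model name; the chat interface works just as well for one-off experiments.

```python
# Minimal sketch: fill the "[discipline]" prompt template and (optionally)
# send it to the OpenAI API for several disciplines at once.
import os

PROMPT_TEMPLATE = (
    "What are some assignments for an undergraduate university {discipline} "
    "course that have students practice or demonstrate things LLM AIs "
    "cannot do for them?"
)

def build_prompt(discipline: str) -> str:
    """Fill the template with a discipline name, e.g. 'history'."""
    return PROMPT_TEMPLATE.format(discipline=discipline)

def ask(discipline: str, model: str = "gpt-4o-mini") -> str:
    """Send the filled prompt to the API and return the reply text.

    Requires the `openai` package and a valid OPENAI_API_KEY; the import
    is local so that build_prompt() works without either.
    """
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(discipline)}],
    )
    return response.choices[0].message.content

# Example usage (requires a valid API key):
#   for field in ["history", "biology", "management"]:
#       print(f"--- {field} ---")
#       print(ask(field))
```

Because the replies vary from run to run, collecting several of them per discipline makes the patterns (and the vagueness) easier to see side by side.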

Innovative Pedagogy

If you need to “AI-proof” your course, you have the opportunity to do something more.  Can you make your assignments more effective as opportunities for students to practice or demonstrate the skills embodied in your learning objectives and goals?  

Academic and professional disciplines across campus usually advertise one form or another of critical thinking and analysis skills as part of their learning outcomes.  This is also true of our general education programs, the Core Curriculum and All-College Honors Programs.  These skills tend to correlate with higher levels of Bloom’s Taxonomy.  They can also be especially challenging to assess on classroom exams, at least in something approaching a real-world scenario.

But perhaps we can develop authentic assessments that challenge students to complete tasks LLM AIs do poorly, or cannot do at all.  Many of these assignments would have been especially valuable even before AIs existed.

