| Summary |
Students configure large language models (LLMs) to debate ethical and
responsible-AI issues (e.g., copyright, surveillance, bias). By
crafting advanced prompts that concisely support a for/against position
and having the LLM maintain its context, students not only learn
the challenges of configuring an LLM to produce specific output but
also better understand how and why controversial positions on AI topics
may be defended or justified. Their prompts are then used to have two
LLMs debate one another on a topic. In addition to students reflecting
on the results of their own experiments, students' prompts can be pitted
against one another to create an engaging competition between pairs or
groups of students. Students can then also act as judges of the LLM
output, which both expedites and enhances the debate for students.
This assignment teaches prompt engineering and context management, and
exposes students to many responsible-AI topics. |
| Topics |
Responsible AI, Prompt engineering, Context management |
| Audience |
Intermediate Machine Learning (Undergraduate or Early Graduate
level), or Advanced AI Applications/Literacy (Undergraduate) |
| Difficulty |
Moderate: Students need time to experiment with prompting and model context, and to understand the AI issue being debated.
| Strengths |
Highly engaging and creative; connects technical prompting with
social responsibility; encourages analytical skills and
experimentation; builds LLM literacy.
| Weaknesses |
Students may underestimate the difficulty of configuring an LLM to perform within a specific and narrow scope,
and of maintaining context over multiple exchanges.
| Dependencies |
Basic knowledge of Python, Jupyter notebooks, LLMs, and prompting.
Students can run code either locally on a laptop or remotely (e.g., Google Colab).
|
| Variants |
- Provide more/less starter code to decrease/increase difficulty of the assignment.
- Increase/decrease the number of debate rounds to increase/decrease difficulty of the assignment.
- Introduce formal guardrails
and test these by giving the LLM off-topic requests.
- Use debate topics related to the course subject (e.g., contrasting
algorithm/model choices in a Machine Learning course).
|
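The debate loop described in the Summary can be sketched as follows. This is a minimal sketch, not part of the assignment's starter code: `generate` is a placeholder for a real chat-completion call (e.g., an OpenAI- or Hugging Face-style API), and the `run_debate` helper and its role names are illustrative. The key idea shown is context management: each debater keeps its own message history, with the opponent's argument arriving as a "user" turn and its own reply stored as an "assistant" turn.

```python
def generate(messages):
    """Stand-in for an LLM call: returns a canned rebuttal so the
    control flow is runnable without a model or API key."""
    last = messages[-1]["content"]
    return f"Rebuttal to: {last[:40]}"

def run_debate(topic, system_for, system_against, rounds=2):
    # Each debater maintains a separate context, seeded with its own
    # system prompt (the student-engineered for/against position).
    ctx = {
        "for": [{"role": "system", "content": system_for}],
        "against": [{"role": "system", "content": system_against}],
    }
    transcript = []
    last_argument = f"Debate topic: {topic}. Present your opening argument."
    for _ in range(rounds):
        for side in ("for", "against"):
            # Opponent's latest argument enters this debater's context...
            ctx[side].append({"role": "user", "content": last_argument})
            reply = generate(ctx[side])
            # ...and the reply is kept so context persists across rounds.
            ctx[side].append({"role": "assistant", "content": reply})
            transcript.append((side, reply))
            last_argument = reply
    return transcript

if __name__ == "__main__":
    for side, text in run_debate(
        "AI-generated art and copyright",
        system_for="Argue FOR strong copyright protection of training data.",
        system_against="Argue AGAINST restricting training on public data.",
    ):
        print(f"[{side}] {text}")
```

Swapping `generate` for a real chat-completion call (and passing the full `ctx[side]` list as the `messages` payload) turns this into a working debate between two configured models; student judging then operates on `transcript`.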