|Summary||This assignment is a final project for an introductory AI course. Students work in small groups (1-3 students per group) to design a scalable guessing algorithm for the game of Mastermind (a game where players must guess a "code" of colored pegs, with limited feedback provided after each guess). In this version of the project, there are three "challenges": a fixed-size problem (4 pegs and 6 colors), a "scalability challenge" (a series of tests on problems with increasing numbers of pegs and colors), and a "learning challenge" (guessing codes generated by a "biased generator," where the students are given training data that they can use to infer the generator's bias).|
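The feedback mechanism mentioned above is the crux of the game. Assuming the standard Mastermind scoring (a "black" peg for each guess peg of the right color in the right position, and a "white" peg for each additional peg of the right color in the wrong position), the scoring function can be sketched in Python; the names here are illustrative and not part of the provided infrastructure:

```python
from collections import Counter

def score_guess(code, guess):
    """Return (black, white) pegs for a Mastermind guess.

    black = pegs of the right color in the right position;
    white = pegs of the right color but in the wrong position.
    """
    black = sum(c == g for c, g in zip(code, guess))
    # Count color overlaps regardless of position, then subtract exact matches.
    overlap = sum((Counter(code) & Counter(guess)).values())
    return black, overlap - black
```

For example, against the code RRGB, the guess RGBY scores one black peg (the leading R) and two white pegs (the G and the B, each in the wrong position).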
|Topics||This project is designed to permit the students to incorporate one or more techniques that they have learned during a one-semester introductory course: heuristic search, probabilistic reasoning, constraint satisfaction, and machine learning. Although Mastermind is a game, it is also reflective of many problems in the real world involving constraint satisfaction and partial knowledge. Real-world applications of the types of techniques that are useful for solving the Mastermind challenge include logistics planning and scheduling with partial knowledge, preference elicitation in domains such as travel planning and student academic scheduling, and design optimization and testing.|
|Audience||The project is designed for upper-level undergraduates (or first-year graduate students) in an introductory course on artificial intelligence.|
|Difficulty||The project proved to be moderately difficult. As a course project, it is intended to take the students several weeks to complete. They had approximately five weeks from the time the assignment was handed out until the first "dry run," when they were expected to have a working system. The second dry run was held two weeks later, with the class tournament one week after the second dry run, and the final report due a week after the tournament.|
The project is very engaging for students, because it is so simple to
understand, yet very challenging when it comes to designing an optimal
(or even practicable) solution. The potential solutions also touch on
many areas of AI, so the students can be creative in applying and
synthesizing what they've learned to a new problem.
The three "challenges" give the students the opportunity to choose which aspects of the problem they most want to focus on. The "tournament" nature of the assignment makes students very motivated to design a good algorithm. The assignment is also "scalable": it could be assigned as an individual project, or as a group project with 2 or 3 students working together.
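As one illustration of how the training data can be used in the "learning challenge," a team might estimate per-position color frequencies from the sample codes and rank candidate guesses by likelihood. The sketch below is a hypothetical Python approach (the provided infrastructure is in CLISP) using simple Laplace-smoothed counts:

```python
from collections import Counter
from math import prod

def learn_position_bias(training_codes, colors):
    """Estimate per-position color probabilities from sample codes
    drawn from the biased generator (Laplace-smoothed counts)."""
    n = len(training_codes[0])
    probs = []
    for i in range(n):
        counts = Counter(code[i] for code in training_codes)
        total = len(training_codes) + len(colors)  # +1 smoothing per color
        probs.append({c: (counts[c] + 1) / total for c in colors})
    return probs

def code_likelihood(code, probs):
    """Probability of a candidate code under the learned per-position model."""
    return prod(probs[i][c] for i, c in enumerate(code))
```

A guesser could then try candidate codes in decreasing order of likelihood, which on average reduces the number of guesses against a sufficiently biased generator.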
|Weaknesses||The definition of "scalability" can be somewhat ambiguous (should the algorithms minimize CPU time, or the number of guesses?). The first year that we assigned this project, we ran the tournament as a pairwise elimination tournament, with a time limit on successively increasing problem sizes. The second year, we held an in-class "free-for-all" where students ran their own tests and put their names on a leaderboard as they solved larger and larger problems. Other models could also work; we recommend engaging the students in a conversation, early in the project planning, about the type of tournament they would like to have, and fixing the rules for the tournament in advance, so that the students know what their goal is. (Ideally, the tournament should count for only a relatively small fraction of the grade, to incentivize students to try interesting solutions that might not pan out from a pure performance perspective.)|
Students must have a fairly good grasp of heuristic search,
constraint satisfaction, machine learning techniques, and the
programming language chosen.
In the first year of this project assignment, we developed a shared infrastructure in CLISP, and required all students to implement their code in CLISP using the provided API. (This infrastructure code is provided, and can be installed on Mac, Linux, or Windows environments.)
In the second year, students were not required to use the CLISP infrastructure; instead, they could implement their solutions in any programming language, and the in-class tournament was run with students using their own laptops and self-reporting their results.
Both of these tournament models were successful, although the shared infrastructure in the first year made it easy to complete a dry run of the tournament "offline" and to have challenges that were scored outside of class time.
An alternative that we did not explore, but would be feasible with a fairly small amount of development effort, would be to provide a server-based tournament API that uses sockets to permit individual client players to connect to the server. In this environment, students could use any language to implement their player agents.
It would be interesting to add other constraints to the Mastermind
problem (e.g., limited availability of particular colors), to
provide other kinds of partial information as feedback, or to
permit other kinds of queries (e.g., guessing partial codes or
querying about specific colors and locations).
To administer the project in a non-shared computing environment, rather than running head-to-head competitions, we created different categories of performance and kept a leaderboard for each category. Students are required to submit entries for some number of the categories and can apportion their CPU time during the class period when the competition is held however they wish, as long as they post entries in the required categories. For example, one category is scaling the number of pegs (with 6 colors). To take the top spot in that category, a team must either solve a problem with at least one more peg than the current leader or solve the same number of pegs as the current leader in fewer guesses. (Actually, the mean over 10 runs is used.) Other performance categories include scaling the number of colors (with 4 pegs), scaling both colors and pegs (taking the lead by beating the current leader by one peg or one color, or by taking fewer guesses for the same number of colors and pegs), and learning as described above. This approach keeps all teams actively engaged during the entire competition, fighting to take or retake the top spot in one or more categories.
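The leaderboard rule for the peg-scaling category can be made precise as a simple comparison. The tuple representation below is our illustration, not part of the provided code:

```python
def beats(challenger, leader):
    """Leaderboard rule for the peg-scaling category.

    Each entry is assumed to be (pegs_solved, mean_guesses), where
    mean_guesses is averaged over 10 runs. A challenger takes the top
    spot by solving more pegs, or by solving the same number of pegs
    in fewer mean guesses.
    """
    c_pegs, c_guesses = challenger
    l_pegs, l_guesses = leader
    return c_pegs > l_pegs or (c_pegs == l_pegs and c_guesses < l_guesses)
```

The color-scaling and combined categories follow the same pattern, with the comparison extended to the number of colors.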
Mastermind Project Description.
mm.lisp: class definitions, tournament functions, and utility functions.
mm-solver.lisp: code to define a simple generate-and-test guesser for Mastermind (meant for students to use as an exemplar for implementation, and as a baseline for testing).
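The provided exemplar is in CLISP, but the generate-and-test idea it embodies can be sketched in Python: enumerate every possible code and always guess the first candidate that is consistent with all feedback received so far. This is a hypothetical re-implementation for illustration, not the distributed mm-solver.lisp:

```python
from itertools import product
from collections import Counter

def feedback(code, guess):
    """Standard (black, white) Mastermind feedback."""
    black = sum(c == g for c, g in zip(code, guess))
    white = sum((Counter(code) & Counter(guess)).values()) - black
    return black, white

def generate_and_test(secret, pegs, colors):
    """Baseline guesser: enumerate all codes and repeatedly guess the
    first candidate still consistent with every past (guess, feedback)
    pair. Returns the sequence of guesses made."""
    candidates = [''.join(p) for p in product(colors, repeat=pegs)]
    history = []
    while True:
        guess = next(c for c in candidates
                     if all(feedback(c, g) == fb for g, fb in history))
        fb = feedback(secret, guess)
        history.append((guess, fb))
        if fb == (pegs, 0):
            return [g for g, _ in history]
```

This baseline is easy to beat: it does nothing clever about which consistent candidate to try next, and the candidate enumeration grows exponentially with the number of pegs, which is exactly what makes the scalability challenge interesting.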