PyPlat: A Flexible Platform Game Project

Sejong Yoon (The College of New Jersey)


PyPlat is a small, game-based AI project that requires students to design and implement game-playing agents in Python. It is a platform game, so agents must jump over pits and/or climb ladders to collect reward items while avoiding obstacles and adversarial agents. The project can be used as a small or medium-sized project in an introductory AI course for undergraduate students. There are several highly successful related prior works, e.g., PacMan (DeNero and Klein, 2010) and The Mario Project (Taylor, 2011). However, we aim for a smaller programming project that is easy to understand and flexible to modify, for both students and instructors.

The project has two design goals:

First, we hope the game domain is sophisticated enough to challenge advanced students, yet simple enough for beginners to understand the material, stay motivated, and complete the assignments. PyPlat provides a relatively small search space (each state grid is 12x20 and the screen is non-scrolling) without sacrificing the complexity and appeal of a full platform game.

To accomplish this, we cloned an open-source game from the 1980s called BONGBONG (T.K. Lee, 1989), which was itself a remake of the Japanese game PONPOKO (ポンポコ) (Sigma Enterprise, 1982). The game has all the basic mechanics of a platform game (move, climb, jump), resembling the masterpiece Donkey Kong with artistic simplicity. We wrote our game engine from scratch in Python 3, without referencing the source code of the original 1989 game. Our source code totals only about 1,000 lines (800 lines excluding the data file), so students without much programming experience can understand the entire system quickly.

Second, instructors can easily modify the artistic aspects of the game while maintaining the core mechanics, without investing excessive effort. The whole level design (platforms, obstacles, reward items, and the number, speed, and location of adversarial agents) can be done in a single Python script file in plain text format. Moreover, thanks to the simple game design, it is easy to add agents with different behaviors. This allows a broad range of assignment variants.
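To give a flavor of this style of level design, the following is a hypothetical sketch: the symbols, variable names, and enemy attributes are illustrative inventions, not PyPlat's actual file format, but they show how a 12x20 non-scrolling level can live entirely in plain text inside a Python script.

```python
# Hypothetical plain-text level layout: 12 rows x 20 columns, matching
# PyPlat's non-scrolling 12x20 state grid. Symbols are illustrative:
# '=' platform, 'H' ladder, 'b' bonus bag, '.' empty space.
LEVEL = [
    "....................",
    "....................",
    "..b..........b.....H",
    "===================H",
    "H..................H",
    "H....b.............H",
    "H======....=========",
    "H..................H",
    "H.........b........H",
    "====......==========",
    "....................",
    "====================",
]

# Adversarial agents could be listed alongside the grid; the keys here
# are likewise hypothetical.
ENEMIES = [
    {"kind": "roller", "row": 2, "col": 5, "speed": 1.5},
]

def validate(level, rows=12, cols=20):
    """Basic sanity check an instructor might run after editing a level."""
    return len(level) == rows and all(len(r) == cols for r in level)

assert validate(LEVEL)
```

Because a level is just a list of strings, editing it requires nothing beyond a text editor, which is what makes instructor customization cheap.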

For students, this assignment showcases how the different algorithms they have learned in their introductory AI course can be practically applied to solve a new problem, beyond toy examples from textbooks. In particular, students will see how various search algorithms can be used to solve complex problems, and can further extrapolate to real-world applications.

The assignment comes with two presets of game levels and artwork, as well as sample handouts with setup instructions and suggestions for customized assignments.

Meta Information

Summary PyPlat is a small, game-based AI project that requires students to design and implement game-playing agents in Python. It is a platform game, so agents must jump over pits and/or climb ladders to collect reward items while avoiding obstacles and adversarial agents. Students can either implement an instructor-chosen search/planning algorithm or develop their own algorithm and compete with peers. It is recommended as a team project.
Topics Search (any uninformed/informed algorithm can be utilized), Planning (e.g. MDP), Reinforcement Learning
Audience Undergraduate students who have passed CS2. The most successful students have also taken an algorithms course.
Difficulty Instructors can take a progressive approach to keep the difficulty of the assignment at a manageable level for students. In the initial adoption of the material, the assignment was given as the final project with three weeks to work on it.
Strengths
  • Accessibility. The overall source code is only about 1,000 lines and has few dependencies. It is also platform independent and written in modern Python 3. All of this helps students gain a global understanding of the project they are working on.
  • Flexibility/Scalability. With the minimal game core design, it is straightforward to change the artwork or the mechanics of agents in the game. Instructors can easily add another type of agent to make the environment more complex. Level design can be done in a plain text editor. This allows not only instructors but also students to easily debug their own AI algorithms. It is also possible to scale down the project and use it as an engaging way to cover search algorithms only.
  • Fun. Most students enjoy games, and many love to iterate on their attempts with visually pleasing results.
Weaknesses
  • Due to a design choice in the Python Arcade library, it can be tricky to implement advanced AI algorithms that require raw pixel buffer information.
  • It does not include an autograder with code inspection to assess implementation quality. Instead, instructors may consider using game-based performance measures, e.g., success/failure, completion time, number of lives left, total points, etc.
Dependencies Knowledge of search and planning algorithms covered in an introductory AI course.
  • Required: Python (3.6 and above), Python Arcade, PyGlet, PyGame (may be omitted if the map visualization feature is not needed)
  • Optional: NetworkX (if students choose to use it for their graph implementation), NumPy
Variants Several customizations of the assignment can be considered. For example, we suggest the following variants to introduce probabilistic models that deal with various uncertainties:
  • Currently, the map information reveals the details of each bonus bag to the player: by checking the grid cell value, one can tell whether a bag is worth 500 points or 1,000 points, or is in fact an adversarial agent. Instructors can easily change this so that all bags look identical and their contents are hidden from the player, or make the contents completely random. With such constraints added, how should agents plan their navigation?
  • What if the agent's view is restricted to at most N-hop neighboring cells, instead of the whole screen environment?
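Both variants amount to transforming what the agent is allowed to observe. The sketch below uses an illustrative grid encoding (not PyPlat's actual one), with hypothetical symbols 'b'/'B' for the two bag values, to show how little code each transformation needs.

```python
def hide_bag_values(grid):
    """Replace distinct bag symbols ('b' and 'B', say, for 500- and
    1,000-point bags) with a generic '?' so the agent cannot tell
    them apart."""
    return [row.replace('b', '?').replace('B', '?') for row in grid]

def observe(grid, agent, n):
    """Return a copy of the grid with every cell farther than n hops
    (Manhattan distance) from the agent masked out as unknown ('#')."""
    ar, ac = agent
    return [
        ''.join(ch if abs(r - ar) + abs(c - ac) <= n else '#'
                for c, ch in enumerate(row))
        for r, row in enumerate(grid)
    ]
```

Feeding agents the transformed grid instead of the true one turns a fully observable search problem into one requiring estimation or exploration, without touching the game engine itself.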
One can also pose small research projects. For example:
  • Given a (randomly generated) level with an arbitrary number of adversarial agents, how can we verify that the level has a solution path?
  • Can we come up with an automated prediction system that can compare the difficulties of two level designs?
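A first-cut answer to the solvability question, ignoring enemies and jump physics, is a simple flood fill: check that every bonus bag is reachable from the start cell. Grid symbols are illustrative ('=' blocked, 'b' bag); a full answer would need to model the platform mechanics.

```python
def solvable(grid, start):
    """Depth-first flood fill from the start cell; the level passes this
    necessary (but not sufficient) check if every 'b' cell is reachable."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    stack = [start]
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '=' and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    bags = {(r, c) for r, row in enumerate(grid)
            for c, ch in enumerate(row) if ch == 'b'}
    return bags <= seen
```

Extending this check to account for one-way drops, jump reach, and moving adversaries is exactly where the research question gets interesting.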

Assignment Materials

Any development environment for Python with the required packages installed is sufficient to start the project. Students can use their preferred Python interpreter and development environment. Our pilot course adopted Python as the language of the course, so links to the official Python tutorial were provided in the course syllabus. The whole project was developed using the PyCharm IDE.

Setup is also straightforward. Download the game engine and extract the files to a preferred location. Import the directory with the Python files as a new project in the PyCharm IDE or any other IDE of your choice. If you prefer the command line, you can simply run the main game script directly.

In manual mode (the default behavior when no AI algorithm is implemented), the basic controls are as follows:

To record or replay a game play, please see the comments inside the file.

Class Adoption Example

This assignment was given as the final project in the CSC 380 course offered in 2018. Students were asked to form teams of up to three members. A total of 20 students (11 juniors and 9 seniors) formed seven teams. Prerequisites for the course are CS1, CS2, discrete math, and Calc A (MAT 127 at TCNJ). Students were given three weeks to complete the assignment and had free choice of algorithm from among the methods covered in the course. The course used AIMA as its main textbook. At the beginning of the project, students were presented with an overview of the program source code structure, as well as a demo replay of the game (included) to help them understand the goal of the project. 50% of the student teams completed the project in full, and 85% submitted working agents. In the course feedback survey, many students stated that the final project was one of the most interesting parts of the course.

Rank | Method (Implemented or Attempted) | Levels Cleared | Total Points | Elapsed Time | Juniors/Seniors
1 | BFS + Heuristics | 10/10 | 121,600 | 1:33 | 0/2
2 | BFS + Heuristics | 10/10 | 110,200 | 2:07 | 3/0
3 | MDP | 10/10 | 96,300 | 5:11 | 3/0
4 | BFS + Heuristics | 10/10 | 96,400 | Not robust | 2/1
5 | MDP + Heuristics | 7/10 | 36,800 | - | 1/2
6 | Greedy Best-First Search + Heuristics | 4/10 | 16,700 | - | 2/1
7 | DFS + MDP | 1/10 | 1,400 | - | 0/3

Sample Rubric: Instructors may modify and adopt the following rubric, which was used in the course. In particular, each team member may be assessed based on their contribution to the project outcome, by explicitly listing all members' contributions in an appendix to the written report. Students should be reminded that a working solution is not enough: they must justify their proposed method and design choices, and explain or demonstrate why their solution works (or does not work!). In the sample rubric, the written report was used for those discussions.

Assessment Category Weight Details
Presentation (instructor assessment) 50 Assess clarity, rigor, implementation difficulty, completeness, and Q&A. Each item is worth 10 points:
  • Was the presentation given clearly?
  • Did the team consider various aspects of the problem, and justify their design choices convincingly?
  • Did the team propose a sufficiently sophisticated method while avoiding trivial heuristics?
  • Did the team complete their proposed implementation within the deadline?
  • Did the team address questions from the audience effectively?
Presentation (peer assessment) 50 Assess clarity, rigor, implementation difficulty, completeness, and Q&A. Each team's presentation is graded by the 6 other teams using the same rubric as above, and the scores are averaged.
Game-based Performance Measures 50 Total number of levels cleared, points earned, elapsed time.
Basic Implementation 50 Each item is worth 10 points:
  • Find a path to the target item, move along the path, and collect it
  • Avoid platform difficulties (jump over obstacles, pitfalls)
  • Avoid adversarial agents
  • Do the above three throughout a stage
  • Do the above over multiple stages
Written Report 100 The following breakdown was used:
  • [10 pts] Introduction
  • [20 pts] Prior methods related to the proposed method
  • [30 pts] Proposed method description
  • [20 pts] Implementation considerations; experiment design if needed
  • [10 pts] Results, analysis, conclusion
  • [10 pts] Writing style, clarity
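The game-based performance measures above (levels cleared, points, time, lives) can be combined into the 50-point category in many ways. Below is one illustrative weighting; the class names, weights, and normalization caps are assumptions for this sketch, not the scheme used in the course.

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    """Measures collected from one team's agent run (hypothetical names)."""
    levels_cleared: int
    total_points: int
    lives_left: int

def performance_score(r, total_levels=10, max_points=150000, max_lives=3):
    """Weighted average of normalized measures, scaled to the 50-point
    rubric category. The 0.5/0.4/0.1 weights are illustrative."""
    completion = r.levels_cleared / total_levels
    points = min(r.total_points / max_points, 1.0)
    lives = r.lives_left / max_lives
    return round(50 * (0.5 * completion + 0.4 * points + 0.1 * lives), 1)
```

Because all measures are normalized before weighting, instructors can retune the emphasis (e.g., rewarding speed over points) without changing the grading scale.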

Additional Resources