Craft Planning with Iterative Widening

In this assignment, you will be making a simple satisficing Minecraft crafting planner. Given an initial state, you will want to find a series of actions that gets you to a desired goal state. You will first try an implementation of generic graph search (Dijkstra’s algorithm) to discover whether it is suitable for this task, and then implement a specialized planning algorithm called Iterative Widening.

In this assignment you will:

The goal of this assignment is for you to understand:

Let’s get to work in crafting.py. The automatic tests for this module are in test_crafting.py. You can run them with python test_crafting.py. Feel free to modify this file as you like to add tests, try out new examples, and so on.

First we need to load crafting.json, a json file that has a subset of the Minecraft recipes in it (with some abstractions to account for travel time to different raw materials). We’ll also throw in some other imports we’ll need later.

import json
from typing import NamedTuple, Dict, Tuple, Optional, Sequence, List, Set, FrozenSet
import array
import heapq
import time
import itertools

with open('crafting.json') as f:
    Crafting = json.load(f)

You may want to try printing out some different information from Crafting to get a feel for its schema:

# Example

# List of items that can be in your inventory:
print(Crafting['Items'])
# example: ['bench', 'cart', ..., 'wood', 'wooden_axe', 'wooden_pickaxe']

# List of items needed to be in your inventory at the end of the plan:
# (okay to have more than this; some might be satisfied by initial inventory)
print(Crafting['Goal'])
# {'stone_pickaxe': 2}

# Dict of crafting recipes (each is a dict):
print(Crafting['Recipes']['craft stone_pickaxe at bench'])
# { 'Produces': {'stone_pickaxe': 1},
# 'Requires': {'bench': True},
# 'Consumes': {'cobble': 3, 'stick': 2},
# 'Time': 1
# }

Let’s reflect a little bit in crafting.txt.

  1. Do the recipe dictionaries in crafting.json all have the same set of keys? Print out a few to find out, or read the JSON file.
  2. What is similar about the Initial, Goal, Produces, Consumes, and Requires schema? What is different about them?
  3. Thinking back to your data structures class, what operations are involved in looking up a key in a dictionary data structure, and how does that compare to obtaining values from an array?
  4. Look this up if you need to: What is the difference between a Python List of integers and a Python array.array of integers?

Let’s proceed by using the same data representation for an inventory state (as in Initial), a partial inventory state (as in Goal, Produces, and Consumes), and the inventory query posed by Requires. Because the speed of the inner loop is of supreme importance in graph search, we’ll use an array-of-unsigned-integers representation for each of these (this may feel like premature optimization; email me if you want extra credit for comparing it against a simple dict-based representation). To go back and forth between item names and item indices, let’s define a couple of global variables:

items_by_index: List[str] = Crafting['Items']
items_to_indices: Dict[str, int] = {
    item: index for index, item in enumerate(items_by_index)
}
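
As a quick sanity check of these mappings, you can round-trip between an item name and its index (this assumes 'wood' appears in Crafting['Items'], as in the example above):

# Example
wood_index = items_to_indices['wood']
assert items_by_index[wood_index] == 'wood'
print("wood is item number", wood_index)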

We’ll wrap our array-of-items data structure in a convenience class called State. Because we’ll need to put States into closed sets, priority queues, and so on, we’ll need them to be hashable, equatable, and comparable. We will also need to add and subtract inventory states from each other later on, so we’ll put that in there too. A skeleton of this class is provided below, with some of the data handling code given.

class State:
    items: array.array

    def __init__(self, items: Optional[Sequence[int]] = None) -> None:
        if items is not None:
            # Copying a state from an old state.
            # This call to the array constructor creates an array of unsigned integers and initializes it from the contents of items.
            self.items = array.array('I', items)
        else:
            self.items = array.array('I', [0 for item in items_by_index])

    def __add__(self, other:'State') -> 'State':
        s = State(self.items)
        # A. How do we add together the contents of two states?
        return s

    def __sub__(self, other:'State') -> 'State':
        s = State(self.items)
        # B. How do we subtract one state from another?
        return s

    def __ge__(self, other: 'State') -> bool:
        # C. How do we know whether one state (self) contains everything that's inside of another (other)?
        pass

    def __lt__(self, other: 'State') -> bool:
        return not (self >= other)

    def __eq__(self, other) -> bool:
        return self.items == other.items

    def __hash__(self) -> int:
        hsh = 5381
        for s in self.items:
            hsh = ((hsh << 5) + hsh) + s
        return hsh

    def __str__(self) -> str:
        return self.to_dict().__str__()

    def to_dict(self) -> Dict[str, int]:
        return {items_by_index[idx]: self.items[idx]
                for idx in range(len(self.items))}

    @classmethod
    def from_dict(cls, item_dict: Dict[str, int]) -> 'State':
        return cls([
            item_dict.get(item, 0) for item in items_by_index
        ])

At this point (once part C is filled in) we can already solve trivial problems:

# Example
initial = State.from_dict({'stone_pickaxe': 1, 'ingot': 2})
goal = State.from_dict({'ingot': 1})
assert initial >= goal
print("It worked!")

Now that we have our state representation, we can rephrase the recipes in terms of what they need from the state. Python has a useful data structure, namedtuple, that we can use for this purpose, so we’ll define a namedtuple type called Recipe.

class Recipe(NamedTuple):
    produces: State
    consumes: State
    requires: State
    cost: int

It acts like a tuple in that its data are laid out contiguously in memory and it is immutable, but it has convenient accessors. Let’s initialize a dictionary mapping names to recipes:

recipes: Dict[str, Recipe] = {}
for name, rule in Crafting['Recipes'].items():
    recipes[name] = Recipe(
        State.from_dict(rule.get('Produces', {})),
        State.from_dict(rule.get('Consumes', {})),
        State.from_dict({item: 1 if req else 0
                         for item, req in rule.get('Requires', {}).items()}),
        rule['Time']
    )
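
To check that the conversion worked, you might print one of the translated recipes; the recipe name below is the one from the earlier example:

# Example
pickaxe = recipes['craft stone_pickaxe at bench']
print(pickaxe.cost)      # 1
print(pickaxe.consumes)  # cobble: 3, stick: 2, zeros for everything else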

Now we have our state representation and our action representation for the crafting domain. Let’s reflect.

  1. What was the state representation in the path planning assignment?
  2. What was the action representation?
  3. How many possible actions are there in the whole domain, and how many of those are possible in a given state?

In fact, we can consider any planning problem in terms of states and a transition relation between states and those actions which are valid in that state. If we’re thinking about path planning as search on the graph of possible locations (with edges given by a connectedness relation), task planning can be seen as search on the graph of possible states (with edges given by the state transition relation). Let’s implement the transition relation now:

def preconditions_satisfied(state: State, recipe: Recipe) -> bool:
    # D. What needs to be true about state and recipe?
    # Feel free to use State's >= method
    return False

def apply_effects(state: State, recipe: Recipe) -> State:
    # E. How do you make a new state out of a state and a recipe?
    # Note, DO NOT change state in-place!
    return None


Let’s try it.
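
For instance, once parts D and E are filled in, you might spot-check a single recipe against a hand-built state (a sketch; the recipe name is from the earlier example):

# Example (assumes parts D and E are implemented)
state = State.from_dict({'bench': 1, 'cobble': 3, 'stick': 2})
pickaxe = recipes['craft stone_pickaxe at bench']
if preconditions_satisfied(state, pickaxe):
    print(apply_effects(state, pickaxe))  # should now include one stone_pickaxe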

1 Planning via Graph Search

  1. Consider your implementation of Dijkstra’s algorithm. What would need to change so it works on states-and-actions instead of locations-and-directions?
def plan_dijkstra(initial: State, goal: State, limit:int) -> Tuple[int, int, Optional[List[str]]]:
    start_time = time.time()
    # Implement Dijkstra's algorithm here!  When you find a solution, print out the number of nodes visited and the time it took to get there.  If you don't find a solution, print out the number of nodes visited and the time it took to fail.
    # Feel free to use or modify the solution printing routine from the last exercise.
    # Return a tuple of (nodes_visited, -1, None) if no path exists, or else a tuple of (nodes_visited, cost, path) where path is a list of recipe names.
    # You should also use limit to avoid visiting too many nodes before returning _something_.
    # Finally, you can check whether a State _satisfies_ a goal by checking `state >= goal`
    pass

To try it out:

# Example
print(plan_dijkstra(State.from_dict({}),
                    State.from_dict({'stone_pickaxe':1}),
                    200000))
print(plan_dijkstra(State.from_dict({'bench':1,'stone_pickaxe':1}),
                    State.from_dict({'ingot':1}),
                    200000))
  1. Imagine applying A* here. What heuristic would you want to use? Is that heuristic admissible? Is that a problem?
  2. What’s the largest planning problem (initial and goal state) you can think up which your Dijkstra’s implementation can solve optimally within 30 seconds? How many nodes does it visit and how long does it take in wall-clock time?

2 Planning with Iterative Widening

Let’s compare against a dedicated planning algorithm, rather than applying graph search naively. Planning domains have some significant differences from general graph search problems—let’s reflect on what they might be.

  1. In graph search, what is the goal of a search? How is that different from the goal of a planning problem?
  2. In graph search, what are the preconditions for traversing an edge? How does this differ in a planning problem?
  3. In graph search, detecting cycles is relatively cheap. Is that the case for planning problems?
  4. Is there more than one type of “cycle” in our crafting planning problem?

Think about a recipe like making a stone pickaxe. Every possible planning state either satisfies its preconditions or doesn’t. If this recipe were the only action, we could formulate the problem as a domain with just three abstract states—one without a pickaxe and without the needed supplies, one without a pickaxe and with the needed supplies, and one with a pickaxe (and it doesn’t matter what else is in the state).

  1. If we had a domain with just two recipes (punch for wood and wood planks), what would be the abstract states in the sense used above?

We can automate the intuition of the previous question by transforming our states into combinations of propositions. A proposition here is a logical fact entailed by the state; for example “I have at least one piece of wood,” “I have at least two pieces of wood,” “I have at least one plank,” and so on. Note that if we have two pieces of wood then we necessarily have one piece of wood as well! Iterative Widening is a planning algorithm which works by abstracting away differences between states and discarding states which are too similar to states already seen in this iteration.

In Iterative Widening we start with the minimal width, 1, and as we traverse the state space we prune away any state that does not contribute a proposition we have never encountered before. For Minecraft, this means that at width 1 we can only craft things that don’t require multiple items, limiting ourselves to wood, sticks, planks, and a bench. If our goal includes an item beyond those, we will need to widen to 2. At width 2 we can craft things that require multiple items at once, but for the most part we will always need a bench, some sticks, and some other material, so most plans will require a width of 3. At each width, if we can’t find any new, never-before-seen combination of propositions, we fail and move on to the next width.

Two states are similar if they share some number of propositions in common: if the width measure is one, then once we have seen a state with at least one stick, we subsequently ignore every other state we might find later with one or more sticks (we relax this a little to say “any sufficiently different state is worth exploring”: if a state has at least one combination of propositions, of size up to the width bound, that is unique with respect to all previously seen combinations, we will consider it). To regain completeness, that is, to always find a solution if one exists, the size of the combinations of propositions considered in this similarity computation is gradually increased until a solution is found.

Returning to the problem of creating a stone pickaxe, any state which has a bench, three or more cobble, and two or more sticks is interchangeable for this recipe. Applying the recipe also produces a proposition stating that we have a stone pickaxe. We need to know the full set of possibly interesting propositions in the world (for the purposes of applying recipes), so we want to accumulate both the propositions involved in satisfying a recipe and those describing its outputs. In order to know what all the combinations of propositions might be during planning, we need to collect all the individual propositions that might come up. We also need to be able to convert initial and goal states into sets of propositions, so we’ll have a function to do that once we start our search algorithm:

class Proposition(NamedTuple):
    item: int
    at_least: int

def state_propositions(state: State) -> Set[Proposition]:
    propositions: Set[Proposition] = set()
    # F. Do something for each item in state.  Output all propositions entailed by the state's contents
    return propositions

# Now let's get the propositions from the recipes

def recipe_to_propositions(recipe: Recipe) -> Set[Proposition]:
    propositions: Set[Proposition] = set()
    # G. Do something with recipe.consumes, recipe.produces, and recipe.requires.
    # Emit, for this recipe, all the propositions entailed by the preconditions and the _minimal_ set of propositions embodied in the postconditions (i.e., don't need to output wood >= 2, wood >= 1, wood >= 0 if the recipe creates 2 wood.)
    return propositions

recipe_propositions: Set[Proposition] = set()
for r in recipes.values():
    recipe_propositions |= recipe_to_propositions(r)
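
To make the proposition idea concrete, here is the sort of check you should be able to run once part F is implemented (a sketch; it assumes 'wood' is an item, as in the earlier examples):

# Example (assumes part F is implemented)
wood = items_to_indices['wood']
props = state_propositions(State.from_dict({'wood': 2}))
# Having two wood entails having at least one wood and at least two wood:
assert Proposition(item=wood, at_least=1) in props
assert Proposition(item=wood, at_least=2) in props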

We can capture the notion of “ignoring states that are not different enough” by using the idea of a closed set from the cycle prevention techniques in graph search. Instead of checking whether the newly expanded state is present in a set of seen states, we check whether it exhibits any combination of at most \(W\) propositions that we haven’t already encountered. Given the set of propositions that are important in our state, we want to build a list of all the individual propositions and all combinations of up to \(W\) of them. When considering a newly expanded state \(s\), we find all of the previously unseen combinations of propositions that are true in \(s\) and take the size of the smallest such combination as its width; if that width exceeds \(W\) (or no new combination exists at all), we give up on \(s\).

For example:

  • If \(s\) was the first state we’ve seen with bench>=1 it would have width 1; we use the closed set to determine whether a given combination has been seen before.
  • If \(s\) was the first state with bench>=1 and wooden_axe>=1 but no new atomic propositions, it would have width 2
  • If \(s\) has no unique combinations up to size \(W\), we say it has infinite width (which we can write as just W+1, since we ignore states wider than \(W\)).
  • If the width of \(s\) is greater than \(W\), we do not add it to the open queue.

Provided is a snippet that will check whether a state satisfies a set representing a combination of propositions. It will be useful in determining whether a state is novel.

# Example, assuming propositions is a Set[Proposition]
state_props:Set[Proposition] = state_propositions(state)
if state_props.issuperset(propositions):
    pass
    # The state has this combination!
else:
    pass
    # The state does not!

Now you are equipped to implement iterative widening search. For each instance of the search, you will want to keep track of all the witnessed combinations of propositions; this can be a set of sets (well, a set of FrozenSets, a Python type for an immutable set). To update this set, you will implement a function see_state(s:State, combinations:List[FrozenSet[Proposition]], seen_combinations:Set[FrozenSet[Proposition]]) -> bool which will take in a state, a list of combinations (sets of Propositions), and the seen set and output whether any new combinations were witnessed in this state. Note that one state might lead to the discovery of several new combinations.

def see_state(state: State, combinations: List[FrozenSet[Proposition]], seen_combinations: Set[FrozenSet[Proposition]]) -> bool:
    any_new = False
    state_props = state_propositions(state)
    for combo in combinations:
        # H. Is this combination already in seen_combinations?
        # I. If not, it's novel; so is this combination a subset of the state_props?
        pass
    return any_new

The outer loop of iterative widening gradually increases the bound \(W\) up to a given \(\mathit{WMax}\). The inner loop has the same skeleton as a standard graph search, with the exception that non-novel states are immediately thrown away. For now, implement iterative widening’s inner loop as a simple graph search; breadth-first search is fine, though the skeleton below sets up a cost-ordered open list as in Dijkstra’s.

Your search should return the sequence of actions required to reach a goal condition from an initial condition, along with the cost of that plan. You also may want to print output describing how many nodes are visited and how much time has been taken for each value of \(W\).

def plan_width(initial: State, goal: State, WMax: int) -> Tuple[int, int, Optional[List[str]]]:
    start_time = time.time()
    all_propositions = recipe_propositions | state_propositions(initial) | state_propositions(goal)
    all_combinations: List[FrozenSet[Proposition]] = []
    # Increase W up to WMax
    for W in range(1, WMax + 1):
        visited = 0
        # Calculate all combinations of propositions at size W and add to all_combinations
        all_combinations += [frozenset(props) for props in itertools.combinations(all_propositions, W)]
        # Sanity check that this is 6279 for W=3, for example
        print("W=",W,"Combination count=",len(all_combinations))
        # Track which combinations of propositions have been witnessed so far at this width
        seen_combinations: Set[FrozenSet[Proposition]] = set()
        # Initialize seen_combinations
        see_state(initial, all_combinations, seen_combinations)
        open_list: List[Tuple[int, State]] = [(0, initial)]
        best_costs: Dict[State, int] = {initial: 0}
        best_from: Dict[State, List[str]] = {initial: []}
        while open_list:
            cost, state = heapq.heappop(open_list)
            visited += 1
            # J. This should look like your graph search (Dijkstra's is a nice choice), except...
            # Call see_state on newly expanded states to update seen_combinations and use its return value to decide whether to add this state to the open list (is that the only thing that determines whether it should go on the open list?)
    return visited, -1, None

Try it out like so:

# Example
# Try harder ones once you have these down

print(plan_width(State.from_dict({}), State.from_dict({'bench': 1}), 4))
print(plan_width(State.from_dict({'wood': 1}), State.from_dict({'iron_pickaxe': 1}), 4))
print(plan_width(State.from_dict({}), State.from_dict({'rail': 1}), 4))
print(plan_width(State.from_dict({}), State.from_dict({'cart': 1}), 4))
  1. What’s the largest planning problem (initial and goal state) you can think up which your algorithm can solve within 30 seconds? How many nodes does it visit at its deepest \(W\) level, how high does \(\mathit{WMax}\) have to be, and how long does it take in seconds?
  2. How does increasing or decreasing the value of \(\mathit{WMax}\) change the time to find a solution, or: what sorts of craft planning situations benefit from increasing \(\mathit{WMax}\)?
  3. Is iterative widening guaranteed to give optimal solutions for this problem? In other words, does there exist a crafting problem where a width bound of \(w\) gives a suboptimal solution while \(w+1\) gives a better one? What does this mean for implementing your algorithm—where can you take shortcuts to get better performance if you already lost optimality, or can you not take such shortcuts?
  4. Can you think of a way to apply iterative widening to the path planning problem? Do you think it would perform better than A* there or worse? Why?

Submit your Python files and reflections, and take a well-deserved rest!
