In this project, you will implement inference algorithms for Bayes Nets, specifically variable elimination and value-of-perfect-information computations. These inference algorithms will allow you to reason about the existence of invisible pellets and ghosts.
This project includes an autograder for you to grade your answers on your machine. This can be run on all questions with the command:
python autograder.py
It can be run for one particular question, such as q2, by:
python autograder.py -q q2
It can be run for one particular test by commands of the form:
python autograder.py -t test_cases/q2/1-simple-eliminate
See the autograder tutorial in Project 0 for more information about using the autograder.
The code for this project contains the following files, available as a zip archive.
Files to Edit and Submit: You will fill in portions of factorOperations.py, inference.py, and bayesAgents.py during the assignment. You should submit these files with your code and comments. Please do not change the other files in this distribution or submit any of our original files other than these files.
Evaluation: Your code will be autograded for technical correctness. Please do not change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder. However, the correctness of your implementation – not the autograder’s judgements – will be the final judge of your score. If necessary, we will review and grade assignments individually to ensure that you receive due credit for your work.
Academic Dishonesty: We will be checking your code against other submissions in the class for logical redundancy. If you copy someone else’s code and submit it with minor changes, we will know. These cheat detectors are quite hard to fool, so please don’t try. We trust you all to submit your own work only; please don’t let us down. If you do, we will pursue the strongest consequences available to us.
Getting Help: You are not alone! If you find yourself stuck on something, contact the course staff for help. Office hours, section, and the discussion forum are there for your support; please use them. If you can’t make our office hours, let us know and we will schedule more. We want these projects to be rewarding and instructional, not frustrating and demoralizing. But, we don’t know when or how to help unless you ask.
Discussion: Please be careful not to post spoilers.
Implement the constructBayesNet function in bayesAgents.py. It constructs an empty Bayes net with the structure described below. (We’ll specify the actual factors in the next question.)
The treasure hunting world is generated according to the following Bayes net:
Don’t worry if this looks complicated! We’ll take it step by step. As described in the code for constructBayesNet, we build the empty structure by listing all of the variables, their values, and the edges between them. This figure shows the variables and the edges, but what about their values?
- X positions determines which house goes on which side of the board. It takes one of two values: food-left or ghost-left.
- Y positions determines how the houses are vertically oriented. It models the vertical positions of both houses simultaneously, and takes one of four values: both-top, both-bottom, left-top, or left-bottom. “left-top” is as the name suggests: the house on the left side of the board is on top, and the house on the right side of the board is on the bottom.
- Food house and ghost house specify the actual positions of the two houses. They are both deterministic functions of “X positions” and “Y positions”.
- The observations are measurements that Pacman makes while traveling around the board. Note that there are many of these nodes—one for every board position that might be the wall of a house. If there is no house in a given location, the corresponding observation is none; otherwise it is either red or blue, with the precise distribution of colors depending on the kind of house.
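To make “listing all of the variables, their values, and the edges” concrete, here is a plain-data sketch of the structure described above. The variable names and the two sample observation positions are illustrative only; the real constructBayesNet uses the project’s BayesNet construction API and the constants defined in bayesAgents.py.

```python
# Illustrative sketch of the net's structure as plain Python data.
# Variable names and observation positions below are hypothetical.
X_POS = "xPos"
Y_POS = "yPos"
FOOD_HOUSE = "foodHouse"
GHOST_HOUSE = "ghostHouse"

# One observation variable per possible wall position (two shown here).
obs_vars = ["obs(%d,%d)" % (x, y) for (x, y) in [(1, 1), (1, 2)]]

variable_domains = {
    X_POS: ["food-left", "ghost-left"],
    Y_POS: ["both-top", "both-bottom", "left-top", "left-bottom"],
    FOOD_HOUSE: ["top-left", "top-right", "bottom-left", "bottom-right"],
    GHOST_HOUSE: ["top-left", "top-right", "bottom-left", "bottom-right"],
}
for o in obs_vars:
    variable_domains[o] = ["red", "blue", "none"]

# Edges: both house variables depend on both position variables,
# and every observation depends on both house variables.
edges = [(X_POS, FOOD_HOUSE), (Y_POS, FOOD_HOUSE),
         (X_POS, GHOST_HOUSE), (Y_POS, GHOST_HOUSE)]
for o in obs_vars:
    edges += [(FOOD_HOUSE, o), (GHOST_HOUSE, o)]
```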
Implement the fillYCPT and fillObsCPT functions in bayesAgents.py. These take the Bayes net you constructed in the previous problem, and specify the factors governing the Y position and observation variables. (We’ve already filled in the X position and house factors for you.)
For an example of how to construct factors, look at the implementation of the factor for X positions in fillXCPT.
The Y positions are given by values BOTH_TOP, BOTH_BOTTOM, LEFT_TOP and LEFT_BOTTOM. These variables, and their associated probabilities, are provided by constants at the top of the file.
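As a sketch of what those four entries amount to, here is a plain-dict version of the Y-position distribution. The probability values below are made up; the real values, and the Factor API for setting entries, are in bayesAgents.py.

```python
# Hypothetical values standing in for the constants at the top of
# bayesAgents.py; only the shape of the table is the point here.
PROB_BOTH_TOP = 0.25
PROB_BOTH_BOTTOM = 0.25
PROB_ONLY_LEFT_TOP = 0.25
PROB_ONLY_LEFT_BOTTOM = 0.25

y_cpt = {
    "both-top": PROB_BOTH_TOP,
    "both-bottom": PROB_BOTH_BOTTOM,
    "left-top": PROB_ONLY_LEFT_TOP,
    "left-bottom": PROB_ONLY_LEFT_BOTTOM,
}

# A CPT with no parents must be a proper probability distribution.
assert abs(sum(y_cpt.values()) - 1.0) < 1e-9
```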
If you’re interested, you can look at the computation for house positions. All you need to remember is that each house can be in one of four positions: top-left, top-right, bottom-left, or bottom-right.
Observations are more interesting. Every possible observation position is adjacent to a possible center for a house. Pacman might observe that position to contain a red wall, a blue wall, or no wall. These outcomes occur with the following probabilities (again defined in terms of constants at the top of the file):
- If the adjacent house center is occupied by neither the ghost house nor the food house, an observation is none with certainty (probability 1).
- If the adjacent house center is occupied by the ghost house, it is red with probability PROB_GHOST_RED and blue otherwise.
- If the adjacent house center is occupied by the food house, it is red with probability PROB_FOOD_RED and blue otherwise.
Important Note: the structure of the Bayes Net allows the food house and ghost house to be assigned to the same position. This will never occur in practice, but the observation CPT still needs to specify a proper distribution for every possible assignment to its parent variables. In that case, you should use the food house distribution.
There are only four entries in the Y position factor, so you can specify each of those by hand. You’ll have to be cleverer for the observation variables. You’ll find it easiest to first loop over possible house positions, then over possible walls for each house, and finally over assignments to (wall color, ghost house position, food house position) triples. Remember to create a separate factor for every one of the 4*7=28 possible observation positions.
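The color rules above can be sketched as a single helper that returns the distribution for one observation position. PROB_GHOST_RED and PROB_FOOD_RED are the constant names mentioned above, but the values below are made up, and obs_distribution is a hypothetical helper for illustration, not part of the required API.

```python
PROB_GHOST_RED = 0.6   # hypothetical value; see the constants in bayesAgents.py
PROB_FOOD_RED = 0.2    # hypothetical value

def obs_distribution(house_pos, food_pos, ghost_pos):
    """P(observation color | house assignments) for a wall adjacent
    to the house center at house_pos."""
    if house_pos == food_pos:
        # Food house takes precedence, even in the impossible case
        # where both houses are assigned the same position.
        return {"red": PROB_FOOD_RED, "blue": 1 - PROB_FOOD_RED, "none": 0.0}
    if house_pos == ghost_pos:
        return {"red": PROB_GHOST_RED, "blue": 1 - PROB_GHOST_RED, "none": 0.0}
    # Neither house here: no wall, so the observation is none.
    return {"red": 0.0, "blue": 0.0, "none": 1.0}
```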
Implement the joinFactors function in factorOperations.py. It takes in a list of Factors and returns a new Factor whose probability entries are the product of the corresponding rows of the input Factors.
joinFactors can be used to implement the product rule: for example, if we have a factor of the form P(X|Y) and another factor of the form P(Y), then joining these factors yields P(X, Y). So joinFactors allows us to incorporate probabilities for conditioned variables (in this case, Y).
However, you should not assume that joinFactors is called on probability tables; it is possible to call joinFactors on Factors whose rows do not sum to 1.
- Your joinFactors should return a new Factor.
- Here are some examples of what joinFactors can do:
- joinFactors(P(X|Y), P(Y)) = P(X, Y)
- joinFactors(P(V, W|X, Y, Z), P(X, Y|Z)) = P(V, W, X, Y|Z)
- joinFactors(P(X|Y, Z), P(Y)) = P(X, Y|Z)
- joinFactors(P(V|W), P(X|Y), P(Z)) = P(V, X, Z|W, Y)
- For a general joinFactors operation, which variables are unconditioned in the returned Factor? Which variables are conditioned?
- Factors store a variableDomainsDict, which maps each variable to a list of values that it can take on (its domain). A Factor gets its variableDomainsDict from the BayesNet from which it was instantiated. As a result, it contains all the variables of the BayesNet, not only the unconditioned and conditioned variables used in the Factor. For this problem, you may assume that all the input Factors have come from the same BayesNet, and so their variableDomainsDicts are all the same.
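To make the join operation concrete, here is a self-contained sketch over plain-dict “factors” keyed by assignment tuples. This is not the project’s Factor class; it only illustrates how the joined table multiplies corresponding rows of its inputs.

```python
from itertools import product

def join(f1, f2, domains):
    """Multiply two factors. Each factor is (variables, table) where
    table maps an assignment tuple (in variable order) to a number."""
    vars1, t1 = f1
    vars2, t2 = f2
    # Union of variables, keeping the order of the first factor.
    joined_vars = vars1 + [v for v in vars2 if v not in vars1]
    table = {}
    for assignment in product(*(domains[v] for v in joined_vars)):
        full = dict(zip(joined_vars, assignment))
        row1 = tuple(full[v] for v in vars1)
        row2 = tuple(full[v] for v in vars2)
        table[assignment] = t1[row1] * t2[row2]
    return joined_vars, table

# Joining P(X|Y) with P(Y) yields P(X, Y):
domains = {"X": [0, 1], "Y": [0, 1]}
p_x_given_y = (["X", "Y"], {(0, 0): 0.9, (1, 0): 0.1,
                            (0, 1): 0.4, (1, 1): 0.6})
p_y = (["Y"], {(0,): 0.5, (1,): 0.5})
_, joint = join(p_x_given_y, p_y, domains)
# e.g. P(X=0, Y=0) = 0.9 * 0.5 = 0.45
```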