Instagram claims that it is a site for sharing photos online with friends. But it became popular for its easy-to-use photo editing tools and image filters. In this assignment, you will learn how to make your own image filters. While they may not be as fancy as the ones provided by Instagram, this assignment will still teach you the basic principles to help you with your own start-up someday.
As we have seen in class, an image is just a 2-dimensional list of pixels (which are themselves RGB objects). So most of the functions/methods that you write in this assignment will involve nested for-loops that read a pixel at a certain position and modify it. You will need to use what you have learned about multi-dimensional lists to help you with this assignment.
One major complication is that graphics cards prefer to represent images as a 1-dimensional list in a flattened representation instead. It is much easier for hardware to process a 1-dimensional list than a 2-dimensional one. However, the flattened representation (which we explain below) can be really confusing to beginners. Therefore, another part of this assignment is learning to use classes to abstract a list of pixels, and present it in an easy-to-use package.
Finally, this assignment will introduce you to a Python package. This application is a much more complex GUI than the one that you used in Assignment 3. While you are still working on a relatively small section of the code, there are a lot of files that go into making this application work. Packages are how Python groups together a large number of modules into a single application.
Important: This assignment is due just before the exam. To reduce stress, we highly recommend that you follow the recommended micro-deadlines.
This assignment is designed to give you practice with the following skills:
- How to implement a class from its interface.
- How to enforce class invariants.
- How to use classes to provide abstractions.
- How to write code for both 1-dimensional and 2-dimensional lists.
- How to manipulate images at the pixel level.
- How to write code for an underspecified function or method.
- How to program with Unicode strings containing foreign characters and emojis.
This assignment requires you to implement several parts before you have the whole application working. The key to finishing is to pace yourself, and make use of both the unit tests and the visualizer that we provide.
UPDATE: We have discovered a bug in the provided code that prevents Windows users from loading unicode text files with emojis. To fix the problem, download the latest interface.py and copy it into the imager folder.
To work on this assignment, you will need to download three files.
|imager.zip||The application package, with all the source code|
|samples.zip||Several sample images to test in the application|
|outputs.zip||The result of applying the filters to the sample images|
You should download the zip archive imager.zip from the link above. Unzip it and put the contents in a new directory. This time, you will see that this folder contains a lot of files. You do not need to understand most of these files. They are similar to a3app.py in that they provide the GUI interface for the application.
You only need to pay attention to the files that start with a6. There are five of these. Two are completed and three are only stubs, waiting for you to complete them.
|a6image.py||The Image class, to be completed in Task 1|
|a6editor.py||The Editor class, which is completed already|
|a6filter.py||The Filter class, to be completed in Task 2|
|a6encoder.py||The Encoder class, to be completed in Task 3|
|a6test.py||The test script for the assignment, which is completed already|
You should skim all of these files before continuing with the assignment instructions.
This assignment has a problem: it is due the day before the exam. There is not too much we can do about this (other than making it due earlier). The last assignment must go out the day of the exam. In addition, everything in this assignment will be covered on the exam. So you really want to have this assignment completed beforehand, as it is one of the best ways to study.
With that said, we understand if you want to study in other ways (such as working on the old prelims). That is why, at the end of each part of the assignment, we have a suggested completion date. While this is not enforced, we recommend that you try to hit these deadlines. If you cannot make these deadlines, it might be a sign that you are having difficulty and need some extra help. In that case, you should go to office hours as soon as possible.
Because there are so many files involved, this application is handled a little differently from previous assignments. To run the application, keep all of the files inside of the folder imager. Do not rename this folder. To run the program, change the directory in your command shell to just outside of the folder imager and type
In this case, Python will run the entire folder. What this really means is that it runs the script in __main__.py. This script imports each of the other modules in this folder to create a complex application.
Right now, this application will not do anything. However, once you complete the Image class, it will display two images of your instructor, like this:
As you work with this application, the left image will not change; it is the original image. The right image will change as you click buttons. The actions for the buttons Invert and Rotate.. are already implemented. Click on them to see what they do.
You will notice that this takes a bit of time (your instructor’s computer takes 2-3 seconds for most operations). The default image is 512x512. This is over 250 thousand pixels. The larger the image, the slower it will be. With that said, if you ever find this taking more than 30 seconds, your program is likely stuck and has a bug in it.
The effects of the buttons are cumulative. You can undo the last effect applied with Image.. Undo. To remove all of the effects, choose Image.. Clear. This will revert the right image to its original state.
You can load a new image at any time using the Image.. Load button. Alternatively, you can start up with a new image by typing
python imager myimage.png
where myimage.png is your image file. The program can handle PNG, JPEG, and GIF (not animated) files. You also can save the image on the right at any time by choosing Image.. Save. You can only save as a PNG file. The other formats cause problems with Part 3 of the assignment.
The remaining buttons of the application are not implemented. Reflect.. Horizontal works but the vertical choice does not. In Part 2 of the assignment, you will write the code to make them work.
If you are curious about how this application works, most of the code is in interface.py and filter.kv. The file filter.kv arranges the buttons and colors them. The module interface.py contains the Python code that tells the buttons what to do. However, the code in this module is quite advanced and we do not expect you to understand any of it.
Next to the Image.. button, you will see a button for Text.. This button is used by the last part of the assignment, to store secret messages in text. To access these features, choose Text.. Show.
When you do this for the first time, you will see an error message that says “No hidden message found”. This is perfectly normal. You have not encoded anything yet, so there is nothing to decode. As a rule, the application will always decode the message hidden in the file at start-up, and you will see that error if there is no message.
To encode a message (once you have completed Part 3), type text in the box on the right. The box will change color to blue if you have unsaved changes. Once you press the Encode button, it will write the message inside the image and change color back. At this point, you will probably want to save the image to a PNG file to decode later. Applying any of the image filters (invert, reflect, etc.) will corrupt the hidden message and cause it to be lost.
Typing text in the box is a lot of work. Therefore, you can always read from a text file (or even a .py file). If you choose Text.. Import, this will load the file into the box on the right. However, it will not encode the text until you choose to do so. Hence, the text box will be blue after the file is loaded. Similarly, you can save the decoded text to a file with Text.. Export. These file features will be very useful for debugging more complex messages.
As we describe in the instructions below, your encode and decode operations will support full Unicode (Asian characters, emojis, etc.). However, the font that Kivy uses cannot display emojis; it will show them as boxes with a cross in them (to indicate “missing”). For example, if you import this text into the application, you will see this
But if you export that text to a file, and open it up with an editor that can display emojis, it will look correct. If you want to encode emojis, write them in a text editor, import them, and encode. To get emojis back from a message, you decode and export.
The undo functionality works for all of these features as well. Choosing to undo will remove the most recently encoded message, restoring the previously encoded message (if any).
As with Turtles, debugging everything visually can be tricky. That is why we have provided you with a (partial) test script to help you with this assignment. This test script is integrated into the Imager application. To run it, type
python imager --test
The application will run test cases (provided in a6test.py) on the classes Image, Filter, and Encoder, in that order. This is incredibly useful, since you cannot even use the Imager app until you finish the Image class.
These test cases are designed so that you should be able to test your code in the order that you implement it. However, if you want to “skip ahead” on a feature, you are allowed to edit a6test.py to remove a test. Those tests are simply there for your convenience.
This test script is fairly long, but if you learn to read what this script is doing, you will understand exactly what is going on in this assignment and it will be easier to understand what is going wrong when there is a bug. However, one drawback of this script is that (unlike a grading program), it does not provide a lot of detailed feedback. You are encouraged to sit down with a staff member to talk about this test script in order to understand it better.
As with the Turtles assignment, this test script is not complete. It does not have full coverage of all the major cases, and it may miss some bugs in your code. It is just enough to ensure that the GUI application is working correctly. You may lose points during grading even if you pass all the tests in this file (our grading program has a lot more tests). Therefore, you may want to add additional tests as you debug. With that said, we do not want you to submit the file a6test.py when you are done, even if you made modifications to the file.
There are so many parts to the Imager application that this assignment can feel very overwhelming. But in these instructions we take you through everything step-by-step. As long as you pay close attention to the specifications, you should be able to complete everything. This assignment may take longer than the others, but it is well within your ability.
You do not ever need to worry about writing code to load an image from a file. There are other modules in imager that handle that step for you. Those modules use the PIL module to extract pixel data from a file. The functions in this module return the image as a flattened list of pixels.
To understand what we mean by this, let’s talk about pixels first. A pixel is a single RGB (red-green-blue) value that instructs your computer monitor how to light up that portion of the screen. Each RGB component is given by a number in the range 0 to 255. Black is represented by (0, 0, 0), red by (255, 0, 0), green by (0, 255, 0), blue by (0, 0, 255), and white by (255, 255, 255).
In previous assignments, we stored these pixels as an RGB object defined in the introcs module. These were mutable objects where you could change each of the color values, and these objects would automatically enforce the 0..255 invariant. However, the pixels in this assignment will be 3-element tuples of integers. That is because they are faster to process, and Kivy prefers this format. Because image processing is slow enough already, we have elected to stick with this format. In addition, this means that you get some experience checking and enforcing that the pixels are in the correct format.
So if that is what we mean by a pixel, what is a “flattened list of pixels”? We generally think of an image as a rectangular list of pixels, where each pixel has a row and column (indicating its position on the screen). For example, a 3x4 pixel art image would look something like the illustration below. Note that we generally refer to the pixel in the top left corner as the “first” pixel.
However, graphics cards really like images as a one-dimensional list. One-dimensional lists are a lot faster to process and are more natural for custom hardware. So a graphics card will flatten this image into the following one-dimensional list.
If you look at this picture carefully, you will notice that it is very similar to the row-major order introduced in class. Suppose we represented the 3x4 image above as follows:
E00 E01 E02 E03 E10 E11 E12 E13 E20 E21 E22 E23
The value Eij here represents the pixel at row i and column j. If we were to represent this image as a two-dimensional list in Python, we would write:
[[E00, E01, E02, E03], [E10, E11, E12, E13], [E20, E21, E22, E23]]
Flattened representation just removes those inner brackets, so that you are left with the one-dimensional list.
[E00, E01, E02, E03, E10, E11, E12, E13, E20, E21, E22, E23]
This is the format that the Image class will use to store the image. If you do not understand this, you should talk to a staff member before continuing.
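To see this flattening in action, here is a minimal sketch, with placeholder strings standing in for the actual pixel tuples:

```python
# The 3x4 "image" from above, with strings standing in for pixel tuples
image2d = [['E00', 'E01', 'E02', 'E03'],
           ['E10', 'E11', 'E12', 'E13'],
           ['E20', 'E21', 'E22', 'E23']]

# Flatten in row-major order: all of row 0, then row 1, then row 2
flat = []
for row in image2d:
    for pixel in row:
        flat.append(pixel)

print(flat)   # the 12 pixels in a single list, first row first
```

Notice that the inner brackets simply disappear; nothing about the pixels themselves changes.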
Throughout this assignment, you will be asked to enforce preconditions. A common precondition that will come up over and over again is that a value is a pixel, or a value is a pixel list. Inside of the file a6image.py are two helper functions to help you enforce these preconditions: _is_pixel and _is_pixel_list. The first has been completed for you. The second is unfinished.
Before you do anything else, complete the function _is_pixel_list. Despite the fact that this is a hidden function, we do test it in a6test.py. So you should run the test script to verify that your implementation is correct.
This is not a hard function, and it is very similar to some of the nested-loop functions you have seen in class. But you did need to read all of these instructions to get this far. So we recommend that you finish this part by Tuesday, November 5. Finishing this part of the assignment will demonstrate that you understand how pixels work and allow you to get started on the assignment.
For some applications, the flattened representation is just fine. For example, if you want to convert an image to greyscale, you do not need to know exactly where each pixel is inside of the image. You just modify each pixel individually. However, other effects like rotating and reflecting require that you know the position of each pixel. In those cases you would rather have a two-dimensional list.
The Image class has attributes and methods that allow you to treat the image either as a flattened one-dimensional list or as a two-dimensional list, depending on your application. This is what we mean by an abstraction. While the data is not stored in a two-dimensional list, methods like getPixel(row,col) allow you to pretend that it is.
The file a6image.py contains the class definition for Image. This class is fully specified. It has a class specification with the class invariants. It also has specifications for each of the methods. All you have to do is to write the code to satisfy these specifications.
As you work, you should run the test cases to verify that your code is correct. To get the most use out of the testing program, we recommend that you implement the methods in the same order that we test them.
To do anything at all, you have to be able to create an Image object and access the attributes. This means that you have to complete the initializer and the getters and setters for the three attributes: data, width and height. If you read the specifications, you will see that these are all self-explanatory. Note that these attributes are hidden, so the class invariant is given by (hidden) single line comments according to our specification style.
The only challenge here is the width and height. Note that there is an extra invariant that the following must be true at all times:
width*height == # of pixels
You must ensure this invariant in both the initializer and the setters. In addition, we expect you to enforce all preconditions with asserts.
Except for the setters for width and height (which have the unusual invariant), this part is no harder than the Pair class in Lab 8. So you should be able to do this part quickly. We want you to get in the habit of working on this assignment a little bit every day. That will make this assignment easier and less stressful. That is why we recommend that you finish this part by Wednesday, November 6.
The getter getData already returns the image data as a flattened list of pixels. So you might think we do not need to do anything more here. However, notice that getData returns a copy of the pixel list. So it is not useful if you want to modify the image. Instead, the class Image has methods to allow modification of the image, while still enforcing the class invariant.
The getPixel and setPixel methods present the image as a two-dimensional list. This is a little trickier. You have to figure out how to compute the flat position from the row and column. Here is a hint: it is a formula that requires the row, the column, and the width. You do not need anything else. Look at the illustrations in our discussion of flattened representation to see if you can figure out the formula.
Figuring out the conversion formula is the only hard part of this exercise. Otherwise it is the same as for the one-dimensional operators. Make sure to enforce all of the preconditions.
Because this is very similar to the one-dimensional operators, you should try to finish this by the next day: Thursday, November 7. This pace will put you in good shape for the more complicated functions in Task 2.
The module a6filter.py contains the Filter class. You will notice that it is a subclass of the Editor class in a6editor.py. The Editor class is complete; you do not have to do anything with this class. It implements the Undo functionality in the imager application. This class implements an edit history and the getter getCurrent accesses the most recent update of the image.
You do not need to understand the Editor class at all, but you should read its specification. Since Filter is a subclass, it will need to access the inherited methods from Editor. In particular, you will notice that none of the methods in Filter take an image as an input. Instead, those methods are to work on the current image, which they access with the method getCurrent.
To make it easier to follow all this, we have provided you with several example methods to study. You will notice that some filters - like invert - modify the image with the one-dimensional operators. Others - like transpose and reflectHori - modify the image with the two-dimensional operators. Use this code as a guide for implementing the unfinished methods.
While working on these methods, you may find that you want to introduce new helper methods. For example, jail already has a _drawHBar helper provided. You may find that you want a _drawVBar method as well. This is fine and is actually expected. However, you must write a complete and thorough specification of any helper method you introduce. It is best to write the specification before you write the method body, which is standard practice in this course. It is a severe error not to have a specification, and points will be deducted for missing or inappropriate specifications.
We have provided you with several test cases for these filters. But the output of these test cases are limited and not always useful. A better way to test is just to load the sample images and try them out. We have provided you with the correct outputs for each filter applied to each sample image.
This method should reflect the image about a horizontal line through the middle of the image. Look at the method reflectHori for inspiration, since it does something similar. This method should be relatively straightforward. It is primarily intended as a warm-up to give you confidence.
Because it is just a warm-up, you should be able to complete this and the next method in one day. Assuming that you are keeping up with the recommended deadlines, this means you should complete this part by Monday, November 11.
In this method, you will change the image from color to either grayscale or sepia tone. The choice depends on the value of the parameter sepia. To implement this method, you should first calculate the overall brightness of each pixel using a combination of the original red, green, and blue values. The brightness is defined by:
brightness = 0.3 * red + 0.6 * green + 0.1 * blue
For grayscale, you should set each of the three color components (red, green, and blue) to the same value, int(brightness).
Sepia was a process used to increase the longevity of photographic prints. To simulate a sepia-toned photograph, darken the green channel to int(0.6 * brightness) and blue channel to int(0.4 * brightness), producing a reddish-brown tone. As a handy quick test, white pixels stay white for grayscale, and black pixels stay black for both grayscale and sepia tone.
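As a quick check on the arithmetic, here is how these formulas apply to one arbitrary sample pixel. (We assume here that the red channel of a sepia pixel is set to int(brightness), as in the grayscale case; check the method specification for the exact requirement.)

```python
# One arbitrary sample pixel (red, green, blue), each component 0..255
red, green, blue = 200, 100, 50

brightness = 0.3 * red + 0.6 * green + 0.1 * blue   # 60.0 + 60.0 + 5.0 = 125.0

# Grayscale: all three components get the same value
gray_pixel = (int(brightness), int(brightness), int(brightness))

# Sepia: darken green and blue (assuming red keeps int(brightness))
sepia_pixel = (int(brightness), int(0.6 * brightness), int(0.4 * brightness))

print(gray_pixel)    # (125, 125, 125)
print(sepia_pixel)   # (125, 75, 50)
```

Note that a white pixel (255, 255, 255) has brightness 255.0, so it stays white under grayscale, just as the quick test above predicts.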
To implement this method, you should get the color value for each pixel, recompute a new color value, and set the pixel to that color. Look at the method invert to see how this is done.
If you can figure out how to properly use the brightness value, this is another short method. We recommend that you complete this part by Monday, November 11. That way you will have more time for the harder methods.
Always a crowd favorite, the jail method draws a red boundary and vertical bars on the image. You can see the result in the picture below. The specification is very clear about how many bars to create and how to space them. Follow this specification closely when implementing the function.
We have given you helper method _drawHBar to draw a horizontal bar (note that we have hidden it; helper functions do not need to be visible to other modules or classes). In the same way, you should implement a helper method _drawVBar to draw a vertical bar. Do not forget to include its specification in your code.
This method is one where you have to be very careful with round-off error, to make sure that the bars are evenly spaced. You need to be aware of your types at all times. The number of bars should be an integer, not a float (you cannot have part of a bar). However, the distance between bars should be a float. That means your column position of each bar will be a float. Wait to turn this column position into an int (by rounding and casting) until you are ready to draw the bar.
When you are finished with this method, open a picture and click the buttons Jail, Transpose, Jail, and Transpose again (in that order) for a nice effect.
This method is a little more complicated than the previous filters, but it is still not that bad. The hardest part of this method is making sure that you are handling the round-off error correctly. We recommend that you finish this method by Tuesday, November 12.
Camera lenses from the early days of photography often blocked some of the light focused at the edges of a photograph, producing a darkening toward the corners. This effect is known as vignetting, a distinctive feature of old photographs. You can simulate this effect using a simple formula. Pixel values in red, green, and blue are separately multiplied by the value
Like monochromification, this requires unpacking each pixel, modifying the RGB values, and repacking them (making sure that the values are ints when you do so). However, for this operation you will also need to know the row and column of the pixel you are processing, so that you can compute its distance from the center of the image.
For this reason, we highly recommend that you use the method getPixel and setPixel in the class Image. These methods treat the image as a two-dimensional list. Do not use the one-dimensional operators. That was fine for invert and monochromify, but that was because the row and column did not matter in those methods.
This is the last required method for the class Filter. It is also the hardest method in this class. We recommend that you finish this method by Thursday, November 14. This gives you more than a day to finish it, but still gives you enough time to work on the final task and to study for the prelim.
The last method in the Filter class is not officially part of the assignment. It is an optional method if you are looking for a challenge. You should only work on this method when you have completed Task 3. If you implement this method, and it is correct, we will reward you with extra credit. However, it will be no more than 3 points and you cannot score above 100. So do not expect this to replace an unfinished method.
Pixellation simulates dropping the resolution of an image. You do not actually change the resolution (that is a completely different challenge). However, you replace the image with large blocks that look like giant pixels.
To construct one of these blocks, you start with a pixel position (row,col). You gather all of the pixels within step positions to the right and below, and average their colors. Averaging is exactly what it sounds like. You sum up all the red values and divide by the number of pixels. You do the same for the green and blue values.
When you are done averaging, you assign this average to every pixel in the block. That is every pixel starting at (row,col) and within step positions to the right or down gets this same pixel color. This result is illustrated below.
When you are near the bottom (or the right edge) of the image, you might not have step pixels to the right or below. In that case, you should go to the edge of the image and stop. We highly recommend that you write this averaging step as a helper function. It will greatly simplify your code in pixellate.
One thing you do need to watch out for is how you construct your loops in pixellate. If you are not careful, the blocks will overlap each other, messing up the pixellation effect. Think very carefully about what you want to loop over.
The last official part of the assignment is the most involved. The Encoder class is built on top of the Filter class. That is because it is adding new functionality to the Filter class and it needs access to the current image via getCurrent. In fact we could have combined these two classes, but we separated them for reasons of readability.
This class allows us to support steganography, which is “the art and science of writing hidden messages in such a way that no one apart from the intended recipient even realizes there is a hidden message.” This is different from cryptography, where the existence of the message is not disguised but the content is obscured. Quite often, steganography deals with messages hidden in pictures.
This task is much more open-ended than anything we have done in the course before. There is no right way to hide a message, and the way that you choose to hide messages might be different from ours. All that matters is that your decode method can extract messages hidden by your encode method.
To decide how to best hide (and reveal) messages, you should read all of the instructions below before starting on this class.
A byte is an integer between 0 and 255 (this should look very familiar by now). Computers are specifically designed to work with data in byte-sized chunks, which is why they are so common. The American Standard Code for Information Interchange (ASCII) is a way to represent Python strings as bytes. Each character corresponds to a number 0..255. As you saw in an early lab, you can use the function ord to convert a character to a byte and chr to convert a byte back to a character.
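For example, in the interactive shell:

```python
print(ord('k'))                   # 107
print(chr(107))                   # k
print([ord(c) for c in 'kite'])   # [107, 105, 116, 101]
```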
It is very easy to encode a byte in a pixel. Suppose we have a pixel whose RGB values are 199, 222, and 142 and we want to store the byte 107 (which corresponds to the character ‘k’). We change the least significant digit of each color component to one of the digits of 107, as shown below.
This change in each pixel is so slight that it is imperceptible to the human eye (unless the image is a rectangle of just one color).
Decoding the message is the reverse process: extract the last digit of each color component of the pixel, form a byte from the three extracted digits, and then convert that byte back to a character. In the above example, we would extract the digits 1, 0, and 7 from the RGB components of the pixel (using % 10) and put them together to form 107, which is the ASCII value for ‘k’. Extracting the message does not change the image. The message stays in the image forever.
Unfortunately, all modern text is Unicode, not ASCII. Unicode supports all possible characters, including Asian characters and emojis. And Unicode strings are supported in Python. Try this out in the interactive shell:
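For example, with a stand-in string (any string containing non-ASCII characters will do):

```python
s = 'Bonjour, señor! Ça va?'   # a stand-in string with non-ASCII characters
print(s)
print(len(s))                  # 22 -- len counts characters, not bytes
```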
To keep the international students from feeling left out, we are going to use Unicode this year instead of ASCII.
Unicode characters are not represented by bytes. Emojis require much larger integers than 255. But the UTF-8 encoding is a simple way to convert a Unicode string into a list of bytes. All strings have an encode method that allows you to do this conversion. Take the string s above and try the following:
You will note that the number of bytes is longer than the length of the string. In UTF-8, all ASCII characters are a single byte, but other characters can take anywhere from two to four bytes.
You can also convert a list of byte-sized integers back into a Unicode string. You need to use the Python bytes function to convert the list into a bytes-only sequence, and then use the decode method as follows:
With this information, you can now use the technique shown above to hide or reveal any Unicode string.
You are to write code to hide the characters of a message text in the pixels of an image in flattened representation, starting with pixels 0, 1, 2, and so on. Before you write any code at all, you need to think about the following three issues and solve them.
Indicating an Encoding
First, you need some way to recognize that the image actually contains a message. You need to hide data in the initial pixels (0, 1, 2, and so on) that has little chance of appearing in a real image. That way, when the program detects that data in those first few pixels, it knows there is a message there. You cannot be completely sure that an image without a message does not contain that data by coincidence, but the chances should be extremely small.
This beginning marker should be at least two pixels. If it is only one pixel, then we can corrupt your message by transposing the image (think about this). You can use more than two pixels, but the specification states that no more than 10 pixels may be used for encoding information other than the message text.
Indicating the Message Length
Next, you have to know where the message ends. You can do this in several ways. You can hide the length of the message in the first pixels in some way (how many pixels can that take?). You can also hide some unused marker at the end of the message. Or you can use some other scheme. You may assume that the message has fewer than one million bytes (e.g. the specification says that you can refuse to encode longer strings), but you must be prepared for a message with any sequence of bytes, including those produced by punctuation, emojis, and foreign characters.
Staying within Color Bounds
Finally, the largest value of a color component (e.g. blue) is 255. Suppose the blue component is 252 and you want to hide 107 in this pixel. In this case, the blue component would be changed to 257. But this is impossible, because a color component can be at most 255. Think about this problem and come up with some way to solve it. There is a way to do this without having to change the _decode_pixel helper that we have provided for you. However, you are allowed to change _decode_pixel if you figure out another way to do it.
As you can see, this part of the assignment is less precisely defined than the previous ones. You get to come up with solutions to some problems yourself. You should feel free to discuss this part of the problem with the course staff. They will not tell you how to solve these problems, but they will discuss your ideas with you and point out any problems.
You should complete the bodies of the methods encode and decode in the class Encoder. These two methods should hide a message in the image and reveal a hidden message, respectively. When you design decode, make sure it attempts to extract a message only if it detects that one is present.
Feel free to introduce other helper methods as needed. For example, we have provided a helper method called _decode_pixel, which takes a pixel position pos and extracts a 3-digit number from it, using the encoding that we suggested above. This suggests that you might want to create a helper method called _encode_pixel, which encodes a number into a pixel. The exact specification of such a helper is up to you (if you start with our specification, be sure to modify it as appropriate for your code).
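To make the idea concrete, here is a hypothetical standalone version of such a decoding helper. The real _decode_pixel is a method that reads a pixel out of the image, so the name, signature, and tuple input below are our assumptions, not the provided code; only the digit arithmetic is the point:

```python
def decode_pixel(pixel):
    """Return the 3-digit number hidden in pixel, an (r, g, b) tuple.

    This sketch assumes the encoding suggested above, where the last
    digit of each color component is one digit of the hidden number."""
    red, green, blue = pixel
    return 100 * (red % 10) + 10 * (green % 10) + (blue % 10)

print(decode_pixel((251, 140, 187)))  # → 107
```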
Note that your answers to the three problems above greatly influence how you write all of these methods. Therefore, the specifications of these methods must describe how you solved the three problems listed above. For example, the specification of _encode_pixel must describe how you handle the pixel overflow problem. In some cases this may require modifying specifications that we wrote. You can change anything you want in a specification except the one-line summary, the preconditions, and the last paragraph (the one that describes the only cases in which the method may fail).
As an aside, you will also notice that we use the operator __getitem__ in _decode_pixel. That is because it is very natural to think of an image as a one-dimensional list when encoding text. While an image is a 2-dimensional arrangement of values, text is not. Hence, once again, we see the advantage of the abstraction in Image, which allows us to access the data however we want for the particular application.
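As an illustration of that idea, here is a generic sketch (not the actual Image class from a6image.py, and with made-up names) of how __getitem__ lets a class expose a flattened list through ordinary indexing while a separate method provides 2-dimensional access:

```python
class FlatGrid:
    """A toy class storing a 2-d grid as a flattened 1-d list."""

    def __init__(self, data, width):
        self._data = data             # 1-dimensional (flattened) list
        self._width = width           # number of columns per row

    def __getitem__(self, pos):
        return self._data[pos]        # enables grid[pos] for 1-d access

    def getPixel(self, row, col):
        # 2-d access: row-major position in the flattened list
        return self._data[row * self._width + col]


grid = FlatGrid(['a', 'b', 'c', 'd', 'e', 'f'], 3)  # 2 rows of 3
print(grid[4])               # → 'e' (1-d view)
print(grid.getPixel(1, 1))   # → 'e' (same element, 2-d view)
```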
We have provided some simple tests (including emoji) for these methods in a6test.py, so you should run the provided test script to check your answers. However, the test script is not very useful when you have bugs and need to find them.
Debugging encode and decode can be difficult. Do not assume that you can debug simply by calling encode and then decode to see whether the message comes out correctly. Instead, write and debug encode fully before going on to debug decode.
How can you debug encode without decode? Start with short messages to hide (up to three ASCII characters, each a single byte). Use the method getData() in Image and slice off that many pixels from the list. Print these out and verify that they are what you expect.
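That getData-and-slice pattern can be sketched like this; the Image class below is a stub standing in for the one in a6image.py, and the pixel values are invented, so only the slicing-and-printing idea carries over:

```python
class Image:
    """Stub standing in for the real Image class in a6image.py."""

    def __init__(self, data):
        self._data = data

    def getData(self):
        return self._data[:]          # copy of the flattened pixel list


# Pretend encode() has already hidden a 3-byte message in these pixels
# (say, one byte per pixel under your scheme).
image = Image([(255, 255, 107), (250, 140, 33), (101, 102, 103),
               (200, 200, 200), (180, 180, 180)])
first = image.getData()[:3]           # slice off as many pixels as bytes hidden
print(first)                          # check by hand against your scheme
```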
When encode and decode are both done, try hiding and revealing a long message (e.g. 1000, 2000, or 5000 characters). This is where you really make use of the Imager application. Use the Text.. Import feature to load in a text file and try to encode that. We highly recommend that you try to encode a Python program, as that is a good test of punctuation characters. International students are free to try to use foreign characters (though they will not display properly in the Kivy application).
Clicking Image.. Save saves the image in the specified directory with the filename you enter. The image is saved in .png format, which is a lossless format. Saving in .jpg format would not work, because .jpg compresses the image; that would take less space, but it would also clobber your hidden message.
With .png, you can hide a message, save, quit, and then restart the application with the message still there. If the message contained emojis or foreign characters, they will not display in the Imager application, but you can still export the result to a file with Text.. Export. You should try to do that.
Before you submit this assignment, be sure that everything is working and polished. Unlike the first assignment, there is no revise-and-resubmit for this one: if you make a mistake, you will not get an opportunity to correct it after grading. With that said, you may submit multiple times before the due date, and we will grade the most recent version submitted.
Once you have everything working you should go back and make sure that your program meets the class coding conventions. In particular, you should check that the following are all true:
- You have indented with spaces, not tabs (Atom Editor handles this automatically).
- Functions are each separated by two blank lines.
- Methods are each separated by one blank line.
- Lines are short enough (~80 characters) that horizontal scrolling is not necessary.
- Docstrings are only used for specifications, not general comments.
- Specifications for any new methods are complete and are docstrings.
- Specifications are immediately after the method header and indented.
- Your name(s) and netid(s) are in the comments at the top of the modules.
You will submit only three files for this assignment: a6image.py, a6filter.py, and a6encoder.py. Upload these files to CMS by the due date: Wednesday, November 20. We do not need any other files. In particular, we do not want the file a6test.py.