CS1210: Handling Strings

A basic Python assignment: read in a text file, process it, compute some statistics, and practice fundamental string operations.

Introduction

We’ve just completed our review of the most common Python datatypes, and you’ve been exposed to some simple operations, functions and methods for manipulating these datatypes. In this assignment, we’re going to develop some code that relies greatly on the string datatype as well as all sorts of iteration. First, a few general points.

  • (1) This is a challenging project, and you have been given two weeks to work on it. If you wait to begin, you will almost surely fail to complete it. The best strategy for success is to work on the project a little bit every day. To help incentivize you to do so, we will provide preliminary feedback on a partial draft you will upload by the draft due date shown at the top of this page (more on this below).
  • (2) The work you hand in should be only your own; you are not to work with or discuss your work with any other student. Sharing your code or referring to code produced by others is a violation of the student honor code and will be dealt with accordingly.
  • (3) Help is always available from the TAs or the instructor during their posted office hours. You may also post general questions on the discussion board (although you should never post your Python code). I have opened a discussion board topic specifically for HW1.

Background

In this assignment we will be processing text. With this handout, you will find a file containing the entire text of The Wind in the Willows, a children’s novel published in 1908. At some point during the course of this assignment, I will provide you additional texts for you to test your code on; updated versions of this handout may also be distributed as needed. You should think of this project as building tools to read in, manipulate, and analyze these texts.

The rest of these instructions outline the functions that you should implement, describing their input/output behaviors. As usual, you should start by completing the hawkid() function so that we may properly credit you for your work. Test hawkid() to ensure it in fact returns your own hawkid as the only element in a single element tuple. As you work on each function, test your work on the document provided to make sure your code functions as expected. Feel free to upload versions of your code as you go; we only grade the last version uploaded (although we do provide preliminary feedback on a draft version; see below), so this practice allows you to “lock in” working partial solutions prior to the deadline. Finally, some general guidance.

  • (1) You will be graded on both the correctness and the quality of your code, including the quality of your comments!
  • (2) As usual, respect the function signatures provided.
  • (3) Be careful with iteration; always choose the most appropriate form of iteration (comprehension, while, or for) as the function mandates. Poorly selected iterative forms may be graded down, even if they work!
  • (4) Finally, to incentivize getting an early start, you should upload an initial version of your homework by midnight Friday, September 22 (that’s one week from the start of the assignment). We will use the autograder to provide feedback on the first two functions, getBook() and cleanup(), only. We reserve the right to deduct points from the final homework grade for students who do not meet this preliminary milestone.

def getBook(file):

This function should open the file named file and return the contents of the file formatted as a single string. During processing, you should (1) remove any blank lines, and (2) remove any lines consisting entirely of CAPITALIZED WORDS. To understand why this is the case, inspect the wind.txt sample file provided. Notice that the frontispiece (title, index and so on) consists of ALL CAPS, and each CHAPTER TITLE also appears on a line in ALL CAPS.
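One way this might be sketched (joining the surviving lines with single spaces is an assumption here; any join that yields one string of the kept text would satisfy the spec):

```python
def getBook(file):
    # Read the file line by line, dropping blank lines and lines made up
    # entirely of uppercase letters (front matter and chapter titles).
    with open(file, 'r') as f:
        lines = [line.strip() for line in f]
    kept = [line for line in lines if line and not line.isupper()]
    return ' '.join(kept)
```

Note that str.isupper() returns True only when every cased character in the line is uppercase, which is exactly the ALL CAPS condition we want.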

def cleanup(text):

This function should take as input a string such as might be returned by getBook() and return a new string with the following modifications to the input:

  • Remove possessives, i.e., "'s" at the end of a word;
  • Remove parentheses, commas, colons, semicolons, hyphens and quotes (both single and double); and
  • Replace '!' and '?' with '.'.

A requirement of this function is that it should be easy to change or extend the substitutions made. In other words, a function that steps through each of these substitutions in an open-coded fashion will not get full credit; write your function so that the substitutions can be modified or extended without significantly altering the code. Here's a hint: if your code for this function is more than a few lines long, you're probably not doing it right.
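One data-driven sketch, keeping the substitutions in a table so they can be modified or extended without touching the loop (note that removing "'s" with a plain replace() also removes it mid-word, a simplification; the order of entries matters, since "'s" must be handled before the lone "'"):

```python
def cleanup(text):
    # Table of (old, new) substitutions; extend this list to add more.
    substitutions = [("'s", ''),
                     ('(', ''), (')', ''), (',', ''), (':', ''),
                     (';', ''), ('-', ''), ('"', ''), ("'", ''),
                     ('!', '.'), ('?', '.')]
    for old, new in substitutions:
        text = text.replace(old, new)
    return text
```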

def extractWords(text):

This function should take as input a string such as might be returned by cleanup() and return an ordered list of words from the input string. The words returned should all be lowercase, and should contain only alphabetic characters, with no punctuation.
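A minimal sketch, assuming that splitting on whitespace and then filtering each token down to its alphabetic characters is an acceptable reading of the spec:

```python
def extractWords(text):
    # Lowercase the text, split on whitespace, and keep only the
    # alphabetic characters of each token, discarding empty results.
    words = []
    for token in text.lower().split():
        word = ''.join(ch for ch in token if ch.isalpha())
        if word:
            words.append(word)
    return words
```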

def extractSentences(text):

This function takes as input a string such as might be returned by cleanup() and returns a list of sentences, where each sentence consists of a string terminated by a '.'.

def countSyllables(word):

This function takes as input a string representing a word (such as one of the words in the output of extractWords()) and returns an integer representing the number of syllables in that word. One problem is that the definition of syllable is unclear. As it turns out, syllables are amazingly difficult to define in English!
For the purpose of this assignment, we will define a syllable as follows. First, we strip a trailing 's' or 'e' from the word (the final 'e' in English is often, but not always, silent). Next, we scan the word from beginning to end, counting each transition from a consonant to a vowel, where the vowels are the letters 'a', 'e', 'i', 'o' and 'u'. So, for example, if the word is "creeps," we strip the trailing 's' to get "creep" and count one consonant-to-vowel transition (at the 'e' following the 'r'), for a single syllable. Thus:

>>> countSyllables('creeps')
1
>>> countSyllables('devotion')
3
>>> countSyllables('cry')
1

The last example hints at the special status of the letter ‘y’, which is considered a vowel when it follows a non-vowel, but considered a non-vowel when it follows a vowel. So, for example:

>>> countSyllables('coyote')
2

Here, the ‘y’ is a non-vowel, so the two ‘o’s correspond to 2 transitions, or 2 syllables (don’t forget we stripped the trailing ‘e’). And while that’s not really right (‘coyote’ has 3 syllables, because the final ‘e’ is not silent here), it does properly recognize that the ‘y’ is acting as a consonant.

You will find this definition of syllable works pretty well for simple words, but fails for more complex words; English is a complex language with many orthographic bloodlines, so it may be unreasonable to expect a simple definition of syllable! Consider, for example:

>>> countSyllables('consumes')
3
>>> countSyllables('splashes')
2

Here, it is tempting to treat the trailing -es as something else to strip, but that would cause ‘splashes’ to have only a single syllable. Clearly, our solution fails under some conditions; but I would argue it is close enough for our intended use.
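Putting the rules together, here is one way the definition above might be coded. This is a sketch, and one detail is inferred from the examples rather than stated outright: only a single trailing letter is stripped (stripping both the 's' and the 'e' of 'consumes' would give 2 syllables, not the 3 shown above).

```python
def countSyllables(word):
    # Strip one trailing 's' or 'e', then count transitions from a
    # non-vowel (or the start of the word) to a vowel.  'y' counts as
    # a vowel only when the preceding letter was not a vowel.
    if word and word[-1] in 'se':
        word = word[:-1]
    count = 0
    prev_is_vowel = False
    for ch in word:
        is_vowel = ch in 'aeiou' or (ch == 'y' and not prev_is_vowel)
        if is_vowel and not prev_is_vowel:
            count += 1
        prev_is_vowel = is_vowel
    return count
```

This reproduces all of the examples above, including the admittedly imperfect results for 'coyote' and 'consumes'.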

def ars(text):

Next, we turn our attention to computing a variety of readability indexes. Readability indexes have been used since the early 1900s to determine whether the language used in a book or manual is too hard for a particular audience. At that time, of course, most of the population didn’t have a high school degree, so employers and the military were concerned that their instructions or manuals might be too difficult to read. Today, these indexes are largely used to rate books by difficulty for younger readers.
The Automated Readability Score, or ARS, like all the indexes here, is based on a sample of the text (we’ll be using the text in its entirety).

The ARS is based on two computed parameters: the average number of characters per word (cpw) and the average number of words per sentence (wps). The formula is:

ARS = 4.71 * cpw + 0.5 * wps - 21.43

where the weights are fixed as shown. Texts with longer words or sentences have a greater ARS; the value of the ARS is supposed to approximate the US grade level. Thus a text with an ARS of 12 corresponds roughly to a high school senior reading level.
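Here is one way the computation might look. The stand-ins at the bottom are simplified placeholders for the functions specified earlier, included only so the sketch runs on its own; your real versions should be used instead, and this sketch assumes the text is non-empty.

```python
def ars(text):
    words = extractWords(text)
    sentences = extractSentences(text)
    cpw = sum(len(w) for w in words) / len(words)   # avg characters per word
    wps = len(words) / len(sentences)               # avg words per sentence
    return 4.71 * cpw + 0.5 * wps - 21.43

# Simplified stand-ins for the functions specified earlier:
def extractWords(text):
    return [w for w in (''.join(c for c in t if c.isalpha())
                        for t in text.lower().split()) if w]

def extractSentences(text):
    return [s.strip() + '.' for s in text.split('.') if s.strip()]
```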

def fki(text):

The Flesch-Kincaid Index, or FKI, is also based on the average number of words per sentence (wps), but instead of characters per word (cpw) like the ARS, it uses syllables per word (spw).

The formula is:

FKI = 0.39 * wps + 11.8 * spw - 15.59

As with the ARS, a greater value indicates a harder text. This is the scale used by the US military; as with the ARS, the value should approximate the intended US grade level. Of course, as the FKI was developed in the 1940s, it was intended to be calculated by people who had no trouble counting syllables without relying on an algorithm to do so.
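A sketch of the FKI computation follows the same pattern as the ARS; again, the stand-ins below are simplified placeholders for the earlier functions, included only so the snippet runs on its own:

```python
def fki(text):
    words = extractWords(text)
    sentences = extractSentences(text)
    wps = len(words) / len(sentences)                         # words per sentence
    spw = sum(countSyllables(w) for w in words) / len(words)  # syllables per word
    return 0.39 * wps + 11.8 * spw - 15.59

# Simplified stand-ins for the functions specified earlier:
def extractWords(text):
    return [w for w in (''.join(c for c in t if c.isalpha())
                        for t in text.lower().split()) if w]

def extractSentences(text):
    return [s.strip() + '.' for s in text.split('.') if s.strip()]

def countSyllables(word):
    if word and word[-1] in 'se':
        word = word[:-1]
    count, prev = 0, False
    for ch in word:
        vowel = ch in 'aeiou' or (ch == 'y' and not prev)
        count += vowel and not prev
        prev = vowel
    return count
```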

def cli(text):

The Coleman-Liau Index, or CLI, also approximates the US grade level, but it is a more recent index, developed to take advantage of computers.

The CLI uses the average number of characters per 100 words (cphw) and the average number of sentences per 100 words (sphw), and thus avoids the difficulties encountered when counting syllables by computer.

CLI = 0.0588 * cphw - 0.296 * sphw - 15.8
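Since both parameters are per-100-word rates, they are just the per-word averages scaled by 100. A sketch, again with simplified stand-ins so it runs on its own:

```python
def cli(text):
    words = extractWords(text)
    sentences = extractSentences(text)
    cphw = 100 * sum(len(w) for w in words) / len(words)  # chars per 100 words
    sphw = 100 * len(sentences) / len(words)              # sentences per 100 words
    return 0.0588 * cphw - 0.296 * sphw - 15.8

# Simplified stand-ins for the functions specified earlier:
def extractWords(text):
    return [w for w in (''.join(c for c in t if c.isalpha())
                        for t in text.lower().split()) if w]

def extractSentences(text):
    return [s.strip() + '.' for s in text.split('.') if s.strip()]
```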

Testing Your Code

I have provided a function, evalBook(), that you can use to manage the process of evaluating a book. Feel free to comment out readability indexes you haven’t yet implemented.
I’ve also provided three texts for you to play with. The first, ‘test.txt’, is a simple passage taken from the readability formulas website listed above. The output my solution produces is:

>>> evalBook('test.txt')
Evaluating TEST.TXT:
10.59 Automated Readability Score
10.17 Flesch-Kincaid Index
7.28 Coleman-Liau Index

The second, ‘wind.txt’, is the complete text to The Wind in the Willows by Kenneth Grahame. My output:

>>> evalBook('wind.txt')
Evaluating WIND.TXT:
7.47 Automated Readability Score
7.63 Flesch-Kincaid Index
7.23 Coleman-Liau Index

as befits a book intended for young readers. Finally, ‘iliad.txt’ is an English translation of Homer’s Iliad. My output:

>>> evalBook('iliad.txt')
Evaluating ILIAD.TXT:
12.36 Automated Readability Score
10.50 Flesch-Kincaid Index
9.46 Coleman-Liau Index

which I think, correctly, establishes the relative complexity of the language used.