  • Python Natural Language Processing Study Notes (28): 3.12 Exercises

    3.12   Exercises

    1. ☼ Define a string s = 'colorless'. Write a Python statement that changes this to "colourless" using only the slice and concatenation operations.
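
      One way to do it, slicing off the two halves and splicing in the extra letter (a minimal sketch):

      >>> s = 'colorless'
      >>> s = s[:4] + 'u' + s[4:]
      >>> s
      'colourless'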

    2. ☼ We can use the slice notation to remove morphological endings on words. For example, 'dogs'[:-1] removes the last character of dogs, leaving dog. Use slice notation to remove the affixes from these words (we've inserted a hyphen to indicate the affix boundary, but omit this from your strings): dish-es, run-ning, nation-ality, un-do, pre-heat.
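
      For example, counting characters from the appropriate end in each case:

      >>> 'dishes'[:-2]
      'dish'
      >>> 'running'[:-4]
      'run'
      >>> 'nationality'[:-5]
      'nation'
      >>> 'undo'[2:]
      'do'
      >>> 'preheat'[3:]
      'heat'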

    3. ☼ We saw how we can generate an IndexError by indexing beyond the end of a string. Is it possible to construct an index that goes too far to the left, before the start of the string?

    4. ☼ We can specify a "step" size for the slice. The following returns every second character within the slice: monty[6:11:2]. It also works in the reverse direction: monty[10:5:-2]. Try these for yourself, then experiment with different step values.

    5. ☼ What happens if you ask the interpreter to evaluate monty[::-1]? Explain why this is a reasonable result.

    6. ☼ Describe the class of strings matched by the following regular expressions.

      1. [a-zA-Z]+
      2. [A-Z][a-z]*
      3. p[aeiou]{,2}t
      4. \d+(\.\d+)?
      5. ([^aeiou][aeiou][^aeiou])*
      6. \w+|[^\w\s]+

      Test your answers using nltk.re_show().
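
      nltk.re_show() prints the string with every match of the pattern marked, which makes it easy to test a hypothesis, e.g.:

      >>> import nltk
      >>> nltk.re_show('[a-zA-Z]+', 'colorless green ideas sleep furiously!')
      {colorless} {green} {ideas} {sleep} {furiously}!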

    7. ☼ Write regular expressions to match the following classes of strings (a sketch for both follows the list):

      1. A single determiner (assume that a, an, and the are the only determiners).
      2. An arithmetic expression using integers, addition, and multiplication, such as 2*3+8.
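
      One possible answer for each, tested with re.findall() and re.match(); the example strings are just illustrations:

      >>> import re
      >>> re.findall(r'\b(a|an|the)\b', 'an owl and the cat')
      ['an', 'the']
      >>> re.match(r'^\d+([+*]\d+)*$', '2*3+8') is not None
      True
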
    8. ☼ Write a utility function that takes a URL as its argument, and returns the contents of the URL, with all HTML markup removed. Use urllib.urlopen to access the contents of the URL, e.g. raw_contents = urllib.urlopen('http://www.nltk.org/').read().
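
      A rough sketch, using the Python 2 urllib interface assumed in this chapter; the tag-stripping pattern is crude (for instance, it removes <script> tags but not the JavaScript between them), and the function name url_text is just a placeholder:

      from urllib import urlopen   # Python 2, as in this chapter
      import re

      def url_text(url):
          """Fetch url and crudely strip anything that looks like an HTML tag."""
          raw = urlopen(url).read()
          text = re.sub(r'<[^>]*>', ' ', raw)       # drop tags
          return re.sub(r'\s+', ' ', text).strip()  # normalize whitespace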

    9. ☼ Save some text into a file corpus.txt. Define a function load(f) that reads from the file named in its sole argument, and returns a string containing the text of the file.

      1. Use nltk.regexp_tokenize() to create a tokenizer that tokenizes the various kinds of punctuation in this text. Use one multi-line regular expression, with inline comments, using the verbose flag (?x). (A sketch follows this list.)
      2. Use nltk.regexp_tokenize() to create a tokenizer that tokenizes the following kinds of expression: monetary amounts; dates; names of people and organizations.
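
      A sketch of load() and the first tokenizer; the punctuation pattern is only a starting point:

      import nltk

      def load(f):
          """Return the contents of the file named f as a single string."""
          return open(f).read()

      pattern = r'''(?x)          # verbose flag: whitespace and comments ignored
          \.\.\.                  # ellipsis
        | --                      # double hyphen
        | [.,;:!?"'()\[\]{}]      # single punctuation characters
      '''
      print nltk.regexp_tokenize(load('corpus.txt'), pattern)
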
    10. ☼ Rewrite the following loop as a list comprehension:

       
      >>> sent = ['The', 'dog', 'gave', 'John', 'the', 'newspaper']
      >>> result = []
      >>> for word in sent:
      ...     word_len = (word, len(word))
      ...     result.append(word_len)
      >>> result
      [('The', 3), ('dog', 3), ('gave', 4), ('John', 4), ('the', 3), ('newspaper', 9)]
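
      One possible comprehension producing the same list:

      >>> [(word, len(word)) for word in sent]
      [('The', 3), ('dog', 3), ('gave', 4), ('John', 4), ('the', 3), ('newspaper', 9)]
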
    11. ☼ Define a string raw containing a sentence of your own choosing. Now, split raw on some character other than space, such as 's'.

    12. ☼ Write a for loop to print out the characters of a string, one per line.

    13. ☼ What is the difference between calling split on a string with no argument or with ' ' as the argument, e.g. sent.split() versus sent.split(' ')? What happens when the string being split contains tab characters, consecutive space characters, or a sequence of tabs and spaces? (In IDLE you will need to use '\t' to enter a tab character.)
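
      A quick experiment shows the difference: with no argument, split() treats any run of whitespace as a single delimiter, while with ' ' it splits at every individual space character:

      >>> s = 'one  two\tthree'
      >>> s.split()
      ['one', 'two', 'three']
      >>> s.split(' ')
      ['one', '', 'two\tthree']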

    14. ☼ Create a variable words containing a list of words. Experiment with words.sort() and sorted(words). What is the difference?

    15. ☼ Explore the difference between strings and integers by typing the following at a Python prompt: "3" * 7 and 3 * 7. Try converting between strings and integers using int("3") and str(3).

    16. ☼ Earlier, we asked you to use a text editor to create a file called test.py, containing the single line monty = 'Monty Python'. If you haven't already done this (or can't find the file), go ahead and do it now. Next, start up a new session with the Python interpreter, and enter the expression monty at the prompt. You will get an error from the interpreter. Now, try the following (note that you have to leave off the .py part of the filename):

       
      >>> from test import monty 
      >>> monty

      This time, Python should return with a value. You can also try import test, in which case Python should be able to evaluate the expression test.monty at the prompt.

    17. ☼ What happens when the formatting strings %6s and %-6s are used to display strings that are longer than six characters?

    18. ◑ Read in some text from a corpus, tokenize it, and print the list of all wh-word types that occur. (wh-words in English are used in questions, relative clauses and exclamations: who, which, what, and so on.) Print them in order. Are any words duplicated in this list, because of the presence of case distinctions or punctuation?
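
      A rough sketch over one Gutenberg text; matching on the prefix wh overgenerates (it also catches words like whale), so refine it as needed:

      >>> from nltk.corpus import gutenberg
      >>> tokens = gutenberg.words('austen-emma.txt')
      >>> sorted(set(w for w in tokens if w.lower().startswith('wh')))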

    19. ◑ Create a file consisting of words and (made up) frequencies, where each line consists of a word, the space character, and a positive integer, e.g. fuzzy 53. Read the file into a Python list using open(filename).readlines(). Next, break each line into its two fields using split(), and convert the number into an integer using int(). The result should be a list of the form: [['fuzzy', 53], ...].
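
      A sketch, assuming the file is called frequencies.txt:

      pairs = []
      for line in open('frequencies.txt').readlines():
          word, freq = line.split()
          pairs.append([word, int(freq)])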

    20. ◑ Write code to access a favorite webpage and extract some text from it. For example, access a weather site and extract the forecast top temperature for your town or city today.

    21. ◑ Write a function unknown() that takes a URL as its argument, and returns a list of unknown words that occur on that webpage. In order to do this, extract all substrings consisting of lowercase letters (using re.findall()) and remove any items from this set that occur in the Words Corpus (nltk.corpus.words). Try to categorize these words manually and discuss your findings.
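
      A minimal sketch; note that it searches the raw HTML, so markup fragments will show up among the "unknown" words unless you strip the tags first (see the earlier exercise):

      import re
      from urllib import urlopen        # Python 2, as in this chapter
      from nltk.corpus import words

      def unknown(url):
          """Lowercase-letter substrings on the page that are absent from the Words Corpus."""
          raw = urlopen(url).read()
          candidates = set(re.findall(r'[a-z]+', raw))
          return sorted(candidates - set(words.words()))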

    22. ◑ Examine the results of processing the URL http://news.bbc.co.uk/ using the regular expressions suggested above. You will see that there is still a fair amount of non-textual data there, particularly JavaScript commands. You may also find that sentence breaks have not been properly preserved. Define further regular expressions that improve the extraction of text from this web page.

    23. ◑ Are you able to write a regular expression to tokenize text in such a way that the word don't is tokenized into do and n't? Explain why this regular expression won't work: «n't|\w+».
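
      You can watch the failure directly: \w+ matches greedily, so the n of n't has already been consumed before the alternative n't is ever tried:

      >>> import nltk
      >>> nltk.regexp_tokenize("don't", r"n't|\w+")
      ['don', 't']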

    24. ◑ Try to write code to convert text into hAck3r, using regular expressions and substitution, where e → 3, i → 1, o → 0, l → |, s → 5, . → 5w33t!, ate → 8. Normalize the text to lowercase before converting it. Add more substitutions of your own. Now try to map s to two different values: $ for word-initial s, and 5 for word-internal s.
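
      A sketch; the order of the substitutions matters (ate must be replaced before e, and word-initial s before the remaining s):

      import re

      def hackerize(text):
          text = text.lower()
          text = text.replace('ate', '8')     # before 'e' is rewritten
          text = re.sub(r'\bs', '$', text)    # word-initial s
          text = text.replace('s', '5')       # all remaining s
          text = text.replace('.', '5w33t!')
          text = text.replace('e', '3')
          text = text.replace('i', '1')
          text = text.replace('o', '0')
          text = text.replace('l', '|')
          return text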

    25. ◑ Pig Latin is a simple transformation of English text. Each word of the text is converted as follows: move any consonant (or consonant cluster) that appears at the start of the word to the end, then append ay, e.g. string → ingstray, idle → idleay. http://en.wikipedia.org/wiki/Pig_Latin

      1. Write a function to convert a word to Pig Latin (a sketch follows this list).
      2. Write code that converts text, instead of individual words.
      3. Extend it further to preserve capitalization, to keep qu together (i.e. so that quiet becomes ietquay), and to detect when y is used as a consonant (e.g. yellow) vs a vowel (e.g. style).
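
      A sketch of the first sub-task; re.match() with a negated vowel class picks off the initial consonant cluster (qu and consonant y are left for the third sub-task):

      >>> import re
      >>> def pig_latin(word):
      ...     onset = re.match(r'[^aeiouAEIOU]*', word).group()
      ...     return word[len(onset):] + onset + 'ay'
      >>> pig_latin('string')
      'ingstray'
      >>> pig_latin('idle')
      'idleay'
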
    26. ◑ Download some text from a language that has vowel harmony (e.g. Hungarian), extract the vowel sequences of words, and create a vowel bigram table.

    27. ◑ Python's random module includes a function choice() which randomly chooses an item from a sequence, e.g. choice("aehh ") will produce one of four possible characters, with the letter h being twice as frequent as the others. Write a generator expression that produces a sequence of 500 randomly chosen letters drawn from the string "aehh ", and put this expression inside a call to the ''.join() function, to concatenate them into one long string. You should get a result that looks like uncontrolled sneezing or maniacal laughter: he haha ee heheeh eha. Use split() and join() again to normalize the whitespace in this string.
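
      One way to write it:

      >>> import random
      >>> text = ''.join(random.choice("aehh ") for i in range(500))
      >>> text = ' '.join(text.split())   # normalize the whitespace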

    28. ◑ Consider the numeric expressions in the following sentence from the MedLine Corpus: The corresponding free cortisol fractions in these sera were 4.53 +/- 0.15% and 8.16 +/- 0.23%, respectively. Should we say that the numeric expression 4.53 +/- 0.15% is three words? Or should we say that it's a single compound word? Or should we say that it is actually nine words, since it's read "four point five three, plus or minus fifteen percent"? Or should we say that it's not a "real" word at all, since it wouldn't appear in any dictionary? Discuss these different possibilities. Can you think of application domains that motivate at least two of these answers?

    29. ◑ Readability measures are used to score the reading difficulty of a text, for the purposes of selecting texts of appropriate difficulty for language learners. Let us define μw to be the average number of letters per word, and μs to be the average number of words per sentence, in a given text. The Automated Readability Index (ARI) of the text is defined to be: 4.71 μw + 0.5 μs - 21.43. Compute the ARI score for various sections of the Brown Corpus, including section f (popular lore) and j (learned). Make use of the fact that nltk.corpus.brown.words() produces a sequence of words, while nltk.corpus.brown.sents() produces a sequence of sentences.
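
      A sketch; in the NLTK interface to the Brown Corpus, section F is categories='lore' and section J is categories='learned'. Note that punctuation tokens are counted as words here; you may wish to filter them out:

      from nltk.corpus import brown

      def ari(category):
          words = brown.words(categories=category)
          sents = brown.sents(categories=category)
          mu_w = sum(len(w) for w in words) / float(len(words))   # letters per word
          mu_s = sum(len(s) for s in sents) / float(len(sents))   # words per sentence
          return 4.71 * mu_w + 0.5 * mu_s - 21.43

      print ari('lore'), ari('learned')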

    30. ◑ Use the Porter Stemmer to normalize some tokenized text, calling the stemmer on each word. Do the same thing with the Lancaster Stemmer and see if you observe any differences.
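
      For example, on a small list of tokens:

      >>> import nltk
      >>> porter = nltk.PorterStemmer()
      >>> lancaster = nltk.LancasterStemmer()
      >>> tokens = nltk.word_tokenize("The women were lying about the maximum")
      >>> [porter.stem(t) for t in tokens]
      >>> [lancaster.stem(t) for t in tokens]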

    31. ◑ Define the variable saying to contain the list ['After', 'all', 'is', 'said', 'and', 'done', ',', 'more', 'is', 'said', 'than', 'done', '.']. Process this list using a for loop, and store the result in a new list lengths. Hint: begin by assigning the empty list to lengths, using lengths = []. Then each time through the loop, use append() to add another length value to the list.

    32. ◑ Define a variable silly to contain the string: 'newly formed bland ideas are inexpressible in an infuriating way'. (According to Wikipedia, this happens to be a legitimate interpretation that bilingual English-Spanish speakers can assign to Chomsky's famous nonsense phrase colorless green ideas sleep furiously.) Now write code to perform the following tasks; a sketch of the first two follows the list:

      1. Split silly into a list of strings, one per word, using Python's split() operation, and save this to a variable called bland.
      2. Extract the second letter of each word in silly and join them into a string, to get 'eoldrnnnna'.
      3. Combine the words in bland back into a single string, using join(). Make sure the words in the resulting string are separated with whitespace.
      4. Print the words of silly in alphabetical order, one per line.
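
      A sketch of the first two tasks:

      >>> silly = 'newly formed bland ideas are inexpressible in an infuriating way'
      >>> bland = silly.split()
      >>> ''.join(w[1] for w in bland)
      'eoldrnnnna'
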
    33. ◑ The index() function can be used to look up items in sequences. For example, 'inexpressible'.index('e') tells us the index of the first position of the letter e.

      1. What happens when you look up a substring, e.g. 'inexpressible'.index('re')?
      2. Define a variable words containing a list of words. Now use words.index() to look up the position of an individual word.
      3. Define a variable silly as in the exercise above. Use the index() function in combination with list slicing to build a list phrase consisting of all the words up to (but not including) in in silly (see the sketch below).
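
      For the third sub-task, index() gives the slice boundary directly:

      >>> words = silly.split()
      >>> phrase = words[:words.index('in')]
      >>> phrase
      ['newly', 'formed', 'bland', 'ideas', 'are', 'inexpressible']
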
    34. ◑ Write code to convert nationality adjectives like Canadian and Australian to their corresponding nouns Canada and Australia (see http://en.wikipedia.org/wiki/List_of_adjectival_forms_of_place_names).

    35. ◑ Read the LanguageLog post on phrases of the form as best as p can and as best p can, where p is a pronoun. Investigate this phenomenon with the help of a corpus and the findall() method for searching tokenized text described in Section 3.5. http://itre.cis.upenn.edu/~myl/languagelog/archives/002733.html

    36. ◑ Study the lolcat version of the book of Genesis, accessible as nltk.corpus.genesis.words('lolcat.txt'), and the rules for converting text into lolspeak at http://www.lolcatbible.com/index.php?title=How_to_speak_lolcat. Define regular expressions to convert English words into corresponding lolspeak words.

    37. ◑ Read about the re.sub() function for string substitution using regular expressions, using help(re.sub) and by consulting the further readings for this chapter. Use re.sub in writing code to remove HTML tags from an HTML file, and to normalize whitespace.

    38. ★ An interesting challenge for tokenization is words that have been split across a line-break. E.g. if long-term is split, then we have the string long-\nterm.

      1. Write a regular expression that identifies words that are hyphenated at a line-break. The expression will need to include the \n character. (A sketch of the first two sub-tasks follows this list.)
      2. Use re.sub() to remove the \n character from these words.
      3. How might you identify words that should not remain hyphenated once the newline is removed, e.g. 'encyclo-\npedia'.
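
      A sketch of the first two sub-tasks:

      >>> import re
      >>> text = 'a long-\nterm goal'
      >>> re.findall(r'\w+-\n\w+', text)
      ['long-\nterm']
      >>> re.sub(r'-\n', '-', text)    # keep the hyphen, drop the newline
      'a long-term goal'
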
    39. ★ Read the Wikipedia entry on Soundex. Implement this algorithm in Python.
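
      A simplified sketch; it omits the rule that consonants separated by h or w are coded once, so consult the Wikipedia entry for the full algorithm:

      def soundex(name):
          codes = {'b': '1', 'f': '1', 'p': '1', 'v': '1',
                   'c': '2', 'g': '2', 'j': '2', 'k': '2',
                   'q': '2', 's': '2', 'x': '2', 'z': '2',
                   'd': '3', 't': '3', 'l': '4',
                   'm': '5', 'n': '5', 'r': '6'}
          name = name.lower()
          encoded = [codes.get(c, '0') for c in name]   # '0' marks vowels and h, w, y
          collapsed = [encoded[0]]
          for code in encoded[1:]:                      # merge adjacent identical codes
              if code != collapsed[-1]:
                  collapsed.append(code)
          digits = [c for c in collapsed[1:] if c != '0']
          return (name[0].upper() + ''.join(digits) + '000')[:4]

      For example, soundex('Robert') should give 'R163'.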

    40. ★ Obtain raw texts from two or more genres and compute their respective reading difficulty scores as in the earlier exercise on reading difficulty. E.g. compare ABC Rural News and ABC Science News (nltk.corpus.abc). Use Punkt to perform sentence segmentation.
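
      The pre-trained Punkt model ships with NLTK; a sketch for one of the ABC files:

      >>> import nltk
      >>> raw = nltk.corpus.abc.raw('rural.txt')
      >>> sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
      >>> sents = sent_tokenizer.tokenize(raw)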

    41. ★ Rewrite the following nested loop as a nested list comprehension:

       
      >>> words = ['attribution', 'confabulation', 'elocution',
      ...          'sequoia', 'tenacious', 'unidirectional']
      >>> vsequences = set()
      >>> for word in words:
      ...     vowels = []
      ...     for char in word:
      ...         if char in 'aeiou':
      ...             vowels.append(char)
      ...     vsequences.add(''.join(vowels))
      >>> sorted(vsequences)
      ['aiuio', 'eaiou', 'eouio', 'euoia', 'oauaio', 'uiieioa']
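
      One possible rewrite, nesting one comprehension inside the other:

      >>> vsequences = set(''.join([char for char in word if char in 'aeiou'])
      ...                  for word in words)
      >>> sorted(vsequences)
      ['aiuio', 'eaiou', 'eouio', 'euoia', 'oauaio', 'uiieioa']
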
    42. ★ Use WordNet to create a semantic index for a text collection. Extend the concordance search program in Example 3.6, indexing each word using the offset of its first synset, e.g. wn.synsets('dog')[0].offset (and optionally the offset of some of its ancestors in the hypernym hierarchy).

    43. ★ With the help of a multilingual corpus such as the Universal Declaration of Human Rights Corpus (nltk.corpus.udhr), and NLTK's frequency distribution and rank correlation functionality (nltk.FreqDist, nltk.spearman_correlation), develop a system that guesses the language of a previously unseen text. For simplicity, work with a single character encoding and just a few languages.
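
      A sketch of the rank-correlation approach; the choice of fileids and of character-level features is just one possibility:

      import nltk
      from nltk.corpus import udhr

      languages = ['English-Latin1', 'German_Deutsch-Latin1', 'French_Francais-Latin1']

      def char_ranks(text):
          """Rank the letters of text by frequency, most frequent first."""
          fd = nltk.FreqDist(c.lower() for c in text if c.isalpha())
          return list(nltk.ranks_from_sequence(c for c, n in fd.most_common()))

      profiles = dict((lang, char_ranks(udhr.raw(lang))) for lang in languages)

      def guess_language(text):
          ranks = char_ranks(text)
          return max(languages,
                     key=lambda lang: nltk.spearman_correlation(profiles[lang], ranks))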

    44. ★ Write a program that processes a text and discovers cases where a word has been used with a novel sense. For each word, compute the WordNet similarity between all synsets of the word and all synsets of the words in its context. (Note that this is a crude approach; doing it well is a difficult, open research problem.)

    45. ★ Read the article on normalization of non-standard words (Sproat et al, 2001), and implement a similar system for text normalization.
