  • NLTK

    python -m pip install nltk==3.5
    python -m pip install numpy matplotlib
    python
    >>> import nltk
    >>> nltk.download()
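
    Calling nltk.download() with no arguments opens an interactive downloader window. If you'd rather fetch only the data these examples rely on, you can download individual packages by name. The identifiers below are NLTK's standard data package names (a minimal sketch, not an exhaustive list):

    import nltk

    # Grab just the resources this walkthrough uses.
    nltk.download("punkt")                        # models for sent_tokenize()/word_tokenize()
    nltk.download("stopwords")                    # stop word lists
    nltk.download("averaged_perceptron_tagger")   # POS tagger model
    nltk.download("wordnet")                      # lexical database for WordNetLemmatizer
    nltk.download("tagsets")                      # documentation shown by nltk.help.upenn_tagset()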

    Tokenizing

    By tokenizing, you can split up text by word or by sentence, which lets you work with smaller pieces of the text. Here's how to import the relevant parts of NLTK and set up an example string:

    >>> from nltk.tokenize import sent_tokenize, word_tokenize
    >>> example_string = """
    ... Muad'Dib learned rapidly because his first training was in how to learn.
    ... And the first lesson of all was the basic trust that he could learn.
    ... It's shocking to find how many people do not believe they can learn,
    ... and how many more believe learning to be difficult."""

    You can use sent_tokenize() to split up example_string into sentences:

    >>> sent_tokenize(example_string)
    ["Muad'Dib learned rapidly because his first training was in how to learn.",
    'And the first lesson of all was the basic trust that he could learn.',
    "It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult."]

    Tokenizing example_string by sentence gives you a list of three strings that are sentences:

    "Muad'Dib learned rapidly because his first training was in how to learn."
    'And the first lesson of all was the basic trust that he could learn.'
    "It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult."

    Now try tokenizing example_string by word:


    >>> word_tokenize(example_string)
    ["Muad'Dib",
    'learned',
    'rapidly',
    'because',
    'his',
    'first',
    'training',
    'was',
    'in',
    'how',
    'to',
    'learn',
    '.',
    'And',
    'the',
    'first',
    'lesson',
    'of',
    'all',
    'was',
    'the',
    'basic',
    'trust',
    'that',
    'he',
    'could',
    'learn',
    '.',
    'It',
    "'s",
    'shocking',
    'to',
    'find',
    'how',
    'many',
    'people',
    'do',
    'not',
    'believe',
    'they',
    'can',
    'learn',
    ',',
    'and',
    'how',
    'many',
    'more',
    'believe',
    'learning',
    'to',
    'be',
    'difficult',
    '.']

    You got a list of strings that NLTK considers to be words, such as:

    "Muad'Dib"
    'training'
    'how'
    But the following strings were also considered to be words:

    "'s"
    ','
    '.'

    See how "It's" was split at the apostrophe to give you 'It' and "'s", but "Muad'Dib" was left whole? This happened because NLTK knows that 'It' and "'s" (a contraction of “is”) are two distinct words, so it counted them separately. But "Muad'Dib" isn’t an accepted contraction like "It's", so it wasn’t read as two separate words and was left intact.
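
    If you'd rather keep contractions such as "It's" in one piece, NLTK also provides RegexpTokenizer, which splits text on a pattern you supply. Here's a minimal sketch; the pattern is an illustrative choice, not something the tutorial prescribes:

    >>> from nltk.tokenize import RegexpTokenizer
    >>> # Treat runs of word characters and apostrophes as single tokens;
    >>> # punctuation is dropped rather than emitted as its own token.
    >>> tokenizer = RegexpTokenizer(r"[\w']+")
    >>> tokenizer.tokenize("It's shocking to find how many people do not believe they can learn.")
    ["It's", 'shocking', 'to', 'find', 'how', 'many', 'people', 'do', 'not', 'believe', 'they', 'can', 'learn']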


    Filtering Stop Words

    Stop words are words that you want to ignore, so you filter them out of your text when you’re processing it. Very common words like 'in', 'is', and 'an' are often used as stop words since they don’t add a lot of meaning to a text in and of themselves.

    Here’s how to import the relevant parts of NLTK in order to filter out stop words:
    >>> import nltk
    >>> nltk.download("stopwords")
    >>> from nltk.corpus import stopwords
    >>> from nltk.tokenize import word_tokenize
    >>> stop_words = set(stopwords.words("english"))
    >>> stop_words
    {'each', "needn't", "doesn't", 'he', 'all', "wouldn't", 'has', 'him', "don't", 'herself', 'after', 'nor', 'here', 'further', 'hers', 'how', 'some', 'as', 'up', 'd', 'ma', 'this', 'their', 'so', 'during', 'my', "it's", 'its', 'and', 'ourselves', "you'll", 'haven', 'whom', 'at', 'itself', 'be', 'from', 'just', 'until', 'been', 'theirs', "aren't", 'why', 'yours', 'when', 'no', 'it', 'had', 'do', 'same', 'such', 's', 'most', 'into', "mightn't", 'your', 'y', 'that', 'doesn', 't', 'about', "isn't", 'won', 'doing', 'was', 'have', 'than', 'very', 'can', 'didn', 'those', 'me', 'or', 'once', 'm', 're', 'ours', 'again', 'any', 'aren', 'what', 'were', 'a', 'for', 'off', 'them', "haven't", 'isn', 'o', 'more', 'our', 'she', 'couldn', 'yourselves', 'in', "didn't", "mustn't", "couldn't", 'then', 'only', "you're", "won't", 'having', 'ain', "weren't", 'where', 'which', 'before', "shan't", 'hadn', 'am', "should've", 'is', 'mightn', 'below', 'her', 'myself', 'on', "that'll", 'mustn', 'i', 'does', 'don', "you'd", 'but', 'both', 'by', 'who', 'an', 'there', 'shan', 'are', "hadn't", "she's", "you've", 'being', 'between', 'hasn', 'to', 'you', 'his', 'down', 'own', 'did', 'out', 'should', 'shouldn', 'other', 'against', 'themselves', 'if', 'the', 'will', 'wasn', 'above', 'not', 'now', 'because', 'll', 'we', 'these', 'weren', 'they', 'few', 'yourself', 'under', 'himself', 'over', 'needn', 'through', "hasn't", 'while', 'of', 'too', 'with', 'wouldn', "wasn't", "shouldn't", 've'}

    >>> worf_quote = "Sir, I protest. I am not a merry man!"

    >>> words_in_quote = word_tokenize(worf_quote)
    >>> words_in_quote
    ['Sir', ',', 'I', 'protest', '.', 'I', 'am', 'not', 'a', 'merry', 'man', '!']

    >>> filtered_list = [word for word in words_in_quote if word.casefold() not in stop_words]
    >>> filtered_list
    ['Sir', ',', 'protest', '.', 'merry', 'man', '!']
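
    Note that 'I' was filtered out even though it's capitalized in the quote: word.casefold() lowercases each token before the membership test, and 'i' is in the stop words set. If you'll be filtering more than one text, it can help to wrap the steps in a small function. A minimal sketch, reusing the imports above (the function name is ours, not NLTK's):

    >>> def remove_stop_words(text):
    ...     """Tokenize text and drop English stop words, case-insensitively."""
    ...     stop_words = set(stopwords.words("english"))
    ...     return [word for word in word_tokenize(text)
    ...             if word.casefold() not in stop_words]
    ...
    >>> remove_stop_words("Sir, I protest. I am not a merry man!")
    ['Sir', ',', 'protest', '.', 'merry', 'man', '!']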

    Stemming
    Stemming is a text processing task in which you reduce words to their root, which is the core part of a word. For example, the words “helping” and “helper” share the root “help.” Stemming allows you to zero in on the basic meaning of a word rather than all the details of how it’s being used. NLTK has more than one stemmer, but you’ll be using the Porter stemmer.

    Here’s how to import the relevant parts of NLTK in order to start stemming:

    >>> from nltk.stem import PorterStemmer
    >>> from nltk.tokenize import word_tokenize
    >>> stemmer = PorterStemmer()
    >>> string_for_stemming = """
    ... The crew of the USS Discovery discovered many discoveries.
    ... Discovering is what explorers do."""
    >>> words = word_tokenize(string_for_stemming)
    >>> words
    ['The', 'crew', 'of', 'the', 'USS', 'Discovery', 'discovered', 'many', 'discoveries', '.', 'Discovering', 'is', 'what', 'explorers', 'do', '.']

    >>> stemmed_words = [stemmer.stem(word) for word in words]
    >>> stemmed_words
    ['the', 'crew', 'of', 'the', 'uss', 'discoveri', 'discov', 'mani', 'discoveri', '.', 'discov', 'is', 'what', 'explor', 'do', '.']
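
    Notice that the stems aren't always words: 'discoveri' and 'discov' are fragments. NLTK ships several stemmers besides Porter; the Snowball stemmer ("Porter2") is a common alternative if you want to compare them. A quick sketch (exact stems can vary between NLTK versions):

    >>> from nltk.stem import SnowballStemmer
    >>> snowball = SnowballStemmer("english")
    >>> for word in ["Discovery", "discovered", "discoveries", "Discovering"]:
    ...     # Print each word next to its Porter and Snowball stems.
    ...     print(word, stemmer.stem(word), snowball.stem(word))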

    Tagging Parts of Speech
    Part of speech is a grammatical term that deals with the roles words play when you use them together in sentences. Tagging parts of speech, or POS tagging, is the task of labeling the words in your text according to their part of speech.
    Here's how to tag a quote by part of speech:


    >>> from nltk.tokenize import word_tokenize
    >>> sagan_quote = """
    ... If you wish to make an apple pie from scratch,
    ... you must first invent the universe."""
    >>> sagan_quote
    '\nIf you wish to make an apple pie from scratch,\nyou must first invent the universe.'
    >>> words_in_sagan_quote = word_tokenize(sagan_quote)
    >>> words_in_sagan_quote
    ['If', 'you', 'wish', 'to', 'make', 'an', 'apple', 'pie', 'from', 'scratch', ',', 'you', 'must', 'first', 'invent', 'the', 'universe', '.']
    >>> import nltk
    >>> nltk.pos_tag(words_in_sagan_quote)
    [('If', 'IN'), ('you', 'PRP'), ('wish', 'VBP'), ('to', 'TO'), ('make', 'VB'), ('an', 'DT'), ('apple', 'NN'), ('pie', 'NN'), ('from', 'IN'), ('scratch', 'NN'), (',', ','), ('you', 'PRP'), ('must', 'MD'), ('first', 'VB'), ('invent', 'VB'), ('the', 'DT'), ('universe', 'NN'), ('.', '.')]
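
    Once you have a list of (word, tag) tuples, ordinary Python is enough to summarize it. For example, this small sketch (not part of the original walkthrough) counts how often each tag appears in the quote:

    >>> from collections import Counter
    >>> # Tally the POS tags across the tagged quote.
    >>> Counter(tag for word, tag in nltk.pos_tag(words_in_sagan_quote))
    Counter({'NN': 4, 'VB': 3, 'IN': 2, 'PRP': 2, 'DT': 2, 'VBP': 1, 'TO': 1, ',': 1, 'MD': 1, '.': 1})

    To see what each tag means, ask NLTK for the Penn Treebank tagset documentation: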
    >>> nltk.help.upenn_tagset()
    $: dollar
    $ -$ --$ A$ C$ HK$ M$ NZ$ S$ U.S.$ US$
    '': closing quotation mark
    ' ''
    (: opening parenthesis
    ( [ {
    ): closing parenthesis
    ) ] }
    ,: comma
    ,
    --: dash
    --
    .: sentence terminator
    . ! ?
    :: colon or ellipsis
    : ; ...
    CC: conjunction, coordinating
    & 'n and both but either et for less minus neither nor or plus so
    therefore times v. versus vs. whether yet
    CD: numeral, cardinal
    mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-
    seven 1987 twenty '79 zero two 78-degrees eighty-four IX '60s .025
    fifteen 271,124 dozen quintillion DM2,000 ...
    DT: determiner
    all an another any both del each either every half la many much nary
    neither no some such that the them these this those
    EX: existential there
    there
    FW: foreign word
    gemeinschaft hund ich jeux habeas Haementeria Herr K'ang-si vous
    lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte
    terram fiche oui corporis ...
    IN: preposition or conjunction, subordinating
    astride among uppon whether out inside pro despite on by throughout
    below within for towards near behind atop around if like until below
    next into if beside ...
    JJ: adjective or numeral, ordinal
    third ill-mannered pre-war regrettable oiled calamitous first separable
    ectoplasmic battery-powered participatory fourth still-to-be-named
    multilingual multi-disciplinary ...
    JJR: adjective, comparative
    bleaker braver breezier briefer brighter brisker broader bumper busier
    calmer cheaper choosier cleaner clearer closer colder commoner costlier
    cozier creamier crunchier cuter ...
    JJS: adjective, superlative
    calmest cheapest choicest classiest cleanest clearest closest commonest
    corniest costliest crassest creepiest crudest cutest darkest deadliest
    dearest deepest densest dinkiest ...
    LS: list item marker
    A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005
    SP-44007 Second Third Three Two * a b c d first five four one six three
    two
    MD: modal auxiliary
    can cannot could couldn't dare may might must need ought shall should
    shouldn't will would
    NN: noun, common, singular or mass
    common-carrier cabbage knuckle-duster Casino afghan shed thermostat
    investment slide humour falloff slick wind hyena override subhumanity
    machinist ...
    NNP: noun, proper, singular
    Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos
    Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA
    Shannon A.K.C. Meltex Liverpool ...
    NNPS: noun, proper, plural
    Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists
    Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques
    Apache Apaches Apocrypha ...
    NNS: noun, common, plural
    undergraduates scotches bric-a-brac products bodyguards facets coasts
    divestitures storehouses designs clubs fragrances averages
    subjectivists apprehensions muses factory-jobs ...
    PDT: pre-determiner
    all both half many quite such sure this
    POS: genitive marker
    ' 's
    PRP: pronoun, personal
    hers herself him himself hisself it itself me myself one oneself ours
    ourselves ownself self she thee theirs them themselves they thou thy us
    PRP$: pronoun, possessive
    her his mine my our ours their thy your
    RB: adverb
    occasionally unabatingly maddeningly adventurously professedly
    stirringly prominently technologically magisterially predominately
    swiftly fiscally pitilessly ...
    RBR: adverb, comparative
    further gloomier grander graver greater grimmer harder harsher
    healthier heavier higher however larger later leaner lengthier less-
    perfectly lesser lonelier longer louder lower more ...
    RBS: adverb, superlative
    best biggest bluntest earliest farthest first furthest hardest
    heartiest highest largest least less most nearest second tightest worst
    RP: particle
    aboard about across along apart around aside at away back before behind
    by crop down ever fast for forth from go high i.e. in into just later
    low more off on open out over per pie raising start teeth that through
    under unto up up-pp upon whole with you
    SYM: symbol
    % & ' '' ''. ) ). * + ,. < = > @ A[fj] U.S U.S.S.R * ** ***
    TO: "to" as preposition or infinitive marker
    to
    UH: interjection
    Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen
    huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly
    man baby diddle hush sonuvabitch ...
    VB: verb, base form
    ask assemble assess assign assume atone attention avoid bake balkanize
    bank begin behold believe bend benefit bevel beware bless boil bomb
    boost brace break bring broil brush build ...
    VBD: verb, past tense
    dipped pleaded swiped regummed soaked tidied convened halted registered
    cushioned exacted snubbed strode aimed adopted belied figgered
    speculated wore appreciated contemplated ...
    VBG: verb, present participle or gerund
    telegraphing stirring focusing angering judging stalling lactating
    hankerin' alleging veering capping approaching traveling besieging
    encrypting interrupting erasing wincing ...
    VBN: verb, past participle
    multihulled dilapidated aerosolized chaired languished panelized used
    experimented flourished imitated reunifed factored condensed sheared
    unsettled primed dubbed desired ...
    VBP: verb, present tense, not 3rd person singular
    predominate wrap resort sue twist spill cure lengthen brush terminate
    appear tend stray glisten obtain comprise detest tease attract
    emphasize mold postpone sever return wag ...
    VBZ: verb, present tense, 3rd person singular
    bases reconstructs marks mixes displeases seals carps weaves snatches
    slumps stretches authorizes smolders pictures emerges stockpiles
    seduces fizzes uses bolsters slaps speaks pleads ...
    WDT: WH-determiner
    that what whatever which whichever
    WP: WH-pronoun
    that what whatever whatsoever which who whom whosoever
    WP$: WH-pronoun, possessive
    whose
    WRB: Wh-adverb
    how however whence whenever where whereby whereever wherein whereof why
    ``: opening quotation mark
    ` ``

    Lemmatizing
    Now that you’re up to speed on parts of speech, you can circle back to lemmatizing. Like stemming, lemmatizing reduces words to their core meaning, but it will give you a complete English word that makes sense on its own instead of just a fragment of a word like 'discoveri'.
    >>> nltk.download("wordnet")
    >>> from nltk.stem import WordNetLemmatizer
    >>> lemmatizer = WordNetLemmatizer()
    >>> lemmatizer.lemmatize("scarves")
    'scarf'
    >>> string_for_lemmatizing = "The friends of DeSoto love scarves."
    >>> words = word_tokenize(string_for_lemmatizing)
    >>> words
    ['The', 'friends', 'of', 'DeSoto', 'love', 'scarves', '.']
    >>> lemmatized_words = [lemmatizer.lemmatize(word) for word in words]
    >>> lemmatized_words
    ['The', 'friend', 'of', 'DeSoto', 'love', 'scarf', '.']
    >>> lemmatizer.lemmatize("worst")
    'worst'
    >>> lemmatizer.lemmatize("worst", pos="a")
    'bad'
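
    By default, lemmatize() treats its argument as a noun; the pos parameter tells it otherwise, which is why 'worst' only became 'bad' once you passed pos="a" for adjective. To lemmatize running text, a common pattern is to POS-tag it first and map the Penn Treebank tags onto WordNet's constants. A sketch (the helper get_wordnet_pos is our name, not an NLTK function, and the output assumes the tagger labels the sentence as expected):

    >>> from nltk.corpus import wordnet
    >>> def get_wordnet_pos(treebank_tag):
    ...     """Map a Penn Treebank tag onto a WordNet POS constant."""
    ...     if treebank_tag.startswith("J"):
    ...         return wordnet.ADJ
    ...     if treebank_tag.startswith("V"):
    ...         return wordnet.VERB
    ...     if treebank_tag.startswith("R"):
    ...         return wordnet.ADV
    ...     return wordnet.NOUN  # the same default lemmatize() uses
    ...
    >>> tagged = nltk.pos_tag(word_tokenize("The friends were wearing their warmest scarves."))
    >>> [lemmatizer.lemmatize(word, get_wordnet_pos(tag)) for word, tag in tagged]
    ['The', 'friend', 'be', 'wear', 'their', 'warm', 'scarf', '.']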

    Chunking
    While tokenizing allows you to identify words and sentences, chunking allows you to identify phrases.
    Chunking makes use of POS tags to group words and apply chunk tags to those groups. Chunks don’t overlap, so one instance of a word can be in only one chunk at a time.

    Here’s how to import the relevant parts of NLTK in order to chunk, and then tokenize a quote by word:

    >>> from nltk.tokenize import word_tokenize
    >>> lotr_quote = "It's a dangerous business, Frodo, going out your door."
    >>> words_in_lotr_quote = word_tokenize(lotr_quote)
    >>> words_in_lotr_quote
    ['It', "'s", 'a', 'dangerous', 'business', ',', 'Frodo', ',', 'going', 'out', 'your', 'door', '.']
    Now you’ve got a list of all of the words in lotr_quote.
    >>> nltk.download("averaged_perceptron_tagger")
    [nltk_data] Downloading package averaged_perceptron_tagger to
    [nltk_data]     C:\Users\songyuejie\AppData\Roaming\nltk_data...
    [nltk_data]   Package averaged_perceptron_tagger is already up-to-date!
    True
    The next step is to tag those words by part of speech:
    >>> lotr_pos_tags = nltk.pos_tag(words_in_lotr_quote)
    >>> lotr_pos_tags
    [('It', 'PRP'), ("'s", 'VBZ'), ('a', 'DT'), ('dangerous', 'JJ'), ('business', 'NN'), (',', ','), ('Frodo', 'NNP'), (',', ','), ('going', 'VBG'), ('out', 'RP'), ('your', 'PRP$'), ('door', 'NN'), ('.', '.')]
    You’ve got a list of tuples of all the words in the quote, along with their POS tag.
    In order to chunk, you first need to define a chunk grammar.
    Create a chunk grammar with one regular expression rule:
    >>> grammar = "NP: {<DT>?<JJ>*<NN>}"
    According to the rule you created, your chunks:

    Start with an optional (?) determiner (<DT>)
    Can have any number (*) of adjectives (<JJ>)
    End with a noun (<NN>)
    Create a chunk parser with this grammar:

    >>> chunk_parser = nltk.RegexpParser(grammar)
    >>> tree = chunk_parser.parse(lotr_pos_tags)
    >>> tree.draw()

    You got two noun phrases:

    'a dangerous business' has a determiner, an adjective, and a noun.
    'door' has just a noun.
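
    tree.draw() opens a separate window with a visual representation. In a script you can instead walk the tree with its subtrees() method, which takes an optional filter. A minimal sketch that prints just the NP chunks:

    >>> for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
    ...     print(subtree)
    ...
    (NP a/DT dangerous/JJ business/NN)
    (NP door/NN)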

    Chinking
    Chinking is used together with chunking, but while chunking is used to include a pattern, chinking is used to exclude a pattern.
    >>> import nltk
    >>> from nltk.tokenize import word_tokenize
    >>> lotr_quote = "It's a dangerous business, Frodo, going out your door."
    >>> words_in_lotr_quote = word_tokenize(lotr_quote)
    >>> words_in_lotr_quote
    ['It', "'s", 'a', 'dangerous', 'business', ',', 'Frodo', ',', 'going', 'out', 'your', 'door', '.']
    >>> lotr_pos_tags = nltk.pos_tag(words_in_lotr_quote)
    >>> lotr_pos_tags
    [('It', 'PRP'), ("'s", 'VBZ'), ('a', 'DT'), ('dangerous', 'JJ'), ('business', 'NN'), (',', ','), ('Frodo', 'NNP'), (',', ','), ('going', 'VBG'), ('out', 'RP'), ('your', 'PRP$'), ('door', 'NN'), ('.', '.')]
    >>> grammar = """
    ... Chunk: {<.*>+}
    ... }<JJ>{"""
    >>> chunk_parser = nltk.RegexpParser(grammar)
    >>> tree = chunk_parser.parse(lotr_pos_tags)
    >>> tree
    Tree('S', [Tree('Chunk', [('It', 'PRP'), ("'s", 'VBZ'), ('a', 'DT')]), ('dangerous', 'JJ'), Tree('Chunk', [('business', 'NN'), (',', ','), ('Frodo', 'NNP'), (',', ','), ('going', 'VBG'), ('out', 'RP'), ('your', 'PRP$'), ('door', 'NN'), ('.', '.')])])
    >>> tree.draw()
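
    Again, you don't need the GUI to inspect the result. A small sketch that joins each chunk's words back into text, reusing the tree from above:

    >>> for subtree in tree.subtrees(filter=lambda t: t.label() == "Chunk"):
    ...     print(" ".join(word for word, tag in subtree.leaves()))
    ...
    It 's a
    business , Frodo , going out your door .

    The lone adjective, 'dangerous', is the only token missing: it was chinked out, which is why the quote was split into two chunks around it.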
