  • About BertTokenizer

    A concrete example: build a BertTokenizer from a local vocab.txt and encode a single pre-tokenized sentence with encode_plus:

    from transformers import BertTokenizer
    import os
    
    # Pre-tokenized input: one Chinese character per token
    tokens = ['我','爱','北','京','天','安','门']
    
    # Build the tokenizer directly from a local vocab.txt file
    tokenizer = BertTokenizer(os.path.join('/content/drive/MyDrive/simpleNLP/model_hub/bert-base-case','vocab.txt'))
    encode_dict = tokenizer.encode_plus(text=tokens,
                      max_length=256,
                      pad_to_max_length=True,    # deprecated; triggers the warning below
                      is_pretokenized=True,      # the input is already split into tokens
                      return_token_type_ids=True,
                      return_attention_mask=True)
    # encode_plus adds [CLS] and [SEP] itself; mirror that for printing
    tokens = ['[CLS]'] + tokens + ['[SEP]']
    print(' '.join(tokens))
    print(encode_dict['input_ids'])
    

    Result (101 is [CLS], 102 is [SEP], and trailing 0s are [PAD]; the vocab here appears to be the English bert-base-cased one, so any character missing from it encodes to 100, i.e. [UNK]):

    Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
    [CLS] 我 爱 北 京 天 安 门 [SEP]
    [101, 100, 100, 993, 984, 1010, 1016, 100, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py:2079: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
      FutureWarning,
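
    For reference, a minimal sketch of the same call against the current transformers API, using the replacements the warnings point to (padding='max_length', truncation=True, and is_split_into_words in place of the deprecated flags; argument names assume a recent transformers release):

    # Same encoding without the deprecated arguments (sketch).
    # 'tokens' is the original seven-character list, without [CLS]/[SEP];
    # encode_plus still adds the special tokens itself.
    encode_dict = tokenizer.encode_plus(text=tokens,
                      max_length=256,
                      padding='max_length',      # replaces pad_to_max_length=True
                      truncation=True,           # silences the truncation warning
                      is_split_into_words=True,  # replaces is_pretokenized=True
                      return_token_type_ids=True,
                      return_attention_mask=True)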
    
    A second example encodes a sentence pair. truncation_strategy='only_second' tells the tokenizer to cut tokens from the second sentence first whenever the pair exceeds max_length:

    from transformers import BertTokenizer
    import os
    
    tokenizer = BertTokenizer(os.path.join('/content/drive/MyDrive/simpleNLP/model_hub/bert-base-case','vocab.txt'))
    # Sentence pair, each already split into single-character tokens
    tokens_a = '我 爱 北 京 天 安 门'.split(' ')
    tokens_b = '我 爱 打 英 雄 联 盟 啊 啊'.split(' ')
    
    encode_dict = tokenizer.encode_plus(text=tokens_a,
                      text_pair=tokens_b,
                      max_length=20,
                      pad_to_max_length=True,              # deprecated; see the warning below
                      truncation_strategy='only_second',   # old-style spelling of truncation='only_second'
                      is_pretokenized=True,                # both inputs are already split into tokens
                      return_token_type_ids=True,
                      return_attention_mask=True)
    tokens = " ".join(['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]'])
    token_ids = encode_dict['input_ids']
    attention_masks = encode_dict['attention_mask']
    token_type_ids = encode_dict['token_type_ids']
    
    print(tokens)
    print(token_ids)
    print(attention_masks)
    print(token_type_ids)
    

    Result (attention_mask is 1 over real tokens and 0 over padding; token_type_ids is 0 for [CLS] and the first sentence through its [SEP], 1 for the second sentence through its [SEP], and 0 over padding):

    Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
    [CLS] 我 爱 北 京 天 安 门 [SEP] 我 爱 打 英 雄 联 盟 啊 啊 [SEP]
    [101, 100, 100, 993, 984, 1010, 1016, 100, 102, 100, 100, 100, 100, 100, 100, 100, 100, 100, 102, 0]
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
    /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py:2079: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
      FutureWarning,
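
    As a quick sanity check, the IDs can be mapped back to vocabulary tokens with convert_ids_to_tokens (a minimal sketch reusing the tokenizer and token_ids from the example above):

    # Invert the encoding: IDs back to vocabulary tokens.
    # Positions encoded as 100 come back as [UNK]; the trailing 0 comes back as [PAD].
    print(tokenizer.convert_ids_to_tokens(token_ids))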
    
  • Original article: https://www.cnblogs.com/xiximayou/p/14880999.html