How to Turn a Regular Tokenizer into a Fast Tokenizer
The huggingface library now ships two kinds of Tokenizer: the regular (slow) one and the fast one. The difference is that the fast tokenizer is written in Rust, which makes it faster when processing large amounts of text. In my own tests, the fast tokenizer was actually a bit slower on a single sentence, but once the volume grows to 100 or 1,000 sentences, it becomes several times faster.
Below we take the custom Tokenizer built in the earlier post "Building a Tokenizer for Your Own Model" (CSDN blog) and turn it into a TokenizerFast.
First, import the packages needed to convert the sentencepiece model:
from transformers.convert_slow_tokenizer import SpmConverter
from tokenizers import processors
from transformers import T5TokenizerFast, PreTrainedTokenizerBase
The core of the conversion is converting the underlying tokenization model itself. processors controls the post-processing, i.e. whether special tokens such as "</s>" are appended to a sentence after tokenization.
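To make the role of the post-processor concrete, here is a minimal standalone sketch with a made-up three-word vocabulary (the WordLevel model and the toy tokens are purely illustrative and not part of this post's tokenizer; the templates match the converter defined below):

from tokenizers import Tokenizer, processors
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# toy word-level tokenizer with an explicit </s> in its vocabulary
toy = Tokenizer(WordLevel({"hello": 0, "world": 1, "</s>": 2}, unk_token="</s>"))
toy.pre_tokenizer = Whitespace()
# same templates as in the converter below: append </s> after each segment
toy.post_processor = processors.TemplateProcessing(
    single=["$A", "</s>"],
    pair=["$A", "</s>", "$B", "</s>"],
    special_tokens=[("</s>", 2)],
)
print(toy.encode("hello world").tokens)  # ['hello', 'world', '</s>']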
Next, define a converter class. It takes an instance of the regular Tokenizer and produces the corresponding fast Tokenizer instance.
class MyTokenizerConvertor(SpmConverter):
    def vocab(self, proto):
        vocab = [(piece.piece, piece.score) for piece in proto.pieces]
        loc_extra_ids = self.original_tokenizer._loc_extra_ids
        vocab = vocab + [("<loc_{}>".format(i), 0.0) for i in range(0, loc_extra_ids)]
        return vocab

    def post_processor(self):
        return processors.TemplateProcessing(
            single=["$A", "</s>"],
            pair=["$A", "</s>", "$B", "</s>"],
            special_tokens=[
                ("</s>", self.original_tokenizer.convert_tokens_to_ids("</s>")),
            ],
        )
Here, vocab rebuilds the vocabulary so that its length equals the original vocabulary length plus the number of extra <loc_*> special tokens we defined.
post_processor defines the tokenization behavior for single sentences and sentence pairs when the .encode method is called: with it in place, the special tokens are added automatically at encoding time. Note that this post_processor handles at most two sentences; passing more than two raises an error.
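Continuing the toy sketch from above, the pair template appends "</s>" after each segment:

print(toy.encode("hello world", "hello").tokens)
# ['hello', 'world', '</s>', 'hello', '</s>']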
Define a function that performs the conversion:
def convert_slow_to_fast(slow_tokenizer):
    return MyTokenizerConvertor(slow_tokenizer).converted()
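Note that convert_slow_to_fast returns a tokenizers.Tokenizer (the Rust-backed object), not a transformers tokenizer; the MyTokenizerFast class below wraps it. A quick check, assuming the mytokenizer instance created at the end of this post:

rust_tok = convert_slow_to_fast(mytokenizer)
print(type(rust_tok))             # <class 'tokenizers.Tokenizer'>
print(rust_tok.get_vocab_size())  # SentencePiece pieces plus the <loc_*> tokens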
Now we can define our TokenizerFast:
class MyTokenizerFast(T5TokenizerFast):
    slow_tokenizer_class = MyTokenizer

    def __init__(
        self,
        vocab_file,
        tokenizer_file=None,
        eos_token="</s>",
        unk_token="<unk>",
        pad_token="<pad>",
        loc_extra_ids=100,
        sp_model_kwargs=None,
        additional_special_tokens=None,  # None instead of [] to avoid a shared mutable default
        **kwargs
    ):
        self.vocab_file = vocab_file
        self._loc_extra_ids = loc_extra_ids
        # self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        # self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        # self.sp_model.Load(self.vocab_file)
        additional_special_tokens = list(additional_special_tokens or [])
        additional_special_tokens.extend(
            ["<loc_{}>".format(i) for i in range(0, self._loc_extra_ids)]
        )
        self.additional_special_tokens = additional_special_tokens
        slow_tokenizer = self.slow_tokenizer_class(
            vocab_file,
            tokenizer_file=tokenizer_file,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            loc_extra_ids=loc_extra_ids,
            additional_special_tokens=self.additional_special_tokens,
            **kwargs
        )
        fast_tokenizer = convert_slow_to_fast(slow_tokenizer)
        self._tokenizer = fast_tokenizer
        PreTrainedTokenizerBase.__init__(
            self,
            tokenizer_file=tokenizer_file,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            additional_special_tokens=self.additional_special_tokens,
            **kwargs,
        )
That's it. We can now create one regular tokenizer and one fast tokenizer and compare them:
mytokenizer = MyTokenizer("path/to/spiece.model")
mytokenizerfast = MyTokenizerFast("path/to/spiece.model")
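A quick sanity check that the extra tokens survived the conversion and that both tokenizers agree (the test text is illustrative; exact ids depend on your spiece.model):

print(mytokenizerfast.convert_tokens_to_ids("<loc_0>"))  # a valid id, not the unk id
print(len(mytokenizerfast.get_vocab()))                  # SentencePiece vocab size + 100
print(mytokenizer.encode("hello <loc_3> world"))
print(mytokenizerfast.encode("hello <loc_3> world"))     # ids should match the slow tokenizer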
import time

texts = ["This is a test sentence to tokenize." for _ in range(1000)]  # 1000 sentences

# timing helper that tokenizes a batch of sentences repeatedly
def measure_time_batch(tokenizer, texts, iterations=100):
    start_time = time.time()
    for _ in range(iterations):
        _ = tokenizer.batch_encode_plus(texts)
    end_time = time.time()
    return end_time - start_time

slow_tokenizer_time = measure_time_batch(mytokenizer, texts)
print(f"Slow tokenizer time for batch: {slow_tokenizer_time:.4f} seconds")

fast_tokenizer_time = measure_time_batch(mytokenizerfast, texts)
print(f"Fast tokenizer time for batch: {fast_tokenizer_time:.4f} seconds")