Tokenization is typically done on CPU and is rarely (if ever) a bottleneck for training or inference.
GPU kernels typically dominate wall-clock time; the main exception is very small models, where CPU-side work can become noticeable.
Thus the latency of tokenization can essentially be “hidden” by having the CPU prepare the next batch while the GPU finishes the current one.
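In PyTorch, for instance, this overlap largely falls out of using a `DataLoader` with `num_workers > 0`, so that tokenization runs in CPU worker processes concurrently with GPU compute. Below is a minimal sketch of that pattern; the `texts` list and the `tokenize` function are hypothetical placeholders (e.g. a Hugging Face tokenizer), not part of the original text.

```python
# Minimal sketch: hide tokenization latency behind GPU compute.
# Assumes `texts` (list of strings) and `tokenize` (str -> list of
# token ids) exist; both are hypothetical here.
import torch
from torch.utils.data import DataLoader, Dataset


class TextDataset(Dataset):
    """Tokenizes raw text lazily, on access, in a CPU worker."""

    def __init__(self, texts, tokenize, max_len=512):
        self.texts = texts
        self.tokenize = tokenize
        self.max_len = max_len

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Tokenization happens here, off the main training process
        # whenever num_workers > 0.
        ids = self.tokenize(self.texts[idx])[: self.max_len]
        return torch.tensor(ids, dtype=torch.long)


def collate(batch):
    # Pad each batch to its longest sequence.
    return torch.nn.utils.rnn.pad_sequence(batch, batch_first=True)


# num_workers > 0: worker processes tokenize batch N+1 while the
# GPU is still busy with batch N.
loader = DataLoader(
    TextDataset(texts, tokenize),
    batch_size=32,
    num_workers=4,
    pin_memory=True,  # enables faster, async host-to-device copies
    collate_fn=collate,
)

for batch in loader:
    batch = batch.to("cuda", non_blocking=True)  # async H2D copy
    # ... forward/backward on the GPU while workers tokenize ahead ...
```

With pinned memory and `non_blocking=True`, even the host-to-device copy can overlap with GPU work, so tokenization contributes essentially nothing to end-to-end step time as long as the workers keep up.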