Related, I love Rob Pike's talk "Lexical Scanning in Go" (2011).
Educational and elegant approach.
I feel like that talk has more to do with expressing concurrency, in problems where concurrency is a natural thing to think about, than it does with lexing.
That talk is great, but I remember some discussion later about Go actually NOT using this technique because of goroutine scheduling overhead and/or inefficient memory allocation patterns? The best discussion I could find is [1].
Another great talk about making efficient lexers and parsers is Andrew Kelley's "Practical Data Oriented Design" [2]. In summary: it covers strategies for reducing a program's memory footprint while also making it cache friendly, which increases throughput.
--
1: https://news.ycombinator.com/item?id=31649617
2: https://www.youtube.com/watch?v=IroPQ150F6c