I took an ASIC design class in college, unfortunately during a semester with a course load too heavy to let me focus on it. For our final project we were given a numbered dictionary and asked to design a chip that would accept characters on a 7-bit interface (ASCII), one character per clock cycle, and output the dictionary number on an output interface (I can't remember how wide). We were graded on the size of the resulting ASIC and on how many clock cycles it took from the last character in to the number on the output.
I started designing my modules: a ROM, a register with a ROM pointer, etc., etc., writing the Verilog and working out the clock sync between modules. Then I got 'lazy', wrote a trie-like implementation in Java, and had it spit out the whole tree in Verilog. It worked, and just one clock cycle after the last letter my number would appear on the output. Fastest in the class! Also the most gates in the class. I was surprised I got a 90 given that I didn't use any of the advanced ASIC design techniques the class taught. The TA didn't know what the hell they were looking at.
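To give a flavor of it, here's a rough sketch of the kind of Verilog such a generator might emit, for a made-up three-word dictionary ("ab" -> 1, "ad" -> 2, "ba" -> 3); the module and signal names are invented, not from the actual project:

    module trie_lookup (
        input  wire       clk,
        input  wire       rst,      // pulse to return to the trie root between words
        input  wire [6:0] ch,       // one ASCII character per clock
        output reg  [7:0] word_id   // dictionary number, 0 = no match (yet)
    );
        // One state per trie node; the generator would emit these constants
        // and the case arms below straight from the tree.
        localparam ROOT = 2'd0, N_A = 2'd1, N_B = 2'd2, DEAD = 2'd3;
        reg [1:0] node = ROOT;

        always @(posedge clk) begin
            if (rst) begin
                node    <= ROOT;
                word_id <= 8'd0;
            end else begin
                word_id <= 8'd0;              // default: no word ends this cycle
                case (node)
                    ROOT: case (ch)           // first letter picks a subtree
                        "a":     node <= N_A;
                        "b":     node <= N_B;
                        default: node <= DEAD;
                    endcase
                    N_A: begin                // after "a": "ab" or "ad" can end here
                        node <= DEAD;
                        case (ch)
                            "b": word_id <= 8'd1;  // "ab" -> 1
                            "d": word_id <= 8'd2;  // "ad" -> 2
                        endcase
                    end
                    N_B: begin                // after "b": only "ba" can end here
                        node <= DEAD;
                        if (ch == "a") word_id <= 8'd3;  // "ba" -> 3
                    end
                    default: node <= DEAD;    // no dictionary word has this prefix
                endcase
            end
        end
    endmodule

Every case arm is its own dedicated gates, so the dictionary is effectively burned into the netlist, which is exactly why it was both the fastest and the largest design in the class.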
Yep! Something a bit counterintuitive about circuit design is that dedicated transistors will always beat reusing existing components for speed. If we do reuse existing components like ALUs, multipliers, or state machines, we save on chip area but pay the penalty in clock cycles. Your approach was the extreme version of this tradeoff: you essentially unrolled the entire dictionary lookup into pure combinational logic (well, with registers for the input characters). One clock cycle of latency because you weren't doing any sequential searching, comparing, or state-machine transitions, just racing electrons through logic gates.
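For contrast, the component-reuse end of the tradeoff might look something like this sketch: one shared comparator scanning a dictionary ROM, one entry per clock. All sizes, names, and the tiny ROM contents here are made up for illustration:

    module rom_scan_lookup (
        input  wire        clk,
        input  wire        start,    // pulse once the whole word is buffered
        input  wire [15:0] word,     // two ASCII chars packed together
        output reg         done,
        output reg  [7:0]  word_id
    );
        reg [15:0] rom [0:2];        // dictionary ROM: entry i holds word i+1
        reg        busy;
        reg [1:0]  i;                // scan pointer

        initial begin
            rom[0] = "ab"; rom[1] = "ad"; rom[2] = "ba";
            done = 1'b0; busy = 1'b0; word_id = 8'd0; i = 2'd0;
        end

        always @(posedge clk) begin
            done <= 1'b0;
            if (start) begin
                i    <= 2'd0;
                busy <= 1'b1;
            end else if (busy) begin
                if (rom[i] == word) begin        // the one shared comparator
                    word_id <= i + 8'd1;
                    done    <= 1'b1;
                    busy    <= 1'b0;
                end else if (i == 2'd2) begin
                    word_id <= 8'd0;             // not in the dictionary
                    done    <= 1'b1;
                    busy    <= 1'b0;
                end else begin
                    i <= i + 2'd1;               // worst case: N clocks for N entries
                end
            end
        end
    endmodule

Tiny area, but the lookup now costs up to one clock per dictionary entry instead of a single cycle after the last character.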