You forgot the logic to strip the final digit and assign it to `v`.
Processing the whole number is absurd.
Converting to decimal is just as absurd.
All you need is the final binary digit, which incidentally gives the best codegen: `v & 1`.
Look at Mr. Rocket Scientist over here...
I think the idea is to fill in the ellipses with even/odd numbers, up to 4B.
You know, to save the performance cost of processing the input as a string, and chomping off all but the last character.