At least for classic textbook CPUs, it's common for the hardware to compute both sides of a decision in parallel while the decision is still being made, then use a multiplexer to select the desired result after the fact. Nobody wants the latency of powering the logic up and down every cycle, except at the largest scales, where power consumption makes a big enough difference to justify it.
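A minimal software sketch of that idea (Python, with made-up helper names, not any real circuit): both candidate results are computed unconditionally, and a final select step picks one, the way a multiplexer would in hardware.

```python
def mux(select: bool, when_true: int, when_false: int) -> int:
    """Model of a 2-to-1 multiplexer: both inputs already exist;
    the select line only chooses which one is passed through."""
    return when_true if select else when_false

def compute_both_then_select(x: int, take_branch: bool) -> int:
    # Both "sides of the decision" are evaluated every time,
    # mirroring combinational logic that is always powered and switching.
    taken_result = x + 1        # hypothetical result if the condition holds
    not_taken_result = x - 1    # hypothetical result if it does not
    # Only at the end does the condition gate which value gets used.
    return mux(take_branch, taken_result, not_taken_result)

print(compute_both_then_select(10, True))   # 11
print(compute_both_then_select(10, False))  # 9
```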
I don't understand what you mean by "decision". In a textbook CPU, with no speculative execution and no pipelining, the hardware runs one instruction at a time: the one at the address in the instruction pointer. If that instruction is a conditional jump, executing it is a single computation that sets the instruction pointer to a single new value (either the next instruction or the specified jump target). Once that new value of the instruction pointer is computed, the process begins again: the CPU fetches one instruction from the memory address in the instruction pointer register, decodes it, and executes it.
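A rough sketch of that view (Python, with an invented toy instruction set rather than any real ISA): the conditional jump is just one more instruction whose only effect is writing a single new value into the instruction pointer, after which the loop fetches again.

```python
# Toy fetch-decode-execute loop; the instruction set and program are
# made up for illustration.
def run(program, registers):
    ip = 0  # instruction pointer
    while ip < len(program):
        op, *args = program[ip]          # fetch + decode
        if op == "set":                  # set <reg> <value>
            registers[args[0]] = args[1]
            ip += 1
        elif op == "dec":                # dec <reg>
            registers[args[0]] -= 1
            ip += 1
        elif op == "jnz":                # jnz <reg> <target>: conditional jump
            # One computation, one new value for the instruction pointer:
            # either the jump target or simply the next instruction.
            ip = args[1] if registers[args[0]] != 0 else ip + 1
        elif op == "halt":
            break
    return registers

print(run([("set", "r0", 3), ("dec", "r0"), ("jnz", "r0", 1), ("halt",)], {}))
# {'r0': 0}
```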
Even if you add pipelining, the basic textbook design stalls the pipeline when it encounters a conditional or computed jump. Even if you add basic speculative execution, you still don't necessarily get both branches executed at once: with a single ALU, you'd still execute only one branch at a time, and if the wrong one was predicted, you revert and execute the other one once the condition finishes being computed.
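One way to picture that with a single ALU (Python; the predictor and cost model here are simplified assumptions, not any specific microarchitecture): only one path is in flight at a time, and a misprediction costs a revert plus a second pass down the other path.

```python
def speculate(condition: bool, predicted_taken: bool,
              taken_work, not_taken_work):
    """Toy model of one-ALU speculation: execute the predicted path,
    then check the now-resolved condition and redo work if wrong."""
    alu_passes = 0

    # Speculatively run the predicted branch on the single ALU.
    result = taken_work() if predicted_taken else not_taken_work()
    alu_passes += 1

    # The condition has now been computed; check the prediction.
    if predicted_taken != condition:
        # Misprediction: discard the speculative result ("revert")
        # and execute the other branch, paying a second ALU pass.
        result = taken_work() if condition else not_taken_work()
        alu_passes += 1

    return result, alu_passes

# Correct prediction: one pass. Misprediction: two passes.
print(speculate(True, True,  lambda: "taken", lambda: "not taken"))   # ('taken', 1)
print(speculate(True, False, lambda: "taken", lambda: "not taken"))   # ('taken', 2)
```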