No. HDR can encode high dynamic range because (typically) it uses floating point encoding.
From a technical point of view, HDR is just a set of standards and formats for encoding absolute-luminance scene-referred images and video, along with a set of standards for reproduction.
I think most HDR formats don't use 32 bit floating point. The first HDR file format I can remember is Greg Ward's RGBE format, now more commonly known by its .hdr extension, and I think it's still pretty widely used.
https://www.graphics.cornell.edu/~bjw/rgbe.html
It uses a kind of floating point, in a way, but with a single 8 bit exponent shared across all 3 channels, and the channels are still 8 bits each, so the whole pixel fits in 32 bits. Even the .txt file description says it's not "floating point" per se, since that implies IEEE single precision floats.
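The packing is simple enough to sketch. Here's a rough Python version of the shared-exponent encode/decode (modeled on the float2rgbe/rgbe2float helpers in the C code linked above, if I remember the names right; the real code also handles headers and RLE scanlines, which this skips):

```python
import math

def float_to_rgbe(r, g, b):
    """Pack three linear floats into 4 bytes with one shared exponent."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(v)         # v == mant * 2**exp, with 0.5 <= mant < 1
    scale = mant * 256.0 / v          # puts the largest channel into [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r8, g8, b8, e8):
    """Unpack 4 RGBE bytes back to linear floats."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - (128 + 8))   # undo the exponent bias and the 8-bit mantissa scale
    return (r8 * f, g8 * f, b8 * f)

# Round trip: a pixel far brighter than 1.0 survives, just with ~8 bits of mantissa precision.
packed = float_to_rgbe(12.0, 3.5, 0.25)
print(packed)                  # (192, 56, 4, 132)
print(rgbe_to_float(*packed))  # (12.0, 3.5, 0.25)
```

The catch is that all three channels share the exponent of the brightest one, so a dim channel sitting next to a very bright one loses precision — that's the trade that buys you dynamic range in only 32 bits per pixel.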
Cameras and displays don’t typically use floats, and even CG people working in HDR and using, e.g., OpenEXR, might use half floats more often than full 32 bit floats.
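Half float is enough for most image data because the range is wide even though the precision is coarse — a quick check with NumPy, just to illustrate:

```python
import numpy as np

info = np.finfo(np.float16)   # IEEE 754 half precision, the same layout as OpenEXR's HALF
print(info.max)               # 65504.0 -- lots of headroom above 1.0 for scene-referred values
print(info.tiny)              # ~6.1e-05 -- smallest normal value
print(np.float16(1.0) + np.float16(0.0004))  # 1.0 -- steps near 1.0 are ~0.001, so this rounds away
```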
Some standards do exist, and it’s improving over time, but the ideas and execution of HDR in various ways preceded any standards, so I think it’s not helpful to define HDR as a set of standards. From my perspective working in CG, HDR began as a way to break away from 8 bits per channel RGB, and it included improving both color range and color resolution, and started the discussion of using physical metrics as opposed to relative [0..1] ranges.
No. HDR video (and images) don't use floating point encoding. They generally use a higher bit depth (10 bits or more vs 8) to reduce banding, plus different transfer characteristics (PQ or HLG instead of sRGB or BT.709), different YCbCr matrices, and mastering metadata.
And no, it's not necessarily absolute luminance. PQ is absolute, HLG is not.
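To make the absolute vs. relative point concrete: the PQ (SMPTE ST 2084) EOTF maps a [0, 1] code value directly to a luminance in cd/m², while HLG is a relative signal that the display scales to its own peak brightness. Here's a rough sketch of the PQ decode using the published constants (not production code, just to show the shape of it):

```python
# PQ (SMPTE ST 2084) EOTF: normalized code value in [0, 1] -> absolute luminance in cd/m^2.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(code: float) -> float:
    """Decode a normalized PQ code value to luminance in nits."""
    e = code ** (1 / M2)
    y = (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)
    return 10000.0 * y   # PQ is defined against a 10,000 nit peak

# Code 1.0 is pinned at 10,000 nits; a mid-scale code lands around 90-100 nits,
# in the neighborhood of SDR's 100 nit reference white. A 10-bit code is just code/1023.
for code in (0.25, 0.5, 0.75, 1.0):
    print(code, round(pq_eotf(code), 1))
```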