PTP requires support not only on your network, but also on your peripheral bus and inside your CPU. It can't achieve better-than-NTP results without disabling PCI power saving features and deep CPU sleep states.
How so? If the NIC is timestamping packets as they arrive/leave on the wire, latency and jitter in the rest of the system shouldn't matter.
PTP does not require support on your network beyond standard ethernet packet forwarding when used in ethernet mode.
In multicast IP mode, with multiple switches, it requires what anything running multicast between switches would require (i.e. some form of IGMP snooping or multicast routing or .....)
In unicast IP mode, it requires nothing from your network.
Therefore, I have no idea what it means to "require support on the network".
I have used both ethernet and multicast PTP across a complete mishmash of brands, types, and media of switches, computers, etc., with no issues.
The only thing that "support" might improve is more accurate path delay data through transparent clocks. If both master and slave already do accurate hardware timestamping, and the path between them is constant, it is easily possible to get +-50 nanoseconds without any transparent clock support.
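To make that concrete, here is the standard end-to-end (delay request-response) arithmetic PTP uses; a minimal sketch with made-up nanosecond timestamps, not data from the device below:

```python
# Standard PTP end-to-end delay request-response math.
# t1: master sends Sync; t2: slave receives Sync
# t3: slave sends Delay_Req; t4: master receives Delay_Req
# All values in nanoseconds, hardware-timestamped at the MAC.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (offset_from_master, mean_path_delay) in ns.

    Assumes the path delay is symmetric; a constant, simple path
    keeps that assumption honest even without transparent clocks.
    """
    offset = ((t2 - t1) - (t4 - t3)) // 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) // 2
    return offset, mean_path_delay

# Hypothetical example: true path delay 800 ns, slave clock 40 ns fast.
t1 = 1_000_000_000
t2 = t1 + 800 + 40      # Sync arrival as seen by the (fast) slave clock
t3 = t2 + 100_000       # slave sends Delay_Req a bit later
t4 = t3 + 800 - 40      # Delay_Req arrival as seen by the master clock

print(ptp_offset_and_delay(t1, t2, t3, t4))  # → (40, 800)
```

Note that asymmetry in the path shows up directly as offset error, which is exactly why a constant, symmetric path matters more than transparent-clock support.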
Here are the stats from a random embedded device running PTP that I accessed a second ago:
Reference ID    : 50545030 (PTP0)
Stratum         : 1
Ref time (UTC)  : Sun Dec 28 02:47:25 2025
System time     : 0.000000029 seconds slow of NTP time
Last offset     : -0.000000042 seconds
RMS offset      : 0.000000034 seconds
Frequency       : 8.110 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.003 ppm
So this embedded ARM device, which is not special in any way, is maintaining time within +-35ns of the grandmaster, and is currently within 30ns of GPS time. The card does not have an embedded hardware PTP clock, but it does do hardware timestamping and filtering.
This grandmaster is an RPI with an Intel chipset on it, with the PPS input pin used to discipline the chipset's clock. It stays within +-2ns (usually +-1ns) of GPS time.
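For reference, the kind of plumbing that setup implies with linuxptp looks roughly like this; a sketch, with `eth0` standing in for whatever interface your NIC exposes:

```shell
# Discipline the NIC's PTP hardware clock (PHC) from the GPS 1PPS
# wired to the NIC's PPS/SDP input pin ("generic" = external 1PPS).
ts2phc -s generic -c eth0 -m

# Serve that PHC to the network as grandmaster, with hardware timestamping.
ptp4l -i eth0 -H -m
```

The exact pin routing and time-of-day source depend on the board, so treat this as a starting point rather than a working config.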
Obviously, holdover sucks, but not the point :)
This qualifies as better-than-NTP for sure, and this setup has no network support: no transparent clocks, etc. These machines have multiple media transitions (fiber->ethernet) in the path.
The main thing transparent clock support provides in practice is dealing with highly variable delay, whether from the mode of transport, the number of packet processors between your nodes, or anything else that makes the delay hard to account for.
The ethernet packet processing in ethernet mode is handled in hardware by the switches and by basically all network cards. The IP variants are probably hardware assisted but not fully offloaded on all cards, and simply ignored by switches (assuming they are not really routers in disguise).
The hardware timestamping is done in the card (and the vast majority of ethernet cards have supported PTP hardware timestamping for more than a decade at this point), and works perfectly fine with deep CPU sleep states.
Some don't do hardware filtering, so they end up processing more packets than necessary, but .....
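If you want to see what your own card supports, ethtool reports both the timestamping capabilities and the hardware receive filter modes (`eth0` is a placeholder for your interface):

```shell
# Lists timestamping capabilities, the PHC index, and the
# hardware receive filter modes the card can apply to PTP packets.
ethtool -T eth0
```

A card with full support will report hardware-transmit/hardware-receive capabilities and a non-negative "PTP Hardware Clock" index; one without hardware filtering will only offer coarse receive filter modes.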
You can if you just run PTP (almost) entirely on your NIC. The best PTP implementations take their packet timestamps at the MAC on the NIC and keep time based on that. Nothing about CPU processing is time-critical in that case.