
Fiveplus today at 10:43 AM

This is a fantastic result, but I am dying to know how the G.hn chipset creates the bit-loading map on a topology with that many bridge taps. In VDSL2 deployments, any unused extension socket in the house acts as an open-circuited stub, creating signal reflections that notch out specific frequencies, which usually kills performance.

If the author is hitting 940 Mbps on a daisy-chain, either the echo cancellation or the frequency diversity on these chips must be lightyears ahead of standard DSLAMs. Does the web interface expose the SNR-per-tone graph? I suspect you would see massive dips where the wiring splits to the other rooms, but the OFDM is just aggressively modulating around them.
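
For a rough feel for where those dips would sit: an open-circuited bridge tap behaves roughly like a quarter-wave stub, so its first notch falls where the stub is a quarter wavelength long. A back-of-envelope sketch, where the velocity factor and stub lengths are illustrative assumptions rather than measurements from the article:

```python
# Rough sketch: first notch frequency of an open-circuited bridge tap,
# modelled as a quarter-wave stub. The velocity factor and stub lengths
# are illustrative guesses, not measured values from the article.
C = 3e8   # speed of light, m/s
VF = 0.6  # assumed velocity factor for typical telephone wire

def first_notch_hz(stub_length_m: float) -> float:
    """First reflection notch, where the stub is a quarter wavelength long."""
    return VF * C / (4 * stub_length_m)

for length_m in (2, 5, 10, 20):  # hypothetical unused extension runs, metres
    print(f"{length_m:>3} m stub -> first notch near {first_notch_hz(length_m) / 1e6:5.1f} MHz")
```

A few metres of unused extension wiring is enough to put notches squarely inside the band a wideband modem would want to use.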


Replies

user5994461 today at 11:34 AM

A view from the debugging tools, since you asked: https://thehftguy.com/wp-content/uploads/2026/01/screenshot_...

I don't think there is anything too fancy compared to a DSLAM. It's just that DSLAMs are low-frequency, long-range by design.

Numbers for nerds, off the top of my head:

* ADSL1 is 1 MHz, 8 Mbps (2 kilometers)

* ADSL2 is 2 MHz, 20 Mbps (1 kilometer)

* VDSL1 is 15 MHz, 150 Mbps (less than 1 kilometer)

* Gigabit Ethernet is 100 MHz over four pairs (100 meters). It either works or it doesn't.

* The G.hn device here is up to 200 MHz. It automatically detects what can be done on the medium.
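
As a rough sanity check on those figures, throughput scales roughly as bandwidth times spectral efficiency. The arithmetic below derives the implied bits per hertz from the numbers in the list; the 940 Mbps G.hn figure is the one from the article, and none of these are spec values:

```python
# Back-of-envelope: implied spectral efficiency (bit/s per Hz) for the
# figures quoted in the list above. Purely illustrative arithmetic.
links = {
    "ADSL1": (1e6, 8e6),
    "ADSL2": (2e6, 20e6),
    "VDSL1": (15e6, 150e6),
    "G.hn":  (200e6, 940e6),  # 940 Mbps is the figure from the article
}
for name, (bandwidth_hz, rate_bps) in links.items():
    print(f"{name:5s}: {rate_bps / bandwidth_hz:4.1f} bit/s per Hz")
```

In other words, the G.hn number does not require heroic spectral efficiency, just far more bandwidth than a DSLAM ever uses.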

zbrozek today at 4:26 PM

It's been many years since I implemented G.hn hardware, but if memory serves, the chipsets are typically able to split the available bandwidth into 1 or 2 MHz wide bins and choose different symbol densities and FEC levels for each bin. If a bin has horrible reflections, you don't use it at all.
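
A minimal sketch of that per-bin adaptation idea, assuming a simple SNR-gap rule with made-up thresholds and SNR readings (this is not the actual chipset algorithm, and the per-bin FEC choice is omitted for brevity):

```python
import math

# Sketch of per-bin bit loading: pick a constellation size for each 1-2 MHz
# bin from its measured SNR, and skip bins wrecked by reflections.
# The SNR gap, cap, and SNR readings below are illustrative assumptions.
SNR_GAP_DB = 9.8          # assumed coding gap + implementation margin
MAX_BITS_PER_SYMBOL = 12  # assumed densest constellation allowed

def bits_for_bin(snr_db: float) -> int:
    """Bits per symbol this bin can carry; 0 means leave the bin unused."""
    if snr_db <= SNR_GAP_DB:
        return 0
    bits = int(math.log2(1 + 10 ** ((snr_db - SNR_GAP_DB) / 10)))
    return min(bits, MAX_BITS_PER_SYMBOL)

# Hypothetical SNR-per-bin readout with a deep notch in the middle:
snr_per_bin = [38, 36, 35, 7, 12, 30, 28, 25]
print([bits_for_bin(s) for s in snr_per_bin])
# -> [9, 8, 8, 0, 1, 6, 6, 5]: the notched bins carry little or nothing
```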

I also recall that the chipsets don't do toning automatically, so it's up to the management device to decide when to re-probe the channel and reconfigure the bins.
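
If re-toning really is left to the management side, the controller logic could plausibly be as simple as watching error counters and re-probing past a threshold. The counter names and thresholds below are invented for illustration, not a real chipset API:

```python
# Sketch of a management-side re-probe policy. The counter names and
# thresholds are invented for illustration; a real chipset API will differ.
CRC_ERROR_THRESHOLD = 100          # uncorrectable errors per polling interval
FEC_CORRECTION_THRESHOLD = 10_000  # corrected errors per polling interval

def should_reprobe(crc_errors: int, fec_corrections: int, link_dropped: bool) -> bool:
    """Decide whether to re-probe the channel and rebuild the bin map."""
    return (
        link_dropped
        or crc_errors > CRC_ERROR_THRESHOLD
        or fec_corrections > FEC_CORRECTION_THRESHOLD
    )

# Example poll: some FEC activity but nothing uncorrectable -> keep the
# current bin configuration.
print(should_reprobe(crc_errors=3, fec_corrections=2_500, link_dropped=False))  # False
```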