It’s about coupling and being able to maintain that in the long term.
What does that mean? This is all the kind of abstract programming advice that sounds nice until someone needs an example.
A narrow focus makes it possible to test each unit in isolation from the others.
A function operating on a data structure is already a narrow focus.
It is true that a database appears to be a single data structure with hundreds of methods from the user's perspective.
And in reality too, because that is literally what a database is.
However, if you were to look into how a database is implemented, you would see a composition of data structures, like B-trees, that are tested in isolation.
I don't know what point you're trying to make. Data structures should be tested? I don't think anyone is saying they shouldn't.
To put it simply: if you have a function f(ctx) = ctx.a + ctx.b, it is hard to see which arguments produce the output, i.e. which elements of the data structure you need to vary in order to have exhaustive tests. Whereas if you refactor it as f(ctx) = g(ctx.a, ctx.b), you only need to test g with respect to (a, b); the forwarding in f can be covered by integration tests, without any concern for whether g is implemented correctly.
For such a testing strategy to work, data structures need to be small. It is better to have multiple small data structures than one big universal one where methods are defined at the ctx level, making exhaustive tests difficult.
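A minimal sketch of that refactoring, using the f, g, and ctx names from above (the Ctx class and its fields are hypothetical, standing in for whatever big context object you have):

```python
from dataclasses import dataclass

@dataclass
class Ctx:
    a: int
    b: int
    # ...imagine many more fields here that f never touches

# Coupled version: a test must construct a whole Ctx just to exercise
# the addition, and it's not obvious which fields matter.
def f_coupled(ctx: Ctx) -> int:
    return ctx.a + ctx.b

# Refactored: g is exhaustively testable on (a, b) alone.
def g(a: int, b: int) -> int:
    return a + b

# f is now trivial forwarding, covered by integration tests.
def f(ctx: Ctx) -> int:
    return g(ctx.a, ctx.b)
```

Unit tests hammer g with edge cases over (a, b); an integration test only needs to check that f wires the right fields into g.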
Perhaps I haven’t been clear. I agree with Pike’s advice strongly. What I am trying to say here is that Perlis’s rule 9 is diametrically opposed to what Pike says.