Hacker News

gz09 · yesterday at 5:51 AM · 1 reply

Yep, this comment sums it up well.

We have many large enterprises from wildly different domains using Feldera, and from what I can tell there is no correlation between the domain and the number of columns. As fiddlerwoaroof says, it seems to be more a function of how mature/big the company is and how much time it has had to 'accumulate things' in its data model. And there might be very good reasons to design things the way they did; it's very hard to question that without being a domain expert in their field. I wouldn't dare :).


Replies

locknitpicker · yesterday at 7:26 AM

> I can tell there is no correlation between the domain and the amount of columns.

This is unbelievable. In purely architectural terms, that would require your database design to be an amorphous big ball of everything, with no discernible design or modelling involved. This is completely unrealistic. Are queries run at random?

In practical terms, your assertion is irrelevant. Look at the sparse columns and figure out which ones are rarely populated. Then move half of the columns to a new table and keep the other half in the original one. Congratulations: you just cut your column count in half and sped up your queries.
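That split is classic vertical partitioning. A minimal sketch using Python's built-in sqlite3, with purely illustrative table and column names (the original thread names no schema): sparse columns move to a side table keyed by the same primary key, and only rows that actually carry values need to exist there.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical wide table: the last two columns are almost always NULL.
cur.execute("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,
        payload TEXT NOT NULL,
        debug_info TEXT,   -- sparse
        trace_blob TEXT    -- sparse
    )
""")
cur.executemany("INSERT INTO events VALUES (?, ?, ?, ?, ?)", [
    (1, "click", "a", None, None),
    (2, "click", "b", None, None),
    (3, "error", "c", "stack...", "trace..."),
])

# Side table holds only the rows that actually have sparse values.
cur.execute("""
    CREATE TABLE events_debug (id INTEGER PRIMARY KEY,
                               debug_info TEXT, trace_blob TEXT)
""")
cur.execute("""
    INSERT INTO events_debug
    SELECT id, debug_info, trace_blob FROM events
    WHERE debug_info IS NOT NULL OR trace_blob IS NOT NULL
""")

# Rebuild the original table without the sparse columns.
cur.execute("CREATE TABLE events_core AS SELECT id, kind, payload FROM events")
cur.execute("DROP TABLE events")
cur.execute("ALTER TABLE events_core RENAME TO events")

# Hot path scans the narrow table; the rare debug query pays for a join.
hot = cur.execute("SELECT COUNT(*) FROM events WHERE kind = 'click'").fetchone()[0]
joined = cur.execute("""
    SELECT e.id, d.debug_info
    FROM events e JOIN events_debug d ON d.id = e.id
""").fetchall()
```

The trade-off is the usual one: queries that touch only the dense columns get a narrower, faster scan, while the few queries that need the sparse columns pay for a join.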

Even better: discover how your data is being used. Look at queries and check what fields are used in each case. Odds are, that's your table right there.

Let's face it. There is absolutely no technical or architectural reason to reach this point. This problem is really not about structs.
