The saddest part about this article being from 2014 is that the situation has arguably gotten worse.
We now have even more layers of abstraction (Airflow, dbt, Snowflake) applied to datasets that often fit entirely in RAM.
I've seen startups burning $5k/mo on distributed compute clusters to process <10GB of daily logs, purely because setting up a 'Modern Data Stack' is what gets you promoted, while writing a robust bash script is seen as 'unscalable' or 'hacky'. The incentives are misaligned with efficiency.
I agree - and it's not just what gets you promoted, but also what gets you hired, and what people look for in general.
You're looking for your first DevOps person, so you want someone who has experience doing DevOps. They'll tell you about all the fancy frameworks and tooling they've used to do Serious Business™, and you'll be impressed and hire them. They'll then proceed to do exactly that for your company, and you'll feel good because you feel it sets you up for the future.
Nobody's against it. So you end up in that situation, with a workload that even a basic home desktop would be more than capable of handling.
I've spent my last 2 decades doing what's right, using the technologies that make sense instead of the techs that are cool on my resume.
And then I got laid off. Now, I've got very few modern frameworks on my resume and I've been jobless for over a year.
I'm feeling a right fool now.
> datasets that often fit entirely in RAM.
Yep, and a lot more datasets fit entirely into RAM now. Ignoring the recent price spikes for a moment, 128GB of RAM in a laptop is entirely achievable and not even the limit of what is possible. That was a pipe dream in 2014 when computers with only 4GB were still common. And of course for servers the max RAM is much higher, and in a lot of scenarios streaming data off a fast local SSD may be almost as good.
I think it’s not so much engineers actually setting up distributed compute themselves, as it is dropping a credit card into a paid cloud service, which behind the scenes sets up a distributed compute cluster and bills you for the compute in an obfuscated way, then gives you a 20% discount + SSO if you sign up for an annual enterprise plan.
This kind of practice is insidious because early on, they charge $20/month to get started on the first 100MB of log ingestion, and you can have it up and running in 30 seconds with a credit card. Who would turn that down?
Revisit that setup 2 years later and it’s turned into a $60k/yr behemoth that no one can unwind.
I’ve seen this pattern play out before. The pushback on simpler alternatives stems from a legitimate need for short time to market on the demand side of the equation, and a lack of knowledge on the supply side. Every time I hear an engineer call something hacky, they are at the edge of their abilities.
Worse in some ways, better in others. DuckDB is often an excellent tool for this kind of task. Since it can run parallelized reads I imagine it's often faster than command-line tools, and with easier-to-understand syntax.
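For example, roughly (a sketch, not a benchmark; the glob and column names are made-up placeholders):

    # Rough sketch: aggregate a day of logs with DuckDB from Python.
    # The file glob and column names are placeholders.
    import duckdb

    con = duckdb.connect()  # in-memory database
    rows = con.execute("""
        SELECT status, COUNT(*) AS hits
        FROM read_csv_auto('logs/2024-01-01/*.csv')  -- DuckDB scans the files in parallel
        GROUP BY status
        ORDER BY hits DESC
    """).fetchall()
    print(rows)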
On the contrary, the key message from the blog post is not to load the entire dataset into RAM unless necessary. The trick is to stream when the pattern works. This is how our field routinely works with files over 100GB.
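The same idea as a minimal Python sketch (the file name and field position are assumptions about a typical access-log layout):

    # Minimal streaming sketch: aggregate a huge log file line by line,
    # in constant memory, instead of loading the whole thing into RAM.
    # "access.log" and the field index are placeholders.
    from collections import Counter

    status_counts = Counter()
    with open("access.log") as f:
        for line in f:                        # one line at a time, never the whole file
            fields = line.split()
            if len(fields) > 8:
                status_counts[fields[8]] += 1  # assumed position of the HTTP status field

    print(status_counts.most_common(5))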
Yep. The cloud providers however always get paid, and get paid twice on Sunday when the dev-admins forget to turn stuff off.
It’s the same story as always, just it used to be Oracle certified tech, now it’s the AWS tech certified to ensure you pay Amazon.
For a dataset that fits in RAM, the best solutions are DuckDB or clickhouse-local. Using SQL-ish queries is easier than a bunch of bash scripts and really powerful.
Because developers are incentivized to have marketable software skills. Not marketable 'build things that are cheap and profitable' skills.
Moore's law was supposed to make it simpler and cheaper to do more computationally expensive tasks. But in the meantime, everyone kept inflating the difficulty of a task faster than Moore could keep up.
I think some of this is because of the incredible amounts of capital that startups seem to be able to acquire. If startups had to demonstrate profitability before they were given any money to scale, the story would be very different I think.
Airflow and dbt serve a real purpose.
The issue is you can run sub-TiB jobs on a few small/standard instances with better tooling. Spark and Hadoop are for when you need multiple machines.
Dbt and airflow let you represent your data as a DAG and operate on that, which is critical if you want to actually maintain and correct data issues and keep your data transforms timely.
edit: a little surprised at multiple downvotes. My point is, you can run airflow and dbt on small instances, and you can do all your data processing on small instances with tools like duckdb or polars.
But it is very useful to use a tool like dbt that allows you to re-build and manage your data in a clear way, or a tool like airflow which lets you specify dependencies for runs.
After say 30 jobs or so, you'll find that being able to re-run all downstreams of a model starts to payoff.
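For the skeptics, here's a minimal sketch of what that buys you (task names, scripts, and the dbt selector are placeholders, not a real setup):

    # Minimal Airflow sketch: three placeholder tasks with explicit dependencies.
    # Because the dependencies are declared, you can clear a task and re-run it
    # together with everything downstream of it.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_logs",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # older Airflow versions call this schedule_interval
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="python extract.py")
        transform = BashOperator(
            task_id="transform",
            bash_command="dbt run --select my_model+",  # placeholder model and its downstreams
        )
        report = BashOperator(task_id="report", bash_command="python report.py")

        extract >> transform >> report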
Our lot burns a fortune on snowflake every month but no one is using it. Not enough data is being piped into it and the shitty old reports we have which just run some SQL work fine.
It looked good on someone’s resume and that was it. They are long gone.
This reminds me of this reddit comment from a long time ago: https://www.reddit.com/r/programming/comments/8cckg/comment/...
> a robust bash script
These hardly exist in practice.
But I get what you mean.
I see this at work too. They are ingesting a few GB per day but running the data through multiple systems. So the same functionality we delivered with a python script within a week now takes months to develop and constantly breaks.
Well. I try for a middle ground. I am currently ditching both airflow and dbt. In Snowflake, I use scheduled tasks that call stored procedures. The stored procedures do everything I need to do. I even call external APIs like Datadog’s and Okta’s and pull down the logs directly into snowflake. I do try to name my stored procedures with meaningful names. I also add generous comments including urls back to the original story.
On the other hand, now we have duckdb for all the “small big data”, and a slew of 10-100x faster than Java equivalent stuff in the data x rust ecosystem, like DataFusion, Feldera, ByteWax, RisingWave, Materialize etc
"I've seen startups burning $5k/mo on distributed compute clusters to process <10GB of daily logs, purely because setting up a 'Modern Data Stack' is what gets you promoted, while writing a robust bash script is seen as 'unscalable' or 'hacky'."
Also seen strange responses from HN commenters when it's mentioned that bash is large and slow compared to ash, and that bash is better suited for use as an interactive shell whereas ash is better suited for use as a non-interactive shell, i.e., a scripting shell.
I also use ash (with tabcomplete) as an interactive shell for several reasons
happy middle ground: https://www.definite.app/ (I'm the founder).
datalake (DuckLake), pipelines (hubspot, stripe, postgres), and dashboards in a single app for $250/mo.
marketing/finance get dashboards, everyone else gets SQL + AI access. one abstraction instead of five, for a fraction of your Snowflake bill.
ENG are building what MGMT has told them to build for: the scale they want, not the scale they have.
> because setting up a 'Modern Data Stack' is what gets you promoted
It’s not just that, it’s that you better know their specific tech stack to even get hired. It’s a lot of dumb engineering leaders pretending that AWS, Azure and Snowflake are such wildly different ecosystems that not having direct experience in theirs is disqualifying (for pure DE roles, not talking broader sysadmin).
The entire data world is rife with people who don’t have the faintest clue what they’re doing, who really like buzzwords, and who have never thought about their problem space critically.
If airflow is a layer of abstraction something is wrong.
Yes it is an additional layer, but if your orchestration starts concerning itself with what it is doing then something is wrong. It is not a layer on top of other logic, it is a single layer where you define how to start your tasks, how to tell when something is wrong, and when to run them.
If you don't insist on doing heavy computations within the airflow worker, it is dirt cheap. If it's something that can easily be done in bash or python you can do it within the worker as long as you're willing to throw a minimal amount of hardware at it.
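Something like this, as a rough sketch (the DAG id and script are placeholders): the worker only launches the job, and the heavy lifting happens wherever the script points.

    # Sketch of a thin orchestration layer: Airflow only starts the job and
    # handles scheduling and retries; the actual compute runs elsewhere.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(dag_id="nightly_rollup", start_date=datetime(2024, 1, 1),
             schedule="@daily", catchup=False):
        BashOperator(
            task_id="rollup",
            bash_command="bash run_nightly_rollup.sh",  # placeholder; e.g. submits work to a bigger box
            retries=2,
        )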
I've done a handful of interviews recently where the 'scaling' problem involves something that comfortably fits on one machine. The funniest one was ingesting something like 1GB of JSON per day. I explained, from first principles, how it fits, and received feedback along the lines of "our engineers agreed with your technical assessment, but that's not the answer we wanted, so we're going to pass". I've had this experience a good handful of times.
I think a lot of people don't realize machines come with TBs of RAM and hundreds of physical cores. One machine is fucking huge these days.