Do you have a demo video?
What are you using for processing (Polars)?
Marketing note: I'm sure you're proud of P Core/V Core, but that doesn't matter to your users, it's an implementation detail. At a maximum I'd write "intelligent execution that scales from small files to large files".
As an implementation note, I would make it simple to operate on just the first 1,000 (or 10k/100k) rows so responses are super quick; then, once users are happy with the transform, make it a single click to operate on the entire file, with a time estimate.
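Roughly what I have in mind, as a sketch (assuming a Polars-backed pipeline; the function names, row counts, and the naive linear time estimate are all illustrative):

```python
import time
import polars as pl

def preview(path: str, transform, sample_rows: int = 1_000) -> pl.DataFrame:
    # Lazy scan + head() only reads roughly the first N rows, so the
    # preview stays fast even on huge files.
    return transform(pl.scan_csv(path).head(sample_rows)).collect()

def run_full(path: str, transform, sample_rows: int = 1_000) -> pl.DataFrame:
    # Time the sample, extrapolate a rough (linear) estimate for the
    # full run, then execute on the whole file.
    start = time.perf_counter()
    preview(path, transform, sample_rows)
    per_row = (time.perf_counter() - start) / sample_rows

    total_rows = pl.scan_csv(path).select(pl.len()).collect().item()
    print(f"Estimated full run: ~{per_row * total_rows:.1f}s over {total_rows:,} rows")
    return transform(pl.scan_csv(path)).collect()

# e.g. preview("data.csv", lambda lf: lf.unique())  # dedupe preview on 1k rows
```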
Another feature I'd like in this vein: execute on a small subset, then if an error shows up on a larger subset, try to reduce that larger subset to a small, quick-to-reproduce version. Especially useful for deduping.
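Something like this is what I'm picturing (a naive bisection sketch, not full delta debugging; `fails` is a hypothetical predicate that reruns the transform and reports whether the error reproduces):

```python
import polars as pl

def shrink(df: pl.DataFrame, fails) -> pl.DataFrame:
    # Bisect a failing input down to a smaller slice that still fails.
    # `fails(df) -> bool` reruns the transform and checks for the error.
    while df.height > 1:
        mid = df.height // 2
        top, bottom = df.head(mid), df.tail(df.height - mid)
        if fails(top):
            df = top
        elif fails(bottom):
            df = bottom
        else:
            # The failure needs rows from both halves (common for dedupe
            # bugs, where the duplicate pair straddles the split); a full
            # delta-debugging strategy would keep shrinking from here.
            break
    return df
```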
> Marketing note: I'm sure you're proud of P Core/V Core, but that doesn't matter to your users, it's an implementation detail. At a maximum I'd write "intelligent execution that scales from small files to large files".
Speaking personally, "intelligent execution that scales from small files to large files" sounds like marketing buzz that could mean absolutely nothing. I like that it mentions specifically switching between RAM and disk-powered engines, because that suggests it's not just marketing speak, but was actually engineered. Maybe P vs V Core is not the best way to market it, but I think it's worth mentioning that design.
Thanks for the thoughtful feedback!
Yes, Data.olllo uses libraries including Polars under the hood for fast, efficient processing. A demo video is in the works and should be up soon.
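Roughly, the P Core/V Core switch boils down to something like this (a simplified sketch of the idea, not the production code; the size threshold here is arbitrary):

```python
import os
import polars as pl

RAM_THRESHOLD = 500 * 1024**2  # arbitrary cutoff for this sketch

def load(path: str) -> pl.DataFrame:
    if os.path.getsize(path) < RAM_THRESHOLD:
        # Small file: eager, fully in-memory read.
        return pl.read_csv(path)
    # Large file: lazy scan + streaming collect processes the data in
    # chunks instead of loading everything into RAM.
    # (newer Polars spells this `collect(engine="streaming")`)
    return pl.scan_csv(path).collect(streaming=True)
```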
Good point about the "P Core/V Core" naming—I'll simplify that to focus more on the user benefit, like scaling from small to large files smoothly.
I also like your idea of running transformations on a sample first with a one-click full run—very aligned with the vision. And subset reproduction for errors is a great suggestion, especially for things like deduping. Appreciate it!