Hacker News

iLoveOncall · 11/07/2024 · 3 replies

Your website has a serious issue. Trying to play the YouTube video makes the page slow to a crawl, even at 1080p, while playing it on YouTube directly has no issue, even at 4K.

On the project itself, I don't really find it exciting at all, I'm sorry. It's just another wrapper for a 3rd party model, and the fact that 1) you can describe the entire workflow in 3 paragraphs, and 2) you built and launched it in around 4 months, emphasizes that.

Congrats on the launch, I guess.


Replies

brandonchen · 11/07/2024

Weird, thanks for flagging – we're just using a YouTube embed in an iframe, but I'll take a look.

No worries if this isn't a good fit for you. You're welcome to try it out for free anytime if you change your mind!

FWIW I wasn't super excited when James first showed me the project. I had tried so many AI code editors before, but never found them to be _actually usable_. So when James asked me to try, I just thought I'd be humoring him. Once I gave it a real shot, I found Codebuff to be great because of its form factor and deep context awareness: a CLI allows for portability and system integration that plugins or extensions really can't match. And when the AI actually understands my codebase, I just get a lot more done.

Not trying to convince you to change your mind, just sharing that I was in your shoes not too long ago!

roopepal · 11/07/2024

> Your website has a serious issue.

I was thinking the same. My (admittedly old-ish) 2070 Super sits at 25-30% GPU usage just looking at the landing page. Seems a bit crazy for a basic web page. I'm guessing it's the background animation.

CharlieDigital · 11/08/2024

> I'm sorry. It's just another wrapper for a 3rd party model

The main challenge of working with LLMs is actually one of "ETL": understanding what data to load and how to transform it into a form that leads to the desired output.

For trivial tasks, this is certainly easy. For complicated tasks, like understanding a codebase or a product catalog of tens of thousands of entries, this is non-trivial.

My team is not working in the code gen space, but even though we also "just wrap" an API, almost all of our work is in data acquisition, transformation, the retrieval strategy, and structuring the request context.

The API call to the LLM is like hitting "bake" on an oven: all of the real work happens before that.
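
To make that concrete, here's a minimal sketch in Python. The helper names (`acquire`, `retrieve`, `build_context`, `call_llm`) are hypothetical and the model call is stubbed out; it's just meant to show the shape, not anyone's actual pipeline. The "bake" step is one line at the bottom; everything above it is the real work.

```python
# Hypothetical sketch: the single LLM call at the end is trivial compared to
# the acquisition, transformation, retrieval, and context-structuring before it.
from dataclasses import dataclass


@dataclass
class Chunk:
    source: str
    text: str


def acquire() -> list[Chunk]:
    """Data acquisition: a real system would read files, a product catalog, a DB, etc.
    Stubbed with in-memory examples so the sketch runs as-is."""
    return [
        Chunk("checkout.py", "def apply_discount(cart, code): ..."),
        Chunk("billing.py", "def charge(customer, amount): ..."),
        Chunk("README.md", ""),
    ]


def transform(chunks: list[Chunk]) -> list[Chunk]:
    """Transformation: clean, split, and normalize into retrievable units."""
    return [Chunk(c.source, c.text.strip()) for c in chunks if c.text.strip()]


def retrieve(chunks: list[Chunk], query: str, k: int = 5) -> list[Chunk]:
    """Retrieval strategy: pick the few chunks that matter for this query.
    Stubbed with naive keyword overlap; real systems use embeddings, ranking, etc."""
    scored = sorted(chunks, key=lambda c: -sum(w in c.text for w in query.split()))
    return scored[:k]


def build_context(chunks: list[Chunk], query: str) -> str:
    """Structure the request context: this is what the model actually sees."""
    sources = "\n\n".join(f"[{c.source}]\n{c.text}" for c in chunks)
    return f"Relevant material:\n{sources}\n\nTask: {query}"


def call_llm(prompt: str) -> str:
    """The 'bake' step: a single API call. Stubbed here."""
    return f"(model output for a {len(prompt)}-char prompt)"


if __name__ == "__main__":
    query = "How does checkout apply discounts?"
    chunks = transform(acquire())
    answer = call_llm(build_context(retrieve(chunks, query), query))
    print(answer)
```

Swap the stubs for real file readers, embeddings, and an actual model client, and the proportions stay roughly the same: one line of "bake", everything else is prep.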