> Why buy this book when ChatGPT can generate the same style of tutorial for ANY project that is customized to you?
Isn't it obvious? Because the ChatGPT output wouldn't be reviewed!
You buy books like these precisely because they are written by a professional who has taken the time to divide the material into easily digestible chunks that form a coherent narrative, with runnable intermediate stages in between.
For example, I expect a raytracing project to start with simple ray casting of single-color objects. After that it can add things like lights and Blinn-Phong shading, progress with Whitted-style recursive raytracing for the shiny reflections and transparent objects, then progress to modern path tracing with things like BRDFs, and end up with BVHs to make it not horribly slow.
You can stop at any point and still end up with a functional raytracer, and the added value of each step is immediately obvious to the reader. There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
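To make that first stage concrete, here's a minimal sketch of the "simple ray casting of single-color objects" step. This is a toy of my own, not from any particular book; it assumes an orthographic camera and a single sphere, and renders hit/miss as ASCII:

```python
def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t;
    # a hit exists iff the quadratic's discriminant is non-negative.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0  # single-color stage: hit or miss is all we need

def render(width, height):
    # Orthographic camera on the z = 2 plane, looking down -z at a
    # radius-0.5 sphere sitting at the origin.
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            x = (i + 0.5) / width * 2 - 1    # pixel centre mapped to [-1, 1]
            y = 1 - (j + 0.5) / height * 2
            hit = hit_sphere((x, y, 2.0), (0.0, 0.0, -1.0), (0.0, 0.0, 0.0), 0.5)
            row += "#" if hit else "."
        rows.append(row)
    return rows

if __name__ == "__main__":
    for line in render(24, 12):
        print(line)
```

Every later stage in the progression (shading, recursion, BRDFs, BVHs) replaces or extends exactly one piece of this loop, which is what makes each step's added value visible.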
100% right. I buy lots of Japanese cookbooks secondhand. I found an Okinawan cookbook for $8. When I received it, it was clear the author was just a content farmer pumping out recipe books with recipes copied from online. Once I looked up their name, I saw hundreds of books across cooking, baking, etc. There was no way they had even tried all of the recipes.
So yes, review and “narrative voice” will be more valuable than ever.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
This is the entire crux of your argument. If it's false, then everything else you wrote is wrong, because all the consumer of the book cares about is the quality of the output.
I'd be pretty surprised if you couldn't get a tutorial exactly as good as you want, if you're willing to write a prompt that's a bit better than just "I want to build a ray tracer". I'd be even more surprised if LLMs can't do this in 6 months. And that's not even considering the benefits of using an LLM (something unclear in the tutorial? Ask and it's answered).
Agreed. I'm still amazed that people keep trusting a service that has something like a 60% failure rate. Who would want to buy something that fails over half the time?
Shame OP stopped their book; it would definitely have found an audience. I know many programmers who love this style of book.
In my dream world, you take that book plus information about yourself (how good of a programmer you already are), feed that into AI and get a customized version that is much better for you. Possibly shorter. Skips boring stuff you know. And slows down for stuff you have never been exposed to. Everyone wins.
Why buy the book when big AI can add it to their training data. Multitudes of people can then enjoy slightly higher quality output without you being compensated a single cent.
It would actually be nice to have a book-LLM. That is, an LLM that embodies a single (human-written) book, like an interactive book. With a regular book, you can get stuck when the author didn’t think of some possible stumbling block, or thinks along slightly differently lines than the reader. An LLM could fill in the gaps, and elaborate on details when needed.
Of course, nowadays you can ask an LLM separately. But that isn’t the same as if it were an integrated feature, focused on (and limited to) the specific book.
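The retrieval half of such a "book-LLM" can be sketched in a few lines. This is a hypothetical toy (the function names and the keyword-overlap scoring are mine, not any real product's API): chunk the single book, pick the chunks most relevant to the reader's question, and build a prompt that confines the model to them:

```python
def chunk_book(text, size=40):
    # Split the book into overlapping windows of `size` words.
    words = text.split()
    step = max(1, size // 2)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def best_chunks(question, chunks, k=2):
    # Rank chunks by naive keyword overlap with the question.
    # (A real system would use embeddings; overlap keeps the sketch self-contained.)
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def book_prompt(question, book_text):
    # Confine the model to retrieved excerpts of this one book.
    context = "\n---\n".join(best_chunks(question, chunk_book(book_text)))
    return ("Answer ONLY from the following book excerpts; "
            "if they don't cover it, say so.\n\n"
            + context + "\n\nQuestion: " + question)
```

The "focused on (and limited to) the specific book" property comes entirely from that instruction plus the restricted context, which is why an integrated feature beats asking a general chatbot on the side.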
Absolutely. And further, because when you prompt ChatGPT as you write your ray tracer, you don't know what the important things to ask are. Sure, you can get there with enough prompts of "what should I be asking you" or "explain the basics" of such and such. But the point of the book is that all of that work has already been done for you in a vetted way.
> Isn't it obvious? Because the ChatGPT output wouldn't be reviewed!
Reviewed by a human. It's trivial to take the output from one LLM and have another LLM review it.
Also, often mediocrity is enough, especially if it is cheap.
This discussion might be a bit more grounded if we were to discuss a concrete LLM response. Seems pretty freaking good to me:
https://chatgpt.com/share/6955a171-e7a4-8012-bd78-9848087058...
You’re hitting at the core problem. Experts have done the intensive research to create guides on the Internet which ChatGPT is trained on. For example, car repairs. ChatGPT can guide you through a lot of issues. But who is going to take the time to seriously investigate and research a brand new issue in a brand new model of car? An expert. Not an AI model. And as many delegate thinking to AI models, we end up with fewer experts.
ChatGPT is not an expert, it’s just statistically likely to regurgitate something very similar to what existing experts (or maybe amateurs or frauds!) have already said online. It’s not creating any information for itself.
So if we end up with fewer people willing to do the hard work of creating the underlying expert information these AI models are so generously trained on, we see stagnation in progress.
So encouraging people to write books and do real investigative research, digging for the truth, is even more important than ever. A chatbot’s value proposition is repackaging that truth in a way you can understand, surfacing it when you might not have found it. Without people researching the truth, that already fragile foundation crumbles.
I wouldn’t be surprised if publishers today delegated some of the reviewing to LLMs.
Does this type of ray tracing book exist? It's something I've never learned about, and I would love to know what courses or books others have found valuable.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
I mean, maybe not "flawlessly", and not in a single prompt, but it absolutely can.
I've gone deep in several areas, essentially consuming a book's worth of content from ChatGPT over the course of several days, each day consisting of about 20 prompts and replies. It's an astonishingly effective way to learn, because you can ask it to go simpler and explain more when you're confused, in whatever mode you want (e.g. focus on the math, the geometry, the code, or the intuition). And then whenever you feel like you've "got" the current stage, ask it what to move on to next, and whether there are choices.
This isn't going to work for cutting-edge stuff that you need a PhD advisor to guide you through. But for most stuff up to about a master's-degree level where there's a pretty "established" progression of things and enough examples in its training data (which ray-tracing will have plenty of), it's phenomenal.
If you haven't tried it, you may be very surprised. Does it make mistakes? Yes, occasionally. Do human-authored books also make mistakes? Yes, probably at about the same rate. But with a book you're stuck adapting yourself to its organization, style, and content, whereas ChatGPT adapts its teaching, explanations, and content to you and your needs.
> a professional, who has taken the time to divide it up into easily digestible chunks which form a coherent narrative, with runnable intermediate stages in-between.
Tangentially related, but I think the way to get to this is to build a "learner model" that LLMs could build and update through frequent integrated testing during instruction.
One thing that books can't do is go back and forth with you, having you demonstrate understanding before moving on, or noticing when you forget something you've already learned. That's what tutors do. The best a book can do is put exercises at the end of a chapter and pitch the next chapter at someone who can complete those exercises successfully. An LLM could drop in a single-question quiz as soon as you ask a question that doesn't jibe with its model of you, and fall back into review if you blow it.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
Have you tried? Lately? I'd be amazed if the higher-end models didn't do just that. Ray-tracing projects and books on 3D graphics in general are both very well-represented in any large training set.
> Because the ChatGPT output wouldn't be reviewed!
So what? If it's not already, frontier LLM one-shot output will be as good as heavily edited human output soon.
I heard the other day that LLMs won't replace writers, just mediocre writing.
On the one hand, I can see the point: you'll never get ChatGPT to come up with something on par with the venerable Crafting Interpreters.
On the other hand, that means that all the hard-won lessons from writing poorly and improving with practice will be eliminated for most. When a computer can do something better than you right now, why bother trying to get better on your own? You never know if you'll end up surpassing it or not. Much easier to just put out mediocre crap and move on.
Which, I think, means that we will see fewer and fewer masters of crafts as more people are content with drudgery.
After all, it is cheaper and generally healthier and tastier to cook at home, yet for many people fast food or ordering out is a daily thing.