I agree partially.
I do think it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not. And honestly, who cares; either it is a good resource for learning or it's not. You have to decide that for yourself, not based on whether AI helped write it.
However I think some of the critique was because he stole the code for the interactive editor and claimed he made it himself, which of course you shouldn’t do.
You can correct me if I'm wrong, but I believe the actual claim was that Zigbook had not complied with the MIT license's attribution clause for code someone believed was copied. MIT only requires attribution for copies of "substantial portions" of code, and the code copied was 22 lines.
Does that count as substantial? I'm not sure because I'm not a lawyer, but this was really a dispute over the definitions in an attribution clause, over less code than people regularly copy from Stack Overflow without a second thought. By the time this accusation was made, the Zigbook author was already under attack from the community, which put them in a defensive posture.
Now, just to be clear, I think the book author behaved poorly in response. But the internet is full of young software engineers who would behave poorly if they wrote a book for a community and the community turned around and vilified them for it. I try not to judge individuals by the way they behave on their worst days. But I do think a community has a behavior and culture of its own, and that does need to be guided with intention.
> I do think that it was weird to focus on the AI aspect so much. AI is going to pollute everything going forward whether you like it or not.
The bigger issue is that they claimed no AI was used. That's an outright lie, which makes you wonder whether you can trust anything else about it.
> And honestly, who cares; either it is a good resource for learning or it's not. You have to decide that for yourself, not based on whether AI helped write it.
You have no way of knowing whether something is a good resource for learning until you invest your time in it. If it turns out it's not a good resource, your time was wasted. Worse, you may have learned wrong ideas you now have to unlearn. And if something was generated with an LLM, you have zero idea which parts are right and which are wrong.