
ComputerGuru, last Sunday at 5:07 PM

Not very rigorous or scientific, honestly; I would say it's just clickbait spam with some pretty graphs. Everything on Twitter is now a "deep dive". There's no info on how the 10M "random examples" were generated, or how the sampling prevents the model from collapsing around variations of the same output. Others have already mentioned that the "classification" of output by coding language is bunk, with a good explanation of how Perl can come out on top even when it's not actually Perl, but I was struck by OP saying "(btw, from my analysis Java and Kotlin should be way higher. classifier may have gone wrong)" and then merrily continuing to use the data.
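
For what it's worth, even a cheap dedup pass would bound the collapse question. A minimal sketch (my own, not anything OP describes): normalize each sample, hash it, and count repeats:

    import hashlib
    import re

    def near_duplicate_rate(samples):
        # Crude collapse check: canonicalize whitespace and case so
        # trivial variations hash the same, then count repeats.
        seen, dupes = set(), 0
        for s in samples:
            key = hashlib.sha1(
                re.sub(r"\s+", " ", s.strip().lower()).encode()
            ).hexdigest()
            if key in seen:
                dupes += 1
            else:
                seen.add(key)
        return dupes / max(len(samples), 1)

    # A high rate among supposedly "random" generations is a red flag,
    # e.g. near_duplicate_rate(outputs) > 0.05 (threshold is my guess).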

Personally, I expect more rigor from any analysis and would hold myself to a higher standard. If I see anomalous output at some stage, I don't think "hmm, looks like one particular case may be bad but the rest is fine" but rather "something must have gone wrong and the entire output/methodology is unusable garbage" until I figure out exactly how and why it went wrong. And 99 times out of 100 it isn't just the one case (which here happened to be the languages OP was familiar with) but something fundamentally incorrect in the approach, which means the data isn't usable and doesn't tell you anything.


Replies

godelski, last Monday at 1:25 AM

  > Personally, I expect more rigor from any analysis and would hold myself to a higher standard.
When something is "pretty bizarre", the most likely conclusion is "I fucked up", and that seems very likely here. I really wonder if he actually checked the results of the classifier. These classifiers can be wildly inaccurate, since the differences between languages can be quite small and some languages read a lot like natural language. He even admits that Java and Kotlin should be higher, yet doesn't question Perl, R, AppleScript, Rust, or the big drop to Python. What's the joke? If you slam your head on the keyboard, you'll generate a valid Perl program?
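
A spot check takes five minutes. A minimal sketch of what I mean, where classify_language is a hypothetical stand-in for whatever classifier OP actually used and the labeled snippets are my own:

    # Known-label snippets (mine) to sanity-check any language classifier.
    LABELED = [
        ("print('hello')",                  "Python"),
        ('System.out.println("hello");',    "Java"),
        ('fun main() { println("hello") }', "Kotlin"),
        ("my $x = 1; print $x;",            "Perl"),
    ]

    def spot_check(classify_language):
        # classify_language: code string -> language name (assumed signature)
        hits = sum(classify_language(code) == lang for code, lang in LABELED)
        print(f"{hits}/{len(LABELED)} correct on known-label snippets")

    if __name__ == "__main__":
        # Trivial keyword stub just so this runs end to end; OP's real
        # classifier is unknown to me.
        def stub(code):
            if "System.out" in code:
                return "Java"
            if "fun main" in code:
                return "Kotlin"
            if "my $" in code:
                return "Perl"
            return "Python"
        spot_check(stub)  # -> 4/4 correct on known-label snippets

If the reported aggregate looks nothing like what a harness like this returns on obvious examples, the aggregate is the thing to distrust.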

It worries me that I get this feeling from quite a number of ML people who are being hired and paid big bucks by big tech companies. I say this as someone in ML too. There's a propensity to just accept outputs rather than question them. Questioning your results is a basic part of doing any research: you should always be incredibly suspicious of them. What did Feynman say? Something like "The first principle is that you must not fool yourself, and you are the easiest person to fool"?
