Hacker News

pama, last Tuesday at 11:23 PM

At least the authors acknowledge it for what it is: a tiny model trained on a tiny corpus, and worse than comparable transformers in terms of accuracy. I like the experimentation with new designs, and one doesn't always need to show near-SOTA results. From a brief inspection, however, I think it will be hard for this work to become a high-profile conference acceptance without significant additional work.


Replies

jeffjeffbear, last Wednesday at 12:04 AM

I would really like to see more testing with a deeper hierarchy and with alpha and beta set to nonzero values.