
big-chungus4 · last Saturday at 10:37 AM · 0 replies · view on HN

"Formulas that update backwards" isn't really the main idea behind neural networks. It's an efficient way of computing gradients, but there are other ways. For example forward propagation would compute a jacobian-matrix product of input wrt output with an identity matrix. Backpropagation is similar to bidi-calc to the same extent as it is similar to many other algorithms which traverse some graph backward.

I think you should be able to use bidi-calc to train a neural net, although I haven't tried. You'd define a neural net, then change its random output to what you want it to output. As I understand it, though, it won't find a good solution. It might find a least-squares solution for the last layer, but then you'd want the previous layer to output something that reduces the last layer's error, and at that point bidi-calc will no longer consider the last layer at all.
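Here's a hypothetical sketch of the failure mode I mean (again JAX, and this is my reading of what a bidi-calc-style solve could amount to, not its actual behavior): a one-shot least-squares fit adjusts only the last layer against fixed hidden activations, so the first layer never improves, whereas gradient descent would propagate the residual error back through it.

    import jax
    import jax.numpy as jnp

    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    X = jax.random.normal(k1, (32, 3))   # batch of inputs
    Y = jax.random.normal(k2, (32, 2))   # desired outputs
    W1 = jax.random.normal(k3, (3, 4))   # first layer, left untrained
    H = jnp.tanh(X @ W1)                 # fixed hidden activations

    # Least-squares fit of the last layer alone: minimizes ||H W2 - Y||^2.
    W2, *_ = jnp.linalg.lstsq(H, Y)

    # Whatever error remains can only shrink by also changing W1,
    # which this one-shot solve never touches.
    print(jnp.linalg.norm(H @ W2 - Y))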