Hacker News

spwa4 · 06/29/2025 · 0 replies · view on HN

If you add a dimension to the input vector you can process them independently and more efficiently. Look at this. Say you have a 2x2 network, and you apply it to an input vector of two values:

[i1 i2] ⋅ [w1 w2 ; w3 w4] = [i1⋅w1 + i2⋅w3   i1⋅w2 + i2⋅w4]
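In NumPy this is a one-liner (a sketch with made-up concrete values standing in for i1..i2 and w1..w4):

```python
import numpy as np

# Input vector [i1, i2] and weight matrix [w1 w2 ; w3 w4].
i = np.array([1.0, 2.0])            # i1, i2
W = np.array([[3.0, 4.0],           # w1, w2
              [5.0, 6.0]])          # w3, w4

# Row vector times matrix: [i1*w1 + i2*w3, i1*w2 + i2*w4]
out = i @ W
print(out)  # [13. 16.]
```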

Cool. Now what happens if we make the input a 2x2 matrix, where the second row holds a second set of two input values:

[i1 i2 ; j1 j2] ⋅ [w1 w2 ; w3 w4] = [i1⋅w1 + i2⋅w3   i1⋅w2 + i2⋅w4 ; j1⋅w1 + j2⋅w3   j1⋅w2 + j2⋅w4]

Look at that! The input has 2 rows, each holding one input vector for the network, and the output matrix has 2 rows, each containing the outputs for the respective input. So you can "just" apply your neural network to any number of inputs by putting one per row. You could do 2, or 1000 this way ... and the shared work (loading the weights, setting up the multiply) only happens once instead of once per input.
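The batched version in NumPy, continuing the same sketch values as above ([7, 8] standing in for j1, j2):

```python
import numpy as np

W = np.array([[3.0, 4.0],           # w1, w2
              [5.0, 6.0]])          # w3, w4

# Stack any number of input vectors as rows of one matrix;
# a single matmul then produces one output row per input row.
batch = np.array([[1.0, 2.0],       # i1, i2
                  [7.0, 8.0]])      # j1, j2

out = batch @ W
print(out)  # [[13. 16.]
            #  [61. 76.]]
```

Same call, same weight matrix, just a taller input: row 0 of `out` is the network applied to the first input, row 1 to the second.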