Andre Ye
Nov 22, 2021

You're right that you can get the weights and biases, but in neural network interpretability/explainability we're often trying to convert those hundreds of thousands of parameters, stored in functionally unreadable matrices, into insights about the network's decision-making process.
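
To make the contrast concrete, here is a minimal sketch (my own illustration, not from the original comment): the toy model, the random input, and the choice of input-gradient saliency are all assumptions. It shows that reading out `named_parameters()` just yields large matrices, while even a basic attribution technique turns those parameters plus a data point into a per-feature importance signal.

```python
# Sketch: raw parameter access vs. a simple gradient-based saliency attribution.
import torch
import torch.nn as nn

# Hypothetical toy classifier, chosen only for illustration.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

# 1) "Getting the weights and biases" is easy, but the result is just big matrices.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))  # e.g. '0.weight' (64, 20) -- hard to read meaning from

# 2) One basic interpretability technique: input-gradient saliency.
#    It asks which input features most influence the predicted class score.
x = torch.randn(1, 20, requires_grad=True)   # a single (random) example
logits = model(x)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()        # d(class score) / d(input)

saliency = x.grad.abs().squeeze()            # per-feature importance magnitude
print("Most influential input features:", saliency.topk(5).indices.tolist())
```

Real interpretability methods (integrated gradients, SHAP, feature visualization, etc.) are more sophisticated, but they share this goal: mapping opaque parameters onto human-readable explanations of behavior.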
