Oct 22, 2024
Yep, absolutely. Compute off-chain, validate on-chain. As I wrote in my article, one of the solutions is zkSNARKs, but generating those proofs is very compute-intensive, and deep learning is already compute-intensive on its own. So, I suggest doing the training off-chain and creating a full log of the training (how the weights are modified in every round). If you store every parameter at every step, the full training is reproducible. Then an independent validator chooses random steps and re-checks them. It's not perfect, but I think it's the best available solution until we find more efficient ways (maybe specialized hardware) to generate ZK proofs.
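To make the idea concrete, here's a minimal sketch of the log-and-spot-check scheme. It's a toy model, not a real trainer: `train_step` stands in for one deterministic training round (in practice you'd also need to pin down the data batch, framework version, and floating-point behavior to get bit-exact replay), and all names are illustrative.

```python
import random

def train_step(weights, batch_seed):
    # Toy deterministic update; the seed stands in for the data batch.
    rng = random.Random(batch_seed)
    return [w - 0.01 * rng.uniform(-1, 1) for w in weights]

def run_training(initial_weights, num_steps):
    """Trainer: record the weights after every step (the full log)."""
    log = [list(initial_weights)]
    w = list(initial_weights)
    for step in range(num_steps):
        w = train_step(w, batch_seed=step)
        log.append(list(w))
    return log

def spot_check(log, num_samples):
    """Validator: recompute randomly chosen steps and compare them
    against the trainer's claimed log."""
    for step in random.sample(range(len(log) - 1), num_samples):
        recomputed = train_step(log[step], batch_seed=step)
        if recomputed != log[step + 1]:
            return False  # log is inconsistent at this step
    return True

log = run_training([0.0] * 4, num_steps=100)
print(spot_check(log, num_samples=10))  # honest log passes the audit
```

The key property: a tampered step breaks the chain between two consecutive log entries, so any validator who happens to sample that step catches it, and sampling more steps raises the detection probability without redoing the whole training run.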