Understanding maximum computing supply for machine learning requires decentralised coordination.
Gensyn's answer is a decentralised infrastructure that matches compute demand with compute supply so that anyone can train a machine learning model.
Maximum computing here means the sum of the compute resources (processors, storage, and memory) across all heterogeneous devices.
The supply is heterogeneous. It is not on-demand AWS, in-house infrastructure, a cloud supercomputer cluster, or donated volunteer compute. Gensyn is not about hardware at all; it is a coordination infrastructure that lets every possible electronic device in the world supply its excess computing power. To whom? Individuals who want to train or build a machine learning model themselves but lack the financial capital to do so.
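As a rough illustration of that definition, the sketch below just sums the resources of a set of heterogeneous devices; the device names, fields, and numbers are hypothetical, not anything specified by Gensyn.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """One heterogeneous supplier: a phone, a gaming PC, a spare server, etc."""
    name: str
    flops: float        # peak processor throughput, FLOP/s
    memory_gb: float    # memory available to training jobs
    storage_gb: float   # storage available for datasets and checkpoints

def maximum_compute(devices: list[Device]) -> dict:
    """Maximum computing supply: the sum of resources over all devices."""
    return {
        "flops": sum(d.flops for d in devices),
        "memory_gb": sum(d.memory_gb for d in devices),
        "storage_gb": sum(d.storage_gb for d in devices),
    }

supply = [
    Device("gaming-pc", flops=3.5e13, memory_gb=32, storage_gb=500),
    Device("old-laptop", flops=1.0e12, memory_gb=8, storage_gb=120),
    Device("idle-server", flops=8.0e13, memory_gb=256, storage_gb=2000),
]
print(maximum_compute(supply))
```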
Gensyn connects these individuals to the heterogeneous computing resources through a decentralised ledger system. The individuals, called submitters, pay fiat to access the computing resources. A solver trains a submitter's task by supplying its compute, while a verifier and a whistleblower check that the solver's computation is correct.
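A minimal sketch of how these four roles could fit around a single task; the Task fields and the settlement logic are assumptions for illustration, not Gensyn's actual protocol rules.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Role(Enum):
    SUBMITTER = auto()      # pays for training
    SOLVER = auto()         # performs the training work
    VERIFIER = auto()       # checks the correctness of the solver's work
    WHISTLEBLOWER = auto()  # challenges work it believes was verified incorrectly

@dataclass
class Task:
    """Hypothetical lifecycle of one training task on the ledger."""
    model_spec: str
    payment: float                                            # escrowed by the submitter
    checkpoints: list[bytes] = field(default_factory=list)    # posted by the solver
    verified: bool = False                                    # set once a verifier accepts the work
    challenged: bool = False                                  # set if a whistleblower disputes it

def settle(task: Task) -> str:
    """Release payment only for verified, unchallenged work."""
    if task.challenged:
        return "run dispute resolution"
    if task.verified:
        return "pay solver"
    return "await verification"
```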
The ledger system is connected to the offchain machine learning work through zero-knowledge (zk) proofs. Ideally, the verifier would also use zk proofs to attest to the correctness of the solver's entire workload.
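A full zk circuit is beyond a sketch, so the example below shows a simpler stand-in for the verifier's job: a probabilistic re-execution spot check of one checkpoint transition. It captures the same intuition of independently confirming the solver's work, but it is a substitute technique, and both function bodies are hypothetical.

```python
import hashlib
import random

def train_step(state: bytes, batch: bytes) -> bytes:
    """Stand-in for one deterministic training step (hypothetical).
    A real system would need a reproducible optimizer update here."""
    return hashlib.sha256(state + batch).digest()

def spot_check(checkpoints: list[bytes], batches: list[bytes]) -> bool:
    """Verifier picks a random checkpoint transition and recomputes it.
    Accept only if recomputation reproduces the solver's next checkpoint."""
    i = random.randrange(len(checkpoints) - 1)
    recomputed = train_step(checkpoints[i], batches[i])
    return recomputed == checkpoints[i + 1]
```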
The team at the time was not sure whether a transient onchain data storage system exists for holding the checkpoints of the solver's work; Arweave is too expensive for this case because it is designed for long-term storage.