Optimizing Decentralized Online Learning for Supervised Regression and Classification Problems

ADI 2, 1-14; January 27, 2025

Decentralized learning networks aim to synthesize a single network inference from a set of raw inferences provided by multiple participants. To determine the combined inference, these networks must adopt a mapping from historical participant performance to weights, and to appropriately incentivize contributions they must adopt a mapping from performance to fair rewards. Despite the increased prevalence of decentralized learning networks, there exists no systematic study that performs a calibration of the associated free parameters. Here we present an optimization framework for key parameters governing decentralized online learning in supervised regression and classification problems. These parameters include the slope of the mapping between historical performance and participant weight, the timeframe for performance evaluation, and the slope of the mapping between performance and rewards. These parameters are optimized using a suite of numerical experiments that mimic the design of the Allora Network, but have been extended to handle classification tasks in addition to regression tasks. This setup enables a comparative analysis of parameter tuning and network performance optimization (loss minimization) across both problem types. We demonstrate how the optimal performance-weight mapping, performance timeframe, and performance-reward mapping vary with network composition and problem type. Our findings provide valuable insights for the optimization of decentralized learning protocols, and we discuss how these results can be generalized to optimize any inference synthesis-based, decentralized AI network.
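To make the three tunable parameters concrete, the sketch below illustrates one plausible realization of the pipeline the abstract describes: smooth each participant's historical losses over a timeframe, map the smoothed performance to combination weights, synthesize a network inference, and split rewards by performance. This is a minimal illustration, not the Allora Network's actual design: the softmax-on-negative-loss form of the weight and reward mappings, the EMA smoothing factor `alpha` standing in for the performance-evaluation timeframe, and the `slope`, `reward_slope`, and `budget` parameters are all assumptions made for this example.

```python
import numpy as np

# Illustrative sketch only; functional forms and parameters are assumptions,
# not the mappings used by the Allora Network.

def ema_losses(loss_history, alpha=0.2):
    """Smooth each participant's loss history with an exponential moving
    average; alpha sets the effective performance-evaluation timeframe
    (smaller alpha = longer memory)."""
    ema = np.asarray(loss_history[0], dtype=float)
    for losses in loss_history[1:]:
        ema = alpha * np.asarray(losses, dtype=float) + (1.0 - alpha) * ema
    return ema

def performance_to_weights(smoothed_loss, slope=5.0):
    """Map smoothed losses to combination weights via a softmax on the
    negative loss; `slope` controls how sharply better performers dominate
    the combined inference."""
    z = -slope * smoothed_loss
    z = z - z.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def synthesize(inferences, weights):
    """Network inference as the weighted average of raw participant
    inferences (regression case)."""
    return float(np.dot(weights, np.asarray(inferences, dtype=float)))

def performance_to_rewards(smoothed_loss, reward_slope=3.0, budget=1.0):
    """Split a fixed reward budget in proportion to
    exp(-reward_slope * loss), so lower-loss participants earn more."""
    z = -reward_slope * smoothed_loss
    z = z - z.max()
    shares = np.exp(z)
    return budget * shares / shares.sum()

# Toy example: three participants, four evaluation epochs of losses.
history = [[0.9, 0.5, 0.7], [0.8, 0.4, 0.9], [0.7, 0.5, 0.8], [0.9, 0.3, 0.7]]
ema = ema_losses(history, alpha=0.2)
weights = performance_to_weights(ema, slope=5.0)
print("weights:", weights)
print("network inference:", synthesize([1.10, 1.25, 0.95], weights))
print("rewards:", performance_to_rewards(ema, reward_slope=3.0))
```

In this toy setup, increasing `slope` pushes the combination toward winner-take-all weighting, while decreasing it approaches an unweighted mean; calibrating that trade-off (together with the timeframe and reward slope) is the kind of optimization the paper's numerical experiments perform.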

@article{10.70235/allora.0x20001,
   author = {Kruijssen, J. M. Diederik and Valieva, Renata and Longmore, Steven N.},
   title = "{Optimizing Decentralized Online Learning for Supervised Regression and Classification Problems}",
   journal = {Allora Decentralized Intelligence},
   volume = {2},
   pages = {1-14},
   year = {2025},
   month = {1},
   day = {27},
   doi = {10.70235/allora.0x20001},
   url = {https://doi.org/10.70235/allora.0x20001},
   eprint = {2501.16519},
   archivePrefix = {arXiv},
}
