Machine intelligence belongs to everyone.
We are proud to introduce the Allora Network litepaper, which lays out Allora’s novel Inference Synthesis mechanism, as well as additional context on what problems the network aims to solve.
Recent advances in data access and computing power have enabled the first forms of machine intelligence capable of offering meaningful insights. However, the tremendous resources required have caused these solutions to be closed and siloed by industry monoliths. To achieve the full potential of machine intelligence, the data, algorithms, and participants must be maximally connected. Network solutions are needed, and the decentralized nature of blockchain technology is ideal for solving this problem.
Allora is a self-improving, decentralized machine intelligence network that surpasses the capabilities of its individual participants. Allora achieves this through two main innovations:
- Allora is context-aware
- Allora has a differentiated incentive structure
Unlike other decentralized AI networks, which lack the capability for context awareness, Allora is designed to evaluate and combine the predictive outputs of ML models based on both their historical performance and their relevance to current contextual conditions. In doing so, Allora ensures that the network always leverages the most appropriate models for any given situation.
Other attempts to create self-improving networked intelligence rely on a cumulative reputation system to aggregate inferences, a method that inhibits context awareness. As such, these networks tend to favor models with historically strong performance, overlooking the fact that some models perform well only under certain conditions. By dismissing models that don’t consistently perform well, these networks miss out on optimizing accuracy.
Allora’s context awareness allows it to understand that in certain conditions — and only in certain conditions — some models are worth paying attention to. This leads to context-aware machine intelligence.
The second critical hurdle to achieving decentralized machine intelligence is creating custom incentive structures that appropriately reward different actions within the network.
Therefore, workers in the network are rewarded for two primary actions:
- Their direct inferences, which contribute to the network’s collective intelligence.
- The accuracy of their forecasts regarding the performance (or losses) of other participants’ inferences (more on that later).
What Makes Allora Context-Aware?
Allora’s context awareness comes from rewarding participants for forecasting each other’s performance under current conditions. In many networks, worker nodes only provide an inference and nothing else.
On Allora, worker nodes have the capability of providing two outputs:
- An inference and
- A forecast of the accuracy of each other’s inferences
In other words, the network rewards worker nodes for commenting on each other’s expected performance, through a mechanism called Forecast-Implied Inference. This is the core mechanism that makes Allora context-aware.
For example, in a topic (or sub-network) on Allora that aims to predict the price of BTC, other workers might report that a particular model performs worse when US equities markets are closed.
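To make the dual outputs concrete, here is a minimal sketch of what a worker’s submission could look like. The names (`WorkerPayload`, `forecasted_losses`) are illustrative assumptions, not the actual Allora API; the point is only that a worker may submit an inference, forecasted losses of its peers, or both.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorkerPayload:
    """Hypothetical shape of a worker's two outputs (not Allora's real schema)."""
    worker_id: str
    inference: Optional[float] = None  # e.g. a predicted BTC price
    # Forecasted loss of each peer's inference under current conditions,
    # keyed by peer worker id; lower means "I expect this peer to be accurate".
    forecasted_losses: dict = field(default_factory=dict)

# A worker may provide either output, or both.
payload = WorkerPayload(
    worker_id="worker-a",
    inference=64_250.0,
    forecasted_losses={
        # This worker expects worker-b's model to do poorly while
        # US equities markets are closed.
        "worker-b": 0.12,
        "worker-c": 0.03,
    },
)
```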
How Forecast-Implied Inference Works
Like their counterparts in other decentralized AI networks, workers in Allora provide their inferences to the network. What sets Allora apart is the additional responsibility of the forecasting task: workers forecast the accuracy of other participants’ inferences within their specific topic (or sub-network). This dual-layer contribution significantly enriches the network’s intelligence.
The entire process of condensing inferences and forecasted losses into a single inference is referred to as Inference Synthesis. Here’s how the Inference Synthesis mechanism unfolds:
- Workers Provide Loss Forecasts — Workers estimate the potential losses (or inaccuracies) of models submitted by their peers in the same topic.
- The Network Weights According to These Forecasts — The network assesses these forecasted losses, applying weights to workers’ inferences based on their expected accuracy. A lower forecasted loss means higher accuracy, warranting a higher weight, and a higher forecasted loss means lower accuracy, warranting a lower weight.
- The Topic Optimizes Model Contributions — The topic doesn’t simply favor the models with the lowest forecasted losses. Instead, each topic intelligently combines elements from various contributions — taking, for example, 80% from one worker’s model and 20% from another’s — to craft a single, robust forecast-implied inference.
- The Network Combines All Inferences — The forecast-implied inferences are then combined with all other inferences, taking into account their historical performance, to formulate a comprehensive, topic-wide inference. This method ensures the network leverages the most effective inferences, thereby always outperforming the individual models in the network.
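The weighting step above can be sketched in a few lines. This is a toy stand-in, not Allora’s actual weighting formula: it assumes weights decay exponentially with forecasted loss, so a lower forecasted loss yields a larger share of the combined inference.

```python
import math

def forecast_implied_inference(inferences, forecasted_losses, scale=1.0):
    """Toy sketch: combine peer inferences with weights that decay
    exponentially in each peer's forecasted loss. Illustrative only --
    Allora's real scheme is defined in the litepaper."""
    weights = {
        worker: math.exp(-loss / scale)
        for worker, loss in forecasted_losses.items()
    }
    total = sum(weights.values())
    # Weighted average: lower forecasted loss -> higher weight.
    return sum(weights[w] * inferences[w] for w in weights) / total

# worker-c is forecast to be more accurate, so the combined
# inference lands closer to worker-c's prediction.
inferences = {"worker-b": 64_900.0, "worker-c": 64_100.0}
losses = {"worker-b": 0.12, "worker-c": 0.03}
combined = forecast_implied_inference(inferences, losses)
```

Note that the result blends both models (e.g. roughly an 80/20-style mix) rather than discarding the weaker one outright, mirroring how a topic combines elements from several contributions.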
The figure above demonstrates Allora’s self-improving intelligence. The dotted black line represents the accuracy of a basic network that combines individual model inferences without considering current conditions. The solid black line shows Allora’s accuracy when using the forecasting task. This innovation lets workers forecast each other’s performance based on context awareness. Relying solely on a model’s historical success fails to account for its contextual strengths — ignoring the fact that a model might excel in specific situations due to its nuanced understanding of particular aspects. Allora’s method overcomes this limitation, ensuring no model is overlooked simply because its strength lies in niche, context-specific scenarios.
This framework is complemented with the network’s differentiated incentive structure, which rewards workers for their inferences, and rewards reputers for their scoring of other workers’ inferences.
How Allora’s Differentiated Incentive Structure Works
In the Allora Network, one cannot buy the truth. However, one can be — and should be — rewarded for reporting on the ground truth. An inference’s assigned reward should be a means of rewarding the truth, not rewarding a worker’s stake in the network. Reputers still stake, however, because the network must have economic security. That stake is then used in determining the ground truth — a task that requires no particular insight; it simply solves the oracle problem of incentivizing reputers to relay information honestly.
This ethos shapes how contributions are valued and rewarded in the Allora Network. Because network participants adopt different roles in Allora, they are rewarded through different incentive structures:
- Workers — They provide AI-powered inferences to the network. There exist two kinds of inference that workers produce within a topic: the first refers to the target variable that the network topic is generating; the second refers to the forecasted losses of the inferences produced by other workers. These forecasted losses represent the fundamental ingredient that makes the network context-aware, as they provide insight into the accuracy of a worker under the current conditions. For each worker, the network uses these forecasted losses to generate a forecast-implied inference that combines the original inferences of all workers. A worker can choose to provide either or both types of inference, and receives rewards proportional to its unique contribution to the network accuracy, both in terms of its own inference and its forecast-implied inference.
- Reputers — They evaluate the quality of inferences and forecast-implied inferences provided by workers. This is done by comparing the inferences with the ground truth when it becomes available. A reputer receives rewards proportional to its stake and the consensus between its evaluations and those of other reputers.
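The reputer side of this incentive structure can be sketched as follows. This is an illustrative model under stated assumptions, not Allora’s actual reward formula: it assumes a reputer’s reward share is proportional to its stake multiplied by how closely its reported loss agrees with a stake-weighted consensus value.

```python
def reputer_reward_shares(stakes, reported_losses):
    """Hypothetical sketch: reward share ~ stake * agreement with the
    stake-weighted consensus of reported losses. Not Allora's real formula."""
    total_stake = sum(stakes.values())
    # Stake-weighted consensus of the reported losses.
    consensus = sum(stakes[r] * reported_losses[r] for r in stakes) / total_stake
    # Agreement in (0, 1]: 1 means exact agreement with consensus.
    agreement = {
        r: 1.0 / (1.0 + abs(reported_losses[r] - consensus))
        for r in stakes
    }
    raw = {r: stakes[r] * agreement[r] for r in stakes}
    norm = sum(raw.values())
    return {r: raw[r] / norm for r in raw}  # shares summing to 1

stakes = {"rep-1": 100.0, "rep-2": 50.0, "rep-3": 50.0}
# rep-3 reports an outlier loss, so its share falls below its stake share.
losses = {"rep-1": 0.05, "rep-2": 0.06, "rep-3": 0.50}
shares = reputer_reward_shares(stakes, losses)
```

The design intent this sketch captures is that stake provides economic security while agreement with other reputers, not stake alone, determines the reward.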
The graphic illustrates the distribution of rewards across the network’s task classes — the inference task (blue), the forecasting task (cyan), and the reputer task (red), along with a cumulative total (black). The breakdown shows rewards per time step and their cumulative sum, offering insights into the individual and collective contributions to the network’s intelligence. Because of this differentiated incentive structure between workers and reputers, the network optimally combines the inferences produced by the network participants without diluting their weights by something unrelated to accuracy. This is achieved by recognizing — and rewarding — both the historical and context-dependent accuracy of each inference.
Stronger Than the Individual
Allora’s collective intelligence will always outperform any individual contributing to the network.
The original mission in creating Allora was to commoditize the world’s intelligence. The innovations of context-aware inferences and the differentiated incentive structure address two major challenges that make that mission possible.
The network invites contributions from anyone with data or algorithms that improve the network. As such, the quest for machine intelligence is not a winner-take-all race. Just as Allora will always outperform the individual contributions to the network, so too will individual networks always pale in comparison to the step changes of decentralized AI at large.
With the versatility and accessibility of Allora, we foresee a future where machine intelligence will eventually become fully commoditized and integrated with the economy, technology, and society. How exactly this may happen will be shaped by the community building on Allora. We are excited to be part of that journey.
To read the full litepaper, click here.