Templar, often stylized as τemplar, is a decentralized artificial intelligence (AI) protocol that operates as Subnet 3 (SN3) on the Bittensor network. The project is designed to create an incentivized, internet-wide marketplace for computational resources, primarily for the distributed training of large-scale AI models. It aims to provide a more democratic, cost-effective, and scalable alternative to centralized cloud providers for AI development. [1]
Templar's core mission is to democratize access to the immense computational power required for training advanced AI, particularly Large Language Models (LLMs). The protocol functions as a Decentralized Physical Infrastructure Network (DePIN), connecting participants who contribute their hardware (Miners) with the network's need for processing power. In return for contributing GPU and CPU resources to collaborative training tasks, miners are rewarded with the project's native cryptocurrency. [1]
The project leverages the underlying architecture and consensus mechanism of the Bittensor (τ) blockchain. As Subnet 3, Templar competes for a share of the Bittensor network's token emissions, which are allocated based on the value and intelligence demonstrated by the subnet. The protocol's incentive system is designed to evaluate and reward the quality of computational work, ensuring that contributions genuinely improve the collective AI model being trained. [2]
Templar's technology focuses on overcoming the significant barriers associated with distributed computing, such as high communication costs and a lack of coordination. It employs communication-efficient algorithms and a blockchain-based incentive system to organize a global, permissionless network of contributors. [3] The project's vision also includes the future development of a decentralized marketplace where users could hire autonomous AI agents developed on the network for various digital tasks. [4]
The foundational research and whitepaper for Templar were published in the second and third quarters of 2024, outlining the project's vision for a decentralized autonomous agent network and a framework for distributed LLM training. [1] [4] Following this, the project's native token, TPLR, was introduced through an Initial DEX Offering (IDO) in August 2024. [1]
In September 2024, Templar successfully secured a slot on the Bittensor network and officially launched its mainnet as Subnet 3 (SN3), becoming one of the first ten subnets in the ecosystem. [1] [4] In the first quarter of 2025, the project team created and distributed the SN3 subnet token via a fair launch mechanism, airdropping it to early network participants and liquidity providers. [4]
A significant milestone occurred in the first quarter of 2025 with the initiation of the first "Crusade" campaign. This large-scale, coordinated training event focused on developing a specialized Large Language Model for code generation, serving as a major proof-of-concept for the network's capabilities. [1] Later in 2025, the team released "Templar v2," an update to the subnet's incentive mechanism aimed at better rewarding agent creativity and problem-solving abilities. [4]
By the fourth quarter of 2025, the network's participation had grown to surpass 1,000 active, concurrent miners. [1] In early 2026, the Templar research team published a paper detailing a comparative analysis that showed up to a 30% cost-efficiency improvement for specific AI workloads compared to traditional cloud providers. During the same period, the SN3 token experienced a surge in interest amid broader growth in the Bittensor ecosystem, reaching its all-time high price. [1] [4]
Templar's architecture is built on the Bittensor network and is designed as a peer-to-peer system that coordinates two primary roles: Miners and Validators. The system uses specialized algorithms and a robust incentive mechanism to facilitate distributed machine learning. [2]
Templar’s incentive structure is based on Bittensor’s Proof-of-Intelligence consensus, rewarding participants based on the value they add to the collective. The project developed a proprietary system named Gauntlet to manage this process. [1] [3]
Gauntlet assesses participants using a two-stage mechanism.
To score and rank miners against each other, the system uses the OpenSkill (PlackettLuce) rating algorithm, a method designed for multi-player rating environments. The resulting ratings are translated into on-chain weights, and miners receive rewards from the Bittensor network in direct proportion to their assigned weights. This ensures that high-quality contributions earn greater rewards. [2]
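As a rough sketch of how rating-based scores might be translated into normalized on-chain weights, consider the following. The (mu, sigma) values and the conservative mu − 3·sigma "ordinal" formula are illustrative assumptions in the style of OpenSkill-like rating systems, not Gauntlet's actual parameters.

```python
# Illustrative sketch: converting OpenSkill-style (mu, sigma) ratings into
# normalized weights. The miners, ratings, and the mu - 3*sigma ordinal are
# assumptions for demonstration, not Gauntlet's actual scoring rules.

def ordinal(mu: float, sigma: float) -> float:
    """Conservative skill estimate, penalizing uncertain ratings."""
    return mu - 3.0 * sigma

def to_weights(ratings: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Clamp ordinals at zero and normalize so the weights sum to 1."""
    scores = {m: max(ordinal(mu, sigma), 0.0) for m, (mu, sigma) in ratings.items()}
    total = sum(scores.values()) or 1.0
    return {m: s / total for m, s in scores.items()}

ratings = {
    "miner_a": (28.0, 2.0),   # strong miner, low uncertainty
    "miner_b": (25.0, 8.0),   # similar mean, but high uncertainty
    "miner_c": (10.0, 4.0),   # weak miner; conservative score clamps to zero
}
weights = to_weights(ratings)
```

Under this sketch, rewards proportional to the weights naturally favor miners whose contributions are both strong and consistently demonstrated.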
A core technical challenge in distributed training is the high communication cost of exchanging large amounts of data between nodes. Templar addresses this with its SparseLoCo algorithm and a multi-step gradient compression technique. [3] [2]
SparseLoCo is a communication-efficient training algorithm designed for training LLMs in low-bandwidth environments such as the public internet. It combines two key techniques; together they allow for extreme compression, reducing communication costs while reportedly improving model performance compared to alternative methods. [3]
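As a rough illustration of this kind of extreme compression, the sketch below implements plain top-k sparsification: only the largest-magnitude gradient entries are transmitted. The function names and sample values are illustrative assumptions, not SparseLoCo's actual implementation.

```python
# Sketch of top-k gradient sparsification. Keeping 1/ratio of the entries
# mirrors the general idea of a top-k compression ratio; this is an
# illustrative toy, not SparseLoCo's actual kernel.

def topk_compress(grad, ratio=32):
    """Keep the k largest-magnitude entries; return (indices, values)."""
    k = max(1, len(grad) // ratio)
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return idx, [grad[i] for i in idx]

def topk_decompress(idx, vals, size):
    """Rebuild a dense gradient: zeros everywhere except the kept entries."""
    dense = [0.0] * size
    for i, v in zip(idx, vals):
        dense[i] = v
    return dense

grad = [0.01, -2.5, 0.3, 0.02, 1.7, -0.05, 0.9, 0.004] * 8  # 64 entries
idx, vals = topk_compress(grad, ratio=32)                    # keeps 64/32 = 2 entries
sparse = topk_decompress(idx, vals, len(grad))
```

With a ratio of 32, each node ships roughly 3% of the raw gradient, which is where the bulk of the bandwidth savings comes from.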
The technical process for compressing and exchanging gradients involves several steps, with the topk_compression ratio set to 32. This entire process is managed by a communication system built with Python's asyncio to handle concurrent operations. [2]
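A minimal sketch of what asyncio-based concurrent gradient collection can look like follows. The peer names and the `fetch_gradient` coroutine are hypothetical placeholders standing in for real network calls; Templar's actual communication API is not shown here.

```python
import asyncio
import random

# Sketch of concurrent gradient gathering with asyncio. fetch_gradient()
# simulates a network round-trip and is a hypothetical placeholder, not
# part of Templar's real communication system.

async def fetch_gradient(peer: str) -> list[float]:
    """Simulate fetching a (decompressed) gradient from one peer."""
    await asyncio.sleep(random.uniform(0.0, 0.01))  # stand-in for network latency
    return [random.gauss(0.0, 1.0) for _ in range(4)]

async def gather_gradients(peers: list[str]) -> list[float]:
    """Fetch from all peers concurrently, then average element-wise."""
    grads = await asyncio.gather(*(fetch_gradient(p) for p in peers))
    return [sum(col) / len(grads) for col in zip(*grads)]

avg = asyncio.run(gather_gradients(["peer-1", "peer-2", "peer-3"]))
```

Launching all fetches with `asyncio.gather` means the slowest peer, rather than the sum of all round-trips, bounds each exchange round.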
Templar has undertaken several significant projects to demonstrate and validate its technology for large-scale, distributed AI training. [3]
This project served as an early proof-of-concept, involving the training of a 1.2 billion parameter Large Language Model. It was the first major demonstration of the Gauntlet incentive system being deployed on the Bittensor blockchain to orchestrate a training effort with completely permissionless contributions from a global network of participants. The project successfully validated the use of token-based incentives for organizing distributed AI training. [3]
The pre-training of Covenant-72B, a 72-billion parameter LLM, was described by the project as the largest collaborative and globally distributed pre-training run of its kind. This project showcased the SparseLoCo algorithm at a massive scale, enabling open and permissionless participation from contributors worldwide. The training was managed by a live blockchain protocol and was conducted on a dataset of approximately 1.1 trillion tokens. [3]
The Templar ecosystem appears to involve two distinct tokens, based on information from different sources: the main project token, TPLR, and a subnet-specific token, SN3. [1] [4]
The TPLR token is presented as the primary token for the Templar protocol.
Ticker: TPLR
Maximum Supply: 1,000,000,000 TPLR
The information in this subsection is based on the project's official website. [1]
The SN3 token is specific to Templar's operations as Subnet 3 within the Bittensor ecosystem and is primarily traded on decentralized exchanges native to that environment.
Ticker: SN3
Asset Type: Bittensor Subnet Token
Max Supply: 21,000,000 SN3
Circulating Supply (as of April 10, 2026): 4,268,617 SN3
Distribution: The SN3 token followed a "fair launch" model with no pre-mine or venture capital allocation, primarily distributed to early participants.
The information in this subsection is based on data provided by CoinGecko. [4]
The development team behind Templar operates under pseudonyms. The project is led by a lead developer known as 'Helios,' who is responsible for the protocol's architecture. The research division is headed by a figure identified as 'Aethel' on the project's website and as 'Aethelred' in other ecosystem data sources; this individual focuses on AI training and validation methodologies. [1] [4]
The centralization of AI training poses a systemic risk to innovation. τemplar aims to democratize access to hyperscale AI by creating a transparent, permissionless, and incentivized market for computation. — From the Templar Whitepaper [1]
Every GPU that joins τemplar is a vote for an open and decentralized AI future. We are not just building a network; we are forging a collective intelligence. — 'Helios', Lead Developer [1]