Samuel "Sam" B. Dare is an artificial intelligence (AI) researcher, entrepreneur, and prominent advocate for decentralized AI development. He is the founder of Covenant AI and Templar, affiliated organizations focused on creating open-source, decentralized foundation models and the infrastructure to support them. Dare is known for his work within the Bittensor ecosystem, his contributions to decentralized training protocols, and his philosophical arguments concerning the "political economy of foundation models," which posit that decentralization is a necessary counterweight to the concentration of power in the hands of large technology corporations.
Sam Dare graduated from the Saïd Business School at the University of Oxford in 2018 and holds numerous professional certifications in finance, investment, and blockchain technology; he has passed Level 2 of the CFA Program and Level 1 of the Financial Risk Manager (FRM) examination, and holds the Certified Bitcoin Professional designation [2].
Dare's career is centered on the development and promotion of decentralized artificial intelligence. He is described as a "blockchain veteran" with a background that merges experience in enterprise software with decentralized systems [4]. After a period as an independent researcher focusing on mechanism design for distributed systems and the political economy of AI, he began formalizing his work through a series of ventures [5].
In September 2023, Dare founded Templar, an AI research and development lab focused on studying the geopolitical and economic power structures surrounding foundation models [3]. This was followed in January 2024 by the founding of Covenant AI, which acts as the operational arm to Templar's research, coordinating the practical development of open-source, decentralized AI models [3] [2]. These entities, often referred to interchangeably as Templar AI, Templar Covenant, or Covenant.ai, are at the core of Dare's work as a prominent builder within the Bittensor decentralized AI network. Some accounts list him as having previously worked as an AI Researcher at the Defense Advanced Research Projects Agency (DARPA) and as a Postdoctoral Researcher at the University of Cambridge [3].
Dare's work primarily involves the creation of protocols, models, and networks designed to make the development of advanced AI accessible and permissionless.
Templar and Covenant are the two main entities founded by Dare to pursue his vision of decentralized AI.
Under Dare's leadership, Covenant AI successfully coordinated the training of Covenant-72B, a 72-billion parameter large language model (LLM). The project was publicized as the "largest decentralized LLM training run" at the time of its completion. The training was distributed across a network of approximately 160 GPUs operated by 20 anonymous peers, leveraging the Bittensor network among other resources. The project served as a significant proof-of-concept, demonstrating that state-of-the-art AI models could be trained without exclusive reliance on centralized hyperscale data centers controlled by large corporations [6] [5].
A core component of Dare's work is the design of economic incentives to coordinate permissionless network participants.
Dare is the architect of the Templar Training Protocol, designed to manage the complexities of decentralized model training across standard internet connections [5]. A key innovation within this work is the "Gauntlet" incentive system, detailed in a May 2025 research paper co-authored by Dare. Gauntlet is a system for coordinating and verifying the contributions of anonymous participants in a distributed LLM training process.
The system uses a two-stage evaluation process in which network peers assess each other's computational work. It leverages the OpenSkill rating algorithm to estimate the reliability of each node, together with an optimizer (DeMo) suited to asynchronous distributed environments. The viability of the Gauntlet system was demonstrated through the successful training of Templar-1B, a 1.2-billion-parameter LLM that served as a proof-of-concept [1].
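The two-stage pattern described above can be sketched in miniature. The code below is an illustrative assumption, not the Gauntlet implementation: it substitutes a simple rank-toward-target rating update for the actual OpenSkill algorithm, and the names (`Peer`, `evaluate_round`), the starting rating, and the scoring scale are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    """A training participant with a scalar reliability rating."""
    peer_id: str
    rating: float = 25.0  # illustrative starting value, loosely echoing OpenSkill's default mu

def evaluate_round(peers, loss_reductions, k=0.3):
    """Simplified two-stage evaluation sketch:
    Stage 1: rank peers by the measured quality of their submitted work
             (here, the loss reduction attributed to each contribution).
    Stage 2: nudge each peer's rating toward a rank-implied target score.
    """
    # Stage 1: best measured contribution first.
    ranked = sorted(peers, key=lambda p: loss_reductions[p.peer_id], reverse=True)
    n = len(ranked)
    # Stage 2: move ratings part of the way (step size k) toward the target.
    for rank, peer in enumerate(ranked):
        target = 50.0 * (n - 1 - rank) / max(n - 1, 1)  # top rank -> 50, bottom -> 0
        peer.rating += k * (target - peer.rating)
    return ranked
```

Over repeated rounds, consistently strong contributors accumulate higher ratings, which a coordinator could then use to weight rewards or filter unreliable nodes.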
Dare and his team at Templar AI create and operate foundational subnets on the Bittensor network, which are specialized networks with their own incentive mechanisms.
Dare is a vocal proponent of a specific philosophical vision for AI, which he articulates through writings, social media, and public appearances. His arguments center on the distribution of power over information technology.
Dare's core thesis, outlined in writings such as "The Political Economy of Foundation Models," is that an "AI oligopoly" is forming due to the immense concentration of capital and computational resources required to train foundation models [5] [4]. He is critical of the dominant role played by large technology corporations, questioning their suitability as stewards of foundational AI. In a podcast appearance, he stated, "Mark Zuckerberg stole democracy. Let us never forget that. Google has committed a litany of privacy and human rights violations. Are these the people you want to give Prometheus's fire to?" [5].
He argues that true control over AI lies not with those who use or fine-tune models ("consumption"), but with those who can train them from scratch ("creation"). He considers reliance on open-weight models released by centralized companies a strategic vulnerability, stating, "The problem with 'open-weight' releases from centralized entities is that their commercial interests may change and the soil can be salted at any time, breaking the supply chain for every single developer downstream" [6].
Dare promotes the concept of "sovereign AI," where communities, organizations, and nations can pool their resources to build AI models aligned with their own cultural values and economic interests. This provides an alternative to technological dependence on a few, primarily U.S.-based, providers. In commentary for Mint, he identified India as a nation well-suited to this approach: "With a rapidly growing developer ecosystem and strong government support, India is uniquely positioned to cultivate sovereign AI capabilities" [7] [5].
Dare frames the push for decentralized technology within a historical context he calls the "Iterations vs. Themes" framework. In this view, the overarching "theme" is the persistent goal of distributing power over information technology. Specific technological movements—like the early internet or peer-to-peer file sharing—are "iterations" in service of this theme. He argues that while individual iterations can fail, the underlying theme of decentralization endures and eventually finds a successful form. He described the current effort as critical, stating, "Decentralized training is, I think, humanity's last stand... Because an iteration fails, iterations can fail, but ultimately the theme survives" [5].
Dare maintains an active public presence to communicate project progress and share his views on the AI industry. He uses the X (formerly Twitter) handle @DistStateAndMe to provide regular, detailed updates, including weekly "TGIF" (Thank God It's Friday) community roundups about projects like Templar and Grail [4]. He has provided expert commentary for media outlets like Mint on the global AI market. As of April 2026, he is scheduled to be a featured guest and speaker at the 2027 Bittensor Subnet Ideathon during the Sankalp Africa Summit in Nairobi, Kenya [4] [7].
In an interview published on November 17, 2025, on the YouTube channel Hash Rate Podcast (Episode 145), Sam Dare discussed the structure and intended function of Covenant’s subnet system, consisting of Templar (3), Grail (39), and Basilica (81).
Dare describes Covenant as an initiative focused on distributing different stages of AI model development across separate subnets. In his explanation, Templar is associated with pre-training processes using geographically distributed compute resources, Grail is associated with post-training activities such as model refinement, and Basilica is designed to coordinate the allocation of compute through an incentive-based system.
He states that this structure is intended to operate as an alternative to centralized AI training environments. According to his account, the use of distributed GPU networks can reduce training costs while maintaining comparable processing performance, despite challenges related to coordination and latency.
Dare indicates that models trained through Templar currently correspond to mid-range performance levels when compared to centralized systems, estimating them at approximately 60% of leading model capabilities. He notes that further improvements involve increasing complexity and resource requirements.
He also explains that Basilica applies an incentive mechanism in which participants are evaluated based on the efficiency and quality of compute provided, rather than on availability alone. This model is described as a method for allocating resources within the network.
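A scoring rule of the kind Dare attributes to Basilica, weighting delivered efficiency and work quality heavily and raw availability only lightly, might look like the following sketch. The function name, the inputs, and all weights are illustrative assumptions, not details from the interview.

```python
def compute_score(throughput_tflops, reference_tflops, error_rate,
                  uptime_fraction, w_eff=0.5, w_quality=0.4, w_avail=0.1):
    """Hypothetical allocation score for a compute provider.

    efficiency: delivered throughput relative to a reference, capped at 1.0
    quality:    fraction of work completed without errors
    uptime:     availability, deliberately given the smallest weight
    """
    efficiency = min(throughput_tflops / reference_tflops, 1.0)
    quality = max(0.0, 1.0 - error_rate)
    return w_eff * efficiency + w_quality * quality + w_avail * uptime_fraction
```

Under these weights, a fast and accurate node that is online 70% of the time outscores a slow, error-prone node with perfect uptime, matching the stated emphasis on efficiency and quality over availability alone.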
The interview includes references to potential use cases involving organizations such as companies, public institutions, and academic groups, which, according to Dare, could utilize such infrastructure for repeated model training. He presents this as a change in how access to AI training resources may be structured.
The discussion also references token-related mechanisms within the Bittensor ecosystem, including plans to integrate value flows across the different subnets [8].
In an interview published on June 4, 2025, on the YouTube channel Ventura Labs, Samuel Dare discussed his involvement with Templar (Subnet 3) within the Bittensor network and described his views on decentralized AI training.
Dare states that Templar operates as a permissionless platform focused on the decentralized pretraining of large-scale machine learning models. He presents decentralized AI development as an alternative to systems associated with large technology companies, referencing Google as a primary point of comparison, while distinguishing it from organizations such as OpenAI.
He describes a shift from earlier work in blockchain toward a focus on decentralized AI systems. In this context, Templar is presented as a network in which participants, referred to as miners, contribute computational resources to model training and are evaluated through competitive mechanisms. Dare notes that the design of such systems depends on incentive structures that encourage consistent participation and limit adversarial behavior.
Regarding technical aspects, Dare identifies challenges related to scaling distributed training processes, including communication overhead and coordination between nodes. He references the use of gradient compression methods and synchronous training approaches to address bandwidth and efficiency constraints. He also discusses the role of hardware infrastructure, including the potential relevance of open-source hardware in relation to proprietary systems used by large-scale AI providers.
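The interview mentions gradient compression only in general terms; one widely used technique in this family is top-k sparsification, sketched below as an assumption about the kind of method meant. Each node transmits only the k largest-magnitude gradient entries as (index, value) pairs, sharply reducing the bandwidth per synchronization step.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector,
    returning (indices, values) in place of the dense array."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    """Rebuild a dense gradient with zeros everywhere except the kept entries."""
    dense = np.zeros(size, dtype=vals.dtype)
    dense[idx] = vals
    return dense
```

Practical systems typically pair this with error feedback, accumulating the discarded residual locally and adding it back before the next compression step, so that small gradient components are not lost permanently.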
Dare also outlines a model in which contributors may hold a form of stake in trained models through token-based structures. He describes Templar as a system intended to distribute control and participation across its network rather than concentrating ownership.
The interview presents Dare’s perspective that decentralized AI training systems may function as an alternative organizational model for developing machine learning infrastructure, with an emphasis on distributed participation and incentive-based coordination [9].