Cocoon

Cocoon (Confidential Compute Open Network) is a decentralized artificial intelligence (AI) computing network developed and launched by Telegram founder Pavel Durov. [1] [2] It functions as a two-sided marketplace built on The Open Network (TON) blockchain, designed to connect owners of Graphics Processing Units (GPUs) with developers who require computational power for AI tasks. The network's core feature is its use of confidential computing technology to ensure that AI models and user data remain private and secure during processing. [3]

Overview

Cocoon was created to provide a decentralized, privacy-preserving, and economically efficient alternative to centralized cloud computing services offered by major technology corporations such as Amazon Web Services and Microsoft Azure. [4] The project's vision, articulated by Telegram founder Pavel Durov, is to democratize access to AI computation and prevent the centralization of AI power within a few companies. [5] It aims to address what its proponents describe as the high costs and data privacy vulnerabilities associated with traditional AI cloud providers. [6]

The network operates as a Decentralized Physical Infrastructure Network (DePIN), in which participants contribute hardware in exchange for cryptocurrency rewards. GPU owners can connect their compatible hardware to the network and earn Toncoin (TON) for processing AI inference jobs. [7] Developers, in turn, gain access to a global, distributed pool of computational resources to run their AI applications at prices determined by an open market. Crucially, the use of confidential computing is intended to guarantee that the GPU owner providing the hardware cannot access the developer's proprietary AI model or the end-user's data. [3]

Telegram acts as the project's strategic initiator and first major client, intending to leverage its large user base to generate initial demand for the network. This integration is designed to solve the "cold start" problem many decentralized networks face and to power new, privacy-focused AI features within the Telegram application. [2] The project's source code is open-source and hosted on GitHub under the official Telegram Messenger organization. [8]

History

The technological concepts underpinning Cocoon were first publicly discussed in April 2024. During the TON DEV CONF DUBAI '24 conference, a developer from Telegram delivered a technical presentation titled "Confidential AI Compute on TON," which previewed the core architecture of running AI models in a secure, verifiable manner on the TON blockchain. [7]

The following video is the presentation from the TON DEV CONF DUBAI '24 event, where the core technical concepts of Cocoon were first introduced.

On November 21, 2024, Telegram founder Pavel Durov officially announced the Cocoon project through a post on the social media platform X (formerly Twitter). [7] Following this, Durov presented the vision for Cocoon in more detail during his keynote address at a conference in Dubai on October 29, 2025. He framed Cocoon as a critical step toward a more open and decentralized AI landscape and confirmed that Telegram would be the network's first major client. [9]

On November 30, 2025, Durov announced that the Cocoon network was live and had begun processing its first AI requests from users. He confirmed that GPU owners on the network were already earning TON tokens for their contributions. [4] On the same day, the project's initial source code was published in the TelegramMessenger/cocoon GitHub repository, released under the Apache-2.0 License. The initial roadmap following the launch focused on scaling the network by onboarding more GPU providers and attracting developer demand. [6] [8]

Technical Architecture

Cocoon's architecture is built on three pillars: confidential computing for privacy, a decentralized network model for infrastructure, and integration with the TON blockchain for payments and coordination. The system is generally composed of three main parties: the Client (a developer or user submitting a request), the Proxy (an intermediary that matches requests to workers), and the Worker (the GPU server executing the task). [6]

Confidential Computing

The core of Cocoon's privacy promise is its mandatory use of confidential computing, which leverages hardware-based Trusted Execution Environments (TEEs) to isolate and protect data and code during processing. [3]

Intel® Trust Domain Extensions (TDX)

Cocoon's worker nodes run AI models inside "confidential virtual machines" or "TDX guests," which are hardware-isolated environments created by Intel TDX technology. This prevents the host server operator, including the GPU owner, from inspecting or tampering with the AI model or user data being processed within the virtual machine. [7]

Intel® Software Guard Extensions (SGX)

The network also utilizes Intel SGX for a component called the seal-server. This component runs on the host machine and uses an SGX enclave—a protected area of memory—to securely derive, manage, and "seal" cryptographic keys for the TDX guest. This ensures that the worker's identity and keys are tied to its specific hardware and software state and can persist across reboots without being exposed to the host system. [7]
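The binding described above can be illustrated with a short sketch. This is a conceptual stand-in only: real SGX sealing derives keys from CPU-fused secrets and enclave measurements inside hardware, and uses authenticated encryption rather than the XOR stream shown here. All names (`derive_sealing_key`, `seal`, the sample secrets) are invented for illustration.

```python
import hashlib
import hmac

def derive_sealing_key(hardware_secret: bytes, measurement: bytes) -> bytes:
    """Conceptual stand-in for SGX key derivation: the sealing key is bound
    to both the hardware identity and the enclave measurement, so a
    different host or a modified enclave derives a different key."""
    return hmac.new(hardware_secret, measurement, hashlib.sha256).digest()

def seal(key: bytes, data: bytes) -> bytes:
    # XOR with a key-derived stream keeps the sketch dependency-free;
    # real sealing uses authenticated encryption (e.g. AES-GCM).
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

hw = b"per-cpu-fused-secret"                      # hypothetical hardware root
measurement = hashlib.sha256(b"seal-server-v1").digest()
k1 = derive_sealing_key(hw, measurement)
worker_key = b"worker-tls-private-key"
sealed = seal(k1, worker_key)
# Same hardware + same software state -> same key -> unsealing succeeds.
assert seal(k1, sealed) == worker_key
# A modified enclave derives a different key, so unsealing fails.
k2 = derive_sealing_key(hw, hashlib.sha256(b"tampered").digest())
assert seal(k2, sealed) != worker_key
```

The point of the sketch is the persistence property from the text: because the key is a deterministic function of hardware and software state, sealed secrets survive reboots without ever being exposed to the host.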

NVIDIA Confidential Computing (CC)

To extend security to the GPU itself, Cocoon requires NVIDIA GPUs that support Confidential Computing, such as those from the H100 series and newer. This technology protects data while it is in use on the GPU, enabling hardware-accelerated AI inference while maintaining the security perimeter of the TEE. [7]

Remote Attestation

Before a client sends any sensitive data, it performs remote attestation to cryptographically verify the integrity of the remote worker node. This process ensures the client is communicating with a genuine Cocoon worker running the correct software inside a legitimate Intel TDX environment. Cocoon implements this via RA-TLS (Remote Attestation over Transport Layer Security), which integrates the attestation evidence into the TLS handshake to establish a secure, end-to-end encrypted channel. [3]
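A minimal sketch of the client-side decision, assuming the registry publishes an allow-list of approved image hashes: accept the worker only if its attested measurement appears on that list. A real RA-TLS handshake additionally verifies the quote's signature chain back to the hardware vendor and binds the TLS key to the quote; the names and hash values below are invented for illustration.

```python
import hashlib

# Hypothetical allow-list of approved TDX guest image hashes, as published
# in the on-chain registry (values invented for this sketch).
TRUSTED_IMAGE_HASHES = {hashlib.sha256(b"cocoon-worker-image-v1").hexdigest()}

def verify_attestation(quote: dict) -> bool:
    """Accept the worker only if its attested measurement matches a
    registry-approved image hash; otherwise refuse to send any data."""
    return quote.get("measurement") in TRUSTED_IMAGE_HASHES

good = {"measurement": hashlib.sha256(b"cocoon-worker-image-v1").hexdigest()}
bad = {"measurement": hashlib.sha256(b"modified-image").hexdigest()}
assert verify_attestation(good)
assert not verify_attestation(bad)
```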

Blockchain Integration

The Open Network (TON) serves as the trustless economic and coordination backbone for Cocoon.

  • Payments: Developers reward GPU providers with Toncoin (TON) for successfully completed AI inference jobs. These transactions are managed by smart contracts on the TON blockchain, creating a global and transparent payment system without a central intermediary.
  • Cocoon Root Contract: A primary smart contract on TON acts as the network's central registry. It maintains a list of verified TDX guest image hashes and supported AI model hashes, stores network proxy addresses, and governs the trustless payment system. [7]
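The root contract's role as a registry can be modeled with a toy in-memory sketch. The field and method names below are invented for illustration and do not reflect the contract's actual on-chain schema.

```python
from dataclasses import dataclass, field

@dataclass
class RootContractModel:
    """Toy in-memory model of the registry state the root contract keeps:
    approved TDX guest image hashes, approved model hashes, and proxies."""
    trusted_images: set = field(default_factory=set)
    supported_models: set = field(default_factory=set)
    proxies: list = field(default_factory=list)

    def register_proxy(self, address: str) -> None:
        self.proxies.append(address)

    def is_valid_worker(self, image_hash: str, model_hash: str) -> bool:
        # A worker is acceptable only if both its guest image and the
        # model it serves appear in the on-chain allow-lists.
        return (image_hash in self.trusted_images
                and model_hash in self.supported_models)

reg = RootContractModel()
reg.trusted_images.add("img-abc")          # placeholder hash values
reg.supported_models.add("model-xyz")
reg.register_proxy("proxy-1")
assert reg.is_valid_worker("img-abc", "model-xyz")
assert not reg.is_valid_worker("img-unknown", "model-xyz")
```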

System Components

  • Cocoon Worker: The core software that runs inside the TDX guest VM and executes AI inference tasks on the GPU.
  • Seal Server: A mandatory process running on the host machine that uses an SGX enclave to manage cryptographic keys for the worker.
  • Health Client (health-client): A command-line utility for GPU owners to monitor the status, performance, and logs of their worker node. [7]
  • Codebase: The project is primarily written in C++ (77.3%), with smaller parts in CMake, Python, Shell, C, and Go. [8]

Ecosystem Participants

Cocoon's marketplace is designed around the interactions of several key groups.

GPU Owners

GPU owners are the supply side of the network. By contributing their computational resources, they earn rewards in TON. To participate, they must have a server with specific hardware, including an Intel CPU with TDX support and an NVIDIA GPU with Confidential Computing support. The setup involves enabling security features in the BIOS, downloading the official Cocoon distribution, and running the seal-server and cocoon-launch scripts. Owners can set a price multiplier for their services via a configuration file. [7]
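The effect of the price multiplier can be sketched as a simple pricing function. The parameter names and the per-token pricing model are assumptions for illustration; the actual configuration keys and pricing units are defined by the Cocoon distribution.

```python
def job_price(base_rate_ton: float, tokens: int, multiplier: float) -> float:
    """Hypothetical pricing: a base network rate per 1,000 tokens, scaled
    by the owner-configured multiplier from the worker's config file."""
    return base_rate_ton * (tokens / 1000) * multiplier

# An owner charging a 20% premium over a base rate of 0.05 TON / 1k tokens:
price = job_price(base_rate_ton=0.05, tokens=2000, multiplier=1.2)
assert abs(price - 0.12) < 1e-9
```

Because each owner sets their own multiplier, the network's overall price level emerges from competition among providers rather than from a centrally fixed tariff.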

Developers

Developers are the demand side of the network. They integrate Cocoon to run AI models for their applications, benefiting from lower costs set by a competitive market and the ability to offer privacy-centric AI features. The workflow for a developer involves discovering a worker node via the on-chain registry, performing remote attestation, establishing a secure RA-TLS channel, sending encrypted inputs for inference, and receiving encrypted results, with payment handled automatically by TON smart contracts. [3]
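The five-step workflow above can be sketched end to end with stub classes standing in for the real registry, worker, and channel objects. All class and method names here are hypothetical; the actual client API lives in the TelegramMessenger/cocoon repository.

```python
class StubChannel:
    """Stands in for an established RA-TLS channel to a worker."""
    def send(self, prompt: str) -> str:
        return f"echo:{prompt}"  # a real worker would return model output

class StubWorker:
    def attest(self) -> bool:
        return True  # a real client verifies the TDX quote here
    def open_ra_tls_channel(self) -> StubChannel:
        return StubChannel()

class StubRegistry:
    def discover_worker(self) -> StubWorker:
        return StubWorker()  # a real client reads the on-chain registry

def run_inference(registry, prompt: str) -> str:
    worker = registry.discover_worker()       # 1. discover a worker on-chain
    if not worker.attest():                   # 2. remote attestation
        raise RuntimeError("attestation failed")
    channel = worker.open_ra_tls_channel()    # 3. secure RA-TLS channel
    return channel.send(prompt)               # 4. encrypted inference
    # 5. payment settles via TON smart contracts, outside this sketch

assert run_inference(StubRegistry(), "hi") == "echo:hi"
```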

End-Users

End-users of applications built on Cocoon, including future AI features in Telegram, are the ultimate beneficiaries of the network's privacy model. They can interact with advanced AI tools with the assurance that their data and queries are kept confidential from all third parties, including the hardware operators. [6]
