Cocoon by Telegram

December 1, 2025
Cocoon connects GPU power, AI, and Telegram’s vast ecosystem – all built on privacy and blockchain.
cocoon.org

Overview

As artificial intelligence capabilities expand, access to computational resources remains concentrated among a handful of technology corporations. Cocoon, formally named the Confidential Compute Open Network, introduces a decentralized alternative built on the TON blockchain and integrated directly with Telegram. Launched in November 2025 by Telegram founder Pavel Durov at Blockchain Life 2025, Cocoon creates a marketplace connecting GPU hardware owners with developers and applications requiring private AI computation. The platform emphasizes confidential computing, ensuring that data processed through the network remains encrypted and invisible even to the hardware providers executing the workloads.

Key Features

Cocoon operates through several interconnected mechanisms designed to create a self-sustaining ecosystem for private AI computation:

GPU Contribution for TON Rewards: Hardware owners can connect their GPUs to the network and earn Toncoin (TON) for processing AI inference requests. Unlike traditional cryptocurrency mining, this involves executing actual AI workloads within secure environments rather than solving cryptographic puzzles.

Affordable AI Compute Access: Developers gain access to distributed GPU resources at prices intended to undercut centralized cloud providers such as Amazon Web Services and Microsoft Azure. By cutting out intermediaries, the decentralized model can lower the cost of deploying AI applications.

Native TON and Telegram Integration: Built directly on the TON blockchain, Cocoon leverages Telegram’s ecosystem of over one billion users as both a distribution channel and an initial demand source. Telegram serves as the platform’s first major customer, with AI features like message translation already utilizing Cocoon infrastructure.

Confidential Computing Architecture: The network employs Trusted Execution Environments (TEEs), specifically Intel Trust Domain Extensions (TDX), to create hardware-isolated processing environments. This ensures that neither the GPU provider nor any network intermediary can access the data or model being processed; a conceptual sketch of this trust model follows the list below.

Supported AI Models: The platform currently supports open-source models including DeepSeek and Qwen, with infrastructure designed to accommodate various AI inference workloads.
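
The confidential computing guarantee above rests on remote attestation: before releasing any data, a client checks a hardware-signed measurement proving that the worker is running the expected code inside a TDX domain. Cocoon's client API is not documented here, so the sketch below is purely conceptual; every name is illustrative, and an HMAC stands in for the certificate-chain verification used with real TDX quotes.

```python
# Conceptual sketch of the TEE trust model Cocoon relies on. The actual
# Cocoon client API is not documented in this article, so all names here
# are illustrative: a client only releases data to a worker whose
# attestation quote proves it runs the expected code inside a TDX domain.
import hashlib
import hmac

# Measurement (hash of the VM image / model runner) the client expects.
EXPECTED_MEASUREMENT = hashlib.sha384(b"cocoon-worker-image-v1").hexdigest()

def verify_attestation(quote: dict, signing_key: bytes) -> bool:
    """Check that the quote is authentically signed and reports the
    expected workload measurement. Real TDX quotes are verified against
    Intel's certificate chain; an HMAC stands in for that here."""
    expected_sig = hmac.new(
        signing_key, quote["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False  # quote was not produced by the hardware root of trust
    return quote["measurement"] == EXPECTED_MEASUREMENT

# A well-formed quote from a worker node (illustrative values).
key = b"hardware-root-of-trust"
quote = {
    "measurement": EXPECTED_MEASUREMENT,
    "signature": hmac.new(
        key, EXPECTED_MEASUREMENT.encode(), hashlib.sha256
    ).hexdigest(),
}

if verify_attestation(quote, key):
    print("Attestation OK: safe to send encrypted prompt to this worker")
else:
    print("Attestation failed: refuse to send data")
```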

Technical Operation

Cocoon functions through a three-component architecture: clients initiate requests and pay fees, proxies route requests to appropriate worker nodes based on hardware capability and reputation, and worker nodes execute AI tasks within TEE-protected environments.

When a developer submits an AI inference request, the proxy selects a suitable worker node running the Cocoon protocol stack on TEE-enabled hardware. The request is processed inside a confidential virtual machine created by Intel TDX technology, preventing the host server operator from inspecting or tampering with the data. Upon completion, the worker receives TON payment through smart contracts that verify task completion.
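
Cocoon's wire protocol and contract interfaces are not spelled out in the material referenced here, but the flow just described can be modeled in a few lines. The sketch below is an assumption-laden illustration of the three roles; the types, the selection rule, and the settlement function are invented for clarity.

```python
# Illustrative model of Cocoon's three-role flow (client -> proxy -> worker).
# The real protocol and smart-contract interfaces are not specified in this
# article; all types and selection rules here are assumptions.
from dataclasses import dataclass

@dataclass
class Worker:
    node_id: str
    gpu: str            # e.g. "H100"
    reputation: float   # proxy-tracked score in [0, 1]
    busy: bool

@dataclass
class InferenceRequest:
    client_id: str
    model: str          # e.g. "deepseek" or "qwen"
    fee_ton: float      # fee escrowed in TON before dispatch

def route(request: InferenceRequest, workers: list[Worker]) -> Worker | None:
    """Proxy step: pick the idle worker with the best reputation. A real
    proxy would also match hardware capability to the requested model and
    verify the worker's TDX attestation before dispatch."""
    candidates = [w for w in workers if not w.busy]
    return max(candidates, key=lambda w: w.reputation, default=None)

def settle(request: InferenceRequest, worker: Worker) -> None:
    """Settlement step: in Cocoon this is a TON smart contract that releases
    the escrowed fee on verified task completion; printed here instead."""
    print(f"{request.fee_ton} TON -> worker {worker.node_id}")

workers = [
    Worker("w1", gpu="H100", reputation=0.92, busy=False),
    Worker("w2", gpu="H100", reputation=0.88, busy=True),
]
req = InferenceRequest(client_id="c1", model="qwen", fee_ton=0.5)

chosen = route(req, workers)
if chosen:
    # The worker executes the request inside a TDX confidential VM,
    # then the contract pays out once completion is verified.
    settle(req, chosen)
```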

Hardware requirements for GPU providers include Linux servers with Intel TDX-capable CPUs, NVIDIA GPUs with confidential computing support (H100 or newer), and specific software stack configurations. This technical barrier ensures that only properly configured hardware can participate in the network.
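
A prospective provider could sanity-check these prerequisites locally before attempting to join. The sketch below is a rough preflight, not official Cocoon tooling; the kernel flag pattern and the list of confidential-computing-capable GPU models are assumptions that vary by kernel and driver version.

```python
# Rough local preflight for would-be Cocoon GPU providers. Exact flag and
# tool behavior varies by kernel and driver version, so treat these checks
# as assumptions, not the project's official requirements tooling.
import re
import shutil
import subprocess

def cpu_reports_tdx() -> bool:
    """Look for a TDX-related flag in /proc/cpuinfo. Kernels differ in how
    (and whether) they expose host-side TDX support here."""
    try:
        with open("/proc/cpuinfo") as f:
            return bool(re.search(r"\btdx\w*\b", f.read()))
    except OSError:
        return False

def gpu_supports_confidential_compute() -> bool:
    """Check the GPU name via nvidia-smi against models known to offer
    NVIDIA confidential computing (Hopper generation and newer)."""
    if not shutil.which("nvidia-smi"):
        return False
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout
    return any(model in out for model in ("H100", "H200", "B200"))

if __name__ == "__main__":
    print("Intel TDX visible to kernel:", cpu_reports_tdx())
    print("Confidential-compute GPU:   ", gpu_supports_confidential_compute())
```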

Practical Applications

The network’s focus on confidential AI computation enables several distinct use cases:

Private AI Inference: Organizations requiring AI capabilities without exposing sensitive data to third parties can route requests through Cocoon. Healthcare, legal, and financial applications may benefit from processing proprietary information without centralized cloud exposure.

Telegram AI Features: Native integration means Telegram can offer AI-powered features like translation and summarization while maintaining user privacy claims. The messaging platform’s scale provides immediate demand for network capacity.

Passive Income for GPU Owners: Individuals with capable hardware can monetize idle GPU capacity by running the Cocoon worker software. Earnings depend on network demand, hardware performance, and uptime.

Developer Cost Reduction: Startups and independent developers can access AI inference capabilities without committing to expensive cloud contracts or purchasing dedicated hardware.

Strengths and Limitations

Advantages

Privacy-First Architecture: The mandatory use of TEEs provides hardware-level guarantees that processed data cannot be accessed by infrastructure operators, addressing a significant concern with centralized AI services.

Telegram Distribution Channel: Integration with one of the world’s largest messaging platforms provides immediate access to potential users and creates organic demand from Telegram’s own AI feature requirements.

No Additional Token: Unlike many blockchain projects, Cocoon does not introduce a new cryptocurrency, instead using the established TON token for all transactions.

Limitations

Early Stage Development: As of December 2025, the network reported 4,487 TON in total value locked, 30 worker nodes, 18 proxies, and 12 clients. This is an initial deployment rather than mature infrastructure.

Hardware Requirements: Participation as a GPU provider requires specific Intel TDX-capable hardware and NVIDIA GPUs with confidential computing support (H100 series or newer), limiting the pool of potential contributors.

AI Inference Focus: The platform currently emphasizes inference workloads rather than model training. Organizations requiring distributed training capabilities may need to look elsewhere.

Network Availability: Available compute capacity depends entirely on active GPU providers. During periods of low participation, developers may experience limited resources or increased costs.

How Does It Compare?

The decentralized compute landscape includes several established platforms with different technical approaches and market positions:

Render Network: Focuses primarily on GPU rendering for 3D graphics, animation, and visual effects production. The platform supports creative workflows with OctaneRender, Redshift, and Blender Cycles integration, alongside recent expansion into generative AI tools. Render Network has processed over 750,000 jobs totaling 40 million rendered frames, targeting professional creative production rather than general AI inference.

io.net: Operates a decentralized GPU network with over 320,000 verified GPUs and nearly 80,000 CPUs available for AI and machine learning workloads. The platform emphasizes enterprise-grade deployment through IO Cloud, supporting Ray framework for distributed computing. io.net positions itself as a direct competitor to centralized cloud providers with claims of up to 90% cost reduction.

Akash Network: Provides general-purpose decentralized cloud computing with particular strength in GPU resources for AI workloads. The Akash Supercloud includes over 700 high-performance NVIDIA GPUs (388 H100s and 123 A100s as of late 2024), with partnerships including Brev.dev (acquired by NVIDIA), Venice.ai, and Prime Intellect. The network operates as an open marketplace with provider-set pricing.

Bittensor (TAO): Takes a different approach by creating a decentralized AI marketplace where participants are rewarded for contributing and validating machine learning models rather than raw compute. The network operates through specialized subnets focused on different AI capabilities, with TAO tokens serving both as rewards and access credentials.

Aethir: Targets enterprise-scale GPU cloud computing for AI and gaming with a network of 91,000+ community-owned Checker Nodes and over 360,000 GPUs. The platform emphasizes enterprise clients with specific focus on AI inference, large language model operations, and cloud gaming infrastructure.

Cocoon distinguishes itself through three primary characteristics: mandatory confidential computing using Intel TDX for all workloads, native integration with the Telegram ecosystem providing immediate demand and distribution, and exclusive use of the TON blockchain for settlements. While competitors offer larger resource pools or broader use cases, Cocoon’s privacy guarantees may appeal to applications where data confidentiality represents a primary concern.

Conclusion

Cocoon represents Telegram’s entry into decentralized AI infrastructure, combining TON blockchain settlement with Intel’s confidential computing technology. The platform addresses growing concerns about data privacy in AI services by ensuring that processed information remains encrypted throughout execution. For developers seeking privacy-preserving AI compute and GPU owners looking to monetize capable hardware, Cocoon offers a distinct proposition within the broader DePIN ecosystem.

The project remains in early deployment with limited network scale, making it more suitable for privacy-focused applications and early adopters than organizations requiring guaranteed capacity at scale. As Telegram integrates more AI features and the network attracts additional GPU providers, the platform’s practical utility will depend on whether confidential computing demand grows to match the infrastructure investment required for participation.
