Gonka optimizes PoC mechanism: activation time compressed to 5 seconds, tiered participation paths keep GPUs of every scale contributing

The Gonka decentralized AI computing power network recently announced important adjustments to its core consensus mechanism. The PoC (Proof of Compute) mechanism, the network's key method for verifying each node's genuine computing contribution, has undergone a systematic upgrade focused on activation efficiency, model operation modes, and computing power weight calculation. The goal is to put GPU resources to work on actual AI computation more efficiently.

Unified Operation of PoC and Inference, Near-Real-Time Activation Mechanism

Under the new mechanism design, Gonka unifies the model operating environment for PoC verification and inference tasks. Previously, PoC relied on a delayed switching mode that forced nodes to hop frequently between different tasks, leaving GPUs idle for significant stretches. With the improvement, activation shifts from passive, delay-based scheduling to active triggering, compressing the entire activation cycle to within 5 seconds.

This means nodes no longer need to wait through long switching windows and can bring GPUs into a working state more quickly. Co-founder David said the optimization is not aimed at maximizing short-term profit but is a necessary step as the network's computing power scales rapidly; the primary goal is to keep the network stable and secure under high load.
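To make the change concrete, here is a minimal sketch of an actively triggered activation path. The names (PoCNode, on_poc_round_start) and the single-runtime assumption are illustrative only; Gonka's actual node software and APIs are not described in the announcement.

```python
import time
import threading

ACTIVATION_BUDGET_S = 5  # announced target: PoC activation within ~5 seconds

class PoCNode:
    """Illustrative node keeping one model resident for both inference and PoC."""

    def __init__(self):
        self.model_loaded = False
        self._lock = threading.Lock()

    def ensure_model_loaded(self):
        # PoC and inference share one runtime, so there is no weight reload
        # or environment switch when a PoC round begins.
        with self._lock:
            if not self.model_loaded:
                self.model_loaded = True  # placeholder for the real one-time load

    def on_poc_round_start(self, round_id: int):
        # Active trigger: react to the round announcement immediately,
        # instead of waiting for a scheduled switching window.
        start = time.monotonic()
        self.ensure_model_loaded()
        activation_s = time.monotonic() - start
        assert activation_s < ACTIVATION_BUDGET_S, "activation exceeded the 5 s target"
        self.run_poc_workload(round_id)

    def run_poc_workload(self, round_id: int):
        pass  # placeholder for the actual proof-of-compute workload


node = PoCNode()
node.on_poc_round_start(round_id=1)
```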

Precise Alignment of Computing Power Weights with Actual Computational Costs

The Gonka team has re-evaluated the real computational cost associated with different GPU hardware and model sizes. The previous weighting system did not adequately reflect the compute differences between models: although small models have fewer parameters, their actual computational cost does not fall proportionally at the same token count. As a result, small-model nodes produced relatively high token output for the compute they actually spent, which over time could skew the network's computing power structure.

The new weight calculation scheme brings incentives closer to actual computational cost. By increasing the weight share of large models and high-performance hardware, it guides the network toward accumulating higher-density computing resources in preparation for more complex, larger-scale AI workloads. This alignment not only improves the earnings outlook for individual nodes but also steers how resources are allocated across the entire network.
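As a rough illustration of the direction of this change, the sketch below weights tokens by an estimated compute cost rather than by raw count. The FLOPs-per-token approximation (about 2 × parameters for transformer inference) and the example numbers are assumptions for illustration; Gonka's actual formula, which presumably also accounts for hardware and per-token overheads, is not disclosed in the announcement.

```python
# Illustrative only: Gonka's published weight formula is not given in this article.
# Assumption: transformer inference costs roughly 2 * N FLOPs per token
# for a model with N parameters.

def naive_weight(tokens: int) -> float:
    # Old-style weighting by raw token count: many cheap tokens from a
    # small model accumulate a disproportionate share of weight.
    return float(tokens)

def compute_aligned_weight(tokens: int, params: float) -> float:
    # Weight scaled by estimated FLOPs, so a token from a 70B-parameter
    # model counts ten times as much as a token from a 7B model.
    flops_per_token = 2.0 * params
    return tokens * flops_per_token

# A 7B node emitting 10x the tokens of a 70B node ends up with equal weight,
# because the two workloads represent (roughly) the same amount of compute.
small = compute_aligned_weight(tokens=1_000_000, params=7e9)
large = compute_aligned_weight(tokens=100_000, params=70e9)
print(naive_weight(1_000_000) / naive_weight(100_000))  # -> 10.0 under token-count weighting
print(small / large)                                    # -> 1.0 under compute-aligned weighting
```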

Diverse Participation Options for Single-Card and Small-to-Medium-Scale GPU Operators

Addressing community concerns about how small and medium-scale GPU operators can remain competitive, Gonka offers specific participation pathways. Through a mining pool collaboration mechanism, single cards and small-to-medium rigs can pool their computing power and earn more stable rewards collectively. In addition, a flexible Epoch-based participation mechanism allows nodes to join or exit dynamically according to their load conditions.

Furthermore, an independent revenue channel for inference tasks offers a supplementary mechanism for smaller computing resources. Compared to PoC verification, inference tasks have more flexible hardware requirements for each computation. Nodes can freely choose between the two channels—participating in network consensus or contributing actual AI work. Gonka emphasizes that in the future, hardware scale differences will not exclude any participants; instead, differentiated incentive designs will enable GPUs at every level to find their own place.
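A minimal sketch of how a node might choose between these channels at the start of an epoch is shown below. The thresholds and channel names are purely hypothetical assumptions for illustration; the announcement does not specify Gonka's actual eligibility rules.

```python
from dataclasses import dataclass

@dataclass
class NodeProfile:
    gpu_count: int
    vram_gb: int

# Thresholds below are illustrative assumptions, not Gonka parameters.
POC_MIN_GPUS = 4      # assumed scale at which direct PoC verification pays off
POC_MIN_VRAM_GB = 80  # assumed memory needed to host the PoC model

def choose_channel(node: NodeProfile, pool_available: bool) -> str:
    """Pick a participation channel for the coming epoch."""
    if node.gpu_count >= POC_MIN_GPUS and node.vram_gb >= POC_MIN_VRAM_GB:
        return "poc"              # large deployments verify consensus directly
    if pool_available:
        return "poc-via-pool"     # small rigs aggregate through a mining pool
    return "inference"            # otherwise earn on the inference channel

print(choose_channel(NodeProfile(gpu_count=1, vram_gb=24), pool_available=False))
# -> "inference"
```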

Unified model operation, near-real-time activation, and precise weight alignment—these three layers of optimization collectively aim at a core goal: making computing power and rewards more transparent and fair, allowing the Gonka network to maintain security and efficiency during scale expansion.
