HIVE's expansion strategy focuses on delivering scalable, renewable-energy-powered AI compute capacity through BUZZ HPC. The new capacity in British Columbia, coupled with the Company's deployment in Manitoba, strengthens Canada's sovereign compute capabilities, enabling faster GPU rollouts for AI workloads and enterprise clients. The move reinforces HIVE's capital-light growth model, leveraging existing partnerships while pursuing high-margin, recurring GPU revenue, and underscores the Company's strategy of positioning Canada as a hub for AI infrastructure and innovation by accelerating its GPU cloud buildout.
HIVE's BUZZ HPC has expanded its data center footprint into British Columbia, quadrupling the capacity of its liquid-cooled AI data centers. This news release is a “Designated News Release” pursuant to the Company’s prospectus supplement dated November 25, 2025.

The expansion adds a co-location facility in British Columbia, immediately providing 5 MW of capacity with options to expand by an additional 7.6 MW. This immediate capacity enables the Company to deploy up to 2,000 next-generation, high-density, AI-optimized GPUs in British Columbia, complementing the approximately 2,000 GPUs at BUZZ's existing facility in Manitoba. In total, the Company has now initiated the deployment of over 4,000 GPUs across its data center partners and owned sites, accelerating the GPU AI cloud deployment targets it announced for calendar year 2026.
HIVE's AI co-location footprint with Bell Canada AI Fabric now spans two provinces in Western Canada. Through this strategic data center partnership, the Company has a growth path to deploy over 6,000 GPUs in Canada, providing the infrastructure to support its GPU cloud revenue targets.

Notably, this expanded co-location capacity requires no additional capital expenditure: the deposits the Company placed with its strategic data center partners in 2025 are sufficient to secure the entire growth pipeline. Standard operating costs for GPU procurement, installation, and ongoing data center operations are charged separately and are expected as part of normal business activities.
The Company previously disclosed a target of deploying 6,000 latest-generation GPUs for its AI cloud. This co-location expansion provides the infrastructure needed to achieve that goal, with contracted revenue expected from 4,000 next-generation, AI-optimized GPUs (including 2,000 high-density GPUs) within the next six months. The Company anticipates deploying a further 2,000 high-density GPUs through additional partner or owned data centers.

