Illustration of Huawei’s approach to integrating High Bandwidth Memory (HBM) in their AI chips.
Huawei has announced a detailed roadmap for its Ascend series of AI chips, centered on integrating in-house High Bandwidth Memory (HBM) technology and building large-scale computing clusters. The move signals a significant investment by Huawei in advanced AI infrastructure.
The roadmap calls for improving the efficiency of the Ascend chips by developing an internal HBM solution, which could deliver better performance for data-intensive workloads such as training and deploying large neural networks. It would also mark a major step in the company's effort to reduce its reliance on external memory suppliers.
By building its own HBM technology, Huawei can tailor it specifically to meet the demands of AI applications, potentially leading to better cost-effectiveness and performance optimization. Additionally, creating large-scale clusters will enable the company to handle bigger datasets and more complex models, crucial for advancing research in deep learning and artificial intelligence.
As Huawei continues to invest heavily in these areas, it positions itself as a key player in the AI hardware market.