    Technology

    Microsoft launches Maia 200 chip for Azure AI inference

    January 28, 2026

    MENA Newswire, SAN FRANCISCO: Microsoft on Jan. 26 introduced Maia 200, the second generation of its in-house artificial intelligence accelerator, built to run AI models in production across Azure data centres. The company said Maia 200 is designed for inference, the stage where trained models generate responses to live requests, and will be used to support a range of Microsoft AI services.

    [Image: Microsoft Maia 200 targets faster AI inference in Azure data centres using custom silicon. (AI-generated image)]

    Maia 200 is manufactured on TSMC’s 3-nanometer process and includes more than 140 billion transistors, Microsoft said. The chip pairs compute with a new memory system that includes 216 gigabytes of HBM3e high-bandwidth memory and about 272 megabytes of on-chip SRAM, aimed at sustaining large-scale token generation and other inference-heavy workloads.

    Microsoft said Maia 200 delivers more than 10 petaflops of performance at 4-bit precision and about 5 petaflops at 8-bit precision, formats commonly used to run modern generative AI efficiently. The company also said the system is designed around a 750-watt power envelope and is built with scalable networking so chips can be linked for larger deployments.
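    Low-precision formats like the 4-bit and 8-bit modes cited above work by mapping full-precision weights onto a small integer grid, trading a little accuracy for much higher throughput per watt. As a rough illustrative sketch of the idea, not Microsoft's implementation, a minimal symmetric quantization of a weight tensor looks like this:

        import numpy as np

        def quantize_symmetric(weights: np.ndarray, bits: int):
            """Map float weights onto a signed integer grid of the given width.

            Illustrative only: production inference stacks use per-channel
            scales, calibration data and hardware-specific number formats.
            """
            qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
            scale = np.abs(weights).max() / qmax  # one scale for the whole tensor
            q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
            return q, scale

        def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
            return q.astype(np.float32) * scale

        w = np.random.randn(4, 4).astype(np.float32)
        for bits in (8, 4):
            q, s = quantize_symmetric(w, bits)
            err = np.abs(w - dequantize(q, s)).max()
            print(f"{bits}-bit: max absolute error {err:.4f}")

    Halving the bit width roughly doubles how many operations fit in the same silicon and memory bandwidth, which is why the chip's quoted throughput is about twice as high at 4-bit precision as at 8-bit.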

    The company said the new hardware has begun coming online in an Azure U.S. Central data centre in Iowa, with an additional location planned in Arizona. Microsoft described Maia 200 as its most efficient inference system deployed to date, reporting a 30% improvement in performance per dollar compared with its existing inference systems.

    AI inference focus and Azure deployment

    Microsoft said Maia 200 is intended to support AI products and services that rely on high-volume, low-latency model execution, including workloads running in Azure and Microsoft’s own applications. The company said it has designed the chip and the surrounding system as part of an end-to-end infrastructure approach that includes silicon, servers, networking and software for deploying AI models at scale.

    Alongside the chip, Microsoft announced early access to a Maia software development kit for developers and researchers working on model optimization. The company said the tooling is aimed at helping teams compile and tune models for Maia-based systems, and is structured to fit into common AI development workflows used for deploying inference in the cloud.
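    The article does not describe the Maia SDK's actual interfaces, but the "common AI development workflows" it mentions typically start from a step like the one below: exporting a trained model to a portable graph format that a vendor compiler can then ingest and tune for its hardware. This sketch uses PyTorch's real ONNX export as a generic stand-in, not the Maia toolchain itself:

        import torch
        import torch.nn as nn

        # A toy model standing in for a real network; the export step, not
        # the model, is the point of this illustration.
        model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
        model.eval()

        example_input = torch.randn(1, 128)
        torch.onnx.export(
            model,
            example_input,
            "model.onnx",          # portable graph a vendor compiler can ingest
            input_names=["input"],
            output_names=["logits"],
            dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
        )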

    Performance claims and model support

    Microsoft said Maia 200 is built to run large language models and advanced reasoning systems, and that it will be used for internal and hosted model deployments in Azure. The company has positioned the chip as a production inference accelerator, distinguishing it from training-focused systems that are typically used to build models before deployment.
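    The training/inference distinction the company draws maps directly onto how models are run in code: training tracks gradients so weights can be updated, while production inference freezes the weights and disables gradient bookkeeping entirely. A minimal, generic PyTorch illustration (not Maia-specific):

        import torch
        import torch.nn as nn

        model = nn.Linear(16, 4)

        # Training mode: gradients are tracked so weights can be updated.
        model.train()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss = model(torch.randn(8, 16)).sum()
        loss.backward()
        optimizer.step()

        # Inference mode: weights are frozen and gradient tracking is off.
        # This is the workload inference accelerators are built to serve.
        model.eval()
        with torch.no_grad():
            predictions = model(torch.randn(8, 16))
        print(predictions.shape)  # torch.Size([8, 4])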

    Microsoft has accelerated its custom silicon work as demand has grown for compute to serve generative AI applications, where the cost and availability of accelerators can affect how quickly services scale. Maia 200 follows Maia 100, which Microsoft introduced in 2023, and is the latest iteration of the company's dedicated AI accelerator line for data centre inference.
