Announcing the availability of Azure OpenAI Data Zones and latest updates from Azure AI

Over 60,000 customers including AT&T, H&R Block, Volvo, Grammarly, Harvey, Leya, and more leverage Microsoft Azure AI to drive AI transformation. We are excited to see the increasing adoption of AI across industries and businesses small and large. This blog summarizes new capabilities across the Azure AI portfolio that provide greater choice and flexibility to build and scale AI solutions. Key updates include:

Azure OpenAI Data Zones for the United States and European Union

We are thrilled to announce Azure OpenAI Data Zones, a new deployment option that provides enterprises with even more flexibility and control over their data privacy and residency needs. Tailored for organizations in the United States and European Union, Data Zones allow customers to process and store their data within specific geographic boundaries, ensuring compliance with regional data residency requirements while maintaining optimal performance. By spanning multiple regions within these areas, Data Zones offer a balance between the cost-efficiency of global deployments and the control of regional deployments, making it easier for enterprises to manage their AI applications without sacrificing security or speed.

This new feature simplifies the often-complex task of managing data residency by offering a solution that allows for higher throughput and faster access to the latest AI models, including the newest innovations from Azure OpenAI Service. Enterprises can now take advantage of Azure's robust infrastructure to securely scale their AI solutions while meeting stringent data residency requirements. Data Zones is available for Standard (PayGo) and coming soon to Provisioned.

Azure OpenAI Service updates

Earlier this month, we announced general availability of the Azure OpenAI Batch API for Global deployments. With the Azure OpenAI Batch API, developers can manage large-scale, high-volume processing tasks more efficiently with separate quota, a 24-hour turnaround time, at 50% lower cost than Standard Global. Ontada, an entity within McKesson, is already leveraging the Batch API to process large volumes of patient data across oncology centers in the United States efficiently and cost-effectively.
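As a concrete illustration, a batch job takes a JSONL input file in which each line is one self-contained chat-completions request. A minimal sketch of how such a file might be assembled (the deployment name, prompts, and helper names here are placeholders, not part of any SDK):

```python
import json


def build_batch_line(custom_id: str, deployment: str, text: str) -> str:
    """Build one JSONL line for a Batch API input file.

    Each line is an independent request; `custom_id` lets you match
    results back to inputs when the batch completes (within 24 hours).
    """
    request = {
        "custom_id": custom_id,
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": deployment,  # your Azure OpenAI deployment name
            "messages": [
                {"role": "system", "content": "Summarize the key facts."},
                {"role": "user", "content": text},
            ],
        },
    }
    return json.dumps(request)


def write_batch_file(path: str, deployment: str, documents: dict[str, str]) -> None:
    """Write one request line per document to a JSONL batch input file."""
    with open(path, "w", encoding="utf-8") as f:
        for doc_id, text in documents.items():
            f.write(build_batch_line(doc_id, deployment, text) + "\n")
```

The resulting file is uploaded to the service, which processes all lines asynchronously against the separate batch quota.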

"Ontada is in the unique position of serving providers, patients, and life science partners with data-driven insights. We leverage the Azure OpenAI Batch API to process tens of millions of unstructured documents efficiently, enhancing our ability to extract valuable clinical information. What would have taken months to process now takes just a week. This significantly improves evidence-based medicine practice and accelerates life science product R&D. Partnering with Microsoft, we are advancing AI-driven oncology research, aiming for breakthroughs in personalized cancer care and drug development." — Sagran Moodley, Chief Innovation and Technology Officer, Ontada

We have also enabled Prompt Caching for the o1-preview, o1-mini, GPT-4o, and GPT-4o-mini models on Azure OpenAI Service. With Prompt Caching, developers can optimize costs and latency by reusing recently seen input tokens. This feature is particularly useful for applications that use the same context repeatedly, such as code editing or long conversations with chatbots. Prompt Caching offers a 50% discount on cached input tokens on the Standard offering and faster processing times.
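The practical implication is ordering: caching matches on a shared prefix of the input, so the large, unchanging portion of a prompt should come before the part that varies per request. A minimal sketch of that pattern (message contents and the helper name are illustrative):

```python
def build_messages(static_context: str, user_query: str) -> list[dict]:
    """Order messages so the stable context forms a cacheable prefix.

    Because caching is prefix-based, putting static content
    (instructions, reference documents) first means repeated calls
    share the longest possible cached span; only the trailing,
    per-request portion is processed fresh each time.
    """
    return [
        # Identical across calls -> eligible for the cached-token discount.
        {"role": "system", "content": static_context},
        # Varies per call -> placed after the shared prefix.
        {"role": "user", "content": user_query},
    ]
```

Structuring prompts this way requires no API changes; the service detects the repeated prefix automatically.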

For the Provisioned Global deployment offering, we are lowering the initial deployment quantity for GPT-4o models to 15 Provisioned Throughput Units (PTUs), with additional increments of 5 PTUs. We are also lowering the price for Provisioned Global Hourly by 50% to broaden access to Azure OpenAI Service. Learn more here about managing costs for AI deployments.
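A small helper makes the sizing rule above concrete (the function names are ours, not part of any SDK; the defaults reflect the GPT-4o Provisioned Global minimum of 15 PTUs and increments of 5):

```python
def valid_ptu_quantity(ptus: int, minimum: int = 15, increment: int = 5) -> bool:
    """Check whether a requested PTU count is deployable: at least the
    minimum, then only in whole increments above it."""
    return ptus >= minimum and (ptus - minimum) % increment == 0


def round_up_ptus(ptus: int, minimum: int = 15, increment: int = 5) -> int:
    """Round a desired capacity up to the nearest deployable quantity."""
    if ptus <= minimum:
        return minimum
    steps = -(-(ptus - minimum) // increment)  # ceiling division
    return minimum + steps * increment
```

So a workload sized at, say, 17 PTUs of estimated throughput would be deployed at 20.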

In addition, we're introducing a 99% latency service level agreement (SLA) for token generation. This latency SLA ensures that tokens are generated at faster and more consistent speeds, especially at high volumes.

New models and customization

We continue to expand model choice with the addition of new models to the model catalog. We have several new models available this month, including healthcare industry models and models from Mistral and Cohere. We are also announcing customization capabilities for the Phi-3.5 family of models.

  • Healthcare industry models, comprising advanced multimodal medical imaging models including MedImageInsight for image analysis, MedImageParse for image segmentation across imaging modalities, and CXRReportGen, which can generate detailed structured reports. Developed in collaboration with Microsoft Research and industry partners, these models are designed to be fine-tuned and customized by healthcare organizations to meet specific needs, reducing the computational and data requirements typically needed for building such models from scratch. Explore them today in the Azure AI model catalog.
  • Ministral 3B from Mistral AI: Ministral 3B represents a significant advancement in the sub-10B category, focusing on knowledge, commonsense reasoning, function-calling, and efficiency. With support for up to 128k context length, these models are tailored for a diverse array of applications, from orchestrating agentic workflows to developing specialized task workers. When used alongside larger language models like Mistral Large, Ministral 3B can serve as an efficient intermediary for function-calling in multi-step agentic workflows.
  • Cohere Embed 3: Embed 3, Cohere's industry-leading AI search model, is now available in the Azure AI model catalog, and it's multimodal! With the ability to generate embeddings from both text and images, Embed 3 unlocks significant value for enterprises by allowing them to search and analyze their vast amounts of data, no matter the format. This upgrade positions Embed 3 as the most powerful and capable multimodal embedding model on the market, transforming how businesses search through complex assets like reports, product catalogs, and design files.
  • Fine-tuning general availability for the Phi-3.5 family, including Phi-3.5-mini and Phi-3.5-MoE. Phi family models are well suited for customization to improve base model performance across a variety of scenarios, including learning a new skill or task or enhancing the consistency and quality of responses. Given their small compute footprint as well as cloud and edge compatibility, Phi-3.5 models offer a cost-effective and sustainable alternative compared to models of the same size or the next size up. We're already seeing adoption of the Phi-3.5 family for use cases including edge inference as well as offline scenarios. Developers can fine-tune Phi-3.5-mini and Phi-3.5-MoE today through a models-as-a-platform offering using a serverless endpoint.

AI app development

We are building Azure AI to be an open, modular platform, so developers can go from idea to code to cloud quickly. Developers can now explore and access Azure AI models directly through the GitHub Marketplace via the Azure AI model inference API. Developers can try different models and compare model performance in the playground for free (usage limits apply), and when ready to customize and deploy, they can seamlessly set up and log in to their Azure account to scale from free token usage to paid endpoints with enterprise-level security and monitoring, without changing anything else in the code.
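To make "without changing anything else in the code" concrete, here is a standard-library sketch of a chat-completions call against the free GitHub Models endpoint. The endpoint URL and model name follow the GitHub Models documentation but should be treated as assumptions; moving to a paid Azure deployment means swapping the URL and credential, not the request code:

```python
import json
import os
import urllib.request

# Endpoint used by the free GitHub Models offering (assumed); an Azure
# OpenAI deployment accepts the same chat-completions payload, so
# scaling up is a configuration change rather than a code change.
GITHUB_MODELS_ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Return the URL and JSON-encoded body for a chat-completions call."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return GITHUB_MODELS_ENDPOINT, json.dumps(body).encode("utf-8")


def call_model(model: str, prompt: str, token: str) -> str:
    """Send the request (requires a GitHub token) and return the reply text."""
    url, data = build_request(model, prompt)
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["choices"][0]["message"]["content"]


# Usage (performs a network call):
#   reply = call_model("gpt-4o-mini", "Say hello.", os.environ["GITHUB_TOKEN"])
```

The official `azure-ai-inference` SDK wraps this same pattern; the point is that only the endpoint and credential differ between the free playground tier and a production deployment.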

We also announced AI App Templates to speed up AI app development. Developers can use these templates in GitHub Codespaces, VS Code, and Visual Studio. The templates offer flexibility with various models, frameworks, languages, and solutions from providers like Arize, LangChain, LlamaIndex, and Pinecone. Developers can deploy full apps or start with components, provisioning resources across Azure and partner services.

Our mission is to empower all developers across the globe to build with AI. With these updates, developers can quickly get started in their preferred environment, choose the deployment option that best fits their needs, and scale AI solutions with confidence.

New features to physique secure, enterprise-ready AI apps

At Microsoft, we're focused on helping customers use and build AI that is trustworthy, meaning AI that is secure, safe, and private. Today, I am excited to share two new capabilities to build and scale AI solutions confidently.

The Azure AI model catalog offers over 1,700 models for developers to explore, evaluate, customize, and deploy. While this vast selection empowers innovation and flexibility, it can also present significant challenges for enterprises that want to ensure all deployed models align with their internal policies, security standards, and compliance requirements. Now, Azure AI administrators can apply Azure policies to pre-approve select models for deployment from the Azure AI model catalog, simplifying model selection and governance processes. This includes pre-built policies for Models-as-a-Service (MaaS) and Models-as-a-Platform (MaaP) deployments, while a detailed guide facilitates the creation of custom policies for Azure OpenAI Service and other AI services. Together, these policies provide complete coverage for creating an allowed model list and enforcing it across Azure Machine Learning and Azure AI Studio.

To customize models and applications, developers may need access to resources located on-premises, or even resources not supported with private endpoints but still located in their custom Azure virtual network (VNET). Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Application Gateway supports a private connection from the managed VNET to any resources using the HTTP or HTTPS protocol. Today, it is verified to support a private connection to JFrog Artifactory, Snowflake Database, and private APIs. With Application Gateway in Azure Machine Learning and Azure AI Studio, now available in public preview, developers can access on-premises or custom VNET resources for their training, fine-tuning, and inferencing scenarios without compromising their security posture.

Get started today with Azure AI

It has been an incredible six months being here at Azure AI, delivering state-of-the-art AI innovation, seeing developers build transformative experiences using our tools, and learning from our customers and partners. I am excited for what comes next. Join us at Microsoft Ignite 2024 to hear about the latest from Azure AI.
