Paul Krill
Editor at Large

Nutanix announces AI partner program, GPT-in-a-Box 2.0

news
May 30, 2024 | 2 mins

GPT-in-a-Box is a full-stack platform for running generative AI workloads that integrates Nvidia NIMs and the Hugging Face LLM library.

Credit: Ken stocker / Shutterstock

Cloud software company Nutanix has announced the Nutanix AI Partner Program, intended to unite AI solutions and services partners to support customers wanting to run generative AI applications atop the Nutanix Cloud Platform and the company’s GPT-in-a-Box platform.

Unveiled May 21, the Nutanix AI Partner Program provides customers with simplified access to an expanded ecosystem of AI partners offering “real world” generative AI solutions. Partners will help organizations build and secure third-party and homegrown generative AI applications on top of the Nutanix Cloud Platform and the Nutanix GPT-in-a-Box solution, targeted at prominent AI use cases. This ecosystem of partners will help address diverse use cases including operations, cybersecurity, fraud detection, and customer support across industries such as health care, financial services, and legal and professional services, Nutanix said. Among the initial partners are Codeium, DataRobot, Dkue, Instabase, Neural Magic, and RunAI.

Nutanix also announced GPT-in-a-Box 2.0, an upgrade of the company’s full-stack AI platform that is due in the second half of this year. It is intended to deliver expanded Nvidia accelerated computing and LLM (large language model) support, as well as simplified model management and integration with Nvidia NIMs (Nvidia inference microservices) and the Hugging Face LLM library.

Nutanix GPT-in-a-Box is a full-stack platform intended to simplify enterprise AI adoption, with integration with Nutanix Objects and Nutanix Files for data and model storage. GPT-in-a-Box 2.0 will include a unified user interface for foundation model management, API endpoint creation, and end-user access key management. It will also support Nvidia Tensor Core GPUs.

Paul Krill is editor at large at InfoWorld. Paul has been covering computer technology as a news and feature reporter for more than 35 years, including 30 years at InfoWorld. He has specialized in coverage of software development tools and technologies since the 1990s, and he continues to lead InfoWorld’s news coverage of software development platforms including Java and .NET and programming languages including JavaScript, TypeScript, PHP, Python, Ruby, Rust, and Go. Long trusted as a reporter who prioritizes accuracy, integrity, and the best interests of readers, Paul is sought out by technology companies and industry organizations who want to reach InfoWorld’s audience of software developers and other information technology professionals. Paul has won a “Best Technology News Coverage” award from IDG.