NVIDIA AI Workbench Speeds Adoption of Custom Generative AI for World’s Enterprises

Breaking News from SIGGRAPH — NVIDIA has just unveiled NVIDIA AI Workbench, a comprehensive, user-friendly toolkit designed to empower developers to swiftly build, experiment with, and tailor pretrained generative AI models on a PC or workstation. Projects can then be scaled to virtually any data center, public cloud, or NVIDIA DGX™ Cloud.

The AI Workbench eliminates the complexities of initiating an enterprise AI project. It provides a streamlined interface on a local system, enabling developers to modify models from renowned repositories like Hugging Face, GitHub, and NVIDIA NGC™ using custom data. These models can then be effortlessly shared across multiple platforms.

“Businesses worldwide are striving to find the right infrastructure and build generative AI models and applications,” said Manuvir Das, vice president of enterprise computing at NVIDIA. “NVIDIA AI Workbench offers a simplified pathway for cross-organizational teams to create the AI-based applications that are becoming increasingly vital in today's business landscape.”

Welcoming a New Era for AI Developers
Despite the availability of hundreds of thousands of pretrained models, customizing them with various open-source tools can be a daunting task. It often involves searching through multiple online repositories for the right framework, tools, and containers, and acquiring the necessary skills to tailor a model for a specific use case.

NVIDIA AI Workbench is here to change that. It enables developers to customize and run generative AI with just a few clicks. It brings together all necessary enterprise-grade models, frameworks, software development kits, and libraries from open-source repositories and the NVIDIA AI platform into a unified developer toolkit.

Top AI infrastructure providers — including Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo, and Supermicro — are embracing AI Workbench for its ability to enhance their latest generation of multi-GPU-capable desktop workstations, high-end mobile workstations, and virtual workstations.

Developers using a Windows or Linux-based NVIDIA RTX™ PC or workstation can now initiate, test, and fine-tune enterprise-grade generative AI projects on their local RTX systems, and easily access data center and cloud computing resources to scale as needed.

Introducing NVIDIA AI Enterprise 4.0 Software: Accelerating AI Deployment
To further expedite the adoption of generative AI, NVIDIA has announced the latest version of its enterprise software platform, NVIDIA AI Enterprise 4.0. This platform equips businesses with the tools needed to adopt generative AI, while also providing the security and API stability required for reliable production deployments.

New software and tools in NVIDIA AI Enterprise 4.0 that help streamline generative AI deployment include:

  • NVIDIA NeMo™, a cloud-native framework for building, customizing, and deploying large language models. With NeMo, NVIDIA AI Enterprise provides end-to-end support for creating and customizing LLM applications.
  • NVIDIA Triton™ Management Service, which helps automate and optimize production deployments. It allows enterprises to automatically deploy multiple NVIDIA Triton Inference Server instances in Kubernetes with model orchestration for efficient operation of scalable AI.
  • NVIDIA Base Command Manager Essentials cluster management software, which helps enterprises maximize performance and utilization of AI servers across data center, multi-cloud, and hybrid-cloud environments.

NVIDIA AI Enterprise software — which lets users build and run NVIDIA AI-enabled solutions across the cloud, data center, and edge — is certified to run on mainstream NVIDIA-Certified Systems™, NVIDIA DGX systems, all major cloud platforms, and newly announced NVIDIA RTX workstations.

Leading software companies ServiceNow and Snowflake, as well as infrastructure provider Dell Technologies, which offers Dell Generative AI Solutions, recently announced they are collaborating with NVIDIA to enable new generative AI solutions and services on their platforms. The integration of NVIDIA AI Enterprise 4.0 and NVIDIA NeMo provides a foundation for production-ready generative AI for customers.

NVIDIA AI Enterprise 4.0 will be integrated into partner marketplaces, including AWS Marketplace, Google Cloud, and Microsoft Azure, as well as through NVIDIA cloud partner Oracle Cloud Infrastructure.

Moreover, MLOps providers, including Azure Machine Learning, ClearML, Domino Data Lab, Run:AI, and Weights & Biases, are integrating seamlessly with the NVIDIA AI platform to simplify the development of production-grade generative AI models.

Extensive Partner Support
“Dell Technologies and NVIDIA are dedicated to assisting enterprises in building purpose-built AI models to tap into the immense potential of generative AI. With NVIDIA AI Workbench, developers can leverage the full Dell Generative AI Solutions portfolio to customize models on PCs, workstations, and data center infrastructure.” — Meghana Patwardhan, vice president of commercial client products at Dell Technologies

“Most enterprises lack the expertise, budget, and data center resources to manage the high complexity of AI software and systems. We are excited about the potential of NVIDIA AI Workbench to simplify generative AI project creation with one-click training and deployment on the HPE GreenLake edge-to-cloud platform.” — Evan Sparks, chief product officer for AI at HPE

“As a market leader in workstations offering the performance and efficiency needed for the most demanding data science and AI models, we have a long history of collaboration with NVIDIA. HP is embracing the next generation of high-performance systems, coupled with NVIDIA RTX Ada Generation GPUs and NVIDIA AI Workbench, bringing the power of generative AI to our enterprise customers and helping them move AI workloads between the cloud and local systems.” — Jim Nottingham, senior vice president of advanced computing solutions at HP Inc.

“Lenovo and NVIDIA are helping customers overcome deployment complexities and more easily implement generative AI to deliver transformative services and products to the market. NVIDIA AI Workbench and the Lenovo AI-ready portfolio enable developers to leverage the power of their smart devices and scale across edge-to-cloud infrastructure.” — Rob Herman, vice president and general manager of Lenovo Workstation & Client AI

“The longstanding VMware and NVIDIA partnership has helped unlock the power of AI for every business by delivering an end-to-end enterprise platform optimized for AI workloads. Together, we are making generative AI more accessible and easier to implement in the enterprise. With AI Workbench, NVIDIA is giving developers a set of powerful tools to help enterprises accelerate gen AI adoption. With the new NVIDIA AI Workbench, development teams can seamlessly move AI workloads from the desktop to production.” — Chris Wolf, vice president of VMware AI Labs

Watch NVIDIA founder and CEO Jensen Huang’s SIGGRAPH keynote address on demand to learn more about NVIDIA AI Workbench and NVIDIA AI Enterprise 4.0.

AI Workbench is coming soon in early access. Sign up to get notified when it becomes available.
