Topic

Generative AI

Core generative AI workflows and ecosystem knowledge.

Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.

LLM · Generative AI · Machine Learning · MLOps
8 hrs · live · Checked Mar 16, 2026

Learn how deep learning works through hands-on exercises in computer vision and natural language processing.

LLM · Generative AI · Machine Learning · Computer Vision · Multimodal
8 hrs · self-paced · Checked Mar 16, 2026
Verified free · basic

In this no-coding course, learn Generative AI concepts and applications, as well as the challenges and opportunities in this exciting field.

LLM · Generative AI
2 hrs · self-paced · Checked Mar 16, 2026
Pricing not stated · basic

Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before.

LLM · Generative AI · Machine Learning
8 hrs · live · Checked Mar 16, 2026

Learn how Transformers are used as the building blocks of modern large language models (LLMs).

LLM · Generative AI
6 hrs · self-paced · Checked Mar 16, 2026

In this course you'll learn the end-to-end development workflow for generating synthetic data using Transformers, including data preprocessing, model pre-training, fine-tuning, inference, and evaluation.

LLM · Generative AI · Computer Vision · Multimodal
4 hrs · self-paced · Checked Mar 16, 2026
Pricing not stated · basic

Take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines, with applications in creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more.

LLM · Generative AI · Machine Learning · Multimodal · Simulation & Physical AI
8 hrs · self-paced · Checked Mar 16, 2026

About This Course

Very large deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., wav2vec 2.0), have certain properties that set them apart from their smaller counterparts. As DNNs become larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward general artificial intelligence. Training models that contain tens to hundreds of billions of parameters on vast datasets isn't trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge.

In this workshop, participants will learn how to:
- Train neural networks across multiple servers
- Use techniques such as activation checkpointing, gradient accumulation, and various forms of model parallelism to overcome the challenges associated with large-model memory footprints
- Capture and understand training performance characteristics to optimize model architecture
- Deploy very large multi-GPU models to production using NVIDIA Triton™ Inference Server

The goal of this course is to demonstrate how to train the largest of neural networks and deploy them to production.

Requirements
- Good understanding of PyTorch
- Good understanding of deep learning and data-parallel training concepts
- Hands-on practice with deep learning and data-parallel training is useful, but optional

Tools, libraries, and frameworks used: PyTorch, Megatron-LM, DeepSpeed, Slurm, Triton Inference Server

Related Training
- Building Transformer-Based Natural Language Processing Applications: Learn how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents.
- Fundamentals of Deep Learning for Multi-GPUs: Techniques for training deep neural networks on multi-GPU technology to shorten the training time required for data-intensive applications.
For additional hands-on training through the NVIDIA Deep Learning Institute, visit www.nvidia.com/dli.
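Two of the memory-saving techniques the workshop names, activation checkpointing and gradient accumulation, can be sketched in a few lines of plain PyTorch. This is a minimal illustration, not the workshop's own material: `TinyModel`, the sizes, and the step counts are all invented for the example. The checkpointed block recomputes its activations during the backward pass instead of storing them, and gradients from several micro-batches are accumulated before a single optimizer update, simulating a larger effective batch size.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Toy two-block model; block1 is activation-checkpointed, so its
# intermediate activations are recomputed during backward rather than kept.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
        self.block2 = nn.Linear(32, 1)

    def forward(self, x):
        # use_reentrant=False is the recommended checkpointing mode
        h = checkpoint(self.block1, x, use_reentrant=False)
        return self.block2(h)

model = TinyModel()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

accum_steps = 4  # pretend a 4x larger batch than fits in memory
opt.zero_grad()
for step in range(8):
    x = torch.randn(2, 8)
    y = torch.randn(2, 1)
    # Scale the loss so accumulated gradients average over micro-batches.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate into .grad across micro-batches
    if (step + 1) % accum_steps == 0:
        opt.step()       # one optimizer update per accum_steps micro-batches
        opt.zero_grad()  # reset before the next accumulation window
```

Model parallelism proper (splitting one model's layers or tensors across GPUs, as Megatron-LM and DeepSpeed do) requires multi-device setup and is beyond a single-process sketch like this.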

LLM · Generative AI · Machine Learning · Computer Vision · Multimodal
8 hrs · live · Checked Mar 16, 2026
Paid · amateur

Agents powered by large language models (LLMs) have shown great capability for using tools, retrieving information from documents, and planning their approaches.

LLM · Generative AI · Machine Learning · RAG · AI Agents
8 hrs · self-paced · Checked Mar 16, 2026
Pricing not stated · amateur

Agents powered by large language models (LLMs) have shown great capability for using tools, retrieving information from documents, and planning their approaches.

LLM · Generative AI · Machine Learning · RAG · AI Agents
8 hrs · live · Checked Mar 16, 2026

In this introductory course, we will provide a high-level overview of Retrieval Augmented Generation and how it improves Generative AI (GenAI).

LLM · Generative AI · RAG
1 hr · self-paced · Checked Mar 16, 2026
Paid · amateur

This course teaches AI practitioners to optimize and deploy large language models using NVIDIA Inference Microservices.

LLM · Generative AI · MLOps
3 hrs · self-paced · Checked Mar 16, 2026

This course focuses on production-level deployment of LLM applications, especially enterprise-grade deployment of RAG pipelines.

LLM · Generative AI · RAG · MLOps
4 hrs · self-paced · Checked Mar 16, 2026

Learn techniques that can take your RAG system from an interesting proof-of-concept to a serious asset.

LLM · Generative AI · RAG · MLOps
4 hrs · self-paced · Checked Mar 16, 2026

Learn how NIM enables the building, deploying, and scaling of AI applications.

LLM · Generative AI · MLOps
2 hrs · self-paced · Checked Mar 16, 2026
Pricing not stated · basic

Learn how to build a variety of LLM-based applications through the use of modern prompt engineering techniques.

LLM · Generative AI · MLOps
8 hrs · live · Checked Mar 16, 2026
Pricing not stated · basic

Just as humans have multiple senses to perceive the world around them, computers have a variety of sensors to help perceive the human world.

LLM · Generative AI · Machine Learning · AI Agents · Multimodal
8 hrs · live · Checked Mar 16, 2026

Get started quickly in developing LLM-based applications by exploring the open-source ecosystem, including pretrained LLMs.

LLM · Generative AI · MLOps
9 hrs · live · Checked Mar 16, 2026
Pricing not stated · professional

Retrieval-Augmented Generation (RAG) pipelines are revolutionizing enterprise operations.

LLM · Generative AI · RAG · MLOps
8 hrs · live · Checked Mar 16, 2026
Pricing not stated · basic

Learn how to build a variety of LLM-based applications through the use of modern prompt engineering techniques.

LLM · Generative AI
8 hrs · self-paced · Checked Mar 16, 2026

Get started quickly in developing LLM-based applications by exploring the open-source ecosystem, including pretrained LLMs.

LLM · Generative AI · MLOps
8 hrs · self-paced · Checked Mar 16, 2026
Pricing not stated · amateur

Learn how to design intelligent agents that can be adapted for arbitrary environments at scale.

LLM · Generative AI · AI Agents
8 hrs · live · Checked Mar 16, 2026
Verified free · basic

NeMo Curator is a GPU-accelerated data-curation tool that improves generative AI model accuracy by processing text, image, and video data at scale for training and customization.

LLM · Generative AI
1 hr · self-paced · Checked Mar 16, 2026

Gain hands-on experience with Cosmos world foundation models (WFMs) and tools to generate data to train physical AI.

LLM · Generative AI · Computer Vision · Robotics · Simulation & Physical AI
2 hrs · self-paced · Checked Mar 16, 2026
OpenCourseMap