MLOps
Operational practices for ML systems.
Explore how to use Numba—the just-in-time, type-specializing Python function compiler—to create and launch CUDA kernels to accelerate Python programs on massively parallel NVIDIA GPUs.
Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.
Deploy a deep learning model to automate disaster management use cases.
An Even Easier Introduction to CUDA
NVIDIA DLI
An interactive accompaniment to Mark Harris's popular blog post An Even Easier Introduction to CUDA.
Learn to write simple, portable, parallel-first applications using only standard C++ language features that can be compiled without modification to take advantage of NVIDIA GPU-accelerated environments.
Organizations analyze large amounts of tabular data to uncover insights, improve products and services, and achieve efficiency.

The NVIDIA Omniverse platform provides developers with SDKs, APIs, and microservices for developing OpenUSD-based workflows and applications that enable industrial-scale 3D digital twins, from planning to simulation and operations.
Sizing LLM Inference Systems
NVIDIA DLI
This course teaches AI practitioners to optimize and deploy large language models using NVIDIA Inference Microservices (NIM).
Fundamentals of Working With OpenUSD
NVIDIA DLI
In this introductory-level training lab, we cover the fundamentals of working with Universal Scene Description (OpenUSD), an open and extensible ecosystem for describing, composing, simulating, and collaborating within 3D worlds.
The course focuses on production-level deployment of LLM applications, especially enterprise-grade deployment of RAG pipelines.
Learn techniques that can take your RAG system from an interesting proof-of-concept to a serious asset.
Learn how NVIDIA NIM enables building, deploying, and scaling AI applications.
Learn how to build a variety of LLM-based applications through the use of modern prompt engineering techniques.
Learn how to use two powerful NVIDIA developer tools: Nsight Systems and Nsight Compute.
Get started quickly with developing LLM-based applications by exploring the open-source ecosystem, including pretrained LLMs.
Retrieval-Augmented Generation (RAG) pipelines are revolutionizing enterprise operations.
Learn how to write, compile, and run GPU-accelerated code, leverage CUDA core libraries to harness the power of massive parallelism provided by modern GPU accelerators, optimize memory migration between CPU and GPU, and implement your own algorithms.
Fundamentals of Accelerated Data Science
NVIDIA DLI
Data science is about using scientific methods, processes, algorithms, and systems to analyze and extract insights from data.
Learn to build end-to-end medical AI workflows using the latest tools.
Explore how BioNeMo NIM Microservices and Blueprints can accelerate drug development, improve prediction accuracy, and enable more efficient analysis of biochemical data, with real-world examples of their potential in drug discovery.