Machine Learning
Classical ML fundamentals and model evaluation.
Fundamentals of Deep Learning
NVIDIA DLI
Businesses worldwide are using artificial intelligence to solve their greatest challenges.
Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.
Learn how to identify anomalies and failures in time series data, estimate the remaining useful life of the corresponding parts, and use this information to map anomalies to failure conditions.
Applications of AI for Anomaly Detection
NVIDIA DLI
Whether your organization needs to monitor cybersecurity threats, fraudulent financial transactions, product defects, or equipment health, artificial intelligence can help catch data abnormalities before they impact your business.
Integrating Sensors with NVIDIA DRIVE®
NVIDIA DLI
Learn how to integrate your sensor of choice for NVIDIA DRIVE®.
Deploy a deep learning model to automate disaster management use cases.
Getting Started with Deep Learning
NVIDIA DLI
Learn how deep learning works through hands-on exercises in computer vision and natural language processing.
Building Real-Time Video AI Applications
NVIDIA DLI
Learn the skills you need to enable real-time transformation of raw video data from widely deployed camera sensors into deep learning-based insights.
Getting Started with AI on Jetson Nano
NVIDIA DLI
Build and train a classification data set and model with the NVIDIA Jetson Nano.
Building A Brain in 10 Minutes
NVIDIA DLI
The notebook explores the biological inspiration for early neural networks.
Whether companies are manufacturing semiconductor chips, airplanes, automobiles, smartphones, or food or beverages, quality and throughput are key benefits of optimization.
Introduction to Graph Neural Networks
NVIDIA DLI
Learn the basic concepts, implementations, and applications of graph neural networks (GNNs) with hands-on interactive activities so that you can get started using GNNs as a graph analysis tool.
Modern deep learning challenges leverage increasingly larger datasets and more complex models.
Building Conversational AI Applications
NVIDIA DLI
Learn how to quickly build and deploy a conversational AI pipeline including transcription, NLP, and speech.
Organizations analyze large amounts of tabular data to uncover insights, improve products and services, and achieve efficiency.
Learn to build, train, fine-tune, and deploy a GPU-accelerated automatic speech recognition service with NVIDIA Riva that includes customized features.
Generative AI with Diffusion Models
NVIDIA DLI
Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before.
Take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines, with applications in creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more.
About This Course
Very large deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., Wav2Vec 2.0), have certain properties that set them apart from their smaller counterparts. As DNNs become larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward general artificial intelligence. Training models that contain tens to hundreds of billions of parameters on vast datasets isn't trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge.

In this workshop, participants will learn how to:
- Train neural networks across multiple servers
- Use techniques such as activation checkpointing, gradient accumulation, and various forms of model parallelism to overcome the challenges associated with large-model memory footprints
- Capture and understand training performance characteristics to optimize model architecture
- Deploy very large multi-GPU models to production using NVIDIA Triton™ Inference Server

The goal of this course is to demonstrate how to train the largest of neural networks and deploy them to production.

Requirements
- Good understanding of PyTorch
- Good understanding of deep learning and data parallel training concepts
- Practice with deep learning and data parallel training is useful but optional

Tools, libraries, and frameworks used: PyTorch, Megatron-LM, DeepSpeed, Slurm, Triton Inference Server

Related Training
Building Transformer-Based Natural Language Processing Applications
Learn how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents.
Fundamentals of Deep Learning for Multi-GPUs
Techniques for training deep neural networks on multi-GPU technology to shorten the training time required for data-intensive applications.
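One of the techniques the workshop lists, gradient accumulation, can be sketched in a few lines of plain PyTorch: gradients from several small micro-batches are summed before a single optimizer step, simulating a batch too large to fit in GPU memory. The model, data, and hyperparameters below are illustrative placeholders, not course materials.

```python
# Minimal gradient-accumulation sketch (illustrative model and data).
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

accum_steps = 4  # micro-batches per optimizer step
# Eight random micro-batches of (inputs, targets); placeholders only.
data = [(torch.randn(2, 8), torch.randn(2, 1)) for _ in range(8)]

optimizer.zero_grad()
for i, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y)
    # Scale each micro-batch loss so the accumulated gradient equals
    # the average gradient over the full effective batch.
    (loss / accum_steps).backward()
    if (i + 1) % accum_steps == 0:
        optimizer.step()       # one update per accum_steps micro-batches
        optimizer.zero_grad()
```

The same idea underlies the memory-saving strategies covered in the course: activation checkpointing and model parallelism trade compute or communication for memory, while gradient accumulation trades wall-clock time for it.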
For additional hands-on training through the NVIDIA Deep Learning Institute, visit www.nvidia.com/dli .
Building RAG Agents with LLMs
NVIDIA DLI
Agents powered by large language models (LLMs) have shown great capability in using tools, reading documents, and planning their approaches.
Exploring Adversarial Machine Learning
NVIDIA DLI
In this course, which is designed to be accessible to both data scientists and security practitioners, you'll explore the security risks and vulnerabilities that adopting machine learning might expose you to.
Just like how humans have multiple senses to perceive the world around them, computers have a variety of sensors to help perceive the human world.
Learn how to use two powerful NVIDIA developer tools: Nsight Systems and Nsight Compute.