Tutorials
Discover a wealth of tutorials from our developers. Choose from videos, notebooks, or traditional tutorials.
DeepSeek-R1-Distill-Llama-70B
Run DeepSeek-R1-Distill-Llama-70B using text-generation-inference (TGI) on the Intel® Data Center GPU Max Series.
Expose Apps and Services with Tunnels
Use popular tunnel tools to expose your apps or services on Intel® Tiber™ AI Cloud.
Fine-Tune Meta* Llama 3.2-Vision-Instruct Multimodal LLM
Fine-tune a Multimodal Large Language Model (MLLM) on an image-caption dataset using the Intel® Gaudi® 2 processor.
Fine-tune Meta* Llama-3.2-3B-Instruct for a Multilingual Chatbot
Fine-tune the Llama-3.2 model with LoRA, test translation, and try a chatbot exercise.
Orchestrate AI Workloads with dstack
Improve AI workload orchestration with support for Intel® Gaudi® AI accelerators.
Intel® Gaudi® 2 processor
Onboard the Intel® Gaudi® 2 processor for inference and training.
MLOps and AI optimization
Use operational AI for scalable production solutions.
Accelerated NumPy* Calculations
Accelerate Python loops with the Intel® Math Kernel Library.
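The idea behind this tutorial can be sketched in a few lines: replacing an element-wise Python loop with a NumPy array operation lets the work dispatch to optimized BLAS routines (MKL-backed in Intel's NumPy builds). This is a minimal illustrative example, not the tutorial's actual workload; the function names here are made up for the sketch.

```python
import numpy as np

def dot_loop(a, b):
    """Pure-Python dot product: one interpreted iteration per element."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones_like(a)

# The same reduction as a single NumPy call; with an MKL-backed NumPy
# this runs in optimized, multithreaded native code.
vectorized = float(np.dot(a, b))
looped = dot_loop(a, b)

assert abs(vectorized - looped) < 1e-6
```

Timing the two with `timeit` typically shows the vectorized call running orders of magnitude faster, which is the effect the tutorial demonstrates at scale.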
Visual Studio Code Dev
Set up the Visual Studio Code* app to work on a compute instance.
XPU Verify Tool
Run a suite of tests for discrete GPUs on Linux* operating systems.