Mastering Generative AI with LLMs: A Hugging Face Guide



LLMs & Transformers: Dive into Hugging Face, Fine-tuning, Tokenization, and Datasets

What you will learn

How to build your own tokenizer using Hugging Face’s Transformers library

How to build a custom dataset in Hugging Face

How to train a GPT-2 model from scratch using Hugging Face’s Transformers library

How to instruction fine-tune an LLM using PEFT

Description

Welcome to “Mastering Generative AI with Large Language Models: A Hugging Face Guide”. In today’s AI-driven world, Large Language Models (LLMs) have revolutionized the realm of generative AI, enabling machines to generate human-like text, answer questions, and even author original content. This course is meticulously crafted to provide you with a deep understanding of these models and how to harness their power using the renowned Hugging Face platform.

Our journey begins with a thorough introduction to the world of LLMs, deciphering their intricacies and exploring how to manage their compute requirements. From there, we transition into the Hugging Face universe, a pivotal platform offering an array of pre-trained models ready to be used in innovative applications.
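To give a taste of how approachable this is, here is a minimal sketch of pulling a pre-trained model from the Hub with the transformers pipeline API. The model name distilgpt2 and the prompt are illustrative choices, not ones prescribed by the course:

```python
# A minimal sketch, assuming the distilgpt2 checkpoint; any text-generation
# model on the Hub would work the same way.
from transformers import pipeline

# pipeline() downloads the model weights and tokenizer from the Hub
# on first use, then wraps them behind a single callable.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```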


But understanding the theory isn’t enough; practical knowledge is vital. That is why our second section delves deep into the workings of Transformers, the core architecture behind LLMs. Get hands-on with manipulating datasets, building custom models, and understanding the art of tokenization.
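To make that concrete, the sketch below shows the two objects this section revolves around: a tokenizer turning text into integer IDs, and a Hub dataset tokenized in batches. The bert-base-uncased and imdb names are illustrative assumptions, not the course's exact materials:

```python
# A minimal sketch of tokenization and dataset handling.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Tokenization: raw text in, integer IDs (and their subword tokens) out.
encoded = tokenizer("Transformers are the engine behind LLMs.")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))

# Datasets: load from the Hub, then tokenize in batches with map().
dataset = load_dataset("imdb", split="train[:100]")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)
print(tokenized[0].keys())  # original columns plus input_ids etc.
```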

Lastly, special emphasis is placed on training and fine-tuning. Learn how to tailor LLMs to your specific needs, be it summarization or text generation. With techniques like Instruction Fine-tuning and PEFT, you’ll master the art of tweaking models to perfection. We will finish off the course by training GPT-2 completely from scratch to generate text on our custom dataset.
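For a flavor of what PEFT looks like in code, here is a minimal LoRA sketch against a BART backbone, mirroring the summarization exercises. The facebook/bart-base checkpoint and the r, lora_alpha, and lora_dropout values are illustrative assumptions, not the course's exact settings:

```python
# A minimal LoRA sketch with the peft library.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# LoRA freezes the base weights and injects small trainable low-rank
# matrices into the attention layers, so only a tiny fraction of
# parameters is updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```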

Introduction to LLMs & Hugging Face

Understanding Large Language Models In Hugging Face
Managing Compute Requirements For LLMs
What Is Hugging Face?
Accessing Pre-Trained Models From Hugging Face: Hands-On Example

Working With Transformers in Hugging Face

Notebook for Next 2 Lectures
Tokenizer Fundamentals
Working With Models In Transformers
Creating Our Own Dataset In Hugging Face
Creating Our Own Tokenizer In Hugging Face
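As a preview of the custom-tokenizer lecture, a from-scratch BPE tokenizer might be trained like this with the tokenizers library. The corpus path corpus.txt, the vocabulary size, and the special-token list are hypothetical choices:

```python
# A minimal sketch of training a BPE tokenizer from scratch;
# corpus.txt is a hypothetical training file.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

# The trainer learns the merge rules and vocabulary from the corpus.
trainer = trainers.BpeTrainer(
    vocab_size=8000,
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)
tokenizer.save("my-tokenizer.json")

print(tokenizer.encode("Hello, tokenizers!").tokens)
```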

Training & Fine-tuning LLMs from Hugging Face

Instruction Fine-tuning & PEFT
Full Fine-tuning BART For Summarization
Training Our Summarization Model Using PEFT
Training GPT-2 From Scratch In Hugging Face
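And as a preview of the final lecture, initializing and training a GPT-2 with random weights (rather than loading a pre-trained checkpoint) might look like the sketch below. The model sizes are illustrative, and the toy corpus stands in for the custom dataset built earlier in the course:

```python
# A minimal sketch of training GPT-2 from scratch: a fresh config and
# randomly initialized weights instead of from_pretrained().
from datasets import Dataset
from transformers import (
    AutoTokenizer, DataCollatorForLanguageModeling, GPT2Config,
    GPT2LMHeadModel, Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # reuse GPT-2's vocabulary
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token

config = GPT2Config(vocab_size=tokenizer.vocab_size,
                    n_layer=6, n_head=8, n_embd=512)
model = GPT2LMHeadModel(config)  # random init, not a pre-trained checkpoint

# Toy corpus as a stand-in; a real run needs far more text.
dataset = Dataset.from_dict({"text": ["hello world"] * 64})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="gpt2-scratch",
                         per_device_train_batch_size=8,
                         num_train_epochs=1)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized, data_collator=collator)
trainer.train()
```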
