Free NPTEL Course: Introduction to Large Language Models (LLMs) by IIT Delhi & IIT Bombay
Course Details
| Field | Detail |
|---|---|
| Exam Registration | 7420 |
| Course Status | Ongoing |
| Course Type | Elective |
| Language | English |
| Duration | 12 weeks |
| Categories | Electrical, Electronics and Communications Engineering, Information Technology, Communication and Signal Processing, Data Science, Artificial Intelligence |
| Credit Points | 3 |
| Level | Undergraduate/Postgraduate |
| Start Date | 19 Jan 2026 |
| End Date | 10 Apr 2026 |
| Enrollment Ends | 02 Feb 2026 |
| Exam Registration Ends | 20 Feb 2026 |
| Exam Date | 17 Apr 2026 IST |
| NCrF Level | 4.5 to 8.0 |
Master the Fundamentals of Large Language Models with This Free NPTEL Course
The field of Artificial Intelligence is being revolutionized by Large Language Models (LLMs) like GPT-4, Llama, and Gemini. Understanding their core principles is no longer just for researchers—it's a crucial skill for engineers, developers, and tech enthusiasts. A new, comprehensive course offered by the National Programme on Technology Enhanced Learning (NPTEL) provides a unique opportunity to learn these concepts from leading experts at India's premier institutes.
Introduction to Large Language Models (LLMs) is a detailed 12-week program designed and taught by distinguished professors from IIT Delhi and IIT Bombay. This course is meticulously structured to take you from the basics of Natural Language Processing (NLP) to the cutting-edge advancements in LLM research, including alignment, prompting, and ethical considerations.
Learn from Renowned IIT Faculty and Industry Experts
The course brings together the expertise of two acclaimed professors:
- Prof. Tanmoy Chakraborty (IIT Delhi): Holder of the Rajiv Khemani Young Faculty Chair in AI, Prof. Chakraborty leads the Laboratory for Computational Social Systems (LCS2). His research focuses on building economical, interpretable, and faithful language models for mental health and cyber-informatics. A former Google PhD scholar and a recipient of prestigious fellowships such as the Ramanujan and Humboldt fellowships, he is also the author of the textbook Introduction to Large Language Models.
- Prof. Soumen Chakrabarti (IIT Bombay): A Professor of Computer Science and a Shanti Swarup Bhatnagar Prize awardee, Prof. Chakrabarti has extensive research and industry experience. His work on linking text to knowledge bases and graph search has been recognized with best paper awards at top-tier conferences. He has also worked at IBM Almaden, Carnegie Mellon, and Google.
This combination of deep academic research and real-world industry experience ensures the course content is both theoretically sound and practically relevant.
Who Should Enroll in This LLM Course?
This course is ideally suited for:
- Undergraduate and Postgraduate students in Computer Science, Electrical Engineering, Electronics & Communication, Information Technology, Mathematics, and Data Science.
- Professionals and developers looking to build a strong foundational understanding of how LLMs work.
- Anyone with an interest in AI and NLP who wants to move beyond surface-level knowledge.
Prerequisites: An understanding of Machine Learning and Python programming is mandatory. Familiarity with Deep Learning is helpful but not required. NPTEL offers excellent preparatory courses on these topics.
Detailed 12-Week Course Curriculum
The course is structured to provide a logical and comprehensive learning journey:
| Week | Topics Covered |
|---|---|
| Weeks 1-2 | Course & NLP Introduction, Statistical Language Models |
| Weeks 3-5 | Deep Learning Fundamentals, Word Embeddings (Word2Vec, GloVe), Neural Language Models (RNN, LSTM, Attention) |
| Week 6 | Transformer Architecture: Self-Attention, Multi-Head Attention, Positional Encoding (PyTorch Implementation; see the sketch after this table) |
| Week 7 | Pre-Training Strategies: ELMo, BERT, GPT-family models. Introduction to HuggingFace. |
| Week 8 | Prompting & Alignment: Instruction Tuning, Advanced Prompting Techniques, RLHF (Reinforcement Learning from Human Feedback) |
| Week 9 | Retrieval-Augmented Generation (RAG): Open-book QA, REALM, RAG, FiD, Knowledge Graph QA |
| Week 10 | Knowledge Graphs (KGs) and their integration with LLMs |
| Week 11 | Parameter-Efficient Fine-Tuning (LoRA, Prefix Tuning), Transformer Interpretability |
| Week 12 | Overview of GPT-4, Llama, Claude, Gemini; Ethical NLP – Bias, Toxicity, and Societal Impact |
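To give a flavour of the Week 6 hands-on work referenced above, the sketch below implements single-head scaled dot-product self-attention in PyTorch. It is a minimal, illustrative example rather than the course's own code; the class name, dimensions, and the omission of masking and multiple heads are simplifying assumptions.

```python
# Minimal single-head self-attention sketch (illustrative; not course code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # Learned projections from token embeddings to queries, keys, values.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.d_model = d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Scaled dot-product attention: softmax(QK^T / sqrt(d)) V
        scores = q @ k.transpose(-2, -1) / (self.d_model ** 0.5)
        weights = F.softmax(scores, dim=-1)
        return weights @ v

x = torch.randn(2, 5, 64)          # 2 sequences, 5 tokens, 64-dim embeddings
print(SelfAttention(64)(x).shape)  # torch.Size([2, 5, 64])
```

Multi-head attention, masking, and positional encoding, also covered in Week 6, build directly on this core operation.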
Key Learning Outcomes and Industry Relevance
By the end of this course, you will be able to:
- Comprehend the mathematical and architectural foundations of modern LLMs.
- Understand and implement core components like the Transformer architecture.
- Critically evaluate different pre-training and fine-tuning strategies.
- Apply advanced techniques like prompt engineering, RAG, and parameter-efficient adaptation (a LoRA sketch follows this list).
- Discuss the ethical challenges, limitations (hallucination, bias), and future directions of LLM research.
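To make the last outcome concrete, here is a hedged sketch of parameter-efficient adaptation with LoRA, assuming the HuggingFace transformers and peft libraries. The model name (facebook/opt-125m) and the hyperparameters are illustrative choices, not values prescribed by the course.

```python
# Illustrative LoRA setup with HuggingFace peft (assumed libraries; not course code).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# A small causal LM as the frozen base model (illustrative choice).
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoRA adds low-rank update matrices to selected projections, so only a tiny
# fraction of parameters is trained while the base weights stay frozen.
lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling applied to the updates
    target_modules=["q_proj", "v_proj"],  # OPT's query/value projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

The resulting model drops into a standard training loop, and the fine-tuned artefact is a few megabytes of adapter weights rather than a full model copy, which is what makes the approach parameter-efficient.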
Industry Support: The skills taught are directly applicable at companies at the forefront of AI, including Google, Microsoft, Adobe, IBM, Accenture, JP Morgan, Amazon, and numerous tech startups.
Resources and Certification
The course is based on Prof. Tanmoy Chakraborty's textbook, Introduction to Large Language Models (Wiley, 2025), and supplements learning with seminal research papers from conferences like ACL, NeurIPS, and ICML. NPTEL typically offers a certification for participants who successfully complete the course assignments and exam, adding significant value to your academic or professional profile.
Don't miss this chance to learn about one of the most transformative technologies of our time from the best in the field. Enroll in Introduction to Large Language Models (LLMs) on the NPTEL platform and build the expertise to navigate the future of AI.
Enroll Now →