Deep Learning Course: Concepts, Applications & Syllabus | IISc & RVU
Course Details
| Detail | Value |
|---|---|
| Exam Registration | 4992 |
| Course Status | Ongoing |
| Course Type | Core |
| Language | English |
| Duration | 12 weeks |
| Categories | Computer Science and Engineering, Artificial Intelligence, Data Science |
| Credit Points | 3 |
| Level | Undergraduate/Postgraduate |
| Start Date | 19 Jan 2026 |
| End Date | 10 Apr 2026 |
| Enrollment Ends | 02 Feb 2026 |
| Exam Registration Ends | 20 Feb 2026 |
| Exam Date | 25 Apr 2026 IST |
| NCrF Level | 4.5–8.0 |
Foundations of Deep Learning: Concepts and Applications
Deep Learning has emerged as the cornerstone of modern Artificial Intelligence, driving revolutionary advancements in fields ranging from computer vision and natural language processing to healthcare diagnostics and autonomous systems. As AI programs rapidly expand across academic institutions, there is a pressing need for a structured, foundational course that demystifies complex concepts and equips learners with practical skills.
This 12-week course, designed and delivered by leading academics from the Indian Institute of Science (IISc) and RV University, Bangalore, offers a meticulously crafted journey from the basics of neural networks to the frontiers of Large Language Models (LLMs) and Diffusion Models. It is tailored specifically for undergraduate and postgraduate students, providing both conceptual depth and hands-on implementation experience.
Meet Your Distinguished Instructors
The course brings together expertise from two premier research and academic institutions.
Prof. Sriram Ganapathy, Indian Institute of Science, Bangalore
Prof. Sriram Ganapathy is an Associate Professor in the Department of Electrical Engineering at IISc and heads the LEAP Lab. With a Ph.D. from Johns Hopkins University and rich industry research experience at IBM Watson Research Center and Google Research India/DeepMind, he brings cutting-edge insights from the forefront of AI research. His work spans speech/audio processing, deep learning, and explainable AI, with significant contributions like the Coswara project for COVID-19 screening. An award-winning researcher and educator, he ensures the course content is both rigorous and relevant to real-world challenges.
Prof. Ashwini Kodipalli, RV University, Bangalore
Dr. Ashwini Kodipalli, a Ph.D. from IISc, is a faculty member at RV University with over 15 years of teaching experience. Passionate about simplifying complex topics, she focuses her research on Biomedical Image Analysis using Deep Learning. She has led funded projects on early disease detection and has extensive experience conducting faculty development programs and hands-on sessions, making her exceptionally skilled at translating theory into accessible knowledge.
Prof. Baishali Garai, RV University, Bangalore
Dr. Baishali Garai, also an IISc Ph.D. graduate, is an Associate Professor at RV University. A recipient of the SERB Early Career Research Award, she has led multiple government-funded research projects (ISRO, SERB, DST). Her research focuses on applying deep learning to material science. With her strong foundation in both advanced concepts and practical applications, she bridges the gap between deep learning theory and scientific problem-solving.
Course Overview & Objectives
This course is a self-contained guide designed to build a strong, practical foundation in deep learning. No prior background in the subject is required, though basic Python familiarity is recommended.
Key Objectives:
- Understand the core mathematical principles behind neural networks, CNNs, RNNs, and Transformers.
- Gain hands-on experience implementing models using Python and Google Colab.
- Learn to apply deep learning to real-world problems in vision, language, and more.
- Explore advanced topics like Explainable AI (XAI), LLMs, and Generative Models.
- Develop the ability to design, train, and evaluate deep learning systems.
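To give a flavour of the hands-on work the objectives describe, here is a minimal sketch of the Week 1 material: a single sigmoid neuron (a perceptron with a smooth activation) trained by gradient descent on the AND function. This is an illustrative example, not course material; all names and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])      # AND truth table

w = rng.normal(size=2)                  # weights
b = 0.0                                 # bias
lr = 0.5                                # learning rate

for _ in range(5000):
    p = sigmoid(X @ w + b)              # forward pass
    grad = p - y                        # dLoss/dz for cross-entropy loss
    w -= lr * (X.T @ grad) / len(y)     # gradient descent on weights
    b -= lr * grad.mean()               # gradient descent on bias

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)                            # learned AND: [0 0 0 1]
```

The same forward-pass / gradient / update loop, scaled up to many layers and computed by backpropagation, is the core of everything covered in Weeks 1 and 2.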
Who Should Enroll?
This course is ideally suited for:
- UG and PG students from AICTE-affiliated institutions.
- Aspiring Machine Learning Engineers, Data Scientists, AI Researchers, Computer Vision Engineers, and NLP Specialists.
- Professionals and enthusiasts seeking a structured introduction to deep learning fundamentals.
Industry Relevance & Support
The skills taught in this course are in high demand across the global tech landscape. Industries that actively seek these competencies include:
- Technology Giants: Google, Microsoft, Amazon, Meta, Apple, IBM, NVIDIA.
- AI Research Firms: OpenAI, DeepMind.
- IT & Consulting: TCS, Infosys, Wipro, Accenture, Cognizant.
- Healthcare & Biotech: Siemens Healthineers, Philips, GE Healthcare.
- Automotive & Finance: Tesla, Bosch, JPMorgan Chase, Razorpay.
Detailed 12-Week Course Layout
| Week | Topics Covered | Key Takeaways |
|---|---|---|
| Week 1 | ML vs. DL, History of DL, Perceptron, MLP, Activation & Loss Functions, Intro to Colab. | Foundational understanding of neural network architecture. |
| Week 2 | Gradient Descent variants, Backpropagation, Regularization (Dropout, L1/L2), Hyperparameter Tuning. | Mastery of training algorithms and optimization techniques. |
| Week 3 | Fundamentals of CNNs: Convolution, Pooling, Layers. Image preprocessing & augmentation. | Ability to build CNN models for image classification from scratch. |
| Week 4 | Standard Architectures (AlexNet, VGG, ResNet), Transfer Learning, Ensemble Models. | Knowledge of advanced CNN architectures and practical transfer learning. |
| Week 5 | Explainable AI (XAI): SHAP, LIME, Grad-CAM. Interpreting model decisions. | Skills to explain and debug deep learning model predictions. |
| Week 6 | CNN for Segmentation (UNet) and Object Detection (YOLO, R-CNN). | Understanding of advanced vision tasks beyond classification. |
| Week 7 | Introduction to RNNs, Backpropagation Through Time, Challenges (Vanishing Gradients). | Conceptual foundation for sequential data modeling. |
| Week 8 | LSTMs, GRUs, Attention Mechanisms. Hands-on with sequential data. | Practical knowledge of advanced RNN variants for time-series/NLP. |
| Week 9 | NLP Fundamentals: Text Preprocessing, Word Embeddings (Word2Vec, GloVe). | Foundation for applying deep learning to text data. |
| Week 10 | Unsupervised Learning: Autoencoders (Architecture, Types, Training). | Understanding of dimensionality reduction and feature learning. |
| Week 11 | Transformer Architecture: Self-Attention, Encoder-Decoder. Self-Supervised Learning. | Strong conceptual foundation in modern attention-based models. |
| Week 12 | Large Language Models (Training, Alignment, Evaluation), Diffusion Models (Stable Diffusion). | Insight into state-of-the-art generative AI and LLMs. |
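As a taste of the Week 11 material, the sketch below computes single-head scaled dot-product self-attention in NumPy. The dimensions, variable names, and random projections are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along an axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv        # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)         # pairwise similarities
    weights = softmax(scores, axis=-1)      # each row sums to 1
    return weights @ V                      # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))     # toy sequence of 4 tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                            # (4, 8): one output per token
```

Stacking this operation with multiple heads, feed-forward layers, and residual connections yields the Transformer encoder that Weeks 11 and 12 build on.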
Recommended Textbooks & Resources
- Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. (The definitive textbook).
- Zhang, Aston, et al. Dive into Deep Learning. Cambridge University Press, 2023. (Interactive, code-first approach).
- CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University.
- Practical Deep Learning for Coders, fast.ai.
Conclusion
The Foundations of Deep Learning: Concepts and Applications course is more than just a syllabus; it's a guided pathway into one of the most transformative technologies of our time. Led by professors who are active contributors to the field, the course balances theory with practice, ensuring students not only understand the 'why' but also master the 'how'. Whether you aim to pursue research, launch a career in AI, or simply understand the technology shaping the future, this course provides the essential building blocks for your journey into deep learning.
Enroll Now →