23 July 2021 • Jet New
In the summer of 2021, from 20 May to 22 July, the NUS Statistics and Data Science Society conducted our annual study group. This iteration, we did MIT 6.S191 - Introduction to Deep Learning. This study group is internal to society members and friends to keep the group small, so do join us or ask a friend from the society for an invite!
The MIT 6.S191 course gives a surface-level introduction to the vast field of deep learning, so it emphasises breadth rather than depth. We covered 8 weeks of content, followed by a guest speaker session and the final project presentations:
Intro to Deep Learning
Deep Sequence Modeling
Deep Computer Vision
Deep Generative Modeling
Deep Reinforcement Learning
Limitations and New Frontiers
Evidential Deep Learning
Bias and Fairness
Fireside Chat with Koo Ping Shung from Data Science SG
Group Project Presentations
Week 1 - Introduction to Deep Learning
by Jet New
We officially kick off our study group! The MIT lectures are 1 hour each week, and we dive into each one during our study group session. Everyone is expected to facilitate the sharing for 1 week and present a group project, and will receive a Certificate of Completion at the end! Today, we recap neural networks, gradient descent, backpropagation, overfitting and regularization!
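To make the recap concrete, here's a minimal sketch (illustrative, not from the session itself) of gradient descent on a one-parameter least-squares problem, with the gradient derived by the chain rule just as backpropagation would compute it:

```python
import numpy as np

# Toy data: y = 3x plus noise; we fit y_hat = w * x by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 0.1 * rng.normal(size=100)

w = 0.0   # initial weight
lr = 0.1  # learning rate
for step in range(200):
    y_hat = w * x
    loss = np.mean((y_hat - y) ** 2)     # mean squared error
    grad = np.mean(2 * (y_hat - y) * x)  # dL/dw via the chain rule
    w -= lr * grad                       # gradient descent update

print(w)  # converges to roughly 3
```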
Week 2 - Deep Sequence Modeling
by Axel Lau, Ananya Gupta, Ang Yi Zhe
Starting off by exploring the GPT-3 beta API, we proceed to explain recurrent neural networks (RNNs), long short-term memory (LSTM) networks, applications on temporal data, and a thorough technical walkthrough of deep learning-based music generation!
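For a flavour of what an LSTM sequence model looks like in code, here's a minimal Keras sketch in the spirit of that walkthrough; the vocabulary size and layer widths are illustrative, not the exact values from the session:

```python
import tensorflow as tf

vocab_size = 128  # illustrative, e.g. characters in a music notation corpus
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),         # token -> dense vector
    tf.keras.layers.LSTM(256, return_sequences=True),  # hidden state carries context across time
    tf.keras.layers.Dense(vocab_size),                 # logits over the next token
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(sequences, next_tokens, ...) would then train next-token prediction,
# and sampling from the logits one step at a time generates new music.
```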
Week 3 - Deep Computer Vision
by Joel, Stephen
This week, we share how CNNs are applied in areas like medical imaging and self-driving cars. We dive deep into the convolution and pooling operations of a CNN, then share about transfer learning with pre-trained image models and a practical code walkthrough! We end off with object detection using YOLOv5, performance metrics, and how to prepare custom datasets in labelImg!
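As a taste of transfer learning, here's a minimal sketch (the backbone and number of classes are placeholders, not necessarily what the session used): freeze a pre-trained CNN and train only a small classification head on top.

```python
import tensorflow as tf

# Pre-trained backbone with ImageNet weights; classifier head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the backbone; train only the new head

num_classes = 5  # illustrative number of target classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),  # pool feature maps to a vector
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_ds, ...) then fine-tunes only the new Dense layer.
```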
Week 4 - Deep Generative Modeling
by Tejas, Oakar, Guan Hao
What's the difference between supervised and unsupervised learning? How do autoencoders work, and what's the variational autoencoder (VAE)? We then introduce the generative adversarial network (GAN) architecture and its interesting minimax loss function, and explain common problems of training GANs such as mode collapse and convergence! We round up the topic with GAN variations, e.g. conditional GANs and StyleGAN, and appreciate the anime artwork that can be created using GANs!
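The minimax loss is easier to see in code than in notation. Here's a common sketch of the two losses (using the standard non-saturating trick for the generator; this is the usual textbook formulation, not necessarily the exact code from the session):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # D maximises log D(x) + log(1 - D(G(z))): label real as 1, fake as 0.
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # Non-saturating trick: G maximises log D(G(z)) instead of
    # minimising log(1 - D(G(z))), which gives stronger early gradients.
    return bce(tf.ones_like(fake_logits), fake_logits)
```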
Week 5 - Deep Reinforcement Learning
by Wei Liang, Andre, Simon, Jet
This week, we explore reinforcement learning and its many notations! We learn about the Q-function, the target network, policy learning, deep Q-networks (DQN), exploration vs exploitation, learning vs planning, and prediction vs control. We introduce the concept of intrinsic curiosity, then present a code walkthrough of how to train your own RL agent using Stable Baselines 3 and OpenAI Gym on Google Colab!
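Here's a minimal sketch along the lines of that walkthrough; CartPole-v1 stands in for whichever environment the group actually used:

```python
import gym
from stable_baselines3 import DQN

env = gym.make("CartPole-v1")
model = DQN("MlpPolicy", env, verbose=1)  # Q-network with an MLP backbone
model.learn(total_timesteps=50_000)       # epsilon-greedy exploration built in

# Roll out the trained agent.
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)  # exploit the learned policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```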
Week 6 - Limitations and New Frontiers
by Vinod, Rama, Renee
Having completed half of the study group content, we first recap and summarise what we've learnt in the past 5 weeks, then introduce the universal approximation theorem of neural networks and a thorough, beginner-friendly, practical walkthrough of AutoML for feature engineering! We conclude with a discussion of algorithmic bias, real-world case studies, and some current methods that aim to alleviate such bias by improving model explainability!
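The universal approximation theorem says a single hidden layer with enough units can approximate any continuous function on a compact set. Here's a small illustrative experiment (my own sketch, not the session's code) fitting sin(x) with one hidden layer:

```python
import numpy as np
import tensorflow as tf

# Fit sin(x) on [-pi, pi] with a single-hidden-layer network.
x = np.linspace(-np.pi, np.pi, 1024).reshape(-1, 1).astype("float32")
y = np.sin(x)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),  # one hidden layer
    tf.keras.layers.Dense(1),                                        # linear output
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=500, verbose=0)  # the approximation improves with width and training
```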
Week 7 - Evidential Deep Learning
by Xue Ying, Shreya, Edmund
This week, we explore how probabilistic learning can capture uncertainty as a form of model confidence in predictions! From aleatoric to epistemic uncertainty, we introduce the Bayesian neural network and how maximum likelihood estimation (MLE) is used in deep evidential regression. Evidential methods help with robustness to adversarial samples and out-of-distribution testing! A lot of the math-heavy work here is thoroughly explained.
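As a stepping stone to the full evidential method, here's a minimal sketch of MLE-based uncertainty: a network that predicts both a mean and a log-variance, trained with a Gaussian negative log-likelihood. This captures aleatoric uncertainty only; deep evidential regression goes further by placing a prior over the Gaussian's parameters to also capture epistemic uncertainty.

```python
import tensorflow as tf

# Heteroscedastic regression: predict mu and log-variance per input.
inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
mu = tf.keras.layers.Dense(1)(h)       # predicted mean
log_var = tf.keras.layers.Dense(1)(h)  # predicted log-variance (keeps variance positive)
model = tf.keras.Model(inputs, [mu, log_var])

def gaussian_nll(y, mu, log_var):
    # Negative log-likelihood of y under N(mu, exp(log_var)), constants dropped.
    return 0.5 * tf.reduce_mean(log_var + (y - mu) ** 2 / tf.exp(log_var))
```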
Week 8 - Bias and Fairness
by Geraldine, Isabella, Chen Hong
What types and sources of bias are there? Selection bias, sampling bias, reporting bias, correlation fallacy, overgeneralization and automation bias form a non-exhaustive list of common biases! Some manifest in the form of imbalanced datasets, so methods like upsampling and weighting are shared, along with a practical application to cancer diagnosis on imbalanced data! A thorough walkthrough of the synthetic minority oversampling technique (SMOTE), among other methods, is also covered this week!
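SMOTE synthesises new minority-class samples by interpolating between a minority point and its nearest minority neighbours, rather than simply duplicating points. Here's a minimal sketch using imbalanced-learn on a synthetic dataset (the dataset itself is illustrative):

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Illustrative imbalanced dataset: roughly 5% positive class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)
print(Counter(y))  # minority class heavily outnumbered

# SMOTE interpolates between minority points and their k nearest
# minority neighbours to create synthetic samples.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y_res))  # classes balanced after resampling
```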
Week 9 - Fireside Chat with Koo Ping Shung
by Axel, Jet
This week, we are excited to invite Koo, co-founder of Data Science SG, a community of over 10,000 industry professionals, to join us and share with our students insights about the field of data science in Singapore! How do you foresee the data science landscape evolving in the years to come? What is the next up-and-coming industry for AI? Do I need a postgraduate degree to work in data science? Thank you to Koo for his valuable sharing!
Week 10 - Finale: Group Project Sharing!
by Oakar, Tejas, Stephen, Rui En, Jet, Geraldine, Chen Hong, Isabella, Edmund, Shreya, Ariel, Charles
Finale! After a few weeks of working on a practical project, 4 groups share their work! We have Natural Language Processing with Disaster Tweets, From GANs to CycleGAN, I'm Something of a Painter Myself, and Classifying IMDB Review Sentiments using Natural Language Processing and Deep Learning! Lots of amazing and cool work done!