Data Science Daily Newsletter - September 24, 2020

Here are three links you should check out today:

  1. [Cheatsheet] Machine Learning Super VIP Cheatsheet: This repository sums up in one place all the important notions covered in Stanford’s CS 229 Machine Learning course, including refreshers, topic cheatsheets, and a final compilation of all the concepts.
    https://github.com/afshinea/stanford-cs-229-machine-learning/blob/master/en/super-cheatsheet-machine-learning.pdf


  2. [Paper] Annotated Deep Learning Research Papers: If you find research papers intimidating, check out this repository by Aakash Nain, which collects annotated ML & DL papers that are much easier to follow.
    https://github.com/AakashKumarNain/annotated_research_papers


  3. [Tutorial] CNN Model Compression with Keras: Knowledge distillation is a model-compression technique in which a small (student) model is trained to match a large pre-trained (teacher) model. Learn to implement knowledge distillation with Keras; see the sketch after this list.
    https://keras.io/examples/vision/knowledge_distillation/
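
For a feel of what the tutorial covers, here is a minimal sketch of the core distillation step. The linked tutorial wraps equivalent logic in a custom `keras.Model` subclass; the toy architectures and the `temperature` and `alpha` values below are illustrative assumptions, not the tutorial's exact configuration. The student minimizes a weighted sum of the usual hard-label cross-entropy and the KL divergence between temperature-softened teacher and student outputs.

```python
import tensorflow as tf
from tensorflow import keras

# Toy MNIST-shaped models (illustrative; in practice the teacher is
# pre-trained before distillation, and both nets end in raw logits).
teacher = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),  # logits, no softmax
])
student = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),  # logits, no softmax
])

temperature = 3.0  # softens both distributions (assumed value)
alpha = 0.1        # weight on the hard-label loss (assumed value)

hard_loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
distill_loss_fn = keras.losses.KLDivergence()
optimizer = keras.optimizers.Adam()

@tf.function
def train_step(x, y):
    # Teacher outputs are fixed soft targets; no gradient flows to it.
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        # Usual cross-entropy against the true labels.
        hard_loss = hard_loss_fn(y, student_logits)
        # KL divergence between temperature-softened distributions.
        soft_loss = distill_loss_fn(
            tf.nn.softmax(teacher_logits / temperature),
            tf.nn.softmax(student_logits / temperature),
        )
        # T^2 rescaling (Hinton et al.) keeps the soft-loss gradients on
        # a scale comparable to the hard loss.
        loss = alpha * hard_loss + (1.0 - alpha) * soft_loss * temperature**2
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss

# Smoke test on random dummy data.
x = tf.random.normal((32, 28, 28, 1))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
print(float(train_step(x, y)))
```

Only the small student is updated, so after training it can be deployed on its own at a fraction of the teacher's cost.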


We’d love to hear from you - just reply in this thread.

  • Working on a course or a project? Let us know.
  • Came across an interesting article or tutorial? Share it here.
  • Stuck on something? Ask a question.

Want to get this daily newsletter in your email inbox? Just click the bell icon and select “Watching First Post” on the newsletter homepage.


Hey guys, I found this great video explaining CUDA and GPUs. In case anyone wants to quickly understand why deep learning relies on parallel computing/GPUs, you need to check this out.

Hey Akash, I'm getting an error with my LeNet-5 code. Please help!

[Here is my Jovian notebook](https://jovian.ai/chamschamanthi36/mini-pro-error)