Week 1:
- Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy (GitHub: karpathy)
- The Unreasonable Effectiveness of Recurrent Neural Networks (Andrej Karpathy blog, 2015)
- deepjazz (GitHub: jisungk)
- Learning Jazz Grammars (Gillick, Tang & Keller, 2010)
- A Grammatical Approach to Automatic Improvisation (Keller & Morrison, 2007)
- Surprising Harmonies (Pachet, 1999)
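The min-char-rnn gist listed above trains a vanilla RNN on characters. A minimal sketch of one forward step in numpy, in the spirit of that gist (sizes and weight names here are toy/illustrative, not the gist's exact code):

```python
import numpy as np

# One forward step of a vanilla character-level RNN (toy sizes).
np.random.seed(0)
vocab_size, hidden_size = 4, 8

Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input  -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

def step(char_index, h_prev):
    """One RNN step: one-hot char -> new hidden state -> next-char distribution."""
    x = np.zeros((vocab_size, 1))
    x[char_index] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)   # recurrent update
    y = Why @ h + by
    p = np.exp(y) / np.sum(np.exp(y))          # softmax over next characters
    return h, p

h = np.zeros((hidden_size, 1))
h, p = step(0, h)
print(p.sum())  # probabilities over the vocabulary sum to 1
```

Sampling text then just means drawing a character from `p` and feeding it back in as the next input.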
Week 2:
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (Bolukbasi, Chang, Zou, Saligrama & Kalai, 2016)
- GloVe: Global Vectors for Word Representation (Pennington, Socher & Manning, 2014)
- Woebot (mental-health chatbot)
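The Bolukbasi et al. paper above debiases word embeddings in part by "neutralizing": removing a word vector's component along a learned bias direction. A hedged sketch of that projection step with toy vectors (the real paper estimates the bias direction from word pairs; here it is just assumed):

```python
import numpy as np

def neutralize(v, bias_dir):
    """Remove v's component along bias_dir (the 'neutralize' step)."""
    bias_dir = bias_dir / np.linalg.norm(bias_dir)  # unit bias direction
    return v - np.dot(v, bias_dir) * bias_dir       # subtract the projection

g = np.array([1.0, 0.0, 0.0])    # toy bias direction (assumed, not learned)
w = np.array([0.5, 2.0, -1.0])   # toy word vector
w_debiased = neutralize(w, g)
print(np.dot(w_debiased, g))     # ~0: no remaining component along the bias axis
```

After this step, gender-neutral words are equidistant from word pairs like "he"/"she" along the bias direction.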
Week 4:
- Natural Language Processing Specialization (by DeepLearning.AI)
- Attention Is All You Need (Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser & Polosukhin, 2017)
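The core operation from "Attention Is All You Need" is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch (shapes are toy examples):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, as defined in the Transformer paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax over keys
    return weights @ V, weights                                # weighted sum of values

np.random.seed(1)
Q = np.random.randn(3, 4)   # 3 queries, dimension d_k = 4
K = np.random.randn(5, 4)   # 5 keys
V = np.random.randn(5, 2)   # 5 values, dimension 2
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)            # (3, 2): one output vector per query
```

The 1/sqrt(d_k) scaling keeps the dot products from pushing the softmax into regions with vanishing gradients; multi-head attention runs several of these in parallel over projected Q, K, V.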