Note on Deep Learning
A complete review of the development of deep neural networks: how to speed up DNN computation? [3S]
- The relationship between DL and AI; DNN history; DNN building blocks; common DNN models; DNN hardware
- 2015: Batch Normalization; filters and batches (a minimal forward-pass sketch is at the end of this note)

How to regard Geoffrey Hinton's remarks:
- The Vanishing Gradient Problem; backpropagation (BP) in neural networks
- Capsules are meant to detect differences such as translation and rotation rather than discard them; invariance and equivariance should be treated differently in some scenarios.
- A capsule contains on the order of a hundred neurons internally.
- Google's internal deep learning framework was at first almost entirely used to support training RBMs and similar models, before CNNs were adopted.
- With a CNN the pooled output stays the same, but a capsule's output changes with the input: this is "rate-coded" equivariance. Higher-level capsules have a larger domain.
- Input/Output/Hidden layers, weights, Dynamic Routing between Capsules on the MNIST dataset: routing decides where a capsule's output goes, rather than merely handing it in place to the neurons of the next layer (a routing sketch is at the end of this note).

Few-shot learning miscellany:
- CVPR 2018 frontiers: letting neural networks learn to compare for few-shot learning
- Algorithms that learn how to learn: a brief survey of current meta-learning research directions
- Compare to other approaches

- Jetson TX2: with TensorFlow, pyTorch
- Resource constraints issue:
  - MobileNet:
  - ShuffleNet
  - [X] SqueezeNet:
  - Compression:
  - Quantization: (a post-training quantization sketch is at the end of this note)
- Raspberry Pi: Building a Deep Learning Camera with a Raspberry Pi and YOLO
- Mac:
- Hardware acceleration:

Materials:
- Chinese translation of Ian Goodfellow's "Deep Learning"
- Learning AI if You Suck at Math — P4 — Tensors Illustrated (with Cats!)
- Learning AI if You Suck at Math — P5 — Deep Learning and Convolutional Neural Nets in Plain English!
- Learning AI If You Suck at Math — P6 — Math Notation Made Easy!
- Learning AI if You Suck at Math — P7 — The Magic of Natural Language Processing
- Want to become a machine learning engineer? A self-study guide worth bookmarking

Career:
- How to become an artificial intelligence engineer coming from another field?

- Research combining AI and neuroscience
- Differentiable programming: Swift for TensorFlow
- Simulated Annealing Learning Algorithm

- Learning without Backpropagation: Intuition and Ideas (Part 1)
  - Issues of backpropagation: vanishing and exploding gradients; the sequential forward pass and sequential backward pass make parallelizing large networks (in space and/or time) difficult.
  - Dropout reduces co-adaptation among intra-layer neurons (a dropout sketch is at the end of this note); random feedback may help reduce co-adaptation from inter-layer weights.
- Learning without Backpropagation: Intuition and Ideas (Part 2)
- Direct Feedback Alignment Provides Learning in Deep Neural Networks [PDF] (a DFA sketch is at the end of this note)
- Reinforcement Learning
- The inverted tetrahedron puzzle; how the visual system grasps an object's shape
- Data Augmentation
- Recurrent neurons, with hierarchical layering
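For the Batch Normalization item above, a minimal NumPy sketch of the training-time forward pass. The shapes, variable names and the toy batch are illustrative assumptions, not taken from any of the linked articles.

```python
# Minimal sketch of the batch-normalization forward pass (training mode),
# assuming a 2-D activation matrix x of shape (batch, features).
# gamma and beta are the learned scale and shift; eps avoids division by zero.
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
    return gamma * x_hat + beta               # learned scale and shift

x = np.random.randn(32, 4) * 3.0 + 1.0        # toy batch with shifted, scaled features
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))          # roughly 0 and 1 per feature
```

At inference time, running averages of the batch statistics would replace the per-batch mean and variance.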
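For the Dynamic Routing between Capsules item, a sketch of the routing-by-agreement loop: lower-capsule predictions are combined with softmaxed coupling coefficients, squashed, and the couplings are reinforced where prediction and output agree. The layer sizes are made up, and the affine transforms that would produce the predictions `u_hat` from lower-capsule outputs are omitted.

```python
# Sketch of routing-by-agreement between two capsule layers, following the idea in
# "Dynamic Routing Between Capsules"; shapes and names here are illustrative.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linear "squash": keeps vector orientation, maps length into [0, 1).
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules for each higher capsule,
    #        shape (num_lower, num_higher, dim_higher).
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))                      # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients (softmax over higher capsules)
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per higher capsule
        v = squash(s)                                          # higher-capsule output vectors
        b = b + (u_hat * v[None, :, :]).sum(-1)                # reinforce logits where prediction agrees with output
    return v

u_hat = np.random.randn(6, 10, 16) * 0.1    # e.g. 6 lower capsules, 10 digit capsules of dimension 16
v = dynamic_routing(u_hat)
print(np.linalg.norm(v, axis=-1))           # vector lengths act as presence probabilities
```

The length of each output vector plays the role of the probability that the entity represented by that capsule is present.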
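For the Compression / Quantization item, a sketch of symmetric per-tensor 8-bit post-training quantization of a weight matrix. This is a generic illustration, not the scheme used by any particular framework listed above.

```python
# Minimal sketch of symmetric per-tensor int8 post-training quantization.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)             # toy weight matrix
q, scale = quantize_int8(w)
w_rec = dequantize(q, scale)
print("max abs error:", np.abs(w - w_rec).max())             # small relative to the weight range
print("size ratio:", q.nbytes / w.nbytes)                    # ~0.25 (8-bit vs 32-bit storage)
```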
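For the note that Dropout reduces co-adaptation among neurons within a layer, a minimal inverted-dropout sketch; the keep probability and array shapes are arbitrary example values.

```python
# Inverted dropout: randomly zero units at training time, scale the survivors
# so the expected activation is unchanged, and do nothing at inference time.
import numpy as np

def dropout(x, keep_prob=0.8, training=True):
    if not training:
        return x
    mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob
    return x * mask

h = np.ones((2, 8))
print(dropout(h))    # roughly 20% of entries zeroed, survivors scaled by 1/0.8
```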
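For Direct Feedback Alignment, a toy sketch of the core idea from the cited paper: the output error is sent straight to every hidden layer through fixed random feedback matrices, so no backward pass through the forward weights is needed. The architecture, learning rate and toy regression target are my own assumptions for illustration.

```python
# Direct Feedback Alignment on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = 8, 32, 32, 1

# Forward weights (trained) and fixed random feedback matrices (never trained).
W1 = rng.normal(0, 0.3, (n_in, n_h1))
W2 = rng.normal(0, 0.3, (n_h1, n_h2))
W3 = rng.normal(0, 0.3, (n_h2, n_out))
B1 = rng.normal(0, 0.3, (n_out, n_h1))   # projects the output error to hidden layer 1
B2 = rng.normal(0, 0.3, (n_out, n_h2))   # projects the output error to hidden layer 2

def dtanh(h):
    return 1.0 - h ** 2                  # tanh' expressed via the activation value

X = rng.normal(size=(256, n_in))
y = np.sin(X[:, :1])                     # toy regression target

lr = 0.1
for step in range(2000):
    h1 = np.tanh(X @ W1)
    h2 = np.tanh(h1 @ W2)
    out = h2 @ W3
    e = out - y                          # output error, shape (batch, n_out)

    # DFA: the same output error is sent directly to every hidden layer through
    # the fixed random matrices, instead of backward through W3.T and W2.T.
    d2 = (e @ B2) * dtanh(h2)
    d1 = (e @ B1) * dtanh(h1)

    W3 -= lr * h2.T @ e / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)

print("final MSE:", float((e ** 2).mean()))
```

The updates for W1 and W2 depend only on the output error and a fixed random projection, which removes the sequential backward pass noted in the Part 1 summary above.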