A quality ebook every day

Deep Learning for Computer Vision with Python 2-Practitioner Bundle


Author: Adrian Rosebrock

Published: September 2017

Official site: PyImageSearch

Download: Baidu Netdisk (true PDF)

About the Book

Welcome to the Practitioner Bundle of Deep Learning for Computer Vision with Python! This volume is meant to be the next logical step in your deep learning for computer vision education after completing the Starter Bundle.

At this point, you should have a strong understanding of the fundamentals of parameterized learning, neural networks, and Convolutional Neural Networks (CNNs). You should also feel relatively comfortable using the Keras library and the Python programming language to train your own custom deep learning networks.

The purpose of the Practitioner Bundle is to build on your knowledge gained from the Starter Bundle and introduce more advanced algorithms, concepts, and tricks of the trade. These techniques are covered in three distinct parts of the book.
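The "parameterized learning" the Starter Bundle covers boils down to scoring an input with a learned weight matrix and bias. As a hedged refresher (the weights and data below are arbitrary, made up purely for illustration), the forward pass of a linear classifier is a few lines of NumPy:

```python
import numpy as np

def score(W, b, x):
    """Class scores f(x; W, b) = Wx + b for a linear classifier."""
    return W.dot(x) + b

# Toy setup: 3 classes, 4-dimensional inputs (all values arbitrary).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # weight matrix, one row per class
b = np.zeros(3)                   # bias vector
x = rng.standard_normal(4)        # a single input vector

scores = score(W, b, x)
pred = int(np.argmax(scores))     # predicted class = highest score
```

Training, in this view, is just the search for a `W` and `b` that score the correct class highest; the Practitioner Bundle builds on that foundation with deeper architectures and training tricks.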

Table of Contents:
1 Introduction……Page 13
2.1 What Is Data Augmentation?……Page 15
2.2 Visualizing Data Augmentation……Page 16
2.3.1 The Flowers-17 Dataset……Page 19
2.3.2 Aspect-aware Preprocessing……Page 20
2.3.3 Flowers-17: No Data Augmentation……Page 23
2.3.4 Flowers-17: With Data Augmentation……Page 27
2.4 Summary……Page 31
3 Networks as Feature Extractors……Page 33
3.1 Extracting Features with a Pre-trained CNN……Page 34
3.1.1 What Is HDF5?……Page 35
3.1.2 Writing Features to an HDF5 Dataset……Page 36
3.2 The Feature Extraction Process……Page 39
3.2.1 Extracting Features From Animals……Page 43
3.2.3 Extracting Features From Flowers-17……Page 44
3.3 Training a Classifier on Extracted Features……Page 45
3.3.2 Results on CALTECH-101……Page 47
3.4 Summary……Page 48
4.1 Ranked Accuracy……Page 51
4.1.1 Measuring rank-1 and rank-5 Accuracies……Page 53
4.1.2 Implementing Ranked Accuracy……Page 54
4.2 Summary……Page 56
5.1 Transfer Learning and Fine-tuning……Page 59
5.1.1 Indexes and Layers……Page 62
5.1.2 Network Surgery……Page 63
5.1.3 Fine-tuning, from Start to Finish……Page 65
5.2 Summary……Page 71
6.1 Ensemble Methods……Page 73
6.1.1 Jensen’s Inequality……Page 74
6.1.2 Constructing an Ensemble of CNNs……Page 75
6.1.3 Evaluating an Ensemble……Page 79
6.2 Summary……Page 82
7.1 Adaptive Learning Rate Methods……Page 85
7.1.2 Adadelta……Page 86
7.1.4 Adam……Page 87
7.2.1 Three Methods You Should Learn How to Drive: SGD, Adam, and RMSprop……Page 88
7.3 Summary……Page 89
8.1 A Recipe for Training……Page 91
8.2 Transfer Learning or Train from Scratch……Page 95
8.3 Summary……Page 96
9.1 Downloading Kaggle: Dogs vs. Cats……Page 97
9.2 Creating a Configuration File……Page 98
9.2.1 Your First Configuration File……Page 99
9.3 Building the Dataset……Page 100
9.4 Summary……Page 104
10.1 Additional Image Preprocessors……Page 105
10.1.1 Mean Preprocessing……Page 106
10.1.2 Patch Preprocessing……Page 107
10.1.3 Crop Preprocessing……Page 109
10.2 HDF5 Dataset Generators……Page 111
10.3 Implementing AlexNet……Page 114
10.4 Training AlexNet on Kaggle: Dogs vs. Cats……Page 119
10.5 Evaluating AlexNet……Page 122
10.6.1 Extracting Features Using ResNet……Page 125
10.6.2 Training a Logistic Regression Classifier……Page 129
10.7 Summary……Page 130
11 GoogLeNet……Page 133
11.1.1 Inception……Page 134
11.1.2 Miniception……Page 135
11.2 MiniGoogLeNet on CIFAR-10……Page 136
11.2.1 Implementing MiniGoogLeNet……Page 137
11.2.2 Training and Evaluating MiniGoogLeNet on CIFAR-10……Page 142
11.2.3 MiniGoogLeNet: Experiment #1……Page 145
11.2.4 MiniGoogLeNet: Experiment #2……Page 146
11.2.5 MiniGoogLeNet: Experiment #3……Page 147
11.3 The Tiny ImageNet Challenge……Page 148
11.3.2 The Tiny ImageNet Directory Structure……Page 149
11.3.3 Building the Tiny ImageNet Dataset……Page 150
11.4.1 Implementing DeeperGoogLeNet……Page 155
11.4.3 Creating the Training Script……Page 163
11.4.4 Creating the Evaluation Script……Page 165
11.4.5 DeeperGoogLeNet Experiments……Page 167
11.5 Summary……Page 170
12.1 ResNet and the Residual Module……Page 173
12.1.1 Going Deeper: Residual Modules and Bottlenecks……Page 174
12.1.2 Rethinking the Residual Module……Page 176
12.2 Implementing ResNet……Page 177
12.3 ResNet on CIFAR-10……Page 182
12.3.1 Training ResNet on CIFAR-10 With the ctrl + c Method……Page 183
12.3.2 ResNet on CIFAR-10: Experiment #2……Page 187
12.4 Training ResNet on CIFAR-10 with Learning Rate Decay……Page 190
12.5 ResNet on Tiny ImageNet……Page 194
12.5.1 Updating the ResNet Architecture……Page 195
12.5.2 Training ResNet on Tiny ImageNet With the ctrl + c Method……Page 196
12.5.3 Training ResNet on Tiny ImageNet with Learning Rate Decay……Page 200
12.6 Summary……Page 204
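Among the topics above, Chapter 4's ranked accuracy is simple enough to sketch here: a prediction counts as correct at rank-r if the true label appears anywhere among the r highest-scored classes, so rank-1 is ordinary top-1 accuracy and rank-5 is the metric used in the ImageNet challenge. A minimal NumPy sketch (the function name and toy scores are my own, not the book's implementation):

```python
import numpy as np

def rank_r_accuracy(probs, labels, r=5):
    """Fraction of samples whose true label lands in the top-r predictions.

    probs:  (N, C) array of class scores or probabilities
    labels: (N,) array of integer ground-truth labels
    """
    # argsort ascending, keep the last r columns -> top-r class indices
    top_r = np.argsort(probs, axis=1)[:, -r:]
    hits = [labels[i] in top_r[i] for i in range(len(labels))]
    return float(np.mean(hits))

# Toy check: 3 samples, 4 classes (scores chosen by hand).
probs = np.array([[0.10, 0.60, 0.20, 0.10],   # true label 1: rank-1 hit
                  [0.50, 0.10, 0.30, 0.10],   # true label 2: rank-2 hit
                  [0.70, 0.15, 0.10, 0.05]])  # true label 3: rank-4 only
labels = np.array([1, 2, 3])

rank1 = rank_r_accuracy(probs, labels, r=1)   # 1/3 of samples
rank2 = rank_r_accuracy(probs, labels, r=2)   # 2/3 of samples
```

The same function with `r=5` on a 1000-class ImageNet-style output reproduces the rank-5 numbers the book reports throughout its experiments.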
