Teaching a computer to classify handwritten digit images 0-9
Suppose we have handwritten digit images 0-9 that have already been cropped. Suppose we want to detect postal codes in order to build an automatic mail-sorting system that reads handwritten digits 0-9 and sorts mail by postal code. How can we use a camera to detect and classify the digits? This tutorial starts from the very beginning.
1. Install the OS: Linux Mint 17 or 17.3.
2. Install OpenCV, following:
http://peerajakwitoonchart.blogspot.com/2015/02/install-opencv-2410-on-linux-ubuntu.html
3. Install Caffe, following:
http://peerajakwitoonchart.blogspot.com/2015/02/caffe-installation-on-ubuntu-1404-log.html
4. From $CAFFE_ROOT, run
jupyter notebook examples/01-learning-lenet.ipynb
5. Work through the example below (01-learning-lenet).
That's it. Your computer can now classify handwritten digit images 0-9; a deployment sketch for the postal-code use case follows right after these steps.
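Once you have finished step 5 and trained LeNet, you can connect the result back to the mail-sorting idea. The sketch below is a minimal illustration, not part of the notebook: it assumes a trained snapshot such as examples/mnist/lenet_iter_10000.caffemodel exists, that the deploy net examples/mnist/lenet.prototxt exposes an output blob named 'prob', and that digit_crop.png is a hypothetical digit already cropped out of the camera frame.
import cv2
import numpy as np
import caffe

caffe.set_mode_cpu()
# assumed deploy prototxt and trained snapshot (adjust to your own training run)
net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# hypothetical pre-cropped digit from the camera; MNIST digits are white on black,
# so invert the crop if your digit is dark on a light background
img = cv2.imread('digit_crop.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (28, 28)).astype(np.float32) / 255.0  # same 1/255 scaling as training

net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[0, 0, :, :] = img
out = net.forward()
print 'predicted digit:', out['prob'][0].argmax()  # 'prob' is an assumed output blob name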
Python solving with LeNet
In this example, we'll explore learning with Caffe in Python, using the fully-exposed Solver interface.
In [1]:
import os
os.chdir('..')
In [2]:
import sys
sys.path.insert(0, './python')
import caffe
import numpy as np
from pylab import *
%matplotlib inline
import matplotlib.pyplot as plt
We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below).
Download and prepare the data
!data/mnist/get_mnist.sh
!examples/mnist/create_mnist.sh
We need two external files to help out:
- the net prototxt, defining the architecture and pointing to the train/test data
- the solver prototxt, defining the learning parameters
This network expects to read from pregenerated LMDBs, but reading directly from ndarrays is also possible using MemoryDataLayer (a sketch of that in-memory alternative follows the net definition below).
from caffe import layers as L
from caffe import params as P

def lenet(lmdb, batch_size):
    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()

with open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))
with open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100)))
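As noted above, the LMDB layer can be swapped for an in-memory one. The sketch below is only an illustration under assumptions, not part of the notebook: it assumes your pycaffe build exposes Net.set_input_arrays, and X and y are hypothetical float32 ndarrays holding your own digit images and labels.
def lenet_memory(batch_size):
    # a reduced LeNet-style net fed from ndarrays via a MemoryData layer
    n = caffe.NetSpec()
    # MemoryData yields both data and label tops, like the LMDB layer above
    n.data, n.label = L.MemoryData(batch_size=batch_size, channels=1, height=28, width=28, ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.ip1, n.label)
    return n.to_proto()

# Usage sketch: X float32 of shape (N, 1, 28, 28) scaled to [0, 1], y float32 of
# shape (N, 1, 1, 1), with N a multiple of batch_size; attach them with
# net.set_input_arrays(X, y) before calling net.forward(). Exact label-shape
# requirements vary with the Caffe version.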
The net has been written to disk in a more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly (a small sketch of doing so from Python follows the listing below). Let's take a look at the train net.
!cat examples/mnist/lenet_auto_train.prototxt
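As an aside (not in the original notebook), the same description can be parsed and edited programmatically with protobuf's text format; the edited file name below is hypothetical.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# parse the generated train prototxt into a NetParameter message
net_param = caffe_pb2.NetParameter()
with open('examples/mnist/lenet_auto_train.prototxt') as f:
    text_format.Merge(f.read(), net_param)

print net_param.layer[0].name                    # the data layer defined above
net_param.layer[0].data_param.batch_size = 32    # e.g. try a smaller train batch

# write the modified net to a separate, hypothetical file
with open('examples/mnist/lenet_auto_train_small_batch.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net_param))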
Now let's see the learning parameters, which are also written as a prototxt file. We're using SGD with momentum, weight decay, and a specific learning rate schedule (a small worked example of the schedule follows the next cell).
In [ ]:
!cat examples/mnist/lenet_auto_solver.prototxt
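As a short aside (not in the original notebook): with the "inv" learning-rate policy used here, the effective learning rate decays as lr = base_lr * (1 + gamma * iter)^(-power). The values below are assumed from Caffe's usual MNIST solver settings; check them against the prototxt printed above.
# assumed solver values: base_lr 0.01, gamma 0.0001, power 0.75
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(it):
    # effective learning rate at iteration `it` under the "inv" policy
    return base_lr * (1 + gamma * it) ** (-power)

for it in [0, 100, 1000, 5000, 10000]:
    print 'iter', it, 'lr', inv_lr(it)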
Let's pick a device and load the solver. We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available.
In [3]:
caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt')
To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).
In [4]:
# each output is (batch size, feature dim, spatial dim)
[(k, v.data.shape) for k, v in solver.net.blobs.items()]
Out[4]:
In [5]:
# just print the weight sizes (not biases)
[(k, v[0].data.shape) for k, v in solver.net.params.items()]
Out[5]:
Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data.
In [6]:
solver.net.forward() # train net
solver.test_nets[0].forward() # test net (there can be more than one)
Out[6]:
In [9]:
# we use a little trick to tile the first eight images
plt.imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.net.blobs['label'].data[:8]
In [10]:
plt.imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.test_nets[0].blobs['label'].data[:8]
Both train and test nets seem to be loading data, and to have correct labels.
Let's take one step of (minibatch) SGD and see what happens.
Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \times 5$ grid of $5 \times 5$ filters.
In [11]:
solver.step(2)
#solver.solve()
In [12]:
solver.iter
restore_name = 'examples/LSP_CnnFeatssvm26/save/lspcnnfeat1713nobp_partof_nobpthenbp_lmdb__iter_'+str(solver.iter)+'.solverstate'
print restore_name
print str(1000)
In [13]:
plt.imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)
.transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')
Out[13]:
Something is happening. Let's run the net for a while, keeping track of a few things as it goes.
Note that this process will be the same as if training through the caffe binary. In particular:
- logging will continue to happen as normal
- snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)
- testing will happen at the interval specified (here, every 500 iterations)
Since we have control of the loop in Python, we are also free to do things like:
- write a custom stopping criterion (see the sketch after this list)
- change the solving process by updating the net in the loop
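For illustration only (not part of the notebook), a custom stopping criterion might look like the following; the loss threshold and window size are arbitrary choices.
# hypothetical early-stopping loop: step the solver until the train loss,
# averaged over a short moving window, drops below a chosen threshold
loss_threshold = 0.05        # arbitrary value for illustration
max_extra_iters = 1000
recent = []
for _ in range(max_extra_iters):
    solver.step(1)
    recent.append(float(solver.net.blobs['loss'].data))
    recent = recent[-20:]    # keep the last 20 losses
    if len(recent) == 20 and np.mean(recent) < loss_threshold:
        print 'stopping early at iteration', solver.iter
        break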
In [14]:
%%time
niter = 200
test_interval = 25
# losses will also be stored in the log
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))
# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe
    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data
    # store the output on the first test batch
    # (start the forward pass at conv1 to avoid loading new data)
    solver.test_nets[0].forward(start='conv1')
    output[it] = solver.test_nets[0].blobs['ip2'].data[:8]
    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    #  how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4
Let's plot the train loss and test accuracy.
In [18]:
_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(np.arange(niter), train_loss)
ax2.plot(test_interval * np.arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
Out[18]:
The loss seems to have dropped quickly and converged (except for stochasticity), while the accuracy rose correspondingly. Hooray!
Since we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.
In [20]:
for i in range(8):
    plt.figure(figsize=(2, 2))
    plt.imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    plt.figure(figsize=(10, 2))
    plt.imshow(output[:50, i].T, interpolation='nearest', cmap='gray')
    plt.xlabel('iteration')
    plt.ylabel('label')
We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted "9" that's (understandably) most confused with "4".
Note that these are the "raw" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).
In [22]:
for i in range(8):
    plt.figure(figsize=(2, 2))
    plt.imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    plt.figure(figsize=(10, 2))
    plt.imshow(np.exp(output[:50, i].T) / np.exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')
    plt.xlabel('iteration')
    plt.ylabel('label')
In [39]:
del solver
The cells below are some additional, unrelated tests.
In [40]:
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
In [41]:
solver.solve()
In [45]:
accuracy_test = 0
total_test_num = 0
test_iters = int(10000 / solver.test_nets[0].blobs['data'].num)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy_test += solver.test_nets[0].blobs['accuracy'].data
    total_test_num += solver.test_nets[0].blobs['data'].num
accuracy_test /= test_iters
print("Testing Accuracy: {:.7f}".format(accuracy_test))
print 'total_test_num:',total_test_num
print("Testing Error: {:.7f}".format(1-accuracy_test))
In [ ]:
Accuracy: 99.05%
Comment: Does it have to be Linux only?
Reply: Any OS will do. I chose Linux because it is the easiest to work with in this field.
Comment: Thank you. Is there a Thai-language version? My English is not very good.
Reply: Yes, see the comments.