Tutorial: Classifying Handwritten Digit Images 0-9

Suppose we already have cropped images of handwritten digits 0-9, and suppose we want to detect postal codes in order to build an automatic mail-sorting system that reads handwritten digits 0-9 and routes each item by its postal code. How can we capture the digits with a camera and classify them? This tutorial starts from the very beginning.

1.  Install the Linux Mint 17 or 17.3 operating system.
2.  Install OpenCV, following the steps here:
http://peerajakwitoonchart.blogspot.com/2015/02/install-opencv-2410-on-linux-ubuntu.html
3.  Install Caffe, following the steps here:
http://peerajakwitoonchart.blogspot.com/2015/02/caffe-installation-on-ubuntu-1404-log.html
4. From $CAFFE_ROOT, run
jupyter notebook examples/01-learning-lenet.ipynb

5.  Follow the example below.
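Before opening the notebook, it can help to verify that both installations are importable from Python. A minimal check, run from $CAFFE_ROOT (it assumes pycaffe lives in $CAFFE_ROOT/python, as the notebook below also does):

# quick sanity check: run from $CAFFE_ROOT
import sys
sys.path.insert(0, './python')   # pycaffe lives in $CAFFE_ROOT/python

import caffe   # should import cleanly after the Caffe install above
import cv2     # OpenCV's Python bindings from step 2

print(cv2.__version__)   # e.g. 2.4.10 if you followed the OpenCV guide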

01-learning-lenet

Python solving with LeNet

In this example, we'll explore learning with Caffe in Python, using the fully-exposed Solver interface.
In [1]:
import os
os.chdir('..')
In [2]:
import sys
sys.path.insert(0, './python')
import caffe
import numpy as np
from pylab import *
%matplotlib inline
import matplotlib.pyplot as plt
We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below).

Download and prepare the data

!data/mnist/get_mnist.sh
!examples/mnist/create_mnist.sh
We need two external files to help out:
  • the net prototxt, defining the architecture and pointing to the train/test data
  • the solver prototxt, defining the learning parameters
We start with the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.
This network expects to read from pregenerated LMDBs, but reading directly from ndarrays is also possible using MemoryDataLayer (see the sketch after the code below).
from caffe import layers as L
from caffe import params as P

def lenet(lmdb, batch_size):
    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()

with open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))
with open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100)))
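As noted above, the same architecture can also read directly from in-memory ndarrays through a MemoryData layer instead of LMDB. A minimal sketch, reusing the caffe, L, and P imports from the cell above (the array names and shapes in the comments are illustrative):

def lenet_memory(batch_size):
    # same LeNet body as lenet() above, but the input comes from memory
    n = caffe.NetSpec()
    n.data, n.label = L.MemoryData(batch_size=batch_size, channels=1,
                                   height=28, width=28, ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    # ... remaining layers identical to lenet() above ...
    return n.to_proto()

# After building a Net from this prototxt, arrays are fed in with
# net.set_input_arrays(images, labels), where images is a float32 array of
# shape (N, 1, 28, 28) and labels is float32 (often reshaped to (N, 1, 1, 1)
# in pycaffe examples).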
The net has been written to disk in a more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. Let's take a look at the train net.
!cat examples/mnist/lenet_auto_train.prototxt
Now let's see the learning parameters, which are also written as a prototxt file. We're using SGD with momentum, weight decay, and a specific learning rate schedule.
In [ ]:
!cat examples/mnist/lenet_auto_solver.prototxt
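If you prefer to inspect these values from Python rather than with cat, the solver prototxt can be parsed with Caffe's protobuf definitions. A small sketch; the values you see should match the file above (the notes later in this notebook mention testing every 500 iterations and snapshotting every 5000):

# parse the solver prototxt into a SolverParameter message
from caffe.proto import caffe_pb2
from google.protobuf import text_format

s = caffe_pb2.SolverParameter()
with open('examples/mnist/lenet_auto_solver.prototxt') as f:
    text_format.Merge(f.read(), s)

# key learning parameters: learning rate, momentum, weight decay, schedule
print(s.base_lr, s.momentum, s.weight_decay, s.lr_policy)
# how often to test and snapshot, and how long to train
print(s.test_interval, s.snapshot, s.max_iter)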
Let's pick a device and load the solver. We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available.
In [3]:
caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt')
To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).
In [4]:
# each output is (batch size, feature dim, spatial dim)
[(k, v.data.shape) for k, v in solver.net.blobs.items()]
Out[4]:
[('data', (64, 1, 28, 28)),
 ('label', (64,)),
 ('conv1', (64, 20, 24, 24)),
 ('pool1', (64, 20, 12, 12)),
 ('conv2', (64, 50, 8, 8)),
 ('pool2', (64, 50, 4, 4)),
 ('ip1', (64, 500)),
 ('ip2', (64, 10)),
 ('loss', ())]
In [5]:
# just print the weight sizes (not biases)
[(k, v[0].data.shape) for k, v in solver.net.params.items()]
Out[5]:
[('conv1', (20, 1, 5, 5)),
 ('conv2', (50, 20, 5, 5)),
 ('ip1', (500, 800)),
 ('ip2', (10, 500))]
Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data.
In [6]:
solver.net.forward()  # train net
solver.test_nets[0].forward()  # test net (there can be more than one)
Out[6]:
{'loss': array(2.344059944152832, dtype=float32)}
In [9]:
# we use a little trick to tile the first eight images
plt.imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.net.blobs['label'].data[:8]
[ 5.  0.  4.  1.  9.  2.  1.  3.]
In [10]:
plt.imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.test_nets[0].blobs['label'].data[:8]
[ 7.  2.  1.  0.  4.  1.  4.  9.]
Both train and test nets seem to be loading data, and to have correct labels.
Let's take one step of (minibatch) SGD and see what happens.
Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \times 5$ grid of $5 \times 5$ filters.
In [11]:
solver.step(2)
#solver.solve()
In [12]:
solver.iter
# build a snapshot-style filename from the current iteration count
# (the directory in this path is unrelated to the MNIST example)
restore_name = 'examples/LSP_CnnFeatssvm26/save/lspcnnfeat1713nobp_partof_nobpthenbp_lmdb__iter_'+str(solver.iter)+'.solverstate'
print restore_name
print str(1000)
examples/LSP_CnnFeatssvm26/save/lspcnnfeat1713nobp_partof_nobpthenbp_lmdb__iter_2.solverstate
1000
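A path like that follows Caffe's .solverstate snapshot naming, and such a file can be used to resume training later. A minimal sketch (the filename below is illustrative; a matching snapshot must already exist on disk before restoring):

# write a snapshot (weights + solver state) at the current iteration
solver.snapshot()

# later, resume training from a saved state (illustrative filename)
# solver.restore('examples/mnist/lenet_iter_5000.solverstate')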
In [13]:
plt.imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)
       .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')
Out[13]:
<matplotlib.image.AxesImage at 0x7fb1d018a910>
Something is happening. Let's run the net for a while, keeping track of a few things as it goes. Note that this process will be the same as if training through the caffe binary. In particular:
  • logging will continue to happen as normal
  • snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)
  • testing will happen at the interval specified (here, every 500 iterations)
Since we have control of the loop in Python, we're free to compute additional things as we go, as we show below. We can do many other things as well, for example:
  • write a custom stopping criterion (a sketch appears after the training run below)
  • change the solving process by updating the net in the loop
In [14]:
%%time
niter = 200
test_interval = 25
# losses will also be stored in the log
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))

# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe
    
    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data
    
    # store the output on the first test batch
    # (start the forward pass at conv1 to avoid loading new data)
    solver.test_nets[0].forward(start='conv1')
    output[it] = solver.test_nets[0].blobs['ip2'].data[:8]
    
    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    #  how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4
Iteration 0 testing...
Iteration 25 testing...
Iteration 50 testing...
Iteration 75 testing...
Iteration 100 testing...
Iteration 125 testing...
Iteration 150 testing...
Iteration 175 testing...
CPU times: user 15.4 s, sys: 3.7 s, total: 19.1 s
Wall time: 14.3 s
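As an aside, here is what the custom stopping criterion mentioned above might look like in place of the fixed 200-iteration count. A minimal sketch; the loss threshold and smoothing window are illustrative values:

# keep stepping until the smoothed training loss drops below a threshold
loss_history = []
for it in range(1000):                      # hard upper bound on iterations
    solver.step(1)
    loss_history.append(float(solver.net.blobs['loss'].data))
    # average the last 20 minibatch losses to smooth out SGD noise
    if len(loss_history) >= 20 and np.mean(loss_history[-20:]) < 0.05:
        print('stopping early at iteration %d' % it)
        break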
Let's plot the train loss and test accuracy.
In [18]:
_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(np.arange(niter), train_loss)
ax2.plot(test_interval * np.arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
Out[18]:
<matplotlib.text.Text at 0x7fb1d006b290>
The loss seems to have dropped quickly and converged (except for stochasticity), while the accuracy rose correspondingly. Hooray!
Since we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.
In [20]:
for i in range(8):
    plt.figure(figsize=(2, 2))
    plt.imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    plt.figure(figsize=(10, 2))
    plt.imshow(output[:50, i].T, interpolation='nearest', cmap='gray')
    plt.xlabel('iteration')
    plt.ylabel('label')
We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted "9" that's (understandably) most confused with "4".
Note that these are the "raw" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).
In [22]:
for i in range(8):
    plt.figure(figsize=(2, 2))
    plt.imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    plt.figure(figsize=(10, 2))
    plt.imshow(np.exp(output[:50, i].T) / np.exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')
    plt.xlabel('iteration')
    plt.ylabel('label')
In [39]:
del solver
The cells below are some separate testing, unrelated to the walkthrough above: we load the standard solver, train to completion, and measure the final test accuracy.
In [40]:
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
In [41]:
solver.solve()
In [45]:
accuracy_test = 0
total_test_num = 0

test_iters = int(10000 / solver.test_nets[0].blobs['data'].num)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy_test += solver.test_nets[0].blobs['accuracy'].data
    total_test_num += solver.test_nets[0].blobs['data'].num

accuracy_test /= test_iters

print("Testing Accuracy: {:.7f}".format(accuracy_test))
print 'total_test_num:',total_test_num
print("Testing Error: {:.7f}".format(1-accuracy_test))
Testing Accuracy: 0.9905000
total_test_num: 10000
Testing Error: 0.0095000
Accuracy: 99.05%

That's it. You can now make your computer recognize handwritten digit images 0-9.
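Coming back to the mail-sorting idea at the top: once training has written a .caffemodel snapshot, a cropped digit image can be classified with a single forward pass. A rough sketch; the deploy prototxt, the snapshot filename, the input image name, and the 'prob' output blob all assume the stock examples/mnist files, so adjust them to whatever your solver actually produced:

# sketch: classify one cropped handwritten digit with the trained net
import cv2
import numpy as np
import caffe

# deploy definition + trained weights (filenames assume the stock MNIST example,
# whose solver snapshots to examples/mnist/lenet_iter_*.caffemodel)
net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# load the digit as grayscale, resize to 28x28, scale to [0, 1] as in training
# (MNIST digits are white strokes on a black background, so a scanned or
#  camera image may need inverting and thresholding first)
img = cv2.imread('digit.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (28, 28)).astype(np.float32) / 255.0

net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[0, 0] = img
out = net.forward()
print('predicted digit: %d' % out['prob'][0].argmax())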

Comments

  1. Does it have to be Linux only?

  2. Other OSes work too. I chose Linux because it is easy to work with in this field.

  3. Thank you. Is there a Thai version? My English isn't very good.

  4. Yes, as per the comment.
