Trying out visualization with TensorBoard

  • As before, the training task is classification of the lfw_people dataset using Keras.
  • The optimizer is Adam.
In [ ]:
# Load the TensorBoard notebook extension
%load_ext tensorboard
In [ ]:
# Clear any logs from previous runs
!rm -rf ./logs/
In [3]:
# From sklearn:
# the fetch_lfw_people dataset loader
# the train/test split utility
# the standardization/normalization scalers
from sklearn.datasets        import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.preprocessing   import StandardScaler, MinMaxScaler

# From Keras:
# model-building modules
from keras       import models, optimizers, layers, callbacks
from keras.utils import np_utils

import datetime
Using TensorFlow backend.

Preparing the data

In [ ]:
lfw = fetch_lfw_people(data_home='./scikit_learn_data/', min_faces_per_person=100, resize=0.5)

Splitting into training and validation data

In [5]:
X = lfw.data
y = lfw.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
v,h = lfw.images.shape[1:3] # keep the vertical and horizontal image size
n_train = X_train.shape[0]  # number of training samples
n_test  = X_test.shape[0]   # number of test samples
print('Image Size        [{0}, {1}]'.format(v,h))
print('num of train data [{0}]'.format(n_train))
print('num of test  data [{0}]'.format(n_test))
Image Size        [62, 47]
num of train data [855]
num of test  data [285]
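
The one-hot encoding and the output layer below both assume five classes, so it can be worth confirming how many people survive the min_faces_per_person=100 filter. A minimal check on the lfw object loaded above:

In [ ]:
# Names of the people retained by min_faces_per_person=100; the length of this
# list is the number of classes (expected to be 5 here).
print(lfw.target_names)
print(len(lfw.target_names))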

Standardization and normalization

In [ ]:
sc = StandardScaler()
sc.fit(X_train)
X_train_sc = sc.transform(X_train)
X_test_sc  = sc.transform(X_test)

ms = MinMaxScaler(feature_range=(0,1))
ms.fit(X_train_sc)
X_train_sc = ms.transform(X_train_sc)
X_test_sc  = ms.transform(X_test_sc)

X_train_sc = X_train_sc.reshape([n_train, v, h, 1])
X_test_sc  = X_test_sc.reshape([n_test, v, h, 1])

One-hot encoding the target labels

In [ ]:
y_train_cat = np_utils.to_categorical(y_train,5)
y_test_cat  = np_utils.to_categorical(y_test,5)
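
to_categorical maps each integer label to a length-5 indicator vector. A quick illustration with made-up labels:

In [ ]:
# Hypothetical labels 0, 2, 4 just to show the encoding
print(np_utils.to_categorical([0, 2, 4], 5))
# [[1. 0. 0. 0. 0.]
#  [0. 0. 1. 0. 0.]
#  [0. 0. 0. 0. 1.]]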

Defining the model

In [ ]:
model = models.Sequential()
# Input: single-channel images of size v x h -> tensors of shape (v, h, 1)
# Each Conv2D layer below applies a 3x3 convolution
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(v, h, 1)))
model.add(layers.Conv2D(32, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.25))

model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.25))

model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(5, activation='softmax'))
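
Besides the tabular summary in the next cell, Keras can also render the architecture as a graph image (a sketch, assuming pydot and graphviz are installed):

In [ ]:
# Draw the layer graph with output shapes; writes model.png next to the notebook
from keras.utils import plot_model
plot_model(model, to_file='model.png', show_shapes=True)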
In [9]:
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 60, 45, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 58, 43, 32)        9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 29, 21, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 29, 21, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 27, 19, 64)        18496     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 25, 17, 64)        36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 12, 8, 64)         0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 12, 8, 64)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 6144)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               1573120   
_________________________________________________________________
dropout_3 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 5)                 1285      
=================================================================
Total params: 1,639,397
Trainable params: 1,639,397
Non-trainable params: 0
_________________________________________________________________
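
As a check on the summary, the counts follow directly from the layer shapes: a Conv2D layer has kernel_height * kernel_width * input_channels * filters weights plus one bias per filter, and a Dense layer has inputs * units weights plus one bias per unit.

In [ ]:
# Reproduce two rows of model.summary() by hand
conv2d_1 = 3 * 3 * 1 * 32 + 32    # 320
dense_1  = 6144 * 256 + 256       # 1,573,120
print(conv2d_1, dense_1)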

Compiling the model

  • Optimizer: Adam
  • Loss function: categorical_crossentropy
  • Metric: accuracy
In [ ]:
lr     = 0.001
beta_1 = 0.9
beta_2 = 0.999
decay  = 0.0
# Build the Adam optimizer with the settings above; passing the instance to
# compile() (rather than the string 'Adam') is what makes these settings take effect.
adam = optimizers.Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=None, decay=decay, amsgrad=False)
model.compile(optimizer = adam,
              loss      = 'categorical_crossentropy',
              metrics   = ['acc'])

Training the model

In [ ]:
n_epoch = 20

# Train ten times in a row; each pass writes its metrics and histograms to its
# own timestamped log directory, and the same model keeps training across runs.
for num in range(10):
  log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
  tensorboard_cbr = callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
  hist = model.fit(X_train_sc,
                   y_train_cat,
                   epochs=n_epoch,
                   validation_data=(X_test_sc, y_test_cat),
                   verbose=0,
                   callbacks=[tensorboard_cbr])
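
Each pass through the loop writes to a new timestamped directory, so TensorBoard lists ten separate runs. If the raw timestamps are hard to tell apart, one option (an assumption, not something TensorBoard requires) is to embed the run index in the directory name, as a drop-in for the log_dir line above:

In [ ]:
# Hypothetical alternative naming: runs then appear in TensorBoard as run-00, run-01, ...
log_dir = "logs/fit/run-{0:02d}".format(num)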
In [12]:
%tensorboard --logdir logs/fit
Reusing TensorBoard on port 6006 (pid 259), started 0:42:33 ago. (Use '!kill 259' to kill it.)
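
As the message above indicates, the %tensorboard magic reuses an instance that is already running. From a terminal outside the notebook, the same dashboard can be started with the standard CLI, assuming tensorboard is on the PATH:

tensorboard --logdir logs/fit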