Deep Learning: A Simple Convolutional Neural Network on Fashion MNIST
小鸡 · Published on April 30

Fashion MNIST Classification

The Fashion MNIST dataset is now often called the "Hello World" of deep learning, having taken over that role from handwritten digit recognition (the original MNIST).

The reason is presumably that, as deep learning has advanced, handwritten digit recognition has become too easy.

Official example

Official code

Let's try classifying it with a convolutional neural network instead.

Official convolutional neural network reference

Getting the Fashion MNIST data

import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)

fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Scale pixel values from [0, 255] down to [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0
print(train_images.shape)
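
Before training, it can help to sanity-check the data by looking at one image together with its label. The snippet below is not part of the original post, just a minimal sketch using the variables defined above:

# Quick look at the first training image and its class name
plt.figure()
plt.imshow(train_images[0], cmap='binary')        # images are 28x28 grayscale
plt.colorbar()
plt.grid(False)
plt.title(class_names[train_labels[0]])           # map label index to class name
plt.show()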

Defining the convolutional neural network model

# Three Conv2D/MaxPooling2D stages followed by a small dense classifier
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

# Alternative: a similar model written with the list-style Sequential constructor
# model = tf.keras.models.Sequential([
#     tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
#     tf.keras.layers.MaxPooling2D(),
#     tf.keras.layers.Conv2D(64, kernel_size=(5, 5), activation='relu'),
#     tf.keras.layers.MaxPooling2D(),
#     tf.keras.layers.Flatten(),
#     tf.keras.layers.Dense(64, activation='relu'),
#     tf.keras.layers.Dense(10, activation='softmax')
# ])

model.summary()
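
With 'valid' padding, each 3x3 convolution trims one pixel from every edge and each 2x2 max-pool halves the spatial size, so the feature maps shrink 28 → 26 → 13 → 11 → 5 → 3, and Flatten sees 3*3*64 = 576 values. If you want to verify that yourself rather than read it off model.summary(), a small sketch (not in the original post; depending on the TensorFlow version the attribute may be layer.output_shape instead of layer.output.shape):

# Print each layer's output shape; this mirrors what model.summary() reports
for layer in model.layers:
    print(layer.name, layer.output.shape)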

Training the model

model.compile(optimizer='adam',
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])

# Conv2D expects a channel dimension, so reshape (N, 28, 28) to (N, 28, 28, 1)
history = model.fit(train_images.reshape(-1, 28, 28, 1), train_labels,
                    epochs=10,
                    validation_data=(test_images.reshape(-1, 28, 28, 1), test_labels),
                    batch_size=1000)
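
Since fit() returns a History object, you can also plot the per-epoch accuracy to see how training and validation compare over time. A minimal sketch, assuming the history variable above (this plot is not part of the original post):

# Plot training vs. validation accuracy recorded by fit()
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()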

Training results

You can see that with the same 10 training epochs, the convolutional network ends up at

val_loss: 0.3499 - val_accuracy: 0.8765

which is a bit more accurate than the official tutorial's result with a plain fully connected network:
loss: 0.3726 - accuracy: 0.8635

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 1.1311 - accuracy: 0.6256 - val_loss: 0.6830 - val_accuracy: 0.7482
Epoch 2/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.5803 - accuracy: 0.7813 - val_loss: 0.5385 - val_accuracy: 0.8018
Epoch 3/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.4933 - accuracy: 0.8199 - val_loss: 0.4858 - val_accuracy: 0.8226
Epoch 4/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.4413 - accuracy: 0.8405 - val_loss: 0.4407 - val_accuracy: 0.8419
Epoch 5/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.4057 - accuracy: 0.8553 - val_loss: 0.4226 - val_accuracy: 0.8514
Epoch 6/10
60000/60000 [==============================] - 105s 2ms/sample - loss: 0.3795 - accuracy: 0.8643 - val_loss: 0.3933 - val_accuracy: 0.8591
Epoch 7/10
60000/60000 [==============================] - 108s 2ms/sample - loss: 0.3583 - accuracy: 0.8727 - val_loss: 0.3835 - val_accuracy: 0.8633
Epoch 8/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.3437 - accuracy: 0.8780 - val_loss: 0.3694 - val_accuracy: 0.8641
Epoch 9/10
60000/60000 [==============================] - 106s 2ms/sample - loss: 0.3322 - accuracy: 0.8810 - val_loss: 0.3570 - val_accuracy: 0.8701
Epoch 10/10
60000/60000 [==============================] - 116s 2ms/sample - loss: 0.3258 - accuracy: 0.8831 - val_loss: 0.3499 - val_accuracy: 0.8765
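
If you only want the final test-set numbers rather than the per-epoch log, model.evaluate reports the same loss and accuracy on the 10,000 test images directly. A minimal sketch, assuming the trained model above:

# Evaluate the trained model on the held-out test set
test_loss, test_acc = model.evaluate(test_images.reshape(-1, 28, 28, 1),
                                     test_labels, verbose=2)
print('Test accuracy:', test_acc)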

A quick prediction

And it is indeed a coat.

# Predict test image 10 and show it alongside its true label
print(model.predict(test_images[10].reshape(-1, 28, 28, 1)))
print(test_labels[10])

plt.figure()
plt.imshow(test_images[10])
plt.colorbar()
plt.grid(False)
plt.show()

class_names[4]  # label 4 is 'Coat'
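
Rather than looking up class_names[4] by hand, you can take the argmax of the softmax output to recover the predicted class automatically. A small sketch (not in the original post), assuming the model and data above:

# Pick the class with the highest softmax probability for test image 10
probs = model.predict(test_images[10].reshape(-1, 28, 28, 1))[0]
predicted = int(np.argmax(probs))
print('predicted:', class_names[predicted], '| true:', class_names[test_labels[10]])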
