Introduction
Team Members:
Hiskia Anggi Puji Pratama - A11.2020.12730
Moh. Arda Fadly Robby - A11.2020.13087
Muhammad Ariq Pratama - A11.2020.12944
Satrio Arda Alfiansyah - A11.2020.12739
Data Preparation
import zipfile, os

# Extract the dataset archive into a temporary working directory
local_zip = 'kitset.zip'
with zipfile.ZipFile(local_zip, 'r') as zip_ref:
    zip_ref.extractall('/tmp/kitchenset')
import splitfolders
import shutil

old_tmp_folder = "/tmp/kitchenset/"
new_tmp_folder = "/tmp/images"

# Split the extracted images into train (80%) and val (20%) subfolders
splitfolders.ratio(old_tmp_folder, output=new_tmp_folder, seed=42, ratio=(.8, .2))

# Remove the unsplit copy now that the train/val split exists
if os.path.isdir(old_tmp_folder):
    shutil.rmtree(old_tmp_folder)
Copying files: 100 files [00:00, 9688.18 files/s]
base_dir = '/tmp/images'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'val')
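As a quick sanity check on the split, a short sketch like the following (assuming the directory layout produced by splitfolders above) counts the images in each subset:

# Sanity-check sketch: count images per split directory
for split_dir in (train_dir, validation_dir):
    total = sum(len(os.listdir(os.path.join(split_dir, c)))
                for c in os.listdir(split_dir)
                if os.path.isdir(os.path.join(split_dir, c)))
    print(f"{split_dir}: {total} images across "
          f"{len(os.listdir(split_dir))} classes")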
Data Preprocessing
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment the training images on the fly; the data was already split into
# train/val by splitfolders, so no validation_split is needed here
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   zoom_range=0.2,
                                   shear_range=0.2,
                                   fill_mode='nearest')

# Validation images are only rescaled, never augmented
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=16,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=16,
    class_mode='categorical')
Found 80 images belonging to 14 classes.
Found 20 images belonging to 14 classes.
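To see what the augmentation pipeline actually produces, a small sketch (reusing matplotlib, which the Summary section imports as well) can display one augmented batch:

import matplotlib.pyplot as plt

# Pull one augmented batch and show the first four images; labels are
# one-hot vectors because class_mode='categorical'
images, labels = next(train_generator)
class_names = list(train_generator.class_indices.keys())
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, img, label in zip(axes, images, labels):
    ax.imshow(img)                        # pixels already rescaled to [0, 1]
    ax.set_title(class_names[label.argmax()])
    ax.axis('off')
plt.show()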
train_generator.class_indices
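class_indices maps each of the 14 class names to the integer label the generator assigns it. For turning predictions back into names at inference time, the inverse mapping is handy:

# Invert the name -> index mapping so a predicted index can be
# resolved back to a class name
index_to_class = {v: k for k, v in train_generator.class_indices.items()}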
Modelling
import tensorflow as tf
from tensorflow.keras.layers import Input
from tensorflow.keras.applications import ResNet152V2

# Transfer learning: ResNet152V2 pretrained on ImageNet acts as a frozen
# feature extractor, followed by a small classification head
model = tf.keras.models.Sequential([
    ResNet152V2(weights="imagenet", include_top=False, input_tensor=Input(shape=(150, 150, 3))),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(14, activation='softmax')   # one unit per class
])

# Freeze the pretrained backbone so only the new head is trained
model.layers[0].trainable = False
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resnet152v2 (Functional) (None, 5, 5, 2048) 58331648
dropout (Dropout) (None, 5, 5, 2048) 0
flatten (Flatten) (None, 51200) 0
dense (Dense) (None, 512) 26214912
dense_1 (Dense) (None, 256) 131328
dense_2 (Dense) (None, 14) 3598
=================================================================
Total params: 84,681,486
Trainable params: 26,349,838
Non-trainable params: 58,331,648
_________________________________________________________________
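The head parameter counts follow directly from the layer shapes: the first Dense layer connects the 51,200 flattened features to 512 units (51,200 x 512 + 512 = 26,214,912 parameters), the second connects 512 to 256 (512 x 256 + 256 = 131,328), and the output layer connects 256 to the 14 classes (256 x 14 + 14 = 3,598). Together these give the 26,349,838 trainable parameters reported above; the frozen ResNet152V2 backbone accounts for the rest.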
model.compile(optimizer=tf.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_generator,
validation_data=validation_generator,
epochs=50,
verbose=2)
Epoch 1/50
5/5 - 23s - loss: 14.0062 - accuracy: 0.2500 - val_loss: 13.2614 - val_accuracy: 0.3500 - 23s/epoch - 5s/step
Epoch 2/50
5/5 - 14s - loss: 7.5526 - accuracy: 0.6625 - val_loss: 1.2857 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 3/50
5/5 - 14s - loss: 1.4350 - accuracy: 0.9125 - val_loss: 2.6200 - val_accuracy: 0.8000 - 14s/epoch - 3s/step
Epoch 4/50
5/5 - 15s - loss: 0.6849 - accuracy: 0.9375 - val_loss: 1.6239 - val_accuracy: 0.8500 - 15s/epoch - 3s/step
Epoch 5/50
5/5 - 14s - loss: 0.1046 - accuracy: 0.9750 - val_loss: 1.2118 - val_accuracy: 0.8500 - 14s/epoch - 3s/step
Epoch 6/50
5/5 - 14s - loss: 0.4717 - accuracy: 0.9750 - val_loss: 0.6606 - val_accuracy: 0.8500 - 14s/epoch - 3s/step
Epoch 7/50
5/5 - 14s - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.9214 - val_accuracy: 0.9000 - 14s/epoch - 3s/step
Epoch 8/50
5/5 - 14s - loss: 0.0650 - accuracy: 0.9875 - val_loss: 1.0412 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 9/50
5/5 - 14s - loss: 0.1087 - accuracy: 0.9875 - val_loss: 1.1998 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 10/50
5/5 - 14s - loss: 9.6320e-06 - accuracy: 1.0000 - val_loss: 1.6609 - val_accuracy: 0.9000 - 14s/epoch - 3s/step
Epoch 11/50
5/5 - 14s - loss: 0.0575 - accuracy: 0.9875 - val_loss: 1.6428 - val_accuracy: 0.8500 - 14s/epoch - 3s/step
Epoch 12/50
5/5 - 14s - loss: 0.2565 - accuracy: 0.9625 - val_loss: 0.8037 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 13/50
5/5 - 14s - loss: 2.6822e-08 - accuracy: 1.0000 - val_loss: 0.4103 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 14/50
5/5 - 15s - loss: 0.0795 - accuracy: 0.9875 - val_loss: 0.0110 - val_accuracy: 1.0000 - 15s/epoch - 3s/step
Epoch 15/50
5/5 - 14s - loss: 1.3411e-08 - accuracy: 1.0000 - val_loss: 6.0797e-07 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 16/50
5/5 - 14s - loss: 0.1118 - accuracy: 0.9875 - val_loss: 2.3842e-07 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 17/50
5/5 - 14s - loss: 1.4752e-06 - accuracy: 1.0000 - val_loss: 2.7418e-07 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 18/50
5/5 - 14s - loss: 0.0091 - accuracy: 0.9875 - val_loss: 1.1921e-08 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 19/50
5/5 - 14s - loss: 0.0020 - accuracy: 1.0000 - val_loss: 4.1723e-08 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 20/50
5/5 - 14s - loss: 5.5134e-08 - accuracy: 1.0000 - val_loss: 8.9012e-05 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 21/50
5/5 - 14s - loss: 1.4859e-04 - accuracy: 1.0000 - val_loss: 0.0898 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 22/50
5/5 - 14s - loss: 9.7018e-04 - accuracy: 1.0000 - val_loss: 0.3177 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 23/50
5/5 - 14s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 0.4670 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 24/50
5/5 - 14s - loss: 0.0172 - accuracy: 0.9875 - val_loss: 0.0032 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 25/50
5/5 - 14s - loss: 8.4950e-05 - accuracy: 1.0000 - val_loss: 0.0027 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 26/50
5/5 - 14s - loss: 0.3048 - accuracy: 0.9750 - val_loss: 1.2003e-04 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 27/50
5/5 - 14s - loss: 7.5996e-08 - accuracy: 1.0000 - val_loss: 0.6815 - val_accuracy: 0.9500 - 14s/epoch - 3s/step
Epoch 28/50
5/5 - 15s - loss: 0.2592 - accuracy: 0.9875 - val_loss: 1.7078 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 29/50
5/5 - 17s - loss: 5.7074e-05 - accuracy: 1.0000 - val_loss: 3.1437 - val_accuracy: 0.9500 - 17s/epoch - 3s/step
Epoch 30/50
5/5 - 15s - loss: 0.1664 - accuracy: 0.9750 - val_loss: 0.0031 - val_accuracy: 1.0000 - 15s/epoch - 3s/step
Epoch 31/50
5/5 - 15s - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.3637 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 32/50
5/5 - 15s - loss: 5.4537e-07 - accuracy: 1.0000 - val_loss: 0.4190 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 33/50
5/5 - 15s - loss: 1.0722e-05 - accuracy: 1.0000 - val_loss: 0.4590 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 34/50
5/5 - 15s - loss: 0.0134 - accuracy: 1.0000 - val_loss: 0.4890 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 35/50
5/5 - 15s - loss: 1.3755e-04 - accuracy: 1.0000 - val_loss: 0.8189 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 36/50
5/5 - 15s - loss: 0.2785 - accuracy: 0.9875 - val_loss: 0.9720 - val_accuracy: 0.9500 - 15s/epoch - 3s/step
Epoch 37/50
5/5 - 15s - loss: 0.0852 - accuracy: 0.9875 - val_loss: 3.0712e-04 - val_accuracy: 1.0000 - 15s/epoch - 3s/step
Epoch 38/50
5/5 - 15s - loss: 8.9407e-09 - accuracy: 1.0000 - val_loss: 1.1957 - val_accuracy: 0.9000 - 15s/epoch - 3s/step
Epoch 39/50
5/5 - 15s - loss: 4.9403e-06 - accuracy: 1.0000 - val_loss: 2.5522 - val_accuracy: 0.9000 - 15s/epoch - 3s/step
Epoch 40/50
5/5 - 15s - loss: 3.7516e-06 - accuracy: 1.0000 - val_loss: 3.9725 - val_accuracy: 0.8000 - 15s/epoch - 3s/step
Epoch 41/50
5/5 - 15s - loss: 0.2441 - accuracy: 0.9875 - val_loss: 1.3186 - val_accuracy: 0.8500 - 15s/epoch - 3s/step
Epoch 42/50
5/5 - 15s - loss: 8.2699e-07 - accuracy: 1.0000 - val_loss: 0.0239 - val_accuracy: 1.0000 - 15s/epoch - 3s/step
Epoch 43/50
5/5 - 14s - loss: 0.0047 - accuracy: 1.0000 - val_loss: 1.7881e-08 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 44/50
5/5 - 14s - loss: 3.4273e-08 - accuracy: 1.0000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 45/50
5/5 - 15s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000 - 15s/epoch - 3s/step
Epoch 46/50
5/5 - 14s - loss: 1.4901e-09 - accuracy: 1.0000 - val_loss: 5.9605e-09 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 47/50
5/5 - 14s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 5.9605e-09 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 48/50
5/5 - 14s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 5.9605e-09 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 49/50
5/5 - 14s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 5.9605e-09 - val_accuracy: 1.0000 - 14s/epoch - 3s/step
Epoch 50/50
5/5 - 13s - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 5.9605e-09 - val_accuracy: 1.0000 - 13s/epoch - 3s/step
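The logs show the model fitting the 80 training images almost immediately, with the training loss reaching exact zero in the final epochs, so a fixed 50 epochs is more than this dataset needs. A hedged alternative, using the standard Keras callbacks (the checkpoint filename 'best-pcd-ta.h5' is a placeholder for this sketch), would stop early and keep the best weights:

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop once val_loss stops improving and retain the best-scoring weights
callbacks = [
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    ModelCheckpoint('best-pcd-ta.h5', monitor='val_loss', save_best_only=True),
]
history = model.fit(train_generator,
                    validation_data=validation_generator,
                    epochs=50,
                    callbacks=callbacks,
                    verbose=2)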
Summary
import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
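The same history object also records the loss curves, which make the overfitting easier to see than accuracy alone; a matching plot:

# Plot training vs. validation loss from the same training history
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()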
Deploy
The code below shows how to save the image classifier built and trained with TensorFlow. The save() method writes the model to a file named 'pcd-ta.h5', which allows it to be reused later without retraining. The .h5 extension selects the HDF5 format, a format commonly used for storing machine-learning models.
model.save('pcd-ta.h5')
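To reuse the saved model, load it back and run a prediction on a single image, preprocessed exactly as during training. A minimal sketch, where 'example.jpg' is a hypothetical test image:

import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model('pcd-ta.h5')           # reload the saved classifier

# 'example.jpg' is a placeholder path; resize and rescale as in training
img = image.load_img('example.jpg', target_size=(150, 150))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

pred = model.predict(x)
print('Predicted class index:', int(np.argmax(pred)))  # map back via class_indices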