Multi-Class Classification Using Machine Learning for an IoT Application
Problem
Smart thermostats are now a standard device in many homes. To set the desired temperature automatically, the thermostat must know whether the occupants are home, not home, or sleeping. Most thermostats come equipped with the following sensors:
Sound level sensor
Light level sensor
Vibration sensor
Real-time clock (Time of Day)
Based on this data, a pattern recognition algorithm can be used to predict whether the occupants are at home, sleeping, or not at home. This notebook implements a simple neural network to perform this task.
Importing Data
The dataset is stored in an Excel file and contains multiple rows of data with 4 input features and 1 output label (Status). A snippet of the data is shown below.
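The notebook's loading code is not shown; a minimal sketch using pandas (which reads .xlsx files via the openpyxl engine installed above) might look like the following. The filename and the stand-in rows are assumptions for illustration — substitute the actual dataset path.

```python
import pandas as pd

# Stand-in for the real dataset (the actual Excel file is not distributed
# with this notebook), written out and read back to show the workflow.
sample = pd.DataFrame({
    "Sound (dB)": [40, 36, 56],
    "Light (Lux)": [340, 20, 39],
    "Vibration (dB)": [35, 23, 15],
    "Time of day (H)": [9, 4, 13],
    "Status": ["Home", "Sleeping", "Not Home"],
})
sample.to_excel("thermostat_data.xlsx", index=False)  # requires openpyxl

df = pd.read_excel("thermostat_data.xlsx")
print(df.shape)  # (3, 5)
```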
Exploring Data
The first step before developing a model is to explore the dataset; based on that information, an appropriate model can be selected.
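The exploration calls are not shown in the notebook; assuming a DataFrame `df` holding the table below, the outputs that follow are typically produced like this (the stand-in rows here are for illustration only):

```python
import pandas as pd

# Stand-in rows mirroring the snippet shown below.
df = pd.DataFrame({
    "Sound (dB)": [40, 36, 56],
    "Light (Lux)": [340, 20, 39],
    "Vibration (dB)": [35, 23, 15],
    "Time of day (H)": [9, 4, 13],
    "Status": ["Home", "Sleeping", "Not Home"],
})

print(df.head())                 # first rows of the dataset
print(df.describe())             # summary statistics of numeric columns
print(df.isnull().values.any())  # any missing values?
```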
Sound (dB) Light (Lux) Vibration (dB) Time of day (H) Status
0 40 340 35 9 Home
1 36 20 23 4 Sleeping
2 56 39 15 13 Not Home
3 32 50 15 1 Sleeping
4 63 430 19 10 Home
5 34 110 21 14 Not Home
Sound (dB) Light (Lux) Vibration (dB) Time of day (H)
count 38.000000 38.000000 38.000000 38.000000
mean 46.000000 189.105263 35.578947 9.868421
std 10.669482 143.332497 13.595770 5.215212
min 32.000000 13.000000 15.000000 1.000000
25% 39.000000 57.500000 23.250000 6.000000
50% 43.500000 160.000000 34.000000 10.000000
75% 53.500000 300.000000 42.750000 15.000000
max 69.000000 430.000000 68.000000 20.000000
A check for missing values returns False, so the dataset contains no missing entries.
Model Selection
Given that the input data is numerical and structured (tabular), a plain fully connected neural network is a good candidate model. The network takes the 4 numerical input features and predicts one of three output classes, so the problem is a multi-class classification problem.
Data Preparation
To prepare the data for training, the input columns have to be converted to floats. Additionally, the categorical labels have to be one-hot encoded so that a softmax layer can be used as the output layer of the network.
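The preparation code is not shown in the notebook; one common approach uses pandas `get_dummies` for the labels, as sketched here (the stand-in DataFrame is for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Sound (dB)": [40, 36, 56],
    "Light (Lux)": [340, 20, 39],
    "Vibration (dB)": [35, 23, 15],
    "Time of day (H)": [9, 4, 13],
    "Status": ["Home", "Sleeping", "Not Home"],
})

# Input features as float32 for the network.
X = df.drop(columns="Status").to_numpy(dtype="float32")

# One-hot encode the labels; get_dummies orders the columns
# alphabetically: Home, Not Home, Sleeping.
y = pd.get_dummies(df["Status"]).to_numpy(dtype="float32")
print(X.shape, y.shape)  # (3, 4) (3, 3)
```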
Training a Model
The first step before training is to split the data into training and validation sets. The training set is used to train the model (find the optimized weights of the NN), and the validation set is used to decide which weights to save (those with the highest validation accuracy). 70% of the dataset is used for training and 30% for validation. Due to the low number of data points, a separate test set is not used.
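The exact split call is not shown; a 70/30 split is commonly done with scikit-learn's `train_test_split`, as in this sketch (the dummy arrays and `random_state` are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy arrays standing in for the prepared features and one-hot labels
# (38 rows, matching the dataset size reported by describe()).
X = np.random.rand(38, 4).astype("float32")
y = np.eye(3, dtype="float32")[np.random.randint(0, 3, size=38)]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42)
print(len(X_train), len(X_val))  # 26 12
```

Note that 26 training and 12 validation samples are consistent with the accuracy steps in the training log below (multiples of 1/26 and 1/12).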
Once the data is split, a neural network can be created using Keras by sequentially adding layers. The designed neural network contains the following layers.
Batch Normalization: Batch normalization applies a transformation that keeps the mean output close to 0 and the output standard deviation close to 1.
Input Layer: The input layer accepts the 4 features from the dataset.
Hidden Layers: The NN contains three hidden layers of 100 neurons each, with a ReLU activation function applied to the output of each neuron.
Dropout Layer: The Dropout layer randomly sets input units to 0 with a frequency of rate (0.2 in this network) at each step during training, which helps prevent overfitting. Inputs not set to 0 are scaled up by 1/(1 - rate) so that the sum over all inputs is unchanged.
Callback: A callback is used to perform early stopping: training halts once validation accuracy stops improving, and the weights from the best epoch are restored.
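The layer list above can be sketched in Keras as follows. Layer sizes match the description; the optimizer, loss, and `patience=50` are assumptions (patience 50 is consistent with training stopping at epoch 116 after the best epoch 66 in the log below):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),                # 4 sensor features
    layers.BatchNormalization(),            # normalize the inputs
    layers.Dense(100, activation="relu"),   # hidden layer 1
    layers.Dense(100, activation="relu"),   # hidden layer 2
    layers.Dense(100, activation="relu"),   # hidden layer 3
    layers.Dropout(0.2),                    # zero 20% of units during training
    layers.Dense(3, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping that restores the best-validation-accuracy weights.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=50, restore_best_weights=True)

# history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
#                     epochs=200, callbacks=[early_stop])

# Untrained sanity check: outputs are per-class probabilities.
probs = model(np.float32(np.random.rand(2, 4))).numpy()
print(probs.shape)  # (2, 3)
```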
Epoch 1/200
3/3 [==============================] - 1s 94ms/step - loss: 1.1583 - accuracy: 0.1923 - val_loss: 1.5996 - val_accuracy: 0.1667
Epoch 2/200
3/3 [==============================] - 0s 28ms/step - loss: 1.0634 - accuracy: 0.4231 - val_loss: 1.0585 - val_accuracy: 0.4167
... (epochs 3-65 omitted) ...
Epoch 66/200
3/3 [==============================] - 0s 14ms/step - loss: 0.1810 - accuracy: 0.9615 - val_loss: 0.9040 - val_accuracy: 0.9167
... (epochs 67-115 omitted) ...
Epoch 116/200
1/3 [=========>....................] - ETA: 0s - loss: 0.1400 - accuracy: 1.0000
Restoring model weights from the end of the best epoch: 66.
3/3 [==============================] - 0s 23ms/step - loss: 0.1427 - accuracy: 0.9615 - val_loss: 1.2546 - val_accuracy: 0.8333
Epoch 116: early stopping
Using Model for Prediction
Once the model is trained, it can be used to make predictions on data that does not contain labels. A separate, unlabeled data file is loaded below; its first rows are shown.
Sound (dB) Light (Lux) Vibration (dB) Time of day (H) Status
0 32 27 17 3 NaN
1 54 20 27 6 NaN
2 43 200 23 10 NaN
The dataset is converted into a NumPy array, and the Keras predict function is used to make predictions. The output of the neural network passes through a softmax function, which converts a vector of values into a probability distribution: the elements of the output vector lie in the range (0, 1) and sum to 1.
[[4.5814482e-04 1.4562554e-04 9.9939620e-01]
[8.3412313e-01 1.5341413e-01 1.2462723e-02]
[7.4272847e-01 2.2784162e-01 2.9429976e-02]
[8.0065548e-02 9.1941237e-01 5.2217743e-04]]
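The softmax property (each row sums to 1) can be checked directly on the printed rows with NumPy:

```python
import numpy as np

probs = np.array([[4.5814482e-04, 1.4562554e-04, 9.9939620e-01],
                  [8.3412313e-01, 1.5341413e-01, 1.2462723e-02],
                  [7.4272847e-01, 2.2784162e-01, 2.9429976e-02],
                  [8.0065548e-02, 9.1941237e-01, 5.2217743e-04]])

print(probs.sum(axis=1))  # each row sums to ~1.0
```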
The argmax function and the label mapping can be used to decode the one-hot encoded output of the NN back to the corresponding classes, as shown below.
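A sketch of this decoding, assuming the one-hot columns follow alphabetical label order (Home, Not Home, Sleeping), which is consistent with the predictions shown below:

```python
import numpy as np

probs = np.array([[4.5814482e-04, 1.4562554e-04, 9.9939620e-01],
                  [8.3412313e-01, 1.5341413e-01, 1.2462723e-02],
                  [7.4272847e-01, 2.2784162e-01, 2.9429976e-02],
                  [8.0065548e-02, 9.1941237e-01, 5.2217743e-04]])

labels = ["Home", "Not Home", "Sleeping"]  # assumed alphabetical one-hot order
predicted = [labels[i] for i in np.argmax(probs, axis=1)]
print(predicted)  # ['Sleeping', 'Home', 'Home', 'Not Home']
```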
This produces a list of labels for each data point in the unseen dataset.
['Sleeping', 'Home', 'Home', 'Not Home']