C:\Users\Antonin VELLARD>pip install tensorflow
Collecting tensorflow
Downloading tensorflow-2.5.0-cp38-cp38-win_amd64.whl (422.6 MB)
|████████████████████████████████| 422.6 MB 8.3 kB/s
Collecting opt-einsum~=3.3.0
Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
|████████████████████████████████| 65 kB 1.0 MB/s
Collecting typing-extensions~=3.7.4
Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting tensorflow-estimator<2.6.0,>=2.5.0rc0
Downloading tensorflow_estimator-2.5.0-py2.py3-none-any.whl (462 kB)
|████████████████████████████████| 462 kB 2.2 MB/s
Collecting h5py~=3.1.0
Downloading h5py-3.1.0-cp38-cp38-win_amd64.whl (2.7 MB)
|████████████████████████████████| 2.7 MB 2.2 MB/s
Collecting termcolor~=1.1.0
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting wheel~=0.35
Downloading wheel-0.36.2-py2.py3-none-any.whl (35 kB)
Collecting absl-py~=0.10
Downloading absl_py-0.13.0-py3-none-any.whl (132 kB)
|████████████████████████████████| 132 kB 1.7 MB/s
Collecting protobuf>=3.9.2
Downloading protobuf-3.17.3-py2.py3-none-any.whl (173 kB)
|████████████████████████████████| 173 kB 2.2 MB/s
Collecting tensorboard~=2.5
Downloading tensorboard-2.5.0-py3-none-any.whl (6.0 MB)
|████████████████████████████████| 6.0 MB 2.2 MB/s
Collecting keras-nightly~=2.5.0.dev
Downloading keras_nightly-2.5.0.dev2021032900-py2.py3-none-any.whl (1.2 MB)
|████████████████████████████████| 1.2 MB 2.2 MB/s
Collecting grpcio~=1.34.0
Downloading grpcio-1.34.1-cp38-cp38-win_amd64.whl (2.9 MB)
|████████████████████████████████| 2.9 MB 1.6 MB/s
Collecting astunparse~=1.6.3
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting numpy~=1.19.2
Downloading numpy-1.19.5-cp38-cp38-win_amd64.whl (13.3 MB)
|████████████████████████████████| 13.3 MB 711 kB/s
Collecting flatbuffers~=1.12.0
Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting wrapt~=1.12.1
Downloading wrapt-1.12.1.tar.gz (27 kB)
Collecting gast==0.4.0
Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting google-pasta~=0.2
Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
|████████████████████████████████| 57 kB 1.1 MB/s
Collecting keras-preprocessing~=1.1.2
Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
|████████████████████████████████| 42 kB 1.7 MB/s
Collecting six~=1.15.0
Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting tensorboard-plugin-wit>=1.6.0
Downloading tensorboard_plugin_wit-1.8.0-py3-none-any.whl (781 kB)
|████████████████████████████████| 781 kB 2.2 MB/s
Collecting tensorboard-data-server<0.7.0,>=0.6.0
Downloading tensorboard_data_server-0.6.1-py3-none-any.whl (2.4 kB)
Collecting google-auth<2,>=1.6.3
Downloading google_auth-1.32.0-py2.py3-none-any.whl (147 kB)
|████████████████████████████████| 147 kB 2.2 MB/s
Requirement already satisfied: setuptools>=41.0.0 in c:\users\antonin vellard\appdata\local\programs\python\python38\lib\site-packages (from tensorboard~=2.5->tensorflow) (49.2.1)
Collecting markdown>=2.6.8
Downloading Markdown-3.3.4-py3-none-any.whl (97 kB)
|████████████████████████████████| 97 kB 1.3 MB/s
Collecting werkzeug>=0.11.15
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
|████████████████████████████████| 288 kB 2.2 MB/s
Collecting google-auth-oauthlib<0.5,>=0.4.1
Downloading google_auth_oauthlib-0.4.4-py2.py3-none-any.whl (18 kB)
Collecting requests<3,>=2.21.0
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
|████████████████████████████████| 61 kB 2.0 MB/s
Collecting cachetools<5.0,>=2.0.0
Downloading cachetools-4.2.2-py3-none-any.whl (11 kB)
Collecting rsa<5,>=3.1.4; python_version >= "3.6"
Downloading rsa-4.7.2-py3-none-any.whl (34 kB)
Collecting pyasn1-modules>=0.2.1
Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
|████████████████████████████████| 155 kB 939 kB/s
Collecting requests-oauthlib>=0.7.0
Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2021.5.30-py2.py3-none-any.whl (145 kB)
|████████████████████████████████| 145 kB 1.7 MB/s
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
|████████████████████████████████| 58 kB 1.7 MB/s
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.5-py2.py3-none-any.whl (138 kB)
|████████████████████████████████| 138 kB 1.7 MB/s
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
|████████████████████████████████| 178 kB 2.2 MB/s
Collecting pyasn1>=0.1.3
Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 1.4 MB/s
Collecting oauthlib>=3.0.0
Downloading oauthlib-3.1.1-py2.py3-none-any.whl (146 kB)
|████████████████████████████████| 146 kB 1.6 MB/s
Using legacy 'setup.py install' for termcolor, since package 'wheel' is not installed.
Using legacy 'setup.py install' for wrapt, since package 'wheel' is not installed.
from tensorflow import keras
data = keras.datasets.mnist.load_data(path="mnist.npz")
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
print(type(data))
<class 'tuple'>
#In my opinion, the first element of this collection corresponds to the training split of the dataset
print(data[0]) # the first element holds the training images together with their labels
(array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8), array([5, 0, 4, ..., 5, 6, 8], dtype=uint8))
import matplotlib.pyplot as plt
(X_train, Y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
plt.imshow(X_train[5], cmap = 'gray')
plt.figure()
plt.imshow(X_train[18], cmap = 'gray')
plt.figure()
plt.imshow(X_train[42], cmap = 'gray')
plt.figure()
print(X_train.shape, x_test.shape, Y_train.shape, y_test.shape)
#The images come back with three dimensions instead of two, so they do not match the usual (samples, features) format scikit-learn expects.
(60000, 784) (10000, 784) (60000,) (10000,)
from sklearn.linear_model import LogisticRegression
X_train = X_train.reshape(X_train.shape[0], 784)
x_test = x_test.reshape(x_test.shape[0], 784)
Y_train = Y_train.reshape(Y_train.shape[0])
y_test = y_test.reshape(y_test.shape[0])
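The reshape above relies on each MNIST image being a 28x28 grayscale grid, so flattening yields 28 * 28 = 784 features per sample. A minimal sketch on a synthetic array (the array itself is a stand-in, only the shapes come from the session):

```python
import numpy as np

# Each 28x28 image flattens to 784 features, giving the
# (samples, features) layout scikit-learn expects:
images = np.zeros((3, 28, 28), dtype=np.uint8)
flat = images.reshape(images.shape[0], 784)
print(flat.shape)  # (3, 784)
```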
model = LogisticRegression()
model.fit(X_train,Y_train)
/shared-libs/python3.7/py/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
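The ConvergenceWarning above suggests two remedies: raise `max_iter` or scale the data. A sketch of both on synthetic data (the `max_iter=1000` value and the pipeline are assumptions, not taken from the session):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)  # label depends on one feature only

# Scaling the inputs and giving lbfgs more iterations both help
# the solver converge without the warning:
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.score(X, y))
```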
print('Train : ', model.score(X_train, Y_train), 'Test : ', model.score(x_test, y_test))
Train : 0.9339166666666666 Test : 0.9257
from sklearn.metrics import confusion_matrix, plot_confusion_matrix
plot_confusion_matrix(model,x_test,y_test)
#There are few recognition errors, so the model is acceptable
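Note that `plot_confusion_matrix` was deprecated and later removed from scikit-learn (in 1.2); `ConfusionMatrixDisplay` is the replacement. A sketch with toy labels (illustrative only, not the MNIST predictions above):

```python
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
# disp.plot() renders the matrix with matplotlib
print(cm)  # [[2 0]
           #  [1 2]]
```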
x_test = x_test/255
#Y_test = Y_test/255
X_train = X_train/255
#Y_train = Y_train/255
model.fit(X_train,Y_train)
print("train normalized score",model.score(X_train, Y_train))
print("test normalized score",model.score(x_test, y_test))
y_preds = model.predict(x_test)
conf_mat = confusion_matrix(y_test, y_preds)
train normalized score 0.7637833333333334
test normalized score 0.1135
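A test score of 0.1135 is essentially chance on ten classes while the train score stays at 0.76, which is consistent with `x_test = x_test/255` having run more than once across notebook re-runs (the in-place division is not idempotent). That is an assumption about the session, not something the log confirms; a sketch of the safer pattern, normalizing from an untouched copy:

```python
import numpy as np

# Dividing in place means a re-run scales by 255 twice; normalizing
# from the untouched uint8 array gives the same result every run:
x_raw = np.array([[0, 128, 255]], dtype=np.uint8)

x = x_raw.astype(np.float32) / 255.0  # run once...
x = x_raw.astype(np.float32) / 255.0  # ...or twice, same result
print(x.max())  # 1.0
```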
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
plt.imshow(x_train[0])
plt.figure()
plt.imshow(x_train[1])
plt.figure()
plt.imshow(x_train[2])
plt.figure()
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
x_train = x_train.reshape(x_train.shape[0], 3072)
y_train = y_train.reshape(y_train.shape[0])
x_test = x_test.reshape(x_test.shape[0], 3072)
y_test = y_test.reshape(y_test.shape[0])
model = LogisticRegression()
model.fit(x_train,y_train)
print("train score :",model.score(x_train, y_train))
print("test score :",model.score(x_test, y_test))
y_preds = model.predict(x_test)
conf_mat = confusion_matrix(y_test, y_preds)
print(conf_mat)
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 3s 0us/step
(50000, 32, 32, 3)
(10000, 32, 32, 3)
(50000, 1)
(10000, 1)
/shared-libs/python3.7/py/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
train score : 0.4258
test score : 0.4024
[[484 46 56 41 22 30 22 50 176 73]
[ 62 486 14 32 22 35 35 52 95 167]
[121 44 272 85 112 93 136 68 48 21]
[ 43 56 99 257 52 192 121 54 48 78]
[ 65 23 133 58 289 95 148 125 33 31]
[ 51 46 88 151 86 342 85 78 44 29]
[ 12 33 74 117 100 87 485 40 21 31]
[ 49 49 65 50 89 79 51 442 44 82]
[172 77 18 27 10 51 8 16 520 101]
[ 88 190 17 20 16 28 39 48 107 447]]
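One way to read a confusion matrix like the one above is per-class accuracy: the diagonal (correct predictions) divided by each row sum (rows are true classes). A sketch on a small made-up 3-class matrix, not the CIFAR-10 one:

```python
import numpy as np

# Diagonal = correct predictions per class; row sums = true counts:
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 3, 7]])
per_class = np.diag(cm) / cm.sum(axis=1)
print(per_class)  # [0.8 0.7 0.7]
```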