Continuing from the information from above, generate and run code for the next step
Continuing from the information from above, generate and run code for the next step
Continuing from the information from above, generate and run code for the next step
Continuing from the information from above, generate and run code for the next step
Continuing from the information from above, generate and run code for the next step
The code has finished defining the generator and discriminator models for the GAN and has trained it. Note that the model was trained for only 100 epochs, so its performance may not be optimal. Those epochs already took a substantial amount of time, however, so increasing the epoch count without optimizing the code or running it on a more powerful machine is not practical at the moment.

Now, let's generate some new images with the trained generator using its `predict` function. The generator takes a noise vector of 100 random values as input, so we first need to sample that noise for the new data. We'll also create labels for the images we're going to generate; using random labels will make the generator produce a variety of images.

After that, we can display the generated images using the matplotlib library, arranged in a grid for easier viewing. This lets us check whether the generator is able to create plausible new images after training. Keep in mind that these images are entirely new and do not correspond to any real data in the training set: the generator has learned to create them during the training process.
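The steps above could be sketched as follows. The stand-in conditional generator below (noise vector of length 100 plus an integer class label) only mirrors the interface assumed in this session; in the actual notebook, the trained `generator` would be used instead, and the latent size, class count, and image shape are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts; inline plotting works as-is in a notebook
import matplotlib.pyplot as plt
from tensorflow.keras import layers, Model

# Assumed dimensions -- adjust to match the trained model.
LATENT_DIM, N_CLASSES, IMG_SHAPE = 100, 10, (28, 28, 1)

# Stand-in conditional generator; replace with the trained `generator`.
noise_in = layers.Input(shape=(LATENT_DIM,))
label_in = layers.Input(shape=(1,), dtype="int32")
label_emb = layers.Flatten()(layers.Embedding(N_CLASSES, LATENT_DIM)(label_in))
x = layers.multiply([noise_in, label_emb])
x = layers.Dense(int(np.prod(IMG_SHAPE)), activation="tanh")(x)
img_out = layers.Reshape(IMG_SHAPE)(x)
generator = Model([noise_in, label_in], img_out)

# Sample noise and random labels, then generate a batch of images.
n_images = 16
noise = np.random.normal(0, 1, (n_images, LATENT_DIM))
labels = np.random.randint(0, N_CLASSES, (n_images, 1))
generated = generator.predict([noise, labels], verbose=0)

# Rescale from the tanh range [-1, 1] to [0, 1] and plot in a 4x4 grid.
generated = 0.5 * generated + 0.5
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for img, ax in zip(generated, axes.flat):
    ax.imshow(img.squeeze(), cmap="gray")
    ax.axis("off")
fig.savefig("generated_grid.png")
```

With the real trained generator, only the sampling and plotting portion is needed; the grid makes it easy to eyeball whether the outputs look plausible.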
Continuing from the information from above, generate and run code for the next step
Sure, we will now evaluate the performance of our trained model. Since we don't have any utility function or callback in place for saving the model at its best state during training, we need to proceed with the current version of the models.

To do this, we will compare the generated synthetic images with authentic images from the `validation_images` set, measuring performance using binary cross-entropy on the discriminator's sigmoid outputs, which are already probabilities. We trained the model with this same loss function, so the discriminator's accuracy after training should give us a general idea of its performance on new data.

Note, however, that this is just one possible way to evaluate a GAN; another common approach is visual inspection of the generated images. In some cases, GANs are used in scenarios where traditional metrics (like accuracy, precision, recall) are not applicable. For example, when GANs are used to generate realistic images, the success of the model is often determined by human judgment rather than by mathematical metrics. Let's generate predictions for some synthetic and real images:
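A minimal sketch of this evaluation is below. The discriminator and the two image batches are placeholders so the snippet is self-contained; in the notebook, the trained `discriminator`, samples from `validation_images`, and the generator's output would be used instead.

```python
import numpy as np
from tensorflow.keras import layers, Model, losses

# Stand-in discriminator: image in, real/fake probability out (sigmoid).
# Replace with the trained `discriminator` from the session.
IMG_SHAPE = (28, 28, 1)
img_in = layers.Input(shape=IMG_SHAPE)
x = layers.Flatten()(img_in)
x = layers.Dense(64, activation="relu")(x)
prob_out = layers.Dense(1, activation="sigmoid")(x)
discriminator = Model(img_in, prob_out)

# Placeholder batches standing in for real validation images and
# generator output of the same shape.
real_images = np.random.rand(32, *IMG_SHAPE).astype("float32")
fake_images = np.random.rand(32, *IMG_SHAPE).astype("float32")

# Score both batches and compute binary cross-entropy against the
# true labels (1 = real, 0 = fake).
real_probs = discriminator.predict(real_images, verbose=0)
fake_probs = discriminator.predict(fake_images, verbose=0)
bce = losses.BinaryCrossentropy()
real_loss = bce(np.ones_like(real_probs), real_probs).numpy()
fake_loss = bce(np.zeros_like(fake_probs), fake_probs).numpy()
print(f"BCE on real images: {real_loss:.4f}, on synthetic images: {fake_loss:.4f}")
```

A well-fooled discriminator would show a high loss on synthetic images (it can no longer tell them from real ones), which is the usual sign of a successful generator.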
Continuing from the information from above, generate and run code for the next step
The GAN has already been trained, and new images have been generated. A comparison between synthetic and real images was made using the binary cross-entropy loss, and the training losses were plotted. The next logical steps could involve further investigation of the model's properties, for example:

1. Visualize how different the model's output is from real images.
2. Fine-tune the model and try to improve its performance.
3. Visualize the loss and accuracy progression during training (already done to some extent).
4. Save the trained models.

As a recommendation, let's generate some synthetic images and compare them side by side with real ones: randomly select a few images from the validation set, generate corresponding synthetic images using the generator, and display both in a grid for comparison. Let's generate some real and synthetic images for comparison:
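The side-by-side comparison could look like the sketch below. The two arrays are placeholders; in the notebook they would be a random sample from `validation_images` and the corresponding `generator.predict(...)` output.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt

# Placeholder arrays standing in for validation images and generator
# output; substitute real samples and synthetic images in the notebook.
n = 5
real = np.random.rand(n, 28, 28)
synthetic = np.random.rand(n, 28, 28)

# Two rows: real images on top, synthetic below, one column per sample.
fig, axes = plt.subplots(2, n, figsize=(2 * n, 4))
for i in range(n):
    axes[0, i].imshow(real[i], cmap="gray")
    axes[0, i].set_title("real")
    axes[1, i].imshow(synthetic[i], cmap="gray")
    axes[1, i].set_title("synthetic")
for ax in axes.flat:
    ax.axis("off")
fig.savefig("real_vs_synthetic.png")
```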
Continuing from the information from above, generate and run code for the next step
Alright, let's jump to the next step. Since we already have a trained GAN and have used it to synthesize new image data, it might be interesting to investigate the impact of the noise input on image generation. We can take a single label and generate several synthetic images for it using different noise inputs, which shows how much variance the network can produce given the same label. Let's try this: we generate 10 images using the same label while varying the noise input.
Continuing from the information from above, generate and run code for the next step
As indicated by the presence of .h5 files for both the generator and discriminator models in the filesystem, it is reasonable to assume the models were saved after training (though we don't see any code for this in the presented cells). It might be beneficial to load these models and use them for further analysis and visualization. Since .h5 files are typically Keras model files, we can load them with the Keras `load_model` function. Let's load the saved models and display their summaries to confirm they loaded correctly. Note that we would need to recompile the models to train them further; for analysis and prediction, recompiling is not necessary.
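A small round-trip of the Keras save/load API is shown below. The filename and the tiny model are placeholders so the snippet runs standalone; in the notebook, the save step would be skipped and `load_model` called directly on the existing generator and discriminator .h5 files (whose exact names we'd match to the filesystem).

```python
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.models import load_model

# Build and save a tiny stand-in model so the load step has a file to
# read; in the notebook, the trained models' .h5 files already exist.
inp = layers.Input(shape=(100,))
out = layers.Dense(784, activation="tanh")(inp)
Model(inp, out).save("generator_demo.h5")

# compile=False is fine for inference and analysis; recompile with an
# optimizer and loss before any further training.
generator = load_model("generator_demo.h5", compile=False)
generator.summary()

# Confirm the loaded model produces output of the expected shape.
sample = generator.predict(np.zeros((1, 100), dtype="float32"), verbose=0)
print(sample.shape)
```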