Creating an Awesome Web App with Python and Streamlit
“How can we make a machine learning script and convert it into an app as simple as possible, so that it basically feels like a scripting exercise?”
— Adrien Treuille (co-founder of Streamlit)
Web applications are a great way to display the results of a data science or machine learning project. However, developing a web application from scratch requires significant time, effort, and technical skill. An alternative is to use an open-source Python library like Streamlit.
Streamlit is an open-source Python framework that allows you to create beautiful interactive websites for Data Science and Machine Learning projects. Streamlit provides features to design and configure webpages, access and display data, and generate a variety of different types of charts and graphs. You can build and deploy a powerful web application in a few minutes.
Streamlit is compatible and integrates with many Python libraries, including scikit-learn, Keras, PyTorch, NumPy, and pandas. It supports displaying text, data, and interactive widgets, and works with many charting and graphing libraries, including Matplotlib, Vega-Lite, and Plotly. For more in-depth information, see the Streamlit documentation: https://docs.streamlit.io/
In this article, we will write a script that builds a web page, including its design and configuration, the display of data, and visualizations with charts and graphs. We will perform the following steps using Python and Streamlit:
1. Install the Packages.
2. Import the Libraries.
3. Import the Dataset.
4. Design and Configure the Web Page.
5. Display some Data on the Web Page.
6. Visualize the Stock Data.
Install the Packages
First we need to install the Streamlit and Tornado packages to develop and run our application.
Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. Tornado is a Python web framework and an asynchronous networking library that relies on non-blocking network I/O to serve web applications.
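The install might look like the following (in a notebook such as Colab, prefix each command with `!`); installing Streamlit pulls in Tornado as a dependency, but we list it explicitly to match the packages named above:

```shell
# Install Streamlit (and Tornado, which Streamlit uses under the hood).
pip install streamlit tornado
```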
Import the Libraries
Import the Dataset
The dataset we are using for this application was sourced from Kaggle and can be found at https://www.kaggle.com/datasets/kalilurrahman/nvidia-stock-data-latest-and-updated. The dataset contains stock data for Nvidia Corporation from 1/22/1999 through 11/12/2021. We will download this file into a local folder.
We are developing in Google's Colab environment, so we need to load the file we just downloaded into Colab.
Next, we will import the dataset into a pandas DataFrame.
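A sketch of the load step is below. The file name is an assumption to adjust for your download; so that the snippet runs anywhere, a tiny inline sample with the same column layout as the Kaggle file stands in for the real CSV:

```python
import io
import pandas as pd

# In the app, read the downloaded Kaggle file (file name is an assumption):
# df = pd.read_csv("NVDA.csv", parse_dates=["Date"])

# Inline stand-in with the same columns, so this snippet runs on its own.
sample = """Date,Open,High,Low,Close,Adj Close,Volume
2019-01-02,33.66,34.44,32.95,34.00,33.80,50000000
2021-11-12,303.00,306.00,298.00,303.90,303.90,40000000"""
df = pd.read_csv(io.StringIO(sample), parse_dates=["Date"])
print(df.shape)  # (2, 7)
```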
For this project, we will only look at data from the past three years, from 01/02/2019 through 11/12/2021, so we will delete records prior to 01/01/2019. If you prefer, you can analyze the full history in the dataset or a different subset.
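The trim can be sketched like this; a small stand-in frame is used here so the snippet runs on its own, but in the app `df` is the DataFrame loaded from the CSV:

```python
import pandas as pd

# Stand-in frame; in the app, df comes from the Kaggle CSV.
df = pd.DataFrame({
    "Date": pd.to_datetime(["2018-12-31", "2019-01-02", "2021-11-12"]),
    "Close": [32.0, 34.0, 303.9],
})

# Keep only rows from 2019-01-01 onward, dropping the earlier history.
df = df[df["Date"] >= "2019-01-01"].reset_index(drop=True)
print(len(df))  # 2
```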
Let’s set the Date column as the index.
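Making Date the index lets charting calls use it as the x-axis; a minimal sketch, again with a stand-in frame:

```python
import pandas as pd

# Stand-in frame; in the app this is the filtered stock DataFrame.
df = pd.DataFrame({
    "Date": pd.to_datetime(["2019-01-02", "2019-01-03"]),
    "Close": [34.0, 33.0],
})

# Use the Date column as the index so charts plot against it.
df = df.set_index("Date")
print(df.index.name)  # Date
```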
Design and Configure the Web Page
We will begin by adding a title and logo on the web page.
Let’s add some social media links to our personal profile on the web page.
Let’s add a sidebar to the web page to display some basic information about the app.
The above specifications will produce the following web page.
Display some Data on the Web Page
Let’s look at the data and show some statistical information about the data.
This will produce the following items being displayed on the web page.
We will now add a filter that will give users the flexibility to select a date range for the data they want to analyze. This will provide useful insights for different time frames.
This will produce the following items being displayed on the web page. In this example, we have selected a date range from 11/02/2020 through 11/12/2021.
Visualize the Stock Data
The following graphs and charts show the performance of Nvidia's stock price from 11/02/2020 through 11/12/2021, including open and close prices, high and low prices, daily trading volume, and moving averages of the open and close prices.
We now need to bundle all of the code shown above into a single file (app.py) for Streamlit to process. Note: before running this step, be sure the Streamlit and Tornado packages are installed. Also, if you are running in the Google Colab environment, load the dataset into Colab first.
We will now run the Streamlit application, which serves the file created in the previous step through Localtunnel. Localtunnel exposes your local host to the internet for testing and sharing.
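A minimal sketch of the run step, assuming app.py is in the current folder and Streamlit is on its default port 8501 (in Colab, prefix each command with `!`):

```shell
# Start the app in the background, then tunnel the default port publicly.
streamlit run app.py &
npx localtunnel --port 8501   # prints a public URL for your app
```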
After the Streamlit application starts, you will see the screen below. You can view the web page that was created by clicking on your URL link.
After selecting your URL link, you will see the screen shown below. This is an informational page and a reminder to verify the website address for security purposes. The page also includes some options to bypass it. For now, just select the Click to Continue button.
The newly created web page will display the screen images that were shown above. This includes title, logo, social media tags, sidebar, stock data, statistics, date range selection, and graphs and charts.
We have now completed developing a web app using Python and the Streamlit framework! You can deploy your web application on the Streamlit Cloud platform for free. Streamlit Cloud is a workspace for you to deploy, manage, and collaborate with others on your Streamlit apps. To deploy your app, set up a Streamlit Cloud account, add your app and its dependencies to your GitHub account, and deploy from GitHub by specifying the repository, branch, and main file path. You can find more details on deployment at https://docs.streamlit.io/streamlit-cloud/get-started/deploy-an-app
Thanks so much for reading my article! If you have any comments or feedback please let me know.
You can unlock full access to my writing and the rest of Medium by signing up for Medium membership ($5 per month) using this link https://medium.com/@dniggl/membership . By signing up through this link, I will receive a portion of your membership fee at no additional cost to you. Thank you!