Salim Oyinlola
Hey-Hi: Storytelling as a Service


My First Machine Learning App: Deploying Locally

Salim Oyinlola · Nov 20, 2021 · 3 min read

I'm putting this into writing about two weeks after completing the code for the machine learning model, not by design but due to unforeseen circumstances. Apart from how hectic my preparation for the new academic semester has been (yes, I am a college student), I initially had trouble installing some Python libraries on my PC, especially their build dependencies. The original plan was to use Streamlit, which would have been the easier route, but problems installing its build dependencies got in the way. The alternative was to go the Flask and jsonify route. I spent a couple of days on that, all to no avail; my lack of experience in full-stack software engineering proved to be an Achilles' heel. Eventually, I went back to Streamlit, figured out what the issue was, resolved it, and got it up and running. Here's a step-by-step explanation of how I went about the local deployment. I hope you enjoy reading.

After dumping the ML model in pickle format, the .pkl file appears in the folder where the model and the .csv dataset are stored.
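For context, here is a minimal sketch of that dump step. The dictionary keys follow the ones accessed later in this post ("model", "le_country", "le_education"); the placeholder strings below stand in for the actual trained regressor and fitted label encoders:

```python
import pickle

# Hypothetical sketch: these placeholders stand in for the trained
# regressor and the two fitted label encoders from the training step
data = {
    "model": "trained_regressor_placeholder",
    "le_country": "fitted_country_encoder_placeholder",
    "le_education": "fitted_education_encoder_placeholder",
}

# Dump everything needed by the app into one .pkl file
with open("saved_model.pkl", "wb") as file:
    pickle.dump(data, file)

# Loading it back returns the same dictionary of keys
with open("saved_model.pkl", "rb") as file:
    restored = pickle.load(file)
print(sorted(restored.keys()))  # ['le_country', 'le_education', 'model']
```

Bundling the model together with its encoders in one dictionary keeps them in sync, which matters later when the app has to encode user input exactly the way the training data was encoded.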


Thereafter, I opened my text editor of choice and opened said folder in it. There, I created two .py files: a main file to run the app, and predict_page.py.

The code

  • Importing the necessary libraries

    import streamlit as st
    import pickle
    import numpy as np

    The pickle library is key to loading the model. However, a function has to be created to load the model as it was saved:

    def load_model():
      with open('saved_model.pkl', 'rb') as file:
          data = pickle.load(file)
      return data

    After creating the function, it is executed as shown below:

    data = load_model()

    Next, I accessed the different keys of the loaded dictionary (i.e. the model and the label encoders for country and education):

    regressor = data["model"]
    le_country = data["le_country"]
    le_education = data["le_education"]

    Next up, I created the prediction page using the function show_predict_page() as shown below:

    def show_predict_page():
      st.title("Salim's Software Developer Salary Prediction")
      st.write("""### We need some information to predict the salary""")
      countries = (
          "United States",
          "United Kingdom",
          "Russian Federation",
      )
      education_levels = (
          "Less than a Bachelors",
          "Bachelor’s degree",
          "Master’s degree",
          "Post grad",
      )
      country = st.selectbox("Country", countries)
      education = st.selectbox("Education Level", education_levels)
      experience = st.slider("Years of Experience", 0, 50, 3)
      ok = st.button("Calculate Salary")
      if ok:
          # Encode the categorical inputs with the loaded label
          # encoders, then predict the salary
          X = np.array([[country, education, experience]])
          X[:, 0] = le_country.transform(X[:, 0])
          X[:, 1] = le_education.transform(X[:, 1])
          X = X.astype(float)
          salary = regressor.predict(X)
          st.subheader(f"The estimated salary is ${salary[0]:.2f}")

    If you are conversant with Python's syntax and the way Streamlit works, the code above should be easy to follow.
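As an aside, the two transform calls can be mimicked in plain Python. This is a minimal sketch of what a fitted scikit-learn LabelEncoder does at prediction time; the class lists below are hypothetical examples, not the ones from the actual model:

```python
# A fitted LabelEncoder maps each category string to its index in the
# sorted list of classes seen during fitting. These lists are
# hypothetical stand-ins for the real encoders' classes_.
country_classes = ["Russian Federation", "United Kingdom", "United States"]
education_classes = [
    "Bachelor’s degree",
    "Less than a Bachelors",
    "Master’s degree",
    "Post grad",
]

def encode(classes, value):
    # Roughly equivalent to le.transform([value])[0]
    return classes.index(value)

row = [
    encode(country_classes, "United States"),
    encode(education_classes, "Master’s degree"),
    3.0,  # years of experience passes through unchanged
]
print(row)  # [2, 2, 3.0]
```

This is also why the encoders must be saved alongside the model: the app has to reproduce the exact string-to-integer mapping used during training, or the predictions would be meaningless.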

The next thing I did was create the second .py file to run the app.

The code

The first step was to import the needed libraries as shown below:

import streamlit as st
from predict_page import show_predict_page

The first line imports the streamlit library, and the second imports the show_predict_page function from the predict_page.py file.


The show_predict_page function is then called. And that's it! Done writing the code.

Thereafter, I went to the terminal and ran the command below:

conda activate [name_of_env]

This was to ensure that I was operating in the environment I had created earlier. Thereafter, I ran the app:

streamlit run

Streamlit then prints the link to the beautiful app page, which can be opened in the browser.
