Create GPTs to Automate Supply Chain Analytics

“The Supply Chain Analyst” is a Custom ChatGPT “GPT” that performs Pareto & ABC Analysis using sales data

Samir Saci
13 min read · Dec 6, 2023
Create GPTs to Automate Supply Chain Analytics — (Image by Author)

ABC Analysis in Supply Chain Management can be defined as a strategic product categorization method used for demand planning and inventory management.

In a previous article, you can find the methodology I used to automate this analysis with a web application deployed on the cloud.

ABC Analysis Simplified in 3 Steps [Link to the App] — (Image by Author)

This app attracted 1,000+ users, with excellent feedback from Supply Chain Professionals and Data Scientists.

As a next step, I wanted to explore the idea of improving the user experience using a Large Language Model agent as an interface with users.

My first experiment was the design of an automated supply chain control tower connected to a database using the framework LangChain.

Supply Chain Control Tower Agent with LangChain SQL Agent [Article Link] — (Image by Author)

This prototype worked perfectly on my computer. However, the question of how to productize and deploy the solution remained.

How can we easily deploy a GPT agent?

A few weeks later, OpenAI introduced a new feature allowing users to create custom versions of ChatGPT tailored for specific purposes.

This was an opportunity to easily create and deploy an agent automating Pareto and ABC analyses.

In this article, I will introduce “The Supply Chain Analyst”, a custom GPT agent designed to automate supply chain analytics tasks and interact with users using natural language.

💌 New articles straight in your inbox for free: Newsletter
📘 Your complete guide for Supply Chain Analytics: Analytics Cheat Sheet

SUMMARY
I. "The Supply Chain Analyst": A GPT for Supply Chain Analytics
Introduction of this custom GPT designed to automate analytics tasks with a
user interface powered by GPT.
1. How do you use "The Supply Chain Analyst"?
Users can get their analysis in just two steps.
2. Users Can Ask for Advanced Analyses
Leverage the GPT model equipped with data and context
II. Design a Custom GPT with Python
I have been using the GPT editor of ChatGPT to create this solution.
1. How does it work?
Upload data, scripts and add context to build a smart Agent
2. Navigating the Editor
Let me share a non-exhaustive list of encountered issues
III. Create the Supply Chain Analytics "Super Agent"
1. Create a Super-App with a UI Powered by GPT
Introduce several analytics products in a smart GPT agent
2. Module 2: Lean Six Sigma Statistical Tests
Support warehousing and transportation operations with statistical methods
3. Module 3: Inventory Management Rules
Optimize the process of restocking items to maintain an adequate supply.
4. A revolution in the design of analytics products
Revolutionize analytics product design by addressing key challenges

If you prefer to watch, you can check out the video version of this article.

You can access the GPT via this link: 👇

I. “The Supply Chain Analyst”: A GPT for Supply Chain Analytics

How do you use “The Supply Chain Analyst”?

How do I start the analysis?

I tried to make the user experience as smooth as possible; users can get their analysis in just two steps.

Usage Procedure in Two Steps [Try the GPT: Link]— (Image by Author)

Step 1: Data Upload
You can upload sales data that will be used to perform analyses and build visuals.

The input dataset should be in a ‘.csv’ file with the sales transactions by day for each reference, including the following columns:

  • SKU: This column represents the Stock Keeping Unit, a unique identifier for each item (SKU 1234: Evian Mineral Water 1.5L Bottle)
  • FAMILY: This column indicates the family or category to which the item belongs. (FAMILY-23: Beverages)
  • DAY: This column should include the day number, representing a time frame for the analysis.
  • QTY: This is the quantity sold or moved for the SKU on that specific day.
  • TO: This stands for Turnover, reflecting the total sales value of the SKU on that day.

The dataset should look like this

Data sample for example [Test the GPT: Link] — (Image by Author)
📝 Note: Users can upload their own datasets or ask the agent to use a sample dataset already loaded in the agent's local folder.
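
If you want to check the format before uploading real data, here is a minimal sketch that generates a file with the expected columns (all values and the filename are hypothetical):

import numpy as np
import pandas as pd

# Hypothetical example: 10 SKUs in 3 families over 90 days
rng = np.random.default_rng(seed=42)
records = []
for day in range(1, 91):
    for sku in range(1, 11):
        qty = int(rng.integers(0, 50))              # units sold that day
        records.append({"SKU": f"SKU-{sku:04d}",
                        "FAMILY": f"FAMILY-{(sku % 3) + 1}",
                        "DAY": day,
                        "QTY": qty,
                        "TO": round(qty * 9.90, 2)})  # turnover in euros, flat unit price
pd.DataFrame(records).to_csv("sales_sample.csv", index=False)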

Step 2: Parameter Selection
The agent asks the user to select only one parameter, metric_col, to keep things simple.

This is the metric used to perform the analyses:

  • ‘QTY’ if the objective is to analyse the sales distribution in (units)
  • ‘TO’ if the user wants to analyse the turnover distribution in (euros)

Our “Supply Chain Analyst” can now process the dataset and provide an initial analysis.

Initial Analysis [Test the GPT: Link] — (Image by Author)

The agent uses a template filled with calculated parameters to ensure that it provides consistent outputs for this initial answer.
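
The exact template is not shown here, but the idea is a fixed string whose placeholders are filled with the computed values, along these lines (variable names and percentages are purely illustrative):

# Illustrative sketch: a fixed template guarantees a consistent first answer
template = (
    "ABC Analysis based on {metric}:\n"
    "- {pct_a:.0f}% of the SKUs (class A) generate 80% of the {metric}\n"
    "- {pct_b:.0f}% of the SKUs (class B) generate the next 15%\n"
    "- The remaining SKUs (class C) account for the last 5%"
)
print(template.format(metric="turnover", pct_a=20, pct_b=30))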

Users Can Ask for Advanced Analyses

The agent has now stored the results in local variables that can be used to

  • Create visualizations like the Pareto or ABC charts (a minimal sketch follows this list)
  • Answer questions, prepare reports or any other request formulated in natural language.
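
For reference, the kind of Pareto chart the agent builds can be sketched in a few lines of matplotlib (assuming df_abc holds one row per SKU with its total for the chosen metric_col):

import matplotlib.pyplot as plt

def plot_pareto(df_abc, metric_col):
    # Sort SKUs by contribution and compute the cumulative share
    df = df_abc.sort_values(metric_col, ascending=False).reset_index(drop=True)
    cumulative = 100 * df[metric_col].cumsum() / df[metric_col].sum()
    sku_share = 100 * (df.index + 1) / len(df)
    plt.plot(sku_share, cumulative)
    plt.xlabel("% of SKUs")
    plt.ylabel(f"Cumulative % of {metric_col}")
    plt.title("Pareto Analysis")
    plt.show()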

Let’s imagine a user asking for a Pareto Chart:

Pareto Chart Answer [Test the GPT: Link]— (Image by Author)

💡 We can see that the Agent is

  • Adapting the metric based on the choice of the user
  • Showing the chart with additional valuable comments
  • Proposing the ABC chart as a further analysis
ABC Chart by the Agent [Test the GPT: Link] — (Image by Author)

💡 We can see that the Agent is

  • Providing explanations and legends for the charts
  • Asking for further analysis

The question probably on the tip of your tongue:

What’s the point of having a GPT for these simple analyses?

Indeed, they can be automated with a Python script that creates the visuals and generates the comments.

Let’s try to push our agent with a more advanced question:

Advanced Question [Test the GPT: Link]— (Image by Author)

The objective is to test whether our agent can

  • Use the outputs of the initial analysis for a request that has not been pre-prompted
  • Use GPT capabilities to generate text from a context with constraints

The output is quite satisfying

Example of Email [Test the GPT: Link]— (Image by Author)

💡 We can see that the Agent is using the outputs of the analysis to write a concise email sharing the key information.

However, such a short answer does not really show the full capabilities of the model.

👉 I’ll let you try by asking for a more detailed review of the results to see how the model performs.

Let’s try a trickier question:

The advanced question 2 [Test the GPT: Link]— (Image by Author)

The output is not bad

The output of the advanced question 2 [Test the GPT: Link] — (Image by Author)

I hope that this brief introduction triggered enough curiosity to test the solution.

🔔 Remember that you don’t need to share your data to test the solution; you can use the sample dataset included in the GPT.

Try it!

👇

Click on the picture to access the link — (Image by Author)

II. Design a Custom GPT with Python

I have been using the GPT editor of ChatGPT to create this solution.

How does it work?

Let me briefly share my experience using this editor if you want to create similar GPTs for other applications.

Screenshots of the ChatGPT editor [Test the GPT: Link]— (Image by Author)

The architecture of the initial prototype doing ABC analysis is very simple.

The architecture of the GPT [Test the GPT: Link]— (Image by Author)

In the editor, I have uploaded two files

  • A Python script, including the core model to process the data frame
  • A sample dataset that can be used if users don’t have data to test the solution.

The instruction prompt includes several sections:

  • Section 1: Introducing the process expected
    In this section, I explain to the agent how it is supposed to interact with users, use the Python script with the dataset and manage the outputs.
  • Section 2: How do you communicate the results?
    I added a section ensuring that the agent uses the right variables to generate the charts.
  • Section 3: Prepare the ‘What if’ scenarios
    In this section, I detail the expected behaviour if a user asks ‘How can I start?’ or ‘What is the input data format?’.

💡 Based on my short experience, I can share the following tips

  • If you use ChatGPT with the “Create” tab to configure your agent, keep a record of the outputs in a separate file.
    The model may erase previous instructions without your consent.
  • Only use the “Create” tab to initiate the design process, and add instructions manually to ensure the consistency of the agent’s behaviour.
  • Detail everything in your instructions.

Let me explain how I discovered the importance of the third tip.

Navigating the Editor: A List of Encountered Issues

The last tip is the most important, as the model may hallucinate (create new variables) or struggle with simple tasks.

Issue 1: Loading Local Files
The main issue I faced was getting the agent to use the uploaded scripts and datasets.

The architecture of the GPT [Test the GPT: Link] — (Image by Author)

The files uploaded by you (or users) are stored in the folder /mnt/data/.

Your agent may sometimes struggle to find the Python script or the datasets.

Therefore, I added additional instructions like

The agent will record the CSV filename in the variable filename and append the folder '/mnt/data' to the Python path to avoid errors, using this piece of code at the beginning: "import sys; sys.path.append('/mnt/data')".
The agent should ask the users if they want to use the sample dataset '/mnt/data/abc_template.csv' in case they don't have data.
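
In plain Python, the pattern the agent is expected to execute looks like this (abc_gpt.py is the script name mentioned later in this section; processing() is described under Issue 2):

import sys

# Uploaded files live in the sandbox folder /mnt/data/
sys.path.append('/mnt/data')

from abc_gpt import processing   # the uploaded core script

# Sample dataset shipped with the GPT, or the user's own upload
filename = '/mnt/data/abc_template.csv'
REPORT, df_abc, to_a, to_b = processing(filename, 'QTY')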

Issue 2: Managing Input and Output Variables

The Python script uses two variables for the input (filename and metric_col) and four output variables (REPORT, df_abc, to_a, to_b).

You need to explain that to the agent with explicit instructions like

2. Prompting the user to select a metric for calculation ('QTY' or 'TO').
- The agent will ask the user to choose between 'QTY' and 'TO'
- The agent will store the result in the variable metric_col
- The agent will import and run the function processing(filename, metric_col) of the script, with the variables filename and metric_col taken from the user's input
- The function processing() will return a string variable REPORT, the results dataframe df_abc and two parameters to_a and to_b
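
For reference, here is a skeleton of what processing() could look like, consistent with these inputs and outputs; the actual script is not reproduced in this article, and the 80%/15%/5% class cutoffs and the meaning of to_a/to_b are my assumptions:

import pandas as pd

def processing(filename, metric_col):
    # Sketch of an ABC analysis returning the four variables listed above
    df = pd.read_csv(filename)
    # Total contribution per SKU, sorted in descending order
    df_abc = (df.groupby('SKU', as_index=False)[metric_col].sum()
                .sort_values(metric_col, ascending=False)
                .reset_index(drop=True))
    cum_share = df_abc[metric_col].cumsum() / df_abc[metric_col].sum()
    # Assumed class cutoffs: A up to 80%, B up to 95%, C beyond
    df_abc['ABC'] = pd.cut(cum_share, bins=[0, 0.80, 0.95, 1.0],
                           labels=['A', 'B', 'C'], include_lowest=True)
    # Assumption: to_a / to_b hold the share of SKUs in classes A and B (in %)
    to_a = 100 * (df_abc['ABC'] == 'A').mean()
    to_b = 100 * (df_abc['ABC'] == 'B').mean()
    REPORT = (f"{to_a:.0f}% of the SKUs (class A) generate 80% of the {metric_col}; "
              f"the next {to_b:.0f}% (class B) generate 15% more.")
    return REPORT, df_abc, to_a, to_b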

Issue 3: The agent lacks concision

It tends to share all the information in the instruction prompt with the users.

Therefore, I have explicitly stated what I don't want it to share:

- The agent should ask the users if they want to use the sample dataset '/mnt/data/abc_template.csv' in case they don't have data. The agent will not mention the name of the file.
- The GPT maintains a professional tone, never mentions 'abc_gpt.py' and sticks to the steps presented above except if the user has additional requests.

Issue 4: The agent needs users’ guidance

I have to admit that I lied to you in the previous section. (I apologize 🙏)

For the second tricky question, the output was not that straightforward

The advanced question 2 [Test the GPT: Link] — (Image by Author)

Indeed, the initial output was not successful

The advanced question 2’s initial output [Test the GPT: Link] — (Image by Author)

The agent struggled to run two analyses with different parameters (metric_col = ‘QTY’ and ‘TO’), store the data and plot the two charts.

Therefore, I had to find a way to guide it

Additional Guidance [Test the GPT: Link] — (Image by Author)
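
Under the hood, the guided request boils down to running the script twice and plotting the two charts side by side, something like this sketch (reusing the assumed df_abc structure from above):

import matplotlib.pyplot as plt
from abc_gpt import processing   # uploaded script, as in Issue 1

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, metric_col in zip(axes, ['QTY', 'TO']):
    # One full run of the analysis per metric
    REPORT, df_abc, to_a, to_b = processing('/mnt/data/abc_template.csv', metric_col)
    cum = 100 * df_abc[metric_col].cumsum() / df_abc[metric_col].sum()
    ax.plot(100 * (df_abc.index + 1) / len(df_abc), cum)
    ax.set_xlabel("% of SKUs")
    ax.set_ylabel(f"Cumulative % of {metric_col}")
    ax.set_title(f"Pareto ({metric_col})")
plt.tight_layout()
plt.show()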

I do not yet have a solution for this kind of issue as it’s impossible to pre-prompt all the potential scenarios of user requests.

❓ Do you have an idea? Please share it in the comment section!

Now that I have introduced my approach to designing this GPT, we can explore how to improve it by adding additional modules.

III. Create the Supply Chain Analytics “Super Agent”

Create a Super-App with a UI Powered by GPT

The initial objective was to test the capacity of custom GPT agents to improve the user experience of analytics products.

With our GPT “The Supply Chain Analyst”, users can

  • Launch a specific analysis using natural language
  • Interact with a GPT agent equipped with a context, Python scripts and processing outputs stored in local variables
  • Ask for different output formats like emails, reports or even images
  • Challenge the agent by asking about the hypotheses used and correcting them

Can we expand this to other analytics models?

I have shortlisted six analytics models to be packaged into two additional modules.

Module 2: Lean Six Sigma Statistical Tests

This module supports warehousing and transportation operations with statistical methods of process improvement.

Lean Six Sigma (LSS) is a stepwise approach to continuous improvement following 5 steps (Define, Measure, Analyze, Improve and Control) to improve processes or solve problems with unknown causes.

I have a series of three articles in which I share three operational cases using LSS statistical methods to solve a problem:

  • Lean Six Sigma with Python — Kruskal Wallis Test
    Assess the effectiveness of warehouse operators by comparing the productivity of two samples of operators (a minimal example follows this list).
  • Lean Six Sigma with Python — Logistic Regression
    How much bonus do you need to provide to warehouse operators to reach your productivity targets?
  • Lean Six Sigma with Python — Chi-Squared Test
    Find the root cause of the shortage of drivers impacting your transportation network with a statistical test.
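
As a taste of what this module would automate, here is a minimal Kruskal-Wallis test with scipy on two hypothetical samples of operator productivity (lines picked per hour; all numbers are made up):

from scipy import stats

# Hypothetical productivity samples for two teams of warehouse operators
team_1 = [52, 48, 55, 60, 47, 53, 58]
team_2 = [45, 41, 50, 44, 46, 43, 49]

stat, p_value = stats.kruskal(team_1, team_2)
print(f"H statistic: {stat:.2f}, p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference in productivity between the two teams is significant.")
else:
    print("No significant difference detected between the two teams.")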

The objective is to create a module that would

  • Understand the operational problem of the users
  • Propose the best statistical solutions
  • Perform the calculation and provide recommendations
Module 2 Architecture — (Image by Author)
📝 Note: As soon as the current prototype (built with LangChain) is finalized, I'll work on adding this module.

Module 3: Inventory Management Rules

Replenishment in inventory management refers to the process of restocking items to maintain an adequate supply.

🧠 For example, let’s imagine a retail store of a popular brand of sneakers with a designated shelf stock level of 50 pairs.
After they sold 20 pairs, a replenishment is the action of ordering 20 pairs to restore the shelf stock back to its optimal level of 50.

This ensures that the store consistently meets customer demand without overstocking or running out of products.

Inventory management rules aim to build a replenishment policy that minimizes your ordering, holding and shortage costs.

I published a series of three articles introducing different policies

  • Inventory Management for Retail — Deterministic Demand
    This article introduces a methodology to implement the Economic Order Quantity (EOQ) for a replenishment policy assuming constant demand (a one-line sketch follows this list).
  • Inventory Management for Retail — Stochastic Demand
    This article provides a detailed tutorial on implementing the continuous review policy Order Point, Order Quantity (s, Q).
  • Inventory Management for Retail — Periodic Review Policy
    This final article focuses on the periodic review policy Order-Up-To-Level (R, S).
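
As a concrete example, the EOQ from the first article reduces to the classic Wilson formula, sketched here with hypothetical demand and cost parameters:

from math import sqrt

# Hypothetical parameters
D = 12000   # annual demand (units/year)
K = 500     # fixed ordering cost per order (euros)
h = 1.2     # holding cost per unit per year (euros)

EOQ = sqrt(2 * D * K / h)   # Economic Order Quantity (Wilson formula)
print(f"EOQ: {EOQ:.0f} units per order, i.e. {D / EOQ:.1f} orders per year")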

I want to create a module where users can see the impact of the three rules on their costs and potential shortages.

Module 3 Architecture — (Image by Author)

The GPT architecture is similar:

  • Three models are loaded in different Python scripts.
  • Three articles have been loaded to provide context to the agent.
  • Additional prompts teach the model how to compute the metrics needed to assess the models.
📝 Note: As with Module 2, I am working on a prototype (built with LangChain) before implementing it in “The Supply Chain Analyst”.

A revolution in the design of analytics products

Solutions using LLMs, like custom GPTs, will revolutionize analytics product design by addressing key challenges such as the user experience.

The future user interface of analytics products will be a smart agent integrating advanced models and the processing capabilities of GPT models.

This shift promises a more engaging, efficient and user-friendly approach to supply chain analytics and decision-making processes.

🎬 If you share this enthusiasm, please suggest other analytics products that can be boosted with GPTs in the comments section!

About Me

Let’s connect on LinkedIn and Twitter; I am a Supply Chain Engineer using data analytics to improve logistics operations and reduce costs.

If you are interested in Data Analytics and Supply Chain, have a look at my website

💡 Follow me on Medium for more articles related to 🏭 Supply Chain Analytics, 🌳 Sustainability and 🕜 Productivity.

📘 Your complete guide for Supply Chain Analytics
