Create GPTs to Automate Supply Chain Analytics

“The Supply Chain Analyst” is a Custom ChatGPT “GPT” that performs Pareto & ABC Analysis using sales data.

Samir Saci
13 min read · Dec 6, 2023
A diagram showing the relationship between Excel file input, data analysis, and communication processes. It depicts a workflow automated with a custom GPT where an Excel file leads to the automation of decision-making processes for supply chain optimization. The output involves a GPT called Supply Chain Analyst performing classification tasks, with the final result displayed in an understandable format for communication.
Create GPTs to Automate Supply Chain Analytics — (Image by Author)

ABC Analysis in Supply Chain Management can be defined as a strategic product categorization method used for demand planning and inventory management.

In a previous article, I described the methodology I used to automate this analysis using a web application deployed on the cloud.

A process flow diagram for the ABC analysis in Supply Chain visualizing steps from Excel data input to analysis and filtering using sliders. It illustrates how data is classified and analyzed into distinct categories (A, B, C) based on selected variables, which supports decision-making.
ABC Analysis Simplified in 3 Steps — (Image by Author)

This app gathered 1,000+ users with excellent feedback from Supply Chain Professionals and Data Scientists.

As a next step, I wanted to explore the idea of improving the user experience using a Large Language Model agent as an interface with users.

My first experiment involved designing an automated supply chain control tower connected to a database using the LangChain framework.

A diagram showing an automated supply chain control tower workflow with GPT and Langchain starting with ambiguous input (represented by question marks), proceeding through SQL queries, machine learning analysis, and generating insights that are communicated to users in an understandable form.
Supply Chain Control Tower Agent with LangChain SQL Agent [Article Link] — (Image by Author)

This prototype worked perfectly on my computer. However, the question of productizing and deploying this solution remained.

How can we easily deploy a GPT agent?

A few weeks later, OpenAI introduced a new feature allowing users to create custom versions of ChatGPT tailored for specific purposes.

This is an opportunity for me to easily create and deploy an agent to automate Pareto and ABC analyses.

In this article, I will introduce “The Supply Chain Analyst”, a custom GPT agent designed to automate supply chain analytics tasks and interact with users using natural language.

SUMMARY
I. "The Supply Chain Analyst": A GPT for Supply Chain Analytics
Introduction of this custom GPT designed to automate analytics tasks with a
user interface powered by GPT.
1. How do you utilize "The Supply Chain Analyst"?
Users can get their analysis in only two steps.
2. Users Can Ask for Advanced Analyses
Leverage the GPT model equipped with data and context
II. Design Approach for this first module
I have been using the GPT editor of ChatGPT to create this solution.
1. How does it work?
Upload data, scripts and add context to build a smart Agent
2. Navigating the Editor
Let me share a non-exhaustive list of encountered issues
III. Create the Supply Chain Analytics "Super Agent"
1. Create a Super-App with a UI Powered by GPT
Introduce several analytics products in a smart GPT agent
2. Module 2: Lean Six Sigma Statistical Tests
Support warehousing and transportation operations with statistical methods
3. Module 3: Inventory Management Rules
Optimize the process of restocking items to maintain an adequate supply.
4. A revolution in the design of analytics products
Revolutionize analytics product design by addressing key challenges

“The Supply Chain Analyst”: A GPT for Supply Chain Analytics

How do you utilize “The Supply Chain Analyst”?

How do I start the analysis?

I tried to make the user experience as smooth as possible; users can get their analysis in only two steps.

A step-by-step guide depicting data upload and variable selection in an automated GPT called the Supply Chain Analyst. Users can upload their dataset or use a sample file, followed by selecting metrics (quantity or turnover). The system provides sales distribution analysis using Pareto and ABC charts to visualize and communicate key insights.
Usage Procedure in Two Steps [Try the GPT: Link]— (Image by Author)

Step 1: Data Upload
You can upload sales data that will be used to perform analyses and build visuals.

The input dataset should be in a ‘.csv’ file with the sales transactions by day for each reference, including the following columns:

  • SKU: This column represents the Stock Keeping Unit, a unique identifier for each item (SKU 1234: Evian Mineral Water 1.5L Bottle)
  • FAMILY: This column indicates the family or category to which the item belongs. (FAMILY-23: Beverages)
  • DAY: This column should include the day number, representing a time frame for the analysis.
  • QTY: This is the quantity sold or moved for the SKU on that specific day.
  • TO: This stands for Turnover, reflecting the total sales value of the SKU on that day.

The dataset should look like this:

Data sample for example [Test the GPT: Link] — (Image by Author)
📝 Note: Users can upload their own datasets or ask the agent to use a sample
dataset already loaded in the agent's local folder.
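
To make the expected schema concrete, here is a minimal pandas sketch of what a valid input file could look like (the values are illustrative, not the GPT's bundled sample dataset):

```python
import pandas as pd

# Illustrative rows following the expected schema (not the bundled abc_template.csv)
sample = pd.DataFrame({
    "SKU": ["SKU-0001", "SKU-0001", "SKU-0002", "SKU-0003"],
    "FAMILY": ["FAMILY-23", "FAMILY-23", "FAMILY-07", "FAMILY-23"],
    "DAY": [1, 2, 1, 1],               # day number of the transaction
    "QTY": [120, 80, 45, 10],          # units sold that day
    "TO": [180.0, 120.0, 99.0, 25.0],  # turnover (sales value) that day
})

# A quick check the agent could run before processing an upload
expected_cols = {"SKU", "FAMILY", "DAY", "QTY", "TO"}
missing = expected_cols - set(sample.columns)
```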

Step 2: Parameter Selection
To keep things simple, the agent asks the user to select a single parameter, metric_col.

This is the metric used to perform the analyses:

  • ‘QTY’ if the objective is to analyse the sales distribution (in units)
  • ‘TO’ if the user wants to analyse the turnover distribution (in euros)

Our “Supply Chain Analyst” can now process the dataset and provide an initial analysis.
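
Under the hood, the Pareto/ABC step boils down to ranking SKUs by the chosen metric and cutting the cumulative share into classes. A hypothetical sketch of this logic (not the author's actual script; the 80/95% cut-offs are the common convention, not necessarily the GPT's):

```python
import pandas as pd

def abc_classify(df: pd.DataFrame, metric_col: str = "QTY") -> pd.DataFrame:
    """Rank SKUs by metric_col and assign ABC classes on cumulative share."""
    agg = (df.groupby("SKU")[metric_col].sum()
             .sort_values(ascending=False)
             .to_frame("TOTAL"))
    # Cumulative percentage of the grand total, from the biggest SKU down
    agg["CUM_PCT"] = agg["TOTAL"].cumsum() / agg["TOTAL"].sum() * 100
    # Conventional cut-offs: A up to 80%, B up to 95%, C the remainder
    agg["ABC"] = pd.cut(agg["CUM_PCT"], bins=[0, 80, 95, 100],
                        labels=["A", "B", "C"], include_lowest=True)
    return agg.reset_index()

demo = pd.DataFrame({"SKU": ["S1", "S1", "S2", "S3", "S4"],
                     "QTY": [500, 300, 150, 40, 10]})
result = abc_classify(demo, "QTY")
```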

A screenshot of a GPT conversation providing an ABC analysis based on the quantity metric. It categorizes items into A, B, and C classes based on sales contribution and demand variability, highlighting the strategic importance of focusing on high-priority SKUs while monitoring items with unstable demand.
Initial Analysis [Test the GPT: Link] — (Image by Author)

The agent uses a template filled with calculated parameters to ensure that it provides consistent outputs for this initial answer.

Users Can Ask for Advanced Analyses

The agent has now stored the results in local variables that can be used to

  • Create visualizations like the Pareto or ABC charts
  • Answer questions, prepare reports, or handle any other request formulated in natural language.

Let’s imagine a user asking for a Pareto Chart:

A conversation with the GPT “The Supply Chain Analyst” presenting a Pareto chart that visualizes SKU sales distribution. It follows the 80/20 rule, showing that a small percentage of SKUs contributes to a large portion of total sales. The chart includes insights on inventory management and prioritization strategies.
Pareto Chart Answer [Test the GPT: Link]— (Image by Author)

💡 We can see that the Agent is

  • Adapting the metric based on the choice of the user
  • Showing the chart with additional valuable comments
  • Proposing the ABC chart as a further analysis
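
For readers who want to reproduce this kind of chart outside the GPT, a Pareto chart is simply a bar chart of the sorted totals with a cumulative-share line on a secondary axis. A minimal matplotlib sketch (my own illustration with made-up totals, not the agent's generated code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative totals per SKU, already aggregated and sorted descending
totals = pd.Series([800, 150, 40, 10], index=["S1", "S2", "S3", "S4"])
cum_pct = totals.cumsum() / totals.sum() * 100

fig, ax1 = plt.subplots(figsize=(8, 4))
ax1.bar(totals.index, totals.values, color="tab:blue")
ax1.set_xlabel("SKU")
ax1.set_ylabel("Quantity sold (units)")

ax2 = ax1.twinx()  # secondary axis for the cumulative share
ax2.plot(totals.index, cum_pct.values, color="tab:red", marker="o")
ax2.axhline(80, linestyle="--", color="grey")  # 80/20 reference line
ax2.set_ylabel("Cumulative share of sales (%)")
ax2.set_ylim(0, 105)

fig.tight_layout()
fig.savefig("pareto_chart.png")
```
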
A conversation with the custom GPT “The Supply Chain Analyst” displaying an ABC analysis chart based on the quantity sold. The chart categorizes SKUs into A, B, and C classes, with additional analysis on demand variability and its impact on inventory management strategies. Red, green, and blue dots represent different SKU classifications.
ABC Chart by the Agent [Test the GPT: Link] — (Image by Author)

💡 We can see that the Agent is

  • Providing explanations and legends for the charts
  • Asking for further analysis

The question that may be on the tip of your tongue:

What’s the point of having a GPT for these simple analyses?

Indeed, they can be automated with a Python script that creates the visuals and generates the comments.

Let’s try to push our agent with a more advanced question:

A conversation with the custom GPT “The Supply Chain Analyst” where a user requests an email summarizing sales contribution and SKU complexity. The email draft includes visuals of the Pareto and ABC charts, providing key takeaways for inventory management, with a professional tone addressed to a logistics director.
Advanced Question [Test the GPT: Link]— (Image by Author)

The objective is to test if our agent can

  • Use the outputs of the initial analysis for a request that has not been pre-prompted
  • Use GPT capabilities to generate text from a context with constraints

The output is quite satisfying

A mock email generated by the custom GPT “The Supply Chain Analyst”, summarizing the sales contribution and complexity of items in a portfolio. It includes concise analysis, visual representations (Pareto and ABC charts), and commentary, formatted for professional communication with management.
Example of Email [Test the GPT: Link]— (Image by Author)

💡 We can see that the Agent is using the outputs of the analysis to write a concise email sharing the key information.

However, such concision does not really show the full capabilities of the model.

👉 I’ll let you try by asking for a more detailed review of the results to see how the model performs.

Let’s try a trickier question:

The user interacts with the Supply Chain Analyst GPT, requesting a detailed comparison of sales performance using side-by-side Pareto charts based on two key metrics: ‘Quantity’ (QTY) and ‘Turnover’ (TO). This request enables the user to visually compare how much of their sales volume and revenue are concentrated among the top-performing SKUs, helping identify inventory priorities based on either volume or value.
The advanced question 2 [Test the GPT: Link]— (Image by Author)

The output is not bad

The resulting image shows two Pareto charts generated by the GPT “The Supply Chain Analyst”. On the left the chart based on quantity (QTY), displaying the cumulative percentage of total quantity sold per SKU. On the right is the chart based on turnover (TO), showing the cumulative percentage of revenue contribution. Both charts visually represent how a percentage of SKUs contributes to the majority of sales or revenue, following the 80/20 rule. This comparison aids inventory prioritization.
The output of the advanced question 2 [Test the GPT: Link] — (Image by Author)

I hope that this brief introduction triggered enough curiosity to test the solution.

🔔 Remember that you don’t need to share your data to test the solution; you can use the sample dataset included in the GPT.

Try it!

👇

The image promotes the “Supply Chain Analyst” GPT, encouraging users to automate their ABC analysis using sales data. With over 8,000 interactions, it emphasizes the GPT’s utility in helping supply chain professionals streamline inventory management and demand forecasting. Users can access the GPT to upload sales data, generate insights, and receive actionable recommendations for inventory control, providing a practical AI tool for data-driven decision-making.
Click on the picture to access the link — (Image by Author)

You can access the GPT via this link: 👇

Design a Custom GPT with Python

I have been using ChatGPT's GPT editor to create this solution.

How does it work?

Let me briefly share my experience using this editor if you want to create similar GPTs for other applications.

The interface allows users to create custom GPT models by naming the GPT, providing a description of its function, and adding instructions on how the GPT should behave. Users can tailor the model for specific tasks such as performing analytics, generating visuals, or other custom operations. The interface is simple, with options to either create or configure the GPT for further customization.
Screenshots of the ChatGPT editor [Test the GPT: Link]— (Image by Author)

The architecture of the initial prototype, which does ABC analysis, is very simple.

This diagram outlines how the custom GPTs for Supply Chain Analytics work. It starts with the user asking a question or requesting an analysis, the agent retrieves data (from the provided dataset or sample), processes it with a core Python script, and returns output as charts or comments. The flow clearly illustrates three key steps: initial prompt, data processing using the script, and final analysis outputs that are used by “The Supply Chain Analyst”.
The architecture of the GPT [Test the GPT: Link]— (Image by Author)

In the editor, I have uploaded two files:

  • A Python script, including the core model to process the data frame
  • A sample dataset that can be used if users don’t have data to test the solution.

The instructions prompt includes several sections:

  • Section 1: Introducing the process expected
    In this section, I explain to the agent how it is supposed to interact with users, use the Python script with the dataset and manage the outputs.
  • Section 2: How do you communicate the results?
    I added a section ensuring the agent uses the right variables to generate the charts.
  • Section 3: Prepare the ‘What if’ scenarios
    In this section, I detail the expected behaviour if a user asks, ‘How can I start?’ or ‘What is the input data format?’.

💡 Based on my short experience, I can share the following tips:

  • If you use the “Create” tab of ChatGPT to configure your agent, keep a record of the outputs in a separate file.
    The model may erase previous instructions without warning.
  • Only use the “Create” tab to initiate the design process, then manually add instructions to ensure the consistency of the agent’s behaviour.
  • Detail everything in your instructions.

Let me explain how I discovered the importance of the third tip.

Navigating the Editor: A List of Encountered Issues

The last tip is the most important, as the model may have hallucinations (creating new variables) or struggle with simple tasks.

Issue 1: Loading Local Files
The main issue I faced was getting the agent to use the uploaded scripts and datasets.

This visual breaks down the process of performing supply chain analysis with custom GPTs. It shows the flow from the user providing input, the agent running the necessary Python scripts with data, and finally outputting the results (such as an ABC analysis) back to the user. The agent guides the user through selecting key variables and producing charts for decision-making support.
The architecture of the GPT [Test the GPT: Link] — (Image by Author)

The files uploaded by you (or users) are stored in the folder /mnt/data/.

Your agent may sometimes struggle to find the Python script or the datasets.

Therefore, I added additional instructions like:

The agent will record the CSV filename in the variable filename and append
the folder '/mnt/data' to avoid errors, using this piece of code at the
beginning: "import sys; sys.path.append('/mnt/data')"
The agent should ask the users if they want to use the sample dataset
'/mnt/data/abc_template.csv' in case they don't have data.

Issue 2: Managing Input and Output Variables

The Python script uses two variables for the input (filename and metric_col) and four output variables (REPORT, df_abc, to_a, to_b).

You need to explain that to the agent with explicit instructions like:

2. Prompting the user to select a metric for calculation ('QTY' or 'TO').
- The agent will ask the user to choose between 'QTY' and 'TO'
- The agent will store the result in the variable metric_col
- The agent will import and run the function processing(filename, metric_col)
of the script, with the variables filename and metric_col taken from the
user's input
- The function processing() will return a string variable REPORT, the
results dataframe df_abc and two parameters to_a and to_b
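
Put together, the contract these instructions describe can be sketched as a single Python entry point. This is my hypothetical reconstruction matching the input/output names above; the actual uploaded script and its class thresholds may differ:

```python
import pandas as pd

def processing(filename: str, metric_col: str):
    """Hypothetical stand-in for the core script's entry point.

    Inputs : filename (CSV path), metric_col ('QTY' or 'TO')
    Outputs: REPORT (summary string), df_abc (results frame),
             to_a, to_b (share of the total covered by A and A+B)
    """
    df = pd.read_csv(filename)
    agg = (df.groupby("SKU")[metric_col].sum()
             .sort_values(ascending=False).to_frame("TOTAL"))
    agg["CUM_PCT"] = agg["TOTAL"].cumsum() / agg["TOTAL"].sum() * 100
    agg["ABC"] = pd.cut(agg["CUM_PCT"], bins=[0, 80, 95, 100],
                        labels=["A", "B", "C"], include_lowest=True)
    df_abc = agg.reset_index()
    total = agg["TOTAL"].sum()
    to_a = float(df_abc.loc[df_abc["ABC"] == "A", "TOTAL"].sum() / total * 100)
    to_b = float(df_abc.loc[df_abc["ABC"].isin(["A", "B"]), "TOTAL"].sum() / total * 100)
    REPORT = (f"{len(df_abc)} SKUs analysed on '{metric_col}': "
              f"class A covers {to_a:.1f}% of the total, classes A+B {to_b:.1f}%.")
    return REPORT, df_abc, to_a, to_b

# Tiny demo file so the sketch is self-contained
pd.DataFrame({"SKU": ["S1", "S1", "S2", "S3"],
              "QTY": [500, 300, 150, 50]}).to_csv("demo_sales.csv", index=False)
REPORT, df_abc, to_a, to_b = processing("demo_sales.csv", "QTY")
```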

Issue 3: The agent lacks concision

It tends to share all the information in the instruction prompt with the users.

Therefore, I explicitly stated what I did not want it to share:

- The agent should ask the users if they want to use the sample dataset
'/mnt/data/abc_template.csv' in case they don't have data.
The agent will not mention the name of the file.
The GPT maintains a professional tone, never mentions 'abc_gpt.py' and
sticks to the steps presented above except if the user has additional requests.

Issue 4: The agent needs users’ guidance

I have to admit that I lied to you in the previous section. (I apologize 🙏)

For the second tricky question, the output was not that straightforward

The user interacts with the Supply Chain Analyst GPT, requesting a detailed comparison of sales performance using side-by-side Pareto charts based on two key metrics: ‘Quantity’ (QTY) and ‘Turnover’ (TO). This request enables the user to visually compare how much of their sales volume and revenue are concentrated among the top-performing SKUs, helping identify inventory priorities based on either volume or value.
The advanced question 2 [Test the GPT: Link] — (Image by Author)

Indeed, the initial output was not successful

The user suggests splitting the analysis for ‘QTY’ and ‘TO’ metrics into two separate tasks. The user also provides specific instructions on renaming variables and storing outputs in distinct data frames. The goal is to streamline the process and allow the agent to handle both analyses separately, then merge the results for visual comparison.
The advanced question 2’s initial output [Test the GPT: Link] — (Image by Author)

The agent struggled to run two analyses with different parameters (metric_col = ‘QTY’ and ‘TO’), store the data and plot the two charts.

Therefore, I had to find a way to guide it:

Additional Guidance [Test the GPT: Link] — (Image by Author)

I do not yet have a solution for this kind of issue as it’s impossible to pre-prompt all the potential scenarios of user requests.

❓ Do you have an idea? Please share it in the comment section!

Now that I have introduced my approach to designing this GPT, we can explore how adding additional modules can improve it.

Create the Supply Chain Analytics “Super Agent”

Create a Super-App with a UI Powered by GPT

The initial objective was to test the capacity of custom GPT agents to improve the user experience of analytics products.

With our GPT, “The Supply Chain Analyst”, users can

  • Launch a specific analysis using natural language
  • Interact with a GPT agent equipped with a context, Python scripts and processing outputs stored in local variables
  • Ask for different output formats like email, reports or even images
  • Challenge the agent by asking about the hypotheses used and correcting them

Can we expand this to other analytics models?

I have shortlisted six analytics models to be packaged into two additional modules.

Module 2: Lean Six Sigma Statistical Tests

This module supports warehousing and transportation operations with statistical methods of process improvement.

Lean Six Sigma (LSS) is a stepwise approach to continuous improvement following 5 steps (Define, Measure, Analyze, Improve and Control) to improve processes or solve problems with unknown causes.

I have a series of three articles in which I share three operational cases using LSS statistical methods to solve a problem:

  • Lean Six Sigma with Python — Kruskal Wallis Test
    Assess the effectiveness of warehouse operators by comparing the productivity of two samples of operators.
  • Lean Six Sigma with Python — Logistic Regression
    How much bonus do you need to provide to warehouse operators to reach your productivity targets?
  • Lean Six Sigma with Python — Chi-Squared Test
    Find the root cause of the shortage of drivers impacting your transportation network with a statistical test.
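
To give a feel for the first test, the Kruskal-Wallis H statistic compares the rank sums of the samples. Here is a bare-bones version without tie correction (in practice scipy.stats.kruskal does this, including the p-value; the productivity figures below are made up):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) for k samples."""
    # Pool all observations, remembering which group each came from
    pooled = sorted((value, gi) for gi, g in enumerate(groups) for value in g)
    n = len(pooled)
    # Sum the 1-based ranks per group (ties ignored for simplicity)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return (12 / (n * (n + 1))
            * sum(rs ** 2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))

# Illustrative productivity samples (lines prepared per hour) for two teams
team_a = [152, 148, 160, 155, 149]
team_b = [139, 142, 138, 145, 141]
h = kruskal_wallis_h(team_a, team_b)
# An H above ~3.84 (chi-squared, 1 dof, 5%) suggests the teams really differ
```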

The objective is to create a module that would

  • Understand the operational problem of the users
  • Propose the best statistical solutions
  • Perform the calculation and provide recommendations
The image shows the agent architecture to process the user’s request for two different analyses simultaneously. After receiving detailed instructions, the agent completes both tasks, prompting the user to guide it further. The flow illustrates how the agent resolves complex requests efficiently to promote Supply Chain Analytics with custom GPTs like “The Supply Chain Analyst”.
Module 2 Architecture — (Image by Author)
📝 Note: As soon as the current prototype (built with LangChain) is
finalized, I'll work on adding this module.

Module 3: Inventory Management Rules

Replenishment in inventory management refers to the process of restocking items to maintain an adequate supply.

🧠 For example, let’s imagine a retail store of a popular brand of sneakers with a designated shelf stock level of 50 pairs.
After they sold 20 pairs, a replenishment is the action of ordering 20 pairs to restore the shelf stock back to its optimal level of 50.

This ensures that the store consistently meets customer demand without overstocking or running out of products.

Inventory management rules aim to build a replenishment policy that minimizes your ordering, holding and shortage costs.

I published a series of three articles introducing different policies

  • Inventory Management for Retail — Deterministic Demand
    This article introduces a methodology to implement the Economic Order Quantity (EOQ) for a replenishment policy assuming constant demand.
  • Inventory Management for Retail — Stochastic Demand
    This article provides a detailed tutorial on implementing the continuous review policy Order Point, Order Quantity (s, Q).
  • Inventory Management for Retail — Periodic Review Policy
    This final article focuses on the periodic review policy Order-Up-To-Level (R, S).
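
For the deterministic case above, the optimal order quantity has a closed form, the Wilson/EOQ formula. A minimal illustration with made-up cost figures:

```python
from math import sqrt

def economic_order_quantity(annual_demand: float, ordering_cost: float,
                            holding_cost: float) -> float:
    """EOQ = sqrt(2 * D * S / H): the order size minimising the sum of
    ordering and holding costs when demand D is constant."""
    return sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Illustrative figures: 2,000 units/year demand, 50 EUR per order,
# 1 EUR per unit per year of holding cost
q_star = economic_order_quantity(2000, 50, 1)
# With these numbers the optimal order size is about 447 units
```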

I want to create a module where users can see the impact of the 3 rules on their costs and potential shortages.

This image showcases a conversational workflow between the user and the Supply Chain GPT. This begins with the user asking questions, such as how to start or what to minimize. The agent responds with prompts and outputs based on core Python scripts for supply chain analysis. It handles different scenarios and “what-if” questions by adapting the core module and providing specific insights like minimizing stockouts or ordering costs. The flow helps guide decision-making through interaction.
Module 3 Architecture — (Image by Author)

The GPT architecture is similar:

  • There are three models loaded in different Python scripts.
  • Three articles have been loaded to provide context to the agent.
  • Additional prompts teach the model how to compute the metrics needed to assess the models.
📝 Note: Like module 2, I am working on a prototype (built with LangChain)
before implementing it in "The Supply Chain Analyst".

A revolution in the design of analytics products

Solutions using LLMs like GPTs will revolutionize analytics product design by addressing critical challenges like enhancing user experience.

The future user interface of analytics products will be a smart agent integrating advanced models and the processing capabilities of GPT models.

This shift promises a more engaging, efficient and user-friendly approach to supply chain analytics and decision-making processes.

🎬 If you share this enthusiasm, please comment on this post with other analytics products that can be boosted with GPTs!

If you need a tutorial on how to use this GPT

About Me

Let’s connect on LinkedIn and Twitter; I am a Supply Chain Engineer using data analytics to improve logistics operations and reduce costs.

For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via Logigreen Consulting.

If you are interested in Data Analytics and Supply Chain, look at my website.

💡 Follow me on Medium for more articles related to 🏭 Supply Chain Analytics, 🌳 Sustainability and 🕜 Productivity.

💌 New articles straight in your inbox for free: Newsletter
📘 Your complete guide for Supply Chain Analytics: Analytics Cheat Sheet


Samir Saci

Top Supply Chain Analytics Writer — Follow my journey using Data Science for Supply Chain Sustainability 🌳 and Productivity ⌛ https://samirsaci.com/about