Continuing from the previous comment: for the same SKU, the values of SeasonalityModel, Profile1, and Level changed between two runs on different days, even though the input data remained the same. I am unsure what caused the change in these values.

Code before dispersion:


ItemsWeek.ItemLife = 1
ItemsWeek.CumSumMinusOneExt = 0
where ItemsWeek.Monday >= firstDate
  ItemsWeek.CumSumMinusOneExt = (sum(ItemsWeek.ItemLife) by ItemsWeek.Sku scan ItemsWeek.Week) - 1
ItemsWeek.CumSumMinusOneExtMonth = ceiling(ItemsWeek.CumSumMinusOneExt / 8)
ItemsWeek.WeekNum = rank() by Items.Sku scan -monday(Week)
nbWeeks = same(ItemsWeek.WeekNum) when(ItemsWeek.Week == week(today()))
ItemsWeek.ItemLifeWeight = 0.3 + 1.2 * (ItemsWeek.WeekNum / nbWeeks)^(1/3)
ItemsWeek.IsCache = ItemsWeek.Monday >= firstDate and ItemsWeek.Monday < today
ItemsWeek.Cache = if ItemsWeek.IsCache then 1 else 0
expect table Items max 30000
expect table ItemsWeek max 5m
table YearWeek[YearWeek] = by ((Week.Week - week(firstDate)) mod 52)
Items.SeasonalityGroup = Items.Category
table Groups[SeasonalityGroup] = by Items.SeasonalityGroup
table SeasonYW max 1m = cross(Groups, YearWeek)
Items.Level = avg(ItemsWeek.DemandQty) when(ItemsWeek.Monday >= today - 365 and ItemsWeek.Monday < today)
Items.Level = if Items.Level == 0 then -10 else
              if log(Items.Level) < -10 then -10 else
              if log(Items.Level) > 10 then 10 else
              log(Items.Level)
maxEpochs = 1000
autodiff Items epochs:maxEpochs learningRate:0.01 with
  params Items.Affinity1 in [0..] auto(0.5, 0.166)
  params Items.Affinity2 in [0..] auto(0.5, 0.166)
  params Items.Level in [-10..10]
  params Items.LevelShift in [-0.5..0.5] auto(0, 0)
  params SeasonYW.Profile1 in [0..1] auto(0.5, 0.1)
  params SeasonYW.Profile2 in [0..1] auto(0.5, 0.1)
  SumAffinity = Items.Affinity1 + Items.Affinity2
  YearWeek.SeasonalityModel = SeasonYW.Profile1 * Items.Affinity1 + SeasonYW.Profile2 * Items.Affinity2
  Week.LinearTrend = ItemsWeek.Cache + (ItemsWeek.CumSumMinusOneExtMonth * ItemsWeek.Cache * Items.LevelShift / 10)
  Week.Baseline = exp(Items.Level) * YearWeek.SeasonalityModel * ItemsWeek.Cache * Week.LinearTrend
  Week.Coeff = ItemsWeek.ItemLifeWeight
  Week.DeltaSquare = (Week.Baseline - ItemsWeek.SmoothedDemandQty)^2

  Sum = sum(Week.Coeff * Week.DeltaSquare) / 10000
  SumPowAffinity = (Items.Affinity1^2 + Items.Affinity2^2) / (SumAffinity^2)
  // Core loss function
  return (1 + Sum) / SumPowAffinity
table ItemsYW = cross(Items, YearWeek)
ItemsYW.SeasonalityGroup = Items.SeasonalityGroup
ItemsWeek.YearWeek = Week.YearWeek
ItemsYW.Profile1 = SeasonYW.Profile1
ItemsYW.Profile2 = SeasonYW.Profile2
ItemsYW.SeasonalityModel = ItemsYW.Profile1 * Items.Affinity1 + ItemsYW.Profile2 * Items.Affinity2
ItemsWeek.LinearTrend = max(0, 1 + (ItemsWeek.CumSumMinusOneExtMonth * Items.LevelShift / 10))
ItemsWeek.Baseline = exp(Items.Level) * ItemsYW.SeasonalityModel * ItemsWeek.LinearTrend

vermorel 10 months | flag | on: deleted post

Please try to ask self-contained questions. Without context, those questions are a bit cryptic to the community.

You can share code and/or links to the Envision playground. Think of this board as Stack Overflow, but for supply chain.

Cheers,

ToLok 10 months | flag | on: Forecast Analysis Performance Measures

Hello s40racer,

The forecast cockpit evaluates the accuracy of the quantile 95 with respect to the past sales. In other words, it measures the percentage of time that the sales were over the quantile 95.
In a perfect forecast where we have the exact distribution of demand, this percentage should be equal to 5%: 95% of the time, the sales should be under the quantile 95, and 5% of the time they should be over it.
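
In symbols, a perfectly calibrated quantile-95 forecast satisfies:

$$ P(\text{sales} > Q_{95}) = 1 - 0.95 = 5\% $$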

In the example 11635178 - (Above: 9.62% - At: 0% - Below: 90.38%), it means that for the Reference 11635178, 9.62% of the time the sales were above the quantile 95 and 90.38% of the time they were below. In particular, this means that the forecast slightly underestimates the demand for this specific Ref, as we actually have 4.62% more weeks with sales over the quantile 95 than expected.
It is completely normal to have small disparities such as this. If we didn't, we'd probably be overfitting the data.

Regarding the scope of the overall forecast sanity label, it is indeed a (weighted) average concerning only the Refs (not SKUs) in the Forecast Sanity table. In detail, it only looks at items with at least 1 year of history: hence the items that are more recent are not in the analysis.

Hope it helps!

Conor 10 months | flag | on: ABC Analysis [pic]

Hopefully, the final nail in the coffin. We've already covered ABC (and ABC XYZ) in print and video, so we consider this matter put to rest!

Print:
https://www.lokad.com/abc-analysis-(inventory)-definition
https://www.lokad.com/abc-xyz-analysis-inventory

Video:
https://tv.lokad.com/journal/2018/9/12/abc-analysis/
https://tv.lokad.com/journal/2023/6/14/analyzing-abc-xyz/

In this interview recorded onsite at a Celio store in Rosny-sous-Bois, Joannes Vermorel and David Teboul (Managing Director of Operations at Celio) discuss the resurgence of Celio following the challenges of 2020-2021. David highlights the importance of a "normal" customer-focused approach in transforming the brand. Lokad supported this transformation by assisting in optimizing the supply chain to better cater to a diverse range of stores and offers. Despite increasing complexity and the rise of online commerce, David emphasizes the need for agility and the critical role of physical stores for Celio, while striving to understand and meet customer needs through various touchpoints.

manessi 10 months | flag | on: Editables: Runflow and IDE behavior

It is crucial to note that editables, and the uploads tied to them, are only modified by a dashboard interaction followed by a "Start Run" from said dashboard.
A script has no control over which inputs it will receive when invoked from Runflow, from the IDE, or from the list of projects (basically, anywhere except from the dashboard). It will instead receive the same inputs as the previous run, unless manually overridden (through Runflow options, the "clear uploads" of the Run Details, or setting up dedicated inputs in the IDE).

manessi 10 months | flag | on: Reset/Clear uploaded file

If you need to reset an uploaded file or clear it altogether, the show upload tile can be tweaked into

show upload "Please upload File 1" editable:"upload1" with Hash, Name

The hash should be a 32-character hexadecimal hash, such as the one obtained from Files.Hash, and the name should be a valid filename (no forbidden characters); more importantly, it should have the proper extension so that the file can be read.

If both the hash and the name are "", then that particular line is ignored (meaning, show upload "MyFile" with "", "" will clear the tile).
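
For instance, reusing the tile above, a clearing call would be:

show upload "Please upload File 1" editable:"upload1" with "", "" // both values empty: the tile is cleared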

vermorel 10 months | flag | on: S&OP [pic]

S&OP is only ever touted as a "grand success" by consultants who directly profit from the massive overhead.

In contrast, I have met with 200+ supply chain directors in 15 years. I have witnessed several dozen S&OP processes in $1B+ companies. I have never seen one of those processes be anything other than a huge bureaucratic nightmare.

I politely, but firmly, disagree with the statement that *a* process is better than no process at all. This is a fallacy. There is no grand requirement written in the sky that any of the things S&OP does have to be done at all.

Hello,

I had a look at your code.
First, I created a Sku table that you can find in your CustomerName/clean/Sku.ion file. We will use this table as the item table, since you want to compute things at the Sku level and not the Item level.
When I take the PurchaseOrders table, we want to do exactly the same thing, meaning create a Sku vector that is "MaterialSID x Location". The thing is that there is no location column in the PurchaseOrders table indicating where the goods are received.

Once we have it, we will simply create a Sku vector in the PurchaseOrders table and then use the primary dimension [Sku] as the join between the two tables, Sku and PurchaseOrders.

Best regards

Also, instead of using by .. at everywhere, you could declare Suppliers as upstream of Items. This would remove the need for the by .. at option entirely. I am giving an example of the relevant syntax at: https://news.lokad.com/posts/647

It is possible to declare a tuple as the primary dimension of a table in a read block through the keyword as:


read "/suppliers.csv" as Suppliers [(Supplier, Location) as myPrimary] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

A more complete example:


read "/skus.csv" as Skus with
  Id : text
  Supplier : text
  Location : text

read "/suppliers.csv" as Suppliers [(Supplier, Location) as sulo] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

expect Skus.sulo = (Skus.Supplier, Skus.Location)

Skus.LeadTimeInDays = Suppliers.LeadTimeInDays

dumay 10 months | flag | on: Prefer "scan" to "sort"

Automatic hints from Envision recommend using "scan" rather than "sort" with this function.

It seems to me that supply chain can very easily become the enabler of, or the barrier to, competing with time. He mentions an interesting example on optimizing for full truck loads and its effects on the business as a whole.

It is possible to have sanity checks in user-defined functions and throw an error if the check is not passed.
Cf. https://docs.lokad.com/reference/abc/assertfail/
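
A minimal sketch (the function name, message, and placement of assertfail are illustrative; see the docs page above for the authoritative signature):

def pure checkedRatio(a: number, b: number) with
  // fail loudly instead of returning a bogus value
  return if b != 0 then a / b else assertfail("checkedRatio: division by zero")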

Thank you ToLok.

Are you able to modify the code, or give a more explicit example of how to implement it at the SKU level? From a data standpoint, I assume the following fields need to exist in the items, PO, and vendor tables in order to implement the SKU-level code: item #, destination location, and supplier ID?

Currently, the partner data has not been updated to such a structure. Only the Items table (Item Master) has the item #, supplier ID, and destination location. If the data structure noted above is needed to implement the SKU-level code, I can make sure this is done.

Thank you.

Hello s40racer,

Indeed, if you use Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location], you would get for each item the value corresponding to the pair (Items.Supplier; Items.Location), adding the granularity that you wanted.
However, this implementation implies that all items with the same pair (Supplier; Location) would have the same lead time. If you want different lead times for different items provided by the same supplier, you need to add the relevant Reference in the Suppliers table (both for your original case at the item level and your updated one at the SKU level).

Also looking at the original code:
It seems that your table Items has a primary dimension, which is also present in PO, allowing you to have a natural aggregation on lines 2, 3 and 4.
If the primary dimension was previously at the Item level, you might want to change it to the SKU level (Item x Location). This way, Items.SLT_ItemLevel will be the distribution of observed lead times for your specific SKU (versus for your specific Item previously).

Hope it helps!

Thank you for the guidance. I am asking more from the code standpoint. The data is given with lead times at the item-location level. I am thinking the easiest approach is to bring that data from the Items table into the Vendors table to utilize the existing code.
With the existing code, I assume I need to add a location variable to the file, to look something like:

Original:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Leadtime : number

Updated:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Location : text
  Leadtime : number

Then, in any subsequent joins or filters, I will need to add the location filter. How would I update the following code to account for the location-specific lead time?


// Possible SLT layers, depending on how many datapoints can be found in the dataset
Items.SLT_ItemLevel = ranvar(PO.DeliveryDelay) when PO.IsClosed
Items.SLT_SupplierAndCategoryLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier,Items.Category] when PO.IsClosed
Items.SLT_SupplierLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier] when PO.IsClosed
Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by Suppliers.Supplier at Items.Supplier

Taking the last line as an example, would it look something like this?


Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location]

Hello,

It is indeed very common to have distinct supplier lead times depending on the location to be served.
The usual way to take these differences into account in your data is:
- Having a SKU table and not only an Item table
- If you have a Purchase Orders history with relevant data, then you can simply create a join between [PO.Sku] and [Sku.sku]. We would recommend a probabilistic supplier lead time (use ranvar(); see the sketch below). If that is not possible, then take the average.
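
A minimal sketch, assuming a PO.DeliveryDelay column (in days) and a PO.IsClosed flag as in the snippets elsewhere in this thread:

Sku.LeadTime = ranvar(PO.DeliveryDelay) when PO.IsClosed // probabilistic lead time per SKU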

Hope it helps

Hey! Thanks for your interest. I am not too sure which code you are referring to. Don't hesitate to include an Envision snippet (see https://news.lokad.com/static/formatting ) in your question to clarify what you are working on. You can also include a link to the Envision code playground (see https://try.lokad.com ) if you can isolate the problem.

Lokad usually approaches lead time forecasting by crafting a parametric probabilistic model to be regressed with differentiable programming. This approach makes it possible, for example, to introduce a distance parameter in the model. The value of this parameter is then learned by regressing the model over the data that happens to be available. Conversely, if there is no data at all (at least for now), the value of the parameter can be hard-coded to a guesstimate as a transient solution.

That said, this approach might be overkill if there is enough data to support a direct lead time ranvar construction over supplier-location instead of supplier.

Let me know if it helps.

When using probabilistic lead times in actionrwd.reward, it is possible to encounter situations where a previously placed order is simulated to arrive later than the additional potential order considered by actionrwd. In other words, if a purchase order (PO) is in progress, the simulated purchase order generated by actionrwd may not adhere to a first-in, first-out (FIFO) rule relative to the previous ongoing orders. This scenario makes sense from a realistic standpoint, as purchase orders are not always strictly FIFO. However, from a stock manager/planning perspective, it can result in repetitive and misunderstood purchase suggestions for the user. It is unlikely that conditional lead time logic will be integrated into actionrwd, but this aspect should be addressed in the Monte Carlo reconstruction of actionrwd.
To avoid this pitfall, Supply Chain Scientists often resort to using deterministic lead times (e.g., dirac(days)) that preserve the FIFO rule; see the example below.
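
For instance (a minimal illustration; the 12-day value is arbitrary):

Items.LeadTime = dirac(12) // deterministic 12-day lead time; simulated arrivals stay strictly FIFO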

I feel the explanation of the alpha parameter is a bit incomplete. The definition "The update speed parameter of the ISSM model for each item" is quite vague, when what needs to be understood is that alpha represents the correlation between one observation and the next.
I would add that the 0.3 value in the code example is way too high in most cases, which can be misleading; a value of 0.05 would better fit usual cases to begin with.
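
For reference, the ISSM level update behind this parameter (the same recursion as in the ISSM montecarlo snippet elsewhere on this page; y_t is the observed deviate, θ_t the baseline):

$$ \ell_t = \alpha \, \frac{y_t}{\theta_t} + (1 - \alpha) \, \ell_{t-1} $$

The larger the alpha, the more a single observation pulls the level away from its history; hence 0.05 yields a much smoother level than 0.3.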

arkadir 11 months | flag | on: Arguments of parsenumber function

The thousands separator is optional, but the decimal separator is mandatory. If no decimal separator is provided, the parsing will fail even if the provided numbers do not have decimals.

The shortest call to parsenumber (or tryparsenumber) is therefore:


T.Number = parsenumber(T.Text, "", ".")
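
And when the input does carry a thousands separator (e.g. "1,234.5"), a hypothetical variant:

T.Number = parsenumber(T.Text, ",", ".")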

arkadir 11 months | flag | on: Support for trimming in dates and numbers

When reading a date column, it is possible to provide a `*` at the end of the format to cause it to discard an optional time section, if present, for example:


read "/example.csv" as T date: "yyyy-MM-dd*" with

This will treat a value such as 2023-06-29 10:24:35 as if it were just 2023-06-29. Without this trim option, attempting to read the value will fail and report an error.

Similarly, when reading a number column, it is possible to provide a `*` at the end of the format to cause it to discard up to three non-digit characters either at the start or the end of the number value. For example:


read "/example.csv" as T number: "1,000.0*" with 

This will treat a value such as 10.00 USD as if it were just 10.00. Without this trim option, attempting to read the value will fail and report an error.

vermorel 11 months | flag | on: Be careful what you negotiate for! [pic]
Where you say "to some extent negotiable" (paraphrased), could we regard it as the quantity unit corresponding to a price, such that a different, and likely higher, price might apply to orders of smaller quantities? In which case, knowing the tiers of quantity and their corresponding prices would enable us to find the best order pattern, trading off price, wastage or inventory holding cost, and lead time.

What you are describing is frequently referred to as 'price breaks'. Price breaks can indeed be seen as a more general flavor of MOQs. In practice, there are two flavors of price breaks: merchant and fiscal. See also https://docs.lokad.com/library/supplier-price-breaks/
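
As a rough illustration of the data shape (file name and columns are hypothetical), a merchant price-break schedule often boils down to a small table per supplier:

read "/pricebreaks.csv" as PriceBreaks with
  Supplier : text
  MinQty : number    // the unit price applies from this quantity upward
  UnitPrice : number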

An enlightening chat on the future of aviation supply chain, shot within Air France's own engine repair facilities.

A remarkably well-illustrated dissertation on an under-studied topic. Very approachable, even for non-specialists.

What is a better way of getting stakeholder engagement for large investment without a smaller PoC-like approach?

The fundamental challenge is de-risking the process.

How does one get stakeholder engagement for TMS, WMS, MRP or ERPs? Those products are orders of magnitude more expensive than supply chain optimization software, and yet, there are no POCs.

I can't speak for the whole enterprise software industry. In its field, the Lokad approach to de-risking a quantitative supply chain initiative consists of making the whole thing accretive in a way that is largely independent of the vendor (aka Lokad).

Lokad charges on a monthly basis, with little or no commitment, and the process can end at any time. Whenever it ends, if it ends at all, the client company (the one operating a supply chain) can resume where Lokad left off.

The fine-print of the process and methodologies is detailed in my series of lectures https://lokad.com/lectures

vermorel 11 months | flag | on: What defines supply chain excellence?

My own take is that IT, and more generally anything that is truly the foundation of actual execution, is treated as a second-class citizen, especially the _infrastructure_. Yet, the immense majority of operational woes in supply chain nowadays are IT-related or even IT-driven. For example, _Make use of channel data_ is wishful thinking for most companies due to the IT mess. IT is too important to be left in the hands of IT :-)

ttarabbia 11 months | flag | on: What defines supply chain excellence?

Not sure I agree with everything here - however, the bit on the different supply chain flows and their priorities is helpful (Efficient, Agile, Responsive, Seasonal, Low Volume). It helps to answer those questions whose answer is typically "it depends".

It begs the question of segmentation, since you are measuring performance by product/market/....

ArthurGau 12 months | flag | on: Related parsing functions

Related parsing functions:
containscount()
contains() : single needle
containsany() : multiple needles
fieldcount()

vermorel 12 months | flag | on: Safety stock [pic]

I have two main objections to safety stocks, a stronger one and a weaker one.

First, my stronger objection is that safety stocks contradict what basic economics tells us about supply chain. By design, safety stocks are a violation of basic economics. As expected, safety stocks don't end up proving economics wrong; it's the other way around: economics proves safety stocks wrong. This argument will be detailed in my upcoming lecture 1.7, see https://lokad.com/lectures

Second, my weaker objection is that safety stocks, as presented in every textbook, and as implemented in every software, are hot nonsense. Not only are Gaussians used both for demand and lead time - while they should not be - but the way lead time is combined with demand is also sub-par. This argument is weak because, in theory, safety stock formulas could be rewritten from scratch to fix this; however, the first, stronger objection remains, thus, it's moot.

See also:

- Why safety stock is unsafe https://tv.lokad.com/journal/2019/1/9/why-safety-stock-is-unsafe/
- Retail stock allocation with probabilistic forecasts - Lecture 6.1 https://tv.lokad.com/journal/2022/5/12/retail-stock-allocation-with-probabilistic-forecasts/

vermorel 12 months | flag | on: RFI, RFP and RFQ madness in supply chain

Very interesting reference! I will have to check it out.

For someone inside an organization, situations where you can't evaluate a software vendor entirely from publicly available information are pretty rare. Even the lack of information is telling (and not in a good way). The only thing missing is usually getting a quote from the vendor, but that doesn't require an RFP, merely a problem statement and some ballpark figures.

As a vendor (like Lokad), you don't have a say. If the prospect says that the process is an RFP, then so be it. I have repeatedly tried to convince prospects to stop paying consultants twice what it would cost them to do the setup of the supply chain solution they were looking for, but I have never managed to convince any company to give up on their RFP process. Thus, nowadays, we just go with the flow.

ttarabbia 12 months | flag | on: RFI, RFP and RFQ madness in supply chain

I like the analogy of “increased attack surface”, particularly as it increases your chances of being infected by a vague, but attractive, idea-virus-meme. Reminds me of Robert Greene in 48 Laws of Power on charlatanism - “…on the one hand the promise of something great and transformative, and on the other a total vagueness. This combination will stimulate all kinds of hazy dreams in your listeners who will make their own connections and see what they want to see”

In my experience, it is quite common for the stated goal of an organization to be "improve XYZ business metrics with ABC type system" while the ulterior motive is to "make a defensible and risk-free decision to look like we're progressing".
Is there a solution to this problem for someone inside an organization evaluating vendors? How about as a vendor?

vermorel 12 months | flag | on: Community notes for docs.lokad.com

We have just rolled out a community note system for the technical documentation.

Envision snippets are allowed:


// Text following a double-slash is a comment
a = 5
b = (a + 1) * 3 / 4
show scalar "Result will be 4.5" a1b1 with b // display a simple tile

But also mathematical expressions:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$
vermorel 12 months | flag | on: How SAP Failed The Supply Chain Leader

The article, by Lora Cecere, a notable market analyst in supply chain circles, has been taken down by Forbes.
It seems that Forbes is afraid of losing SAP as a client. So much for an independent press...

Update: my network tells me that a copy of the article can be found at:
https://pdfhost.io/v/lE65WObHk_How_SAP_Failed_The_Supply_Chain_Leader

vermorel Apr 26, 2023 | flag | on: 21st Century Trends in Supply Chain

Yes, exactly the meaning of terms. Every company uses the terms product, order, stock level, but those words rarely mean exactly the same thing from one company to the next.

ttarabbia Apr 25, 2023 | flag | on: 21st Century Trends in Supply Chain

When you say glossary - you mean between people to understand the meaning of terms? Or in the sense of lookup table for values in data?

vermorel Apr 25, 2023 | flag | on: Forecast Accuracy [pic]

Inaccurate forecasts can't be right for the company. This is pretty much self-evident. Thus, companies have been chasing better forecasts, leveraging varied metrics. Yet, while this game has been played relentlessly for the last 4 decades, nearly all companies have next-to-nothing to show for all those efforts.

The Lokad position is that the way those forecasting initiatives were framed, aka deterministic forecasts, was spelling their doom from Day 1.

vermorel Apr 25, 2023 | flag | on: 21st Century Trends in Supply Chain

Yes, indeed. Also, I am very much aligned with the paper's vision that "Simplicity is Hard". Stuff (patterns, organizations, processes, ...) can only become simple with the adequate intellectual instruments (terminologies, concepts, paradigms). Unearthing those instruments is difficult.

Among companies operating complex supply chains, I have rarely seen anyone (outside Lokad) maintain glossaries. Yet, a glossary is probably one of the cheapest ways to eliminate some accidental complexity.

ttarabbia Apr 24, 2023 | flag | on: 21st Century Trends in Supply Chain

The section on "Conquering Complexity" immediately reminds me of [Out Of The Tar Pit](https://curtclifton.net/papers/MoseleyMarks06a.pdf)

ttarabbia Apr 19, 2023 | flag | on: Malleable Software

Agreed, the current paradigm with token limitations restricts the use cases on raw data, i.e. giving the LLM your entire hypercube to look for things.
However, if instead you were pointing it at the documentation for the 2-3 tools you're using, plus Excel, and asking it to tweak XYZ functionality, then the fuzziness/randomness is confined to the configuration/setup layer, which then drives a consistent and performant tool to generate results.

ttarabbia Apr 19, 2023 | flag | on: Supply Chain Normalcy Not In Sight
...Train teams to model variability and build a planning master data layer to understand layers of variability. (A planning master data layer measures and tracks shifts in lead times, conversion rates, and quality.....

This was the most insightful bit... many companies see the value in doing this, however modelling "new" problems in a rigidly implemented system doesn't often lend itself to experimentation.

Are there any good examples of processes/software that "self adjusts" planning/modelling parameters? Seems like something that could easily lead to crazy and infeasible results if left alone....

vermorel Apr 19, 2023 | flag | on: Malleable Software

LLMs can certainly support a whole next-gen replacement for Tableau-like software (widely used for supply chain purposes), where the SQL queries are generated from prompts. I may have to revisit my Thin BI section at https://www.lokad.com/business-intelligence-bi a few years down the road.

However, system-wide consistency is a big unsolved challenge. LLMs have token limits. Within those limits, LLMs are frequently beyond-human for linguistic or patternistic tasks (lacking a better word). Beyond those limits, it becomes very fuzzy. Even OpenAI doesn't seem convinced of its own capacity to push those token limits further within the current LLM paradigm.

ttarabbia Apr 19, 2023 | flag | on: Malleable Software

A helpful allegory for today's software flexibility vs. ease-of-use tradeoff and how LLMs may lead to more extensible and malleable software for the end user.

To be quite honest it's the first time I've seen something that helps inform how we may use LLMs as a supply chain community in the context of spreadsheets and rigid tools.

"...LLM developers could go beyond that and update the application. When we give feedback about adding a new feature, our request wouldn’t get lost in an infinite queue. They would respond immediately, and we’d have some back and forth to get the feature implemented."

Yes, this part has been somewhat hastily written (my fault). At Lokad, we tend to alternate between the algebra of random variables (faster, more reliable) and the montecarlo approach (more expressive). Below is the typical way we approach this integrated demand over the lead time, while producing a probabilistic forecast at the end (this is very much aligned with your "simulation" approach):


present = date(2021, 8, 1)
keep span date = [present .. date(2021, 10, 30)]
 
Day.Baseline = random.uniform(0.5 into Day, 1.5) // 'theta'
alpha = 0.3
level = 1.0 // initial level
minLevel = 0.1
dispersion = 2.0

L = 7 + poisson(5) // Reorder lead time + supply lead time

montecarlo 500 with
  h = random.ranvar(L)

  Day.Q = each Day scan date // minimal ISSM
    keep level
    mean = level * Day.Baseline
    deviate = random.negativebinomial(mean, dispersion)
    level = alpha * deviate / Day.Baseline + (1 - alpha) * level
    level = max(minLevel, level) // arbitrary, prevents "collapse" to zero
    return deviate

  s = sum(Day.Q) when (date - present <= h)
  sample d = ranvar(s)

show scalar "Raw integrated demand over the lead time" a4d6 with d
show scalar "Smoothed integrated demand over the lead time" a7d9 with smooth(d)

See also https://try.lokad.com/s/demand-over-leadtime-v1 if you want to try out the code.

vermorel Mar 28, 2023 | flag | on: Let's try Lokad

By the way, mathematical formulas are pretty-printed as well:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$
vermorel Mar 28, 2023 | flag | on: Let's try Lokad

I have just updated Supply Chain News to pretty print Envision scripts as well. Here is the first script:


montecarlo 1000 with // approximate π value
  x = random.uniform(-1, 1)
  y = random.uniform(-1, 1)
  inCircle = x^2 + y^2 < 1
  sample approxPi = avg(if inCircle then 4 else 0)
show scalar "π approximation" with approxPi // 3.22

A short summary of the second lecture in Joannes Vermorel's series on Quantitative Supply Chain. This constitutes a solid overview of his overarching supply chain vision.

A short summary of the first lecture in Joannes Vermorel's series on Quantitative Supply Chain.

lucianolis Mar 07, 2023 | flag | on: Etihad grows MRO capabilities

I saw them having activities in South America lately. Partnering with FADEA, the company that built the famous Pampa plane

A discussion with Jay Koganti, Vice President of Supply Chain at Estée Lauder’s Centre of Excellence

vermorel Jan 23, 2023 | flag | on: Architecture of Lokad

The predictive optimization of supply chain comes with unusual requirements. As a result, the usual software recipes for enterprise software aren't working too well. Thus, we had to diverge - quite substantially - from the mainstream path.

The 5 trends as listed by the author:

  • 88% of small business supply chains will use suppliers closer to home by next year.
  • Small business supply chains are moving most or all suppliers closer to the U.S. faster than predicted
  • The strained economy and low inventory are top stressors
  • Software-based emerging tech is on the rise while hardware-based ones lag behind
  • 67% of SMB supply chains say their forecasting techniques were helpful in preventing excess inventory

As data has been a byproduct and at best a secondary concern for businesses since the first introduction of ERPs and other enterprise software, it should not come as a surprise that the vast majority of data analytics and digital transformations fail. GIGO hits you when you don't know what you are looking for. Decisions, not data, should be the primary focus when you start. Then it helps to narrow your focus down to improving only the data that matter.

Very interesting article! I always like to recall that risk is defined by (probability of occurrence) x (impact on the business). And humans tend to underestimate high-impact, low-probability events and overestimate high-probability, low-impact events.

Very informative video! The development of digital twins is undoubtedly important. With all the benefits they bring, I think they will soon be almost a must-have for some businesses.

This problem is referred to as censored demand. Indeed, it is not the sales but the demand that is of interest to forecast. Unfortunately, there is no such thing as historical demand, only historical sales, which represent a loose approximation of the demand. When a product goes out of the assortment, due to stockout or otherwise, sales drop to zero, but demand (most likely) does not.

The old school approach to addressing censored demand consists of iterating through the historical sales data, and replacing the zero segments with a demand forecast. Unfortunately, this method is fraught with methodological issues: building a forecast on top of another forecast is friable. Furthermore, in the case of products that are not sold during long periods (not just rare stockout events), say summer, forecasting a fictitious demand over those long periods is not entirely sensical.

The most commonly used technique at Lokad to deal with censored demand is loss masking, understood from a differentiable programming perspective. This technique is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/
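
In Envision terms, a minimal sketch of the idea (Day.HasStock, Day.Model and Day.Sales are assumed names):

Day.Mask = if Day.HasStock then 1 else 0 // 0 on censored days (stockouts, out-of-assortment)
// Inside an autodiff block, weighting the per-day error by the mask removes
// the censored days from the gradient:
// return sum(Day.Mask * (Day.Model - Day.Sales)^2)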

Hope it helps, Joannes

There are several questions to unpack here about seasonality. (1) Is seasonality best approached as a multiplicative factor? (2) Is seasonality best approached through a fixed-size vector reflecting those factors (hence the "profile")? (3) How to compute the values of those vectors?

Concerning (1), the result that Lokad obtained at the M5 competition is a strong case for seasonality as a multiplicative factor:
https://tv.lokad.com/journal/2022/1/5/no1-at-the-sku-level-in-the-m5-forecasting-competition/ The literature provides alternative approaches (like additive factors); however, these don't seem to work nearly as well.

Concerning (2), the use of a fixed-size vector to reflect the seasonality (like a 52-week vector) has some limitations. For example, it struggles to capture patterns like an early summer. More generally, the vector approach does not work too well when the seasonal patterns are shifting, not in amplitude, but in time. The literature provides more elaborate approaches like dynamic time warping (DTW). However, DTW is complicated to implement. Nowadays, most machine learning researchers have moved toward deep learning. However, I am on the fence on this. While DTW is complicated, it has the benefit of having a clear intent model-wise (important for whiteboxing).
https://en.wikipedia.org/wiki/Dynamic_time_warping

Finally (3), the best approach that Lokad has found to compute those vector values is differentiable programming. It achieves state-of-the-art results, or comes very close to them, with a tiny fraction of the problems (compute performance, blackbox, debuggability, stability) associated with alternative methods such as deep learning and gradient boosted trees. The method is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/
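
As a rough sketch combining (1) and (3) (a simplification; the table wiring, bounds, and Week.Demand are assumptions, the lecture above shows the real construction):

autodiff Items epochs:500 with
  params SeasonYW.Profile in [0..1] auto(0.5, 0.1) // weekly multiplicative profile, shared per group
  params Items.Level in [0..] auto(1, 0.1)
  Week.Baseline = Items.Level * SeasonYW.Profile // seasonality enters as a multiplicative factor
  return sum((Week.Baseline - Week.Demand)^2)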

Hope it helps, Joannes

Patrice Fitzner, who contributed to the design of the Quai 30, a next-gen 21st-century logistical platform, explains the thinking that went into this 400m by 100m monster of automation.

Last-mile delivery services were attaining incredible valuations only a year ago. It seems the honeymoon phase is over and the reality of a challenging route to profitability for the overall business model is setting in.
The model in itself is not new at all, and the service is incredibly desirable from a customer perspective; however, the question remains why the big supermarket giants did not implement it themselves 10 years ago already - maybe the business model is not as amazing as believed?
The future will tell whether clients will be willing to pay the premiums that the remaining 'monopolists' will have to charge to become profitable.

Factorio https://www.factorio.com/ is the grandfather of factory simulation games, and it's centered around production chains, but the default play style is rather light in terms of supply chain (just create a factory that, if fed with raw materials for long enough, produces everything you need eventually). I recommend going for a train-based play style (especially against hostile aliens), as it tends to center a lot more around what you need to deliver, where, when, and in what quantities. The space exploration mod https://mods.factorio.com/mod/space-exploration is also great on those aspects.

Satisfactory https://www.satisfactorygame.com/ is also interesting for those who like more of a 3D feel.

My favorite is Dyson Sphere Program https://store.steampowered.com/app/1366540/Dyson_Sphere_Program/ as it forces you to set up interstellar supply chains: in Factorio and Satisfactory one tends to create a central base into which all raw inputs are fed, but in Dyson Sphere Program it's usually better to transport your intermediate goods to a planet with oceans of sulfuric acid that are needed for its processing, rather than shipping the sulfuric acid back to a central planet. Or, to move energy-hungry processing to a tidally-locked planet right next to the sun.

Transport Fever 2 https://www.transportfever2.com/ has excellent gameplay, with half of it being centered around industrial supply chains. The Industry Expanded mod is very good in this aspect https://steamcommunity.com/sharedfiles/filedetails/?id=1950013035 The Cities: Skylines game with the Industries expansion https://www.paradoxinteractive.com/games/cities-skylines/add-ons/cities-skylines-industries is also quite good in the same vein.

Banished https://store.steampowered.com/app/242920/Banished/ is also interesting in that you are expected to set up an entire medieval supply chain from scratch. It includes dealing with overfilled warehouses, random demand (weather for firewood, illness for medicine), investment (planting an orchard takes years to yield fruit), out-of-stock penalties (starvation, running out of tools which cuts productivity across the board), and so on.

Very nerdy, Factorio rocks: https://www.factorio.com/

My personal favorite remains https://www.ubisoft.com/en-gb/game/anno/1800.
The entire Anno series is outstanding for understanding the basic concepts of the raw material - semi-finished - finished product transition + lead times.
Would be keen to know whether there are other games that come to mind!

Just to clarify the terminology that I am using in the following: the EOQ (economic order quantity) is a quantity decided by the client; the MOQ (minimal order quantity) is a quantity imposed by the supplier. Here, my understanding is that the question is oriented toward EOQs (my answer below); but I am wondering if it's not about picking the right MOQs to impose on clients (which is another problem entirely).

The "mainstream" methods to approach EOQs, especially all of those that promise any kind of optimality, suffer from a series of problems:

  • they ignore variations of the demand, which is expected to be stationary (no seasonality, for example)
  • they ignore variations of the lead time, which is expected to be constant
  • they apply only to the "simple EOQ" attached to a single P/N at a time (but not to an EOQ for the whole shipment)
  • they ignore macro-budgeting constraints, aka this PO competing against other POs (from other suppliers, for example)
  • they ignore the ramifications of the EOQs across dependent BOMs (clients don't care about anything but the finished products)

Do not expect a formula for EOQs. There isn't one. A satisfying answer requires a way "to factor in" all those elements. What we have found at Lokad for better EOQs in manufacturing (not "optimal" ones, I am not even sure we can reason about optimality) is that a certain list of ingredients is needed:

  • probabilistic forecasts that provide probability distributions at least for the future demand and the future lead times. Indeed, classic forecasts deal very poorly with irregular flows (both demand and supply), and MOQs, by design, magnify the erraticity of the flow.
  • stochastic optimization, that is, the capacity to optimize in the presence of randomness (see the compact form after this list). Indeed, the EOQ is a cost-minimization of some kind (hence an optimization problem), but this optimization happens under uncertain demand and uncertain lead time (hence the stochastic flavor).
  • a financial perspective, aka we don't optimize percentages of error, but dollars of error. Indeed, EOQs are typically a tradeoff between more stock and more overhead (shipment, paperwork, manhandling, etc.)
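
In compact form (notation mine), those three ingredients amount to solving:

$$ Q^* = \underset{Q}{\operatorname{argmin}} \; \mathbb{E}_{D,L}\left[ \mathrm{cost}_\$(Q; D, L) \right] $$

where D and L are the probabilistic forecasts of demand and lead time, and the cost is expressed in dollars.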

In my series of supply chain lectures, I will be treating (probably somewhere next year) the fine print of MOQs and EOQs in my chapter 6. For now, the lecture 6.1 provides a first intro into the main ingredients needed for economic order optimization, but without delving (yet) into the non-linearities:
https://tv.lokad.com/journal/2022/5/12/retail-stock-allocation-with-probabilistic-forecasts/

It will come. Stay tuned!

vermorel Nov 30, 2022 | flag | on: Goodbye, Data Science

An incredibly perceptive discussion that reflects my own experience with data science in general.

vermorel Nov 29, 2022 | flag | on: Cycle Count Manager

A small side software project dedicated to inventory counting.

vermorel Nov 23, 2022 | flag | on: The supply chain triangle in 3 minutes [video]

The earliest occurrence I could find of the concept is 2016 with the presentation:
https://www.slideshare.net/BramDesmet/supply-chain-innovations-2016-strategic-target-setting-in-the-supply-chain-triangle

The 2018's book is available at:
https://www.amazon.com/gp/product/B07CL2MCWS/

vermorel Nov 14, 2022 | flag | on: The Saga of Supply Chain Innovation

ATP (used in the article) stands for Available-To-Promise.

I am very much in agreement concerning the list of stuff that didn't work: Consolidations Decimated Value, Consultants Failed to Deliver Value Through Software Models, Barney Partnerships Bled Purple, not Green, The Saga of Venture Capitalists and Private Equity Firms, New Forms of Software Marketing Creates Haze not Value

Concerning the value of cloud and NoSQL: well, yes, but it's a bit of old news. Lokad migrated toward cloud computing and NoSQL back in 2010. A lot has happened since then. For a discussion of what a modern cloud-based tech looks like: https://blog.lokad.com/journal/2021/11/15/envision-vm-environment-and-general-architecture/

Excel is ingrained in the day-to-day work of the current working generation. It will be difficult to replace (barring a select few companies).

I strongly feel that the post-generation-Z (born post-2001) workforce will expect (and work towards developing) better decision engines, powered by better technologies. This generation is using some form of AI/ML in day-to-day life, in schools and colleges. They wouldn't expect anything less at work. This will get progressively better.

vermorel Nov 09, 2022 | flag | on: Prioritized Ordering [video]

A couple of relevant links:

Hoehner Nov 04, 2022 | flag | on: Slower, lower, weaker [pic]

Parkinson's law is the adage that "work expands so as to fill the time available for its completion."

In most of Western Europe, my (tough) take is that, career-wise, those certifications are worth the paper they are printed on. The vast majority of the supply chain executives that I know have no certification.

More specifically, the example exam questions are ludicrous, see
https://www.theorsociety.com/media/1712/cap_handbook_14122017133427.pdf

MCQs (Multiple Choice Questions) are the exact opposite of the sort of problems faced by supply chain practitioners. MCQs emphasize a super-shallow understanding of a vast array of keywords. Worse, they treat those keywords (e.g. data mining, integer programming) as if each encompassed some cohesive body of work (or tech). This is wrong, plain wrong.

vermorel Oct 25, 2022 | flag | on: Supply Chains are Healing

Minor: edited the title to avoid the question mark. I am trying to reserve question titles for actual questions addressed to the community.

Also, discussed at https://news.lokad.com/posts/349/ (1 comment)

This is great! We need to hear more stories about “the day of small beginnings” as I like to call it. Too often we read marketing buzz and press releases on the glory of a great achievement. That’s great and all, but every human achievement starts with people and there are usually some interesting dynamics around how something goes from zero to something. We need more stories around the underlying climate preceding the best innovations. THAT will inspire and spark excitement to build. There are plenty of lessons learned that can be shared in that regard (Paul Graham and Y Combinator essays do this). Who else has an early day supply chain story to share?

“In 2011, Amazon’s total revenue reached nearly $48 billion, and it was already clear to the senior leadership that the company’s scale would require the automation of buying and the management of inventory; monitoring spreadsheets was not a long-term solution. Indeed, even then the sheer range of products offered by Amazon meant the “illusion of control” was already kicking in among the groups managing inventory, says Bhatia. In fact, Bhatia notes, the sheer complexity and scale meant the challenge was beyond the scope of any team, let alone an individual.”

Light retrospective on the evolution of Amazon's automated decision-making tech for supply chain. Interesting nugget: Amazon appears to still be using their multi-horizon quantile recurrent forecaster (1); it seems to have taken several years to cover the full scope (which is not unreasonable considering the scale of Amazon).

(1) A multi-horizon quantile recurrent forecaster
By Ruofeng Wen, Kari Torkkola, Balakrishnan (Murali) Narayanaswamy, Dhruv Madeka, 2017
https://www.amazon.science/publications/a-multi-horizon-quantile-recurrent-forecaster

The book can be purchased from
https://www.amazon.com/Profit-Source-Transforming-Business-Suppliers-ebook/dp/B099KQ126Z

The main message by Schuh et al. is that a collaborative relationship with suppliers can be vastly more profitable than an oppressive one solely focused on lowering the supply prices. While the idea isn't novel, most companies still favor aggressive and antagonistic procurement strategies, which leave no room for more profitable collaborations to emerge.

I recommend Jason Miller (MSU SCM head) and Warren Powell's posts on LinkedIn. Both are widely recognized thought leaders in the quantitative SCM space.

10 years ago, Amazon was acquiring Kiva Systems for $775 million.

The quote is from The Testament of a Furniture Dealer by Ingvar Kamprad, IKEA founder. The original document can be found at:
https://www.inter.ikea.com/en/-/media/InterIKEA/IGI/Financial%20Reports/English_The_testament_of_a_dealer_2018.pdf

Forecasting and S&OP initiatives almost invariably turn into bureaucratic monsters.

A team from Lokad took part in the M5 competition. The method, which landed No1 at the SKU level, has been presented at https://tv.lokad.com/journal/2022/1/5/no1-at-the-sku-level-in-the-m5-forecasting-competition/

Hoehner Oct 05, 2022 | flag | on: Flaw of Averages Trilogy

The average project completion time is greater than what is predicted by the average durations of the underlying tasks, because the project is not done until the last task is done.

The average profit is less than the profit of the average demand, because there is no upside if demand exceeds quantity ordered.

The average operating cost is greater than the operating cost of the average demand.
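
The second statement, for instance, is Jensen's inequality applied to the concave sales function min(D, Q):

$$ \mathbb{E}[\min(D, Q)] \;\le\; \min(\mathbb{E}[D], Q) $$

with D the random demand and Q the quantity ordered: the average units sold fall short of the units sold at average demand.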

Yes, continuous improvement is the key to making a new technology economically efficient, but it may also become an enabler, as you've mentioned. Though, by breakthrough we usually mean a change in the principle of how the same objective is achieved better/faster/cheaper, etc. This is a qualitative change in the first place. Then continuous improvement starts.

As written in the article, you have to choose between two models when developing a startup: either grow at all costs and optimize later (like Uber), or keep the notion of cost and frugality as an essential part of every initiative. Amazon is doing that in its warehouses: grow, but make sure to optimize for efficiency.

On the other hand, groundbreaking innovations are often made possible by incremental innovations in other areas (such as the improved precision of steel machining tools allowing the creation of cost-efficient gasoline engines).

Hoehner Sep 28, 2022 | flag | on: Supply Chain News on news.lokad.com

A video explaining why this forum was launched.

While Reddit targets a young category of users and Wikipedia is bound to contain old and somewhat deprecated information, LinkedIn remains one of the only trustworthy sources of information in the field, due to the experience of the professionals who share their stories there. However, this information does not remain easily accessible after a while. Hence, it would be interesting to have a place to discuss the specifics of the supply chain field.

For this purpose, Lokad has launched a new page, https://news.lokad.com/, an aggregator that consolidates interesting links on various supply chain topics and where the community can discuss and submit their own subjects. The final goal is to share links of interest for supply-chain-minded people, giving them the chance to debate and contribute to further achievements in the field.

vermorel Sep 26, 2022 | flag | on: Software to simplify the supply chain

Interesting nuggets of this interview with Ryan Petersen, CEO of Flexport:

- 20% of the Flexport workforce is software engineering. The rest is sales and account management.
- The P95 transit time is a 95% quantile estimate of the transit time; part of the core Flexport promise.

Overall, a very interesting discussion, although the simplify part really refers to the Flexport product itself.

Most supply chain initiatives fail. Dead-ends are a given, although my understanding differs a little bit concerning the root causes. Among the top offenders, the lack of decision-centric methodologies and technologies ranks very high. In the 'future' section proposed by the author, I see layers of processes to generate ever more numerical artifacts (eg: Market-driven Demand, Demand Visibility, Baseline Demand and Ongoing Analysis of Market Potential, Unified Data Model Tied to a Balanced Scorecard, Procurement/Buyers Workbench).

vermorel Sep 26, 2022 | flag | on: Made.com puts itself up for sale

A decade ago, Made.com - along with a couple of similar ecommerces - took extensive advantage of the payment terms of their overseas suppliers (mostly in Asia). Their supply chain execution allowed them to sell their goods while the goods were still in transit. This worked well for furniture as customers - at the time - were OK waiting a month or two to receive their order. I don't know where they stand now, but I suspect that the supply chain tensions (sourcing problems in Asia + surge in transport fees) pose significant challenges to this business model.

tikhonov Sep 23, 2022 | flag | on: Stock is stock - whatever you call it [pic]

If we look at other fields, we can see that safety stock will never get buried completely.
We have modern medicine, but healers still exist.
We have modern agriculture, but a growing 'organic' movement (which is in fact less sustainable).
Etc, etc, etc.
Those advocating that "supply chain is not for tech, but for people" will always seek dysfunctional but simplistic recipes.

vermorel Sep 23, 2022 | flag | on: Stock is stock - whatever you call it [pic]

My previous take on safety stocks:
https://tv.lokad.com/journal/2019/1/9/why-safety-stock-is-unsafe/

In short, not only is the normal distribution assumption bad, but the whole approach is very naïve. It made (somewhat) sense before the advent of computers, but at present, safety stocks should be treated as a method of historical interest only.