vermorel 3 months | flag | on: Workshop data

The playground has been fixed.

vermorel 3 months | flag | on: Workshop data

Working on the issue. Sorry for the delay. Best regards, Joannes

You will find a safety stock calculation, with varying lead times, at:
https://www.lokad.com/calculate-safety-stocks-with-sales-forecasting/

The web page includes an illustrative Excel sheet to let you reproduce the calculation.

However, the bottom line is: safety stocks are a terrible model. See:
https://www.lokad.com/tv/2019/1/9/why-safety-stock-is-unsafe/

As a statistical answer, they are completely obsolete. Probabilistic forecasts must be favored instead.
https://www.lokad.com/probabilistic-forecasting-definition/

15 months of historical data is ok-ish. It's a bit tricky to assess seasonality with less than 2 years of history, but it can be done as well.

Hope it helps,
Joannes

Unfortunately, in supply chain, things cannot be done "a small piece" at a time. It just doesn't work. See
https://www.lokad.com/blog/2021/9/20/incrementalism-is-the-bane-of-supply-chains/

I would have very much preferred the answer to this question to be different, to have a nice incremental path that could be made available to all employees; it would have made the early life of Lokad much easier while competing against Big Vendors.

Then, don't underestimate what a supposedly "minor" employee can do. Apathy is one of the many diseases afflicting large companies. When nobody cares, the one person who genuinely cares ends up steering the ship. All it takes is one person to point out the obvious. The flaws of the "legacy" supply chain solutions are not subtle; they are glaring.

In MRO, it boils down to: uncertainty must be embraced and quantified, varying TATs matter as much as varying consumption, etc. See an extensive review of the challenges that need to be covered: https://www.lokad.com/tv/2021/4/28/miami-a-supply-chain-persona-an-aviation-mro/

Forecasting is a means to an end, but just a means. Focusing on forecasting as a "stand-alone thingy" is wrong. This is the naked forecast antipattern; see https://www.lokad.com/antipattern-naked-forecasts/

For an overview on how to get a supply chain initiative organized, and launched, see https://www.lokad.com/tv/2022/7/6/getting-started-with-a-quantitative-supply-chain-initiative/

Hope it helps,

Hello! We have been developing - for the past two years - a general-purpose stochastic optimizer. It has passed the prototype stage, and a short series of clients are running their production on this new thingy. Stochastic optimization (i.e., optimization under a noisy loss) is exactly what you are looking for here. It will be replacing our old-school MOQ solver as well.

We are now moving forward with the development of the clean long-term version of this stochastic optimizer, but it won't become generally available before the end of 2024 (or so). Meanwhile, we can only offer ad-hoc heuristics. Sorry for the delay, it has been a really tough nut to crack.

Lokad has developed its own DSL, dedicated to the predictive optimization of supply chain, namely Envision https://docs.lokad.com/

vermorel 4 months | flag | on: Implement Forecast at Monthly level

Instead of going from weekly to monthly, I would, on the contrary, suggest going from weekly to daily, and then from daily to monthly. Keep the weekly base structure, and introduce day-of-week multipliers. This gives you a model at the daily level. Then turn this daily model into a monthly forecasting model. Indeed, having 4 or 5 weekends has a significant impact on any given month, and usually the most effective path to capture this pattern consists of operating from the daily level.
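To make the idea concrete, here is a minimal Python sketch (the baseline and multipliers are my own illustrative numbers, not from the original post): a weekly level spread into a daily model via day-of-week multipliers, then summed over the calendar month so the 4-vs-5-weekends effect is captured.

```python
import calendar
import datetime

# Illustrative numbers: a weekly baseline (units/week) and day-of-week
# multipliers (Mon..Sun) that average to 1.0 across the week.
weekly_baseline = 70.0
dow_multiplier = [1.2, 1.1, 1.0, 1.0, 1.1, 0.9, 0.7]

def daily_forecast(day: datetime.date) -> float:
    # Daily model: the weekly level spread over 7 days, scaled by the
    # day-of-week multiplier.
    return (weekly_baseline / 7.0) * dow_multiplier[day.weekday()]

def monthly_forecast(year: int, month: int) -> float:
    # Monthly model: sum the daily model over the calendar month, so the
    # actual count of each weekday (e.g. 4 vs 5 weekends) is captured.
    n_days = calendar.monthrange(year, month)[1]
    return sum(daily_forecast(datetime.date(year, month, d))
               for d in range(1, n_days + 1))

print(round(monthly_forecast(2021, 2), 1))  # 280.0 (Feb 2021: exactly 4 of each weekday)
```

A month with five weekends ends up below a flat `days * weekly_baseline / 7` estimate, which is exactly the pattern a direct weekly-to-monthly aggregation misses.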

Hope it helps,

I am not overly convinced by the idea of 'agents' when it comes to system-wide optimization. Indeed, the optimization of a supply chain must be done at the system level. What is 'best' for a node (a site, a SKU, etc.) is not what is best for the whole. The 'agent' paradigm is certainly relevant for modeling purposes, but for optimization, I am not so sure.

Concerning evolution vs revolution, see 'Incrementalism is the bane of supply chains', https://www.lokad.com/blog/2021/9/20/incrementalism-is-the-bane-of-supply-chains/

Hello! While we haven't publicly communicated much on the matter, Lokad has been very active on the LLM front over the last couple of months. We also have an interview with Rinat Adbullin coming up on Lokad TV, discussing LLMs for enterprises more broadly.

LLMs are surprisingly powerful, but they have their own limitations. Future breakthroughs may happen, but chances are that whatever lifts some of those limitations will be something quite unlike the LLMs we have today.

The first prime limitation is that LLMs don't learn anything after the initial training (in GPT, the 'P' stands for 'pretrained'). They just perform text completions; think of it as a 1D time-series forecast where the values have been replaced by words (tokens actually, i.e., sub-words). There are techniques to cope - somewhat - with this limitation, but none of them comes even close to being as good as the original LLM.

The second prime limitation is that LLMs deal with text only (possibly images too with multi-modal variants, but images are mostly irrelevant for supply chain purposes). Thus, LLMs cannot directly crunch transactional data, which represents more than 90% of the relevant information for a given supply chain.

Finally, it is a mistake to look at the supply chain of the future, powered by LLMs, as an extension of present-day practices. Just like eCommerce companies have very little in common with the mail-order companies that appeared in the 19th century, the same will - most likely - be true for those future practices.

This is why a large software vendor cannot, by default, be deemed a "safer" option than a small vendor. In B2B software, the odds of the vendor going bankrupt are usually dwarfed by the odds of the vendor discontinuing the product. The chances that Microsoft would stop supporting a core offering (e.g., Excel / Word) within 2 decades are low, very low. However, the same odds cannot be applied to every single product pushed by Microsoft. Yet, when it comes to long-term support, Microsoft is one of the best vendors around (generally speaking).

vermorel 5 months | flag | on: Unicity of ranvar after transform

The function transform should be understood from the perspective of the divisibility of random variables, see https://en.wikipedia.org/wiki/Infinite_divisibility_(probability)

However, just like not all matrices can be inverted, not all random variables can be divided. Thus, Lokad adopts an approximate pseudo-division approach, which is reminiscent (in spirit) of the pseudo-inverse of matrices. This technique depends on the chosen optimization criteria, and indeed, in this regard, although transform does return a "unique" result, alternative function implementations could be provided as well.
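To illustrate the spirit of such a pseudo-division (in Python rather than Envision, and with an L2 criterion and a Poisson candidate family that are my arbitrary choices): we search for a per-period distribution whose n-fold sum best matches the target, which makes the dependence on the criterion explicit.

```python
import math

def poisson_pmf(lam: float, k: int) -> float:
    return math.exp(-lam) * lam**k / math.factorial(k)

def pseudo_divide(target_pmf, n: int, grid, support: int = 50) -> float:
    # "Pseudo-divide" a random variable by n: search for a per-period
    # Poisson(lam) whose n-fold sum (= Poisson(n*lam)) is closest to the
    # target under an L2 criterion. Picking another criterion (e.g. KL
    # divergence) could yield a different answer, which is why the result
    # is only "unique" relative to the chosen criterion.
    def l2_distance(lam: float) -> float:
        return sum((poisson_pmf(n * lam, k) - target_pmf(k)) ** 2
                   for k in range(support))
    return min(grid, key=l2_distance)

# Demand over a 7-day lead time is Poisson(14); dividing by 7 recovers
# a per-day Poisson(2) on this grid.
best = pseudo_divide(lambda k: poisson_pmf(14.0, k), 7, [i / 10 for i in range(1, 51)])
```

When the target is not infinitely divisible, the minimum is merely the best approximation under the criterion, mirroring how a pseudo-inverse behaves on singular matrices.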

vermorel 5 months | flag | on: Cross Entropy Loss Understanding

Cross-entropy is merely a variant of the likelihood in probability theory. Cross-entropy works on any probability distribution as long as a density function is available. See for example https://docs.lokad.com/reference/jkl/loglikelihood.negativebinomial/

If you can produce a parametric density distribution, then, putting pathological situations aside, you can regress it through differentiable programming. See fleshed out examples at https://www.lokad.com/tv/2023/1/11/lead-time-forecasting/
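A minimal Python sketch of this idea (the data, distribution family, and learning rate are illustrative; Lokad does this with differentiable programming in Envision, not this toy loop): regressing the parameter of a parametric distribution by gradient descent on the negative log-likelihood, i.e. the cross-entropy between the empirical and parametric distributions.

```python
import math
import random

random.seed(42)

def draw_poisson(lam: float) -> int:
    # Knuth's sampling algorithm, adequate for small lambda.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Illustrative demand history: 2000 observations drawn from Poisson(3).
samples = [draw_poisson(3.0) for _ in range(2000)]
n, total = len(samples), sum(samples)

# Minimize the negative log-likelihood of Poisson(lam) by gradient descent.
lam, lr = 1.0, 0.01
for _ in range(5000):
    grad = n - total / lam   # d/dlam of sum_k (lam - k*ln(lam) + ln(k!))
    lam -= lr * grad / n
    lam = max(lam, 1e-6)     # keep the parameter in its valid domain
# lam now sits at the maximum-likelihood estimate, the sample mean (~3.0)
```

The closed-form optimum here is just the sample mean; the point of the gradient-descent formulation is that it generalizes unchanged to richer parametric models where no closed form exists.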

In the article, it is mentioned that Lokad collected empirical data which supports the claim that Cross Entropy is usually the most efficient metric to optimize, rather than MSE, MAPE, CRPS, etc. Is it possible to view that data?

No, unfortunately for two major reasons.

First, Lokad has strict NDAs in place with all our client companies. We do not share anything, not even derivative data, without the consent of all the parties involved.

Second, this claim should be understood from the perspective of the experimental optimization paradigm, which is (most likely) not what you think. See https://www.lokad.com/tv/2021/3/3/experimental-optimization/

Hope it helps,
Joannes

vermorel 5 months | flag | on: Lag Forecasting

I have a few tangential remarks, but I firmly believe this is where you should start.

First, what is the problem that you are trying to solve? Here, I see you struggling with the concept of "lag", but what you are trying to achieve is unclear. See also https://www.lokad.com/blog/2019/6/3/fall-in-love-with-the-problem-not-the-solution/

Second, put aside Excel entirely for now. It is hindering, not helping, your journey toward a proper understanding. You must be able to reason about your supply chain problem / challenge without Excel; Excel is a technicality.

Third, read your own question aloud. If you struggle to read your own prose, then it probably needs to be rewritten. Too frequently, I realize upon reading my own draft that the answer was in front of me once the question was properly (re)phrased.

Back to your question / statement, it seems you are confusing / conflating two distinct concepts:

  • The forecasting horizon
  • The lead times (production / dispatch / replenishment)

Then, we also have the lag, which is a mathematical concept akin to a time-series translation.

Any forecasting process is horizon-dependent, and no matter how you measure the accuracy, the accuracy will also be horizon-dependent. The duration between the cut-off time and the forecasted time is frequently referred to as the lag, because in order to backtest, you will be adding "lag" to your time-series.

Any supply chain decision takes time to come to pass, i.e., there is a lead time involved. Again, in order to factor in those delays, it is possible to add "lag" to your time-series to reflect them.

Lagging (aka time-series shift, or time-series translation) is just a technicality to factor in any kind of delay.
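In plain Python, the whole "lag" technicality boils down to a shift (the sample series is illustrative):

```python
def lag(series, periods, fill=0):
    # Shift a time-series forward by `periods` steps: the value observed
    # at t is re-indexed to t + periods. The same mechanical shift serves
    # both for backtesting horizons and for reflecting lead times.
    if periods <= 0:
        return list(series)
    return [fill] * periods + list(series[:-periods])

sales = [5, 3, 8, 2, 7]
print(lag(sales, 2))  # [0, 0, 5, 3, 8]
```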

Hope it helps.

Yes, just use text interpolation to insert your text values. See below:


table T = with 
  [| date(2021, 2, 1) as D |]
  [| date(2022, 3, 1) |]
  [| date(2023, 4, 1) |]

maxy = isoyear(max(T.D))

show table "My tile title with \{maxy}" a1b3 with
  T.D as "My column header with \{maxy}"
  random.integer(10 into T) as "Random" // dummy

On the playground https://try.lokad.com/s/ad-hoc-labels-in-table-tile

vermorel 7 months | flag | on: Display Data by Year

Envision has a today() function, see


show scalar "Today" a1b2 with today()

table X = with 
  [| today() as today |]

show table "X" a3b4 with X.today

See https://try.lokad.com/s/today-sample

In your example above, DV.today is not hard-coded but most likely loaded from the data. It's a regular variable, not the standard function today().

Hope it helps,
Joannes

vermorel 8 months | flag | on: Forecast Analysis - Forecast Quality

I suspect it's the behavior of the same aggregator when facing an empty set, which defaults to zero; see my snippet below:


table Orders = with // hard-coding a table
  [| as Sku, as Date          , as Qty, as Price |] // headers
  [| "a",    date(2020, 1, 17), 5     , 1.5      |]
  [| "b",    date(2020, 2, 5) , 3     , 7.0      |]
  [| "b",    date(2020, 2, 7) , 1     , 2.0      |]
  [| "c",    date(2020, 2, 15), 7     , 5.7      |]

where Orders.Sku == "foo"
  x = same(Orders.Price) // empty set, defaults to zero
  y = same(Orders.Price) default 42 // forcing the default

show summary "same() behavior" a1b2 with
  x as "without default" // 0
  y as "with default"    // 42

Try it at https://try.lokad.com/s/same-defaults-to-zero

Hope it helps.

The dispersion of actionrwd.foo is controlled by Dispersion:. At line 13, in your script I see:


Items.Dispersion = max(Items.AvgErrorRatio/2, 1)

This line implies that if there is one item (and only one) that happens to have a super-large value, then that value will be applied to all items. This seems to be the root cause behind the high dispersion values that you are observing.

In particular,


ItemsWeek.RatioOfError = if ItemsWeek.Baseline != 0  then (ItemsWeek.Baseline - ItemsWeek.DemandQty) ^ 2 /. ItemsWeek.Baseline else 0

Above, ItemsWeek.RatioOfError can get very, very large. If the baseline is small, like 0.01, and the demand qty is 1, then this value can reach 100 or more.

Thus, my recommendations would be:

  • sanitize your ratio of error
  • don't use a max for the dispersion
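A hedged Python sketch of both recommendations (the cap, quantile, and sample ratios are arbitrary illustration values; the Envision specifics differ): cap the extreme error ratios, and replace the max with a high quantile so that a single pathological item no longer drives the dispersion for everyone.

```python
def winsorize(values, cap):
    # Recommendation 1: sanitize the error ratios by capping extreme values.
    return [min(v, cap) for v in values]

def robust_dispersion(ratios, quantile=0.9, floor=1.0):
    # Recommendation 2: a high quantile instead of max(), so one
    # pathological item no longer dictates the dispersion for all items.
    s = sorted(ratios)
    idx = min(int(quantile * len(s)), len(s) - 1)
    return max(s[idx] / 2, floor)

ratios = [0.5, 1.2, 0.8, 150.0]  # one pathological item
print(robust_dispersion(winsorize(ratios, cap=10.0)))  # 5.0, instead of 75.0 with max()
```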

Hope it helps.

Remark: I have edited your posts to add the Envision code formatting syntax, https://news.lokad.com/static/formatting

Envision is deterministic. If you re-run the same code over the same data, you should not get different results.

Then, there is pseudo-randomness involved in functions like actionrwd. The seeding tends to be quite dependent on the exact fine print of the code. If you change filters, for example, you are most likely going to end up with different results.

Thus, even a seemingly "minor" code change can lead to a re-seeding behavior.

As a rule of thumb, if the logic breaks due to re-seeding, then the logic is friable and must be adjusted so that its validity does not depend on being lucky with the seeding of the random generators.
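The re-seeding behavior can be reproduced with plain Python (the pipeline below is a hypothetical stand-in; Envision's internals differ): re-runs are reproducible, but a minor filter change alters the order in which the generator is consumed, so the surviving items get different draws.

```python
import random

def pipeline(data, seed=123, extra_filter=False):
    # Deterministic: same code + same data + same seed => same output.
    rng = random.Random(seed)
    out = []
    for x in data:
        if extra_filter and x < 0:
            continue  # a seemingly "minor" filter change...
        out.append(x + rng.random())
    return out

data = [3, -1, 4]
# Re-running identical code is reproducible:
print(pipeline(data) == pipeline(data))  # True
# ...but the filtered variant consumes the generator in a different order,
# so the surviving items receive different random draws:
print(pipeline(data)[2] == pipeline(data, extra_filter=True)[1])  # False
```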

vermorel 8 months | flag | on: deleted post

Please try to ask self-contained questions. Without context, those questions are a bit cryptic to the community.

You can share code and/or links to the Envision playground. Think of this board as Stack Overflow, but for supply chain.

Cheers,

vermorel 8 months | flag | on: S&OP [pic]

S&OP is only ever touted as a "grand success" by consultants who directly profit from the massive overhead.

In contrast, I have met with 200+ supply chain directors in 15 years. I have witnessed several dozen S&OP processes in $1B+ companies. I have never seen one of those processes be anything other than a huge bureaucratic nightmare.

I politely, but firmly, disagree with the statement that having *a* process is better than having no process at all. This is a fallacy. There is no grand requirement written in the sky that any of the things S&OP does have to be done at all.

Also, instead of using by .. at everywhere, you could declare Suppliers as upstream of Items. This removes the need for the by .. at option entirely. I give an example of the relevant syntax at: https://news.lokad.com/posts/647

It is possible to declare a tuple as the primary dimension of a table in a read block through the keyword as:


read "/suppliers.csv" as Suppliers [(Supplier, Location) as myPrimary] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

A more complete example:


read "/skus.csv" as Skus with
  Id : text
  Supplier : text
  Location : text

read "/suppliers.csv" as Suppliers [(Supplier, Location) as sulo] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

expect Skus.sulo = (Skus.Supplier, Skus.Location)

Skus.LeadTimeInDays = Suppliers.LeadTimeInDays

Hey! Thanks for your interest. I am not too sure which code you are referring to. Don't hesitate to include an Envision snippet (see https://news.lokad.com/static/formatting ) in your question to clarify what you are working on. You can also include a link to the Envision code playground (see https://try.lokad.com ) if you can isolate the problem.

The usual Lokad approach to lead time forecasting is to craft a parametric probabilistic model, to be regressed with differentiable programming. This approach makes it possible, for example, to introduce a distance parameter in the model. The value of this parameter is then learned by regressing the model over whatever data happens to be available. Conversely, if there is no data at all (at least for now), the value of the parameter can be hard-coded to a guesstimate as a transient solution.

Then again, this approach might be overkill if there is enough data to support a direct lead time ranvar construction per supplier-location instead of per supplier.

Let me know if it helps.

vermorel 9 months | flag | on: Be careful what you negotiate for! [pic]
Where you say “to some extent negotiable” (paraphrased) could we regard it as the quantity unit corresponding to a price, and that a different and likely higher price might apply to orders of smaller quantities? In which case, knowing the tiers of quantity and their corresponding prices would enable us to find the best order pattern, trading off price, wastage or inventory holding cost, and lead time.

What you are describing is frequently referred to as 'price breaks'. Price breaks can indeed be seen as a more general flavor of MOQs. In practice, there are two flavors of price breaks: merchant and fiscal. See also https://docs.lokad.com/library/supplier-price-breaks/

What is a better way of getting stakeholder engagement for large investment without a smaller PoC-like approach?

The fundamental challenge is de-risking the process.

How does one get stakeholder engagement for TMS, WMS, MRP or ERPs? Those products are orders of magnitude more expensive than supply chain optimization software, and yet, there are no POCs.

I can't speak for the whole enterprise software industry. In its field, the Lokad approach to de-risking a quantitative supply chain initiative consists of making the whole thing accretive in a way that is largely independent of the vendor (i.e., Lokad).

Lokad charges on a monthly basis, with little or no commitment, and the process can end at any time. Whenever it ends, if it ends at all, the client company (the one operating a supply chain) can resume where Lokad left off.

The fine-print of the process and methodologies is detailed in my series of lectures https://lokad.com/lectures

vermorel 9 months | flag | on: What defines supply chain excellence?

My own take is that IT, and more generally anything that is the real foundation of actual execution, is treated as a second-class citizen, especially the _infrastructure_. Yet, the immense majority of the operational woes in supply chain nowadays are IT-related or even IT-driven. For example, _Make use of channel data_ is wishful thinking for most companies due to the IT mess. IT is too important to be left in the hands of IT :-)

vermorel 10 months | flag | on: Safety stock [pic]

I have two main objections to safety stocks, a stronger one and a weaker one.

First, my stronger objection is that safety stocks contradict what basic economics tells us about supply chain. By design, safety stocks are a violation of basic economics. As expected, safety stocks don't end up proving economics wrong; it's the other way around: economics proves safety stocks wrong. This argument will be detailed in my upcoming lecture 1.7, see https://lokad.com/lectures

Second, my weaker objection is that safety stocks, as presented in every textbook and as implemented in every software, are hot nonsense. Not only are Gaussians used for both demand and lead time - while they should not be - but the way lead time is combined with demand is also sub-par. This argument is weak because, in theory, safety stock formulas could be rewritten from scratch to fix this; however, the first, stronger objection remains, thus it's moot.

See also:

- Why safety stock is unsafe https://tv.lokad.com/journal/2019/1/9/why-safety-stock-is-unsafe/
- Retail stock allocation with probabilistic forecasts - Lecture 6.1 https://tv.lokad.com/journal/2022/5/12/retail-stock-allocation-with-probabilistic-forecasts/
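For reference, a Python sketch of the textbook formula the second objection targets (the numbers are illustrative; the point is how many assumptions it bakes in: Gaussian demand, Gaussian lead time, and independence between the two):

```python
import math

def textbook_safety_stock(z: float, mean_lt: float, sd_lt: float,
                          mean_d: float, sd_d: float) -> float:
    # The classic textbook formula. It assumes Gaussian demand, Gaussian
    # lead time, and independence between the two -- precisely the
    # assumptions criticized above.
    return z * math.sqrt(mean_lt * sd_d**2 + mean_d**2 * sd_lt**2)

# Illustrative numbers: z = 1.645 (~95% service level under the Gaussian
# model), 7-day mean lead time, 10 units/day mean demand.
print(round(textbook_safety_stock(1.645, 7.0, 2.0, 10.0, 4.0), 1))  # 37.2
```

Neither demand nor lead time is Gaussian in practice (both are typically skewed and fat-tailed), so the service level this formula promises is not the one actually delivered.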

vermorel 10 months | flag | on: RFI, RFP and RFQ madness in supply chain

Very interesting reference! I will have to check it out.

For someone inside an organization, situations where you can't evaluate a software vendor entirely from publicly available information are pretty rare. Even a lack of information is telling (and not in a good way). The only thing usually missing is a quote from the vendor, but that doesn't require an RFP, merely a problem statement and some ballpark figures.

As a vendor (like Lokad), you don't have a say. If the prospect says that the process is an RFP, then so be it. I have repeatedly tried to convince prospects to stop paying consultants twice what it would cost them to do the setup of the supply chain solution they were looking for, but I have never managed to convince any company to give up on their RFP process. Thus, nowadays, we just go with the flow.

vermorel 10 months | flag | on: Community notes for docs.lokad.com

We have just rolled out a community note system for the technical documentation.

Envision snippets are allowed:


// Text following a double-slash is a comment
a = 5
b = (a + 1) * 3 / 4
show scalar "Result will be 4.5" a1b1 with b // display a simple tile

But also mathematical expressions:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$
vermorel 10 months | flag | on: How SAP Failed The Supply Chain Leader

The article, by Lora Cecere, a notable market analyst in supply chain circles, has been taken down by Forbes.
It seems that Forbes is afraid of losing SAP as a client. So much for an independent press...

Update: my network tells me that a copy of the article can be found at:
https://pdfhost.io/v/lE65WObHk_How_SAP_Failed_The_Supply_Chain_Leader

vermorel 11 months | flag | on: 21st Century Trends in Supply Chain

Yes, exactly the meaning of terms. Every company uses the terms product, order, stock level, but those words rarely mean exactly the same thing from one company to the next.

vermorel 11 months | flag | on: Forecast Accuracy [pic]

Inaccurate forecasts can't be right for the company; this is pretty much self-evident. Thus, companies have been chasing better forecasts, leveraging varied metrics. Yet, while this game has been played relentlessly for the last four decades, nearly all companies have next to nothing to show for all those efforts.

The Lokad position is that the way those forecasting initiatives were framed - around deterministic forecasts - spelled their doom from day 1.

vermorel 11 months | flag | on: 21st Century Trends in Supply Chain

Yes, indeed. Also, I am very much aligned with the paper's vision that "Simplicity is Hard". Stuff (patterns, organizations, processes, ...) can only become simple with the adequate intellectual instruments (terminologies, concepts, paradigms). Unearthing those instruments is difficult.

Among companies operating complex supply chains, I have rarely seen anyone (outside Lokad) maintain glossaries. Yet, a glossary is probably one of the cheapest ways to eliminate some accidental complexity.

vermorel 11 months | flag | on: Malleable Software

LLMs can certainly support a whole next-gen replacement for Tableau-like software (widely used for supply chain purposes), where the SQL queries are generated from prompts. I may have to revisit my Thin BI section at https://www.lokad.com/business-intelligence-bi a few years down the road.

However, system-wide consistency is a big unsolved challenge. LLMs have token limits. Within those limits, LLMs are frequently beyond-human for linguistic or patternistic tasks (lacking a better word). Beyond those limits, things become very fuzzy. Even OpenAI doesn't seem convinced of its own capacity to push those token limits further within the current LLM paradigm.

Yes, this part has been somewhat hastily written (my fault). At Lokad, we tend to alternate between the algebra of random variables (faster, more reliable) and the Monte Carlo approach (more expressive). Below is the typical way we approach this integrated demand over the lead time while producing a probabilistic forecast at the end (this is very much aligned with your "simulation" approach):


present = date(2021, 8, 1)
keep span date = [present .. date(2021, 10, 30)]
 
Day.Baseline = random.uniform(0.5 into Day, 1.5) // 'theta'
alpha = 0.3
level = 1.0 // initial level
minLevel = 0.1
dispersion = 2.0

L = 7 + poisson(5) // Reorder lead time + supply lead time

montecarlo 500 with
  h = random.ranvar(L)

  Day.Q = each Day scan date // minimal ISSM
    keep level
    mean = level * Day.Baseline
    deviate = random.negativebinomial(mean, dispersion)
    level = alpha * deviate / Day.Baseline + (1 - alpha) * level
    level = max(minLevel, level) // arbitrary, prevents "collapse" to zero
    return deviate

  s = sum(Day.Q) when (date - present <= h)
  sample d = ranvar(s)

show scalar "Raw integrated demand over the lead time" a4d6 with d
show scalar "Smoothed integrated demand over the lead time" a7d9 with smooth(d)

See also https://try.lokad.com/s/demand-over-leadtime-v1 if you want to try out the code.

vermorel 12 months | flag | on: Let's try Lokad

By the way, mathematical formulas are pretty-printed as well:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$
vermorel 12 months | flag | on: Let's try Lokad

I have just updated Supply Chain News to pretty print Envision scripts as well. Here is the first script:


montecarlo 1000 with // approximate π value
  x = random.uniform(-1, 1)
  y = random.uniform(-1, 1)
  inCircle = x^2 + y^2 < 1
  sample approxPi = avg(if inCircle then 4 else 0)
show scalar "π approximation" with approxPi // 3.22

A discussion with Jay Koganti, Vice President of Supply Chain at Estée Lauder’s Centre of Excellence

vermorel Jan 23, 2023 | flag | on: Architecture of Lokad

The predictive optimization of supply chain comes with unusual requirements. As a result, the usual software recipes for enterprise software aren't working too well. Thus, we had to diverge - quite substantially - from the mainstream path.

The 5 trends as listed by the author:

  • 88% of small businesses supply chains will use suppliers closer to home by next year.
  • Small business supply chains are moving most or all suppliers closer to the U.S. faster than predicted
  • The strained economy and low inventory are top stressors
  • Software-based emerging tech is on the rise while hardware-based ones lag behind
  • 67% of SMB supply chains say their forecasting techniques were helpful in preventing excess inventory

This problem is referred to as censored demand. Indeed, it is not the sales but the demand that we are interested in forecasting. Unfortunately, there is no such thing as historical demand, only historical sales, which represent a loose approximation of the demand. When a product goes out of the assortment, due to a stockout or otherwise, sales drop to zero, but demand (most likely) does not.

The old-school approach to censored demand consists of iterating through the historical sales data and replacing the zero segments with a demand forecast. Unfortunately, this method is fraught with methodological issues: building a forecast on top of another forecast is friable. Furthermore, in the case of products that are not sold for long periods (not just rare stockout events), say summer, forecasting a fictitious demand over those long periods is not entirely sensical.

The technique most commonly used at Lokad to deal with censored demand is loss masking, understood from a differentiable programming perspective. This technique is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/
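A minimal Python sketch of the loss-masking idea (illustrative numbers; Lokad does this within a differentiable programming framework, not this toy gradient descent): the loss contribution of censored periods is zeroed out, so the model is fitted only against the periods where sales actually reflect demand.

```python
# Fit a constant demand level by gradient descent on a masked squared loss:
# stockout periods (mask = 0) contribute nothing, instead of being wrongly
# interpreted as zero demand.
sales = [10, 12, 0, 0, 11, 9, 0, 13]
on_shelf = [1, 1, 0, 0, 1, 1, 0, 1]  # 0 = stockout, sales are censored

level, lr = 0.0, 0.05
for _ in range(500):
    grad = sum(m * (level - s) for s, m in zip(sales, on_shelf))
    level -= lr * grad / sum(on_shelf)

print(round(level, 1))  # 11.0, the mean of the *observed* demand
```

Without the mask, the same fit would converge to 6.9 (the mean over all periods), badly underestimating the demand.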

Hope it helps, Joannes

There are several questions to unpack here about seasonality. (1) Is seasonality best approached as a multiplicative factor? (2) Is seasonality best approached through a fixed-size vector reflecting those factors? (hence the "profile") (3) How to compute the values of those vectors?

Concerning (1), the result that Lokad has obtained at the M5 competition is a strong case for seasonality as a multiplicative factor:
https://tv.lokad.com/journal/2022/1/5/no1-at-the-sku-level-in-the-m5-forecasting-competition/ The literature provides alternative approaches (like additive factors); however, these don't seem to work nearly as well.

Concerning (2), the use of a fixed-size vector to reflect the seasonality (like a 52-week vector) has some limitations. For example, it struggles to capture patterns like an early summer. More generally, the vector approach does not work too well when the seasonal patterns are shifting, not in amplitude, but in time. The literature provides more elaborate approaches like dynamic time warping (DTW). However, DTW is complicated to implement. Nowadays, most machine learning researchers have moved toward deep learning. However, I am on the fence on this: while DTW is complicated, it has the benefit of having a clear intent model-wise (important for whiteboxing).
https://en.wikipedia.org/wiki/Dynamic_time_warping

Finally (3), the best approach that Lokad has found to compute those vector values is differentiable programming. It achieves either state-of-the-art results or very close to state-of-the-art with a tiny fraction of the problems (compute performance, blackbox behavior, debuggability, stability) associated with alternative methods such as deep learning and gradient-boosted trees. The method is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/
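A toy Python sketch of (1) and (3) together (the data and learning rate are illustrative; a real differentiable-programming setup is far richer): a shared level and multiplicative seasonal factors, jointly regressed by stochastic gradient descent on a squared loss.

```python
# Illustrative data: two years of quarterly sales with a Q4 peak.
sales = [80, 100, 90, 130, 88, 110, 99, 143]

level = sum(sales) / len(sales)   # shared level, initialized at the mean
season = [1.0, 1.0, 1.0, 1.0]     # multiplicative factors, one per quarter
lr = 1e-5

for _ in range(5000):
    for t, s in enumerate(sales):
        q = t % 4
        err = level * season[q] - s     # residual of the multiplicative model
        season[q] -= lr * err * level   # gradient step on the seasonal factor
        level -= lr * err * season[q]   # gradient step on the shared level

fitted = [level * season[q] for q in range(4)]  # converges to per-quarter means
```

The parameters stay directly interpretable (a level times a seasonal multiplier), which is the whiteboxing benefit mentioned above.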

Hope it helps, Joannes

Patrice Fitzner, who contributed to the design of the Quai 30, a next-gen 21st-century logistical platform, explains the thinking that went into this 400m by 100m monster of automation.

Very nerdy. Factorio rocks: https://www.factorio.com/

Just to clarify the terminology that I am using in the following: the EOQ (economic order quantity) is a quantity decided by the client, while the MOQ (minimal order quantity) is a quantity imposed by the supplier. Here, my understanding is that the question is oriented toward EOQs (my answer below); but I am wondering if it's not about picking the right MOQs to impose on clients (which is another problem entirely).

The "mainstream" methods for EOQs, especially all of those that promise any kind of optimality, suffer from a series of problems:

  • ignore variations of the demand, which is expected to be stationary (no seasonality, for example)
  • ignore variations of the lead time, which is expected to be constant
  • apply only to the "simple EOQ" of a single P/N at a time (but not to an EOQ for the whole shipment)
  • ignore macro-budgeting constraints, i.e., this PO competing against other POs (from other suppliers, for example)
  • ignore the ramifications of the EOQs across dependent BOMs (clients don't care about anything but the finished products)
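For concreteness, the classic Wilson formula is the archetype of those "mainstream" methods (the numbers below are illustrative); it embodies exactly the stationarity and single-P/N assumptions just listed:

```python
import math

def wilson_eoq(annual_demand: float, ordering_cost: float,
               holding_cost: float) -> float:
    # The classic Wilson formula: sqrt(2*D*S/H). It assumes away exactly
    # the problems listed above: stationary demand, constant lead time,
    # a single P/N, no budget constraint, no BOM ramifications.
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Illustrative numbers: 1200 units/year, $50 per order, $2/unit/year holding.
print(round(wilson_eoq(1200, 50.0, 2.0)))  # 245
```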

Do not expect a formula for EOQs. There isn't one. A satisfying answer requires a way "to factor in" all those elements. What we have found at Lokad for better EOQs in manufacturing (not "optimal" ones; I am not even sure we can reason about optimality) is that a certain list of ingredients is needed:

  • probabilistic forecasts that provide probability distributions, at least for the future demand and the future lead times. Indeed, classic forecasts deal very poorly with irregular flows (both demand and supply), and MOQs, by design, magnify the erraticity of the flow.
  • stochastic optimization, that is, the capacity to optimize in the presence of randomness. Indeed, the EOQ is a cost minimization of some kind (hence an optimization problem), but this optimization happens under uncertain demand and uncertain lead time (hence the stochastic flavor).
  • a financial perspective, i.e., we don't optimize percentages of error, but dollars of error. Indeed, EOQs are typically a tradeoff between more stock and more overhead (shipment, paperwork, handling, etc.)

In my series of supply chain lectures, I will be treating (probably somewhere next year) the fine print of MOQs and EOQs in my chapter 6. For now, the lecture 6.1 provides a first intro into the main ingredients needed for economic order optimization, but without delving (yet) into the non-linearities:
https://tv.lokad.com/journal/2022/5/12/retail-stock-allocation-with-probabilistic-forecasts/

It will come. Stay tuned!

vermorel Nov 30, 2022 | flag | on: Goodbye, Data Science

An incredibly perceptive discussion that reflects my own experience with data science in general.

vermorel Nov 29, 2022 | flag | on: Cycle Count Manager

A small side software project dedicated to inventory counting.

vermorel Nov 23, 2022 | flag | on: The supply chain triangle in 3 minutes [video]

The earliest occurrence I could find of the concept is 2016 with the presentation:
https://www.slideshare.net/BramDesmet/supply-chain-innovations-2016-strategic-target-setting-in-the-supply-chain-triangle

The 2018's book is available at:
https://www.amazon.com/gp/product/B07CL2MCWS/

vermorel Nov 14, 2022 | flag | on: The Saga of Supply Chain Innovation

ATP (used in the article) stands for Available-To-Promise.

I am very much in agreement concerning the list of stuff that didn't work: Consolidations Decimated Value, Consultants Failed to Deliver Value Through Software Models, Barney Partnerships Bled Purple, not Green, The Saga of Venture Capitalists and Private Equity Firms, New Forms of Software Marketing Creates Haze not Value

Concerning the value of cloud and NoSQL. Well, yes, but it's a bit of old news. Lokad migrated toward cloud computing and NoSQL back in 2010. A lot has happened since then. For a discussion of what a modern cloud-based tech stack looks like, see https://blog.lokad.com/journal/2021/11/15/envision-vm-environment-and-general-architecture/

vermorel Nov 09, 2022 | flag | on: Prioritized Ordering [video]

A couple of relevant links:

In most of Western Europe, my (tough) take is that, career-wise, those certifications are worth the paper they are printed on. The vast majority of the supply chain executives that I know have no certification.

More specifically, the example exam questions are ludicrous, see
https://www.theorsociety.com/media/1712/cap_handbook_14122017133427.pdf

MCQs (Multiple Choice Questions) are the exact opposite of the sort of problems faced by supply chain practitioners. MCQs emphasize a super-shallow understanding of a vast array of keywords. Worse, they treat those keywords (e.g. data mining, integer programming) as if each encompassed some cohesive body of work (or tech). This is wrong, plain wrong.

vermorel Oct 25, 2022 | flag | on: Supply Chains are Healing

Minor: Edited the title to avoid the question mark. I am trying to reserve the question titles to actual questions addressed to the community.

Also, discussed at https://news.lokad.com/posts/349/ (1 comment)

Light retrospective on the evolution of Amazon's automated decision-making tech for supply chain. Interesting nugget: Amazon appears to be still using their multi-horizon quantile recurrent forecaster (1), as it took several years to cover the full scope (which is not unreasonable considering the scale of Amazon).

(1) A multi-horizon quantile recurrent forecaster
By Ruofeng Wen, Kari Torkkola, Balakrishnan (Murali) Narayanaswamy, Dhruv Madeka, 2017
https://www.amazon.science/publications/a-multi-horizon-quantile-recurrent-forecaster
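
For readers unfamiliar with quantile forecasters: models of this family are trained against the pinball (quantile) loss, which is simple enough to state in a few lines. A minimal sketch:

```python
def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss. Minimizing it over a dataset drives
    y_pred toward the q-quantile of the distribution of y_true."""
    diff = y_true - y_pred
    return q * diff if diff >= 0 else (q - 1) * diff

# at q = 0.9, under-forecasting costs 9x more than over-forecasting
assert abs(pinball_loss(100, 80, 0.9) - 18.0) < 1e-9  # ~ 0.9 * 20
assert abs(pinball_loss(80, 100, 0.9) - 2.0) < 1e-9   # ~ 0.1 * 20
```

This asymmetry is what lets one model produce the whole range of quantiles needed for supply chain decisions.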

The book can be purchased from
https://www.amazon.com/Profit-Source-Transforming-Business-Suppliers-ebook/dp/B099KQ126Z

The main message by Schuh et al. is that a collaborative relationship with suppliers can be vastly more profitable than an oppressive one solely focused on lowering supply prices. While the idea isn't novel, most companies still favor aggressive, antagonistic procurement strategies which leave no room for more profitable collaborations to emerge.

10 years ago, Amazon was acquiring Kiva Systems for $775 million.

The quote is from The Testament of a Furniture Dealer by Ingvar Kamprad, IKEA founder. The original document can be found at:
https://www.inter.ikea.com/en/-/media/InterIKEA/IGI/Financial%20Reports/English_The_testament_of_a_dealer_2018.pdf

Forecasting and S&OP initiatives almost invariably turn into bureaucratic monsters.

A team from Lokad took part in the M5 competition. The method, which landed No1 at the SKU level, has been presented at https://tv.lokad.com/journal/2022/1/5/no1-at-the-sku-level-in-the-m5-forecasting-competition/

vermorel Sep 26, 2022 | flag | on: Software to simplify the supply chain

Interesting nuggets of this interview with Ryan Petersen, CEO of Flexport:

- 20% of the Flexport workforce is software engineering. The rest is sales and account management.
- The P95 transit time is a 95% quantile estimate of the transit time; part of the core Flexport promise.

Overall, a very interesting discussion, although the 'simplify' part really refers to the Flexport product itself.
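
As an aside, the P95 mentioned above is just an empirical quantile; one common convention (among several) for computing it:

```python
def p95(samples):
    """Empirical 95% quantile: a value such that at least 95% of the
    observed transit times fall at or below it."""
    s = sorted(samples)
    rank = -(-95 * len(s) // 100)  # ceil(0.95 * n), integer arithmetic
    return s[rank - 1]

transit_days = [12, 14, 15, 15, 16, 17, 18, 19, 22, 35]
assert p95(transit_days) == 35  # ceil(0.95 * 10) = 10 -> 10th smallest
```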

Most supply chain initiatives fail. Dead-ends are a given, although my understanding differs a little bit concerning the root causes. Among the top offenders, the lack of decision-centric methodologies and technologies ranks very high. In the 'future' section proposed by the author, I see layers of processes to generate ever more numerical artifacts (e.g. Market-driven Demand, Demand Visibility, Baseline Demand and Ongoing Analysis of Market Potential, Unified Data Model Tied to a Balanced Scorecard, Procurement/Buyers Workbench).

vermorel Sep 26, 2022 | flag | on: Made.com puts itself up for sale

A decade ago, Made.com - along with a couple of similar e-commerce companies - took extensive advantage of the payment terms of their overseas suppliers (mostly in Asia). Their supply chain execution allowed them to sell their goods while the goods were still in transit. This worked well for furniture, as customers - at the time - were OK waiting a month or two to receive their order. I don't know where they stand now, but I suspect that the supply chain tensions (sourcing problems in Asia + surging transport fees) pose significant challenges to this business model.

vermorel Sep 23, 2022 | flag | on: Stock is stock - whatever you call it [pic]

My previous take on safety stocks:
https://tv.lokad.com/journal/2019/1/9/why-safety-stock-is-unsafe/

In short, not only is the normal distribution assumption bad, but the whole approach is very naïve. It made (somewhat) sense before the advent of computers, but at present, safety stocks should be treated as a method of historical interest only.
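
For context, the textbook recipe under criticism fits in one line; the normal assumption is baked into the z-score:

```python
from math import sqrt
from statistics import NormalDist

def textbook_safety_stock(sigma_daily, lead_time_days, service_level):
    """Classic formula: z * sigma_d * sqrt(L). It assumes normally
    distributed, independent daily demand -- the very assumptions
    argued above to be unsafe for real supply chains."""
    z = NormalDist().inv_cdf(service_level)
    return z * sigma_daily * sqrt(lead_time_days)

ss = textbook_safety_stock(sigma_daily=5.0, lead_time_days=9, service_level=0.95)
# z(0.95) ~ 1.645, so ss ~ 1.645 * 5 * 3 ~ 24.7 units
```

Demand that is intermittent, fat-tailed, or correlated over time breaks every term of this formula.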

vermorel Sep 21, 2022 | flag | on: Differentiating Relational Queries

This work is done by Paul Peseux who is currently doing a PhD at Lokad. In terms of research, it's a convergence between machine learning, mathematical optimization and compiler design; fields that are usually considered as fairly distinct - but that end up being glued together in the context of differentiable programming.

vermorel Sep 09, 2022 | flag | on: Demand sensing [pic]
impossible to get a real sense of what's offered without some sort of PoC

Fully agree. This is why I abhor those made-up terminologies: it's pure vendor shenanigans.

vermorel Sep 09, 2022 | flag | on: Demand sensing [pic]

Q: Why a new buzzword if it's about repackaging techniques that already have proper names?
A: Occam's Razor: to make the tech appear more attractive and valuable than it really is.

According to [1], SAP describes 'demand sensing' as 'a forecasting method that leverages [...] near real-time information to create an accurate forecast of demand'.

  • Why would a 'near real-time' forecast be any more accurate than a batch forecast produced with a lag of, say, 10 minutes?
  • Why should gradient boosting even be considered a relevant technical solution for 'near real-time' tasks?

Remove demand sensing from the picture, and you still have the exact same tech with the exact same processes.

[1] https://blogs.sap.com/2020/02/09/sap-integrated-business-planning-demand-sensing-functionality/

vermorel Sep 09, 2022 | flag | on: Save The Supply Chain Leader From Groupthink
When I look at the market, I see major contributions of GroupThink:
- Failure of IT Standardization. SAP and IBM failed the market. The recent gains in market share of Kinaxis, o9, and OMP are largely due to the failure of SAP to drive thought leadership in planning.
- Private Equity M&A. Software mergers & acquisitions also slowed innovation. The technology roll-ups of INFOR, JDA (now BlueYonder), and E2open improved investors’ balance sheets, but did not drive value for their clients.
- Event Companies Are the Nemesis of the Industry. Event companies take large sums of money from technology companies and host events based on the Rolodex of a prior supply chain leader

A spot-on analysis. Low-level IT standardization is moving forward nicely (think federated identity management), but the same cannot be said of high-level IT (think workflows). The success of products like Tableau reflects a major need to cope with this lack of standardization.

M&A in enterprise software almost always results in a large amount of technical debt. It's very hard to get good software engineers motivated about cleaning up millions of lines of haphazard code where stuff has just been "thrown together".

Event Companies are a severe form of epistemic corruption. I discussed the case in https://tv.lokad.com/journal/2021/3/31/adversarial-market-research-for-enterprise-software/

Full text article https://archive.ph/wlCqS

The sheer scale of this piece of engineering is incredible. We are talking about a machine that is 400m x 100m x 12m. The whole thing maximizes what can be done in terms of inventory storage and inventory throughput while taking advantage of every cubic meter available.

Digital talent remains the Achilles' heel of many (most?) supply chain initiatives. Nonsensical tech decisions keep being made due to a lack of understanding of what is at stake. The challenges pointed out in this post a few years back have only become more acute since then.

Stochastic gradient descent (SGD) is used for a whole variety of supply chain problems, from demand forecasting to pricing optimization. From a software performance perspective, the crux of the SGD problem is to increase the wall-clock rate of descent while preserving the determinism of the execution. Indeed, as far as parallelization is concerned, nondeterminism is the default; it takes effort to achieve a reproducible flavor of the algorithm. The report introduces a technique that delivers a 5x speed-up at a 6x increase in compute costs.
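
To make the determinism point concrete, here is a minimal sequential SGD: a fixed seed plus sequential execution make it bit-for-bit reproducible, and it is precisely this property that naive parallelization destroys. (A toy sketch, unrelated to the actual technique discussed in the report.)

```python
import random

def sgd_fit(xs, ys, lr=0.01, epochs=2000, seed=7):
    """Fit y = a*x + b by stochastic gradient descent on squared error.
    Same seed, same data -> exactly the same (a, b), every run."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                 # deterministic shuffle
        for i in idx:
            err = (a * xs[i] + b) - ys[i]
            a -= lr * err * xs[i]        # per-sample gradient step
            b -= lr * err
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]           # exactly y = 2x + 1
a, b = sgd_fit(xs, ys)                   # converges to a ~ 2, b ~ 1
```

Run the samples in parallel without extra machinery and the update order varies between runs, so the resulting (a, b) does too.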

10 years is a good ballpark assessment to produce a good software product - assuming there are people who will stick around for a decade to see it through. See https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/ Written 20 years ago, but the points are still largely valid.

vermorel Sep 04, 2022 | flag | on: Conformal prediction materials

Conformal prediction is one of the flavors of probabilistic forecasting, leaning toward high-dimensional situations. This repository is an extensive compilation of the papers, PhD theses, and open-source toolkits available for conformal prediction.
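
The simplest member of the family, split conformal prediction, is short enough to sketch: take any point forecaster, collect its absolute residuals on a held-out calibration set, and use an empirical quantile of those residuals as the interval half-width. (Illustrative numbers below.)

```python
import math

def conformal_half_width(calib_residuals, alpha=0.1):
    """Split conformal prediction: the ceil((n+1)(1-alpha))-th smallest
    absolute calibration residual. Under exchangeability, the interval
    forecast +/- half_width covers new values with prob >= 1 - alpha."""
    s = sorted(abs(r) for r in calib_residuals)
    rank = math.ceil((len(s) + 1) * (1 - alpha))
    return s[min(rank, len(s)) - 1]

residuals = [0.5, 1.2, 0.3, 2.1, 0.9, 1.7, 0.4, 3.0, 1.1, 0.8]
hw = conformal_half_width(residuals, alpha=0.2)
assert hw == 2.1  # ceil(11 * 0.8) = 9 -> 9th smallest |residual|
```

The appeal is that the coverage guarantee holds regardless of how bad the underlying point forecaster is; only the interval width suffers.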

We strongly believe that probabilistic methods play a key role in the future of supply chain planning. They represent the right way to go for areas that contain uncertainty, including for all types of forecasting and the subsequent planning situations.
Traditional deterministic planning methods base their decisions on the mistaken assumption that uncertain values can be approximated by a single average number. As a direct consequence of this assumption, plans are often infeasible at the time they are created and manual interventions are continuously needed.

Better late than never! However, let's immediately point out that the SAP IBP architecture is very much hostile to probabilistic modeling. More specifically, the high memory consumption of HANA is going to add massive overhead on top of methods that are not exactly lightweight in the first place.

vermorel Sep 02, 2022 | flag | on: Proxy Variable [pic]

Service levels are probably the favorite proxy variable in supply chain. Supply chain textbooks and consultants assume that "finely tuned service levels" automatically translate into better outcomes for the company, while those service levels say very little of substance about the quality of service actually perceived by customers.

Supply chains are complex systems. It's maddeningly difficult to solve problems rather than displacing them. When confronted with incredibly difficult problems, bureaucracies are also incredibly good at ignoring them altogether. In supply chain, big problems are usually big enough to take care of themselves.

How far should we go to say that we have reached a point of, say, realistic representation of an agent? Also, when we say accuracy, what does it really mean?

Right now, as far as my understanding of the supply chain literature goes, nothing has been published yet to tell you whether a simulation - in the general case - is accurate or not. The tools we have for time-series forecasts don't generalize properly to higher-dimensional settings.

For example, if a simulator of a multi-echelon supply chain of interest is implemented, and someone then decides to refine the model of some inner agent within the simulator, there is no known metric that can tell you whether this refinement makes the simulator more accurate or not.

Stay tuned, I am planning a lecture on the subject in the future, it's a big tough question.

vermorel Sep 01, 2022 | flag | on: Lokad is hiring a supply chain content creator

Lokad tries to push a lot of (hopefully) quality supply chain materials in the open. Unlike many vendors, we don't attempt to shroud our technology in a veil of mystery. However, we need backup. If you think you can help us produce videos, guides, articles ... then drop your resume at j.vermorel@lokad.com

Answering a question on YouTube:

As per my understanding the following are the core concerns -
1) Accuracy
2) Doesn't necessarily represent reality based on Agent behaviour
3) Gives me insights, ok now what should i do? Don't give me numbers, tell me what to do. If an employee sits down and tweaks parameters then how do i make sense if the decision is correct?

Yes, in short, the two big gotchas are (a) your digital twin may not reflect reality, and (b) your digital twin may not be prescriptive.

Concerning (a), measuring accuracy when considering the modeling of a system turns out to be a difficult problem. I intend to revisit the case in my series of supply chain lectures, but it's nontrivial, and so far all the vendors seem to be sweeping the dust under the rug.

Concerning (b), if all the digital twin delivers is metrics, then it's just an elaborate way to waste employees' time, and thus money. Merely presenting metrics to employees is suspicious if there is no immediate call-to-action. If there is a call-to-action, then let's take it further and automate the whole thing.

Indeed. Although, as aircraft get dismantled, a lot of spare parts get introduced into the market. Thus, most of the time, the parts of waning aircraft types become cheaper despite the lack of new production. However, as you correctly point out, some parts become rare and very expensive, making the aircraft type economically unviable.

A nice illustration of the sort of stuff that characterizes aviation supply chains: aircraft are both expensive and modular. Thus, the option is always on the table to take a component from one aircraft and move it to another. Most of the time, exercising this option is pointless, but sometimes it's an economically viable move. Here, this is what Boeing is doing with aircraft engines. Aviation supply chains are not about picking safety stocks :-)

Fun fact: Lokad started to implement digital twins of supply chains more than a decade ago, although I don't overly like this terminology. As a rule of thumb, I tend to dislike terminologies that try to make tech sound cool, irrespective of the merit of said technology. There are tons of challenges associated with large-scale modeling of supply chains, the first one being: how accurate is my digital twin? Tech vendors are usually exceedingly quiet about this essential question.

The 747 has been in production for 54 years. The most notable evolution was the introduction of fly-by-wire tech in the 1990s:
https://www.flightglobal.com/boeing-747-x-flies-by-wire/6314.article

This plane has massively contributed to the democratization of both air travel and air shipments. Considering that aircraft are typically operated for decades, some 747s are likely to keep flying for the next 20-30 years.

Lion Hirth is Professor of Energy Policy at the Hertie School. His research interests lie in the economics of wind and solar power, energy policy instruments and electricity market design.

The document introduces marginal pricing - in the context of energy - and makes three statements about it:

  • Marginal pricing is not unique to power markets.
  • Marginal pricing is not an artificial rule.
  • If you want to get rid of marginal pricing, you must force people to change their behavior.

These three points are very much aligned with what is generally understood as mainstream economics. They are quite general and apply to most supply chains as well.
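
The merit-order mechanics behind marginal pricing fit in a few lines (the plant data below is purely illustrative):

```python
def clearing_price(plants, demand_mw):
    """Merit-order dispatch: plants run from cheapest to most expensive
    marginal cost until demand is met; the marginal cost of the last
    dispatched plant sets the price every producer receives."""
    served = 0.0
    for cost, capacity in sorted(plants):  # sort by marginal cost
        served += capacity
        if served >= demand_mw:
            return cost
    raise ValueError("demand exceeds total capacity")

# (marginal cost EUR/MWh, capacity MW) -- illustrative numbers only
plants = [(5, 300), (0, 500), (40, 400), (90, 300), (180, 200)]
assert clearing_price(plants, 1300) == 90  # the 90 EUR/MWh plant is marginal
assert clearing_price(plants, 700) == 5    # low demand: a cheap plant is marginal
```

The sketch also shows why cheap sources alone do not lower the price: as long as an expensive plant is needed to close the gap, that plant sets the price.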

I am not familiar with the specific Greek energy market.

However from a supply-and-demand perspective,

  • Intermittent energy sources do not meet the Quality of Service requirement which is essential for energy.
  • As long as the source can be turned into electricity, all energy sources are near perfect substitutes. Hence, one cannot isolate the price of a selection of sources vs the rest.
  • When demand is inelastic and grows, as is the case for energy demand, it's the supply that has to grow.

vermorel Aug 29, 2022 | flag | on: Lokad is hiring a Supply Chain Scientist

The Supply Chain Scientist delivers human intelligence magnified through machine intelligence. The smart automation of the supply chain decisions is the end product of the work done by the Supply Chain Scientist.

Excerpt from 'The Supply Chain Scientist' at
https://www.lokad.com/the-supply-chain-scientist

Transit costs to low orbit are still beyond the realm of supply chain; however, it is notable that the cost per kilogram has gone down by a factor of 1000 over the course of 70 years. If progress keeps happening at the same pace, in a few decades launches will become an option. The benefits of easier access to low orbit are somewhat unclear beyond telecommunications, but specialized micro-gravity factories have been explored many times in science fiction. At this point, orbit remains too expensive to even investigate newer / better industrial processes.

vermorel Aug 27, 2022 | flag | on: How to Measure Forecastability (2021)

The only way to assess the "forecastability" of a time-series is to use a forecasting model as a baseline. This is exactly what is done in the article, but unfortunately, it means that if the baseline model is poor, the "forecastability" assessment is going to be poor as well. There is no workaround for that.

Stepping back, one of the things that I learned more than a decade ago at Lokad is that all forecasting metrics are moot unless they are connected to the euros or dollars attached to tangible supply chain decisions. This is true for deterministic and probabilistic forecasts alike, although the problem becomes more apparent when probabilistic forecasts are used.
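
The baseline-dependence is easy to make concrete: "forecastability" is usually a ratio of the model's error to a baseline's error, so a weak baseline mechanically inflates the apparent forecastability. A sketch with made-up numbers:

```python
def error_ratio(actuals, model_forecasts, baseline_forecasts):
    """Ratio of a model's total absolute error to a baseline's.
    Values < 1 mean the model beats the baseline -- but the number
    is only meaningful relative to the chosen baseline."""
    model_err = sum(abs(a - f) for a, f in zip(actuals, model_forecasts))
    base_err = sum(abs(a - b) for a, b in zip(actuals, baseline_forecasts))
    return model_err / base_err

actuals = [10, 12, 9, 14, 11]
model   = [11, 11, 10, 13, 12]
naive   = [10, 10, 12, 9, 14]   # naive baseline: previous actual
ratio = error_ratio(actuals, model, naive)  # 5 / 13 ~ 0.38
```

Swap the naive baseline for a stronger one and the very same series suddenly looks far less "forecastable".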

Be our guest, virtually! These live, one-hour tours take you behind the scenes at our fulfilment centres, using a combination of live streaming, videos, 360° footage, and real-time Q&A to replicate the experience of our in-person tours.
Live virtual tours are approximately 1 hour long, including Q&A.
Registration closes 6 hours in advance of each tour. Last-minute registration ("instant join") is not possible. Tours will no longer appear in the calendar once registration is closed, or when they are fully booked.

Various options are available depending on the region of interest:

vermorel Aug 26, 2022 | flag | on: Adequate decision policy is the way [pic]

Steps for the new supply chain decision systems:

  • Pick expensive consultants to devise a 100 pages RFP. Gather all requirements, especially the imaginary ones.
  • Select 20 vendors, shortlist 2 ultra-expensive big names plus 1 cheap startup (they won't make it to the final round, but those guys are more fun to talk to)
  • Pick the big name vendor that has the most features. An excess of 1000 screens is desirable.
  • Plug the latest bleeding edge AI toolkit. The important part is the "bleeding" part, that's a sign of real innovation.
  • Customize all UIs so that everything becomes collaborative. Numbers were bad before, but now, it costs a fortune to produce them.
  • After 6 months, declare the initiative a success, and change jobs immediately afterward.

Simple, really.

vermorel Aug 26, 2022 | flag | on: Future-proof your supply chain

The article proposes three ways, namely:

  • Building supply chain resilience by managing risk
  • Using technology to increase supply chain agility
  • Identifying and promoting ways to be more sustainable

However, the analysis is a bit all over the place.

  • For risk management, the example of RFID at Nike is given. However, RFID has nothing to do with risk management at the supply chain level.
  • For supply chain agility, the AI / ML example (which features a plug for a planning software vendor) is a double-edged sword. Historically, software has been a great force for rigidifying systems, lowering their operating costs but usually making them less agile too.
  • For sustainability, frankly, this is pure virtue-signaling, both from the article itself and from the survey respondents. I am not saying that sustainability isn't a worthy goal; however, very few companies are in any position to do much on this front as far as their supply chain is concerned.

Afaik, those types of ships are typically referred to as bulk carriers

A bulk carrier or bulker is a merchant ship specially designed to transport unpackaged bulk cargo — such as grains, coal, ore, steel coils, and cement — in its cargo holds.

From https://en.wikipedia.org/wiki/Bulk_carrier

The interesting element is the extra option that COSCO gains by being able to leverage one extra type of ship. This method is probably inferior cost-wise to regular containers, but if a bulk carrier is the only ship that happens to be available, then the option becomes very valuable.

vermorel Aug 25, 2022 | flag | on: How to calculate true demand (2021)

The post points out that computing "demand" needs to factor in the delivery date (requested) vs the shipped date (realized). However, I am afraid this is a very thin contribution.

Demand is an incredibly multi-faceted topic. Demand is never observed. Only sales, or sales intents, are observed. The sales are conditioned by many (many) factors that distort the perception of demand.

First, let's start with the easy ones, the factors that simply censor your perception of the demand:

  • Not having the right product to sell. The sale never happens, yet the demand was there.
  • Not having the right price. Idem, demand exists, just not at this price.
  • Not having the right positioning (bad image, bad description). Visitors miss what they could have wanted.
  • Not having the right delivery promise. Visitors give up if out-of-stock or if the delivery date is too far away.

Then, we have all the big factors:

  • Say's law: supply creates its own demand; demand isn't preexisting, it's engineered as such.
  • Branding: take two physically identical products, plus/minus the brand, and demand changes entirely.
  • Cannibalizations and substitutions: demand covers a whole spectrum of willingness to buy. Demand cannot be understood at the product level.
  • etc

Looking at demand through the lens of time-series analysis is short-sighted.
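
The censoring factors above have a direct computational consequence: the recorded time series is sales, i.e. demand clipped by the available stock. A minimal illustration:

```python
def observed_sales(demand, stock):
    """Sales are censored demand: once the stock runs out, the rest of
    the demand is never observed. Reading sales as demand therefore
    systematically understates demand on stockout days."""
    return [min(d, s) for d, s in zip(demand, stock)]

true_demand = [4, 7, 3, 9, 6]
stock_level = [10, 5, 10, 4, 10]
sales = observed_sales(true_demand, stock_level)  # [4, 5, 3, 4, 6]
# naive 'demand = sales' misses 7 units (29 demanded vs 22 sold)
```

Worse, feeding censored sales back into a forecasting model compounds the error: low stock begets low sales, which begets low forecasts, which begets low stock.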

Ps: thanks a lot for being one of the first SCN contributors!

In 2011 Lidl made the decision to replace its homegrown legacy system “Wawi” with a new solution based on “SAP for Retail, powered by HANA”. [..] Key figure analyzes and forecasts should be available in real time. In addition, Lidl hoped for more efficient processes and easier handling of master data for the more than 10,000 stores and over 140 logistics centers.
[..]
The problems arose when Lidl discovered that the SAP system based it's inventory on retail prices, where Lidl was used to do that based on purchase prices. Lidl refused to change both her mindset and processes and decided to customise the software. That was the beginning of the end.

Disclaimer: Lokad competes with SAP on the inventory optimization front.

My take is that the SAP tech suffered from two non-recoverable design issues.

First, HANA has excessive needs for computing resources, especially memory. This is usually the case with in-memory designs, but HANA seems to be one of the worst offenders (see [1]). This adds an enormous amount of mundane friction. At the scale of Lidl, this sort of friction becomes very unforgiving - every minor glitch turns into a many-hour (sometimes multi-day) fix.

Second, when operating at the supply chain analytical layer, complete customization is a necessity. There is no such thing as a "standard" decision-taking algorithm to drive a replenishment system moving billions of euros worth of goods per year. This insight goes very much against most of the design choices made in SAP. Customization shouldn't be the enemy.

[1] https://www.brightworkresearch.com/how-hana-takes-30-to-40-times-the-memory-of-other-databases/

vermorel Aug 25, 2022 | flag | on: Probabilistic supply chain vision in Excel

This spreadsheet contains a prioritized inventory replenishment logic based on a probabilistic demand forecast. It illustrates how SKUs compete for the same budget when it comes to improving service levels while keeping the amount of inventory under control. A lot of in-sheet explanations are provided so that the logic can be understood by practitioners.
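
The prioritization logic can be sketched outside Excel as well: given a probabilistic demand forecast per SKU, repeatedly buy the single unit - across all SKUs - with the best expected fill per dollar, until the budget runs out. (Hypothetical numbers; the spreadsheet's own logic may differ in its details.)

```python
def prioritized_replenishment(skus, budget):
    """Greedy prioritized ordering under one shared budget.
    skus: {name: (unit_cost, [P(demand >= 1), P(demand >= 2), ...])}
    The k-th unit of a SKU serves P(demand >= k) expected units."""
    qty = {name: 0 for name in skus}
    spent = 0.0
    while True:
        best, best_score = None, 0.0
        for name, (cost, p_ge) in skus.items():
            k = qty[name]
            if k < len(p_ge) and spent + cost <= budget:
                score = p_ge[k] / cost    # expected units served per dollar
                if score > best_score:
                    best, best_score = name, score
        if best is None:
            return qty                    # budget or useful units exhausted
        qty[best] += 1
        spent += skus[best][0]

skus = {
    "A": (4.0, [0.95, 0.70, 0.30, 0.05]),  # cheap, fast mover
    "B": (9.0, [0.80, 0.40, 0.10]),        # pricier, slower mover
}
plan = prioritized_replenishment(skus, budget=30.0)  # {'A': 3, 'B': 2}
```

Because each marginal unit is priced in expected service per dollar, the SKUs genuinely compete: the cheap fast mover secures its first units before the slower mover gets any.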

vermorel Aug 24, 2022 | flag | on: The beer game

This is a nice, readily accessible implementation: no sign-up, no login, just create a new game and play. For those who are not familiar with the beer game, it's a 4-stage supply chain game with 4 roles: manufacturer, distributor, supplier, retailer. Each player fills a role and tries to keep the right amount of goods flowing. It's a nice - and somewhat brutal - way to experience a fair dose of bullwhip. If you don't have 3 friends readily available, the computer will play the other 3 roles.
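
The bullwhip the game teaches can also be reproduced in a few lines of simulation: each stage naively orders what it just shipped plus a correction toward a target inventory, and the order variance grows as you move upstream. (A toy model, much cruder than the game itself.)

```python
import random

def beer_game(weeks=200, stages=4, target=20.0, seed=11):
    """Toy 4-stage chain. Negative inventory stands for backlog;
    replenishment is instant (no lead time) to keep the sketch short."""
    rng = random.Random(seed)
    inventory = [target] * stages
    orders = [[] for _ in range(stages)]
    for _ in range(weeks):
        incoming = rng.uniform(5, 15)    # end-customer demand
        for s in range(stages):
            inventory[s] -= incoming     # ship (backlog if negative)
            placed = max(0.0, incoming + 0.5 * (target - inventory[s]))
            inventory[s] += placed       # instant replenishment (toy)
            orders[s].append(placed)
            incoming = placed            # becomes the upstream demand
    return orders

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

orders = beer_game()
# variance grows upstream: variance(orders[3]) >> variance(orders[0])
```

Even with no lead times and perfectly steady average demand, the reactive ordering rule alone is enough to amplify variability at each stage.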

Ps: I never got the chance to experience this game at university. If some people did, I would love to hear about their experience - as students - of their first 'Beer game'.