vermorel Oct 09, 2023 | flag | on: Unicity of ranvar after transform

The function transform should be understood from the perspective of the divisibility of random variables, see https://en.wikipedia.org/wiki/Infinite_divisibility_(probability)

However, just like not all matrices can be inverted, not all random variables can be divided. Thus, Lokad adopts a pseudo-division approximate approach which is reminiscent (in spirit) of the pseudo-inverse of matrices. This technique depends on the chosen optimization criteria, and indeed, in this regard, although transform does return a "unique" result, alternative function implementations could be provided as well.

vermorel Oct 09, 2023 | flag | on: Cross Entropy Loss Understanding

Cross-entropy is merely a variant of the likelihood in probability theory. Cross-entropy works on any probability distribution as long as a density function is available. See for example https://docs.lokad.com/reference/jkl/loglikelihood.negativebinomial/

If you can produce a parametric density distribution, then, putting pathological situations aside, you can regress it through differentiable programming. See fleshed out examples at https://www.lokad.com/tv/2023/1/11/lead-time-forecasting/
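
As an illustration, here is a minimal sketch of such a regression. The data is made up, and the argument order of loglikelihood.negativebinomial (mean, dispersion, observation) is my assumption; check the docs page above:

```envision
table T = with // made-up demand observations
  [| as Qty |]
  [| 3 |]
  [| 5 |]
  [| 4 |]
  [| 8 |]

// regress the two parameters by maximizing the log-likelihood
autodiff T epochs: 500 learningRate: 0.01 with
  params mu in [0.1..] auto(4, 0.5) // mean
  params d in [1..] auto(2, 0.1)    // dispersion
  return -loglikelihood.negativebinomial(mu, d, T.Qty)

show summary "Regressed negative binomial" a1b2 with
  mu as "mean"
  d as "dispersion"
```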

In the article, it is mentioned that Lokad collected empirical data which supports the claim that Cross Entropy is usually the most efficient metric to optimize, rather than MSE, MAPE, CRPS, etc. Is it possible to view that data?

No, unfortunately, for two major reasons.

First, Lokad has strict NDAs in place with all our client companies. We do not share anything, not even derivative data, without the consent of all the parties involved.

Second, this claim should be understood from the perspective of the experimental optimization paradigm, which is (most likely) not what you think. See https://www.lokad.com/tv/2021/3/3/experimental-optimization/

Hope it helps,
Joannes

Miceli_Baptiste Oct 04, 2023 | flag | on: Difference with extend.range

Side note on this function: if you extend.split a line not containing any of the S.Separators, you will still get the line in the resulting table. This is not the same as extend.range, for instance.
Script to illustrate this case: https://try.lokad.com/s/extend.split

vermorel Oct 01, 2023 | flag | on: Lag Forecasting

I have a few tangential remarks, but I firmly believe this is where you should start.

First, what is the problem that you are trying to solve? Here, I see you struggling with the concept of "lag", but what you are trying to achieve is unclear. See also https://www.lokad.com/blog/2019/6/3/fall-in-love-with-the-problem-not-the-solution/

Second, put aside Excel entirely for now. It is hindering, not helping, your journey toward a proper understanding. You must be able to reason about your supply chain problem / challenge without Excel; Excel is a technicality.

Third, read your own question aloud. If you struggle to read your own prose, then it probably needs to be rewritten. Too frequently I realize, upon reading my own draft, that the answer was in front of me all along once the question was properly (re)phrased.

Back to your question / statement, it seems you are confusing / conflating two distinct concepts:

  • The forecasting horizon
  • The lead times (production / dispatch / replenishment)

Then, we also have the lag, which is a mathematical concept akin to time-series translation.

Any forecasting process is horizon-dependent, and no matter how you approach the accuracy, the accuracy will also be horizon-dependent. The duration between the cut-off time and the forecasted time is frequently referred to as the lag because, in order to backtest, you will be adding "lag" to your time-series.

Any supply chain decision takes time to come to pass, i.e. there is a lead time involved. Again, in order to factor in those delays, it is possible to add "lag" to your time-series to reflect the various delays.

Lagging (aka time-series shift, time-series translation) is just a technicality to factor in any kind of delay.

Hope it helps.

ramanathanl Oct 01, 2023 | flag | on: Lag Forecasting

Link for the Excel file
Copy and paste the entire link in a new window (do not click directly on the link as it does not seem to redirect correctly)

https://1drv.ms/x/s!AmdAMe2CGp70kgXFYSnCr0-SHoEV?e=prEmbD

remi-quentin_92 Sep 21, 2023 | flag | on: Values of alpha parameter

The value of alpha in this example is very high. I would suggest using a value close to 0.05 (depending on how correlated you want your sales to be).

ToLok Sep 21, 2023 | flag | on: Unicity of ranvar after transform

Formally, $P[X=n/a]$ is not a random variable but a scalar, and the corresponding ranvar is not unique.
Could we add more details about how the ranvar returned by transform() is chosen?
A graphical example might also be a nice addition.
Thanks!

Miceli_Baptiste Sep 19, 2023 | flag | on: Simple use case of the escape function

A very common use case is searching for hidden characters in any string (most commonly in Product references).
This script: https://try.lokad.com/s/hiddencharacters shows how we can detect special characters in order to fix any corrupted text.

Important to note: in the upload read at the beginning of the script:

```envision
read upload "myEditable" as myEditable with ..
```

and where the editable is defined:

```envision
editable: "myEditable"
```

the feature is case sensitive. So if editable: "myeditable" is written (lower case `e` instead of upper case `E`), you will not get an error message, but your values will not be saved correctly when updating the table and running it from the dashboard. The two names need to match exactly, character for character.
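
A minimal sketch of the pairing (the columns Ref and Qty are hypothetical; what matters is that the two "myEditable" strings match exactly):

```envision
read upload "myEditable" as myEditable with
  Ref : text
  Qty : number

// the editable name below must match the read upload above character-for-character
show table "Editable quantities" a1c4 editable: "myEditable" with
  myEditable.Ref
  myEditable.Qty
```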

Effective MRO (maintenance, repair and overhaul) requires meticulous management of up to several million parts per plane, where any unavailability can result in costly aircraft-on-ground (AOG) events. Traditional solutions to manage this complexity involve implementing safety stock formulas or maintaining excessive inventory, both of which have limitations and can be financially untenable. Lokad, through a probabilistic forecasting approach, focuses on forecasting the failure or repair needs of every individual part across the fleet and assessing the immediate and downstream financial impact of potential AOG events. This approach can even lead to seemingly counter-intuitive decisions, such as not stocking certain parts and instead paying a premium during actual need, which may, paradoxically, be more cost-effective than maintaining surplus inventory. Furthermore, Lokad’s approach automates these decision-making processes, reducing squandered time and bandwidth and increasing operational efficiency.

Miceli_Baptiste Sep 07, 2023 | flag | on: Ranvar representation

Ranvars have buckets that spread over multiple values.
The first such bucket is the 65th (meaning that the probabilities for 65 and 66 are always the same in a ranvar), so dirac(65) actually spreads over two values (65 and 66).
There are then 64 more buckets with 2 values each, then 64 buckets with 4 values, etc., so the thresholds are: 64, 192, 448, … (each one being of the form $\sum_{k=0}^{n} 64 \cdot 2^k$)

Example script: https://try.lokad.com/6rk5wgpaf4mp0?tab=Output

Miceli_Baptiste Aug 30, 2023 | flag | on: What happens in case of equality

In case of ties on T.a, the returned T.b value is the first value encountered. In fact, argmax is a process function scanning the table in its default order; in case of equality, it will return different values for two equivalent tables ordered in different ways.

This script https://try.lokad.com/5c15t7ajn1j38?tab=Code illustrates this tie handling and the usage of the function, and highlights the importance of the ordering with the Hat.

Conor Aug 23, 2023 | flag | on: Suppler Analysis through Envision (Workshop #1)

A free public tutorial on how to use Envision (Lokad's DSL) to analyze retail suppliers.

Yes, just use text interpolation to insert your text values. See below:


```envision
table T = with 
  [| date(2021, 2, 1) as D |]
  [| date(2022, 3, 1) |]
  [| date(2023, 4, 1) |]

maxy = isoyear(max(T.D))

show table "My tile title with \{maxy}" a1b3 with
  T.D as "My column header with \{maxy}"
  random.integer(10 into T) as "Random" // dummy
```

On the playground https://try.lokad.com/s/ad-hoc-labels-in-table-tile

Does the AI use a time series? Lol

As is common for the buzzword of the year in supply chain - a lot of noise, but very little substance.

jamalsan Aug 17, 2023 | flag | on: Lag Forecasting

My two cents: in a classical setting, manufacturing would have a frozen horizon period and use the net demand + stock policy to define its procurement and production at T0. Additionally, you would have sourced more raw material than your short-term demand requires (again, safety stock in its classical sense + lot quantity from the tier-1 supplier).
In each cycle, the base forecast is converted into net demand for the next node (your excess material / existing stock would be subtracted from the forecast).

ramanathanl Aug 14, 2023 | flag | on: Lag Forecasting

I have a few doubts regarding the concept of "Lags" in forecasting.
Let T0, T1, T2... be the time periods, with T0 being the current time period. "Row 2" in the attached Excel gives the forecast generated in time period T0 for the next month onwards, T1, T2...

Once time period T0 is over and we reach time period T1, the forecast is again generated for time periods T2, T3, and so on. "Row 3" in Excel gives us this.

The "Actual" sales observed in each time period are given by "Row 8", highlighted in Green.

"Lag 1" signifies the forecast for the next immediate Time period. So forecast generated in "T0" for "T1"; forecast generated in "T1" for "T2" and so on. The same is highlighted in a shade of yellow and the successive snapshots are in "Row 10".

"Lag 2" signifies the forecast for 2 Time periods from now. So forecast generated in "T0" for "T2"... and the successive snapshots are in "Row 11" highlighted in light blue.

Likewise for "Lag 3" and "Lag 4".

Let us consider a company, and let us assume "Lag 4" is used for the procurement of Raw Materials.
"Lag 3" is used for Manufacturing.
"Lag 2" is used for dispatching to the DCs.
"Lag 1" is used for replenishing the stores.

So if we are in "T0", Lag 4 forecast = 420 units, and we will procure raw material worth this.
After 1 time period elapses, we are in "T1" and we would manufacture for "410" forecast for the time period "T4" (Lag3). (What would happen to the 10 units worth of Raw Material that will not be manufactured?)

When we come to T2, we will have to dispatch 500 (Lag2), so if we only made 410 in the previous step, how do we get the extra 90 units?

When we come to T3, we have to send 430 (Lag1) to stores. If we got 500 from the previous step what happens to the 70 units? If we only got 410 (as Lag3 was 410 and we assume we manufacture and send the same to the DCs), we still fall short by 20 units.

My question is: at every step, the forecast for a particular time period ("T4") changes whenever we move from "T0" to "T1", or from "T1" to "T2". So where do we get the additional units at each stage if the forecast at, say, Lag 2 (500) > Lag 3 (410)? Conversely, what happens to the excess material if Lag 4 (420) > Lag 3 (410)?

For each lag we have:

Error = Forecast - Actuals
Accuracy = 1 - |Error| / Actuals

The same has been computed in the Excel file. Please let me know if my understanding is correct.

vermorel Aug 14, 2023 | flag | on: Display Data by Year

Envision has a today() function, see


show scalar "Today" a1b2 with today()

table X = with 
  [| today() as today |]

show table "X" a3b4 with X.today

See https://try.lokad.com/s/today-sample

In your example above, DV.today is not hard-coded but most likely loaded from the data. It's a regular variable, not the standard function today().

Hope it helps,
Joannes

ttarabbia Aug 11, 2023 | flag | on: Spilling to Disk in .NET [video]

Great talk - the in-memory approach makes more sense when you have a lot of global dependencies. I would imagine you get some thrashing behavior in cases where you spill the "wrong" thing.

David_BH Aug 04, 2023 | flag | on: ExcelFormat currency change

If you need your column to be in € when the user downloads the file as an Excel, you can replace
excelformat: "#,##0.00\ [$₽-419]" with
excelformat: "#,##0.00\ [$€-407]"

For other currencies:

  • $ => [$$-409]
  • ¥ => [$¥-804]
  • ₽ => [$₽-419]
  • £ => [$£-809]
  • ₺ => [$₺-41F]
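
For instance, a sketch of how such a format could be attached to a column (the StyleCode placement is my assumption, and the table T and its Amount column are hypothetical):

```envision
// column exported to Excel rendered as EUR
show table "Prices" a1b3 with
  T.Amount as "Amount" { excelformat: "#,##0.00\ [$€-407]" }
```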

ArthurGau Aug 02, 2023 | flag | on: Support for trimming in dates and numbers

Thanks a lot for your contribution arkadir!
There's a slight mistake: the date format specification should appear before the table alias. So the line should be this instead:

```envision
read "/example.csv" date: "yyyy-MM-dd*" as T with
```

s40racer Aug 01, 2023 | flag | on: Forecast Analysis - Forecast Quality

Now I encounter another issue. The code below follows what I posted initially.
```envision
// ///Export

quantileLow1 = 0.3
quantileLow2 = 0.05
quantileHigh1 = 0.7
quantileHigh2 = 0.95

ItemsWeek.One = dirac(1)
ItemsWeek.Demand = dirac(0)

where ItemsWeek.FutureWeekRank > 0
  ItemsWeek.Demand = actionrwd.segment(
    TimeIndex: ItemsWeek.FutureWeekRank
    BaseLine: ItemsWeek.Baseline
    Dispersion: Items.Dispersion
    Alpha: 0.05
    Start: dirac(ItemsWeek.FutureWeekRank - 1)
    Duration: ItemsWeek.One
    Samples: 1500)

// ////BackTest Demand

keep where min(ItemsWeek.Baseline) when (ItemsWeek.Baseline > 0) by Items.Sku >= 1

ItemsWeek.One = dirac(1)
ItemsWeek.BacktestForecastWeekRank = 0
where ItemsWeek.IsPast
  ItemsWeek.BacktestForecastWeekRank = rank() by Items.Sku scan - ItemsWeek.Monday

keep where ItemsWeek.BacktestForecastWeekRank > 0 and ItemsWeek.BacktestForecastWeekRank < 371

where ItemsWeek.BacktestForecastWeekRank > 0
  ItemsWeek.BackTestDemand = actionrwd.segment(
    TimeIndex: ItemsWeek.BacktestForecastWeekRank
    BaseLine: ItemsWeek.Baseline
    Dispersion: Items.Dispersion
    Alpha: 0.05
    Start: dirac(ItemsWeek.BacktestForecastWeekRank - 1)
    Duration: ItemsWeek.One
    Samples: 1500)
```
I have no issue with the forward-looking forecast. I have, however, an issue with the backward forecast test... specifically with BacktestForecastWeekRank. It grows to 790 days, which is greater than what actionrwd can allow (365 days). The data set I have goes back to 2018. Would this be the cause?

s40racer Aug 01, 2023 | flag | on: Forecast Analysis - Forecast Quality

Thank you. I resolved the issue above by using
```envision
keep where Items.Sku in ForecastProduit.Sku
```
This makes sure the SKUs in the Items table match the SKUs in the ForecastProduit table.

vermorel Jul 29, 2023 | flag | on: Forecast Analysis - Forecast Quality

I suspect it's the behavior of the same aggregator when facing an empty set, which defaults to zero; see my snippet below:


```envision
table Orders = with // hard-coding a table
  [| as Sku, as Date          , as Qty, as Price |] // headers
  [| "a",    date(2020, 1, 17), 5     , 1.5      |]
  [| "b",    date(2020, 2, 5) , 3     , 7.0      |]
  [| "b",    date(2020, 2, 7) , 1     , 2.0      |]
  [| "c",    date(2020, 2, 15), 7     , 5.7      |]

where Orders.Sku == "foo"
  x = same(Orders.Price) // empty set, defaults to zero
  y = same(Orders.Price) default 42 // forcing the default

show summary "same() behavior" a1b2 with
  x as "without default" // 0
  y as "with default"    // 42
```

Try it at https://try.lokad.com/s/same-defaults-to-zero

Hope it helps.

s40racer Jul 28, 2023 | flag | on: Forecast Analysis - Forecast Quality

I did an output table to see the values in the Items and ItemsWeek tables:


```envision
today = max(Sales.Date)
todayForecast = monday(today) + 7

Items.Amount365 = sum(Sales.LokadNetAmount) when (Date >= today - 365)
Items.Q365 = sum(Sales.DeliveryQty) when (Date >= today - 365)
Items.DisplayRank = rank() scan Items.Q365

table ItemsWeek = cross(Items, Week)
ItemsWeek.Monday = monday(ItemsWeek.Week)
ItemsWeek.IsPast = single(ForecastProduit.IsPast) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.Baseline = single(ForecastProduit.Baseline) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.DemandQty = single(ForecastProduit.DemandQty) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.SmoothedDemandQty = single(ForecastProduit.SmoothedDemandQty) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.FutureWeekRank = single(ForecastProduit.FutureWeekRank) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
Items.Dispersion = same(ForecastProduit.Dispersion)

show table "items" with
  today
  todayForecast
  Items.Amount365
  Items.Q365
  Items.DisplayRank
  ItemsWeek.Monday
  ItemsWeek.IsPast
  ItemsWeek.Baseline
  ItemsWeek.DemandQty
  ItemsWeek.SmoothedDemandQty
  ItemsWeek.FutureWeekRank
  Items.Dispersion

show table "forecastproductit" with
  ForecastProduit.date
  ForecastProduit.Sku
  ForecastProduit.DemandQty
  ForecastProduit.Baseline
  ForecastProduit.Dispersion
```

and confirmed that there is quite a bit of data with a dispersion value of 0, but this is not the case in the ForecastProduit table (as verified from the code output above). Any suggestions on what may cause the dispersion value to become 0?

The dispersion of actionrwd.foo is controlled by Dispersion:. At line 13 of your script, I see:


```envision
Items.Dispersion = max(Items.AvgErrorRatio/2, 1)
```

This line implies that if there is 1 item (and only 1) that happens to have a super-large value, then it will be applied to all items. This seems to be the root cause behind the high dispersion values that you are observing.

In particular,


```envision
ItemsWeek.RatioOfError = if ItemsWeek.Baseline != 0 then (ItemsWeek.Baseline - ItemsWeek.DemandQty) ^ 2 /. ItemsWeek.Baseline else 0
```

Above, ItemsWeek.RatioOfError can get very very large. If the baseline is small, like 0.01, and the demand qty is 1, then this value can be 100+.

Thus, my recommendations would be:

  • sanitize your ratio of error (see the sketch below)
  • don't use a max for the dispersion
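
For the first point, a minimal sketch of one possible sanitization (the cap value of 10 is arbitrary, to be tuned):

```envision
// clamp the per-week ratio of error so that a tiny baseline
// cannot blow up the dispersion estimate
ItemsWeek.RatioOfError = min(10, ItemsWeek.RatioOfError)
```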

Hope it helps.

Remark: I have edited your posts to add the Envision code formatting syntax, https://news.lokad.com/static/formatting

Envision is deterministic. You should not be able to re-run twice the same code over the same data and get different results.

Then, there is pseudo-randomness involved in functions like actionrwd. Thus, the seeding tends to be quite dependent on the exact fine print of the code. If you change filters, for example, you are most likely going to end up with different results.

Thus, even a seemingly "minor" code change can lead to re-seeding behavior.

As a rule of thumb, if the logic breaks due to re-seeding, then the logic is friable and must be adjusted so that its validity does not depend on being lucky during the seeding of the random generators.

Continuing from the previous comment - For the same SKU, the values for SeasonalityModel, Profile1, level changed between two runs on different days. I am unsure what caused the change in these values - the input data remained the same.

Code before dispersion:


```envision
ItemsWeek.ItemLife = 1
ItemsWeek.CumSumMinusOneExt = 0
where ItemsWeek.Monday >= firstDate
  ItemsWeek.CumSumMinusOneExt = (sum(ItemsWeek.ItemLife) by ItemsWeek.Sku scan ItemsWeek.Week) - 1
ItemsWeek.CumSumMinusOneExtMonth = ceiling(ItemsWeek.CumSumMinusOneExt / 8)
ItemsWeek.WeekNum = rank() by Items.Sku scan -monday(Week)
nbWeeks = same(ItemsWeek.WeekNum) when(ItemsWeek.Week == week(today()))
ItemsWeek.ItemLifeWeight = 0.3 + 1.2*(ItemsWeek.WeekNum/nbWeeks)^(1/3)
ItemsWeek.IsCache = ItemsWeek.Monday >= firstDate and ItemsWeek.Monday < today
ItemsWeek.Cache = if ItemsWeek.IsCache then 1 else 0
expect table Items max 30000
expect table ItemsWeek max 5m
table YearWeek[YearWeek] = by ((Week.Week - week(firstDate)) mod 52)
Items.SeasonalityGroup = Items.Category
table Groups[SeasonalityGroup] = by Items.SeasonalityGroup
table SeasonYW max 1m = cross(Groups, YearWeek)
Items.Level = avg(ItemsWeek.DemandQty) when(ItemsWeek.Monday >= today - 365 and ItemsWeek.Monday < today)
Items.Level = if Items.Level == 0 then -10 else
              if log(Items.Level) < -10 then - 10 else
              if log(Items.Level) > 10 then 10 else
              log(Items.Level)
maxEpochs = 1000
autodiff Items epochs:maxEpochs learningRate:0.01 with
  params Items.Affinity1 in [0..] auto(0.5, 0.166)
  params Items.Affinity2 in [0..] auto(0.5, 0.166)
  params Items.Level in [-10..10]
  params Items.LevelShift in [-0.5..0.5] auto(0, 0)
  params SeasonYW.Profile1 in [0..1] auto(0.5, 0.1)
  params SeasonYW.Profile2 in [0..1] auto(0.5, 0.1)
  SumAffinity =
    Items.Affinity1 + Items.Affinity2
  YearWeek.SeasonalityModel = SeasonYW.Profile1 * Items.Affinity1 + SeasonYW.Profile2 * Items.Affinity2
  Week.LinearTrend = ItemsWeek.Cache + (ItemsWeek.CumSumMinusOneExtMonth * ItemsWeek.Cache * Items.LevelShift / 10)
  Week.Baseline = exp(Items.Level) * YearWeek.SeasonalityModel * ItemsWeek.Cache * Week.LinearTrend
  Week.Coeff = ItemsWeek.ItemLifeWeight
  Week.DeltaSquare = (Week.Baseline - ItemsWeek.SmoothedDemandQty) ^ 2

  Sum = sum(Week.Coeff * Week.DeltaSquare) / 10000
  SumPowAffinity = (Items.Affinity1 ^2 +
                    Items.Affinity2 ^2 ) /\
                    (SumAffinity ^2)
  return ( \
          // Core Loss Function
          (1 + Sum) / (SumPowAffinity))
table ItemsYW = cross(Items, YearWeek)
ItemsYW.SeasonalityGroup = Items.SeasonalityGroup
ItemsWeek.YearWeek = Week.YearWeek
ItemsYW.Profile1 = SeasonYW.Profile1
ItemsYW.Profile2 = SeasonYW.Profile2
ItemsYW.SeasonalityModel = ItemsYW.Profile1 * Items.Affinity1 +
                           ItemsYW.Profile2 * Items.Affinity2
ItemsWeek.LinearTrend = max(0, 1 + (ItemsWeek.CumSumMinusOneExtMonth * Items.LevelShift / 10))
ItemsWeek.Baseline = exp(Items.Level) * ItemsYW.SeasonalityModel * ItemsWeek.LinearTrend
```

vermorel Jul 20, 2023 | flag | on: deleted post

Please try to ask self-contained questions. Without context, those questions are a bit cryptic to the community.

You can share code and/or links to the Envision playground. Think of this board as Stack Overflow, but for supply chain.

Cheers,

ToLok Jul 20, 2023 | flag | on: Forecast Analysis Performance Measures

Hello s40racer,

The forecast cockpit is evaluating the accuracy of the quantile 95 with respect to the past sales. In other words, it is measuring the percentage of time that the sales were over the quantile 95.
In a perfect forecast where we have the exact distribution of demand, this percentage should be equal to 5%: 95% of the time, the sales should be under the quantile 95 and 5% of the time sales should be over it.

In the example 11635178 - (Above: 9.62% - At: 0% - Below: 90.38%), it means that for the Reference 11635178, 9.62% of the time the sales were above the quantile 95 and 90.38% of the time they were below. In particular, this means that the forecast is slightly underestimating the demand for this specific Ref, as we actually have 4.62% more weeks with sales over the quantile 95 than expected.
It is completely normal to have small disparities such as this. If we didn't, we'd probably be overfitting the data.

Regarding the scope of the overall forecast sanity label, it is indeed a (weighted) average concerning only the Refs (not SKUs) in the Forecast Sanity table. In detail, it only looks at histories dating back at least 1 year: hence the more recent items are not in the analysis.

Hope it helps!

Conor Jul 18, 2023 | flag | on: ABC Analysis [pic]

Hopefully, the final nail in the coffin. We've already covered ABC (and ABC XYZ) in print and video, so we consider this matter put to rest!

Print:
https://www.lokad.com/abc-analysis-(inventory)-definition
https://www.lokad.com/abc-xyz-analysis-inventory

Video:
https://tv.lokad.com/journal/2018/9/12/abc-analysis/
https://tv.lokad.com/journal/2023/6/14/analyzing-abc-xyz/

In this interview recorded onsite at a Celio store in Rosny-sous-Bois, Joannes Vermorel and David Teboul (Managing Director of Operations at Celio) discuss the resurgence of Celio following the challenges of 2020-2021. David highlights the importance of a "normal" customer-focused approach in transforming the brand. Lokad supported this transformation by assisting in optimizing the supply chain to better cater to a diverse range of stores and offers. Despite increasing complexity and the rise of online commerce, David emphasizes the need for agility and the critical role of physical stores for Celio, while striving to understand and meet customer needs through various touchpoints.

manessi Jul 12, 2023 | flag | on: Editables: Runflow and IDE behavior

It is crucial to note that editables and the uploads tied to these will only be modified by a dashboard interaction followed by a "Start Run" from said dashboard.
A script has no control over what inputs it will receive when invoked from Runflow, from the IDE, or from the list of projects (basically, anywhere except from the dashboard). It will instead receive the same inputs as the previous run, unless manually overridden (through Runflow options, the “clear uploads” of the Run Details, or setting up dedicated inputs in the IDE).

manessi Jul 11, 2023 | flag | on: Reset/Clear uploaded file

If you need to reset an uploaded file or clear it altogether, the show upload can be tweaked into:

```envision
show upload "Please upload File 1" editable: "upload1" with Hash, Name
```

The hash should be a 32-character hexadecimal hash, such as the one obtained from Files.Hash, and the name should be a valid filename (no forbidden characters); more importantly, it should have the proper extension so that the file can be read.

If both the hash and the name are "", then that particular line is ignored (meaning, show upload "MyFile" with "", "" will clear the tile).
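
Putting it together, a minimal sketch (the scalar variables and their values are mine, for illustration):

```envision
hash = "" // a 32-character hexadecimal hash, or "" to clear
name = "" // a valid filename with the proper extension, or "" to clear

// with both values empty, the line is ignored and the tile is cleared
show upload "MyFile" editable: "upload1" with hash, name
```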

vermorel Jul 11, 2023 | flag | on: S&OP [pic]

S&OP is only ever touted as a "grand success" by consultants who directly profit from the massive overhead.

In contrast, I have met with 200+ supply chain directors in 15 years. I have witnessed several dozen S&OP processes in +1B companies. I have never seen one of those processes be anything other than a huge bureaucratic nightmare.

I politely, but firmly, disagree with the statement that *a* process is better than no process at all. This is a fallacy. There is no grand requirement written in the sky that any of the things that S&OP does have to be done at all.

Hello,

I had a look at your code.
First, I created a Sku table that you can find in your CustomerName/clean/Sku.ion file. We will use this table as the item table, as you want to compute things at the Sku level and not the Item level.
For the PurchaseOrders table, we want to do exactly the same thing, meaning create a Sku vector that is "MaterialSID x Location". The thing is that there is no location column in the PurchaseOrders table indicating where the goods are received.

Once we have it, we will simply create a Sku vector in the PurchaseOrders table and then use the primary dimension [Sku] as the join between the 2 tables, Sku and PurchaseOrders.

Best regards

Also, instead of using by .. at everywhere, you could declare Suppliers as upstream of Items. This will remove the need for the by .. at option entirely. I am giving an example of the relevant syntax at: https://news.lokad.com/posts/647

It is possible to declare a tuple as the primary dimension of a table in a read block through the keyword as:


read "/suppliers.csv" as Suppliers [(Supplier, Location) as myPrimary] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

A more complete example:


read "/skus.csv" as Skus with
  Id : text
  Supplier : text
  Location : text

read "/suppliers.csv" as Suppliers [(Supplier, Location) as sulo] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

expect Skus.sulo = (Skus.Supplier, Skus.Location)

Skus.LeadTimeInDays = Suppliers.LeadTimeInDays

dumay Jul 10, 2023 | flag | on: Prefer "scan" to "sort"

Envision's automatic hints recommend using "scan" rather than "sort" with this function.

ttarabbia Jul 07, 2023 | flag | on: On Time-Based Competition - George Stalk Jr.

Seems to me that supply chain can very easily become the enabler of, or barrier to, competing on time. He mentions an interesting example about optimizing for full truck loads and the effects on the business as a whole.

It is possible to have sanity checks in user-defined functions and throw an error if the check is not passed.
Cf. https://docs.lokad.com/reference/abc/assertfail/
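
As a minimal sketch (assuming the def pure function syntax and assertfail(message), per the docs page above; the ratio function is hypothetical):

```envision
// guard inside a user-defined function; assertfail aborts the run
// with the given message when the check is not passed
def pure ratio(a: number, b: number) with
  if b == 0
    assertfail("ratio: division by zero")
  return a / b

show scalar "Ratio" a1b1 with ratio(3, 4) // 0.75
```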

Thank you ToLok.

Are you able to modify the code or give a more explicit example of how to implement the code at the SKU level? From a data standpoint, I assume the following fields need to exist in the items, PO, and vendor tables in order to implement the SKU-level code: item #, destination location, and supplier ID?

Currently the partnering data has not been updated to such a structure. Only the Items table (Item Master) has the item #, supplier ID, and destination location. If the data structure noted above is needed to implement the SKU level code, I can make sure this is done.

Thank you.

ToLok Jul 06, 2023 | flag | on: Supplier Leadtime Forecast and Supplier Analysis

Hello s40racer,

Indeed, if you use Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location], you would get for each item the value corresponding to the pair (Items.Supplier; Items.Location), adding the granularity that you wanted.
However, this implementation implies that all items with the same pair (Supplier; Location) would have the same lead time. If you want different lead times for different items provided by the same supplier, you need to add the relevant Reference in the Suppliers table (both for your original case at the item level and your updated one at the SKU level).

Also looking at the original code:
It seems that your table Items has a primary dimension which is also present in PO, allowing you to have natural aggregation on lines 2, 3 and 4.
If the primary dimension was previously at the Item level, you might want to change it to the SKU level (Item x Location). This way, Items.SLT_ItemLevel will be the distribution of observed lead times for your specific SKU (versus for your specific Item previously).

Hope it helps!

Thank you for the guidance. I am asking more from the code standpoint. The data is given with lead time at the item-location level. I am thinking the easiest is to bring that data from the Items table into the Vendors table to utilize the existing code.
With the existing code, I assume I need to add a location variable to the file, looking something like:

Original:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Leadtime : number

Updated:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Location: text
  Leadtime : number

Then, in any subsequent joins or filters, I will need to add the location filter. How would I update the following code to account for the location-specific lead time?


```envision
///Possible SLT layers depending on how many datapoints can be found in the dataset
Items.SLT_ItemLevel = ranvar(PO.DeliveryDelay) when PO.IsClosed
Items.SLT_SupplierAndCategoryLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier,Items.Category] when PO.IsClosed
Items.SLT_SupplierLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier] when PO.IsClosed
Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by Suppliers.Supplier at Items.Supplier
```

Taking the last line as an example, would it look something like this?


```envision
Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location]
```

Hello,

It is indeed very common to have distinct supplier lead times depending on the location to be served.
The usual way to take the differences into account in your data is:
- Having a SKU table and not only an Item table
- If you have a Purchase Orders history with relevant data, then you can simply create a join between [PO.Sku] and [Sku.sku]. We would recommend using a probabilistic supplier lead time (use ranvar()); if that is not possible, then take the average

Hope it helps

Hey! Thanks for your interest. I am not too sure which code you are referring to. Don't hesitate to include an Envision snippet (see https://news.lokad.com/static/formatting ) in your question to clarify what you are working on. You can also include a link to the Envision code playground (see https://try.lokad.com ) if you can isolate the problem.

Lokad usually approaches lead time forecasts by crafting a parametric probabilistic model to be regressed with differentiable programming. This approach makes it possible, for example, to introduce a distance parameter in the model. The value of this parameter is then learned by regressing the model over the data that happens to be available. Conversely, if there is no data at all (at least for now), the value of the parameter can be hard-coded to a guesstimate as a transient solution.
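
As a minimal sketch (the table, field names and values are made up, and a plain squared loss stands in for a full probabilistic model, for brevity):

```envision
table Skus = with // made-up observed lead times
  [| as DistanceKm, as ObservedDays |]
  [| 120, 6 |]
  [| 450, 9 |]
  [| 800, 14 |]

// lead time modeled as a base delay plus a distance effect
autodiff Skus epochs: 500 learningRate: 0.01 with
  params base in [0..] auto(5, 1)
  params perKm in [0..] auto(0.01, 0.001)
  return (base + perKm * Skus.DistanceKm - Skus.ObservedDays)^2

show summary "Lead time model" a1b2 with
  base as "base (days)"
  perKm as "days per km"
```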

Then, this approach might be overkill if there is enough data to support a direct lead time ranvar construction over supplier-location instead of supplier.

Let me know if it helps.

When using probabilistic lead times in actionrwd.reward, there is a possibility of encountering situations where a previously placed order is simulated to arrive later than the additional potential order considered by actionrwd. In other words, if a purchase order (PO) is in progress, the simulated purchase order generated by actionrwd may not adhere to a first-in, first-out (FIFO) rule relative to the previous ongoing orders. This is a scenario that makes sense from a realistic standpoint, as purchase orders are not always strictly FIFO. However, from a stock manager/planning perspective, this can result in repetitive and misunderstood purchase suggestions for the user. It is unlikely that conditional lead time logic will be integrated into actionrwd, but this aspect should be addressed in the Monte Carlo reconstruction of actionrwd.
To avoid this pitfall, Supply Chain Scientists often resort to using deterministic lead times (e.g., dirac(days)) that preserve the FIFO rule.

I feel like the explanation of the alpha parameter is a bit incomplete. The definition "The update speed parameter of the ISSM model for each item" is quite vague, when what needs to be understood is that alpha represents the correlation between one observation and the next.
I would add that the 0.3 value in the code example is way too high in most cases, which can be misleading; a value of 0.05 would better fit usual cases to begin with.

arkadir Jun 29, 2023 | flag | on: Arguments of parsenumber function

The thousands separator is optional, but the decimal separator is mandatory. If no decimal separator is provided, the parsing will fail even if the provided numbers do not have decimals.

The shortest call to parsenumber (or tryparsenumber) is therefore:


```envision
T.Number = parsenumber(T.Text, "", ".")
```
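
For instance, a quick self-contained check (the values are made up):

```envision
table T = with
  [| as Text |]
  [| "1234.5" |]
  [| "42" |]

// no thousands separator, "." as the mandatory decimal separator
T.Number = parsenumber(T.Text, "", ".")
show table "Parsed" a1b2 with T.Text, T.Number
```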

arkadir Jun 29, 2023 | flag | on: Support for trimming in dates and numbers

When reading a date column, it is possible to provide a `*` at the end of the format to cause it to discard an optional time section, if present, for example:


read "/example.csv" as T date: "yyyy-MM-dd*" with

This will treat a value such as 2023-06-29 10:24:35 as if it were just 2023-06-29. Without this trim option, attempting to read the value will fail and report an error.

Similarly, when reading a number column, it is possible to provide a `*` at the end of the format to cause it to discard up to three non-digit characters either at the start or the end of the number value. For example:


read "/example.csv" as T number: "1,000.0*" with 

This will treat a value such as 10.00 USD as if it were just 10.00. Without this trim option, attempting to read the value will fail and report an error.

vermorel Jun 28, 2023 | flag | on: Be careful what you negotiate for! [pic]
Where you say “to some extent negotiable” (paraphrased), could we regard it as the quantity unit corresponding to a price, such that a different and likely higher price might apply to orders of smaller quantities? In which case, knowing the tiers of quantity and their corresponding prices would enable us to find the best order pattern, trading off price, wastage or inventory holding cost, and lead time.

What you are describing is frequently referred to as 'price breaks'. Price breaks can indeed be seen as a more general flavor of MOQs. In practice, there are two flavors of price breaks: merchant and fiscal. See also https://docs.lokad.com/library/supplier-price-breaks/

An enlightening chat on the future of aviation supply chain, shot within Air France's own engine repair facilities.

A remarkably well-illustrated dissertation on an under-studied topic. Very approachable, even for non-specialists.

What is a better way of getting stakeholder engagement for large investment without a smaller PoC-like approach?

The fundamental challenge is de-risking the process.

How does one get stakeholder engagement for TMS, WMS, MRP or ERPs? Those products are orders of magnitude more expensive than supply chain optimization software, and yet, there are no POCs.

I can't speak for the whole enterprise software industry. In its field, the Lokad approach to de-risking a quantitative supply chain initiative consists of making the whole thing accretive in a way that is largely independent of the vendor (aka Lokad).

Lokad charges on a monthly basis, with little or no commitment, and the process can end at any time. Whenever it ends, if it ends at all, the client company (the one operating a supply chain) can resume where Lokad left off.

The fine-print of the process and methodologies is detailed in my series of lectures https://lokad.com/lectures

vermorel Jun 13, 2023 | flag | on: What defines supply chain excellence?

My own take is that IT, and more generally anything that is really the foundation of actual execution, is treated as a second-class citizen, especially the _infrastructure_. Yet, the immense majority of operational woes in supply chain nowadays are IT-related or even IT-driven. For example, _Make use of channel data_ is wishful thinking for most companies due to the IT mess. IT is too important to be left in the hands of IT :-)

ttarabbia Jun 08, 2023 | flag | on: What defines supply chain excellence?

Not sure I agree with everything here - however, the bit on the different supply chain flows and their priorities is helpful (Efficient, Agile, Responsive, Seasonal, Low Volume). It helps to answer those questions whose answer is typically "it depends".

It begs the question of segmentation, since you are measuring performance by product/market/....

ArthurGau May 26, 2023 | flag | on: Related parsing functions

Related parsing functions:

  • containscount()
  • contains() : single needle
  • containsany() : multiple needles
  • fieldcount()

vermorel May 15, 2023 | flag | on: Safety stock [pic]

I have two main objections to safety stocks, a stronger one and a weaker one.

First, my stronger objection is that safety stocks contradict what basic economics tells us about supply chain. By design, safety stocks are a violation of basic economics. As expected, safety stocks don't end up proving economics wrong; it's the other way around: economics proves safety stocks wrong. This argument will be detailed in my upcoming lecture 1.7, see https://lokad.com/lectures

Second, my weaker objection is that safety stocks, as presented in every textbook, and as implemented in every software, are hot nonsense. Not only are Gaussians used for both demand and lead time - while they should not be - but the way lead time is combined with demand is also sub-par. This argument is weak because, in theory, safety stock formulas could be rewritten from scratch to fix this; however, the first, stronger objection remains, thus it's moot.

See also:

- Why safety stock is unsafe https://tv.lokad.com/journal/2019/1/9/why-safety-stock-is-unsafe/
- Retail stock allocation with probabilistic forecasts - Lecture 6.1 https://tv.lokad.com/journal/2022/5/12/retail-stock-allocation-with-probabilistic-forecasts/

vermorel May 12, 2023 | flag | on: RFI, RFP and RFQ madness in supply chain

Very interesting reference! I will have to check it out.

For someone inside an organization, situations where you can't evaluate a software vendor entirely from publicly available information are pretty rare. Even the lack of information is telling (and not in a good way). The only thing missing is usually getting a quote from the vendor, but that doesn't require an RFP, merely a problem statement, and some ballpark figures.

As a vendor (like Lokad), you don't have a say. If the prospect says that the process is an RFP, then so be it. I have repeatedly tried to convince prospects to stop paying consultants twice what it would cost them to do the setup of the supply chain solution they were looking for, but I have never managed to convince any company to give up on their RFP process. Thus, nowadays, we just go with the flow.

ttarabbia May 12, 2023 | flag | on: RFI, RFP and RFQ madness in supply chain

I like the analogy of “increased attack surface”, particularly as it increases your chances of being infected by a vague, but attractive, idea-virus-meme. Reminds me of Robert Greene in 48 Laws of Power on charlatanism - “…on the one hand the promise of something great and transformative, and on the other a total vagueness. This combination will stimulate all kinds of hazy dreams in your listeners who will make their own connections and see what they want to see”

In my experience it is quite common for the stated goal of an organization to be “improve XYZ business metrics with ABC type system” but the ulterior motive to be “make a defensible and risk-free decision to look like we’re progressing”
Is there a solution to this problem for someone inside an organization evaluating vendors? How about as a vendor?

vermorel May 11, 2023 | flag | on: Community notes for docs.lokad.com

We have just rolled out a community note system for the technical documentation.

Envision snippets are allowed:


```envision
// Text following a double-slash is a comment
a = 5
b = (a + 1) * 3 / 4
show scalar "Result will be 4.5" a1b1 with b // display a simple tile
```

But also mathematical expressions:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$

vermorel May 10, 2023 | flag | on: How SAP Failed The Supply Chain Leader

The article, by Lora Cecere, a notable market analyst in supply chain circles, has been taken down by Forbes.
It seems that Forbes is afraid of losing SAP as a client. So much for an independent press...

Update: my network tells me that a copy of the article can be found at:
https://pdfhost.io/v/lE65WObHk_How_SAP_Failed_The_Supply_Chain_Leader

vermorel Apr 26, 2023 | flag | on: 21st Century Trends in Supply Chain

Yes, exactly: the meaning of terms. Every company uses the terms product, order, and stock level, but those words rarely mean exactly the same thing from one company to the next.

ttarabbia Apr 25, 2023 | flag | on: 21st Century Trends in Supply Chain

When you say glossary - you mean between people to understand the meaning of terms? Or in the sense of lookup table for values in data?

vermorel Apr 25, 2023 | flag | on: Forecast Accuracy [pic]

Inaccurate forecasts can't be right for the company. This is pretty much self-evident. Thus, companies have been chasing better forecasts, leveraging varied metrics. Yet, while this game has been played relentlessly for the last 4 decades, nearly all companies have next-to-nothing to show for all those efforts.

The Lokad position is that the way those forecasting initiatives were framed, aka deterministic forecasts, were spelling their doom from Day 1.

vermorel Apr 25, 2023 | flag | on: 21st Century Trends in Supply Chain

Yes, indeed. Also, I am very much aligned with the paper's vision that "Simplicity is Hard". Stuff (patterns, organizations, processes, ...) can only become simple with adequate intellectual instruments (terminologies, concepts, paradigms). Unearthing those instruments is difficult.

Among companies operating complex supply chains, I have rarely seen anyone (outside Lokad) maintain glossaries. Yet, a glossary is probably one of the cheapest ways to eliminate some accidental complexity.

ttarabbia Apr 24, 2023 | flag | on: 21st Century Trends in Supply Chain

The section on "Conquering Complexity" immediately reminds me of [Out Of The Tar Pit](https://curtclifton.net/papers/MoseleyMarks06a.pdf)

ttarabbia Apr 19, 2023 | flag | on: Malleable Software

Agreed, the current paradigm with token limitations restricts the use cases on raw data, i.e. giving the LLM your entire hypercube to look for things.
However, if instead you were pointing it at the documentation for the 2-3 tools you're using, plus Excel, and asking it to tweak XYZ functionality... then the fuzziness/randomness is confined to the configuration/setup layer, which then drives a consistent and performant tool to generate results.

ttarabbia Apr 19, 2023 | flag | on: Supply Chain Normalcy Not In Sight
...Train teams to model variability and build a planning master data layer to understand layers of variability. (A planning master data layer measures and tracks shifts in lead times, conversion rates, and quality.....

This was the most insightful bit... many companies see the value in doing this; however, modelling "new" problems in a rigidly implemented system doesn't often lend itself to experimentation.

Are there any good examples of processes/software that "self-adjust" planning/modelling parameters? Seems like something that could easily lead to crazy and infeasible results if left alone...

vermorel Apr 19, 2023 | flag | on: Malleable Software

LLMs can certainly support a whole next-gen replacement for Tableau-like software (widely used for supply chain purposes), where the SQL queries are generated from prompts. I may have to revisit my Thin BI section at https://www.lokad.com/business-intelligence-bi a few years down the road.

However, system-wide consistency is a big unsolved challenge. LLMs have token limits. Within those limits, LLMs are frequently beyond-human for linguistic or patternistic tasks (lacking a better word). Beyond those limits, it becomes very fuzzy. Even OpenAI doesn't seem convinced of their own capacity to push those token limits further within the current LLM paradigm.

ttarabbia Apr 19, 2023 | flag | on: Malleable Software

A helpful allegory for today's software flexibility vs. ease-of-use tradeoff and how LLMs may lead to more extensible and malleable software for the end user.

To be quite honest it's the first time I've seen something that helps inform how we may use LLMs as a supply chain community in the context of spreadsheets and rigid tools.

"...LLM developers could go beyond that and update the application. When we give feedback about adding a new feature, our request wouldn’t get lost in an infinite queue. They would respond immediately, and we’d have some back and forth to get the feature implemented."

Yes, this part has been somewhat hastily written (my fault). At Lokad, we tend to alternate between the algebra of random variables (faster, more reliable) and the Monte Carlo approach (more expressive). Below is the typical way we approach this integrated demand over the lead time while producing a probabilistic forecast at the end (this is very much aligned with your "simulation" approach):


```envision
present = date(2021, 8, 1)
keep span date = [present .. date(2021, 10, 30)]

Day.Baseline = random.uniform(0.5 into Day, 1.5) // 'theta'
alpha = 0.3
level = 1.0 // initial level
minLevel = 0.1
dispersion = 2.0

L = 7 + poisson(5) // Reorder lead time + supply lead time

montecarlo 500 with
  h = random.ranvar(L)

  Day.Q = each Day scan date // minimal ISSM
    keep level
    mean = level * Day.Baseline
    deviate = random.negativebinomial(mean, dispersion)
    level = alpha * deviate / Day.Baseline + (1 - alpha) * level
    level = max(minLevel, level) // arbitrary, prevents "collapse" to zero
    return deviate

  s = sum(Day.Q) when (date - present <= h)
  sample d = ranvar(s)

show scalar "Raw integrated demand over the lead time" a4d6 with d
show scalar "Smoothed integrated demand over the lead time" a7d9 with smooth(d)
```

See also https://try.lokad.com/s/demand-over-leadtime-v1 if you want to try out the code.

vermorel Mar 28, 2023 | flag | on: Let's try Lokad

By the way, mathematical formulas are pretty-printed as well:

$$ \phi = \frac{1 + \sqrt{5}}{2} $$

vermorel Mar 28, 2023 | flag | on: Let's try Lokad

I have just updated Supply Chain News to pretty-print Envision scripts as well. Here is the first script:


```envision
montecarlo 1000 with // approximate π value
  x = random.uniform(-1, 1)
  y = random.uniform(-1, 1)
  inCircle = x^2 + y^2 < 1
  sample approxPi = avg(if inCircle then 4 else 0)
show scalar "π approximation" with approxPi // 3.22
```

A short summary of the second lecture in Joannes Vermorel's series on Quantitative Supply Chain. This constitutes a solid overview of his overarching supply chain vision.

A short summary of the first lecture in Joannes Vermorel's series on Quantitative Supply Chain.

lucianolis Mar 07, 2023 | flag | on: Etihad grows MRO capabilities

I saw them having activities in South America lately. Partnering with FADEA, the company that built the famous Pampa plane

A discussion with Jay Koganti, Vice President of Supply Chain at Estée Lauder’s Centre of Excellence

vermorel Jan 23, 2023 | flag | on: Architecture of Lokad

The predictive optimization of supply chain comes with unusual requirements. As a result, the usual software recipes for enterprise software aren't working too well. Thus, we had to diverge - quite substantially - from the mainstream path.

The 5 trends as listed by the author:

  • 88% of small business supply chains will use suppliers closer to home by next year
  • Small business supply chains are moving most or all suppliers closer to the U.S. faster than predicted
  • The strained economy and low inventory are top stressors
  • Software-based emerging tech is on the rise while hardware-based ones lag behind
  • 67% of SMB supply chains say their forecasting techniques were helpful in preventing excess inventory

As data has been a byproduct and at best a secondary concern for businesses since the first introduction of ERPs and other enterprise software, it should not come as a surprise that the vast majority of data analytics and digital transformations fail. GIGO hits you when you don't know what you are looking for. Decisions, not data, should be the primary focus when you start. Then it helps to narrow your focus to improving only the data that matter.

Very interesting article! I always like to recall that risk is defined by (probability of occurrence) x (impact on the business), and humans tend to underestimate high-impact, low-probability events and overestimate high-probability, low-impact events.

Very informative video! The development of digital twins is undoubtedly important. With all the benefits they bring, I think they will soon be almost a must-have for some businesses.

This problem is referred to as censored demand. Indeed, this is not the sales but the demand that is of interest to be forecast. Unfortunately, there is no such thing as historical demand, only historical sales that represent a loose approximation of the demand. When a product goes out of the assortment, due to stockout or otherwise, sales drop to zero, but demand (most likely) does not.

The old-school approach to addressing censored demand consists of iterating through the historical sales data and replacing the zero segments with a demand forecast. Unfortunately, this method is fraught with methodological issues; for one, building a forecast on top of another forecast is friable. Furthermore, in the case of products that are not sold for long periods (not just rare stockout events), say summer, forecasting a fictitious demand over those long periods is not entirely sensical.

The most commonly used technique at Lokad to deal with censored demand is loss masking, understood from a differentiable programming perspective. This technique is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/
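
As a minimal sketch of loss masking (the fields and values are made up, and a plain squared loss stands in for a full model, for brevity):

```envision
table T = with // made-up weekly sales, with a stockout flag
  [| as Qty, as SoldOut |]
  [| 3, 0 |]
  [| 0, 1 |] // stockout: sales at zero, demand unknown
  [| 5, 0 |]

// the masked (stockout) periods contribute nothing to the loss
autodiff T epochs: 500 learningRate: 0.01 with
  params level in [0..] auto(1, 0.1)
  return (1 - T.SoldOut) * (level - T.Qty)^2
```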

Hope it helps, Joannes

There are several questions to unpack here about seasonality. (1) Is seasonality best approached as a multiplicative factor? (2) Is seasonality best approached through a fixed-size vector reflecting those factors? (hence the "profile") (3) How to compute the values of those vectors?

Concerning (1), the result that Lokad has obtained at the M5 competition is a strong case for seasonality as a multiplicative factor:
https://tv.lokad.com/journal/2022/1/5/no1-at-the-sku-level-in-the-m5-forecasting-competition/ The literature provides alternative approaches (like additive factors); however, those don't seem to work nearly as well.

Concerning (2), the use of a fixed-size vector to reflect the seasonality (like a 52-week vector) has some limitations. For example, it struggles to capture patterns like an early summer. More generally, the vector approach does not work too well when the seasonal patterns are shifting, not in amplitude, but in time. The literature provides more elaborate approaches like dynamic time warping (DTW). However, DTW is complicated to implement. Nowadays, most machine learning researchers have moved toward deep learning. However, I am on the fence on this. While DTW is complicated, it has the benefit of having a clear intent model-wise (important for whiteboxing).
https://en.wikipedia.org/wiki/Dynamic_time_warping

Finally (3), the best approach that Lokad has found to compute those vector values is differentiable programming. It achieves either state-of-the-art results, or comes very close to the state of the art, with a tiny fraction of the problems (compute performance, blackbox, debuggability, stability) associated with alternative methods such as deep learning and gradient boosted trees. The method is detailed at:
https://tv.lokad.com/journal/2022/2/2/structured-predictive-modeling-for-supply-chain/

Hope it helps, Joannes

Patrice Fitzner, who contributed to the design of the Quai 30, a next-gen 21st-century logistical platform, explains the thinking that went into this 400m by 100m monster of automation.

Last-mile delivery services were attaining incredible valuations only a year ago. It seems the honeymoon phase is over and the reality of a challenging route to profitability for the overall business model is setting in.
The model in itself is not new at all and the service is incredibly desirable from a customer perspective; however, the question remains why the big supermarket giants have not implemented it themselves 10 years ago already - maybe the business model is not as amazing as believed?
The future will tell whether clients will be willing to pay the premiums that the remaining 'monopolists' will have to charge to become profitable.

Factorio https://www.factorio.com/ is the grandfather of factory simulation games, and it's centered around production chains, but the default play style is rather light in terms of supply chain (just create a factory that, if fed with raw materials for long enough, produces everything you need eventually). I recommend going for a train-based play style (especially against hostile aliens), as it tends to center a lot more around what you need to deliver, where, when, and in what quantities. The space exploration mod https://mods.factorio.com/mod/space-exploration is also great on those aspects.

Satisfactory https://www.satisfactorygame.com/ is also interesting for those who like more of a 3D feel.

My favorite is Dyson Sphere Program https://store.steampowered.com/app/1366540/Dyson_Sphere_Program/ as it forces you to set up interstellar supply chains: in Factorio and Satisfactory one tends to create a central base into which all raw inputs are fed, but in Dyson Sphere Program it's usually better to transport your intermediate goods to a planet with oceans of sulfuric acid that are needed for its processing, rather than shipping the sulfuric acid back to a central planet. Or, to move energy-hungry processing to a tidally-locked planet right next to the sun.

Transport Fever 2 https://www.transportfever2.com/ has excellent gameplay, with half of it being centered around industrial supply chains. The Industry Expanded mod is very good in this aspect https://steamcommunity.com/sharedfiles/filedetails/?id=1950013035 The Cities: Skylines game with the Industries expansion https://www.paradoxinteractive.com/games/cities-skylines/add-ons/cities-skylines-industries is also quite good in the same vein.

Banished https://store.steampowered.com/app/242920/Banished/ is also interesting in that you are expected to set up an entire medieval supply chain from scratch. It includes dealing with overfilled warehouses, random demand (weather for firewood, illness for medicine), investment (planting an orchard takes years to yield fruit), out-of-stock penalties (starvation, running out of tools which cuts productivity across the board), and so on.

Very nerdy Factorio rocks https://www.factorio.com/

My personal favorite remains https://www.ubisoft.com/en-gb/game/anno/1800.
The entire Anno series is outstanding for understanding the basic concepts of the raw material - semi-finished - finished product transition, plus lead times.
Would be keen to know whether there are other games that come to mind!