vermorel Dec 19, 2023 | flag | on: Workshop data

The playground has been fixed.

marinedup Dec 18, 2023 | flag | on: URL builders to use in dashboards

/// Returns its argument as-is. If the path is known at compile-time and points to /files, remembers that said file is referenced by this script.
[restricted]
map downloadUrl(fullpath: text) : text as "downloadurl(txt)"

/// Returns a URL for the file with the provided hash, to be downloaded under the provided name.
[restricted]
map downloadUrl(hash: text, name: text) : text as "downloadurl(txt,txt)"

/// Returns a URL for the folder at the provided path in /files, and (if known at compile-time) remembers that said folder is referenced by this script.
[restricted]
map filesUrl(path: text) : text as "filesurl(txt)"
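
For instance, the hash + name overload combines nicely with the {href: ..} StyleCode (mentioned elsewhere on this board) to render download links in a table tile. A minimal sketch, assuming a Files table that already exposes Hash and Name columns:

```envision
Files.Link = downloadUrl(Files.Hash, Files.Name) // URL serving the file under its display name
show table "Downloads" a1b4 with
  Files.Name
  Files.Link { href: #[Files.Link] } // render the column as clickable links
```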

MBk78 Dec 17, 2023 | flag | on: Workshop data

Thank you

vermorel Dec 14, 2023 | flag | on: Workshop data

Working on the issue. Sorry for the delay. Best regards, Joannes

MBk78 Dec 14, 2023 | flag | on: Workshop data

Hello, how can I download the data?
Even in the playground there is an error "Maximum table size is 1000000", so I can't see the output.

In a recent dialogue, Conor Doherty of Lokad conversed with Joannes Vermorel and Rinat Abdullin about generative AI’s impact on supply chains. Vermorel, Lokad’s CEO, and Abdullin, a technical consultant, discussed the evolution from time series forecasting to leveraging Large Language Models (LLMs) like ChatGPT. They explored LLMs’ potential to automate tasks, enhance productivity, and assist in data analysis without displacing jobs. While Vermorel remained cautious about LLMs in planning, both acknowledged their utility in composing solutions. The interview underscored the transformative role of AI in supply chain management and the importance of integrating LLMs with specialized tools.

ttarabbia Dec 04, 2023 | flag | on: On Learning in Ill-Structured Domains

"The more ill-structured the domain, the poorer the guidance for knowledge application that ‘top-down’ structures will generally provide. That is, the way abstract concepts (theories, general principles, etc.) should be used to facilitate understanding and to dictate action in naturally occurring cases becomes increasingly indeterminate in ill-structured domains."

manessi Dec 01, 2023 | flag | on: Private/Public Key

Shamelessly copy-pasting an ELI5 from Reddit on how private/public keys work. Nothing fancy once explained :)
https://www.reddit.com/r/explainlikeimfive/comments/1jvduu/eli5_how_does_publicprivate_key_encryption_work/
---
The basis is really simple:

Imagine you have a computer program that will allow you to encrypt a text file or digital document by using either one of two passwords, so that once you freely choose any one of the two passwords for encrypting, then decryption can only be performed by using the other password. That is: you can't both encrypt and decrypt by using the same password; once you use one of the two possible passwords for encryption, then you can only decrypt by using the other password you did not use for encrypting.

This is all the technical basis you need.

But why is this technical basis so useful?

If we use that previous technical basis intelligently, then we can get a nice security system working.

This is the trick: you keep one of the two passwords as a secret password only you know and no one else knows; this is called your "private key". And you let the other password be known by everybody; this will be your "public key".

Thanks to this, you can nicely perform two interesting tasks:

a) You can RECEIVE (not send) information in a secure manner: since everybody knows your public key, then they can encrypt any information they want to send to you by using your public key. Since the information has been encrypted by using your public key, then it can only be decrypted by using your private, secret key; and since you are the only one who knows your own private key, then you are the only person who will be able to decrypt the information that was sent to you. (If you want to SEND information to other people in a secure manner, then you'll have to know those people's respective public keys, and use these public keys for encrypting).

b) you can SEND information in such manner that you can absolutely prove that information was sent by YOU: if you want to send a certain information and you encrypt it by using your PRIVATE, SECRET key, then everybody will be able to decrypt and read that information, because the information will be decryptable by your public key and everybody knows your public key. So your information is not protected against reading, but, since it is decryptable by your public key, then it is a complete proof that the information was encrypted by your private, secret key. And since you are the only person who knows your own private, secret key, then it gets perfectly proven that the information was encrypted by you and no one else. This is why encrypting by using your own private key is also known as "digitally signing" the information you send.

You will find a safety stock calculation, with varying lead times, at:
https://www.lokad.com/calculate-safety-stocks-with-sales-forecasting/

The web page includes an illustrative Excel sheet to let you reproduce the calculation.

However, the bottom line is: safety stocks are a terrible model. See:
https://www.lokad.com/tv/2019/1/9/why-safety-stock-is-unsafe/

As a statistical answer, they are completely obsolete. Probabilistic forecasts must be favored instead.
https://www.lokad.com/probabilistic-forecasting-definition/

15 months of historical data is ok-ish; it's a bit tricky to assess seasonality with less than 2 years, but it can be done.

Hope it helps,
Joannes

Miceli_Baptiste Nov 23, 2023 | flag | on: substr(text,x (x>0))

When x is positive, the function returns the input text starting at the xth character. The example provided is somewhat ambiguous with an input text of 12 characters... In this script: https://try.lokad.com/s/substr-example?tab=Output I've added a character to the input text to remove the ambiguity.

BenoitBB Nov 23, 2023 | flag | on: Note on match syntax (indentation)

The match syntax must be indented with exactly 2 spaces.
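
A minimal sketch of the expected layout (the branch values are made up for illustration):

```envision
table T = with
  [| "A" as Category |]
  [| "B" |]
  [| "C" |]

// each branch below is indented by exactly 2 spaces
T.Tier = match T.Category with
  "A" -> "Top"
  "B" -> "Mid"
  .. -> "Other"

show table "Tiers" a1b2 with T.Category, T.Tier
```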

BenoitBB Nov 23, 2023 | flag | on: deleted post

Title says it all.

Miceli_Baptiste Nov 21, 2023 | flag | on: exponential limits

exp(x) cannot have x exceeding 87; otherwise the script breaks.
Note that exp(-1000) does not break, but the value isn't actually computed; we all know it is 0 ;)
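
A tiny check mirroring those limits (the second line is left commented out on purpose):

```envision
show scalar "Near the ceiling" a1b2 with exp(87) // ~6.1e37, still computed
// show scalar "Overflow" a1b2 with exp(88) // this one would break the script
show scalar "Underflow" a3b4 with exp(-1000) // displays 0, no error
```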

vermorel Nov 20, 2023 | flag | on: deleted post

Unfortunately, in supply chain, things cannot be done "a small piece" at a time. It just doesn't work. See
https://www.lokad.com/blog/2021/9/20/incrementalism-is-the-bane-of-supply-chains/

I would have very much preferred the answer to this question to be different, to have a nice incremental path that could be made available to all employees; it would have made the early life of Lokad much easier while competing against Big Vendors.

Then, don't underestimate what a supposedly "minor" employee can do. Apathy is one of the many diseases afflicting large companies. When nobody cares, the one person who genuinely cares ends up steering the ship. All it takes is one person willing to point out the obvious. The flaws of the "legacy" supply chain solutions are not subtle; they are glaring.

In MRO, it boils down to: uncertainty must be embraced and quantified, varying TATs matter as much as varying consumptions, etc. See an extensive review of the challenges that need to be covered at https://www.lokad.com/tv/2021/4/28/miami-a-supply-chain-persona-an-aviation-mro/

Forecasting is a means to an end, but just a means. Focusing on forecasting as a "stand-alone thingy" is wrong. This is the naked forecast antipattern; see https://www.lokad.com/antipattern-naked-forecasts/

For an overview on how to get a supply chain initiative organized, and launched, see https://www.lokad.com/tv/2022/7/6/getting-started-with-a-quantitative-supply-chain-initiative/

Hope it helps,

njoshuabradshaw Nov 20, 2023 | flag | on: deleted post

What are the steps to integrate probabilistic forecasting into the supply chain of an Aerospace MRO (i.e., similar to your work with Air France Industries), particularly when I'm a minor player (employee) handling it independently without any investment capital, but rather as part of my job responsibilities?

Additionally, as you have discussed in your YouTube videos and articles, is forecasting truly the answer, or should the focus be more on reengineering the supply chain or implementing other process modifications across different levels?

bperraudin Nov 17, 2023 | flag | on: List of Parameters for ActionRwD functions

I completely agree with what was said in the previous comment: stochastic optimization will definitely make it possible to tackle this issue (and more) the right way.
Meanwhile, here is a piece of code that you could use to tackle the order multiplier problem, following the logic I described in my previous comment:


///## ACTION REWARD FUNCTION
///### Conversion to account for customer Order Multipliers
Skus.CustomerOrderMultiplier = 2
Skus.StockOnHandInLots = floor(Skus.StockOnHand/Skus.CustomerOrderMultiplier) // could also use round(); business assumption to be made according to the problem treated
PO.OrderQtyInLots = floor(PO.OrderQty/Skus.CustomerOrderMultiplier)
CatalogPeriods.BaselineInLots = floor(CatalogPeriods.Baseline/Skus.CustomerOrderMultiplier)

///### Action reward call

Skus.WOOUncovDemandInLots, Skus.HoldingTime = actionrwd.reward(
  TimeIndex: CatalogPeriods.N
  Baseline: CatalogPeriods.BaselineInLots
  Dispersion: Skus.Dispersion /// assumes Dispersion is adapted to the Order Multiplier change; this should be handled in a previous script
  Alpha: 0.05
  StockOnHand: Skus.StockOnHandInLots
  ArrivalTime: dirac(PO.POTimeIndex)
  StockOnOrder: PO.OrderQtyInLots
  LeadTime: Skus.SLT
  StepOfReorder: Skus.RLT)

Skus.WOOUncovDemand = Skus.WOOUncovDemandInLots * Skus.CustomerOrderMultiplier
Skus.SellThrough = (1-cdf(Skus.WOOUncovDemand + 1)) * uniform.right(1)

Regarding the customer MOQ, provided that the customers tend to always order the MOQ or a few units above the MOQ, you could use the same logic as an approximation and treat the MOQ as an order multiplier. I can't offer a cleaner way to cope with this unfortunately, as it would require dedicating too much time to the problem.

s40racer Nov 17, 2023 | flag | on: List of Parameters for ActionRwD functions

Thank you.

For this upcoming stochastic optimizer, could you describe the input data and parameters required for the optimization to run?

vermorel Nov 17, 2023 | flag | on: List of Parameters for ActionRwD functions

Hello! We have been developing - for the past two years - a general-purpose stochastic optimizer. It has passed the prototype stage, and we have a short series of clients that are running their production on this new thingy. Stochastic optimization (aka optimization under a noisy loss) is exactly what you are looking for here. It will be replacing our old-school MOQ solver as well.

We are now moving forward with the development of the clean long-term version of this stochastic optimizer, but it won't become generally available before the end of 2024 (or so). Meanwhile, we can only offer ad-hoc heuristics. Sorry for the delay, it has been a really tough nut to crack.

s40racer Nov 17, 2023 | flag | on: List of Parameters for ActionRwD functions

Thank you. Your previous answer is also relevant - it helped me understand more about the actionrwd function as a whole.

Are you able to provide a coding example of how to overcome this (or how Lokad typically overcomes this limitation in real-world situations), for both the order multiplier and the MOQ? Not every product has an order multiplier or MOQ requirement.

And, similarly, if actionrwd is not the solution for this type of situation, how do you overcome it in general? Coding examples would be very helpful here as well.

Thank you.

bperraudin Nov 17, 2023 | flag | on: List of Parameters for ActionRwD functions

Ok, now I understand better; sorry about my first irrelevant answer.
Unfortunately, action reward is not designed for these use cases. It assumes under the hood that the demand follows a negative binomial distribution defined by the mean and dispersion given as inputs.
For order multipliers, you could work your way around this limitation by converting every input of action reward (stock available, stock on order, baseline) into numbers of multipliers, and then multiplying the outputs by the size of the multiplier. This is far from perfect, as you'll have to make rounding approximations during the conversions.
For MOQs, I'd say that action reward is simply not built for such use cases.

s40racer Nov 16, 2023 | flag | on: List of Parameters for ActionRwD functions

Thank you. I apologize if my earlier question was not clear to begin with.

To clarify, I was asking from the perspective of the MOQ that I place on my customers, or the case where the customers have an MOQ or order multipliers when they purchase this item from me (due to logistics constraints or economies of scale on the transportation cost). In other words, I do not impose an MOQ or order multiple on my customers; they have instituted this requirement because of transportation efficiency and constraints.

How do I take these factors into account when generating a forecast? Currently, the forecasts are generated using the actionrwd.demand function. However, there are no parameters to account for the MOQ or the order multiples, and the smoothed demand and the forecast are always badly under-fitted for products with these requirements.

I hope this is clear.

bperraudin Nov 16, 2023 | flag | on: Output of actionrwd function

There are 2 outputs for the function actionrwd.reward (see the documentation here: https://docs.lokad.com/reference/abc/actionrwd.reward/):

- Demand: it estimates the probability distribution of the demand that is not covered yet by existing stock and that happens within the coverage timespan. This distribution may have non-zero probabilities on negative values: this simply means that even if no order is placed, the entire demand might still be covered, with stock remaining at the end of the coverage timespan.
- Holding Time: estimates, for each extra unit that could be purchased, the average time it will stay in stock before being sold.

From your description, I assume you are talking about the Demand output. To complement the explanation in the documentation, you can also read this Demand output as the probability of needing x additional units in stock to satisfy the future demand. If x is negative, you get the probability of reaching the end of the timespan with abs(x) units still in stock without making any additional order.
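
To make that reading concrete, a sketch (assuming, if memory serves, that int() sums the probability mass of a ranvar over an inclusive range, and that Skus.Demand holds the ranvar returned by actionrwd.reward):

```envision
// probability that existing stock already covers the whole timespan, i.e. x <= 0
Skus.PCovered = int(Skus.Demand, -100000, 0) // -100000: crude lower bound on the support
show table "Coverage" a1b2 with Skus.Id, Skus.PCovered
```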

I hope this helps,

bperraudin Nov 16, 2023 | flag | on: List of Parameters for ActionRwD functions

Hello,

The exhaustive list of parameters is available in the documentation page of the function actionrwd.reward, in the section function signature: https://docs.lokad.com/reference/abc/actionrwd.reward/.

However, this will not help you to integrate MOQs or Order Multipliers. These constraints cannot and should not be treated using actionrwd. Indeed, the main output of actionrwd is the probability distribution of the customer demand that is not covered yet by existing stock (on hand or on order). This has no reason to be affected by MOQ/Multiplier constraints.
However, once the demand left to satisfy is obtained through actionrwd, an economic optimization must be performed to determine the best decision to take, given that demand and other parameters/constraints. It is in this optimization that the MOQ/Multiplier constraints must be taken into account.

If a product has 5 units of demand left to satisfy over the time period considered, but an MOQ of 50 units, the decision of whether or not you should actually purchase the MOQ depends entirely on economic factors. If the product is expensive and has a low margin, you might be reluctant to purchase it; if it is cheap and very profitable, you might want to do it. This possibly depends on other parameters as well, such as budget or storage limitations, which can be integrated into the economic optimization.

I hope this helps,

bperraudin Nov 13, 2023 | flag | on: Implement Forecast at Monthly level

Hello,

You'll find below an example of code that can help you generate a daily and a monthly forecast from an existing weekly forecast. I hope this will help.

The methodology I followed fits what was described in previous comments:
1. Compute the weight of each day of the week over the whole horizon considered, for each group of products.
2. Apply this weight to an already computed weekly baseline to get a daily baseline.
3. Aggregate at the month level.

This is only an example, to be adapted to your specific use case; here are the assumptions I made:
1. The weights might differ across seasonality groups: if not, the granularity can be changed, or the weights can simply be computed for the whole dataset. For categories with very few sales, this logic might overfit.
2. The horizon contains only full weeks, or is sufficiently big that having 1 extra occurrence of a given weekday is negligible.
3. The weights of the days are constant over the whole horizon. In particular, we completely neglect here the impact of events like Black Friday.


///Create necessary tables
table WeekDays = extend.range(7)
WeekDays.DayNum = WeekDays.N - 1 //DayNum between 0 and 6
table GroupsWeekDays = cross(Groups,WeekDays) //Groups being an existing table with 1 line per seasonality group

Sales.DayNum = Sales.Date - monday(Sales.Date)
GroupsWeekDays.DemandQty = sum(Sales.DeliveryQty) by [Items.SeasonalityGroup,Sales.DayNum] at [Groups.SeasonalityGroup,WeekDays.DayNum]
GroupsWeekDays.WeightDay = GroupsWeekDays.DemandQty /. sum(GroupsWeekDays.DemandQty) by GroupsWeekDays.SeasonalityGroup

///Compute daily forecast
table ItemsDay = cross(Items,Day)
Day.DayNum = Day.Date - monday(Day.Date)
ItemsDay.Baseline = ItemsWeek.Baseline * single(GroupsWeekDays.WeightDay) by [Groups.SeasonalityGroup,WeekDays.DayNum] at [Items.SeasonalityGroup,Day.DayNum]
ItemsDay.DemandQty = sum(Sales.DeliveryQty)

///Compute monthly forecast
table ItemsMonth = cross(Items,Month)
ItemsMonth.DemandQty = sum(Sales.DeliveryQty)
ItemsMonth.Baseline = sum(ItemsDay.Baseline) //mind partial months when analyzing the results

s40racer Nov 10, 2023 | flag | on: Implement Forecast at Monthly level

I can also use some guidance on how to change from a weekly to a daily implementation from a coding perspective.

s40racer Nov 10, 2023 | flag | on: Implement Forecast at Monthly level

Thank you.
Would you mind elaborating a bit more on the day-of-week multiplier concept you mentioned above?

Lokad has developed its own DSL, dedicated to the predictive optimization of supply chains, namely Envision: https://docs.lokad.com/

ArthurGau Nov 08, 2023 | flag | on: Lexicographic order

The order is:
0-9A-Za-z

which means that to test whether a string is numerical, you can also test whether it's < "A". (Strictly speaking, the comparison only guarantees that the first character is a digit, not that the whole string is numerical.)
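
A minimal illustration of the comparison:

```envision
sortsBeforeA = "2024" < "A" // true: the digit '2' sorts before 'A'
show scalar "Sorts before A?" a1b2 with sortsBeforeA
```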

vermorel Nov 05, 2023 | flag | on: Implement Forecast at Monthly level

Instead of going from weekly to monthly, I would, on the contrary, suggest going from weekly to daily, and then from daily to monthly. Keep the weekly base structure, and introduce day-of-week multipliers. This gives you a model at the daily level. Then turn this daily model into a monthly forecasting model. Indeed, having 4 or 5 weekends has a significant impact on any given month, and usually the most effective path to capture this pattern consists of operating from the daily level.

Hope it helps,

I am not overly convinced by the idea of 'agents' when it comes to system-wide optimization. Indeed, the optimization of a supply chain must be done at the system level. What is 'best' for a node (a site, a SKU, etc.) is not what is best for the whole. The 'agent' paradigm is certainly relevant for modeling purposes, but for optimization, I am not so sure.

Concerning evolution vs revolution, see 'Incrementalism is the bane of supply chains', https://www.lokad.com/blog/2021/9/20/incrementalism-is-the-bane-of-supply-chains/

Thank you for the detailed insights! It's always enlightening to hear from experts like Lokad. While I understand Lokad's active involvement and experimentation with LLMs, I'd like to share my perspective based on your points and my own observations.

Firstly, the limitation you mentioned regarding LLMs, namely their inability to learn after their initial training, is indeed a significant challenge. Their textual (and sometimes image) processing capabilities, though remarkable, may not suffice for the intricate nuances of supply chain transactional data. This is especially true when considering that such data comprises over 90% of pertinent supply chain information.

However, I envision generative AI working in tandem with supply chain teams, not replacing them. The role of a learning agent (at each node) could be used to assist these teams, enabling them to capture a more comprehensive representation of unconstrained demand and thereby enriching their baseline models. While the potential of AI in supply chains is vast, solely relying on this technology for business practices might be too early, given its experimental nature.

In the future, I believe the challenge won't be about replacing current practices with LLM-powered processes but merging both to create a hybrid model where technology complements human expertise. Just as eCommerce companies evolved from but differ vastly from mail-order companies of the 19th century, our future supply chain practices will likely be an evolution, not a replacement, of current methods.

Miceli_Baptiste Oct 27, 2023 | flag | on: Points limit in show scatter

Note that a show scatter will fail if you are trying to show more than 5,000 points.

Hello! While we haven't publicly communicated much on the case, Lokad has been very active on the LLM front over the last couple of months. We have also an interview with Rinat Adbullin, coming up on Lokad TV, discussing more broadly LLMs for enterprises.

LLMs are surprisingly powerful, but they have their own limitations. Future breakthroughs may happen, but chances are that whatever lifts some of those limitations will be something quite unlike the LLMs we have today.

The first prime limitation is that LLMs don't learn anything after the initial training (in GPT, the 'P' stands for 'pretrained'). They just perform text completions; think of it as a 1D time-series forecast where values have been replaced by words (tokens actually, aka sub-words). There are techniques to cope - somehow - with this limitation, but none of them is even close to being as good as the original LLM.

The second prime limitation is that LLMs deal with text only (possibly images too with multi-modal variants, but images are mostly irrelevant to supply chain purposes). Thus, LLMs cannot directly crunch transactional data, which represents more than 90% of the relevant information for a given supply chain.

Finally, it is a mistake to look at the supply chain of the future, powered by LLMs, as an extension of present-day practices. Just as eCommerce companies have very little in common with the mail-order companies that appeared in the 19th century, the same will - most likely - be true for those future practices.

acifonelli Oct 23, 2023 | flag | on: Deterministic LeadTime

In production - last check 04 July 2023 - there are:
- 180 matches of dirac used for variable assignment, like LeadTime = dirac(<number>) (and I am constraining the search only to LeadTime variables);
- 427 matches of direct use in an actionrwd.* function.
For a total of 607 matches. Looking for all the actionrwd.* calls, we have 894 matches. Assuming that each call is independent - i.e. is not reusing a LeadTime variable already defined - we are already at ~70% of the calls to actionrwd.* not requesting a real distribution to operate.

Repeating the same reasoning for poisson we have:
- 0 matches for variable assignment;
- 4 matches of direct use (1 in the Lokad Customer Demo, 1 in the Public Demo, and 2 in a client account).

The other distributions are not used as far as the Code Search allows me to check.

For further information on the topics covered in the video, consult Lokad's technology page
https://www.lokad.com/technology/

This is why a large software vendor cannot, by default, be deemed a "safer" option than a small vendor. In B2B software, the odds of the vendor going bankrupt are usually dwarfed by the odds of the vendor discontinuing the product. The chances that Microsoft would stop supporting a core offering (e.g., Excel / Word) within 2 decades are low, very low. However, the same odds cannot be applied to every single product pushed by Microsoft. Yet, when it comes to long-term support, Microsoft is one of the best vendors around (generally speaking).

marinedup Oct 10, 2023 | flag | on: Other functions generating URLs

Other useful functions generating URLs are not listed in this documentation:

sliceUrl(slice: ordinal) -> text, pure function
Produces a link to the specified slice, in the current dashboard

sliceUrl(slice: ordinal, tab: text) -> text, pure function
Produces a link to the specified slice & tab, in the current dashboard

dashUrl() -> text, pure function
Produces a URL towards the current project dashboard

dashUrl(tabSearch: text) -> text, pure function
Converts a tab name into a URL to be used as a link to a specific dashboard tab in the current project dashboard

dashUrl(project: number, tabSearch: text) -> text, pure function
Converts a project id and a tab name into a URL to be used as a link to a specific dashboard tab

dashUrl(project: number) -> text, pure function
Converts a project id into a URL to be used as a link to a dashboard

currentDashUrl() -> text, pure function
Produces a URL towards the current run's dashboard.

marinedup Oct 10, 2023 | flag | on: How to use an URL generated by sliceSearchUrl

"The returned text value that contains an URL can be rendered as a link through the StyleCode element {text: "link"}"
{text: "link"} is deprecated; {href: #(link)} or {href: #[T.Link]} should be used instead.
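
A minimal sketch of the non-deprecated form, assuming a table T with a hypothetical SliceName column holding the search keys:

```envision
T.Link = sliceSearchUrl(T.SliceName) // URL pointing at the matching slice
show table "Slices" a1b4 with
  T.SliceName
  T.Link { href: #[T.Link] } // rendered as clickable links
```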

marinedup Oct 10, 2023 | flag | on: sliceSearchUrl overloads

sliceSearchUrl(sliceSearch: text) 🡒 text, pure function
Converts an inspector name search key into a URL to be used as a link to a specific slice in the current project dashboard

sliceSearchUrl(project: number, sliceSearch: text) 🡒 text, pure function
Converts a project id and an inspector name search key into a URL to be used as a link to a specific dashboard slice

sliceSearchUrl(sliceSearch: text, tabSearch: text) 🡒 text, pure function
Converts an inspector name search key and a tab name into a URL to be used as a link to a specific dashboard slice and tab in the current project dashboard

sliceSearchUrl(project: number, sliceSearch: text, tabSearch: text) 🡒 text, pure function
Converts a project id, an inspector name search key and a tab name into a URL to be used as a link to a specific dashboard slice and tab

Conor Oct 10, 2023 | flag | on: ABC XYZ Analysis [Pic]

For Lokad's detailed analysis of the practice, see https://www.lokad.com/abc-xyz-analysis-inventory/

vermorel Oct 09, 2023 | flag | on: Unicity of ranvar after transform

The function transform should be understood from the perspective of the divisibility of random variables, see https://en.wikipedia.org/wiki/Infinite_divisibility_(probability)

However, just like not all matrices can be inverted, not all random variables can be divided. Thus, Lokad adopts a pseudo-division approximate approach which is reminiscent (in spirit) of the pseudo-inverse of matrices. This technique depends on the chosen optimization criteria, and indeed, in this regard, although transform does return a "unique" result, alternative function implementations could be provided as well.

vermorel Oct 09, 2023 | flag | on: Cross Entropy Loss Understanding

Cross-entropy is merely a variant of the likelihood in probability theory. Cross-entropy works on any probability distribution as long as a density function is available. See for example https://docs.lokad.com/reference/jkl/loglikelihood.negativebinomial/

If you can produce a parametric density distribution, then, putting pathological situations aside, you can regress it through differentiable programming. See fleshed out examples at https://www.lokad.com/tv/2023/1/11/lead-time-forecasting/
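
To make the connection explicit: for observations $x_1, \dots, x_N$ and a parametric density $q_\theta$, the cross-entropy against the empirical distribution reduces to the averaged negative log-likelihood,

$$H(p_{\text{emp}}, q_\theta) = -\frac{1}{N}\sum_{i=1}^{N} \log q_\theta(x_i)$$

so minimizing the cross-entropy and maximizing the likelihood of the observed data are one and the same optimization.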

In the article, it is mentioned that Lokad collected empirical data which supports the claim that Cross Entropy is usually the most efficient metric to optimize, rather than MSE, MAPE, CRPS, etc. Is it possible to view that data?

No, unfortunately for two major reasons.

First, Lokad has strict NDAs in place with all our client companies. We do not share anything, not even derivative data, without the consent of all the parties involved.

Second, this claim should be understood from the perspective of the experimental optimization paradigm, which is (most likely) not what you think. See https://www.lokad.com/tv/2021/3/3/experimental-optimization/

Hope it helps,
Joannes

Miceli_Baptiste Oct 04, 2023 | flag | on: Difference with extend.range

Side note on this function: if you extend.split a line that does not contain any of the S.Separators, you will still get the line in the resulting table. This is not the same behavior as extend.range, for instance.
Script to illustrate this case: https://try.lokad.com/s/extend.split

vermorel Oct 01, 2023 | flag | on: Lag Forecasting

I have a few tangential remarks, but I firmly believe this is where you should start.

First, what is the problem that you are trying to solve? Here, I see you struggling with the concept of "lag", but what you are trying to achieve is unclear. See also https://www.lokad.com/blog/2019/6/3/fall-in-love-with-the-problem-not-the-solution/

Second, put aside Excel entirely for now. It is hindering, not helping, your journey toward a proper understanding. You must be able to reason about your supply chain problem / challenge without Excel; Excel is a technicality.

Third, read your own question aloud. If you struggle to read your own prose, then probably, it needs to be rewritten. Too frequently, I realize, upon reading my own draft that the answer was in front of me once the question is properly (re)phrased.

Back to your question / statement, it seems you are confusing / conflating two distinct concepts:

  • The forecasting horizon
  • The lead times (production / dispatch / replenishment)

Then, we have also the lag which is a mathematical concept akin to time-series translation.

Any forecasting process is horizon-dependent, and no matter how you approach the accuracy, the accuracy will also be horizon-dependent. The duration between the time of cut-off and the time of the forecast is frequently referred to as the lag because, in order to backtest, you will be adding "lag" to your time-series.

Any supply chain decision takes time to come to pass, i.e. there is a lead time involved. Again, it is possible to add "lag" to your time-series to reflect those various delays.

Lagging (aka time-series shift, time-series translation) is just a technicality to factor any kind of delay.

Hope it helps.

ramanathanl Oct 01, 2023 | flag | on: Lag Forecasting

Link for the Excel file
Copy and paste the entire link in a new window (do not click directly on the link as it does not seem to redirect correctly)

https://1drv.ms/x/s!AmdAMe2CGp70kgXFYSnCr0-SHoEV?e=prEmbD

remi-quentin_92 Sep 21, 2023 | flag | on: Values of alpha parameter

The value of alpha in this example is very high. I would suggest using a value close to 0.05 (depending on how correlated you want your sales to be).

ToLok Sep 21, 2023 | flag | on: Unicity of ranvar after transform

Formally, $P[X=n/a]$ is not a random variable but a scalar, and the corresponding ranvar is not unique.
Could we add more details about how the ranvar returned by transform() is chosen?
A graphical example might also be a nice addition.
Thanks!

Miceli_Baptiste Sep 19, 2023 | flag | on: Simple use case of the escape function

A very common use case is searching for hidden characters in any string (most commonly in Product references).
This script: https://try.lokad.com/s/hiddencharacters shows how we can detect special characters in order to fix any corrupted text.

It is important to note that the upload read at the beginning of the script:

read upload "myEditable" as myEditable with ..

and the place where the editable is defined:

editable: "myEditable"

are case sensitive. So if editable: "myeditable" is written (lowercase `e` instead of uppercase `E`), you will not get an error message, but your values will not be saved correctly when updating the table and running it from the dashboard. The two names need to match exactly, character for character.

Effective MRO (maintenance, repair and overhaul) requires meticulous management of up to several million parts per plane, where any unavailability can result in costly aircraft-on-ground (AOG) events. Traditional solutions to manage this complexity involve implementing safety stock formulas or maintaining excessive inventory, both of which have limitations and can be financially untenable. Lokad, through a probabilistic forecasting approach, focuses on forecasting the failure or repair needs of every individual part across the fleet and assessing the immediate and downstream financial impact of potential AOG events. This approach can even lead to seemingly counter-intuitive decisions, such as not stocking certain parts and instead paying a premium during actual need, which may, paradoxically, be more cost-effective than maintaining surplus inventory. Furthermore, Lokad’s approach automates these decision-making processes, reducing squandered time and bandwidth and increasing operational efficiency.

Miceli_Baptiste Sep 07, 2023 | flag | on: Ranvar representation

Ranvars have buckets that spread over multiple values.
The first such bucket is the 65th (meaning that the probabilities for 65 and 66 are always the same in a ranvar), so dirac(65) actually spreads over two values (65 and 66).
We again have 64 buckets with 2 values each, then 64 buckets with four values, etc., so the thresholds are: 64, 196, 452, … (each one being of the form $\sum_{0..n}(64*2^n)$)

Example script: https://try.lokad.com/6rk5wgpaf4mp0?tab=Output

Miceli_Baptiste Aug 30, 2023 | flag | on: What happens in case of equality

In case of ties on T.a, the returned T.b value is the first value encountered.
Indeed, argmax is a process function scanning the table in its default order, and it will return different values in case of equality for two equivalent tables ordered in different ways.

This script https://try.lokad.com/5c15t7ajn1j38?tab=Code illustrates this handling of equality and the usage of the function, and highlights the importance of the order with the Hat.

Conor Aug 23, 2023 | flag | on: Suppler Analysis through Envision (Workshop #1)

A free public tutorial on how to use Envision (Lokad's DSL) to analyze retail suppliers.

Yes, just use text interpolation to insert your text values. See below:


table T = with 
  [| date(2021, 2, 1) as D |]
  [| date(2022, 3, 1) |]
  [| date(2023, 4, 1) |]

maxy = isoyear(max(T.D))

show table "My tile tile with \{maxy}" a1b3 with
  T.D as "My column header with \{maxy}"
  random.integer(10 into T) as "Random" // dummy

On the playground https://try.lokad.com/s/ad-hoc-labels-in-table-tile

Does the AI use a time series? Lol

As is common for the buzzword of the year in supply chain: a lot of noise, but very little substance.

jamalsan Aug 17, 2023 | flag | on: Lag Forecasting

My two cents: in a classical setting, manufacturing would have a frozen horizon period and use the net demand + stock policy to define its procurement and production at T0. Additionally, you would have sourced more raw material than your short-term demand (again, safety stock in its classical sense + lot quantity from the tier-1 supplier).
In each cycle, the base forecast is converted into net demand for the next node (your excess material / existing stock would be subtracted from the forecast).

ramanathanl Aug 14, 2023 | flag | on: Lag Forecasting

I have a few doubts regarding the concept of "Lags" in forecasting.
Let T0, T1, T2... be the time periods, with T0 being the current time period. "Row 2" in the attached Excel gives the forecast generated in time period T0 for the next month onwards, T1, T2...

After time period T0 is over and we reach time period T1, the forecast is again generated for time periods T2, T3, and so on. "Row 3" in Excel gives us this.

The "Actual" sales observed in each time period are given by "Row 8", highlighted in Green.

"Lag 1" signifies the forecast for the next immediate Time period. So forecast generated in "T0" for "T1"; forecast generated in "T1" for "T2" and so on. The same is highlighted in a shade of yellow and the successive snapshots are in "Row 10".

"Lag 2" signifies the forecast for 2 Time periods from now. So forecast generated in "T0" for "T2"... and the successive snapshots are in "Row 11" highlighted in light blue.

Likewise for "Lag 3" and "Lag 4".

Let us consider a company, and let us assume "Lag 4" is used for the procurement of Raw Materials.
"Lag 3" is used for Manufacturing.
"Lag 2" is used for dispatching to the DCs.
"Lag 1" is used for replenishing the stores.

So if we are in "T0", Lag 4 forecast = 420 units, and we will procure raw material worth this.
After 1 time period elapses, we are in "T1" and we would manufacture for "410" forecast for the time period "T4" (Lag3). (What would happen to the 10 units worth of Raw Material that will not be manufactured?)

When we come to T2, we will have to dispatch 500 (Lag2), so if we only made 410 in the previous step, how do we get the extra 90 units?

When we come to T3, we have to send 430 (Lag1) to stores. If we got 500 from the previous step what happens to the 70 units? If we only got 410 (as Lag3 was 410 and we assume we manufacture and send the same to the DCs), we still fall short by 20 units.

My question is at every step the forecast for a particular time period ("T4") changes whenever we move from "T0" to "T1", "T1" to "T2". So where do we get the additional units from in each stage if forecast at say Lag2 (500)> Lag3 (410) or conversely what happens to excess material if "Lag 4(420) > Lag3 (410)"

For each lag we have,

Error = Forecast − Actuals
Accuracy = 1 − |Error| / Actuals

The same has been computed in the Excel file. Please let me know if my understanding is correct.

vermorel Aug 14, 2023 | flag | on: Display Data by Year

Envision has a today() function, see


show scalar "Today" a1b2 with today()

table X = with 
  [| today() as today |]

show table "X" a3b4 with X.today

See https://try.lokad.com/s/today-sample

In your example above, DV.today is not hard-coded but most likely loaded from the data. It's a regular variable, not the standard function today().

Hope it helps,
Joannes

ttarabbia Aug 11, 2023 | flag | on: Spilling to Disk in .NET [video]

Great talk - the in-memory approach makes more sense when you have a lot of global dependencies. I would imagine you get some thrashing behavior in cases where you spill the "wrong" thing.

David_BH Aug 04, 2023 | flag | on: ExcelFormat currency change

If you need your column to be in € when the user downloads the file as an Excel sheet, you can replace
excelformat: "#,##0.00\ [$₽-419]" with
excelformat: "#,##0.00\ [$€-407]"
And for other currencies (see the sketch below for usage):
$ => [$$-409]
¥ => [$¥-804]
₽ => [$₽-419]
£ => [$£-809]
₺ => [$₺-41F]
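
Applied to a (hypothetical) table T within a tile, the StyleCode property goes on the relevant column; a minimal sketch:

```envision
show table "Export" a1c4 with
  T.Ref
  T.Amount { excelformat: "#,##0.00\ [$€-407]" } // € formatting in the downloaded Excel file
```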

ArthurGau Aug 02, 2023 | flag | on: Support for trimming in dates and numbers

Thanks a lot for your contribution arkadir!
There's a slight mistake: the date format specifier should appear before the alias of the table. So the line should be this instead:

read "/example.csv" date: "yyyy-MM-dd*" as T with

s40racer Aug 01, 2023 | flag | on: Forecast Analysis - Forecast Quality

Now I encounter another issue. The code below follows what I posted initially.
```envision
// ///Export

quantileLow1 = 0.3
quantileLow2 = 0.05
quantileHigh1 = 0.7
quantileHigh2 = 0.95

ItemsWeek.One = dirac(1)
ItemsWeek.Demand = dirac(0)

where ItemsWeek.FutureWeekRank > 0
  ItemsWeek.Demand = actionrwd.segment(
    TimeIndex: ItemsWeek.FutureWeekRank
    BaseLine: ItemsWeek.Baseline
    Dispersion: Items.Dispersion
    Alpha: 0.05
    Start: dirac(ItemsWeek.FutureWeekRank - 1)
    Duration: ItemsWeek.One
    Samples: 1500)

// ////BackTest Demand

keep where min(ItemsWeek.Baseline) when (ItemsWeek.Baseline > 0) by Items.Sku >= 1

ItemsWeek.One = dirac(1)
ItemsWeek.BacktestForecastWeekRank = 0
where ItemsWeek.IsPast
  ItemsWeek.BacktestForecastWeekRank = rank() by Items.Sku scan -ItemsWeek.Monday

keep where ItemsWeek.BacktestForecastWeekRank >0 and ItemsWeek.BacktestForecastWeekRank < 371

where ItemsWeek.BacktestForecastWeekRank > 0
  ItemsWeek.BackTestDemand = actionrwd.segment(
    TimeIndex: ItemsWeek.BacktestForecastWeekRank
    BaseLine: ItemsWeek.Baseline
    Dispersion: Items.Dispersion
    Alpha: 0.05
    Start: dirac(ItemsWeek.BacktestForecastWeekRank - 1)
    Duration: ItemsWeek.One
    Samples: 1500)
```
I have no issue with the forward-looking forecast. I do, however, have an issue with the backward forecast test, specifically with BacktestForecastWeekRank. It grows to 790 days, which is greater than what actionrwd can allow (365 days). The data set I have goes back to 2018. Would this be the cause?

s40racer Aug 01, 2023 | flag | on: Forecast Analysis - Forecast Quality

Thank you. I resolved the issue above by using
```envision
keep where Items.Sku in ForecastProduit.Sku
```
To make sure the SKUs in the Items table match the SKUs in the ForecastProduit table.

vermorel Jul 29, 2023 | flag | on: Forecast Analysis - Forecast Quality

I suspect it's the behavior of the same aggregator when facing an empty set, which defaults to zero; see my snippet below:


table Orders = with // hard-coding a table
  [| as Sku, as Date          , as Qty, as Price |] // headers
  [| "a",    date(2020, 1, 17), 5     , 1.5      |]
  [| "b",    date(2020, 2, 5) , 3     , 7.0      |]
  [| "b",    date(2020, 2, 7) , 1     , 2.0      |]
  [| "c",    date(2020, 2, 15), 7     , 5.7      |]

where Orders.Sku == "foo"
  x = same(Orders.Price) // empty set, defaults to zero
  y = same(Orders.Price) default 42 // forcing the default

show summary "same() behavior" a1b2 with
  x as "without default" // 0
  y as "with default"    // 42

Try it at https://try.lokad.com/s/same-defaults-to-zero

Hope it helps.

s40racer Jul 28, 2023 | flag | on: Forecast Analysis - Forecast Quality

I did an output table to see the values in the Items and ItemsWeek tables:


today = max(Sales.Date)
todayForecast = monday(today) + 7

Items.Amount365 = sum(Sales.LokadNetAmount) when (Date >= today - 365)
Items.Q365 = sum(Sales.DeliveryQty) when (Date >= today - 365)
Items.DisplayRank = rank()  scan Items.Q365

table ItemsWeek = cross(Items, Week)
ItemsWeek.Monday = monday(ItemsWeek.Week)
ItemsWeek.IsPast = single(ForecastProduit.IsPast) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.Baseline = single(ForecastProduit.Baseline) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.DemandQty = single(ForecastProduit.DemandQty) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.SmoothedDemandQty = single(ForecastProduit.SmoothedDemandQty) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
ItemsWeek.FutureWeekRank = single(ForecastProduit.FutureWeekRank) by [ForecastProduit.Sku,ForecastProduit.Date] at [Items.Sku,ItemsWeek.Monday]
Items.Dispersion = same(ForecastProduit.Dispersion)

show table "items" with
  today
  todayForecast
  Items.Amount365 
  Items.Q365 
  Items.DisplayRank 
  ItemsWeek.Monday 
  ItemsWeek.IsPast 
  ItemsWeek.Baseline 
  ItemsWeek.DemandQty 
  ItemsWeek.SmoothedDemandQty 
  ItemsWeek.FutureWeekRank 
  Items.Dispersion 

show table "forecastproductit" with
  ForecastProduit.date
  ForecastProduit.Sku
  ForecastProduit.DemandQty
  ForecastProduit.Baseline
  ForecastProduit.Dispersion

and confirmed that there is quite a bit of data with a dispersion value of 0, but this is not the case in the ForecastProduit table (as verified from the code output above). Any suggestions on what may cause the dispersion value to become 0?

The dispersion of actionrwd.* is controlled by Dispersion:. At line 13 in your script, I see:


Items.Dispersion = max(Items.AvgErrorRatio/2, 1)

This line implies that if there is 1 item (and only 1) that happens to have a super-large value, then it will be applied to all items. This seems to be the root cause behind the high dispersion values that you are observing.

In particular,


ItemsWeek.RatioOfError = if ItemsWeek.Baseline != 0  then (ItemsWeek.Baseline - ItemsWeek.DemandQty) ^ 2 /. ItemsWeek.Baseline else 0

Above, ItemsWeek.RatioOfError can get very, very large. If the baseline is small, like 0.01, and the demand qty is 1, then this value can be 100+.

Thus, my recommendations would be:

  • sanitize your ratio of error
  • don't use a max for the dispersion

Hope it helps.

Remark: I have edited your posts to add the Envision code formatting syntax, https://news.lokad.com/static/formatting

Envision is deterministic. You should not be able to re-run the same code twice over the same data and get different results.

Then, there is pseudo-randomness involved in functions like actionrwd. The seeding tends to be quite dependent on the exact fine print of the code. If you change filters, for example, you are most likely going to end up with different results.

Thus, even a seemingly "minor" code change can lead to a re-seeding behavior.

As a rule of thumb, if the logic breaks due to re-seeding, then the logic is friable and must be adjusted so that its validity does not depend on being lucky during the seeding of the random generators.

Continuing from the previous comment - for the same SKU, the values for SeasonalityModel, Profile1, and Level changed between two runs on different days. I am unsure what caused the change in these values - the input data remained the same.

Code before dispersion:


ItemsWeek.ItemLife =  1
ItemsWeek.CumSumMinusOneExt = 0
where ItemsWeek.Monday >= firstDate
  ItemsWeek.CumSumMinusOneExt = (sum(ItemsWeek.ItemLife) by ItemsWeek.Sku scan ItemsWeek.Week) - 1
ItemsWeek.CumSumMinusOneExtMonth = ceiling(ItemsWeek.CumSumMinusOneExt / 8)
ItemsWeek.WeekNum = rank() by Items.Sku scan -monday(Week)
nbWeeks = same(ItemsWeek.WeekNum) when(ItemsWeek.Week == week(today()))
ItemsWeek.ItemLifeWeight = 0.3 + 1.2*(ItemsWeek.WeekNum/nbWeeks)^(1/3) 
ItemsWeek.IsCache = ItemsWeek.Monday >= firstDate and ItemsWeek.Monday < today
ItemsWeek.Cache = if ItemsWeek.IsCache then 1 else 0
expect table Items max 30000
expect table ItemsWeek max 5m
table YearWeek[YearWeek] = by ((Week.Week - week(firstDate)) mod 52)
Items.SeasonalityGroup = Items.Category
table Groups[SeasonalityGroup] = by Items.SeasonalityGroup
table SeasonYW max 1m = cross(Groups, YearWeek)
Items.Level = avg(ItemsWeek.DemandQty) when(ItemsWeek.Monday >= today - 365 and ItemsWeek.Monday < today)
Items.Level = if Items.Level == 0 then -10 else
              if log(Items.Level) < -10 then - 10 else
              if log(Items.Level) > 10 then 10 else
              log(Items.Level)
maxEpochs = 1000
autodiff Items epochs:maxEpochs learningRate:0.01 with
  params Items.Affinity1 in [0..] auto(0.5, 0.166)
  params Items.Affinity2 in [0..] auto(0.5, 0.166)
  params Items.Level in [-10..10]
  params Items.LevelShift in [-0.5..0.5] auto(0, 0)
  params SeasonYW.Profile1  in [0..1] auto(0.5, 0.1)
  params SeasonYW.Profile2  in [0..1] auto(0.5, 0.1)
  SumAffinity =
    Items.Affinity1 +Items.Affinity2
  YearWeek.SeasonalityModel = SeasonYW.Profile1  * Items.Affinity1 +SeasonYW.Profile2  * Items.Affinity2
  Week.LinearTrend = ItemsWeek.Cache + (ItemsWeek.CumSumMinusOneExtMonth * ItemsWeek.Cache * Items.LevelShift / 10)
  Week.Baseline = exp(Items.Level) * YearWeek.SeasonalityModel * ItemsWeek.Cache *  Week.LinearTrend
  Week.Coeff =  ItemsWeek.ItemLifeWeight
  Week.DeltaSquare = (Week.Baseline - ItemsWeek.SmoothedDemandQty) ^ 2

  Sum = sum(Week.Coeff * Week.DeltaSquare) / 10000
  SumPowAffinity = (Items.Affinity1 ^2 +
                    Items.Affinity2 ^2 ) /\
                    (SumAffinity ^2)
  return ( \
          // Core Loss Function
          (1 + Sum) / (SumPowAffinity))
table ItemsYW = cross(Items, YearWeek)
ItemsYW.SeasonalityGroup = Items.SeasonalityGroup
ItemsWeek.YearWeek = Week.YearWeek
ItemsYW.Profile1 = SeasonYW.Profile1
ItemsYW.Profile2 = SeasonYW.Profile2
ItemsYW.SeasonalityModel = ItemsYW.Profile1  * Items.Affinity1 +
                            ItemsYW.Profile2  * Items.Affinity2
ItemsWeek.LinearTrend =  max(0, 1 + (ItemsWeek.CumSumMinusOneExtMonth * Items.LevelShift/ 10))
ItemsWeek.Baseline = exp(Items.Level) * ItemsYW.SeasonalityModel * ItemsWeek.LinearTrend

vermorel Jul 20, 2023 | flag | on: deleted post

Please try to ask self-contained questions. Without context, those questions are a bit cryptic to the community.

You can share code and/or links to the Envision playground. Think of this board as Stack Overflow, but for supply chain.

Cheers,

ToLok Jul 20, 2023 | flag | on: Forecast Analysis Performance Measures

Hello s40racer,

The forecast cockpit is evaluating the accuracy of the quantile 95 with respect to the past sales. In other words, it is measuring the percentage of time that the sales were over the quantile 95.
In a perfect forecast, where we have the exact distribution of demand, this percentage should be equal to 5%: 95% of the time, the sales should be under the quantile 95, and 5% of the time the sales should be over it.

In the example 11635178 - (Above: 9.62% - At: 0% - Below: 90.38%), it means that for the Reference 11635178, 9.62% of the time the sales were above the quantile 95 and 90.38% of the time they were below. In particular, this means that the forecast is slightly underestimating the demand for this specific Ref, as we actually have 4.62% more weeks with sales over the quantile 95 than expected.
It is completely normal to have small disparities such as this. If we didn't, we'd probably be overfitting the data.

Regarding the scope of the overall forecast sanity label, it is indeed a (weighted) average concerning only the Refs (not SKUs) in the Forecast Sanity table. In detail, it only looks at history dating back at least 1 year; hence the items that are more recent are not in the analysis.
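
For intuition, a sketch of the kind of measurement involved (hypothetical names; assuming ItemsWeek.Demand is the ranvar weekly forecast and ItemsWeek.Sold the observed sales):

```envision
ItemsWeek.Q95 = quantile(ItemsWeek.Demand, 0.95)
where ItemsWeek.IsPast
  Items.PctAbove = avg(if ItemsWeek.Sold > ItemsWeek.Q95 then 1 else 0) // share of weeks above Q95
show table "Q95 monitoring" a1b4 with Items.Ref, Items.PctAbove
```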

Hope it helps!

Conor Jul 18, 2023 | flag | on: ABC Analysis [pic]

Hopefully, the final nail in the coffin. We've already covered ABC (and ABC XYZ) in print and video, so we consider this matter put to rest!

Print:
https://www.lokad.com/abc-analysis-(inventory)-definition
https://www.lokad.com/abc-xyz-analysis-inventory

Video:
https://tv.lokad.com/journal/2018/9/12/abc-analysis/
https://tv.lokad.com/journal/2023/6/14/analyzing-abc-xyz/

In this interview recorded onsite at a Celio store in Rosny-sous-Bois, Joannes Vermorel and David Teboul (Managing Director of Operations at Celio) discuss the resurgence of Celio following the challenges of 2020-2021. David highlights the importance of a "normal" customer-focused approach in transforming the brand. Lokad supported this transformation by assisting in optimizing the supply chain to better cater to a diverse range of stores and offers. Despite increasing complexity and the rise of online commerce, David emphasizes the need for agility and the critical role of physical stores for Celio, while striving to understand and meet customer needs through various touchpoints.

manessi Jul 12, 2023 | flag | on: Editables: Runflow and IDE behavior

It is crucial to note that editables and the uploads tied to these will only be modified by a dashboard interaction followed by a "Start Run" from said dashboard.
A script has no control over what inputs it will receive when invoked from Runflow, from the IDE, or from the list of projects (basically, anywhere except from the dashboard). It will instead receive the same inputs as the previous run, unless manually overridden (through Runflow options, the “clear uploads” of the Run Details, or setting up dedicated inputs in the IDE).

manessi Jul 11, 2023 | flag | on: Reset/Clear uploaded file

If you need to reset an uploaded file or clear it altogether, the show upload can be tweaked into

show upload "Please upload File 1" editable:"upload1" with Hash, Name

The hash should be a 32-character hexadecimal hash, such as the one obtained from Files.Hash, and the name should be a valid filename (no forbidden characters); more importantly, it should have the proper extension so that the file can be read.

If both the hash and the name are "", then that particular line is ignored (meaning, show upload "MyFile" with "", "" will clear the tile).

vermorel Jul 11, 2023 | flag | on: S&OP [pic]

S&OP is only ever touted as a "grand success" by consultants who directly profit from the massive overhead.

In contrast, I have met with 200+ supply chain directors in 15 years. I have witnessed several dozens of S&OP processes in +1B companies. I have never seen one of those processes be anything else than a huge bureaucratic nightmare.

I politely, but firmly, disagree with the statement that *a* process is better than no process at all. This is a fallacy. There is no grand requirement written in the sky that any of the things that S&OP does have to be done at all.

Hello,

I had a look at your code.
First, I created a Sku table that you can find in your CustomerName/clean/Sku.ion file. We will use this table as the item table, as you want to compute things at the Sku level and not the Item level.
For the PurchaseOrders table, we want to do exactly the same thing, meaning create a Sku vector that is "MaterialSID x Location". The thing is that there is no location column in the PurchaseOrders table indicating where the goods are received.

Once we have it, we will simply create a Sku vector in the PurchaseOrders table and then use the primary dimension [Sku] as the join between the two tables, Sku and PurchaseOrders.

Best regards

Also, instead of using by .. at everywhere, you could declare Suppliers as upstream of Items. This will remove the need for the by .. at option entirely. I am giving an example of the relevant syntax at: https://news.lokad.com/posts/647

It is possible to declare a tuple as the primary dimension of a table in a read block through the keyword as:


read "/suppliers.csv" as Suppliers [(Supplier, Location) as myPrimary] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

A more complete example:


read "/skus.csv" as Skus with
  Id : text
  Supplier : text
  Location : text

read "/suppliers.csv" as Suppliers [(Supplier, Location) as sulo] with
  Supplier : text
  Location : text
  LeadTimeInDays : number

expect Skus.sulo = (Skus.Supplier, Skus.Location)

Skus.LeadTimeInDays = Suppliers.LeadTimeInDays

dumay Jul 10, 2023 | flag | on: Prefer "scan" to "sort"

Envision's automatic hints recommend using "scan" rather than "sort" with this function.

ttarabbia Jul 07, 2023 | flag | on: On Time-Based Competition - George Stalk Jr.

It seems to me that supply chain can very easily become the enabler of, or barrier to, competing on time. He mentions an interesting example about optimizing for full truck loads and the effects on the business as a whole.

It is possible to have sanity checks in user-defined functions and throw an error if the check is not passed.
Cf. https://docs.lokad.com/reference/abc/assertfail/
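
A minimal sketch of the pattern (hypothetical function; assertFail() aborts the run with the provided message):

```envision
def pure safeRatio(num: number, denom: number) with
  if denom == 0
    assertFail("safeRatio: the denominator is zero")
  return num / denom
```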

Thank you ToLok.

Are you able to modify the code or give a more explicit example of how to implement the code at the SKU level? From a data standpoint, I assume the following fields need to exist in the items, PO, and vendor tables in order to implement the SKU-level code: item #, destination location, and supplier ID?

Currently, the partnering data has not been updated to such a structure. Only the Items table (Item Master) has the item #, supplier ID, and destination location. If the data structure noted above is needed to implement the SKU-level code, I can make sure this is done.

Thank you.

ToLok Jul 06, 2023 | flag | on: Supplier Leadtime Forecast and Supplier Analysis

Hello s40racer,

Indeed, if you use Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location], you would get, for each item, the value corresponding to the pair (Items.Supplier; Items.Location), adding the granularity that you wanted.
However, this implementation implies that all items with the same pair (Supplier; Location) share the same lead time. If you want different lead times for different items provided by the same supplier, you need to add the relevant Reference in the Suppliers table (both for your original case at the item level and for your updated one at the SKU level).

Also, looking at the original code:
It seems that your Items table has a primary dimension which is also present in PO, allowing natural aggregation on lines 2, 3 and 4.
If the primary dimension was previously at the Item level, you might want to change it to the SKU level (Item x Location). This way, Items.SLT_ItemLevel will be the distribution of observed lead times for your specific SKU (versus your specific Item previously).

Hope it helps!

Thank you for the guidance. I am asking more from the code standpoint. The data is given with lead times at the item-location level. I am thinking the easiest approach is to bring that data from the Items table into the Vendors table in order to reuse the existing code.
With the existing code, I assume I need to add a location variable to the file, so that it looks something like:

Original:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Leadtime : number

Updated:


read "/clean/tmp/Suppliers.ion" as Suppliers with
  Supplier : text
  Location : text
  Leadtime : number

Then, in any subsequent joins or filters, I will need to add the location filter. How would I update the following code to account for the location-specific lead time?


/// Possible SLT layers, depending on how many datapoints can be found in the dataset
Items.SLT_ItemLevel = ranvar(PO.DeliveryDelay) when PO.IsClosed
Items.SLT_SupplierAndCategoryLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier,Items.Category] when PO.IsClosed
Items.SLT_SupplierLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier] when PO.IsClosed
Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by Suppliers.Supplier at Items.Supplier

Taking the last line as an example, would it look something like this?


Items.AnnouncedSLTValue = same(Suppliers.Leadtime) by [Suppliers.Supplier, Suppliers.Location] at [Items.Supplier, Items.Location]

Hello,

It is indeed very common to have distinct supplier lead times depending on the location to be served.
The usual way to take these differences into account in your data is:
- Have a SKU table, and not only an Item table
- If you have a purchase orders history with relevant data, then simply create a join between [PO.Sku] and [Sku.sku]. We would recommend a probabilistic supplier lead time (use ranvar()); if that is not possible, then take the average.

Hope it helps

Hey! Thanks for your interest. I am not too sure which code you are referring to. Don't hesitate to include an Envision snippet (see https://news.lokad.com/static/formatting ) in your question to clarify what you are working on. You can also include a link to the Envision code playground (see https://try.lokad.com ) if you can isolate the problem.

Lokad usually approaches lead time forecasting by crafting a parametric probabilistic model to be regressed with differentiable programming. This approach makes it possible, for example, to introduce a distance parameter in the model. The value of this parameter is then learned by regressing the model over whatever data happens to be available. Conversely, if there is no data at all (at least for now), the value of the parameter can be hard-coded to a guesstimate as a transient solution.

That said, this approach might be overkill if there is enough data to support a direct lead time ranvar construction over (supplier, location) instead of supplier alone.
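Concretely, such a direct construction could look like the following sketch, reusing the column names from the earlier snippets in this thread:

Items.SLT_SupplierAndLocationLevel = ranvar(PO.DeliveryDelay) by [Items.Supplier, Items.Location] when PO.IsClosed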

Let me know if it helps.

When using probabilistic lead times in actionrwd.reward, it is possible to encounter situations where a previously placed order is simulated to arrive later than the additional potential order considered by actionrwd. In other words, if a purchase order (PO) is in progress, the simulated purchase order generated by actionrwd may not adhere to a first-in, first-out (FIFO) rule relative to the previously ongoing orders. This scenario makes sense from a realistic standpoint, as purchase orders are not always strictly FIFO. However, from a stock manager / planning perspective, it can result in repetitive and hard-to-understand purchase suggestions for the user. It is unlikely that conditional lead time logic will be integrated into actionrwd, but this aspect should be addressed in the Monte Carlo reconstruction of actionrwd.
To avoid this pitfall, supply chain scientists (SCS) often resort to deterministic lead times (e.g., dirac(days)) that preserve the FIFO rule.
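For instance, a minimal sketch of such a deterministic lead time, here hard-coded at 7 days (the value is illustrative):

Items.LeadTime = dirac(7)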

I feel the explanation of the alpha parameter is a bit incomplete. The definition "The update speed parameter of the ISSM model for each item" is quite vague, when what really needs to be understood is that alpha represents the correlation between one observation and the next.
I would add that the 0.3 value in the code example is way too high in most cases, which can be misleading; a value of 0.05 would better fit usual cases to begin with.

arkadir Jun 29, 2023 | flag | on: Arguments of parsenumber function

The thousands separator is optional, but the decimal separator is mandatory. If no decimal separator is provided, the parsing will fail even if the provided numbers do not have decimals.

The shortest call to parsenumber (or tryparsenumber) is therefore:


T.Number = parsenumber(T.Text, "", ".")
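Conversely, if the text also carries a thousands separator (e.g. values such as 1,234.56), pass it explicitly:

T.Number = parsenumber(T.Text, ",", ".")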

arkadir Jun 29, 2023 | flag | on: Support for trimming in dates and numbers

When reading a date column, it is possible to provide a `*` at the end of the format to cause it to discard an optional time section, if present, for example:


read "/example.csv" as T date: "yyyy-MM-dd*" with

This will treat a value such as 2023-06-29 10:24:35 as if it were just 2023-06-29. Without this trim option, attempting to read the value will fail and report an error.

Similarly, when reading a number column, it is possible to provide a `*` at the end of the format to cause it to discard up to three non-digit characters either at the start or the end of the number value. For example:


read "/example.csv" as T number: "1,000.0*" with 

This will treat a value such as 10.00 USD as if it were just 10.00. Without this trim option, attempting to read the value will fail and report an error.
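Presumably, both trim options can be combined on a single read statement; a minimal sketch (the column names are illustrative, and this assumes the two options compose as expected):

read "/example.csv" as T date: "yyyy-MM-dd*" number: "1,000.0*" with
  MyDate : date
  MyAmount : number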

vermorel Jun 28, 2023 | flag | on: Be careful what you negotiate for! [pic]

Where you say “to some extent negotiable” (paraphrased), could we regard it as the quantity unit corresponding to a price, such that a different, and likely higher, price might apply to orders of smaller quantities? In that case, knowing the quantity tiers and their corresponding prices would enable us to find the best order pattern, trading off price, wastage or inventory holding cost, and lead time.

What you are describing is frequently referred to as 'price breaks'. Price breaks can indeed be seen as a more general flavor of MOQs. In practice, there are two flavors of price breaks: merchant and fiscal. See also https://docs.lokad.com/library/supplier-price-breaks/

An enlightening chat on the future of aviation supply chain, shot within Air France's own engine repair facilities.

A remarkably well-illustrated dissertation on an under-studied topic. Very approachable, even for non-specialists.

What is a better way of getting stakeholder engagement for a large investment without a smaller, PoC-like approach?

The fundamental challenge is de-risking the process.

How does one get stakeholder engagement for a TMS, WMS, MRP or ERP? Those products are orders of magnitude more expensive than supply chain optimization software, and yet, there are no PoCs.

I can't speak for the whole enterprise software industry. In its own field, the Lokad approach to de-risking a quantitative supply chain initiative consists of making the whole thing accretive in a way that is largely independent of the vendor (aka Lokad).

Lokad charges on a monthly basis, with little or no commitment, and the process can end at any time. Whenever it ends, if it ends at all, the client company (the one operating a supply chain) can resume where Lokad left off.

The fine print of the process and the methodologies is detailed in my series of lectures: https://lokad.com/lectures

vermorel Jun 13, 2023 | flag | on: What defines supply chain excellence?

My own take is that IT, and more generally anything that is truly the foundation of actual execution, is treated as a second-class citizen, especially the _infrastructure_. Yet, the immense majority of operational woes in supply chain nowadays are IT-related or even IT-driven. For example, _Make use of channel data_ is wishful thinking for most companies due to the IT mess. IT is too important to be left in the hands of IT :-)