This is a brief summary of my Q&A with one of the users of the Excel file. I will provide each question and answer as a separate comment.
Q1: How are we creating the probability distributions for the products in this file? If I have historical sales data for a particular SKU, how should I get the probability distribution for that?
A1: We haven't built capabilities for probabilistic demand forecasts from historical data into this Excel file. This is very hard to implement in Excel due to its limitations. A programming language like Envision is more appropriate for that, but one could also use Python or any other language. How to build a probabilistic forecast with historical data was discussed in this lecture:
https://tv.lokad.com/journal/2022/3/2/probabilistic-forecasting-for-supply-chain/
This file uses synthetic distributions via built-in Excel functions for the normal and negative binomial distributions. By changing the parameters, you change the distributions and, accordingly, the ranking of the micro-level decisions. This is an educational tool, and the primary goal was to show how, given a probabilistic demand forecast and economic drivers, a demand planner can optimize purchasing decisions.
Still, the question of building a probabilistic forecast from historical data remains valid. It is just not in the scope of this document: it is harder to understand, but also harder to explain and demonstrate through Excel.
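To illustrate what the spreadsheet does with Excel's built-in distribution functions, here is a rough Python equivalent - a minimal sketch with made-up parameters, using NumPy's negative binomial parameterization rather than Excel's cell layout:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters, analogous to the inputs of the NEGBINOM.DIST
# cells in the spreadsheet (the numbers are illustrative, not from the file).
r, p = 5, 0.4  # negative binomial: number of successes, success probability
demand_samples = rng.negative_binomial(r, p, size=10_000)

# From the samples, estimate the probability of demand exceeding a stock level.
stock = 10
p_stockout = (demand_samples > stock).mean()
print(f"mean demand ≈ {demand_samples.mean():.1f}, "
      f"P(demand > {stock}) ≈ {p_stockout:.2f}")
```

Tweaking `r` and `p` shifts the whole distribution, which in turn reshuffles the ranking of the purchasing decisions downstream, exactly as changing the parameter cells does in the Excel file.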
Indeed. Although, as aircraft get dismantled, this tends to introduce a lot of spare parts into the market. Thus, most of the time, the parts of waning aircraft types become cheaper despite the lack of production of parts. However, as you correctly point out, there are parts that become rare and very expensive, making the aircraft type economically unviable.
Making noise and making money out of the noise, especially consistently, are not the same thing. I think the growing number of companies that adopt the probabilistic perspective through appropriate software is the only adequate metric that tells us anything about the adoption of the term.
That is interesting. At some point there will be very few aircraft of this model, so servicing parts for them will become nightmarish: demand will be even more sparse than it is now, meaning the parts rotation will slow down, and carrying costs will eventually make it unprofitable for the remaining aircraft to fly.
A nice illustration of the sort of thing that characterizes aviation supply chains: aircraft are both expensive and modular. Thus, the option is always on the table to take a component from one aircraft and move it to another. Most of the time, exercising this option is pointless, but sometimes it's an economically viable move. Here, this is what Boeing is doing with aircraft engines. Aviation supply chains are not about picking safety stocks :-)
Working in the probabilistic space, it feels like the term is becoming more and more mainstream, and anecdotal evidence confirms that. However, looking at Google searches, this is not confirmed at all. Maybe the topic isn't as mainstream as we feel - confirmation bias? https://www.lokad.com/probabilistic-forecasting
Fun fact: since 2012, interest in the topic has grown 100x.
https://trends.google.com/trends/explore?date=all&q=%2Fg%2F11b90fjhmq
Fun fact: Lokad started to implement digital twins of supply chains more than a decade ago, although I don't overly like this terminology. As a rule of thumb, I tend to dislike terminologies that try to make tech sound cool, irrespective of the merit of said technology. There are tons of challenges associated with large-scale modeling of supply chains, the first one being: how accurate is my digital twin? Tech vendors are usually exceedingly quiet about this essential question.
The 747 was produced for 54 years. The most notable evolution was the introduction of fly-by-wire tech in the 1990s:
https://www.flightglobal.com/boeing-747-x-flies-by-wire/6314.article
This plane has massively contributed to the democratization of both air travel and air shipments. Considering that aircraft are typically operated for decades, some 747s are likely to keep flying for the next 20-30 years.
Lion Hirth is Professor of Energy Policy at the Hertie School. His research interests lie in the economics of wind and solar power, energy policy instruments and electricity market design.
The document introduces marginal pricing - in the context of energy - and makes three statements about it:
Marginal pricing is not unique to power markets.
Marginal pricing is not an artificial rule.
If you want to get rid of marginal pricing, you must force people to change their behavior.
These three points are very much aligned with what is generally understood as mainstream economics. They are quite general and apply to most supply chains as well.
I am not familiar with the specifics of the Greek energy market. However, from a supply-and-demand perspective: due to spiking electricity prices, several stakeholders are arguing that the electricity market is malfunctioning and that the pricing mechanism is flawed. Yet the merit order model, which attributes the marginal price (the highest accepted production price) to all producers, is nothing other than the supply/demand model that we apply in all other markets as well.
The Supply Chain Scientist delivers human intelligence magnified through machine intelligence. The smart automation of supply chain decisions is the end product of the work done by the Supply Chain Scientist.
Excerpt from 'The Supply Chain Scientist' at
https://www.lokad.com/the-supply-chain-scientist
Transit costs to low orbit are still beyond the realm of supply chain; however, it is notable that the cost per kilogram has gone down by a factor of 1000 over the course of 70 years. If progress keeps happening at the same pace, in a few decades, launches will become an option. The benefits of easier access to low orbit are somewhat unclear beyond telecommunications, but specialized micro-gravity factories have been explored many times in science fiction. At this point, orbit remains too expensive to even try to investigate newer/better industrial processes there.
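As a back-of-the-envelope check (my own arithmetic, not a figure from the article), a 1000x reduction over 70 years translates into roughly a 9-10% cost decline per year:

```python
# A 1000x cost reduction over 70 years corresponds to this annual factor.
annual_factor = (1 / 1000) ** (1 / 70)
annual_decline = 1 - annual_factor
print(f"annual cost factor ≈ {annual_factor:.3f}, "
      f"i.e. ≈ {annual_decline:.1%} cheaper per year")
```

A steady ~9% yearly decline compounding for a few more decades is what would bring launches within reach of mundane industrial use cases.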
The only way to assess the "forecastability" of a time-series is to use a forecasting model as a baseline. This is exactly what is done in the article, but unfortunately, it means that if the baseline model is poor, the "forecastability" assessment is going to be poor as well. There is no work-around for that.
Stepping back, one of the things that I have learned more than a decade ago at Lokad is that all the forecasting metrics are moot unless they are connected to euros or dollars attached to tangible supply chain decisions. This is true for deterministic and probabilistic forecasts alike, although, the problem becomes more apparent when probabilistic forecasts are used.
Many articles discuss how to measure forecastability for deterministic forecasting. However, a lot fall into the trap of suggesting to simply use a coefficient of variation (CV) measure - even though it counts forecastable patterns such as seasonality and trend as variation, and therefore mistakenly flags them as hard to forecast.
The linked article by Stefan de Kok does a good job of explaining the trap of pure CV and comes up with an alternative.
I'm a bit split, though, on whether to use this type of measure or to compute the FAA of a simple benchmark (such as a moving average).
The FAA gives you a minimum acceptable accuracy level, whereas the method proposed here gives a measure which (typically) ranges from 0 (unforecastable) to 1 (no noise).
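To make the CV trap concrete, here is a small sketch on synthetic data, with a seasonal-naive baseline standing in for the moving-average benchmark mentioned above (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly demand: strong yearly seasonality plus modest noise.
t = np.arange(104)  # two years of weeks
demand = 100 + 50 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, size=t.size)

# Plain CV counts the (perfectly forecastable) seasonality as "variation".
cv = demand.std() / demand.mean()

# Trivial seasonal-naive baseline: forecast this week with last year's value.
forecast = demand[:-52]
actual = demand[52:]
residual_cv = (actual - forecast).std() / actual.mean()

print(f"plain CV ≈ {cv:.2f}, residual CV vs baseline ≈ {residual_cv:.2f}")
```

The plain CV makes the series look hard to forecast, while even a naive baseline shows that most of that "variation" is perfectly predictable structure.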
Do any of you here have experience in implementing this and can share any experiences? Especially on the stakeholder/change management side.
Be our guest, virtually! These live, one-hour tours take you behind the scenes at our fulfilment centres, using a combination of live streaming, videos, 360° footage, and real-time Q&A to replicate the experience of our in-person tours.
Live virtual tours are approximately 1 hour long, including Q&A.
Registration closes 6 hours in advance of each tour. Last-minute registration ("instant join") is not possible. Tours will no longer appear in the calendar once registration is closed, or when they are fully booked.
Various options are available depending on the region of interest:
I have seen this so many times while working in FMCG. Some people pulled this off with multiple companies in sequence and landed CXO positions.
Steps for the new supply chain decision systems:
Simple, really.
1-1-1-1
Looks easy-peasy. Where did they find so many well-educated people who can't add numbers?
Let's show that supply chain practitioners can add.
Post your result in a comment like X-X-X-X, where each X is either 0 or 1: 0 means an incorrect answer and 1 a correct one. So 1-1-0-0 would mean that only the first and second questions were answered correctly.
The article proposes three ways, namely:
Building supply chain resilience by managing risk
Using technology to increase supply chain agility
Identifying and promoting ways to be more sustainable
However, the analysis is a bit all over the place.
Afaik, those types of ships are typically referred to as bulk carriers:
A bulk carrier or bulker is a merchant ship specially designed to transport unpackaged bulk cargo — such as grains, coal, ore, steel coils, and cement — in its cargo holds.
From https://en.wikipedia.org/wiki/Bulk_carrier
The interesting element is the extra option that COSCO gains by being able to leverage one extra type of ship. This method is probably inferior cost-wise to regular containers, but if a bulk carrier is the only ship that happens to be available, then, it becomes very valuable to have the option.
The post points out that computing a "demand" needs to factor in the delivery date (requested) vs the shipped date (realized). However, I am afraid this is a very thin contribution.
Demand is an incredibly multi-faceted topic. Demand is never observed. Only sales, or sales intents are observed. The sales are conditioned by many (many) factors that distort the perception of the demand.
First, let's start with the easy ones, the factors that simply censor your perception of the demand:
Then, we have all the big factors:
Looking at the demand through the lens of time-series analysis is short-sighted.
Ps: thanks a lot for being one of the first SCN contributors!
I think Lokad's supply chain lectures are an excellent source. They are all available on YouTube.
MIT offers a MicroMasters in Supply Chain Management through the edX platform. I haven't taken any of those courses, but maybe you can find something interesting in the syllabus. They are free, and you have the option of earning a certificate for a small fee if you complete all the requirements. This program can count as half of the on-campus Supply Chain Management master's program.
Maybe someone that has taken this program can comment and tell us if it is worth it?
I had the chance to play it at university (it was very chaotic!). It was a session on the "bullwhip effect".
In 2011 Lidl made the decision to replace its homegrown legacy system “Wawi” with a new solution based on “SAP for Retail, powered by HANA”. [..] Key figure analyses and forecasts should be available in real time. In addition, Lidl hoped for more efficient processes and easier handling of master data for the more than 10,000 stores and over 140 logistics centers.
[..]
The problems arose when Lidl discovered that the SAP system based its inventory on retail prices, whereas Lidl was used to doing that based on purchase prices. Lidl refused to change both its mindset and its processes, and decided to customise the software. That was the beginning of the end.
Disclaimer: Lokad competes with SAP on the inventory optimization front.
My take is that the SAP tech suffered from two non-recoverable design issues.
First, HANA has excessive needs in terms of computing resources, especially memory. This is usually the case with in-memory designs, but HANA seems to be one of the worst offenders (see [1]). This adds an enormous amount of mundane friction. At the scale of Lidl, this sort of friction becomes very unforgiving - every minor glitch turning into a many-hour (sometimes multi-day) fix.
Second, when operating at the supply chain analytical layer, complete customization is a necessity. There is no such thing as a "standard" decision-taking algorithm to drive a replenishment system moving billions of euros' worth of goods per year. This insight goes very much against most of the design choices made in SAP. Customization shouldn't be the enemy.
[1] https://www.brightworkresearch.com/how-hana-takes-30-to-40-times-the-memory-of-other-databases/
This spreadsheet contains a prioritized inventory replenishment logic based on a probabilistic demand forecast. It illustrates how SKUs compete for the same budget when it comes to improving service levels while keeping the amount of inventory under control. A lot of in-sheet explanations are provided so that the logic can be understood by practitioners.
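The prioritization idea can be sketched outside Excel as well. Below is a minimal greedy version in Python: every SKU competes for the same budget, and the next unit purchased is always the one with the best expected reward per dollar. The SKUs, demand distributions, costs, and margins are all made-up placeholders, not values from the spreadsheet:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical SKUs: probabilistic demand (here, Poisson samples),
# unit cost, and per-unit gross margin. All numbers are illustrative.
skus = {
    "A": {"demand": rng.poisson(4, 5000), "cost": 10.0, "margin": 6.0},
    "B": {"demand": rng.poisson(1, 5000), "cost": 25.0, "margin": 20.0},
}

def marginal_reward(name, stock_level):
    # Expected margin captured by holding one MORE unit:
    # margin * P(demand > current stock level), estimated from samples.
    sku = skus[name]
    return sku["margin"] * (sku["demand"] > stock_level).mean()

# Greedy prioritization: rank every candidate unit, across all SKUs,
# by reward per dollar, and buy until the budget runs out.
budget = 100.0
stock = {name: 0 for name in skus}
purchases = []
while True:
    best = max(skus, key=lambda n: marginal_reward(n, stock[n]) / skus[n]["cost"])
    if marginal_reward(best, stock[best]) <= 0 or skus[best]["cost"] > budget:
        break
    budget -= skus[best]["cost"]
    stock[best] += 1
    purchases.append(best)

print("purchase order:", purchases)
print("remaining budget:", budget)
```

Note how the diminishing marginal reward makes the SKUs alternate in the purchase list: once a SKU holds enough stock, the probability of selling one more unit drops, and the budget flows to the competing SKU.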
Another branch of the discussion on LinkedIn:
https://www.linkedin.com/feed/update/urn:li:activity:6966330168063725568/
Excel is the Swiss army knife of the supply chain practitioner. While it is definitely not the most appropriate tool for managing a supply chain, it is important to be able to convey ideas through it. Lokad tried to build a simplified, educational version of decision optimization with probabilistic forecasts in Excel. See the LinkedIn post at the link.
You can ask to receive the file in the comments, either here or on LinkedIn under the post.
This is a nice, readily accessible implementation: no sign-up, no login, create a new game and play. For those who are not familiar with the beer game, it's a 4-stage supply chain game with 4 roles: manufacturer, distributor, supplier, and retailer. Each player fills a role and tries to keep the right amount of goods flowing around. It's a nice - and somewhat brutal - way to experience a fair dose of bullwhip. If you don't have 3 friends readily available, the computer will play the other 3 roles.
Ps: I never got the chance to experience this game at university. If some people did, I would love to hear about their experience - as students - of their first 'Beer game'.
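The bullwhip dynamic the game demonstrates can be sketched in a few lines of Python. This is a deliberately naive toy model (instant deliveries, a made-up order-up-to policy, made-up numbers), not the actual beer game rules, but it is enough to show order variance amplifying as you move upstream:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 4-stage chain. Each stage forecasts with a short moving average of
# its incoming orders and orders "forecast + gap to a target stock" -
# a simple policy that readily produces a bullwhip.
stages, horizon, target = 4, 200, 20
demand = rng.poisson(10, horizon)          # end-customer demand
incoming = np.zeros((stages, horizon))     # orders seen by each stage
orders = np.zeros((stages, horizon))       # orders placed upstream
stock = np.full(stages, float(target))

for t in range(horizon):
    inc = float(demand[t])
    for s in range(stages):                # downstream -> upstream
        incoming[s, t] = inc
        stock[s] -= inc
        forecast = incoming[s, max(0, t - 4):t + 1].mean()
        order = max(0.0, forecast + (target - stock[s]))
        orders[s, t] = order
        stock[s] += order                  # toy model: instant delivery
        inc = order                        # becomes demand one stage up

# Order variance grows as we move away from the end customer.
variances = orders[:, 50:].var(axis=1)
print("order variance, downstream -> upstream:", variances.round(1))
```

Even with every stage behaving "reasonably", the manufacturer ends up facing far noisier orders than the retailer - which is exactly the frustration players experience in the game.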
Online version of the beer game, allowing multiplayer or computer controlled
2 years is a long time in ML.
These days, CV models can handle trivial tasks like barcode recognition in near real-time, even on commodity smartphones - mostly thanks to dedicated hardware like neural engines, tensor blocks, etc.
Since vision is quite popular these days (cameras, AR/VR, etc.), things should progress even more quickly on the hardware front. E.g., building more affordable robotic assistants for the warehouse, procured from cheaper parts, where minor inefficiencies in the gear drives and motors are compensated for by software. This is similar to what Ocado Group has been aiming for since they acquired HaddingtonDynamics for their tech.
Also NVidia Omniverse, as a bet on creating digital twins for reinforcement learning.
It looks like the effects of climate change on economies might accelerate in the mid-term. E.g., an exceptionally dry summer in Europe alone wouldn't be that bad for supply, but the war in Ukraine and the supply chain imbalances that started in 2020 - it all adds up.
It is impossible to predict the future in this situation, but at least the actors could try to manage their risks across a range of probable futures.
With all the opaque vocabulary you're dreaming of!
Sometimes I think of my professors always repeating "If you can explain it to your little siblings and answer all their questions then you understood it!" Who could do that with all the vocabulary used in the video?
The TSP is definitely a problem we encounter (almost) every day... With countless variants!
According to this paper https://www.researchgate.net/publication/337198743_A_comparative_analysis_of_the_travelling_salesman_problem_Exact_and_machine_learning_techniques, Google's tool seems pretty robust! I'd really like to give it a try when I need it.
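For readers who have never touched the TSP, here is a minimal exact solver by exhaustive search - a toy sketch with made-up coordinates, not the approach of Google's tool (which relies on heuristics precisely because brute force collapses beyond a handful of stops):

```python
from itertools import permutations
from math import dist

# A handful of made-up 2D delivery stops. Exhaustive search is fine here,
# and hopeless beyond ~10 stops: the search space grows factorially.
stops = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]

def tour_length(order):
    # Closed tour: the last leg returns to the starting stop.
    legs = zip(order, order[1:] + order[:1])
    return sum(dist(stops[a], stops[b]) for a, b in legs)

best = min(permutations(range(len(stops))), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 2))
```

Every real-world variant (time windows, capacities, multiple vehicles) only makes the combinatorics worse, which is why dedicated solvers are worth trying.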
For those wondering about this VizPick technology, there is a short video demo from two years ago at
https://www.reddit.com/r/walmart/comments/l0bn4m/for_those_wondering_about_vizpick/
The UX isn't perfectly smooth. You can feel the latency of the recognition software. Also, the operator has to move relatively slowly to give the acquired digital image a chance to stabilize. However, it still beats human perception by a sizeable margin.
Thanks and welcome! Don't hesitate to submit a link of your own, and/or to post a question. You will benefit from an extra level of care, being one of the first non-Lokad contributors. :-)
I am looking forward to some good discussions here.
Unfortunately, the S&OP processes - at the very least those I had the chance to observe in my career - had already devolved into tedious, iterated sandbagging exercises, exclusively moving targets up or down without ever touching any actual decision. Yet, the insight remains correct: without putting decisions front and center, the process devolves into idle speculation about the future.
That's a bit like the S&OP process in general: it should be based on the important decisions you need to take, and those decisions only.
It is often the case in software that deciding on a solution based on a checklist of features leads to feature bloat. Picking a few core features, and wrapping them in a system that allows easy customization, is much more effective.
Side story: this discussion board has been a long-time project of mine. For years I have been looking for a minimalistic discussion board, but all I could find were bloated pieces of software that did 10x what I wanted, and were missing the few things I cared about - like mathematical notations (hey, have a look at my EOQ, $Q=\sqrt{\frac{2c_l\cdot k}{c_s}}$). More recently, the big platforms started doubling down on both fact checking - as if such a thing was possible when operating platforms that discuss everything and the rest - and monetization. Both are quite toxic to healthy open discussions. This renewed my sense of urgency for this project.
Yet, I believe that an online community requires neither a big platform to operate nor a heavy-handed moderation policy. As long as you don't have to deal with millions of users, a few careful design decisions can alleviate the bulk of the usual problems that plague online communities (spamming, trolling, brigading).
About 10 days ago, I started actually working on the Supply Chain News project. For the tech-inclined, I have implemented this webapp in C# / .NET 6 / ASP.NET with a minimal amount of JS. The persistence is performed via Lokad.AzureEventStream, an open source project of Lokad. I might open source this very forum at some point, once the webapp is a little more battle-tested.
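The EOQ formula quoted in the side story can be sanity-checked in a couple of lines; the input values below are made up for illustration:

```python
from math import sqrt

# Wilson EOQ: Q = sqrt(2 * c_l * k / c_s), where c_l is the cost per
# order, k the annual demand, and c_s the annual holding cost per unit.
# The numbers are hypothetical, chosen to give a round answer.
c_l, k, c_s = 50.0, 1200.0, 3.0
Q = sqrt(2 * c_l * k / c_s)
print(f"EOQ = {Q:.0f} units")  # → EOQ = 200 units
```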
A system is a whole that contains two or more parts, each of which can affect the properties or behavior of the whole [..]. The second requirement of the parts of a system is that none of them has an independent effect on the whole. How any part affects the whole depends on what other parts are doing. [..] A system is a whole that cannot be divided into independent parts.
Supply chain is a system, in the purest sense as defined by Russell Ackoff. Approaching supply chains through systems thinking is critical to not entirely miss the point. Most practitioners do it instinctively. However, most academic treatments of the supply chain entirely miss the point.
There are three different ways of treating a problem. One is absolution. That's the way we treat most problems. You ignore it and you hope it'll go away or solve itself. [..] Problem resolution is a way of treating a problem where you dip into the past and ask what we have done in the past that suggests what we can do in the present that would be good enough. [..] There is something better to do with a problem than solving it, and it's dissolving it [..] by redesigning the system that has it so that the problem no longer exists.
Dissolving problems is incredibly powerful. A decade ago, Lokad (mostly) dissolved the forecasting accuracy problem through probabilistic forecasting. Instead of struggling with inaccurate forecasts, we embraced the irreducible uncertainty of the future, hence mostly dissolving the accuracy problem (not entirely, but to a large extent).
late 15c., "any small charge over freight cost, payable by owners of goods to the master of a ship for his care of the goods," also "financial loss incurred through damage to goods in transit," from French avarie "damage to ship," and Italian avaria.
Supply chain has been shaping the terminology of mathematics.
The M5 forecasting competition was notable on several fronts:
The findings are not overly surprising: gradient boosted trees and deep learning models - which dominate the vast majority of the Kaggle competitions - end up dominating the M5 as well.
Caveat emptor, those models are quite dramatically greedy in terms of computing resources. For a retail network of the scale of Walmart, I would not recommend those classes of machine learning models.
This video has ~47k views at the time of this comment, which is a shame, because this video is pure gold.
Brands have long known the power of the Made In Your Country sticker. It seems that the practice is over 4000 years old
https://en.wikipedia.org/wiki/Country_of_origin#History_of_country-of-origin_labelling
The Librem example is striking because they literally quantify the extra willingness to pay of their clients to benefit from the right country of origin.
I've seen several retailers completely give up on mustard. Rather than have an empty mustard section (with a paper explaining the situation), there is no mustard section anymore, the available space taken by enlarged mayonnaise and vinaigrette sections.
https://divan.dev/posts/animatedqr/
Someone invented an animated QR-code format. If you film the animation long enough, you can download an arbitrarily large file.
For another discussion about the bullwhip effect, check this discussion with Prof. Stephen Disney (University of Exeter):
https://tv.lokad.com/journal/2021/6/2/the-bullwhip-effect/
More generally, I am trying to consolidate a whole series of lectures about SCM that you can see at:
https://www.lokad.com/lectures
Shaun Snapp (principal at Brightwork Research) delivers an analysis that matches my own empirical observations about HANA, an analytics platform sold by SAP. The in-memory paradigm is expensive, pretty-much by design, due to both CAPEX and OPEX costs associated with the ownership of terabytes of DRAM, the class of devices that hold the memory in modern servers. Among the in-memory options for enterprise analytics, HANA appears to be one of the worst offenders of the market in terms of costs. Unfortunately, it does not appear to deliver features that can't be replicated in much cheaper systems. Nowadays, PostgreSQL, with a proper columnar setup, is a superior alternative in every dimension that I can think of.
Ryan Petersen is the CEO of Flexport, a startup that raised $2.2B. Flexport is a supply chain platform to track and manage orders.
Indeed, barcodes predate QR-codes, but QR-codes aren't necessarily superior. Information density comes with a tradeoff in terms of the scanning apparatus and the need for ambient lighting. If you want to convey more information, then RFID is more appropriate than (hypothetical) higher-dimensional barcodes. Alternatively, a QR-code is enough to encode a URL, and all the relevant information can be pulled from the internet instead of trying to cram the data into the label itself.
Close to three-quarters of supply-chain functions rely on the simplest method: spreadsheets. In addition, more than half use SAP Advanced Planning and Optimization (APO), a popular but antiquated supply-chain-planning application that SAP introduced in 1998 and will stop supporting in 2027. The portion of APO users in certain industries is even higher—75 to 80 percent of all the automotive, retail, and chemical companies we polled.
This 3/4 estimate for the supply chain functions that rely only on spreadsheets feels right; it also matches my experience. Furthermore, even when some kind of planning tool is present, the tool almost invariably relies on the alerts & exceptions design antipattern, which ensures a very low productivity for every employee who touches the piece of software.
However, I disagree with process suggested for the vendor selection. More specifically, the section that outlines the suggested process for the client company:
A list of business requirements.
Clear evaluation criteria.
Two or three “must have” use cases.
Companies invariably do an exceedingly poor job at all three of those tasks, which are exceedingly technology-dependent. This process guarantees a bureaucratic selection that favors whoever can tick the most boxes in the RFP document. Bloatware is the enemy of good software.
There is a much simpler, faster and, more importantly, more accurate way to proceed: a vendor-on-vendor assessment:
https://tv.lokad.com/journal/2021/3/31/adversarial-market-research-for-enterprise-software/
Great invention! The barcode can be seen as the grandfather of modern QR-codes. Its introduction initiated a series of inventions where, keeping the same basic idea, inventors just added new dimensions. For instance, a regular QR-code can be seen as the two-dimensional counterpart of the barcode. But inventors didn't stop there: somebody added a third dimension via color coding. It is interesting to wonder where this trend will end, and how many dimensions can be added to the flat QR-code.
It is an object of the invention to provide automatic apparatus for classifying things according to photo-response to lines and/or colors which constitute classification instructions and which have been attached to, imprinted upon or caused to represent the things being classified.
Much of what defines the modern supply chain only became possible thanks to the widespread usage of barcode technology. It's fascinating to see that the barcode predates mainframe computers, which only started to get traction in the late 1950s.
The discussion on LinkedIn:
https://www.linkedin.com/feed/update/urn:li:activity:6966315729650380800/
Paradoxically, data is the most under-valued and de-glamorised aspect of AI.
Lack of focus on the data plumbing spells the doom of most supply chain initiatives. While this article wasn't written with supply chain use-cases in mind, it's clearly relevant to the supply chain field. Data plumbing not being "glamorous" means that it's difficult to gather resources for stuff that doesn't have any kind of visible payback. Yet, data engineering is a highly capitalistic undertaking.
Ha ha. It's the pending $1 billion question that we haven't cracked yet at Lokad.
Unfortunately, this article gives away very little about what Google may or may not do for supply chains. The article is written like a marketing piece. Nowadays, pretty much every single software company is using at least two cloud computing platforms to some extent. Thus, it shouldn't be a surprise if a software company X that happens to produce supply chain software runs on cloud platform Y. Yet, this doesn't say why this cloud platform is better suited than its competitors.
The article puts forward Vertex AI, a recent ML offering from Google. However, Google's own documentation states "Pre-trained APIs for vision, video, natural language, and more", which gives strong vibes of being absolutely not the right kind of ML for supply chain. Furthermore, AutoML (featured by Vertex AI) is also the sort of technology I would strongly recommend against using for supply chain purposes. It adds massive friction (compute costs, development times) for no clear upside in production.
Even with the Amazon.com behemoth dominating the e-commerce landscape, there’s room for smaller, scrappier rivals, Giovannelli says. In fact, many independent service providers are under contract to Amazon and other e-tailers to fill out the growing need for delivery vehicles.
In short, the Amazon supply chain is a massive success, and just like Apple's iPhone, it proves that there is a vast market waiting for alternatives. When considering large markets, even dominant players like Apple struggle to reach a 25% market share. This gives a lot of room to other actors, and this is what those private equity investors are looking for.
Within the Lokad TV channel, this episode about DDMRP is probably the one, so far, that generated the most polarized feedback. On one hand, we got a lot of support. On the other hand, we got heavily criticized by the DDMRP evangelists. The most frustrating part for me is that the criticisms have, so far, only been shallow dismissals or arguments from authority.
A longer written discussion was produced at:
https://www.lokad.com/demand-driven-material-requirements-planning-ddmrp
Years later, the DDMRP community has still not addressed my top three objections:
It will be on September 14th at 15h00 (CET).
Watch the live at:
https://www.youtube.com/watch?v=3uqezVCMhSE
Most supply chain initiatives fail, and yet, the vast majority of supply chain case studies speak only of successes. Only the epic-scale failures become visible to outsiders. Even insiders frequently don't realize that most of the past initiatives of their own company have failed. Large companies aren't necessarily very capable when it comes to their institutional memory of past debacles.
The case of Target Canada was discussed in my supply chain lecture:
https://tv.lokad.com/journal/2021/2/3/supply-chain-personae/
Answering a question on YouTube:
Yes, in short, the two big gotchas are that (a) your digital twin may not reflect reality, and (b) your digital twin may not be prescriptive.
Concerning (a), measuring accuracy when modeling a system turns out to be a difficult problem. I intend to revisit the case in my series of supply chain lectures, but it's nontrivial, and so far all the vendors seem to be sweeping this question under the rug.
Concerning (b), if all the digital twin delivers are metrics, then it's just an elaborate way to waste employees' time, and thus money. Merely presenting metrics to employees is suspicious if there is no immediate call-to-action. And if there is a call-to-action, then let's take it further and automate the whole thing.