Supervised learning is used all over the place for the main probabilistic forecasting tasks: part requests (number of units), scraps (number of units), turn-around time (number of days), etc. Unsupervised learning, or rather self-supervised learning, is used for master data improvement: for example, to identify faulty or missing compatibilities between aircraft types and PNs, to identify misclassified PNs within the hierarchy, or to identify incorrect stock-on-hand values, etc. Then, stochastic optimization is applied on top of those probabilistic predictions to generate the supply chain decisions.
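To make the last step concrete, here is a minimal sketch (in plain Python, not Lokad's own Envision DSL) of stochastic optimization on top of a probabilistic forecast: a stocking decision for a single PN is chosen by minimizing expected cost across sampled demand scenarios. Everything specific here is an assumption for illustration, the negative binomial distribution standing in for a trained forecasting model, and the holding and stockout costs alike.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a probabilistic forecast of part requests over the lead
# time: samples from a negative binomial distribution (a common choice
# for intermittent spare-parts demand). In practice these scenarios
# would come from a trained supervised model, not a fixed distribution.
demand_scenarios = rng.negative_binomial(n=2, p=0.4, size=10_000)

# Hypothetical economics, chosen only for illustration:
holding_cost = 15.0    # cost per unit left in stock
stockout_cost = 400.0  # cost per unit of unserved part requests

def expected_cost(stock_level: int) -> float:
    """Average cost of a stocking decision across all demand scenarios."""
    overage = np.maximum(stock_level - demand_scenarios, 0)
    underage = np.maximum(demand_scenarios - stock_level, 0)
    return float(np.mean(holding_cost * overage + stockout_cost * underage))

# The decision is the stock level minimizing expected cost over the
# whole distribution, rather than a point forecast plus safety stock.
best = min(range(0, 50), key=expected_cost)
print(f"recommended stock level: {best} units")
```

The point of the sketch is that the optimization consumes the entire distribution of outcomes, which is why probabilistic forecasts (rather than point forecasts) are needed upstream.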
Beware: none of those calculations should be done within the data lake. The purpose of the data lake should only be to collect and serve the data "as is". It should not even attempt to add intelligence.
The high-level process is the same for all verticals; it's the fine print that varies (which sort of forecasts? which sort of decisions? etc.).
Thank you for explaining this very clearly. I understand we want to take all relevant raw data untouched and store it in a data lake. By keeping data storage and data processing separate, we gain scalability and the freedom to experiment with the optimization.
"Lokad is an analytical layer that operates on top of the client’s existing transactional systems. In other words, Lokad does not replace the ERP; it supplements it with predictive optimization capabilities that realistically cannot be implemented as part of a traditional transactional system."
I misspoke in my previous question; thank you for correcting me. I definitely need to further explore the architecture of data storage and data processing within this analytical engine. Thank you for your thorough response, Joannes.