Do improvements in forecasting become less impactful as long as the optimization process is well-structured and focused on improving the decisions made from those forecasts?

Also, why does Lokad use stochastic gradient descent instead of tools like Seeker from InsideOpt, which leverages metaheuristic algorithms, robust optimization, and customized methods? Is it due to interpretability or another reason?

vermorel 3 weeks ago

Lokad has its own stochastic optimization tools, comparable to Seeker from InsideOpt. Those tools are not yet publicly documented.

Those tools use stochastic gradient descent (SGD), but SGD is only a small piece of the puzzle. There are two main challenges. First, SGD only works on continuous spaces, while supply chain problems are invariably discrete. Second, a "naive" gradient descent fails to handle any non-convex situation. Our tools address both challenges.
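To make the first challenge concrete, here is a minimal sketch (an illustration, not Lokad's actual tooling; the costs, demand distribution, and learning rate are assumptions): a discrete order quantity is relaxed into a continuous variable, optimized by stochastic subgradient steps over sampled demand scenarios, and only projected back onto the integers at the end.

```python
# Toy newsvendor-style problem: the real decision is a discrete order
# quantity, but we optimize a continuous relaxation with SGD and round.
import numpy as np

rng = np.random.default_rng(0)
holding_cost, stockout_cost = 1.0, 4.0      # assumed unit costs

def sampled_grad(q, demand):
    # Subgradient of the piecewise-linear holding/stockout cost w.r.t.
    # the relaxed (continuous) order quantity q, per sampled demand.
    return np.where(q > demand, holding_cost, -stockout_cost)

q = 50.0                                    # continuous relaxation of a discrete decision
learning_rate = 0.5
for step in range(2000):
    demand = rng.poisson(80, size=32)       # minibatch of demand scenarios
    q -= learning_rate * sampled_grad(q, demand).mean()

q_int = int(round(q))                       # project back onto the discrete space
print("relaxed optimum:", round(q, 1), "-> discrete order quantity:", q_int)
```

Rounding at the end is of course a crude projection; the point of the sketch is only that a descent direction remains useful even when the underlying decision is discrete.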

The central insight of those tools is to put a learning process at the core of the optimization process. Lokad is not the only company doing that. AlphaFold from DeepMind does the same for another optimization problem, namely protein folding.
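For intuition only, here is a toy sketch of "learning inside the optimization loop" (neither AlphaFold's nor Lokad's actual method; the objective, surrogate model, and step sizes are assumptions): a cheap model is learned from the evaluations gathered so far, and gradient steps on that learned model propose the next candidate to evaluate.

```python
# Learn a surrogate of an expensive, non-convex objective, descend on the
# surrogate to propose candidates, and keep re-learning from new evaluations.
import numpy as np

rng = np.random.default_rng(1)

def true_objective(x):
    # Expensive, non-convex "black box" that we can only sample.
    return np.sin(3 * x) + 0.1 * (x - 2.0) ** 2

# Start from a handful of random evaluations.
xs = list(rng.uniform(-3.0, 5.0, size=5))
ys = [true_objective(x) for x in xs]

for _ in range(20):
    # Learn: fit a cheap polynomial surrogate to the points seen so far.
    coeffs = np.polyfit(xs, ys, deg=4)
    grad = np.polyder(coeffs)

    # Optimize: gradient descent on the surrogate, starting near the best point.
    x = xs[int(np.argmin(ys))] + rng.normal(0.0, 0.1)
    for _ in range(100):
        x = np.clip(x - 0.01 * np.polyval(grad, x), -3.0, 5.0)

    # Evaluate the real objective at the proposed point, enrich the dataset.
    xs.append(float(x))
    ys.append(true_objective(float(x)))

print("best decision found:", round(xs[int(np.argmin(ys))], 2),
      "cost:", round(min(ys), 3))
```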

The reason why SGD is so important is that it gives a direction to the descent. Yes, pure gradient descent is insufficient (the second challenge above), but ignoring altogether the most likely direction of the descent is highly inefficient. Moreover, SGD is remarkably scalable.
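On scalability, a small illustrative sketch (the numbers are assumptions): a minibatch gradient is a noisy but unbiased estimate of the full gradient, and its per-step cost does not grow with the number of demand scenarios.

```python
# The minibatch subgradient points in (roughly) the same direction as the
# full-batch one, at a fraction of the cost per step.
import numpy as np

rng = np.random.default_rng(2)
demand = rng.poisson(80, size=1_000_000)      # one million demand scenarios
holding_cost, stockout_cost, q = 1.0, 4.0, 70.0

def grad(d):
    # Subgradient of the newsvendor cost at q, averaged over scenarios d.
    return np.where(q > d, holding_cost, -stockout_cost).mean()

full = grad(demand)                           # touches all 1e6 scenarios
mini = grad(rng.choice(demand, size=256))     # touches only 256 of them

print(f"full-batch gradient: {full:+.3f}")
print(f"minibatch estimate : {mini:+.3f}  (same direction, ~4000x cheaper)")
```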

My former research supervisor at AT&T Labs, Mehryar Mohri, was of the opinion that having "metaheuristic algorithms" was akin to saying you have no algorithm. Two decades later, this statement remains remarkably accurate. This class of techniques is only marginally better than random exploration.

My 2 cents on the matter. Hope it helps.

Conor 3 weeks ago

Hi Joshua! Thanks for the question. I'll ping an internal expert to answer it.