A helpful allegory for today's software flexibility vs. ease-of-use tradeoff and how LLMs may lead to more extensible and malleable software for the end user.
To be quite honest, it's the first time I've seen something that helps inform how we, as a supply chain community, might use LLMs in the context of spreadsheets and rigid tools.
"...LLM developers could go beyond that and update the application. When we give feedback about adding a new feature, our request wouldn’t get lost in an infinite queue. They would respond immediately, and we’d have some back and forth to get the feature implemented."
LLMs can certainly support a whole next-gen replacement for Tableau-like software (widely used for supply chain purposes), where the SQL queries are generated from prompts. I may have to revisit my Thin BI section at https://www.lokad.com/business-intelligence-bi a few years down the road.
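The prompt-to-SQL idea can be sketched in a few lines. Here, a stubbed `llm_generate_sql` function stands in for a real model call (in practice you would send the schema plus the user's question to an LLM API); the table, column names, and the canned SQL response are all illustrative assumptions, not part of any actual product.

```python
import sqlite3

# Hypothetical stand-in for an LLM call; a real system would send the
# table schema and the user's question to a model and receive SQL back.
def llm_generate_sql(question: str, schema: str) -> str:
    # Canned response for illustration only.
    return ("SELECT sku, SUM(qty) AS total FROM shipments "
            "GROUP BY sku ORDER BY total DESC")

# Toy dataset standing in for a BI data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (sku TEXT, qty INTEGER)")
conn.executemany("INSERT INTO shipments VALUES (?, ?)",
                 [("A", 10), ("B", 25), ("A", 5)])

sql = llm_generate_sql("Which SKU ships the most units?",
                       "shipments(sku TEXT, qty INTEGER)")
rows = conn.execute(sql).fetchall()
print(rows)  # [('B', 25), ('A', 15)]
```

The key point is that the LLM only produces the query text; execution stays in the hands of a deterministic SQL engine, so the same query always yields the same result.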
However, system-wide consistency is a big unsolved challenge. LLMs have token limits. Within those limits, LLMs are frequently beyond-human at linguistic or patternistic tasks (for lack of a better word). Beyond those limits, it becomes very fuzzy. Even OpenAI doesn't seem convinced of their own capacity to push those token limits further within the current LLM paradigm.
Agreed, the current paradigm with token limitations restricts the use cases on raw data, i.e. giving the LLM your entire hypercube to look for things.
However, if instead you were pointing it at the documentation for the 2-3 tools you're using, plus Excel, and asking it to tweak XYZ functionality... then the fuzziness/randomness is confined to the configuration/setup layer, which in turn drives a consistent and performant tool to generate the results.
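One way to picture confining the LLM to the setup layer: the model proposes a small configuration patch, and deterministic code validates it against a whitelist before the tool ever runs. Everything below is a minimal sketch under assumed names (`llm_output`, the config keys, the schema) invented for illustration.

```python
import json

# Hypothetical LLM output: a JSON patch to the tool's configuration,
# rather than any direct manipulation of the raw data.
llm_output = '{"rolling_window_days": 28, "aggregation": "sum"}'

# Whitelisted config keys and their expected types (illustrative).
ALLOWED = {"rolling_window_days": int, "aggregation": str}

def apply_config_patch(config: dict, patch_json: str) -> dict:
    patch = json.loads(patch_json)
    for key, value in patch.items():
        # Reject anything outside the schema, so LLM fuzziness stays
        # confined to the configuration layer and cannot corrupt the
        # deterministic tool downstream.
        if key not in ALLOWED or not isinstance(value, ALLOWED[key]):
            raise ValueError(f"rejected config key: {key}")
        config[key] = value
    return config

config = apply_config_patch(
    {"rolling_window_days": 7, "aggregation": "mean"}, llm_output)
print(config)  # {'rolling_window_days': 28, 'aggregation': 'sum'}
```

If the model hallucinates an unknown key, the patch is rejected outright; the worst case is a refused tweak, not an inconsistent result.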