When the mysterious black box is sufficiently intelligent and its output domain is _also_ natural language, the precision, and even the correctness, of the input matters less.
You can give an LLM a barely comprehensible query and you stand a decent chance of getting something useful back. Try the same in any conventional programming language and chances are it wouldn't even parse into a valid AST, let alone compile.
So it's a question of necessity--there wasn't one. LLMs are capable enough to deal with the ambiguities of natural language.
That said, I've seen some forms of DSL used to program LLMs in the chatbot character.ai scene, so I wouldn't be surprised if a more general DSL purpose-built for efficient prompt engineering is discovered (efficient use of tokens, standardized forms, etc).
(I say discovered because at the moment you can for example literally make up your own DSL if you so desire, and the LLM will just roll with it and mostly do what you intend)
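To illustrate, a made-up DSL (every keyword here is invented on the spot, not any real standard) might look something like this, and current models will generally infer the intent without the syntax ever being defined for them:

```
TASK: summarize
INPUT: <<pasted article text>>
CONSTRAINTS:
  - max_words: 50
  - tone: neutral
OUTPUT_FORMAT: bullet_list
```

The model has seen enough structured key-value text in training that it treats the fields as instructions, which is exactly the "roll with it" behavior described above.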