This article contains so much nonsense that I would not be surprised if some of the content had been generated by ChatGPT. Just look at this:
> Citations required! I'm sorry, I didn't cite the experimental research to support these recommendations. The honest truth is that I'm too lazy to look up the papers I read about them (often multiple per point). If you choose not to believe me, that's fine, the more important point is that experimental studies on prompting techniques and their efficacy exist. But, I promise I didn't make these up, though it may be possible some are outdated with modern models.
This person appears to be caught up in the LLM and prompting hype, trying to justify this new snake oil with jargon, when not even they understand the inner workings of an AI model or why it hallucinates so frequently.

"Prompt Engineering" and "Blind Prompting" are different branding for the same snake oil.