Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Maybe should be called Prompt Science or Prompt Discovery or even Prompt Craft.

I have a 40 million BERT-embedding spotify-annoy index that I keep experimenting with to make a better query vector.

One way that I’m doing is getting only the token vectors with the highest sum of the whole vector and averaging the top vectors to use as the query vector.

Another way is zeroing many dimensions randomly on the query vector to introduce diversity.

But after experimenting with “prompt engineering” I found out that prefixing the sentences for the query vectors with “prompts” yield very interesting results.

But I don’t see much engineering. It’s more trial, feedback and trying again. Maybe even Prompt Art. Just like on chatGPT.



Back in the day one could possess Google-fu, so this must be prompt-fu.


I like "prompt injection", personally. It's not as pretentious as "prompt engineering".


Nobody understands the emergent properties of LLMs. Trying to understand how it works is science or research, whereas using it to produce something that’s useful is alchemy.

Even “tuning”, as my sibling comment suggests, is imo a stretch, because it implies some form of finite set of knobs that can be adjusted. Prompts aren’t something that you can simply map to knobs without pushing the analogy beyond reason.


Prompt injection means something else: https://simonwillison.net/series/prompt-injection/


I'm struggling to understand how the two ideas are different in any way other than intent. Sure, I'm not likely to throw an <|endoftext|> into a tailored context, but anybody who, for example, lies about what "assistant" says in the API calls is surely attempting to coerce behavior out of the model that isn't in line with OpenAI's intentions.


I thought you were suggesting renaming "prompt engineering" - the activity of designing prompts to solve specific problems - to "prompt injection", which means deliberately attacking prompts using input designed to subvert their planned behaviour.

To me, that's like rebranding "software engineering" to "exploit engineering" - sure, one is a subset of the other but they are not the same thing.


I don't think "prompt engineering" was ever a clearly-defined practice. The way I see it, it's just some over-eager noobs both prompting and prompt-injecting until they get results close to what they want, and then subsequently pretending like they're engaging in some new branch of mathematical reasoning. Hence why I called the moniker "pretentious".

Personally, I've never liked the title of "software engineer" or even "data engineer" (my own title). However, those are more rooted in engineering-like practices than any of this "prompt engineering" nonsense.


I think that's already taken and more about hacking via variables in the prompt like SQL injection.

I would just got with prompt tuning.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: