I am curious to learn about others' workflows for prompt engineering LLMs like GPT-3 -- how do you keep track of your prompts, how effective each one is, and how changes to a prompt affect the overall output?
So far I've taken a fairly simple approach: pasting prompts into a text file along with my observations. I'd like to know whether there are any tools or workflows that make this process more efficient.
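For context, here's roughly the level of structure I've been working with, sketched as a small Python helper -- the JSONL format and field names are just my own ad-hoc choices, not any standard:

```python
import json
import time

LOG_PATH = "prompt_log.jsonl"  # flat file I append experiments to

def log_experiment(prompt, output, notes=""):
    """Append one prompt/output/notes record as a JSON line."""
    record = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "prompt": prompt,
        "output": output,
        "notes": notes,  # free-form observations about quality, tone, etc.
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: after running a prompt through the model by hand,
# paste the result here along with any observations.
log_experiment(
    prompt="Summarize the following article in two sentences: ...",
    output="(model output pasted here)",
    notes="Better than the previous version of this prompt; still too verbose.",
)
```

This works, but comparing variants of a prompt across many runs gets unwieldy quickly, which is why I'm asking.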