Running a Prompt

Once the configuration is set, users can run the selected prompt and monitor its output in real time. This interactive loop provides quick feedback, helping users refine both their input and the AI's response toward the desired outcome.

Example: Running an Audio Diarization Prompt

For example, when running the Audio Diarization prompt, the user supplies an input, such as an audio file or instruction, and the AI model processes it and returns a response.

  • User Input: The user inputs data, such as text, code, or audio transcription.
  • Model Output: The model responds with a result, such as sorting a list of numbers or providing a transcription of the audio, complete with timestamps and speaker separation.

The interaction follows a dialogue format, clearly distinguishing between user queries and model responses.
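The dialogue format described above can be sketched as a simple data structure: each turn records who is speaking and what was said. This is an illustrative sketch, not real model output; the field names (`role`, `content`) are assumptions for the example.

```python
# Each turn in the dialogue is a dict with a role ("user" or "model")
# and the content of that turn. The diarization output below is a
# made-up example showing timestamps and speaker separation.
conversation = [
    {"role": "user", "content": "Transcribe this audio and separate the speakers."},
    {"role": "model", "content": "[00:00] Speaker 1: Hello, everyone.\n"
                                 "[00:04] Speaker 2: Thanks for joining."},
]

def render(turns):
    """Format the turns as a readable dialogue transcript."""
    return "\n".join(f"{t['role'].upper()}: {t['content']}" for t in turns)

print(render(conversation))
```

Keeping user queries and model responses as separate, labeled turns makes it easy to display, save, or replay the interaction later.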


Adjusting Parameters and Re-Running the Prompt

Users can adjust the parameters of the run on the right-hand side of the screen. For instance, they can modify the Token Count to limit the response length, or adjust the Temperature to control the randomness of the AI's response.

  • Token Count: Set the maximum number of tokens the model may generate, limiting the length of the response.
  • Temperature: Set how creative or deterministic the response should be.
  • JSON Mode: Enable or disable JSON mode for structured data handling.

Once changes are made, users can re-run the prompt to see how the output differs, based on the modified parameters. This iterative approach allows users to refine the results until they meet their project needs.
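The adjust-and-re-run workflow can be sketched as building a fresh configuration for each run. The field names below follow the parameters described above but are assumptions; exact names vary by SDK.

```python
# A hedged sketch of bundling the run parameters into a config dict.
def make_config(temperature=0.7, max_output_tokens=256, json_mode=False):
    """Bundle the run parameters (temperature, token limit, JSON mode)."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically constrained to [0.0, 2.0]")
    config = {
        "temperature": temperature,              # randomness of sampling
        "max_output_tokens": max_output_tokens,  # response-length limit
    }
    if json_mode:
        # Ask the model for structured JSON instead of free text.
        config["response_mime_type"] = "application/json"
    return config

# Re-running with modified parameters is just building a new config:
first_run = make_config(temperature=1.0)
second_run = make_config(temperature=0.2, max_output_tokens=128, json_mode=True)
```

Comparing the outputs produced under `first_run` and `second_run` shows how temperature and token limits shape the response.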


Saving the Conversation

After running the prompt, users can save a copy of the conversation, including both the input and output, for later use. This can be useful for keeping a record of the prompt's behavior or sharing the results with team members.

  • Save a Copy: Saves the entire conversation, which can be revisited or analyzed later.
  • Get a Code: Users can click Get a Code to generate code for the current prompt setup, making it easy to reference the configuration or reuse it in future sessions.

By saving these outputs, users can build a library of reusable prompts that can be fine-tuned or adapted for different tasks.
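Saving a conversation for later analysis can be sketched as serializing the turns to a timestamped JSON file. The file-naming scheme and structure here are illustrative assumptions, not the product's actual save format.

```python
import json
import tempfile
import time
from pathlib import Path

def save_conversation(turns, directory):
    """Write the conversation (inputs and outputs) to a timestamped JSON file."""
    path = Path(directory) / f"conversation-{int(time.time())}.json"
    path.write_text(json.dumps({"turns": turns}, indent=2), encoding="utf-8")
    return path

# Usage: save a small conversation to a temporary directory.
turns = [
    {"role": "user", "content": "Sort: [3, 1, 2]"},
    {"role": "model", "content": "[1, 2, 3]"},
]
saved = save_conversation(turns, tempfile.gettempdir())
```

Accumulating these files gives a simple, greppable library of prompt runs that team members can revisit.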


Best Practices for Running Prompts

To optimize the results of running AI prompts, consider the following best practices:

  • Test Multiple Parameters: Try different values for temperature, token count, and other settings to find the best combination for your task.
  • Iterate Frequently: Don’t hesitate to re-run the prompt with slight adjustments to refine the output.
  • Use Descriptive Inputs: Provide clear and descriptive input to help the AI model generate more accurate responses.

By following these practices, users can ensure that they get the most out of the Prompt Gallery and its powerful AI capabilities.
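The "test multiple parameters" practice can be sketched as a small parameter sweep: run the same prompt at several temperatures and collect the outputs side by side. `run_model` is a stand-in placeholder, not a real API call.

```python
def run_model(prompt, temperature):
    """Stand-in for a real model call; returns a labeled dummy response."""
    return f"response to {prompt!r} at temperature={temperature}"

def sweep(prompt, temperatures):
    """Run the same prompt at each temperature and collect the outputs."""
    return {t: run_model(prompt, t) for t in temperatures}

results = sweep("Summarize this transcript.", [0.2, 0.7, 1.0])
for temp, output in results.items():
    print(temp, "->", output)
```

Swapping `run_model` for an actual model call turns this into a quick way to compare settings before committing to one.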