What is a prompt?

A prompt is the input text or query given to a language model (LLM), such as GPT, to guide its response. It is essentially a way of asking the model to generate text based on specific instructions, questions, or contextual information. Prompts vary in complexity, ranging from simple questions to detailed instructions or conversation starters.

How a prompt interacts with an LLM:

  1. Text Processing: When the LLM receives a prompt, it first breaks the input text into tokens, smaller chunks that represent words or parts of words. This tokenized input is what the model actually operates on to work out the context and meaning of the prompt (see the tokenization sketch after this list).

  2. Pattern Recognition: LLMs like GPT are trained on vast amounts of text data and learn statistical patterns in language. Given a prompt, the model does not look up an answer; it repeatedly predicts the most likely next token based on the patterns it has learned, building its response one token at a time (the toy calculation after this list shows this scoring step).

  3. Contextual Response: If the prompt provides enough context (such as background information, examples, or instructions), the model can generate a more specific and accurate response. Without sufficient context, the model falls back on its general knowledge of language patterns, which tends to produce a more generic output (the API sketch after this list shows context being supplied explicitly).

  4. Temperature and Response Variability: LLMs expose parameters like temperature, which control how deterministic or random the output is. A low temperature gives focused, reproducible responses, while a higher temperature allows more creative and varied outputs. Together with the prompt itself, this parameter shapes not only what the model says but how predictably it says it (the toy calculation after this list makes the effect concrete).

  5. Instructional Prompts: Prompts can provide explicit instructions or constraints for the model. For example, you can ask an LLM to "write a poem in the style of Shakespeare" or "explain quantum physics in simple terms." The model interprets these cues and adapts its response accordingly (see the API sketch after this list).

  6. Fine-Tuning: In some cases, prompts are collected together with example responses into training data used to fine-tune a language model, tailoring it to a specific domain or style of interaction (the last sketch after this list shows the typical shape of such data).
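
The sketches below are minimal Python illustrations of the steps above; the library choices, model names, and all example values are assumptions for demonstration, not part of this guide's own material. First, tokenization (step 1), using OpenAI's open-source tiktoken library; the encoding name cl100k_base is the one used by GPT-3.5/GPT-4-era models:

```python
# Tokenization sketch for step 1 (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Explain quantum physics in simple terms."
token_ids = enc.encode(prompt)                  # text -> integer token IDs
pieces = [enc.decode([t]) for t in token_ids]   # inspect each chunk

print(token_ids)  # a short list of integers
print(pieces)     # word and sub-word pieces the model actually sees
```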
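
Next, a toy version of the next-token scoring in step 2 and the temperature rescaling in step 4. The five-word vocabulary and the logit values are invented; real models run the same computation over tens of thousands of tokens:

```python
# Toy illustration of steps 2 and 4: the model scores every candidate
# next token (logits), and temperature rescales those scores before
# they become probabilities. All numbers here are made up.
import math

def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["cat", "dog", "car", "idea", "cloud"]
logits = [4.0, 3.5, 1.0, 0.5, 0.2]  # invented scores for the next token

for t in (0.2, 1.0, 1.5):
    probs = softmax(logits, t)
    dist = ", ".join(f"{w}: {p:.2f}" for w, p in zip(candidates, probs))
    print(f"temperature={t}: {dist}")

# Low temperature concentrates probability on the top token (more
# deterministic); higher temperature flattens the distribution
# (more varied, "creative" sampling).
```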
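
Steps 3 and 5 in practice: a sketch using the official openai Python SDK, with instructions and constraints in the system message and context plus the actual question in the user message. The model name gpt-4o is an assumption; substitute whichever model you use:

```python
# Sketch of steps 3 and 5 (pip install openai; requires the
# OPENAI_API_KEY environment variable to be set).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        # Instructions and constraints (step 5) go in the system message.
        {"role": "system",
         "content": "You are a physics teacher. Answer in three short "
                    "sentences a twelve-year-old can follow."},
        # Context plus the question itself (step 3) go in the user message.
        {"role": "user",
         "content": "I just learned what atoms are. "
                    "Explain quantum physics in simple terms."},
    ],
)

print(response.choices[0].message.content)
```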
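
Finally, step 6: OpenAI-style fine-tuning data is a JSONL file in which each line is one example conversation. The file name, example content, and model name below are hypothetical:

```python
# Sketch of step 6: writing fine-tuning data in OpenAI's chat format.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a friendly physics tutor."},
        {"role": "user", "content": "What is an electron?"},
        {"role": "assistant", "content": "An electron is a tiny negatively "
                                         "charged particle found in atoms."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file is then uploaded and a fine-tuning job created, e.g.:
#   upload = client.files.create(file=open("training_data.jsonl", "rb"),
#                                purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=upload.id,
#                                  model="gpt-4o-mini")  # assumed model name
```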
