Legal Disclaimer and Terms of Use
⚖️ Important Legal Notice
Please read carefully before proceeding
| Characteristic | Description |
|---|---|
| 📊 Data Source | Based on publicly available information and community discussions |
| 🏢 Affiliation | Not affiliated with, endorsed by, or authorized by Oracle Corporation |
| 📝 Content Nature | Not representative of official examination content |
| 🎓 Purpose | Provided for educational purposes only |
⚖️ Rights Reserved
The author reserves all rights to this original work. Unauthorized use may result in legal action.
⚠️ Disclaimer
This material is provided "as is" without warranties of any kind. The author assumes no responsibility for:
- 🎯 Accuracy or completeness of information
- 💥 Any damages resulting from use of this material
- 🚀 Actions taken by users based on this content
🧠 Section 1: Core Generative AI Concepts
Mastering the fundamentals of modern AI
✨ What is In-Context Learning?
In-context learning is a powerful capability of Large Language Models (LLMs) that allows them to learn and execute new tasks without updating their weights (i.e., without training or fine-tuning). This process relies solely on the contextual information provided within the prompt.
| 🔧 Mechanism & Types | 📝 Description |
|---|---|
| 🎯 Mechanism | Leverages the LLM's "pattern matching" ability. By observing input-output examples or instructions in the prompt, the model infers the task's underlying pattern and applies it to new inputs. |
| 🔍 Zero-Shot Learning | No examples are provided in the prompt; the model relies solely on instructions and its pre-trained knowledge. |
| 1️⃣ One-Shot Learning | The prompt includes exactly one example. |
| 🎲 Few-Shot Learning | The prompt contains a small number of examples (typically 2 to 5); this is often the most effective way to use in-context learning. |
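To make the three variants concrete, here is a minimal Python sketch of the same toy task phrased as zero-shot, one-shot, and few-shot prompts. The task and wording are illustrative assumptions, not official exam content; each prompt is just a string you would send to any LLM completion endpoint.

```python
# Zero-shot: instruction only; the model relies on pre-trained knowledge.
zero_shot = """Classify the review as Positive or Negative.
Review: "The battery died after two days."
Sentiment:"""

# One-shot: the instruction plus exactly one worked example.
one_shot = """Classify the review as Positive or Negative.
Review: "Absolutely loved it!"
Sentiment: Positive
Review: "The battery died after two days."
Sentiment:"""

# Few-shot: a handful of examples (typically 2-5) before the real query.
few_shot = """Classify the review as Positive or Negative.
Review: "Absolutely loved it!"
Sentiment: Positive
Review: "Broke on the first use."
Sentiment: Negative
Review: "The battery died after two days."
Sentiment:"""
```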
✅ Key Advantage
Examples in the prompt guide the LLM toward better performance at no training cost. As stated in Q9, "In the prompt, it provides examples to guide the LLM to better performance, without training costs."
⚠️ Disadvantage (Q100)
It can increase latency for each model request because longer prompts with examples require more computational resources and time for the LLM to process.
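A rough way to see the cost: the demonstrations are sent, and re-processed, with every single request. The sketch below uses a crude whitespace word count as a stand-in for a tokenizer (real token counts differ) purely to show how the processed prompt grows with each added example.

```python
def approx_tokens(prompt: str) -> int:
    # Crude whitespace proxy for a tokenizer; real token counts differ,
    # but they grow with prompt length in the same way.
    return len(prompt.split())

query = 'Review: "The battery died after two days."\nSentiment:'
demos = ('Review: "Absolutely loved it!"\nSentiment: Positive\n'
         'Review: "Broke on the first use."\nSentiment: Negative\n')

print(approx_tokens(query))          # zero-shot: only the query is processed
print(approx_tokens(demos + query))  # few-shot: demonstrations are re-processed on every call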
🔄 Distinction from Fine-tuning
Unlike fine-tuning, which updates model parameters and is costly, in-context learning leaves the model's parameters untouched, adapts flexibly from task to task, and incurs far lower costs.
🔗 Relationship with Prompt Engineering
In-context learning is a core technique within prompt engineering, where the goal is to find the most effective prompts to elicit desired model capabilities.
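To make the distinction concrete, here is a minimal Python sketch in which generate() is a hypothetical stand-in for any LLM completion call, not a specific vendor SDK: the model's weights never change, and all task adaptation lives in the prompt.

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for any LLM completion call; in real use this
    # would send `prompt` to a hosted model whose weights are frozen.
    return "<model output>"

# Task 1: translation, specified entirely in the prompt.
translation_prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

# Task 2: ticket routing, also specified entirely in the prompt.
routing_prompt = (
    "Label the ticket as Billing or Technical.\n"
    'Ticket: "I was charged twice." => Billing\n'
    'Ticket: "The app crashes on startup." =>'
)

# One frozen model serves both tasks; what fine-tuning would bake into the
# weights, in-context learning expresses in the prompt alone.
for prompt in (translation_prompt, routing_prompt):
    print(generate(prompt))
```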
Correct Answer: B. This involves guiding the model at inference time with examples, without updating its parameters.
Explanation
What is In-Context Learning?
- Core Idea: During the inference phase, In-Context Learning (ICL) guides a pre-trained Large Language Model (LLM) to perform novel tasks by providing it with task-specific instructions or a few examples (demonstrations) in the input prompt, all without updating the model's internal parameters.
- How it Works: A user constructs a prompt that includes a "context" before the actual query. This context, typically a few input-output pairs, allows the model to infer the underlying pattern of the task and apply it to the new input.
- Think of it as: Giving a highly knowledgeable expert a few solved examples of a new type of problem and then asking them to solve a new one. The expert doesn't relearn their knowledge (update parameters); they simply use their existing expertise to understand the "rules of the game" for the current task.
📚 A simple example of in-context learning, translating animal names into emoji:
```
# Example: translating animal names into emoji

# --- Demonstrations in the context ---
# Example 1
Input: cow
Output: 🐄

# Example 2
Input: tiger
Output: 🐅

# --- The user's actual query ---
Input: frog
Output:

# Model's output:
# 🐸
```
In this example, the model was never trained on dedicated "animal-to-emoji" data; from the two demonstrations provided in the prompt alone, it "learned" the new task and correctly output 🐸.
💡 Another simple example, a sentiment classification task:
```
# Example for a sentiment classification task

# --- Demonstrations in the context ---
# Example 1
Text: "This movie was fantastic, I loved it!"
Sentiment: Positive

# Example 2
Text: "A complete waste of time."
Sentiment: Negative

# --- The actual query ---
Text: "The acting was superb, but the plot was predictable."
Sentiment:

# Model's expected output:
# Mixed
```
Without any fine-tuning, the model "learns" the sentiment classification task from the two demonstrations provided in the prompt and outputs Mixed.
Correct Answer: C. It guides the model with examples at inference time, avoiding costly fine-tuning.
Progress: 2 of 3 questions covered
Here is a detailed explanation of the concept and the distinctions from the other options:
What is Few-Shot Prompting?
Core Idea: During the inference phase, few-shot prompting is a technique to customize an LLM's output for a specific task by including a few complete demonstrations (the "shots") of the desired input-output format directly in the prompt, all without any costly model fine-tuning.
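As a concrete illustration, here is a small Python sketch of how the "shots" might be assembled into a single prompt string. The helper name and formatting are illustrative assumptions, not a standard library API.

```python
def build_few_shot_prompt(instruction: str,
                          shots: list[tuple[str, str]],
                          query: str) -> str:
    lines = [instruction]
    for text, label in shots:       # each demonstration is a complete input-output pair
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
    lines.append(f"Text: {query}")  # the real question, left unanswered
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text.",
    shots=[
        ('"This movie was fantastic, I loved it!"', "Positive"),
        ('"A complete waste of time."', "Negative"),
    ],
    query='"The acting was superb, but the plot was predictable."',
)
print(prompt)  # send this string to any LLM completion endpoint
```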
Correct Answer: B. It refers to including 'k' number of demonstrations in the prompt to condition the model.
✅ Progress: All 3 questions covered!
Here is a detailed explanation of the concept and the distinctions from the other options:
What is K-shot Prompting?
Core Idea: During the inference phase, k-shot prompting is the technique of including k complete examples, or "shots," of a task within the input prompt to guide a pre-trained Large Language Model's output, all done without updating any of the model's parameters.
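Since k is simply the number of demonstrations, few-shot prompting generalizes directly to a parameter. This Python sketch (names are illustrative, not any standard API) shows k = 0, 1, and 3 reproducing zero-shot, one-shot, and few-shot prompting with one helper.

```python
# k-shot prompting is parameterized few-shot prompting: k is the number
# of demonstrations placed in the prompt.

EXAMPLES = [
    ("cow", "🐄"),
    ("tiger", "🐅"),
    ("dog", "🐕"),
]

def k_shot_prompt(k: int, query: str) -> str:
    lines = ["Translate the animal name into an emoji."]
    for name, emoji in EXAMPLES[:k]:  # include exactly k demonstrations
        lines.append(f"Input: {name}\nOutput: {emoji}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

print(k_shot_prompt(0, "frog"))  # k=0 -> zero-shot: instruction only
print(k_shot_prompt(1, "frog"))  # k=1 -> one-shot
print(k_shot_prompt(3, "frog"))  # k=3 -> few-shot
```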
🎉 Congratulations!
You've completed the Core Generative AI Concepts section