How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
When does a chain typically interact with memory in a run within the LangChain framework?
Which is a cost-related benefit of using vector databases with Large Language Models (LLMs)?
Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?
Which is a key characteristic of the annotation process used in T-Few fine-tuning?
Which is NOT a typical use case for LangSmith Evaluators?
Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?
What does the Ranker do in a text generation system?
Why is normalization of vectors important before indexing in a hybrid search system?
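As a quick illustration of what normalization means here (a minimal sketch, not any particular search system's implementation): L2 normalization rescales a vector to unit length, so that after indexing, dot-product scores and cosine similarity coincide.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm; afterwards, dot product == cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

v = l2_normalize([3.0, 4.0])
print(v)  # same direction, unit length
print(math.sqrt(sum(x * x for x in v)))  # norm is 1.0
```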
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
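For intuition, here is a library-free sketch of the pipe-composition idea behind the `prompt | llm` expression above (the class and function names are illustrative stand-ins, not the actual langchain_core API):

```python
class Runnable:
    """Toy runnable: wraps a function and supports LCEL-style `|` chaining."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` returns a new Runnable that pipes a's output into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Hypothetical stand-ins for a prompt template and an LLM call.
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[model output for: {text}]")

chain = prompt | llm
print(chain.invoke("cats"))  # prompt formats the input, llm consumes the result
```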
What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?
Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?
What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?
Which LangChain component is responsible for generating the linguistic output in a chatbot system?
How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?
An AI development company is building an AI-assisted chatbot for a customer, an online retail company. The goal is an assistant that can best answer queries about company policies and retain the chat history throughout a session. Given these requirements, which type of model would be the best choice?
You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?
What does a cosine distance of 0 indicate about the relationship between two embeddings?
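A small worked example (a minimal sketch in plain Python, not a production metric implementation): cosine distance is 1 minus cosine similarity, so parallel vectors, even of different magnitudes, have distance 0.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means the vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Same direction, different magnitude -> distance 0.
print(cosine_distance([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```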
What does "Loss" measure in the evaluation of OCI Generative AI fine-tuned models?
Which is NOT a built-in memory type in LangChain?
What is the primary purpose of LangSmith Tracing?
Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?
Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?
How does a presence penalty function in language model generation?
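As background for this question, a presence penalty is commonly described as a flat reduction applied to the logits of any token that has already appeared in the output, regardless of how many times (unlike a frequency penalty, which scales with the count). The sketch below is a hypothetical illustration of that idea, not any provider's actual decoder code:

```python
def apply_presence_penalty(logits, generated_tokens, penalty):
    """Subtract a flat penalty from every token that has appeared at least
    once in the output so far (illustrative sketch; real decoders vary)."""
    seen = set(generated_tokens)
    return {tok: (score - penalty if tok in seen else score)
            for tok, score in logits.items()}

logits = {"cat": 2.0, "dog": 1.5, "fish": 1.0}
# "cat" appeared twice but is penalized only once; "fish" is untouched.
print(apply_presence_penalty(logits, ["cat", "cat", "dog"], penalty=0.7))
```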
What does the RAG Sequence model do in the context of generating a response?