
Oracle 1z0-1127-25 Oracle Cloud Infrastructure 2025 Generative AI Professional Exam Practice Test

Demo: 26 questions
Total 88 questions

Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 1

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

Options:

A.

Shared among multiple customers for efficiency

B.

Stored in Object Storage encrypted by default

C.

Stored in an unencrypted form in Object Storage

D.

Stored in Key Management service

Question 2

When does a chain typically interact with memory in a run within the LangChain framework?

Options:

A.

Only after the output has been generated.

B.

Before user input and after chain execution.

C.

After user input but before chain execution, and again after core logic but before output.

D.

Continuously throughout the entire chain execution process.
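The timing this question probes can be sketched in plain Python: memory is read after the user input arrives but before the chain's core logic runs, and written back after the core logic finishes but before the output is returned. This is not LangChain code; the class and function names are illustrative stand-ins for the pattern.

```python
class SimpleMemory:
    """Illustrative stand-in for a chain's memory store."""
    def __init__(self):
        self.history = []

    def load(self):
        # Read existing context BEFORE the core logic runs
        return list(self.history)

    def save(self, user_input, output):
        # Write back AFTER the core logic, before the output is returned
        self.history.append((user_input, output))

def run_chain(memory, user_input):
    context = memory.load()                       # after user input, before execution
    output = f"reply#{len(context) + 1} to {user_input}"  # core logic (stand-in for the LLM call)
    memory.save(user_input, output)               # after core logic, before returning
    return output

mem = SimpleMemory()
print(run_chain(mem, "hi"))     # reply#1 to hi
print(run_chain(mem, "again"))  # reply#2 to again
```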

Question 3

Which is a cost-related benefit of using vector databases with Large Language Models (LLMs)?

Options:

A.

They require frequent manual updates, which increase operational costs.

B.

They offer real-time updated knowledge bases and are cheaper than fine-tuned LLMs.

C.

They increase the cost due to the need for real-time updates.

D.

They are more expensive but provide higher quality data.

Question 4

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?

Options:

A.

Updates the weights of the base model during the fine-tuning process

B.

Serves as a designated point for user requests and model responses

C.

Evaluates the performance metrics of the custom models

D.

Hosts the training data for fine-tuning custom models

Question 5

Which is a key characteristic of the annotation process used in T-Few fine-tuning?

Options:

A.

T-Few fine-tuning uses annotated data to adjust a fraction of model weights.

B.

T-Few fine-tuning requires manual annotation of input-output pairs.

C.

T-Few fine-tuning involves updating the weights of all layers in the model.

D.

T-Few fine-tuning relies on unsupervised learning techniques for annotation.

Question 6

Which is NOT a typical use case for LangSmith Evaluators?

Options:

A.

Measuring coherence of generated text

B.

Aligning code readability

C.

Evaluating factual accuracy of outputs

D.

Detecting bias or toxicity

Question 7

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?

Options:

A.

Linear relationships; they simplify the modeling process

B.

Semantic relationships; crucial for understanding context and generating precise language

C.

Hierarchical relationships; important for structuring database queries

D.

Temporal relationships; necessary for predicting future linguistic trends

Question 8

What does the Ranker do in a text generation system?

Options:

A.

It generates the final text based on the user's query.

B.

It sources information from databases to use in text generation.

C.

It evaluates and prioritizes the information retrieved by the Retriever.

D.

It interacts with the user to understand the query better.

Question 9

Why is normalization of vectors important before indexing in a hybrid search system?

Options:

A.

It ensures that all vectors represent keywords only.

B.

It significantly reduces the size of the database.

C.

It standardizes vector lengths for meaningful comparison using metrics such as Cosine Similarity.

D.

It converts all sparse vectors to dense vectors.
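The concept behind this question can be shown with a minimal L2-normalization sketch: once vectors are scaled to unit length, their dot product equals their cosine similarity, so vectors of different magnitudes become directly comparable. The function names and example vectors are illustrative.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (L2 norm = 1)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = l2_normalize([3.0, 4.0])   # unit-length version of [3, 4]
b = l2_normalize([6.0, 8.0])   # same direction, twice the magnitude

# After normalization the dot product IS the cosine similarity:
# parallel vectors score 1.0 despite their different original lengths.
print(dot(a, b))
```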

Question 10

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.

D.

LCEL is an older Python library for building Large Language Models.
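The `chain = prompt | llm` syntax composes runnables into a pipeline. A minimal plain-Python sketch of the same pipe pattern follows; this is not LangChain itself, and the `Runnable` class and the stub `prompt`/`llm` functions are illustrative.

```python
class Runnable:
    """Minimal stand-in for a composable runnable supporting the | operator."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` builds a new runnable that feeds a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[model output for: {text}]")

chain = prompt | llm   # declarative composition, as in LCEL
print(chain.invoke("cats"))  # [model output for: Tell me a joke about cats]
```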

Question 11

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

Options:

A.

Support for tokenizing longer sentences

B.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

C.

Emphasis on syntactic clustering of word embeddings

D.

Capacity to translate text in over 100 languages

Question 12

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and is typically used when no training data exists.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.

Question 13

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

Options:

A.

The model's ability to generate imaginative and creative content

B.

A technique used to enhance the model's performance on specific tasks

C.

The process by which the model visualizes and describes images in detail

D.

The phenomenon where the model generates factually incorrect information or unrelated content as if it were true

Question 14

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

Options:

A.

Document Loaders

B.

Vector Stores

C.

LangChain Application

D.

LLMs

Question 15

How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?

Options:

A.

By incorporating additional layers to the base model

B.

By allowing updates across all layers of the model

C.

By excluding transformer layers from the fine-tuning process entirely

D.

By restricting updates to only a specific group of transformer layers

Question 16

When does a chain typically interact with memory in a run within the LangChain framework?

Options:

A.

Only after the output has been generated

B.

Before user input and after chain execution

C.

After user input but before chain execution, and again after core logic but before output

D.

Continuously throughout the entire chain execution process

Question 17

An AI development company is working on an AI-assisted chatbot for a customer, which happens to be an online retail company. The goal is to create an assistant that can best answer queries regarding the company policies as well as retain the chat history throughout a session. Considering the capabilities, which type of model would be the best?

Options:

A.

A keyword search-based AI that responds based on specific keywords identified in customer queries.

B.

An LLM enhanced with Retrieval-Augmented Generation (RAG) for dynamic information retrieval and response generation.

C.

An LLM dedicated to generating text responses without external data integration.

D.

A pre-trained LLM model from Cohere or OpenAI.

Question 18

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?

Options:

A.

480 unit hours

B.

240 unit hours

C.

744 unit hours

D.

20 unit hours
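The arithmetic here follows from cluster sizing: assuming a fine-tuning dedicated AI cluster consumes 2 units per hour of activity (the sizing OCI documentation gives for fine-tuning clusters; treat the constant as an assumption to verify against current docs), the total is days x hours x units.

```python
UNITS_PER_HOUR = 2   # assumed size of a fine-tuning dedicated AI cluster
HOURS_PER_DAY = 24
days_active = 10

unit_hours = days_active * HOURS_PER_DAY * UNITS_PER_HOUR
print(unit_hours)  # 480
```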

Question 19

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A.

They are completely dissimilar

B.

They are unrelated

C.

They are similar in direction

D.

They have the same magnitude
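Cosine distance is conventionally defined as 1 minus cosine similarity, so a distance of 0 corresponds to a similarity of 1, i.e. vectors pointing in the same direction regardless of magnitude. A small sketch with illustrative vectors:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Same direction, different magnitude -> distance ~0
print(cosine_distance([1.0, 2.0], [2.0, 4.0]))
# Orthogonal vectors -> distance 1
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))
```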

Question 20

What does "Loss" measure in the evaluation of OCI Generative AI fine-tuned models?

Options:

A.

The difference between the accuracy of the model at the beginning of training and the accuracy of the deployed model

B.

The percentage of incorrect predictions made by the model compared with the total number of predictions in the evaluation

C.

The improvement in accuracy achieved by the model during training on the user-uploaded dataset

D.

The level of incorrectness in the model’s predictions, with lower values indicating better performance

Question 21

Which is NOT a built-in memory type in LangChain?

Options:

A.

ConversationImageMemory

B.

ConversationBufferMemory

C.

ConversationSummaryMemory

D.

ConversationTokenBufferMemory

Question 22

What is the primary purpose of LangSmith Tracing?

Options:

A.

To generate test cases for language models

B.

To analyze the reasoning process of language models

C.

To debug issues in language model outputs

D.

To monitor the performance of language models

Question 23

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

Options:

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost

Question 24

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?

Options:

A.

They always use an external database for generating responses.

B.

They rely on internal knowledge learned during pretraining on a large text corpus.

C.

They cannot generate responses without fine-tuning.

D.

They use vector databases exclusively to produce answers.

Question 25

How does a presence penalty function in language model generation?

Options:

A.

It penalizes all tokens equally, regardless of how often they have appeared.

B.

It penalizes only tokens that have never appeared in the text before.

C.

It applies a penalty only if the token has appeared more than twice.

D.

It penalizes a token each time it appears after the first occurrence.
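The distinction these options probe is presence penalty versus frequency penalty. In the common formulation (e.g., OpenAI-style sampling parameters), a presence penalty is a flat, one-time deduction applied to any token that has already appeared, while a frequency penalty scales with the appearance count; check the specific service's parameter docs for its exact semantics. A sketch of the flat-penalty formulation, with illustrative token names and values:

```python
def apply_presence_penalty(logits, generated_tokens, penalty=0.5):
    """Subtract a flat penalty from every token that has appeared at all,
    regardless of how many times it appeared."""
    seen = set(generated_tokens)
    return {tok: score - penalty if tok in seen else score
            for tok, score in logits.items()}

logits = {"cat": 2.0, "dog": 1.5, "fish": 1.0}
# "cat" appeared three times, "dog" once -- both get the same flat penalty
adjusted = apply_presence_penalty(logits, ["cat", "cat", "cat", "dog"])
print(adjusted)  # {'cat': 1.5, 'dog': 1.0, 'fish': 1.0}
```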

Question 26

What does the RAG Sequence model do in the context of generating a response?

Options:

A.

It retrieves a single relevant document for the entire input query and generates a response based on that alone.

B.

For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response.

C.

It retrieves relevant documents only for the initial part of the query and ignores the rest.

D.

It modifies the input query before retrieving relevant documents to ensure a diverse response.
