
Oracle 1z0-1127-24 Oracle Cloud Infrastructure 2024 Generative AI Professional Exam Practice Test

Demo: 11 questions
Total 64 questions

Oracle Cloud Infrastructure 2024 Generative AI Professional Questions and Answers

Question 1

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

Options:

A.

10 unit hours

B.

30 unit hours

C.

15 unit hours

D.

40 unit hours
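The arithmetic behind this question can be sketched as unit hours = cluster units × active hours. The unit count used below is a placeholder assumption, not the official OCI figure; consult the service's pricing documentation for the real per-cluster unit counts.

```python
# Sketch of dedicated AI cluster billing arithmetic:
# unit hours = number of cluster units x hours the cluster is active.
# The unit count below is a made-up placeholder, not the official OCI value.

def unit_hours(cluster_units: int, active_hours: int) -> int:
    """Total unit hours billed for a dedicated AI cluster."""
    return cluster_units * active_hours

# e.g. a hypothetical 2-unit cluster active for 10 hours:
print(unit_hours(2, 10))  # -> 20
```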

Question 2

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

Options:

A.

Translation models

B.

Summarization models

C.

Generation models

D.

Embedding models

Question 3

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

Options:

A.

Top p assigns penalties to frequently occurring tokens.

B.

Top p determines the maximum number of tokens per response.

C.

Top p limits token selection based on the sum of their probabilities.

D.

Top p selects tokens from the "Top k" tokens sorted by probability.

Question 4

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

Options:

A.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

B.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

C.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

D.

"Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.
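The distinction this question tests can be sketched in a few lines: "Top k" keeps a fixed number of the highest-probability tokens, while "Top p" keeps tokens until their cumulative probability reaches the threshold. The token probabilities below are made-up illustration values.

```python
def top_k_filter(probs: dict[str, float], k: int) -> dict[str, float]:
    """'Top k': keep a fixed NUMBER of highest-probability tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """'Top p': keep tokens until their CUMULATIVE probability reaches p."""
    kept, cumulative = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    return kept

# Made-up next-token distribution for illustration:
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(list(top_k_filter(probs, 3)))    # always exactly 3 tokens
print(list(top_p_filter(probs, 0.6)))  # however many tokens sum to 0.6
```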

Question 5

What does "Loss" measure in the evaluation of OCI Generative AI fine-tuned models?

Options:

A.

The difference between the accuracy of the model at the beginning of training and the accuracy of the deployed model

B.

The percentage of incorrect predictions made by the model compared with the total number of predictions in the evaluation

C.

The improvement in accuracy achieved by the model during training on the user-uploaded data set

D.

The level of incorrectness in the model's predictions, with lower values indicating better performance
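The "lower is better" intuition behind loss can be illustrated with a simple cross-entropy sketch (used here as a generic example of a loss function, not as the specific formula OCI reports):

```python
import math

def cross_entropy_loss(predicted_prob_of_correct: list[float]) -> float:
    """Average negative log-likelihood of the correct tokens.
    Lower values mean the model assigned higher probability to the
    right answers, i.e. its predictions were less incorrect."""
    n = len(predicted_prob_of_correct)
    return -sum(math.log(p) for p in predicted_prob_of_correct) / n

confident = cross_entropy_loss([0.9, 0.8, 0.95])  # good predictions
uncertain = cross_entropy_loss([0.3, 0.2, 0.4])   # poor predictions
print(confident < uncertain)  # -> True: lower loss = better model
```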

Question 6

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

Options:

A.

It controls the randomness of the model's output, affecting its creativity.

B.

It specifies a string that tells the model to stop generating more content.

C.

It assigns a penalty to frequently occurring tokens to reduce repetitive text.

D.

It determines the maximum number of tokens the model can generate per response.
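The effect of a stop sequence can be sketched as simple truncation: generation ends once the specified string appears in the output. This is an illustrative sketch, not the service's internal implementation.

```python
def apply_stop_sequence(generated: str, stop: str) -> str:
    """Truncate output at the first occurrence of the stop sequence:
    the model stops generating once the string appears."""
    index = generated.find(stop)
    return generated if index == -1 else generated[:index]

text = "Step 1: mix.\nStep 2: bake.\n###\nUnwanted trailing text"
print(apply_stop_sequence(text, "###"))  # output ends before '###'
```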

Question 7

Which is a distinguishing feature of "Parameter-Efficient Fine-tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and is typically used when no training data exists.
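The "parameter-efficient" idea can be illustrated with a toy parameter count: the base model's weights stay frozen and only a small set of new parameters is trained on labeled, task-specific data. The counts below are made-up illustration numbers.

```python
# Toy illustration of why PEFT is 'parameter-efficient'.
# Counts are made-up numbers, not figures for any real model.

base_model_params = 7_000_000_000  # frozen during PEFT
adapter_params = 4_000_000         # the only trainable parameters

trainable_fraction = adapter_params / (base_model_params + adapter_params)
print(f"{trainable_fraction:.4%} of parameters are trained")
```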

Question 8

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

Options:

A.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

B.

Capacity to translate text in over 100 languages

C.

Support for tokenizing longer sentences

D.

Emphasis on syntactic clustering of word embeddings
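The retrieval use case behind this question can be sketched with cosine similarity: a RAG system embeds the query and the documents, then retrieves the document whose vector is closest. The three-dimensional vectors below are toy values; real embedding vectors have hundreds of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for illustration only:
query = [0.9, 0.1, 0.0]
docs = {"doc_a": [0.8, 0.2, 0.1], "doc_b": [0.0, 0.1, 0.9]}

# Retrieve the most similar document to the query:
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # -> doc_a
```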

Question 9

Given the following code: chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.
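The declarative `|` composition style that `chain = prompt | llm` uses can be sketched without LangChain itself: overriding `__or__` lets components be piped together. This toy `Runnable` class is an illustration of the idea, not LangChain's actual API.

```python
# Toy sketch of declarative '|' composition (the LCEL idea).
# Illustrative only; this is not LangChain's real Runnable class.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Runnable") -> "Runnable":
        # Piping two runnables yields a new runnable that chains them.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}.")
llm = Runnable(lambda text: f"[model response to: {text}]")

chain = prompt | llm
print(chain.invoke("cats"))
```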

Question 10

Which is NOT a built-in memory type in LangChain?

Options:

A.

Conversation Token Buffer Memory

B.

Conversation Image Memory

C.

Conversation Buffer Memory

D.

Conversation Summary Memory
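The idea behind LangChain's buffer-style memory types can be sketched as storing the full message history and replaying it as context. This is a minimal illustration of the concept, not LangChain's actual memory API.

```python
# Minimal sketch of buffer-style conversation memory.
# Illustrative only; not the LangChain class of the same name.

class ConversationBufferMemory:
    def __init__(self):
        self.messages: list[tuple[str, str]] = []

    def save_context(self, user: str, ai: str) -> None:
        """Record one human/AI exchange."""
        self.messages.append(("human", user))
        self.messages.append(("ai", ai))

    def load_context(self) -> str:
        """Replay the full history as a prompt-ready transcript."""
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

memory = ConversationBufferMemory()
memory.save_context("Hi!", "Hello, how can I help?")
print(memory.load_context())
```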

Question 11

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

Options:

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost
