
Oracle 1z0-1110-23 Oracle Cloud Infrastructure Data Science 2023 Professional Exam Practice Test

Demo: 24 questions
Total 80 questions

Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 1

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

Options:

A.

Call the Accelerated Data Science (ADS) command to enable AI integration.

B.

Create and upload the API signing key and config file

C.

Import the REST API

D.

Create and upload execute.py and runtime.yaml
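
For context on the API signing key and config file mentioned in the options: once a key pair is generated and the config file is in place in the notebook session, any OCI SDK client can authenticate. A minimal sketch, assuming the config file sits at the default path:

import oci

# Load the API signing key configuration (default: ~/.oci/config).
config = oci.config.from_file()

# With the key and config in place, an AI service client can be created,
# for example the Language service client.
language_client = oci.ai_language.AIServiceLanguageClient(config)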

Question 2

During a job run, you receive an error message that no space is left on your disk device. To solve the problem, you must increase the size of the job storage. What would be the most efficient way to do this with Data Science Jobs?

Options:

A.

On the job run, set the environment variable that helps increase the size of the storage.

B.

Your code is using too much disk space. Refactor the code to identify the problem.

C.

Edit the job, change the size of the storage of your job, and start a new job run.

D.

Create a new job with increased storage size and then run the job.
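
Note that the storage size is part of the job's infrastructure definition, so changing it means editing the job (or defining a new one) and starting a new run. A minimal sketch with the ADS jobs API, assuming it runs inside a notebook session where compartment and project IDs are inferred; the shape name, storage size, and script path are placeholders:

from ads.jobs import Job, DataScienceJob, ScriptRuntime

job = (
    Job(name="training-job")
    .with_infrastructure(
        DataScienceJob()
        .with_shape_name("VM.Standard2.4")
        .with_block_storage_size(100)  # job storage size in GB
    )
    .with_runtime(ScriptRuntime().with_source("train.py"))
)
job.create()
run = job.run()  # new runs pick up the updated storage size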

Question 3

Select two reasons why it is important to rotate encryption keys when using Oracle Cloud Infrastructure (OCI) Vault to store credentials or other secrets.

Options:

A.

Key rotation allows you to encrypt no more than five keys at a time.

B.

Key rotation improves encryption efficiency.

C.

Periodically rotating keys makes it easier to reuse keys.

D.

Key rotation reduces risk if a key is ever compromised.

E.

Periodically rotating keys limits the amount of data encrypted by one key version.

Question 4

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

Options:

A.

Define the compute scaling strategy.

B.

Configure the deployment infrastructure.

C.

Define the inference server dependencies.

D.

Execute the inference logic code.
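
For context on the artifact contract: score.py is the inference script the model server loads, and it is expected to expose load_model() and predict() functions. A minimal sketch, assuming the trained model was pickled as model.pkl alongside it (the filename and payload shape are placeholders):

import os
import pickle

MODEL_FILE = "model.pkl"

def load_model():
    # Deserialize and return the trained model object from the artifact.
    model_dir = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(model_dir, MODEL_FILE), "rb") as f:
        return pickle.load(f)

def predict(data, model=load_model()):
    # Run the inference logic on the incoming payload.
    return {"prediction": model.predict(data).tolist()}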

Question 5

You are using Oracle Cloud Infrastructure (OCI) Anomaly Detection to train a model to detect anomalies in pump sensor data. How does the required False Alarm Probability setting affect an anomaly detection model?

Options:

A.

It is used to disable the reporting of false alarms.

B.

It changes the sensitivity of the model to detecting anomalies.

C.

It determines how many false alarms occur before an error message is generated.

D.

It adds a score to each signal indicating the probability that it's a false alarm.

Question 6

You are a data scientist designing an air traffic control model, and you choose to leverage Oracle AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML pipeline?

Options:

A.

Algorithm selection, Feature selection, Adaptive sampling, Hyperparameter tuning


B.

Adaptive sampling, Algorithm selection, Feature selection, Hyperparameter tuning

C.

Adaptive sampling, Feature selection, Algorithm selection, Hyperparameter tuning

D.

Algorithm selection, Adaptive sampling, Feature selection, Hyperparameter tuning

Question 7

For your next data science project, you need access to public geospatial images.

Which Oracle Cloud service provides free access to those images?

Options:

A.

Oracle Open Data

B.

Oracle Big Data Service

C.

Oracle Cloud Infrastructure Data Science

D.

Oracle Analytics Cloud

Question 8

You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?

Options:

A.

Topic classification

B.

Table extraction

C.

Sentiment analysis

D.

Sentence diagramming

E.

Punctuation correction
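
For context, sentiment analysis and text (topic) classification are among the pre-trained OCI Language capabilities. A minimal sketch of a sentiment call, assuming API-key authentication is already configured; the document key and text are placeholders:

import oci

config = oci.config.from_file()
client = oci.ai_language.AIServiceLanguageClient(config)

details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(
    documents=[
        oci.ai_language.models.TextDocument(
            key="doc-1",
            text="The new release is fast and reliable.",
            language_code="en",
        )
    ]
)
response = client.batch_detect_language_sentiments(details)
print(response.data)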

Question 9

You are attempting to save a model from a notebook session to the model catalog by using the Accelerated Data Science (ADS) SDK, with resource principal as the authentication signer, and you get a 404 authentication error. Which two should you look for to ensure permissions are set up correctly?

Options:

A.

The model artifact is saved to the block volume of the notebook session.

B.

A dynamic group has rules matching the notebook sessions in its compartment.

C.

The policy for your user group grants manage permissions for the model catalog in this compartment.

D.

The policy for a dynamic group grants manage permissions for the model catalog in its compartment.

E.

The networking configuration allows access to Oracle Cloud Infrastructure services through a Service Gateway.

Question 10

You want to use ADSTuner to tune the hyperparameters of a supported model you recently trained. You have just started your search and want to reduce the computational cost as well as assess the quality of the model class that you are using. What is the most appropriate search space strategy to choose?

Options:

A.

Detailed

B.

ADSTuner doesn't need a search space to tune the hyperparameters.

C.

Perfunctory

D.

Pass a dictionary that defines a search space
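
For context on the built-in strategies: ADSTuner ships a cheap "perfunctory" search space and a wider "detailed" one, and also accepts a dictionary. A minimal sketch with a scikit-learn estimator, assuming the ADS and scikit-learn packages are available; the trial count is a placeholder:

from ads.hpo.search_cv import ADSTuner
from ads.hpo.stopping_criterion import NTrials
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)

# "perfunctory" keeps the search space small to cut computational cost.
tuner = ADSTuner(SGDClassifier(), strategy="perfunctory", cv=3)
tuner.tune(X, y, exit_criterion=[NTrials(5)], synchronous=True)
print(tuner.best_params)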

Question 11

You have created a conda environment in your notebook session. This is the first time you are working with published conda environments. You have also created an Object Storage bucket with permission to manage the bucket. Which two commands are required to publish the conda environment?

Options:

A.

odsc conda publish --slug

B.

odsc conda list --override

C.

odsc conda init --bucket_namespace --bucket_name

D.

odsc conda create --file manifest.yaml

E.

conda activate /home/datascience/conda/

Question 12

You have received machine learning model training code, without clear information about the optimal shape to run the training. How would you proceed to identify the optimal compute shape for your model training that provides a balanced cost and processing time?

Options:

A.

Start with a random compute shape and monitor the utilization metrics and time required to finish the model training. Perform model training optimizations and performance tests in advance to identify the right compute shape before running the model training as a job.

B.

Start with a smaller shape and monitor the Job Run metrics and time required to complete the model training. If the compute shape is not fully utilized, tune the model parameters, and re-run the job. Repeat the process until the shape resources are fully utilized.

C.

Start with the strongest compute shape that Jobs supports and monitor the Job Run metrics and time required to complete the model training. Tune the model so that it utilizes as much of the compute resources as possible, even at an increased cost.

D.

Start with a smaller shape and monitor the utilization metrics and time required to complete the model training. If the compute shape is fully utilized, change to a compute shape that has more resources and re-run the job. Repeat the process until the processing time does not improve.

Question 13

The feature type TechJob has the following registered validators:

TechJob.validator.register(name='is_tech_job', handler=is_tech_job_default_handler)
TechJob.validator.register(name='is_tech_job', handler=is_tech_job_open_handler, condition=('job_family',))
TechJob.validator.register(name='is_tech_job', handler=is_tech_job_closed_handler, condition={'job_family': 'IT'})

When you run is_tech_job(job_family='Engineering'), what does the feature type validator system do?

Options:

A.

Execute the is_tech_job_default_handler handler.

B.

Throw an error because the system cannot determine which handler to run.

C.

Execute the is_tech_job_closed_handler handler.

D.

Execute the is_tech_job_open_handler handler.
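
To make the dispatch behavior concrete, here is a hypothetical plain-Python mimic (not the ADS implementation): a tuple condition matches when the keyword is merely present, a dict condition requires an exact value match, and the most specific matching handler wins.

def is_tech_job_default_handler(**kwargs):
    return "default"

def is_tech_job_open_handler(**kwargs):
    return "open"

def is_tech_job_closed_handler(**kwargs):
    return "closed"

def is_tech_job(**kwargs):
    # Most specific first: exact key/value match, then key presence.
    if kwargs.get("job_family") == "IT":
        return is_tech_job_closed_handler(**kwargs)
    if "job_family" in kwargs:
        return is_tech_job_open_handler(**kwargs)
    return is_tech_job_default_handler(**kwargs)

print(is_tech_job(job_family="Engineering"))  # prints "open"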

Question 14

You are asked to prepare data for a custom-built model that requires transcribing Spanish video recordings into a readable text format with profane words identified.

Which Oracle Cloud service would you use?

Options:

A.

OCI Translation

B.

OCI Language

C.

OCI Speech

D.

OCI Anomaly Detection

Question 15

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

Options:

A.

Create and upload score.py and runtime.yaml.

B.

Create and upload the API signing key and config file.

C.

Import the REST API.

D.

Call the ADS command to enable AI integration.

Question 16

You have a complex Python code project that could benefit from using Data Science Jobs as it is a repeatable machine learning model training task. The project contains many subfolders and classes. What is the best way to run this project as a Job?

Options:

A.

ZIP the entire code project folder and upload it as a Job artifact. Jobs automatically identifies the main top-level file where the code is run.

B.

Rewrite your code so that it is a single executable Python or Bash/Shell script file.

C.

ZIP the entire code project folder and upload it as a Job artifact on job creation. Jobs identifies the main executable file automatically.

D.

ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file.
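
For context on the JOB_RUN_ENTRYPOINT option: with a zipped multi-file project, the run needs to know which file to execute. A minimal sketch with the ADS jobs API, where the archive and entrypoint paths are placeholders; the console equivalent is setting the JOB_RUN_ENTRYPOINT environment variable directly:

from ads.jobs import Job, DataScienceJob, PythonRuntime

job = (
    Job(name="ml-training")
    .with_infrastructure(DataScienceJob())
    .with_runtime(
        PythonRuntime()
        .with_source("project.zip")          # the zipped code project
        .with_entrypoint("project/main.py")  # file the job run executes
    )
)
job.create()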

Question 17

You are asked to prepare data for a custom-built model that requires transcribing Spanish video recordings into a readable text format with profane words identified. Which Oracle Cloud service would you use?

Options:

A.

OCI Translation

B.

OCI Language

C.

OCI Anomaly Detection

D.

OCI Speech

Question 18

As a data scientist, you are trying to automate a machine learning (ML) workflow and have decided to use Oracle Cloud Infrastructure (OCI) AutoML Pipeline. Which three are part of the AutoML Pipeline?

Options:

A.

Feature Selection

B.

Adaptive Sampling

C.

Model Deployment

D.

Feature Extraction

E.

Algorithm Selection

Question 19

You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

Options:

A.

Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

B.

Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.

C.

Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.

D.

Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
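
For context on the development step itself, a minimal PySpark sketch one might run in the notebook session once the conda environment and core-site.xml are in place; the bucket, namespace, and column names are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("emr-analysis").getOrCreate()

# Read the medical records from Object Storage via the OCI HDFS
# connector configured in core-site.xml.
records = spark.read.json("oci://emr-bucket@my-namespace/records/*.json")
records.groupBy("diagnosis_code").count().show()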

Question 20

You are a data scientist working for a utilities company. You have developed an algorithm that detects anomalies from a utility reader in the grid. The size of the model artifact is about 2 GB, and you are trying to store it in the model catalog. Which three interfaces could you use to save the model artifact into the model catalog?

Options:

A.

Git CLI

B.

Oracle Cloud Infrastructure (OCI) Command Line Interface (CLI)

C.

Accelerated Data Science (ADS) Software Development Kit (SDK)

D.

ODSC CLI

E.

Console

F.

OCI Python SDK
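
For context, a sketch of one of these interfaces, the ADS SDK, saving a scikit-learn anomaly detector to the catalog; the estimator, conda slug, and display name are placeholders, and resource-principal authentication is assumed:

import ads
import numpy as np
from ads.model.framework.sklearn_model import SklearnModel
from sklearn.ensemble import IsolationForest

ads.set_auth(auth="resource_principal")

# Placeholder anomaly detector standing in for the utility-grid model.
estimator = IsolationForest().fit(np.random.rand(200, 4))

model = SklearnModel(estimator=estimator, artifact_dir="./artifact")
model.prepare(inference_conda_env="generalml_p38_cpu_v1", force_overwrite=True)
model.save(display_name="grid-anomaly-detector")  # stores the artifact in the catalog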

Question 21

You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science interface would you use?

Options:

A.

The OCI Software Development Kit (SDK)

B.

OCI Console

C.

Command line interface (CLI)

D.

Mobile App
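
For context, a minimal sketch of such a script with the OCI Python SDK; the compartment OCID and project names are placeholders:

import oci

config = oci.config.from_file()
client = oci.data_science.DataScienceClient(config)

# Create one Data Science project per team workstream.
for name in ["churn-model", "demand-forecast", "nlp-experiments"]:
    details = oci.data_science.models.CreateProjectDetails(
        compartment_id="ocid1.compartment.oc1..example",
        display_name=name,
    )
    project = client.create_project(create_project_details=details)
    print(project.data.id)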

Question 22

Which Oracle Cloud Infrastructure (OCI) service should you use to create and run Spark applications using ADS?

Options:

A.

Data Integration

B.

Vault

C.

Data Flow

D.

Analytics Cloud

Question 23

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

Options:

A.

show_corr()

B.

to_xgb()

C.

compute()

D.

show_in_notebook()
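
For context, a minimal sketch of loading a file into an ADSDataset and rendering its summary; DatasetFactory is one (legacy) ADS loader, and the filename is a placeholder:

from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("new_data.csv")
ds.show_in_notebook()  # feature types, observation counts, and distributions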

Question 24

You want to build a multistep machine learning workflow by using the Oracle Cloud Infrastructure (OCI) Data Science Pipeline feature. How would you configure the conda environment to run a pipeline step?

Options:

A.

Configure a compute shape.

B.

Configure a block volume.

C.

Use command-line variables.

D.

Use environment variables.
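
For context, the conda environment for a step is commonly selected through environment variables. A hypothetical illustration of the pair of variables involved (the slug value is a placeholder):

step_env = {
    "CONDA_ENV_TYPE": "service",               # "service" or "published"
    "CONDA_ENV_SLUG": "generalml_p38_cpu_v1",  # slug of the conda environment
}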
