
Question No.11

You have an on-premises MySQL database that is 800 GB in size.

You need to migrate the database to Azure Database for MySQL. You must minimize service interruption to live sites or applications that use the database.

What should you recommend?

  A. Azure Database Migration Service

  B. Dump and restore

  C. Import and export

  D. MySQL Workbench

Correct Answer: A

Explanation:

You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using the newly introduced continuous sync capability for the Azure Database Migration Service (DMS). This functionality limits the amount of downtime that is incurred by the application.

References:

https://docs.microsoft.com/en-us/azure/mysql/howto-migrate-online

Question No.12

You are designing a solution for a company. The solution will use model training for objective classification.

You need to design the solution. What should you recommend?

  A. an Azure Cognitive Services application

  B. a Spark Streaming job

  C. interactive Spark queries

  D. Power BI models

  E. a Spark application that uses Spark MLlib

Correct Answer: E

Explanation:

Spark in SQL Server big data cluster enables AI and machine learning.

You can use Apache Spark MLlib to create a machine learning application to do simple predictive analysis on an open dataset.

MLlib is a core Spark library that provides many utilities useful for machine learning tasks, including utilities that are suitable for:

- Classification
- Regression
- Clustering
- Topic modeling
- Singular value decomposition (SVD) and principal component analysis (PCA)
- Hypothesis testing and calculating sample statistics
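A minimal sketch of the classification workflow behind option E, using the DataFrame-based pyspark.ml API (the modern surface of MLlib). The storage path, column names, and model choice are placeholders for illustration, not part of the exam scenario.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Hypothetical input: a CSV with two numeric feature columns and a string label.
spark = SparkSession.builder.appName("mllib-classification-sketch").getOrCreate()
df = spark.read.csv("wasbs://data@account.blob.core.windows.net/training.csv",
                    header=True, inferSchema=True)

# Index the string label and assemble the feature columns into a single vector.
indexer = StringIndexer(inputCol="category", outputCol="label")
assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.01)

# Train on 80% of the data and score the remaining 20%.
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[indexer, assembler, lr]).fit(train)
model.transform(test).select("category", "label", "prediction").show(10)
```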

References:

https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-machine-learning-mllib-ipython

Question No.13

A company stores data in multiple types of cloud-based databases.

You need to design a solution to consolidate data into a single relational database. Ingestion of data will occur at set times each day.

What should you recommend?

  A. SQL Server Migration Assistant

  B. SQL Data Sync

  C. Azure Data Factory

  D. Azure Database Migration Service

  E. Data Migration Assistant

Correct Answer: C
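Azure Data Factory fits the "ingestion at set times each day" requirement because a pipeline can be bound to a schedule trigger. A rough sketch of such a trigger, expressed here as a Python dict mirroring the ADF JSON structure; the trigger name, pipeline name, and times are illustrative assumptions, not part of the question.

```python
# Illustrative only: the shape of an ADF schedule trigger that runs a
# consolidation pipeline once per day at 02:00 UTC. All names are hypothetical.
daily_trigger = {
    "name": "DailyConsolidationTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2019-01-01T02:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {"pipelineReference": {
                "referenceName": "ConsolidateCloudSourcesPipeline",
                "type": "PipelineReference"}}
        ]
    }
}
```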

Question No.14

HOTSPOT

You design data engineering solutions for a company.

You must integrate on-premises SQL Server data into an Azure solution that performs Extract-Transform-Load (ETL) operations. The solution has the following requirements:

- Develop a pipeline that can integrate data and run notebooks.
- Develop notebooks to transform the data.
- Load the data into a massively parallel processing database for later analysis.

You need to recommend a solution.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer area: (image not available in this extract)

Correct Answer: (image not available in this extract)

Question No.15

You are designing an Azure Data Factory pipeline for processing data. The pipeline will process data that is stored in general-purpose standard Azure storage.

You need to ensure that the compute environment is created on-demand and removed when the process is completed.

Which type of activity should you recommend?

  A. Databricks Python activity

  B. Data Lake Analytics U-SQL activity

  C. HDInsight Pig activity

  D. Databricks Jar activity

Correct Answer: C

Explanation:

The HDInsight Pig activity in a Data Factory pipeline executes Pig queries on your own or on-demand HDInsight cluster.
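A rough sketch of what that looks like in a pipeline definition, written here as a Python dict mirroring the ADF JSON; the linked service names and script path are placeholders. The key point is that the activity references a linked service of type HDInsightOnDemand, so the cluster is created for the run and deleted when processing completes.

```python
# Illustrative only: an HDInsight Pig activity bound to an on-demand
# HDInsight linked service. All names and paths are hypothetical.
pig_activity = {
    "name": "TransformWithPig",
    "type": "HDInsightPig",
    "linkedServiceName": {
        # Points at a linked service of type "HDInsightOnDemand".
        "referenceName": "OnDemandHDInsightLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scriptPath": "scripts/transform.pig",
        "scriptLinkedService": {
            # The general-purpose storage account holding the Pig script.
            "referenceName": "GeneralPurposeStorageLinkedService",
            "type": "LinkedServiceReference"
        }
    }
}
```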

References:

https://docs.microsoft.com/en-us/azure/data-factory/transform-data-using-hadoop-pig

Question No.16

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

A company is developing a solution to manage inventory data for a group of automotive repair shops. The solution will use Azure SQL Data Warehouse as the data store.

Shops will upload data every 10 days.

Data corruption checks must run each time data is uploaded. If corruption is detected, the corrupted data must be removed.

You need to ensure that upload processes and data corruption checks do not impact reporting and analytics processes that use the data warehouse.

Proposed solution: Insert data from shops and perform the data corruption check in a transaction. Roll back the transfer if corruption is detected.

Does the solution meet the goal?

  A. Yes

  B. No

Correct Answer: B

Explanation:

Instead, create a user-defined restore point before data is uploaded. Delete the restore point after data corruption checks complete.
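A sketch of that recommended upload flow is below. The helper functions are hypothetical stubs standing in for the real restore-point, upload, and validation steps (which would be done through the warehouse's management interface); they are not real SDK calls.

```python
# Illustrative control flow only. Every helper here is a hypothetical stub.
def create_restore_point(label):
    print(f"create user-defined restore point: {label}")
    return label

def delete_restore_point(restore_point):
    print(f"delete restore point: {restore_point}")

def upload_shop_data(files):
    print(f"load {len(files)} shop files into staging tables")

def corruption_detected():
    return False  # stand-in for the real data corruption checks

def restore_warehouse_to(restore_point):
    print(f"restore warehouse to: {restore_point}")

def ingest_shop_upload(shop_files):
    # Take a user-defined restore point before touching the warehouse, so a bad
    # upload can be reverted without wrapping the load in a long-running
    # transaction that would block reporting and analytics queries.
    restore_point = create_restore_point(label="pre-upload")
    try:
        upload_shop_data(shop_files)
        if corruption_detected():
            restore_warehouse_to(restore_point)
    finally:
        # Remove the restore point once the checks (or the revert) finish.
        delete_restore_point(restore_point)

ingest_shop_upload(["shop1.csv", "shop2.csv"])
```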

References:

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/backup-and-restore

Question No.17

You need to design the unauthorized data usage detection system. What Azure service should you include in the design?

  A. Azure Databricks

  B. Azure SQL Data Warehouse

  C. Azure Analysis Services

  D. Azure Data Factory

Correct Answer: B

Question No.18

HOTSPOT

You are designing a solution for a company. You plan to use Azure Databricks. You need to recommend workloads and tiers to meet the following requirements:

- Provide managed clusters for running production jobs.
- Provide persistent clusters that support auto-scaling for analytics processes.
- Provide role-based access control (RBAC) support for Notebooks.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer area: (image not available in this extract)

Correct Answer: (image not available in this extract)

Question No.19

You are designing a data processing solution that will implement the lambda architecture pattern. The solution will use Spark running on HDInsight for data processing.

You need to recommend a data storage technology for the solution.

Which two technologies should you recommend? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

  A. Azure Cosmos DB

  B. Azure Service Bus

  C. Azure Storage Queue

  D. Apache Cassandra

  E. Kafka HDInsight

Correct Answer: A, E

Explanation:

To implement a lambda architecture on Azure, you can combine the following technologies to accelerate real-time big data analytics:

- Azure Cosmos DB, the industry's first globally distributed, multi-model database service
- Apache Spark for Azure HDInsight, a processing framework that runs large-scale data analytics applications
- Azure Cosmos DB change feed, which streams new data to the batch layer for HDInsight to process
- The Spark to Azure Cosmos DB Connector

E: You can use Apache Spark to stream data into or out of Apache Kafka on HDInsight using DStreams.
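A minimal sketch of the Cosmos DB side of this pattern, assuming the Azure Cosmos DB Spark 3 OLTP connector (com.azure.cosmos.spark) is installed on the Spark cluster; the account, database, and container names are placeholders, and option keys may differ across connector versions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lambda-architecture-sketch").getOrCreate()

# Connection settings for the Cosmos DB Spark 3 OLTP connector.
# Account, key, database, and container names are placeholders.
cosmos_config = {
    "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<account-key>",
    "spark.cosmos.database": "telemetry",
    "spark.cosmos.container": "events",
}

# Speed layer: stream the container's change feed so new documents reach
# Spark shortly after they are written to Cosmos DB.
change_feed = (
    spark.readStream
    .format("cosmos.oltp.changeFeed")
    .options(**cosmos_config)
    .option("spark.cosmos.changeFeed.startFrom", "Beginning")
    .load()
)

# Batch layer: read the full container when recomputing batch views.
master_data = spark.read.format("cosmos.oltp").options(**cosmos_config).load()
```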

References:

https://docs.microsoft.com/en-us/azure/cosmos-db/lambda-architecture

Question No.20
