Latest Microsoft DP-203 Exam questions and answers [Q111-Q126]
Actualtests4sure DP-203 Exam Practice Test Questions (Updated 242 Questions)

NO.111 You have two fact tables named Flight and Weather. Queries targeting the tables will be based on the join between the following columns.
You need to recommend a solution that maximizes query performance.
What should you include in the recommendation?
In each table, create a column as a composite of the other two columns in the table.
In each table, create an IDENTITY column.
In the tables, use a hash distribution of ArriveDateTime and ReportDateTime.
In the tables, use a hash distribution of ArriveAirportID and AirportID.

NO.112 You are building an Azure Stream Analytics job to identify how much time a user spends interacting with a feature on a webpage.
The job receives events based on user actions on the webpage. Each row of data represents an event. Each event has a type of either 'start' or 'end'.
You need to calculate the duration between start and end events.
How should you complete the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-stream-analytics-query-patterns
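The answer-area boxes for NO.112 are not reproduced in this export, but the referenced query-patterns article pairs each 'end' event with the most recent 'start' event for the same user. A minimal sketch of that pattern, assuming hypothetical names for the input and its columns (input, UserId, Feature, Event, Time):

    -- Duration between a 'start' event and the matching 'end' event
    SELECT
        UserId,
        Feature,
        DATEDIFF(
            second,
            LAST(Time) OVER (PARTITION BY UserId, Feature
                             LIMIT DURATION(hour, 1)
                             WHEN Event = 'start'),
            Time) AS DurationSeconds
    FROM input TIMESTAMP BY Time
    WHERE Event = 'end'

The LAST call looks back up to one hour within the same user/feature partition and returns the timestamp of the latest 'start' event, so DATEDIFF yields the interaction duration in seconds.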
NO.113 You are implementing an Azure Stream Analytics solution to process event data from devices.
The devices output events when there is a fault and emit a repeat of the event every five seconds until the fault is resolved. The devices output a heartbeat event every five seconds after a previous event if there are no faults present.
A sample of the events is shown in the following table.
You need to calculate the uptime between the faults.
How should you complete the Stream Analytics SQL query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: WHERE EventType='HeartBeat'
Box 2: ,TumblingWindow(Second, 5)
Tumbling windows are a series of fixed-sized, non-overlapping and contiguous time intervals. The diagram in the referenced article illustrates a stream with a series of events and how they are mapped into 10-second tumbling windows.
Reference:
https://docs.microsoft.com/en-us/stream-analytics-query/session-window-azure-stream-analytics
https://docs.microsoft.com/en-us/stream-analytics-query/tumbling-window-azure-stream-analytics

NO.114 You have an Azure subscription that contains the following resources:
* An Azure Active Directory (Azure AD) tenant that contains a security group named Group1.
* An Azure Synapse Analytics SQL pool named Pool1.
You need to control the access of Group1 to specific columns and rows in a table in Pool1.
Which Transact-SQL commands should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
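For NO.114, column-level access is typically granted with GRANT SELECT on a column list, and row-level access with a filter predicate plus CREATE SECURITY POLICY. A minimal sketch, assuming a hypothetical dbo.Sales table with OrderID and Region columns; the predicate logic is illustrative only:

    -- Column-level security: Group1 may read only the listed columns (table and columns are assumptions)
    GRANT SELECT ON dbo.Sales (OrderID, Region) TO [Group1];

    -- Row-level security: inline table-valued function used as a filter predicate
    CREATE FUNCTION dbo.fn_RowFilter (@Region AS varchar(20))
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
        RETURN SELECT 1 AS fn_result
               WHERE @Region = 'West' AND IS_MEMBER('Group1') = 1;  -- illustrative predicate

    CREATE SECURITY POLICY SalesFilter
        ADD FILTER PREDICATE dbo.fn_RowFilter(Region) ON dbo.Sales
        WITH (STATE = ON);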
NO.115 You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.
You need to configure workspace1 to support autoscaling all-purpose clusters. The solution must meet the following requirements:
* Automatically scale down workers when the cluster is underutilized for three minutes.
* Minimize the time it takes to scale to the maximum number of workers.
* Minimize costs.
What should you do first?
Enable container services for workspace1.
Upgrade workspace1 to the Premium pricing tier.
Set Cluster Mode to High Concurrency.
Create a cluster policy in workspace1.
For clusters running Databricks Runtime 6.4 and above, optimized autoscaling is used by all-purpose clusters in the Premium plan.
Optimized autoscaling:
* Scales up from min to max in 2 steps.
* Can scale down even if the cluster is not idle by looking at shuffle file state.
* Scales down based on a percentage of current nodes.
* On job clusters, scales down if the cluster is underutilized over the last 40 seconds.
* On all-purpose clusters, scales down if the cluster is underutilized over the last 150 seconds.
The spark.databricks.aggressiveWindowDownS Spark configuration property specifies in seconds how often a cluster makes down-scaling decisions. Increasing the value causes a cluster to scale down more slowly. The maximum value is 600.
Note: Standard autoscaling:
* Starts by adding 8 nodes. Thereafter, scales up exponentially, but can take many steps to reach the max. You can customize the first step by setting the spark.databricks.autoscaling.standardFirstStepUp Spark configuration property.
* Scales down only when the cluster is completely idle and it has been underutilized for the last 10 minutes.
* Scales down exponentially, starting with 1 node.
Reference:
https://docs.databricks.com/clusters/configure.html

NO.116 You have an Azure event hub named retailhub that has 16 partitions. Transactions are posted to retailhub. Each transaction includes the transaction ID, the individual line items, and the payment details. The transaction ID is used as the partition key.
You are designing an Azure Stream Analytics job to identify potentially fraudulent transactions at a retail store. The job will use retailhub as the input. The job will output the transaction ID, the individual line items, the payment details, a fraud score, and a fraud indicator.
You plan to send the output to an Azure event hub named fraudhub.
You need to ensure that the fraud detection solution is highly scalable and processes transactions as quickly as possible.
How should you structure the output of the Stream Analytics job? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: 16
For Event Hubs you need to set the partition key explicitly. An embarrassingly parallel job is the most scalable scenario in Azure Stream Analytics. It connects one partition of the input to one instance of the query to one partition of the output.
Box 2: Transaction ID
Reference:
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#partitions

NO.117 You are designing an Azure Stream Analytics job to process incoming events from sensors in retail environments.
You need to process the events to produce a running average of shopper counts during the previous 15 minutes, calculated at five-minute intervals.
Which type of window should you use?
snapshot
tumbling
hopping
sliding
Explanation
Unlike tumbling windows, hopping windows model scheduled overlapping windows: they hop forward in time by a fixed period, so an event can belong to more than one window. A 15-minute window that hops forward every five minutes produces the required running average at five-minute intervals.
Reference:
https://docs.microsoft.com/en-us/stream-analytics-query/hopping-window-azure-stream-analytics
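A minimal sketch of the hopping-window aggregation for NO.117, assuming a hypothetical input stream with a ShopperCount column and an EntryTime timestamp:

    -- 15-minute window that hops forward every 5 minutes, so successive windows overlap
    SELECT
        System.Timestamp() AS WindowEnd,
        AVG(ShopperCount) AS AvgShopperCount
    FROM input TIMESTAMP BY EntryTime
    GROUP BY HoppingWindow(minute, 15, 5)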
NO.118 You are designing a statistical analysis solution that will use custom proprietary Python functions on near real-time data from Azure Event Hubs.
You need to recommend which Azure service to use to perform the statistical analysis. The solution must minimize latency.
What should you recommend?
Azure Stream Analytics
Azure SQL Database
Azure Databricks
Azure Synapse Analytics

NO.119 You have an enterprise data warehouse in Azure Synapse Analytics named DW1 on a server named Server1.
You need to determine the size of the transaction log file for each distribution of DW1.
What should you do?
On DW1, execute a query against the sys.database_files dynamic management view.
From Azure Monitor in the Azure portal, execute a query against the logs of DW1.
Execute a query against the logs of DW1 by using the Get-AzOperationalInsightsSearchResult PowerShell cmdlet.
On the master database, execute a query against the sys.dm_pdw_nodes_os_performance_counters dynamic management view.
For information about the current log file size, its maximum size, and the autogrow option for the file, you can also use the size, max_size, and growth columns for that log file in sys.database_files.
Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file

NO.120 You have an Azure Data Factory instance that contains two pipelines named Pipeline1 and Pipeline2.
Pipeline1 has the activities shown in the following exhibit.
Pipeline2 has the activities shown in the following exhibit.
You execute Pipeline2, and Stored procedure1 in Pipeline1 fails.
What is the status of the pipeline runs?
Pipeline1 and Pipeline2 succeeded.
Pipeline1 and Pipeline2 failed.
Pipeline1 succeeded and Pipeline2 failed.
Pipeline1 failed and Pipeline2 succeeded.
Explanation
Activities are linked together via dependencies. A dependency has a condition of one of the following: Succeeded, Failed, Skipped, or Completed.
Consider Pipeline1: if a pipeline has two activities where Activity2 has a failure dependency on Activity1, the pipeline will not fail just because Activity1 failed. If Activity1 fails and Activity2 succeeds, the pipeline will succeed. Data Factory treats this scenario as a try-catch block, so the failure dependency means the pipeline reports success.
Note: If a pipeline contains Activity1 and Activity2, and Activity2 has a success dependency on Activity1, Activity2 will only execute if Activity1 is successful. In this scenario, if Activity1 fails, the pipeline will fail.
Reference:
https://datasavvy.me/category/azure-data-factory/

NO.121 You are building an Azure Stream Analytics job that queries reference data from a product catalog file. The file is updated daily.
The reference data input details for the file are shown in the Input exhibit. (Click the Input tab.)
The storage account container view is shown in the Refdata exhibit. (Click the Refdata tab.)
You need to configure the Stream Analytics job to pick up the new reference data.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-use-reference-data

NO.122 You are designing an inventory updates table in an Azure Synapse Analytics dedicated SQL pool. The table will have a clustered columnstore index and will include the following columns:
* EventDate: 1 million per day
* EventTypeID: 10 million per event type
* WarehouseID: 100 million per warehouse
* ProductCategoryTypeID: 25 million per product category type
You identify the following usage patterns:
* Analysts will most commonly analyze transactions for a warehouse.
* Queries will summarize by product category type, date, and/or inventory event type.
You need to recommend a partition strategy for the table to minimize query times.
On which column should you recommend partitioning the table?
ProductCategoryTypeID
EventDate
WarehouseID
EventTypeID
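A minimal DDL sketch for NO.122, assuming the table is partitioned on EventDate (a date column is the usual partitioning choice for a columnstore fact table); the distribution column, data types, and boundary values are illustrative assumptions:

    CREATE TABLE dbo.InventoryUpdates
    (
        EventDate             date NOT NULL,
        EventTypeID           int  NOT NULL,
        WarehouseID           int  NOT NULL,
        ProductCategoryTypeID int  NOT NULL
    )
    WITH
    (
        CLUSTERED COLUMNSTORE INDEX,
        DISTRIBUTION = HASH (WarehouseID),  -- assumption: hash on the most common analysis key
        PARTITION (EventDate RANGE RIGHT FOR VALUES
            ('2022-01-01', '2022-02-01', '2022-03-01'))  -- illustrative monthly boundaries
    );

Keeping the partition count modest matters with a clustered columnstore index, because each partition needs enough rows to form full rowgroups.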
NO.123 You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.
Does this meet the goal?
Yes
No

NO.124 You have an Azure data factory.
You need to examine the pipeline failures from the last 180 days.
What should you use?
the Activity log blade for the Data Factory resource
Azure Data Factory activity runs in Azure Monitor
Pipeline runs in the Azure Data Factory user experience
the Resource health blade for the Data Factory resource
Explanation
Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you want to keep that data for a longer time.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor

NO.125 You need to design a data storage structure for the product sales transactions. The solution must meet the sales transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

NO.126 You have the following table named Employees.
You need to calculate the employee_type value based on the hire_date value.
How should you complete the Transact-SQL statement? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: CASE
CASE evaluates a list of conditions and returns one of multiple possible result expressions. CASE can be used in any statement or clause that allows a valid expression. For example, you can use CASE in statements such as SELECT, UPDATE, DELETE and SET, and in clauses such as select_list, IN, WHERE, ORDER BY, and HAVING.
Syntax: Simple CASE expression:
CASE input_expression
    WHEN when_expression THEN result_expression [ ...n ]
    [ ELSE else_result_expression ]
END
Box 2: ELSE
Reference:
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/case-transact-sql

Skills measured
* Monitor and optimize data storage and data processing (10-15%)
* Design and implement data security (10-15%)
* Design and implement data storage (40-45%)
* Design and develop data processing (25-30%)

Pass Your Microsoft Exam with DP-203 Exam Dumps: https://www.actualtests4sure.com/DP-203-test-questions.html