Updated Mar-2023 Official licence for DP-420 Certified by DP-420 Dumps PDF [Q23-Q42]

Grab the latest Microsoft DP-420 dumps as PDF, updated in 2023.

Q23. You need to identify which connectivity mode to use when implementing App2. The solution must support the planned changes and meet the business requirements. Which connectivity mode should you identify?

A. Direct mode over HTTPS
B. Gateway mode (using HTTPS)
C. Direct mode over TCP

Scenario: Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.
By using Azure Private Link, you can connect to an Azure Cosmos DB account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network. When you use Private Link with an Azure Cosmos DB account through a direct mode connection, you can use only the TCP protocol; the HTTP protocol is not currently supported.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints
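The connectivity mode is chosen when the SDK client is constructed. Below is a minimal sketch using the .NET SDK (Microsoft.Azure.Cosmos); the endpoint and key values are placeholders, not from the scenario. Gateway mode routes every request through the account's single HTTPS endpoint, while Direct mode opens TCP connections to individual backend partitions.

    using Microsoft.Azure.Cosmos;

    // Gateway mode: all traffic flows through one DNS endpoint over HTTPS,
    // which is what a "single DNS endpoint" requirement implies.
    CosmosClient client = new CosmosClient(
        "https://<account-name>.documents.azure.com:443/", // placeholder endpoint
        "<account-key>",                                   // placeholder key
        new CosmosClientOptions
        {
            ConnectionMode = ConnectionMode.Gateway // ConnectionMode.Direct would use TCP
        });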
Q24. You are creating a database in an Azure Cosmos DB Core (SQL) API account. The database will be used by an application that will provide users with the ability to share online posts. Users will also be able to submit comments on other users' posts. You need to store the data shown in the following table.

The application has the following characteristics:
- Users can submit an unlimited number of posts.
- The average number of posts submitted by a user will be more than 1,000.
- Posts can have an unlimited number of comments from different users.
- The average number of comments per post will be 100, but many posts will exceed 1,000 comments.
- Users will be limited to having a maximum of 20 interests.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Q25. You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a compact binary format. Which three configuration items should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
C. "key.converter": "io.confluent.connect.avro.AvroConverter"
D. "connect.cosmos.containers.topicmap": "iot#telemetry"
E. "connect.cosmos.containers.topicmap": "iot"
F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"

A: Kafka Connect for Azure Cosmos DB is a connector to read data from and write data to Azure Cosmos DB. Because this scenario moves data from Azure Cosmos DB into a Kafka topic, the source connector (CosmosDBSourceConnector) is required; the sink connector works in the opposite direction, exporting data from Kafka topics into Azure Cosmos DB containers.
C: Avro is a binary format, while JSON is text.
D: The topic map pairs each topic with a container in the form topic#container, so "iot#telemetry" maps the telemetry container to the iot topic.
Incorrect answers:
B: JSON is plain text, not a compact binary format.
Note, a full example from the sink-connector documentation (it uses the JSON text converter; this question's binary requirement calls for the Avro converter instead):

    {
      "name": "cosmosdb-sink-connector",
      "config": {
        "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
        "tasks.max": "1",
        "topics": ["hotels"],
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": "false",
        "connect.cosmos.connection.endpoint": "<cosmosdbaccountendpoint>",
        "connect.cosmos.master.key": "<cosmosdbprimarykey>",
        "connect.cosmos.databasename": "kafkaconnect",
        "connect.cosmos.containers.topicmap": "hotels#kafka"
      }
    }

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink
https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
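For this question's scenario specifically, a source-connector configuration might look like the following sketch. The connector name, endpoint, key, database name, and Schema Registry URL are placeholders (JSON does not allow comments, so they are marked with the same <angle-bracket> convention used above); the Avro converters assume a Confluent Schema Registry is available.

    {
      "name": "cosmosdb-source-connector",
      "config": {
        "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
        "tasks.max": "1",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "<schemaregistryurl>",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "<schemaregistryurl>",
        "connect.cosmos.connection.endpoint": "<cosmosdbaccountendpoint>",
        "connect.cosmos.master.key": "<cosmosdbprimarykey>",
        "connect.cosmos.databasename": "<databasename>",
        "connect.cosmos.containers.topicmap": "iot#telemetry"
      }
    }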
Q26. You plan to deploy two Azure Cosmos DB Core (SQL) API accounts that will each contain a single database. The accounts will be configured as shown in the following table. How should you provision the containers within each account to minimize costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/serverless
https://docs.microsoft.com/en-us/azure/cosmos-db/provision-throughput-autoscale#use-cases-of-autoscale

Q27. You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. The container1 container has 120 GB of data. The following is a sample of a document in container1. The orderId property is used as the partition key. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Q28. You configure multi-region writes for account1. You need to ensure that App1 supports the new configuration for account1. The solution must meet the business requirements and the product catalog requirements. What should you do?

A. Set the default consistency level of account1 to bounded staleness.
B. Create a private endpoint connection.
C. Modify the connection policy of App1.
D. Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor containers.

App1 queries the con-product and con-productVendor containers.
Note: A request unit is a performance currency abstracting the system resources, such as CPU, IOPS, and memory, that are required to perform the database operations supported by Azure Cosmos DB.
Scenario:
- Develop an app named App1 that will run from all locations and query the data in account1.
- Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.
- Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

Q29. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Does this meet the goal?

A. Yes
B. No

Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output. The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.

Q30. The settings for a container in an Azure Cosmos DB Core (SQL) API account are configured as shown in the following exhibit. Which statement describes the configuration of the container?

A. All items will be deleted after one year.
B. Items stored in the collection will be retained always, regardless of the items' time to live value.
C. Items stored in the collection will expire only if the item has a time to live value.
D. All items will be deleted after one hour.

When DefaultTimeToLive is -1, the Time to Live setting is On (No default). Time to Live on a container, if present and set to -1, is equal to infinity, and items don't expire by default. Time to Live on an item applies only if DefaultTimeToLive is present and not set to null on the parent container; if present, it overrides the DefaultTimeToLive value of the parent container.

Q31. You provision Azure resources by using the following Azure Resource Manager (ARM) template. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Q32. You have a database in an Azure Cosmos DB Core (SQL) API account. You need to create an Azure function that will access the database to retrieve records based on a variable named accountnumber. The solution must protect against SQL injection attacks. How should you define the command statement in the function?

A. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = 'accountnumber'"
B. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = LIKE @accountnumber"
C. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber"
D. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = '" + accountnumber + "'"

Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation. Parameterized SQL provides robust handling and escaping of user input and prevents accidental exposure of data through SQL injection. For example, you can write a query that takes lastName and address.state as parameters, and execute it for various values of lastName and address.state based on user input.

    SELECT *
    FROM Families f
    WHERE f.lastName = @lastName AND f.address.state = @addressState
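In the .NET SDK, a parameterized query like option C is issued through QueryDefinition. The following is a minimal sketch, assuming a Container instance and an accountnumber input are passed in; the Persons alias comes from the question, everything else is illustrative.

    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    public static class AccountQueries
    {
        // Retrieves persons by account number with a bound parameter, so user
        // input is never concatenated into the query text.
        public static async Task QueryByAccountNumberAsync(Container container, string accountnumber)
        {
            QueryDefinition query = new QueryDefinition(
                    "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber")
                .WithParameter("@accountnumber", accountnumber);

            using FeedIterator<dynamic> feed = container.GetItemQueryIterator<dynamic>(query);
            while (feed.HasMoreResults)
            {
                FeedResponse<dynamic> page = await feed.ReadNextAsync();
                foreach (var item in page)
                {
                    // process each matching document
                }
            }
        }
    }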
Q33. You have a container in an Azure Cosmos DB Core (SQL) API account. You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency. What should you include in the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.itemrequestoptions
https://cosmosdb.github.io/labs/dotnet/labs/10-concurrency-control.html
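Optimistic concurrency in the .NET SDK rests on the document's ETag: read the item, then pass its ETag back through ItemRequestOptions.IfMatchEtag when replacing it. A minimal sketch follows, with an illustrative document type and placeholder id and partition key values.

    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    public class Item                 // illustrative document shape
    {
        public string id { get; set; }
        public string status { get; set; }
    }

    public static class ConcurrencyDemo
    {
        // Replaces a document only if it has not changed since it was read.
        public static async Task ReplaceWithOptimisticConcurrencyAsync(Container container)
        {
            // Read the current document; the response carries its ETag.
            ItemResponse<Item> read = await container.ReadItemAsync<Item>(
                "<item-id>", new PartitionKey("<partition-key-value>"));
            Item doc = read.Resource;
            doc.status = "updated";

            // IfMatchEtag makes the replace conditional: if another writer
            // changed the document after the read, the call fails with
            // HTTP 412 (precondition failed) and the read-modify-replace
            // cycle can be retried.
            await container.ReplaceItemAsync(
                doc, "<item-id>", new PartitionKey("<partition-key-value>"),
                new ItemRequestOptions { IfMatchEtag = read.ETag });
        }
    }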
Q34. You have a container in an Azure Cosmos DB Core (SQL) API account. The container stores telemetry data from IoT devices. The container uses telemetryId as the partition key and has a throughput of 1,000 request units per second (RU/s). Approximately 5,000 IoT devices submit data every five minutes by using the same telemetryId value. You have an application that performs analytics on the data and frequently reads telemetry data for a single IoT device to perform trend analysis. The following is a sample of a document in the container. You need to reduce the amount of request units (RUs) consumed by the analytics application. What should you do?

A. Decrease the offerThroughput value for the container.
B. Increase the offerThroughput value for the container.
C. Move the data to a new container that has a partition key of deviceId.
D. Move the data to a new container that uses a partition key of date.

The partition key determines how Cosmos DB routes data to the various partitions and needs to make sense in the context of your specific scenario. The IoT device ID is generally the "natural" partition key for IoT applications.

Q35. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput. You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure the function to have an Azure Cosmos DB trigger.
Does this meet the goal?

A. Yes
B. No

Instead, configure an Azure Monitor alert to trigger the function. You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.

Q36. You need to provide a solution for the Azure Functions notifications following updates to con-product. The solution must meet the business requirements and the product catalog requirements. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Configure the trigger for each function to use a different leaseCollectionPrefix.
B. Configure the trigger for each function to use the same leaseCollectionName.
C. Configure the trigger for each function to use a different leaseCollectionName.
D. Configure the trigger for each function to use the same leaseCollectionPrefix.

leaseCollectionPrefix: when set, the value is added as a prefix to the leases created in the lease collection for this function. Using a prefix allows two separate Azure Functions to share the same lease collection by using different prefixes.
Scenario: Use Azure Functions to send notifications about product updates to different recipients. Trigger the execution of two Azure functions following every update to any document in the con-product container.
Reference: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger

Q37. You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. The following is a sample of a document in container1.

    {
      "studentId": "631282",
      "firstName": "James",
      "lastName": "Smith",
      "enrollmentYear": 1990,
      "isActivelyEnrolled": true,
      "address": {
        "street": "",
        "city": "",
        "stateProvince": "",
        "postal": ""
      }
    }

The container1 container has the following indexing policy.

    {
      "indexingMode": "consistent",
      "includedPaths": [
        { "path": "/*" },
        { "path": "/address/city/?" }
      ],
      "excludedPaths": [
        { "path": "/address/*" },
        { "path": "/firstName/?" }
      ]
    }

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Q38. You are troubleshooting the current issues caused by the application updates. Which action can address the application updates issue without affecting the functionality of the application?

A. Enable time to live for the con-product container.
B. Set the default consistency level of account1 to strong.
C. Set the default consistency level of account1 to bounded staleness.
D. Add a custom indexing policy to the con-product container.

Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee. Bounded staleness is great for applications featuring group collaboration and sharing, stock tickers, publish-subscribe/queueing, etc.
Scenario: Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

Q39. You have an Azure Cosmos DB Core (SQL) API account named storage1 that uses provisioned throughput capacity mode. The storage1 account contains the databases shown in the following table. The databases contain the containers shown in the following table. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/plan-manage-costs
https://azure.microsoft.com/en-us/pricing/details/cosmos-db/

Q40. You have the following query.

    SELECT * FROM c
    WHERE c.sensor = "TEMP1"
    AND c.value < 22
    AND c.timestamp >= 1619146031231

You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by the query. What should you recommend?

A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC, value DESC, timestamp DESC)
C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
D. a composite index for (sensor ASC, value ASC, timestamp ASC)

If a query has a filter with two or more properties, adding a composite index will improve performance. Consider the following query:

    SELECT * FROM c WHERE c.name = "Tim" and c.age > 18

In the absence of a composite index on (name ASC, age ASC), a range index is utilized for this query. The query becomes more efficient with a composite index for name and age. Queries with multiple equality filters and a maximum of one range filter (such as >, <, <=, >=, !=) will utilize the composite index.
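Because each composite index can serve at most one range filter, the query above needs one composite index per range property, each led by the equality property sensor, as option A describes. A sketch of the corresponding indexing policy fragment follows, using the same JSON shape as the policy in Q37; the policy is illustrative, not taken from the scenario.

    {
      "indexingMode": "consistent",
      "compositeIndexes": [
        [
          { "path": "/sensor", "order": "ascending" },
          { "path": "/value", "order": "ascending" }
        ],
        [
          { "path": "/sensor", "order": "ascending" },
          { "path": "/timestamp", "order": "ascending" }
        ]
      ]
    }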
Q41. You have an Azure Cosmos DB Core (SQL) API account used by an application named App1. You open the Insights pane for the account and see the following chart. Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

Q42. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Synapse pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Does this meet the goal?

A. Yes
B. No

Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output. The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.

Latest Microsoft DP-420 exam dumps from training: https://www.actualtests4sure.com/DP-420-test-questions.html