This page was exported from Actual Test Materials [ http://blog.actualtests4sure.com ]
Export date: Fri Nov 15 18:24:16 2024 / +0000 GMT

Title: [Q22-Q39] 2023 Updated MCPA-Level-1-Maintenance Tests Engine pdf - All Free Dumps Guaranteed!

Latest MuleSoft Certified Platform Architect MCPA-Level-1-Maintenance Actual Free Exam Questions

QUESTION 22
Which of the following sequences is correct?

A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API
B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API
D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation

Answer: B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
*****************************************
>> An API Consumer does not implement any logic to invoke APIs; it is just a role. So the option stating "API Consumer implements logic to call an API" is INVALID.
>> An API Implementation does not route any requests; it is the final piece of logic, where the functionality of the target systems is exposed. Requests must be routed to the API implementation by some other entity, so the options stating "API Implementation routes the request to >> API" are INVALID.
>> One option contains valid statements but in the wrong order: "API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation".
The statements in that option are valid, but the sequence is wrong.
>> The correct option and sequence is the one where the API Consumer first requests access to the API on Anypoint Exchange and obtains client credentials. The API Client then implements the logic to call the API using those client credentials, and the requests are routed to the API implementation via the API, which is managed by API Manager.

QUESTION 23
A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?

A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore
B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers
D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

Answer: D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
*****************************************
The scenario clearly states that the usual traffic during the year is handled well by the existing worker configuration, with CPU running well below 70%.
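To make the trade-off concrete, here is a toy capacity model (all numbers are assumed for illustration; they are not real CloudHub figures). Throughput scales with the number of load-balanced workers, which is why horizontal scaling helps both CPU and the order submission rate:

```python
# Toy model of the Order API scenario. All numbers are assumed,
# not real CloudHub metrics.

def per_worker_load(total_load: float, workers: int) -> float:
    """Share of the incoming order load handled by each load-balanced worker."""
    return total_load / workers

def cpu_utilization(load_share: float, worker_capacity: float) -> float:
    """Fraction of a single worker's capacity consumed by its load share."""
    return load_share / worker_capacity

NORMAL_LOAD = 100.0      # orders/second on a normal day (assumed)
WORKER_CAPACITY = 75.0   # orders/second one 0.2 vCore worker can absorb (assumed)

# Normal day: two workers, CPU comfortably below 70%.
normal_cpu = cpu_utilization(per_worker_load(NORMAL_LOAD, 2), WORKER_CAPACITY)

# Seasonal spike: 4x the orders on the same two workers -> saturated (>90%).
spike_cpu_2_workers = cpu_utilization(per_worker_load(4 * NORMAL_LOAD, 2), WORKER_CAPACITY)

# Horizontal autoscaling to eight workers restores the per-worker CPU
# AND quadruples the aggregate throughput toward the JMS queue.
spike_cpu_8_workers = cpu_utilization(per_worker_load(4 * NORMAL_LOAD, 8), WORKER_CAPACITY)

print(f"normal load, 2 workers: {normal_cpu:.0%}")
print(f"4x spike,    2 workers: {spike_cpu_2_workers:.0%}")
print(f"4x spike,    8 workers: {spike_cpu_8_workers:.0%}")
```

Vertical scaling, by contrast, only raises WORKER_CAPACITY: per-worker CPU drops, but with the same two load-balanced workers the aggregate request-processing and order-submission rate barely moves.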
The problem occurs only occasionally, when there is a spike in the number of incoming orders. So we neither need to permanently increase the size of each worker nor permanently increase the number of workers; either would be wasteful, because outside those occasional spikes the extra resources would sit idle.
That leaves two options: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker. Two things need to be taken into consideration:
1. CPU
2. Order submission rate to the JMS queue
>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring utilization back below 90%.
>> However, with vertical scaling the application is still load-balanced across only two workers, so there may be little improvement in the incoming request processing rate or the order submission rate to the JMS queue. The throughput stays roughly the same; only CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and added to the load balancer, increasing throughput. This addresses both CPU and the order submission rate.
Hence, a horizontal CloudHub autoscaling policy is the best answer.

QUESTION 24
True or False: we should always make sure that the APIs being designed and developed are self-servable, even if that needs more man-day effort and resources.

A. FALSE
B. TRUE

Answer: B. TRUE
*****************************************
>> As per the MuleSoft-proposed IT Operating Model, designing APIs and making sure that they are discoverable and self-servable is very important, and it decides the success of an API and its application network.

QUESTION 25
What are the major benefits of the MuleSoft-proposed IT Operating Model?

A. 1. Decrease the IT delivery gap 2. Meet various business demands without increasing the IT capacity 3. Focus on creation of reusable assets first; upon finishing creation of all the possible assets, then inform the LOBs in the organization to start using them
B. 1. Decrease the IT delivery gap 2. Meet various business demands by increasing the IT capacity and forming various IT departments 3. Make consumption of assets at the rate of production
C. 1. Decrease the IT delivery gap 2. Meet various business demands without increasing the IT capacity 3. Make consumption of assets at the rate of production

Answer: C. 1. Decrease the IT delivery gap 2. Meet various business demands without increasing the IT capacity 3. Make consumption of assets at the rate of production
*****************************************

QUESTION 26
The responses to some HTTP requests can be cached depending on the HTTP verb used in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?

A. PUT, POST, DELETE
B. GET, HEAD, POST
C. GET, PUT, OPTIONS
D. GET, OPTIONS, HEAD

Answer: D. GET, OPTIONS, HEAD
http://restcookbook.com/HTTP%20Methods/idempotency/

QUESTION 27
An API experiences a high rate of client requests (TPS) with small message payloads. How can usage limits be imposed on the API based on the type of client application?
A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type
B. Use a spike control policy that limits the number of requests for each client application type
C. Use a cross-origin resource sharing (CORS) policy to limit resource sharing between client applications, configured by the client application type
D. Use a rate limiting policy and a client ID enforcement policy, each configured by the client application type

Answer: A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type
*****************************************
>> SLA tiers come into play whenever limits are to be imposed on APIs based on the client type.

QUESTION 28
What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?

A. A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
B. The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
C. The FQDNs are determined by the application name, but can be modified by an administrator after deployment
D. The FQDNs are determined by both the application name and the Anypoint Platform organization

Answer: B. The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.
>> It does NOT matter what region the app is being deployed to.
>> Although it is true that the generated FQDN includes the region (e.g. exp-salesorder-api.au-s1.cloudhub.io), that does NOT mean the same name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it solely determines the FQDN for Shared Load Balancers.

QUESTION 29
How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?

A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remainingRequests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform rate-limit-enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example

Answer: D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
*****************************************
References:
https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling#response-headers
https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers

QUESTION 30
A company requires Mule applications deployed to CloudHub to be isolated between non-production and production environments. This is so Mule applications deployed to non-production environments can only access backend systems running in their customer-hosted non-production environment, and so Mule applications deployed to production environments can only access backend systems running in their customer-hosted production environment. How does MuleSoft recommend modifying Mule applications, configuring environments, or changing infrastructure to support this type of per-environment isolation between Mule applications and backend systems?
A. Modify properties of Mule applications deployed to the production Anypoint Platform environments to prevent access from non-production Mule applications
B. Configure firewall rules in the infrastructure inside each customer-hosted environment so that only IP addresses from the corresponding Anypoint Platform environments are allowed to communicate with the corresponding backend systems
C. Create non-production and production environments in different Anypoint Platform business groups
D. Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments

Answer: D. Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments
*****************************************
>> Creating different business groups does NOT make any difference with respect to accessing the non-production and production customer-hosted environments. Applications in either business group could still access both environments unless proper network restrictions are put in place.
>> We should not couple Mule application implementations with the environment by binding environment-level access restrictions into application properties. Only basic things such as endpoint URLs should be kept in properties.
>> IP addresses on CloudHub are dynamic unless static addresses are specially assigned, so it is not practical to set up firewall rules in the customer-hosted infrastructure. Moreover, even if static IP addresses were assigned, there could be hundreds of applications running on CloudHub, and setting up rules for all of them would be tedious, unmaintainable, and definitely not a good practice.
>> The recommended best practice is to have separate Anypoint VPCs for production and non-production, and to set up VPC peering or VPN tunnels from those Anypoint VPCs to the respective production and non-production customer-hosted environment networks.

QUESTION 31
What is true about API implementations when dealing with legal regulations that require all data processing to be performed within a certain jurisdiction (such as in the USA or the EU)?

A. They must avoid using the Object Store as it depends on services deployed ONLY to the US East region
B. They must use a jurisdiction-local external messaging system such as ActiveMQ rather than Anypoint MQ
C. They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction
D. They must ensure ALL data is encrypted both in transit and at rest

Answer: C. They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction
*****************************************
>> As per the legal regulations, all data processing must be performed within a certain jurisdiction: data in the USA should stay within the USA, and data in the EU should stay within the EU.
>> Simply encrypting the data in transit and at rest does not make you compliant; you must also ensure the data never leaves the jurisdiction.
>> The data in question is not just the messages published to Anypoint MQ. It includes the running apps, transaction states, application logs, events, metrics, and any other metadata.
So, just replacing Anypoint MQ with a locally hosted ActiveMQ does NOT help.
>> Likewise, the data in question is not just the key/value pairs stored in the Object Store; it includes the messages published, running apps, transaction states, application logs, events, metrics, and other metadata. So, just avoiding the Object Store does NOT help either.
>> The only remaining option, and the correct one, is to deploy the application on runtime and control planes that are both within the jurisdiction.

QUESTION 32
What API policy would be LEAST LIKELY used when designing an Experience API that is intended to work with a consumer mobile phone or tablet application?

A. OAuth 2.0 access token enforcement
B. Client ID enforcement
C. JSON threat protection
D. IP whitelist

Answer: D. IP whitelist
*****************************************
>> OAuth 2.0 access token and client ID enforcement policies are very commonly applied to Experience APIs, as API consumers need to register and access the APIs using one of these mechanisms.
>> JSON threat protection is also a very common policy on Experience APIs, to prevent bad or suspicious payloads from hitting the API implementations.
>> An IP whitelist policy is most common on Process and System APIs, to whitelist only the IP range inside the local VPC, but it is occasionally applied to Experience APIs whose end users/API consumers are FIXED. When we know upfront which API consumers will access certain Experience APIs, we can request static IPs from them and whitelist those, preventing anyone else from hitting the API. However, the Experience API in this scenario is intended to work with a consumer mobile phone or tablet application.
That means there is no way to know all the IPs to be whitelisted, because mobile phones and tablets are countless in number and could be any device in the city, state, country, or globe. So IP whitelisting is the LEAST LIKELY policy to apply to such Experience APIs, whose consumers are typically mobile phones or tablets.

QUESTION 33
Refer to the exhibit. Three business processes need to be implemented, and the implementations need to communicate with several different SaaS applications. These processes are owned by separate (siloed) LOBs and are mainly independent of each other, but do share a few business entities. Each LOB has one development team and their own budget. In this organizational context, what is the most effective approach to choose the API data models for the APIs that will implement these business processes with minimal redundancy of the data models?

A) Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities
B) Build distinct data models for each API to follow established microservices and Agile API-centric practices
C) Build all API data models using XML schema to drive consistency and reuse across the organization
D) Build one centralized Canonical Data Model (Enterprise Data Model) that unifies all the data types from all three business processes, ensuring the data model is consistent and non-redundant

Answer: Option A. Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities
*****************************************
>> The options about building API data models using XML schema or following Agile API-centric practices are irrelevant to the given scenario, so those two are INVALID.
>> Building an EDM (Enterprise Data Model) is not feasible or a good fit for this scenario, as the teams and LOBs work in silos and all have different initiatives, budgets, etc. Building an EDM needs intensive coordination among all the teams, which is evidently not possible here.
So the right fit for this scenario is to build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of the associated business entities.

QUESTION 34
What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?

A. A decrease in the number of connections within the application network supporting the business process
B. A higher number of discoverable API-related assets in the application network
C. A better response time for the end user as a result of the APIs being smaller in scope and complexity
D. An overall lower usage of resources because each fine-grained API consumes less resources

Answer: B. A higher number of discoverable API-related assets in the application network
*****************************************
>> We do NOT get faster response times with a fine-grained approach compared to a coarse-grained approach.
>> In fact, a network of coarse-grained APIs gives faster response times than a network of fine-grained APIs, for the reasons below.
A fine-grained approach:
1. has more APIs than a coarse-grained one;
2. therefore needs more orchestration to achieve a given piece of business-process functionality;
3. which means many more API calls, so more connections must be established: more hops, more network I/O, and more integration points than in a coarse-grained approach, where fewer APIs embed bulk functionality;
4. so, because of all these extra hops and the added latency, the fine-grained approach has somewhat higher response times than the coarse-grained one;
5. and beyond the added latency and connections, more resources are used up, due to the larger number of APIs.
That is why fine-grained APIs are good for exposing a higher number of reusable, discoverable assets in your network, but they need more maintenance and more care of integration points, connections, and resources, with a small compromise in network hops and response times.

QUESTION 35
What are 4 important platform capabilities offered by Anypoint Platform?

A. API Versioning, API Runtime Execution and Hosting, API Invocation, API Consumer Engagement
B. API Design and Development, API Runtime Execution and Hosting, API Versioning, API Deprecation
C. API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement
D. API Design and Development, API Deprecation, API Versioning, API Consumer Engagement

Answer: C. API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement
*****************************************
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint Connectors
>> API Runtime Execution and Hosting - Mule Runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API Contracts, Public Portals, Anypoint Exchange, API Notebooks

QUESTION 36
What Anypoint Connectors support transactions?

A. Database, JMS, VM
B. Database, JMS, HTTP
C. Database, JMS, VM, SFTP
D. Database, VM, File

Answer: A. Database, JMS, VM

QUESTION 37
An organization has implemented a Customer Address API to retrieve customer address information. This API has been deployed to multiple environments and has been configured to enforce client IDs everywhere. A developer is writing a client application to allow a user to update their address.
The developer has found the Customer Address API in Anypoint Exchange and wants to use it in their client application. What step of gaining access to the API can be performed automatically by Anypoint Platform?

A. Approve the client application request for the chosen SLA tier
B. Request access to the appropriate API instances deployed to multiple environments using the client application's credentials
C. Modify the client application to call the API using the client application's credentials
D. Create a new application in Anypoint Exchange for requesting access to the API

Answer: A. Approve the client application request for the chosen SLA tier
*****************************************
>> Only approving the client application request for the chosen SLA tier can be automated.
>> The rest of the provided options are not valid.

QUESTION 38
What is the main change to the IT operating model that MuleSoft recommends to organizations to improve innovation and clock speed?

A. Drive consumption as much as production of assets; this enables developers to discover and reuse assets from other projects and encourages standardization
B. Expose assets using a Master Data Management (MDM) system; this standardizes projects and enables developers to quickly discover and reuse assets from other projects
C. Implement SOA for reusable APIs to focus on production over consumption; this standardizes on XML and WSDL formats to speed up decision making
D. Create a lean and agile organization that makes many small decisions everyday; this speeds up decision making and enables each line of business to take ownership of its projects

Answer: A. Drive consumption as much as production of assets; this enables developers to discover and reuse assets from other projects and encourages standardization
*****************************************
>> The main motto of the IT Operating Model that MuleSoft recommends and popularized is to change the delivery approach from a production model to a production + consumption model, achieved through an API strategy called API-led connectivity.
>> The assets built should also be discoverable and self-servable, for reusability across LOBs and the organization.
>> MuleSoft's IT Operating Model does not talk about SDLC models (Agile, Lean, etc.) or MDM at all, so the options suggesting those are not valid.
References:
https://blogs.mulesoft.com/biz/connectivity/what-is-a-center-for-enablement-c4e/
https://www.mulesoft.com/resources/api/secret-to-managing-it-projects

QUESTION 39
An API client calls one method from an existing API implementation. The API implementation is later updated. What change to the API implementation would require the API client's invocation logic to also be updated?

A. When the data type of the response is changed for the method called by the API client
B. When a new method is added to the resource used by the API client
C. When a new required field is added to the method called by the API client
D. When a child method is added to the method called by the API client

Answer: C. When a new required field is added to the method called by the API client
*****************************************
>> Generally, the logic in API clients needs to be updated when the API contract breaks.
>> When a new method or a child method is added to an API, the API client does not break, as it can still continue to use its existing method. So those two options are out.
>> We are left with two options: "the data type of the response is changed" and "a new required field is added".
>> Changing the data type of the response does break the API contract. However, the question asks specifically about the "invocation" logic, not the response-handling logic: the API client can still invoke the API successfully and receive the response, even though the response has a different data type for some field.
>> Adding a new required field, by contrast, breaks the API's invocation contract.
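The break can be sketched with a tiny hand-rolled validator (the field names and version sets below are hypothetical illustrations, not taken from any real API spec):

```python
# Sketch of why a newly REQUIRED field forces existing API clients to change.
# Field names and contracts are hypothetical illustrations.

def is_valid_request(payload: dict, required_fields: set) -> bool:
    """Accept the invocation only if every required field is present."""
    return required_fields <= payload.keys()

V1_REQUIRED = {"orderId", "items"}
V2_REQUIRED = {"orderId", "items", "customerSegment"}  # new required field

# A request from an unchanged, pre-update API client.
old_client_request = {"orderId": "A-100", "items": ["sku-1"]}

# Valid under the old contract...
print(is_valid_request(old_client_request, V1_REQUIRED))   # True

# ...but rejected once the new field is required, so the client's
# invocation logic must be updated to supply "customerSegment".
print(is_valid_request(old_client_request, V2_REQUIRED))   # False
```

By the same logic, merely adding a new optional method or field leaves the old client's calls valid, which is why those options do not break the contract.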
When a new required field is added, the RAML or API spec agreement between the API client/consumer and the API provider is broken, so the API client's invocation logic must also be updated.

The MCPA-Level-1-Maintenance exam focuses on assessing an individual's ability to perform maintenance tasks on the Anypoint Platform. This includes tasks such as upgrading the platform, managing security, troubleshooting issues, and ensuring high availability. The MCPA-Level-1-Maintenance exam is designed to ensure that certified architects maintain their expertise and stay up to date with the latest updates and best practices.

MCPA-Level-1-Maintenance Dumps Updated Practice Test and 81 unique questions: https://www.actualtests4sure.com/MCPA-Level-1-Maintenance-test-questions.html

Post date: 2023-11-06 15:44:58 GMT