This page was exported from Free valid test braindumps [ http://free.validbraindumps.com ]
Export date: Sat Apr 5 23:57:45 2025 / +0000 GMT

Title: (2023) PASS MCPA-Level-1-Maintenance Exam Free Practice Test with 100% Accurate Answers [Q13-Q27]

MCPA-Level-1-Maintenance dumps Free Test Engine Verified By IT Certified Experts

Q13. What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?

- Single sign-on is required to sign in to Anypoint Platform
- The application network must include System APIs that interact with the Identity Provider
- To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
- APIs managed by Anypoint Platform must be protected by SAML 2.0 policies

Answer: To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider

Explanation:
*****************************************
>> It is NOT necessary that single sign-on be required to sign in to Anypoint Platform, because the external Identity Provider is being used for Client Management.
>> It is NOT necessary that APIs managed by Anypoint Platform be protected by SAML 2.0 policies, for the same reason.
>> It is NOT true that the application network must include System APIs that interact with the Identity Provider, for the same reason.
The only TRUE statement among the given options is: "To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider."
Reference: https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
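The correct option boils down to one mechanical step: the client obtains an access token from the external Identity Provider and presents it as a Bearer credential on every API call. A minimal sketch in Python of building such a request — the IdP and API URLs are hypothetical placeholders, not real endpoints:

```python
# Sketch: calling an OAuth 2.0-protected API managed by Anypoint Platform.
# The URLs and the sample token below are hypothetical placeholders.
import urllib.request

IDP_TOKEN_URL = "https://idp.example.com/oauth2/token"  # external Identity Provider (hypothetical)
API_URL = "https://api.example.com/orders/v1/orders"    # OAuth 2.0-protected API (hypothetical)

def build_authorized_request(api_url: str, access_token: str) -> urllib.request.Request:
    """Attach the IdP-issued access token as a Bearer credential."""
    return urllib.request.Request(
        api_url,
        headers={"Authorization": f"Bearer {access_token}"},
    )

# The token would normally come from a POST to IDP_TOKEN_URL; a literal is used here.
req = build_authorized_request(API_URL, "example-token")
print(req.get_header("Authorization"))  # Bearer example-token
```

The API-side policy (external OAuth 2.0 token validation) then checks that the token was issued by that same Identity Provider before letting the call through.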
References:
https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy
https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/

Q14. An organization uses various cloud-based SaaS systems and multiple on-premises systems. The on-premises systems are an important part of the organization's application network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with both the cloud-based SaaS systems and the on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by the Anypoint Platform Private Cloud Edition control plane
B) Use CloudHub-deployed Mule runtimes in the shared worker cloud managed by the MuleSoft-hosted Anypoint Platform control plane
C) Use an on-premises installation of Mule runtimes that are completely isolated with NO external network access, managed by the Anypoint Platform Private Cloud Edition control plane
D) Use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Anypoint Platform control plane

Answer: Option D

Explanation:
*****************************************
Use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Anypoint Platform control plane.
Key details to take from the given scenario:
>> The organization uses BOTH cloud-based and on-premises systems
>> The on-premises systems can only be accessed from within the organization's intranet
Evaluating the given choices against these key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted control plane. We CANNOT use the Private Cloud Edition control plane to control CloudHub Mule runtimes.
So, the option suggesting this is INVALID.
>> Using CloudHub-deployed Mule runtimes in the shared worker cloud alone ignores the on-premises systems, which are reachable only from within the intranet. So, the option suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that are completely isolated with NO external network access, managed by the Anypoint Platform Private Cloud Edition control plane, would work for the on-premises integrations. However, with NO external access, integrations CANNOT reach the SaaS-based apps. So, the option suggesting this is also INVALID.
The best way to configure and use Anypoint Platform to support these mixed/hybrid integrations is to use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Anypoint Platform control plane; CloudHub-hosted apps are a best fit for integrating with the SaaS-based applications, while the on-premises runtimes handle the intranet-only systems.

Q15. A company wants to move its Mule API implementations into production as quickly as possible. To protect access to all Mule application data and metadata, the company requires that all Mule applications be deployed to the company's customer-hosted infrastructure within the corporate firewall. What combination of runtime plane and control plane options meets these project lifecycle goals?
- Manually provisioned customer-hosted runtime plane and customer-hosted control plane
- MuleSoft-hosted runtime plane and customer-hosted control plane
- Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
- iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane

Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane

Explanation:
*****************************************
Two key factors are to be taken into consideration from the scenario given in the question:
>> The company requires both data AND metadata to reside within the corporate firewall
>> The company wants to use customer-hosted infrastructure
Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud like Azure or AWS) will have to share at least the metadata.
Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted runtime plane. But with a MuleSoft-hosted/cloud-based control plane, at least some minimum level of metadata must be sent outside the corporate firewall.
As the customer requirement is clear that both data and metadata must stay within the corporate firewall, then even though the customer wants to move to production as quickly as possible, the nature of their security requirements leaves them no option but a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.

Q16. What is typically NOT a function of the APIs created within the framework called API-led connectivity?

- They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.
- They allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems.
- They reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way.
- They can compose data from various sources and combine them with orchestration logic to create higher level value.

Answer: They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.

Explanation:
*****************************************
In API-led connectivity:
>> Experience APIs allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems.
>> Process APIs compose data from various sources and combine them with orchestration logic to create higher level value.
>> System APIs reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way.
However, they NEVER promise an additional layer of resilience on top of the underlying backend system that would insulate clients from extended failure of these systems.
https://dzone.com/articles/api-led-connectivity-with-mule

Q17. What is a best practice when building System APIs?

- Document the API using an easily consumable asset like a RAML definition
- Model all API resources and methods to closely mimic the operations of the backend system
- Build an Enterprise Data Model (Canonical Data Model) for each backend system and apply it to System APIs
- Expose to API clients all technical details of the API implementation's interaction with the backend system

Answer: Model all API resources and methods to closely mimic the operations of the backend system.

Explanation:
*****************************************
>> There are NO fixed, universal best practices when choosing data models for APIs. They are completely contextual and depend on a number of factors.
Based upon those factors, an enterprise can choose whether to go with an Enterprise Canonical Data Model, a Bounded Context Model, etc.
>> One should NEVER expose the technical details of the API implementation to API clients. Only the API interface/RAML is exposed to API clients.
>> It is true that the RAML definitions of APIs should be as detailed as possible and should carry most of the documentation. However, that alone is NOT enough to call an API well documented. There should be further documentation on Anypoint Exchange, with API Notebooks etc., to create a developer-friendly API and repository.
>> The best practice when creating System APIs is to design their API interfaces by modeling their resources and methods to closely reflect the operations and functionality of the backend system.

Q18. Refer to the exhibit.
What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?

- Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications
- The MuleSoft-hosted Shared Load Balancer can be used to load balance API invocations to the Mule runtimes
- API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane
- Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure

Answer: API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.

Explanation:
*****************************************
>> We CANNOT use the Shared Load Balancer to load balance APIs on customer-hosted runtimes.
>> For hybrid deployment models, the on-premises runtimes are first connected to Runtime Manager using the Runtime Manager agent. So, the connection is initiated first from on-premises to Runtime Manager.
Then all control can be done from Runtime Manager.
>> Anypoint Runtime Manager CANNOT ensure automatic HA. Clusters/server groups etc. should be configured beforehand.
The only TRUE statement in the given choices is: API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane. There are several references below to justify this statement.
References:
https://docs.mulesoft.com/runtime-manager/deployment-strategies#hybrid-deployments
https://help.mulesoft.com/s/article/On-Premise-Runtimes-Disconnected-From-US-Control-Plane-June-18th-2018
https://help.mulesoft.com/s/article/Runtime-Manager-cannot-manage-On-Prem-Applications-and-Servers-from-U
https://help.mulesoft.com/s/article/On-premise-Runtimes-Appear-Disconnected-in-Runtime-Manager-May-29th-

Q19. What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?

- The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance
- The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
- The API policy is defined in API Manager and then automatically applied to ALL API instances
- The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment

Answer: The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.

Explanation:
*****************************************
>> Once our API specifications are ready and published to Exchange, we need to visit API Manager and register an API instance for each API.
>> API Manager is the place where API management takes place, such as addressing NFRs by enforcing policies.
>> We can create multiple instances of the same API and manage them differently for different purposes.
>> One instance can have one set of API policies applied, and
another instance of the same API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined PER environment. So, one needs to manage them separately in each environment.
>> A platform feature can ensure that the same configuration of API instances (SLAs, policies, etc.) gets promoted when promoting to higher environments. But this is optional; one can still change them per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes, but NOT the APIs themselves. Though API policies get executed in Mule runtimes, we CANNOT enforce API policies in Runtime Manager. We need to do that via API Manager, for a specific instance in an environment.
So, based on these facts, the right statement in the given choices is: "The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance."

Q20. When could the API data model of a System API reasonably mimic the data model exposed by the corresponding backend system, with minimal improvements over the backend system's data model?
- When there is an existing Enterprise Data Model widely used across the organization
- When the System API can be assigned to a bounded context with a corresponding data model
- When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
- When the corresponding backend system is expected to be replaced in the near future

Answer: When a pragmatic approach with only limited isolation from the backend system is deemed appropriate.

Explanation:
*****************************************
General guidance w.r.t. choosing data models:
>> If an Enterprise Data Model is in use, then the API data model of System APIs should make use of data types from that Enterprise Data Model, and the corresponding API implementation should translate between these data types from the Enterprise Data Model and the native data model of the backend system.
>> If no Enterprise Data Model is in use, then each System API should be assigned to a Bounded Context, the API data model of System APIs should make use of data types from the corresponding Bounded Context Data Model, and the corresponding API implementation should translate between these data types from the Bounded Context Data Model and the native data model of the backend system. In this scenario, the data types in the Bounded Context Data Model are defined purely in terms of their business characteristics and are typically not related to the native data model of the backend system.
In other words, the translation effort may be significant.
>> If no Enterprise Data Model is in use, and the definition of a clean Bounded Context Data Model is considered too much effort, then the API data model of System APIs should make use of data types that approximately mirror those of the backend system: same semantics and naming as the backend system, lightly sanitized, exposing all fields needed for the given System API's functionality but not significantly more, and making good use of REST conventions.
The latter approach, i.e., exposing in System APIs an API data model that basically mirrors that of the backend system, does not on its own provide satisfactory isolation from backend systems through the System API tier.
In particular, it will typically not be possible to "swap out" a backend system without significantly changing all System APIs in front of that backend system, and therefore the API implementations of all Process APIs that depend on those System APIs! This is because it is not desirable to prolong the life of a previous backend system's data model in the form of the API data model of System APIs that now front a new backend system.
The API data models of System APIs following this approach must therefore change when the backend system is replaced.
On the other hand, this approach:
>> Is very pragmatic and adds comparatively little overhead over accessing the backend system directly
>> Isolates API clients from intricacies of the backend system outside the data model (protocol, authentication, connection pooling, network address, ...)
>> Allows the usual API policies to be applied to System APIs
>> Makes the API data model for interacting with the backend system explicit and visible, by exposing it in the RAML definitions of the System APIs
>> Leaves further isolation from the backend system data model to the API implementations of the Process API tier

Q21. What are 4 important platform capabilities offered by Anypoint Platform?
- API Versioning, API Runtime Execution and Hosting, API Invocation, API Consumer Engagement
- API Design and Development, API Runtime Execution and Hosting, API Versioning, API Deprecation
- API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement
- API Design and Development, API Deprecation, API Versioning, API Consumer Engagement

Answer: API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement

Explanation:
*****************************************
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint Connectors
>> API Runtime Execution and Hosting - Mule runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API contracts, public portals, Anypoint Exchange, API Notebooks

Q22. What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?

- Redis distributed cache
- java.util.WeakHashMap
- Persistent Object Store
- File-based storage

Answer: Persistent Object Store

Explanation:
*****************************************
>> A Redis distributed cache is performant but NOT an out-of-the-box solution in Anypoint Platform.
>> File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
>> java.util.WeakHashMap needs a completely custom cache implementation written from scratch in Java code, and it is limited to the JVM where it is running. This means the state in the cache is not worker-aware when running on multiple workers; this type of cache is local to the worker. So, it is neither out-of-the-box nor worker-aware among multiple workers on CloudHub.
https://www.baeldung.com/java-weakhashmap
>> Persistent Object Store is an out-of-the-box solution provided by Anypoint Platform which is performant as well as worker-aware among multiple workers running on CloudHub. https://docs.mulesoft.com/object-store/
So, Persistent Object Store is the right answer.

Q23. Mule applications that implement a number of REST APIs are deployed to their own subnet that is inaccessible from outside the organization.
External business partners need to access these APIs, which are only allowed to be invoked from a separate subnet dedicated to partners - called Partner-subnet. This subnet is accessible from the public internet, which allows these external partners to reach it.
Anypoint Platform and Mule runtimes are already deployed in Partner-subnet. These Mule runtimes can already access the APIs.
What is the most resource-efficient solution to comply with these requirements, while having the least impact on other applications that are currently using the APIs?

- Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
- Redeploy the API implementations to the same servers running the Mule runtimes
- Add an additional endpoint to each API for partner-enablement consumption
- Duplicate the APIs as Mule applications, then deploy them to the Mule runtimes

Q24. A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?

- IP whitelist
- SLA-based rate limiting
- OAuth 2.0 token enforcement
- Client ID enforcement

Answer: SLA-based rate limiting

Explanation:
*****************************************
>> The Client ID enforcement policy is a "compliance"-related NFR and does not help in maintaining the Quality of Service (QoS).
It is NOT meant for protecting backend systems from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement are "security"-related NFRs and, again, do not help in maintaining the Quality of Service (QoS). They are NOT meant for protecting backend systems from scalability challenges either.
Rate Limiting, Rate Limiting-SLA, Throttling, and Spike Control are the policies that address "Quality of Service (QoS)" NFRs and are meant to help protect backend systems from getting overloaded.
https://dzone.com/articles/how-to-secure-apis

Q25. True or False: we should always make sure that the APIs being designed and developed are self-servable, even if that needs more man-day effort and resources.

- FALSE
- TRUE

Answer: TRUE

Explanation:
*****************************************
>> As per the MuleSoft-proposed IT Operating Model, designing APIs and making sure that they are discoverable and self-servable is VERY important and decides the success of an API and its application network.

Q26. What is true about automating interactions with Anypoint Platform using tools such as the Anypoint Platform REST APIs, the Anypoint CLI, or the Mule Maven plugin?
- Access to the Anypoint Platform APIs and the Anypoint CLI can be controlled separately through the roles and permissions in Anypoint Platform, so that specific users can get access to the Anypoint CLI while others get access to the platform APIs
- Anypoint Platform APIs can ONLY automate interactions with CloudHub, while the Mule Maven plugin is required for deployment to customer-hosted Mule runtimes
- By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications
- API policies can be applied to the Anypoint Platform APIs so that ONLY certain LOBs have access to specific functions

Answer: By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications

Explanation:
*****************************************
>> We CANNOT apply API policies to the Anypoint Platform APIs the way we can on our own API instances. So, the option suggesting this is FALSE.
>> Anypoint Platform APIs can be used for automating interactions with both CloudHub and customer-hosted Mule runtimes, not just CloudHub. So, the option opposing this is FALSE.
>> The Mule Maven plugin is NOT mandatory for deployment to customer-hosted Mule runtimes. It just helps your CI/CD to have smoother automation, but it is not a compulsory requirement for deployment. So, the option opposing this is FALSE.
>> We DO NOT have any special roles and permissions on the platform to separately control access so that some users may only use the Anypoint CLI and others only the Anypoint Platform APIs. With the proper general roles/permissions (API Owner, CloudHub Admin, etc.), one can use any of the options (Anypoint CLI or Platform APIs).
So, the option suggesting this is FALSE.
The only TRUE statement in the given choices is that the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications.
Maven is part of Studio, or you can use another Maven installation for development. The CLI is a convenience only; it is one of many ways to deploy an app to the runtime. These tools are definitely NOT part of anything except your deployment or automation process.

Q27. What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?

- A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
- The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
- The FQDNs are determined by the application name, but can be modified by an administrator after deployment
- The FQDNs are determined by both the application name and the Anypoint Platform organization

Answer: The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region

Explanation:
*****************************************
>> When deploying applications to the Shared Worker Cloud, the FQDNs are always determined by the application name chosen.
>> It does NOT matter what region the app is being deployed to.
>> Although it is true that the generated FQDN will have the region included in it (e.g. exp-salesorder-api.au-s1.cloudhub.io), this does NOT mean that the same name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it solely determines the FQDN for the Shared Load Balancers.
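The uniqueness rule above can be sketched as follows. The region code and the set of in-use names below are illustrative assumptions; the host pattern follows the example FQDN quoted in the explanation (exp-salesorder-api.au-s1.cloudhub.io):

```python
# Sketch of how a Shared Worker Cloud FQDN relates to the application name.
# The region code and the "taken_names" set are hypothetical illustrations.

def shared_worker_fqdn(app_name: str, region: str) -> str:
    """The application name alone supplies the unique host label;
    the region only appears as a fixed infix in the generated FQDN."""
    return f"{app_name}.{region}.cloudhub.io"

# Application names already in use anywhere on CloudHub (hypothetical):
taken_names = {"exp-salesorder-api"}

def is_name_available(app_name: str) -> bool:
    """Uniqueness is checked on the name only, irrespective of region."""
    return app_name not in taken_names

print(shared_worker_fqdn("exp-salesorder-api", "au-s1"))  # exp-salesorder-api.au-s1.cloudhub.io
print(is_name_available("exp-salesorder-api"))            # False: the name is taken in every region
```

This is why deploying the same application name to a different region still collides: availability depends on the name, not on where the app is deployed.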
Latest MuleSoft MCPA-Level-1-Maintenance Practice Test Questions: https://www.validbraindumps.com/MCPA-Level-1-Maintenance-exam-prep.html

Post date: 2023-11-20 11:22:46