This page was exported from Free valid test braindumps [ http://free.validbraindumps.com ] Export date: Sat Apr 5 1:24:12 2025 / +0000 GMT

Title: [2024] Valid DP-203 test answers & Microsoft DP-203 exam pdf [Q141-Q161]
---------------------------------------------------
[2024] Valid DP-203 test answers & Microsoft DP-203 exam pdf
Verified DP-203 dumps Q&As - Pass Guarantee or Full Refund

NEW QUESTION 141
You are designing a monitoring solution for a fleet of 500 vehicles. Each vehicle has a GPS tracking device that sends data to an Azure event hub once per minute.
You have a CSV file in an Azure Data Lake Storage Gen2 container. The file contains the expected geographical area in which each vehicle should be.
You need to ensure that when a GPS position is outside the expected area, a message is added to another event hub for processing within 30 seconds. The solution must minimize cost.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions

NEW QUESTION 142
You have an Azure Stream Analytics job that receives clickstream data from an Azure event hub.
You need to define a query in the Stream Analytics job.
The query must meet the following requirements:
* Count the number of clicks within each 10-second window based on the country of a visitor.
* Ensure that each click is NOT counted more than once.
How should you define the query?
A. SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SlidingWindow(second, 10)
B. SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, TumblingWindow(second, 10)
C. SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, HoppingWindow(second, 10, 2)
D. SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SessionWindow(second, 5, 10)
Answer: B
Tumbling window functions segment a data stream into distinct, repeating time segments and perform a function against each segment. The key differentiators of a tumbling window are that the windows repeat, do not overlap, and an event cannot belong to more than one tumbling window.
Incorrect answers:
A: Sliding windows, unlike tumbling or hopping windows, output events only for points in time when the content of the window actually changes, that is, when an event enters or exits the window. Every window has at least one event and, as with hopping windows, an event can belong to more than one sliding window.
C: Hopping window functions hop forward in time by a fixed period. Think of them as tumbling windows that can overlap, so events can belong to more than one hopping window result set. To make a hopping window behave like a tumbling window, specify the hop size to be the same as the window size.
D: Session windows group events that arrive at similar times, filtering out periods of time where there is no data.
Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions

NEW QUESTION 143
You have an Azure subscription that contains an Azure Data Lake Storage account.
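The tumbling-window semantics behind Question 142 can be sketched in plain Python (a toy simulation, not the Stream Analytics engine; the timestamps and country names are invented for illustration). Because every event maps to exactly one window bucket, no click is ever counted twice:

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """Count clicks per (country, window) bucket.

    Each event is a (timestamp_seconds, country) pair. A tumbling window
    assigns every event to exactly one bucket, so no click is double-counted.
    """
    counts = Counter()
    for ts, country in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(country, window_start)] += 1
    return counts

# Invented sample clicks: three in the [0, 10) window, two in [10, 20).
clicks = [(1, "US"), (3, "US"), (9, "DE"), (11, "US"), (19, "DE")]
result = tumbling_window_counts(clicks)
assert result[("US", 0)] == 2
assert sum(result.values()) == len(clicks)  # every click counted exactly once
```

A sliding or hopping window would break the final assertion, since overlapping windows count the same event more than once.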
The storage account contains a data lake named DataLake1.
You plan to use an Azure data factory to ingest data from a folder in DataLake1, transform the data, and land the data in another folder.
You need to ensure that the data factory can read and write data from any folder in the DataLake1 file system. The solution must meet the following requirements:
* Minimize the risk of unauthorized user access.
* Use the principle of least privilege.
* Minimize maintenance effort.
How should you configure access to the storage account for the data factory? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: Azure Active Directory (Azure AD)
On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure AD tokens.
Box 2: A managed identity
A data factory can be associated with a managed identity for Azure resources, which represents that specific data factory. You can use this managed identity directly for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows the designated factory to access and copy data to or from your Data Lake Storage Gen2 account.
Note: The Azure Data Lake Storage Gen2 connector supports the following authentication types:
* Account key authentication
* Service principal authentication
* Managed identities for Azure resources authentication
Reference:
https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-storage

NEW QUESTION 144
You have an Azure Synapse Analytics dedicated SQL pool named SQL1 that contains a hash-distributed fact table named Table1.
You need to recreate Table1 and add a new distribution column.
The solution must maximize the availability of data.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
1 – Drop the indexes of Table1.
2 – Create a new table named Table1v2 by running CTAS.
3 – Rename Table1 as Table1_old.
4 – Rename Table1v2 as Table1.

NEW QUESTION 145
You need to implement an Azure Synapse Analytics database object for storing the sales transactions data. The solution must meet the sales transaction dataset requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

NEW QUESTION 146
You have an Azure Synapse Analytics dedicated SQL pool named Pool1 that contains an external table named Sales. Sales contains sales data. Each row in Sales contains data on a single sale, including the name of the salesperson.
You need to implement row-level security (RLS). The solution must ensure that the salespeople can access only their respective sales.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
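The availability argument behind the rename-swap sequence in Question 144 can be illustrated with a toy Python model of a table catalog (table contents here are invented; this is a sketch of the pattern, not Synapse itself). The point is that the original table stays queryable under its name until the final rename:

```python
# Toy catalog: table name -> rows. Simulates the CTAS + rename swap.
catalog = {"Table1": ["row1", "row2"]}

def ctas(cat, new_name, source_name):
    # CREATE TABLE AS SELECT: builds the new table while the source stays queryable.
    cat[new_name] = list(cat[source_name])

def rename(cat, old_name, new_name):
    cat[new_name] = cat.pop(old_name)

ctas(catalog, "Table1v2", "Table1")      # new distribution built alongside the old table
assert "Table1" in catalog               # readers are never without a Table1 to query
rename(catalog, "Table1", "Table1_old")
rename(catalog, "Table1v2", "Table1")    # swap completes: same name, new distribution
assert catalog["Table1"] == ["row1", "row2"]
```

Dropping and recreating Table1 in place would instead leave a window with no Table1 at all, which is why the swap maximizes availability.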
Explanation
Box 1: A security policy for Sales
Here are the steps to create a security policy for Sales:
1. (Optional) Create a user-defined function that returns the name of the current user:
CREATE FUNCTION dbo.GetCurrentUser()
RETURNS NVARCHAR(128)
AS
BEGIN
RETURN SUSER_SNAME();
END;
2. Create a security predicate function that returns a row only when the salesperson passed in matches the current user:
CREATE FUNCTION dbo.SalesPredicate(@salesperson NVARCHAR(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS access_result
WHERE @salesperson = SUSER_SNAME();
3. Create a security policy on the Sales table that applies the SalesPredicate function, passing the SalespersonName column to the predicate:
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE dbo.SalesPredicate(SalespersonName) ON dbo.Sales
WITH (STATE = ON);
By creating a security policy for the Sales table, you ensure that each salesperson can access only their own sales data. The security policy applies a predicate function that compares each row's SalespersonName to the name of the current user.
Box 2: Table-valued function
To restrict row access by using row-level security, you create an inline table-valued function that returns a row only when the current user is allowed to access it. You then use this function in a security policy that applies the predicate to the table.

NEW QUESTION 147
You have an Azure Data Factory pipeline shown in the following exhibit.
The execution log for the first pipeline run is shown in the following exhibit.
The execution log for the second pipeline run is shown in the following exhibit.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Explanation

NEW QUESTION 148
You are designing an Azure Databricks table.
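The filter-predicate behavior from Question 146 can be mimicked in plain Python (an illustrative sketch with invented users and rows, not the SQL engine). Rows are visible only when the row's salesperson matches the current user:

```python
# Invented sample rows; in the real table each row carries its salesperson's name.
sales = [
    {"SalespersonName": "alice", "Amount": 100},
    {"SalespersonName": "bob",   "Amount": 250},
    {"SalespersonName": "alice", "Amount": 75},
]

def sales_predicate(salesperson, current_user):
    # Mirrors the inline table-valued predicate: True when the row is visible.
    return salesperson == current_user

def query_sales(rows, current_user):
    # The security policy applies the predicate to every row transparently,
    # so queries need no WHERE clause of their own.
    return [r for r in rows if sales_predicate(r["SalespersonName"], current_user)]

assert [r["Amount"] for r in query_sales(sales, "alice")] == [100, 75]
assert [r["Amount"] for r in query_sales(sales, "bob")] == [250]
```

The key design point carries over: the filtering lives in the policy, not in each query, so every path to the table is covered.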
The table will ingest an average of 20 million streaming events per day.
You need to persist the events in the table for use in incremental load pipeline jobs in Azure Databricks. The solution must minimize storage costs and incremental load times.
What should you include in the solution?
A. Partition by DateTime fields.
B. Sink to Azure Queue storage.
C. Include a watermark column.
D. Use a JSON format for physical data storage.
Answer: B
The Databricks ABS-AQS connector uses Azure Queue Storage (AQS) to provide an optimized file source that lets you find new files written to an Azure Blob storage (ABS) container without repeatedly listing all of the files. This provides two major advantages:
* Lower latency: no need to list nested directory structures on ABS, which is slow and resource intensive.
* Lower cost: no more costly LIST API requests made to ABS.
Reference: https://docs.microsoft.com/en-us/azure/databricks/spark/latest/structured-streaming/aqs

NEW QUESTION 149
You have an Azure Data Factory pipeline that has the activity shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
Explanation

NEW QUESTION 150
You are implementing Azure Stream Analytics windowing functions.
Which windowing function should you use for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

NEW QUESTION 151
You are designing an application that will use an Azure Data Lake Storage Gen2 account to store petabytes of license plate photos from toll booths. The account will use zone-redundant storage (ZRS).
You identify the following usage patterns:
* The data will be accessed several times a day during the first 30 days after the data is created.
The data must meet an availability SLA of 99.9%.
* After 90 days, the data will be accessed infrequently but must be available within 30 seconds.
* After 365 days, the data will be accessed infrequently but must be available within five minutes.
Explanation
Box 1: Hot
The data will be accessed several times a day during the first 30 days after the data is created, and must meet an availability SLA of 99.9%.
Box 2: Cool
After 90 days, the data will be accessed infrequently but must be available within 30 seconds. Data in the Cool tier should be stored for a minimum of 30 days. When your data is stored in an online access tier (either Hot or Cool), users can access it immediately. The Hot tier is the best choice for data that is in active use, while the Cool tier is ideal for data that is accessed less frequently but still must be available for reading and writing.
Box 3: Cool
After 365 days, the data will be accessed infrequently but must be available within five minutes. The Archive tier cannot meet this requirement, because rehydrating an archived blob can take hours.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview
https://docs.microsoft.com/en-us/azure/storage/blobs/archive-rehydrate-overview

NEW QUESTION 152
You use Azure Data Lake Storage Gen2 to store data that data scientists and data engineers will query by using Azure Databricks interactive notebooks. Users will have access only to the Data Lake Storage folders that relate to the projects on which they work.
You need to recommend which authentication methods to use for Databricks and Data Lake Storage to provide the users with the appropriate access. The solution must minimize administrative effort and development effort.
Which authentication method should you recommend for each Azure service? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
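The tier choices in Question 151 follow a simple rule that can be captured in Python. This is a rough heuristic assumed from the explanation above, not an Azure API; the one-hour cutoff below is an invented stand-in for Archive's multi-hour rehydration time:

```python
def choose_tier(age_days, max_latency_seconds):
    """Pick the cheapest access tier that still meets the latency requirement.

    Heuristic: Archive rehydration takes hours, so any requirement measured
    in seconds or minutes rules it out; among the online tiers, Hot suits
    frequently accessed data and Cool suits data kept at least 30 days.
    """
    if max_latency_seconds < 3600:   # seconds-to-minutes access excludes Archive
        return "Hot" if age_days < 30 else "Cool"
    return "Archive"

assert choose_tier(10, 1) == "Hot"       # first 30 days: frequent access
assert choose_tier(90, 30) == "Cool"     # must be available within 30 seconds
assert choose_tier(365, 300) == "Cool"   # even five minutes excludes Archive
```

This is why Box 3 is Cool rather than Archive despite the data being a year old: the latency requirement, not the age, decides the tier.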
Reference:
https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/adls-gen2/azure-datalake-gen2-sas-access
https://docs.microsoft.com/en-us/azure/databricks/security/credential-passthrough/adls-passthrough

NEW QUESTION 153
You have an Azure subscription that contains the following resources:
* An Azure Active Directory (Azure AD) tenant that contains a security group named Group1
* An Azure Synapse Analytics SQL pool named Pool1
You need to control the access of Group1 to specific columns and rows in a table in Pool1.
Which Transact-SQL commands should you use? To answer, select the appropriate options in the answer area.
Explanation
Box 1: GRANT
You can implement column-level security with the GRANT T-SQL statement.
Box 2: CREATE SECURITY POLICY
Implement row-level security by using the CREATE SECURITY POLICY Transact-SQL statement.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/column-level-security

NEW QUESTION 154
You have an Azure subscription that contains an Azure Synapse Analytics workspace named workspace1. Workspace1 contains a dedicated SQL pool named SQLPool1 and an Apache Spark pool named sparkpool1. sparkpool1 contains a DataFrame named pyspark_df.
You need to write the contents of pyspark_df to a table in SQLPool1 by using a PySpark notebook.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation

NEW QUESTION 155
You have an Azure subscription that contains a logical Microsoft SQL server named Server1. Server1 hosts an Azure Synapse Analytics dedicated SQL pool named Pool1.
You need to recommend a Transparent Data Encryption (TDE) solution for Server1.
The solution must meet the following requirements:
* Track the usage of encryption keys.
* Maintain the access of client apps to Pool1 in the event of an Azure datacenter outage that affects the availability of the encryption keys.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/security/workspaces-encryption
https://docs.microsoft.com/en-us/azure/key-vault/general/logging

NEW QUESTION 156
You are building an Azure Synapse Analytics dedicated SQL pool that will contain a fact table for transactions from the first half of the year 2020.
You need to ensure that the table meets the following requirements:
* Minimizes the processing time to delete data that is older than 10 years.
* Minimizes the I/O for queries that use year-to-date values.
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql

NEW QUESTION 157
You need to ensure that the Twitter feed data can be analyzed in the dedicated SQL pool. The solution must meet the customer sentiment analytics requirements.
Which three Transact-SQL DDL commands should you run in sequence? To answer, move the appropriate commands from the list of commands to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

NEW QUESTION 158
You have a SQL pool in Azure Synapse.
You plan to load data from Azure Blob storage to a staging table. Approximately 1 million rows of data will be loaded daily.
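Why date partitioning minimizes delete time in Question 156 can be shown with a toy Python model (the years and row counts are invented). Dropping or switching out a whole partition is one operation per partition, not one per row:

```python
# Toy model of a fact table partitioned by year (invented data).
partitions = {
    2010: ["sale"] * 3,
    2015: ["sale"] * 5,
    2020: ["sale"] * 7,
}

def drop_partitions_older_than(parts, cutoff_year):
    # One operation per partition, regardless of how many rows it holds --
    # the analogue of truncating or switching out old partitions in T-SQL,
    # versus a DELETE that must touch every qualifying row.
    removed = [year for year in sorted(parts) if year < cutoff_year]
    for year in removed:
        del parts[year]
    return removed

assert drop_partitions_older_than(partitions, 2015) == [2010]
assert sorted(partitions) == [2015, 2020]
```

The same partition-by-date layout also helps year-to-date queries, since they can skip partitions entirely outside the requested range.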
The table will be truncated before each daily load.
You need to create the staging table. The solution must minimize how long it takes to load the data to the staging table.
How should you configure the table? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute

NEW QUESTION 159
You are designing an application that will store petabytes of medical imaging data. When the data is first created, the data will be accessed frequently during the first week. After one month, the data must be accessible within 30 seconds, but files will be accessed infrequently. After one year, the data will be accessed infrequently but must be accessible within five minutes.
You need to select a storage strategy for the data. The solution must minimize costs.
Which storage tier should you use for each time frame? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
First week: Hot. The Hot tier is optimized for storing data that is accessed frequently.
After one month: Cool. The Cool tier is optimized for storing data that is infrequently accessed and stored for at least 30 days.
After one year: Cool.

NEW QUESTION 160
You are building a database in an Azure Synapse Analytics serverless SQL pool.
You have data stored in Parquet files in an Azure Data Lake Storage Gen2 container. Records are structured as shown in the following sample:
{
"id": 123,
"address_housenumber": "19c",
"address_line": "Memory Lane",
"applicant1_name": "Jane",
"applicant2_name": "Dev"
}
The records contain two applicants at most.
You need to build a table that includes only the address fields.
How should you complete the Transact-SQL statement?
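Independently of the T-SQL answer to Question 160, the intended projection, keeping only the address fields, can be sketched in Python against the sample record (illustrative only; a serverless SQL pool would express this as a column list over the Parquet files):

```python
# The sample record from Question 160.
record = {
    "id": 123,
    "address_housenumber": "19c",
    "address_line": "Memory Lane",
    "applicant1_name": "Jane",
    "applicant2_name": "Dev",
}

def address_fields(rec):
    # Project only the columns whose names start with "address_".
    return {key: value for key, value in rec.items() if key.startswith("address_")}

assert address_fields(record) == {
    "address_housenumber": "19c",
    "address_line": "Memory Lane",
}
```

The id and applicant columns are simply never selected, which is the whole requirement: the table definition names only the address columns.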
To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

NEW QUESTION 161
Which Azure Data Factory components should you recommend using together to import the daily inventory data from the SQL server to Azure Data Lake Storage? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

DP-203 Exam Questions – Valid DP-203 Dumps Pdf: https://www.validbraindumps.com/DP-203-exam-prep.html
---------------------------------------------------
Post date: 2024-07-15 09:43:00