Implementing a Data Warehouse using SQL
Question No: 111 DRAG DROP – (Topic 2)
You are the administrator for a Data Quality Server. You are adding a user who must have permission to:
->Edit and execute a project
->View the activity monitoring data
This user must not be able to:
->Perform any kind of knowledge management
->Create or change a knowledge base
->Terminate an activity or perform administrative duties
You need to develop a Transact-SQL (T-SQL) script to meet these requirements.
What should you do? (To answer, drag the appropriate code segment or segments to the correct location or locations in the answer area.)
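The requirements map to the dqs_kb_operator role in the DQS_MAIN database, which allows editing and executing a project and viewing activity monitoring data, but grants no knowledge-management or administrative permissions. A minimal sketch of the script, assuming a Windows login named CONTOSO\DqsUser already exists on the instance (the login name is illustrative):

```sql
-- Grant project edit/execute and activity-monitoring access only.
-- dqs_kb_operator carries no knowledge-base or administrative rights.
USE DQS_MAIN;
CREATE USER [CONTOSO\DqsUser] FOR LOGIN [CONTOSO\DqsUser];
ALTER ROLE dqs_kb_operator ADD MEMBER [CONTOSO\DqsUser];
```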
Question No: 112 – (Topic 2)
You administer a Microsoft SQL Server 2016 server that has SQL Server Integration Services (SSIS) installed.
You plan to deploy new SSIS packages to the server. The SSIS packages use the Project Deployment Model together with parameters and Integration Services environment variables.
You need to configure the SQL Server environment to support these packages. What should you do?
Create SSIS configuration files for the packages.
Create an Integration Services catalog.
Install Data Quality Services.
Install Master Data Services.
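The Project Deployment Model with parameters and environment variables requires the Integration Services catalog (SSISDB). Once the catalog exists (created in SSMS under Integration Services Catalogs), environments and their variables are defined with catalog stored procedures. A sketch, with the folder, environment, and variable names being illustrative:

```sql
-- Create an environment and a variable that package parameters can bind to.
EXEC SSISDB.catalog.create_environment
     @folder_name = N'ETL', @environment_name = N'Production';
EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'ETL', @environment_name = N'Production',
     @variable_name = N'ServerName', @data_type = N'String',
     @sensitive = 0, @value = N'SQLPROD01';
```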
Question No: 113 HOTSPOT – (Topic 2)
You are developing a SQL Server Integration Services (SSIS) package. The package loads a customer dimension table by using a data flow task.
Changes to the customer attributes must be tracked over time.
You need to produce a checksum value to identify the rows that have changed since the last Extract, Transform and Load (ETL) process execution. You need to use the least amount of development effort to achieve this goal.
Which transformation should you use? (To answer, select the appropriate transformation in the answer area.)
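If computing the checksum in the database rather than inside the data flow is an option, T-SQL can produce an equivalent change-detection hash with HASHBYTES. A sketch, with table and column names being illustrative:

```sql
-- Hash the tracked attributes so changed rows can be detected by
-- comparing RowHash against the value stored at the last ETL run.
SELECT CustomerKey,
       HASHBYTES('SHA2_256',
           CONCAT(CustomerName, '|', City, '|', Phone)) AS RowHash
FROM dbo.DimCustomer;
```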
Question No: 114 DRAG DROP – (Topic 2)
You are editing a SQL Server Integration Services (SSIS) package that uses checkpoints.
The package performs the following steps:
Download a sales transaction file by using FTP.
Truncate a staging table.
Load the contents of the file to the staging table.
Merge the data with another data source for loading to a data warehouse.
The checkpoints are currently working such that if any of the four steps fail, the package will restart from the failed step the next time it executes.
You need to modify the package to ensure that if either the Truncate Staging Table or the Load Sales to Staging task fails, the package will always restart from the Truncate Staging Table task the next time the package runs.
Which three steps should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)
Question No: 115 – (Topic 2)
You are designing an extract, transform, load (ETL) process for loading data from a SQL Server database into a large fact table in a data warehouse each day with the prior day's transactions.
The ETL process for the fact table must meet the following requirements:
->Load new data in the shortest possible time.
->Remove data that is more than 36 months old.
->Ensure that data loads correctly.
->Minimize record locking.
->Minimize impact on the transaction log.
You need to design an ETL process that meets the requirements. What should you do? (More than one answer choice may achieve the goal. Select the BEST answer.)
Partition the destination fact table by date. Insert new data directly into the fact table and delete old data directly from the fact table.
Partition the destination fact table by date. Use partition switching and staging tables both to remove old data and to load new data.
Partition the destination fact table by customer. Use partition switching both to remove old data and to load new data into each partition.
Partition the destination fact table by date. Use partition switching and a staging table to remove old data. Insert new data directly into the fact table.
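Partition switching satisfies all five requirements because switching is a metadata-only operation: it is near-instant, takes minimal locks, and barely touches the transaction log. A sketch of the pattern, assuming the fact table is partitioned by a date key and staging tables with matching schemas exist on the same filegroup (table names and partition numbers are illustrative):

```sql
-- Remove old data: switch the oldest partition out, then truncate the stage.
ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSales_Old;
TRUNCATE TABLE dbo.FactSales_Old;

-- Load new data: bulk load into a staging table, then switch it in.
ALTER TABLE dbo.FactSales_New SWITCH TO dbo.FactSales PARTITION 37;
```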
Question No: 116 – (Topic 2)
You are a database developer for a Microsoft SQL Server 2016 database. You are designing a table that will store Customer data from different sources. The table will include a column that contains the CustomerID from the source system and a column that contains the SourceID. A sample of this data is shown in the following table.
You need to ensure that the table has no duplicate CustomerID within a SourceID. You also need to ensure that the data in the table is stored in the order of SourceID and then CustomerID.
Which Transact- SQL statement should you use?
CREATE TABLE Customer (SourceID int NOT NULL IDENTITY, CustomerID int NOT NULL IDENTITY,
CustomerName varchar(255) NOT NULL);
CREATE TABLE Customer (SourceID int NOT NULL,
CustomerID int NOT NULL PRIMARY KEY CLUSTERED,
CustomerName varchar(255) NOT NULL);
CREATE TABLE Customer
(SourceID int NOT NULL PRIMARY KEY CLUSTERED,
CustomerID int NOT NULL UNIQUE, CustomerName varchar(255) NOT NULL);
CREATE TABLE Customer (SourceID int NOT NULL, CustomerID int NOT NULL,
CustomerName varchar(255) NOT NULL,
CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (SourceID, CustomerID));
Question No: 117 – (Topic 2)
You administer a SQL Server Integration Services (SSIS) solution in the SSIS catalog. A SQL Server Agent job is used to execute a package daily with the basic logging level.
Recently, the package execution failed because of a primary key violation when the package inserted data into the destination table.
You need to identify all previous times that the package execution failed because of a primary key violation.
What should you do?
Use an event handler for OnError for the package.
Use an event handler for OnError for each data flow task.
Use an event handler for OnTaskFailed for the package.
View the job history for the SQL Server Agent job.
View the All Messages subsection of the All Executions report for the package.
Store the System::SourceID variable in the custom log table.
Store the System::ServerExecutionID variable in the custom log table.
Store the System::ExecutionInstanceGUID variable in the custom log table.
Enable the SSIS log provider for SQL Server for OnError in the package control flow.
Enable the SSIS log provider for SQL Server for OnTaskFailed in the package control flow.
Deploy the project by using dtutil.exe with the /COPY DTS option.
Deploy the project by using dtutil.exe with the /COPY SQL option.
Deploy the .ispac file by using the Integration Services Deployment Wizard.
Create a SQL Server Agent job to execute the SSISDB.catalog.validate_project stored procedure.
Create a SQL Server Agent job to execute the SSISDB.catalog.validate_package stored procedure.
Create a SQL Server Agent job to execute the
SSISDB.catalog.create_execution and SSISDB.catalog.start_execution stored procedures.
Create a table to store error information. Create an error output on each data flow destination that writes OnError event text to the table.
Create a table to store error information. Create an error output on each data flow destination that writes OnTaskFailed event text to the table.
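With the basic logging level, error text from every execution is already captured in SSISDB; the All Messages subsection of the All Executions report reads this data, and it can also be queried directly. A sketch of such a query against the catalog views (the LIKE pattern is illustrative):

```sql
-- Find all executions whose OnError message mentions a primary key violation.
SELECT e.execution_id, m.message_time, m.message
FROM SSISDB.catalog.event_messages AS m
JOIN SSISDB.catalog.executions AS e
    ON e.execution_id = m.operation_id
WHERE m.event_name = N'OnError'
  AND m.message LIKE N'%PRIMARY KEY%'
ORDER BY m.message_time DESC;
```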
Question No: 118 – (Topic 2)
A SQL Server Integration Services (SSIS) package imports daily transactions from several files into a SQL Server table named Transaction. Each file corresponds to a different store and is imported in parallel with the other files. The data flow tasks use OLE DB destinations in fast load data access mode.
The number of daily transactions per store can be very large and is growing. The Transaction table does not have any indexes.
You need to minimize the package execution time. What should you do?
Partition the table by day and store.
Create a clustered index on the Transaction table.
Run the package in Performance mode.
Increase the value of the Rows per Batch property.
Explanation: Data Access Mode – This setting provides the 'fast load' option, which internally uses a BULK INSERT statement for uploading data into the destination table instead of a simple INSERT statement (one per row) as in the case of the other options.
BULK INSERT parameters include ROWS_PER_BATCH = rows_per_batch, which indicates the approximate number of rows of data in the data file.
By default, all the data in the data file is sent to the server as a single transaction, and the number of rows in the batch is unknown to the query optimizer. If you specify ROWS_PER_BATCH (with a value > 0), the server uses this value to optimize the bulk-import operation. The value specified for ROWS_PER_BATCH should be approximately the same as the actual number of rows.
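The same hint the OLE DB destination passes in fast load mode can be seen in a plain BULK INSERT statement. A sketch, with the file path, table name, and row count being illustrative:

```sql
-- ROWS_PER_BATCH lets the optimizer plan the bulk import; TABLOCK
-- takes a bulk-update table lock, enabling minimal logging.
BULK INSERT dbo.[Transaction]
FROM 'C:\imports\store01_transactions.dat'
WITH (ROWS_PER_BATCH = 50000, TABLOCK);
```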
Question No: 119 – (Topic 2)
You are designing a data warehouse with two fact tables. The first table contains sales per month and the second table contains orders per day.
Referential integrity must be enforced declaratively.
You need to design a solution that can join a single time dimension to both fact tables. What should you do?
Join the two fact tables.
Merge the fact tables.
Create a time dimension that can join to both fact tables at their respective granularity.
Create a surrogate key for the time dimension.
References:
http://technet.microsoft.com/en-us/library/ms174832.aspx
http://msdn.microsoft.com/en-us/library/ms174884.aspx
http://decipherinfosys.wordpress.com/2007/02/01/surrogate-keys-vs-natural-keys-for-primary-key/
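A single conformed date dimension can serve both grains when the monthly fact references the dimension rows that represent its grain (for example, month-end dates), keeping referential integrity declarative via foreign keys. A sketch, with all names being illustrative:

```sql
-- One date dimension; both facts reference it with declarative foreign keys.
CREATE TABLE dbo.DimDate (
    DateKey      int  NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20160131
    FullDate     date NOT NULL,
    MonthNumber  int  NOT NULL,
    CalendarYear int  NOT NULL
);
CREATE TABLE dbo.FactOrders (              -- daily grain
    OrderDateKey int NOT NULL REFERENCES dbo.DimDate (DateKey),
    OrderAmount  money NOT NULL
);
CREATE TABLE dbo.FactSalesMonthly (        -- monthly grain
    MonthEndDateKey int NOT NULL REFERENCES dbo.DimDate (DateKey),
    SalesAmount     money NOT NULL
);
```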
Question No: 120 – (Topic 2)
You are using a SQL Server Integration Services (SSIS) project that is stored in the SSIS catalog. An Environment has been defined in the SSIS catalog.
You need to add the Environment to the project. Which stored procedure should you use?
Answer: B
Explanation:
Environments (Test, Production, etc.) are associated with projects by creating references to the environments in the projects.
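The stored procedure that creates such a reference is catalog.create_environment_reference. A sketch, with the folder, project, and environment names being illustrative:

```sql
-- Reference an existing environment from a project in the same folder.
DECLARE @reference_id bigint;
EXEC SSISDB.catalog.create_environment_reference
     @folder_name = N'ETL',
     @project_name = N'LoadWarehouse',
     @environment_name = N'Production',
     @reference_type = N'R',          -- 'R' = relative (same folder)
     @reference_id = @reference_id OUTPUT;
```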