
Data warehouse – Test Data Management


Test Data Management

Hi All,

Hope all are doing great… it’s been a very long time since I posted through my blog… my sincere apologies 😦

Happy Reading……:-)


Test Data availability is one of the most significant issues that leads to schedule slippage in DWH projects.

So Testers and Test Managers should know what the requirements for the test data are, and they need to define the test data strategy.

The Test Data Management should cover the below points:

  1. Subset of PROD data catered for Testing Requirements

Suppose you are working on an Enhancement project; then you can use the existing data that is already loaded into the warehouse. Even so, you can’t use data pulled directly from Production as-is, since it is sensitive to the Bank and its Customers.

  2. Mask the data as per regulatory and data privacy requirements

All customer-related information should be masked. Masked data is harder to test: suppose the Name column should accept 20 characters (CHAR(20)); if the data is masked, we cannot do Boundary Value Analysis (BVA) on that column.
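One way around this is masking that preserves the length and shape of the value, so length-based checks like BVA still hold. Here is a minimal Python sketch of such format-preserving masking (the rules are an assumption for illustration, not any bank’s actual masking policy):

```python
import random
import string

def mask_name(name: str) -> str:
    """Replace each letter with a random letter of the same case,
    keeping spaces, hyphens and digits as-is, so the masked value
    has exactly the same length and shape as the original."""
    masked = []
    for ch in name:
        if ch.isupper():
            masked.append(random.choice(string.ascii_uppercase))
        elif ch.islower():
            masked.append(random.choice(string.ascii_lowercase))
        else:
            masked.append(ch)  # non-letters survive unchanged
    return "".join(masked)

original = "John O'Brien-Smith"
masked = mask_name(original)
print(len(masked) == len(original))  # length is preserved, so BVA still works
```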

  3. Fabricate Test Data in case of unavailability

Your source system cannot provide data for all your testing scenarios, and lack of test data is a major issue in data warehouse projects. A tester should analyze the system and create the test data in the form of INSERT scripts. The test data should be extracted correctly by the query before it is loaded into the target tables.
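As a sketch of what such fabrication can look like, here is a small Python helper that turns plain dictionaries into INSERT scripts (the STG.ACCOUNT table and its columns are made up for illustration):

```python
from datetime import date

def to_sql_literal(value):
    """Render a Python value as a SQL literal, escaping quotes."""
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, date):
        return f"DATE '{value.isoformat()}'"
    return "'" + str(value).replace("'", "''") + "'"

def insert_script(table, rows):
    """Build one INSERT statement per row dict."""
    stmts = []
    for row in rows:
        cols = ", ".join(row)
        vals = ", ".join(to_sql_literal(v) for v in row.values())
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return "\n".join(stmts)

rows = [
    {"ACCT_ID": 1001, "ACCT_STATUS": "OPEN",   "OPEN_DT": date(2013, 1, 15)},
    {"ACCT_ID": 1002, "ACCT_STATUS": "CLOSED", "OPEN_DT": date(2010, 6, 30)},
]
print(insert_script("STG.ACCOUNT", rows))
```

Generating the scripts from data rather than typing them by hand makes it easy to produce many rows per scenario.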

Typical Challenges involved in Test Data Management:

  1. Delay in Schedule

The test data from the Upstream system should be UAT-tested; otherwise our testing will not be effective. Suppose we are in SIT and our Upstream has not completed UAT: our schedule will be postponed because UAT-tested data from Upstream is unavailable.

  2. Issues to cover Business Cases

Test data for a few business scenarios cannot be produced upstream or through data fabrication. Take reopening of an account as an example: a customer had an account, closed it a few years ago, and his records are kept active with ACCT_STATUS ‘Closed’. When the customer comes back, the bank must somehow identify his existing account and reopen it across all the tables, so that one more record is not created in the warehouse. For this kind of scenario it is difficult to manufacture the data in the given time.

  3. Data Privacy & Compliance

Banks are cautious about customer data because of the risk of data theft in any form. So when we get data from PROD for our Regression Testing, most of the customer-related data will be masked as per the Compliance policy, and we cannot reproduce that data or test those scenarios in our environment.

  4. Dependency on SMEs

SME availability is one of the biggest issues in the data warehouse world.

System Architect – He/She is accountable for the system design based on the business requirements, and works closely with the Data Modeler to design the Source-to-Target mapping. The System Architect needs to fit the current requirement into the existing landscape, or create a new application landscape for it.

Data Modeler – He/She is accountable for designing the S2T (Source-to-Target mapping), and should know the physical and logical design of the system and the database. For any data warehouse project, the S2T is the most important document.

ETL Developers – He/She is accountable for the high-level and low-level design of the requirements into code. An ETL developer should be capable of designing the ETL code without compromising the performance of the extraction and loading mechanism, and should know what kind of transformation logic suits a particular requirement.

Test Analysts / Test Managers – Test Managers should foresee whether all the technical and business requirements can be tested in the given time frame, because the test data and system behaviour may keep changing, which can cause schedule slippage.

  5. Availability of Adequate Data

We should start our System Testing with the source system’s UAT-tested data. If, due to some issue, the source system has not completed its UAT and can provide only system-tested data, that data is not sufficient to continue our System Testing, so we need to fabricate test data in order to kick off our testing.

  6. Volume of Test Data

The data volume threshold will be declared by the System Owners. However, we usually cannot create that declared volume of data in our environment to test performance above and below the threshold, so there might be Spool Space issues when we go to Production.

  7. Multiple Sources & Formats of Data

When multiple source systems participate in the ETL, it is difficult to get data from all of them at the same time to begin our testing; in this case we again need to create mock test files to begin our testing.
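A mock feed can be as simple as a delimited file built in a few lines of Python (the column names and the pipe delimiter here are assumptions, not any real source’s layout):

```python
import csv
import io

def write_mock_feed(fileobj, header, rows, delimiter="|"):
    """Write a delimited mock source feed so testing can start
    before the real source systems deliver their files."""
    writer = csv.writer(fileobj, delimiter=delimiter, lineterminator="\n")
    writer.writerow(header)
    writer.writerows(rows)

buf = io.StringIO()  # in practice this would be open("mock_feed.txt", "w")
write_mock_feed(buf,
                ["CUST_ID", "CUST_NM", "BAL_AMT"],
                [[101, "ALICE", "250.75"], [102, "BOB", "0.00"]])
print(buf.getvalue())
```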

  8. Version Control of Data

Versioning data is very difficult in the data warehouse world, although we can see the history of the data using a few housekeeping columns (EFFT_D, EXPY_D). The data loads and extracts should still be versioned to avoid confusion.
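As a rough illustration of how those housekeeping columns work, here is a Python sketch that picks the record effective on a given date (the history rows are invented; using a high date like 9999-12-31 to mark the open-ended current record is a common convention, not a rule):

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # conventional "still current" expiry

# History rows for one account: (EFFT_D, EXPY_D, ACCT_STATUS)
history = [
    (date(2010, 1, 1), date(2012, 5, 31), "OPEN"),
    (date(2012, 6, 1), HIGH_DATE,         "CLOSED"),
]

def status_as_of(rows, as_of):
    """Return the status whose effective/expiry window covers as_of."""
    for efft, expy, status in rows:
        if efft <= as_of <= expy:
            return status
    return None  # no record was effective on that date

print(status_as_of(history, date(2011, 7, 1)))  # OPEN
print(status_as_of(history, date(2020, 1, 1)))  # CLOSED
```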

Cheers,

Asik

Chapter 12 – Know about Data Types and ETL Testing


Hi All,

Have you created, updated, and deleted a Facebook account to understand DWH concepts? Today in this post let me explain the necessity and importance of data types in ETL Testing.


We will start with some familiar examples:

Can you fill 10 liters of water into a 5-liter container?

“No, the container can hold only 5 liters of water; if you fill more than its capacity it will burst :-(”

Can you use salt instead of sugar to make tea?

“No, then everyone will stop drinking tea :-(”

Can we name a kid using numbers?

“No, if we used numbers, how many duplicate persons would exist in this world? Just imagine if I was named 10215!”

Can anyone have their bank balance as a whole number?

“No, because the money you spend is a fractional amount! You won’t have exactly $5 all the time; it would be $5.28 most of the time.”

Can you have a mobile number of more than 10 digits? Or can you have your mobile number as alphabets?

“No, because the mobile number length is pre-defined, and the number can’t be alphabets.”

Like the above examples, our source files, source tables, and target tables are constructed with limitations. You can’t keep whatever data you want; you can keep only the data that the system can accept.

Every programming language has data types, and most readers of this post will know the basics:

INTEGER, CHAR, VARCHAR, DECIMAL, FLOAT etc.,

The problems developers and testers most often encounter because of data types in the data warehouse world are:

1. The correct data type is not chosen in the source tables

2. The correct length of data is not received from the source system in the source file

3. The source is not sending values as per the data types mentioned

4. Data gets truncated when loading it into the target tables

5. The amount column’s precision is not populated correctly, as Teradata rounds off the value

6. Dates: the source sends them as VARCHAR, but when we load them into the target tables we keep them as DATE, and the format must match
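Point 5 can be illustrated with Python’s decimal module. This is only an analogy for what a DECIMAL(18,2) target column does to extra fractional digits; actual Teradata rounding behaviour depends on the column definition and session settings:

```python
from decimal import Decimal, ROUND_HALF_UP

def load_amount(raw: str, precision: str = "0.01") -> Decimal:
    """Mimic loading into a 2-decimal-place amount column: values with
    more fractional digits than the column allows get rounded, which is
    exactly the kind of silent change a tester must look for."""
    return Decimal(raw).quantize(Decimal(precision), rounding=ROUND_HALF_UP)

print(load_amount("5.285"))  # 5.29 -- precision lost versus the source value
print(load_amount("5.28"))   # 5.28 -- unchanged
```

Comparing the source amount with the loaded amount at full precision is what catches this class of defect.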

The data type and its length are defined in the table’s DDL (Data Definition Language). If you want to know a table’s properties, then please use the below queries:

a) “SHOW TABLE Database.Table_Name” – this gives you the full DDL: data types, data lengths, NOT NULL / NULL and Primary Key definitions

b) “HELP TABLE Database.Table_Name” – this lists the table’s columns and their attributes

As a Tester, what do we need to verify?

Again as I said,

Check the data matches the data type mentioned in the spec.

Check whether any data truncation happened when the source data was loaded into the staging tables.

Check the data in the staging tables is as per the staging tables’ DDL.

Check the target table columns are loaded as per the target tables’ DDL.

If it is a VARCHAR column from source, then take care of spaces, invalid characters etc. right from source to the staging tables, because DataStage will not accept special characters.

If it is a DECIMAL column, make sure the precision is carried through to the target load.

If it is an INTEGER column, make sure you do not get blanks or spaces from source.

If it is a CHAR column, check that the length of the data fits into the column.
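The checks above can be sketched as small validation helpers. A minimal Python sketch (the lengths and scales passed in are placeholders; real values come from the DDL):

```python
import re
from decimal import Decimal, InvalidOperation

def check_char(value: str, length: int) -> bool:
    """CHAR(n)/VARCHAR(n): the value must fit within n characters."""
    return len(value) <= length

def check_integer(value: str) -> bool:
    """INTEGER: reject blanks, spaces, and non-numeric strings."""
    return bool(re.fullmatch(r"-?\d+", value))

def check_decimal(value: str, scale: int) -> bool:
    """DECIMAL(p,s): the value must parse and not exceed the scale."""
    try:
        d = Decimal(value)
    except InvalidOperation:
        return False
    return -d.as_tuple().exponent <= scale

print(check_char("JOHN SMITH", 20))  # fits a CHAR(20) Name column
print(check_integer(" "))            # a blank is not a valid INTEGER
print(check_decimal("5.285", 2))     # too many decimal places for scale 2
```

Running helpers like these over a source file before the load flags bad rows early, instead of discovering them as ETL rejects.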

Like the above, we can add many more scenarios for data type verification.

Hope this blog helps you understand the importance of data types in ETL testing.

See you in my next post

Cheers – Asik