Thursday, 22 December 2016

Managing MDM Projects II – Development, Testing and Deployment

Okay, apologies for the unscheduled delay in this follow-up post. Let's get back to discussing how we manage our MDM projects.
In my previous post, we talked about the first two stages of the "InfoTrellis SMART MDM Methodology", namely "Discovery and Assessment" and "Scope and Approach". In these two stages, we covered activities around understanding business expectations, helping clients formulate their MDM strategy, and helping them identify the scope of an MDM implementation, define the right use cases and settle on the optimal solution approach. I also mentioned that we generally follow a "non-iterative" approach to these stages, as this helps us build a solid foundation before we move on to the actual implementation.

Implementation:

Once the scope of an MDM project is defined and the client agrees to the solution approach, we enter the iterative phases of the project. We group them into two stages in our methodology:
  1. Analysis and Design
  2. Development and QA
Through these stages, we perform detailed requirements analysis, technical design, development and functional testing across several iterations.

Requirements Analysis:

At this stage of the project, high-level business requirements are already available, and we start analyzing and prioritizing which requirements go into which phase. For the first iteration, we typically take up the foundational aspects of MDM such as data model changes, the initial Maintain services, the ETL initial load and related activities. An MDM product consultant interprets the business requirements and works with the technical implementation leads to come up with:
  1. A custom data model with additions and extensions, as per project requirements
  2. A detailed data mapping document that captures source-to-MDM mapping for services as well as the initial load (one-time migration). Data mapping is tricky: data is brought into MDM through several different channels, and every channel has to be identified and mapped individually. Getting this right early helps us avoid surprises later (a simple completeness check of this kind is sketched after this list)
  3. Functional requirements for each of the features – services, duplicate processing and so on
  4. The "Requirements Traceability Matrix", work on which should also start at this stage. This single document captures the traceability of requirements to test cases and comes in handy throughout the implementation.
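To make the completeness idea above concrete, here is a minimal sketch of the kind of cross-channel check we apply to a data mapping document. The channel names, source fields and mandatory MDM attributes are hypothetical placeholders; in practice the check is driven off the full mapping spreadsheet rather than hard-coded dictionaries.

    # Minimal sketch (hypothetical attribute and channel names) of a completeness
    # check for a source-to-MDM data mapping: every inbound channel must map
    # every mandatory MDM attribute.

    MANDATORY_MDM_ATTRIBUTES = {"party_name", "birth_date", "address_line_1", "source_id"}

    # One mapping dict per channel: MDM attribute -> source field (None = unmapped)
    channel_mappings = {
        "online_banking": {"party_name": "CUST_NM", "birth_date": "DOB",
                           "address_line_1": "ADDR1", "source_id": "CUST_ID"},
        "branch_crm":     {"party_name": "FULL_NAME", "birth_date": None,
                           "address_line_1": "STREET", "source_id": "CRM_KEY"},
    }

    def find_mapping_gaps(mappings, mandatory):
        """Return {channel: set of mandatory MDM attributes with no source field}."""
        gaps = {}
        for channel, mapping in mappings.items():
            missing = {attr for attr in mandatory if mapping.get(attr) is None}
            if missing:
                gaps[channel] = missing
        return gaps

    if __name__ == "__main__":
        for channel, missing in find_mapping_gaps(channel_mappings, MANDATORY_MDM_ATTRIBUTES).items():
            print(f"Channel '{channel}' is missing mappings for: {sorted(missing)}")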

Design:

Functional requirements are translated into a detailed technical design for both MDM and ETL. Significant design decisions are documented, the object model and business objects are designed, and detailed design sequence diagrams are created. A similar set of design artifacts is created for the ETL components as well. The key items worked on during the design phase are:
  • Significant use cases – Functional use cases are interpreted from a technical perspective so that the developer has a better grip on the use cases and on how they connect to form the overall solution
  • Detailed design elements – Each technical component is elaborated so that the development team only has to translate the design into MDM code or ETL components
  • Unit test cases – The technical lead plans unit test cases so that 360-degree coverage is ensured during unit testing and most simple unit-level bugs are caught early
Within the sphere of tools that we use, we also automate unit tests wherever that is possible.
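As an illustration only, a design-phase unit test might take the following pytest-style shape. The standardize_phone helper and its rules are invented for this sketch and are not actual MDM product code; real tests are written against the project's own services and mappings.

    # Illustrative unit test in the spirit described above (pytest style).
    # standardize_phone is a made-up helper, not actual MDM product code.

    import re
    import pytest

    def standardize_phone(raw: str) -> str:
        """Strip formatting and return a bare 10-digit North American number."""
        digits = re.sub(r"\D", "", raw)
        if len(digits) == 11 and digits.startswith("1"):
            digits = digits[1:]
        if len(digits) != 10:
            raise ValueError(f"cannot standardize phone number: {raw!r}")
        return digits

    @pytest.mark.parametrize("raw, expected", [
        ("(416) 555-0199", "4165550199"),
        ("1-416-555-0199", "4165550199"),
        ("416.555.0199",   "4165550199"),
    ])
    def test_standardize_phone_valid(raw, expected):
        assert standardize_phone(raw) == expected

    def test_standardize_phone_rejects_short_numbers():
        with pytest.raises(ValueError):
            standardize_phone("555-0199")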

Development:

MDM and ETL development happen in most of our projects. Apart from IBM's MDM suite, we also work with a spectrum of ETL tools such as IBM DataStage, Informatica PowerCenter, SAP PI, IBM Cast Iron, Talend and Microsoft SSIS. Some aspects that we emphasize across all our projects are:
  • Coding standards – The MDM and ETL teams each have coding standards, which are reviewed periodically to keep pace with product releases and broader technology changes. Developers are trained to follow these standards as they write code (a toy example of such an automated check is sketched below)
  • Continuous integration – Most of our clients have SVN repositories, and our development teams use them actively so that the codebase remains a single, integrated unit. We also have local repositories that can be used when a client does not have a repository of their own and explicitly allows us to host their code on our network
  • Peer code review – Every module is reviewed by a peer who acts as another pair of eyes and brings in a different perspective
  • Lead code review – Apart from the peer review, code is also reviewed by the tech lead to ensure development is consistent and error-free
  • Unit testing – Thorough unit testing is driven off the test cases written by the development leads during the design phase. Wherever possible, we also automate unit test cases for repeatability and efficiency
With these checks and balances in place, the developed code moves into the testing phase.
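As one small, hedged example of automating a standards check, the sketch below scans source files for a few illustrative violations. The rules are placeholders rather than our actual standards document, and real projects typically lean on the linters and review tooling available in each environment.

    # A toy pre-commit style check of the kind a team might wire into its
    # repository to enforce coding standards automatically. The rules below
    # are illustrative, not an actual standards document.

    import pathlib
    import re
    import sys

    RULES = [
        (re.compile(r"\t"),                 "tab character (use spaces)"),
        (re.compile(r"System\.out\.print"), "console printing (use the project logger)"),
        (re.compile(r"\s+$"),               "trailing whitespace"),
    ]

    def check_file(path: pathlib.Path) -> list[str]:
        """Return a list of 'file:line: message' violations for one source file."""
        violations = []
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, message in RULES:
                if pattern.search(line):
                    violations.append(f"{path}:{lineno}: {message}")
        return violations

    if __name__ == "__main__":
        problems = [v for arg in sys.argv[1:] for v in check_file(pathlib.Path(arg))]
        print("\n".join(problems) or "No coding-standard violations found.")
        sys.exit(1 if problems else 0)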

Testing:

The QA lead comes up with a comprehensive test strategy covering functional, system, performance and user acceptance testing. The types of testing we participate in differ from project to project, based on client requirements. We typically take up functional testing within the iterative implementation phase; the rest is done once all functional components have been developed and tested thoroughly.
Functional testing is driven off the functional requirements. Our QA lead also reviews the design to understand the significant design decisions, which helps in creating optimal test scenarios. Once the requirements and design documents are reviewed, detailed test scenarios and test cases are created and then reviewed by the business analyst to ensure sufficient coverage. A mix of manual and automated testing is performed, based on the scope allowed in the project. The functional testing process involves the following:
  • Test definition – Scenarios and cases are created, test environments identified, a defect management and tracking methodology established, and test data prepared or planned for
  • Test execution – Every build is subject to a build acceptance test and, once it passes, is tested in detail for functionality
  • Regression runs – Once we enter defect-fixing mode, multiple runs of (mostly automated) regression tests are executed to ensure that the test acceptance criteria are met
  • Test acceptance – Our commitment is to deliver a thoroughly tested product at the end of each iteration. For every release, we ensure that all severity 1 and severity 2 defects are fixed, and any deferred low-severity defects are documented and accounted for in subsequent releases (a minimal sketch of this acceptance gate follows below).
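The acceptance gate in the last bullet can be pictured as a simple check over the open defect list, as in the sketch below. The defect records and field names are invented for illustration; in a real project they come from whichever defect tracking tool the client uses.

    # Minimal sketch of a severity-based release acceptance gate: accept only if
    # no open severity 1 or 2 defects remain, and carry deferred low-severity
    # defects forward explicitly. Defect records here are illustrative.

    open_defects = [
        {"id": "DEF-101", "severity": 3, "status": "open"},
        {"id": "DEF-102", "severity": 1, "status": "fixed"},
        {"id": "DEF-103", "severity": 4, "status": "deferred"},
    ]

    def evaluate_release(defects):
        blocking = [d for d in defects
                    if d["severity"] <= 2 and d["status"] not in ("fixed", "closed")]
        deferred = [d for d in defects if d["status"] == "deferred"]
        return {
            "accepted": not blocking,
            "blocking": [d["id"] for d in blocking],
            "carry_forward": [d["id"] for d in deferred],  # documented for the next release
        }

    if __name__ == "__main__":
        print(evaluate_release(open_defects))
        # -> {'accepted': True, 'blocking': [], 'carry_forward': ['DEF-103']}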

Deployment:

In the deployment stage, we group the following activities together:
  1. System, UAT and performance testing – All aspects of testing that see the implementation as a single functional unit are performed
  2. MDM code deployment – The MDM code is deployed in the production environment, and delta transactions (real time or near real time) are started
  3. One-time migration, or initial load, to MDM – Data from the various source systems is extracted, transformed and loaded into MDM as a one-time exercise.
Deployment is critical because it is the culmination of all the work done up to that point in the project. It is also the point at which the MDM system is exposed to all the other external systems in the client organization. If MDM is part of a larger revamp or a bigger program, there may be many other projects that need to go live or be deployed at the same time. To ensure the deployment is successful, the following key points should be considered:
  • Identify all interconnecting points and come up with an overall plan that covers MDM and all integrating systems
  • If applicable, participate actively in program-level activities as well, to ensure the entire program accounts for everything built as part of the MDM project
  • The initial load usually runs across many days, mostly in 24-hour cycles. Come up with a clear plan, team, roles and responsibilities and, if possible, perform a trial or mock run of the initial load (a back-of-the-envelope cycle plan is sketched below)
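For illustration, the sketch below shows the back-of-the-envelope arithmetic behind a 24-hour-cycle load plan. The record volumes, throughput and load window are hypothetical placeholders; real plans are derived from load rates measured during the mock run.

    # Back-of-the-envelope sketch for planning an initial load in 24-hour cycles.
    # The volumes and throughput figures are hypothetical placeholders.

    import math

    def plan_initial_load(total_records: int,
                          records_per_hour: int,
                          load_window_hours: int = 20):   # leave slack in each 24h cycle
        per_cycle = records_per_hour * load_window_hours
        cycles = math.ceil(total_records / per_cycle)
        plan = []
        remaining = total_records
        for day in range(1, cycles + 1):
            batch = min(per_cycle, remaining)
            plan.append({"day": day, "records": batch})
            remaining -= batch
        return plan

    if __name__ == "__main__":
        for cycle in plan_initial_load(total_records=45_000_000, records_per_hour=800_000):
            print(cycle)   # e.g. {'day': 1, 'records': 16000000} ... spread across 3 days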
There is typically a post-deployment support period during which we monitor the MDM hub to ensure master data is being created as planned. If needed, optimizations and adjustments are made so that the MDM hub performs as desired.
Once deployment is successfully completed, don't forget to celebrate with the project team!

Thursday, 15 December 2016

InfoTrellis Expands Executive Team with Addition of Former IBM Exec, John Gairhan

InfoTrellis hires veteran sales executive from IBM as it positions itself to lead the customer data management market into the next generation.
InfoTrellis, an innovative provider of Customer 360 software, announced the hire of John Gairhan as Vice President of Sales. The addition further enhances the company’s experienced executive leadership team, all of whom understand the customer data and analytics market and how it is evolving to address organizations’ most pressing use cases.
As VP of Sales, Gairhan will leverage his 25 years of experience in sales, strategy, marketing and development of enterprise software to help solve clients’ highest priority customer big data problems. According to Forbes and Forrester, 48 percent of big data projects currently underway are focused on customer analytics, yet 88 percent of customer data is ignored for analysis because it is not easy to access in various silos. Gairhan will assist clients in addressing those customer data challenges to achieve Customer 360 and power the omnichannel experience.
InfoTrellis is a pioneer in the analytics and data management industry. InfoTrellis’ flagship product, AllSight, integrates all data – external and internal – to deeply understand customers, learn more about them, and provide an actionable 360-profile to all business users. Powered by machine learning, pre-built analytics and contextual matching to build the customer 360 from granular pieces of data, AllSight also enriches that record with customer intelligence and insights. Marketing and sales users access this enhanced customer data to personalize digital marketing and sales campaigns, while services utilize it to power personalized omnichannel customer care.
“We are unique in that our customer data system is purposely built to solve today’s biggest data challenges and maximize the value of data,” said Sachin Wadhwa, COO and co-founder at InfoTrellis. “The success of InfoTrellis is due to our deep technical expertise and industry knowledge, both of which will benefit from Gairhan’s experience as a former big data worldwide sales leader and promoter of open source technologies such as Apache Spark and Apache Hadoop. His insight will no doubt enable us to take InfoTrellis to the next level.”
The majority of Gairhan’s industry experience has been focused on data management. He led the worldwide Big Data sales team at IBM and worked as a consultative big data seller, partnering with Fortune 500 companies to define and execute their big data strategy and architecture while leveraging their existing IT investments. He has also held leadership positions in MDM strategy and marketing.
“For the past 25 years I have evangelized emerging data and analytics technology,” said Gairhan. “I have a passion to help organizations become more intelligent about their customers. My approach and strategy perfectly align with InfoTrellis’ vision and I am confident our exceptional executive team will reimagine the customer data and analytics market.”

About InfoTrellis

Based in Toronto, Canada, InfoTrellis has been a leading information management consulting and technology company since 2007, serving data-driven companies with the expertise and products required in today’s data-heavy world. As a pioneer in the big data, data integration and master data management market, InfoTrellis offers AllSight ConnectID and Veriscope in its comprehensive suite of data management products. AllSight ConnectID gives companies a complete Customer 360 view of each customer to facilitate an effective omnichannel strategy. Veriscope provides insight into the quality and usage of Master Data for successful Master Data Management and governance.

Thursday, 8 December 2016

MDM for Regulatory Compliance in the Banking Industry

Banking Regulations – Overview
Managing regulatory issues and risk has never been so complex. Regulatory expectations continue to rise, with increased emphasis on an institution's ability to respond to the next potential crisis. Financial institutions continue to face challenges implementing a comprehensive enterprise-wide governance program that meets all current and future regulatory expectations. There has been a phenomenal rise in expectations related to data quality, risk analytics and regulatory reporting.

The following are some of the US regulations for which MDM and customer 360 reports can be used to support compliance:
FATCA (Foreign Account Tax Compliance Act)
FATCA was enacted to target non-compliance by U.S. taxpayers using foreign accounts. The objective of FATCA is the reporting of foreign financial assets. The ability to align all key stakeholders, including operations, technology, risk, legal, and tax, is critical to successfully comply with FATCA.
OFAC (Office of Foreign Asset Control)
The Office of Foreign Assets Control (OFAC) administers a series of laws that impose economic sanctions against hostile targets to further U.S. foreign policy and national security objectives. The bank regulatory agencies cooperate in ensuring that financial institutions comply with these regulations.
FACTA (Fair and Accurate Credit Transactions Act)
Its primary purpose is to reduce the risk of identity theft by regulating how consumer account information (such as Social Security numbers) is handled.
HMDA (Home Mortgage Disclosure Act)
This Act requires financial institutions to provide mortgage data to the public. HMDA data is used to identify probable housing discrimination in various ways.
Dodd Frank Regulations
The primary goal of the Dodd-Frank Wall Street Reform and Consumer Protection Act was to increase financial stability. The law places major regulations on the financial industry.
Basel III
A wide sweeping international set of regulations that many US banks must adhere to is Basel III. Basel III is a comprehensive set of reform measures, developed by the Basel Committee on Banking Supervision, to strengthen the regulation, supervision and risk management of the banking sector.
What do banks need to meet regulatory requirements?
To meet the regulatory requirements described in the previous section, banks need an integrated systems environment that addresses requirements such as enterprise-wide data access, a single source of truth for customer details, customer identification programs, data auditability and traceability, customer data synchronization across multiple heterogeneous operational systems, ongoing data governance, and risk and compliance reporting.
How can MDM help?

Enterprise view of customer data
MDM solutions provide an enterprise view of all customer data to ensure that a customer is in compliance with government-imposed regulations (e.g. FATCA, Basel II/III, Dodd-Frank, HMDA, OFAC, AML) and facilitate data linking for easy access.
Compliance Users
Users who satisfy the compliance criteria are able to retrieve customer information such as name, address, contact methods and demographics from the MDM solution. They are able to ensure customer compliance while creating reports, performing reviews and monitoring customers against watch lists.
Compliance Applications
FATCA supporting applications, Dodd-Frank reporting applications, HMDA compliance reporting applications, and Basel II/III compliance applications receive a data extract from the MDM solution containing detailed customer information such as names, addresses, contact methods, identifiers, demographics and customer-to-account relationships, which enhances compliance reporting and customer analytics.
Compliance users can ensure compliance with all FATCA laws, create reports, link customer information to create HMDA reports, and provide a complete financial profile of all commercial customers to ensure compliance with Basel II and III regulations.
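As a rough illustration of what such an extract can carry, the sketch below models a single customer-centric record. The field names and structure are placeholders, not an actual MDM product schema.

    # Illustrative shape of a customer extract record fed from the MDM hub to
    # downstream compliance applications. Field names are placeholders.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AccountRelationship:
        account_id: str
        role: str              # e.g. "OWNER", "SIGNATORY"

    @dataclass
    class ComplianceExtractRecord:
        party_id: str
        full_name: str
        addresses: List[str]
        contact_methods: List[str]
        identifiers: List[str]          # e.g. tax IDs used for FATCA classification
        demographics: dict
        account_relationships: List[AccountRelationship] = field(default_factory=list)

    record = ComplianceExtractRecord(
        party_id="P-000123",
        full_name="Jane Example",
        addresses=["100 Example St, Toronto, ON"],
        contact_methods=["+1-416-555-0100"],
        identifiers=["TIN:999-99-9999"],
        demographics={"country_of_residence": "CA", "date_of_birth": "1980-01-01"},
        account_relationships=[AccountRelationship("ACCT-77", "OWNER")],
    )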
Regulatory Risk Users
Regulatory risk users are able to use customer data from the MDM solution, create reports on an ad hoc basis, and perform annual reviews to ensure customers are compliant with risk regulations. These users are also able to check whether customers appear on existing watch lists through pre-configured alerts, and to update the MDM solution as required during annual reviews.
Regulatory Risk Applications
The MDM solution supplies detailed customer information such as names, addresses, identifiers, demographics and customer-to-account relationships to the applications supporting AML, OFAC, KYC and fraud analysis, so that they can determine compliance with regulations such as AML and OFAC standards, determine whether the proper KYC data has been captured for all customers, and monitor fraudulent activity by any customer.
The MDM solution receives a close-account transaction from the AML applications if the regulatory risk user determines that the customer relationship must be exited for AML non-compliance. OFAC applications update a customer's watch list status within the MDM solution and send add/update/delete customer alert transactions to monitor customers on OFAC watch lists.
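A highly simplified illustration of these two inbound flows is sketched below. The in-memory dictionary stands in for the MDM hub, and the statuses, actions and function names are invented for the example; a real implementation would go through the product's maintenance services.

    # Simplified illustration of the two inbound flows described above: an
    # AML-driven close-account transaction and an OFAC watch-list status update.
    # The "hub" here is just an in-memory dict standing in for the MDM solution.

    hub = {
        "P-000123": {"status": "ACTIVE", "watch_list_status": "NOT_LISTED"},
    }

    def apply_close_account(party_id: str, reason: str) -> None:
        """AML application asks the hub to mark the relationship for exit."""
        hub[party_id]["status"] = "PENDING_EXIT"
        hub[party_id]["exit_reason"] = reason

    def apply_watch_list_update(party_id: str, action: str) -> None:
        """OFAC application adds/updates/deletes the party's watch-list flag."""
        hub[party_id]["watch_list_status"] = {
            "add": "LISTED", "update": "LISTED", "delete": "NOT_LISTED"
        }[action]

    apply_watch_list_update("P-000123", "add")
    apply_close_account("P-000123", "AML non-compliance")
    print(hub["P-000123"])
    # {'status': 'PENDING_EXIT', 'watch_list_status': 'LISTED', 'exit_reason': 'AML non-compliance'}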
Conclusion
MDM solutions, when implemented properly, can provide critical information to banks that have to comply with a number of regulations across many countries. At InfoTrellis, we have helped many organizations achieve these goals through IBM MDM implementations. You can contact us with further queries by sending an email to marketing@infotrellis.com.

Tuesday, 29 November 2016

Common Sense is very uncommon

“Effort is important, but knowing where to make an effort makes all the difference!”
A few days ago, at the end of a very intense release, one of our long-term clients asked us what the secret is behind our team's high-quality testing, despite the very aggressive timelines and vast scope of work she sets for us. She was very interested in understanding what we do differently from the many large SIs she has used in the past, who, according to her, were always struggling to survive in a highly time-conscious and fast-changing environment. We went back with a presentation to the client's delivery team, which was highly appreciated by one and all. This blog provides a gist of the practices we follow to optimize our testing effort.
The fundamental principles that help us maintain an optimum balance between scope, time and cost while ensuring high-quality delivery are Build for Reuse, Automation and Big Picture Thinking.
To understand these principles better, let us consider the real project that we just concluded for this client. The project had three major work streams – MDM, ETL and BPM. It ran for 8 months and was executed using the InfoTrellis Smart MDM™ methodology. In total, 3 resources were dedicated to testing activities: 1 QA lead and 2 QA analysts. Of the allocated 8 months (36 weeks), we spent 6 weeks on discovery and assessment, 6 weeks on scope and approach, and 4 weeks on the final deployment. The remaining 20 weeks, spent on analysis, design, development and QA, were split into 3 iterations of 7, 7 and 6 weeks respectively. The QA activities in this project were spread over these 3 iterations.
Build for Reuse:
While every project, and each iteration within a project, has its unique set of requirements, team members and activities, there are always a few tasks that are repetitive and remain the same across iterations and across projects. Test design techniques and templates for test strategy, test cases, test reporting and test execution processes are some assets that can be heavily reused.
Being experts in this field, we have built a rich repository of assets that can be reused across different projects. During the 1st iteration, the team used the whole 4 weeks, which included some time for tweaking the test assets to suit the specific project's needs. Thanks to the effort put into the 1st iteration to set up reusable assets, the team was able to complete each of the next two iterations in 2 weeks. On the whole, we saved 2 weeks' [6 man-weeks'] worth of effort in the next two iterations with the help of reusable assets.
Automation:
The task of testing encompasses the following four steps:
  • Creation of test data
  • Converting data to appropriate input formats
  • Execution & validation of test cases
  • Preparation of reports based on the test results
With 500 test cases in the bucket, the manual approach would have taken us around 675 hours, or approximately 17 weeks, to complete the testing. However, by using the various automation tools that we have built in-house, such as ITLS Service Tester, ITLS XML Generator, ITLS Auto UI and ITLS XML Comparator, among others, we were able to complete our testing within 235 hours. The split of the effort is as follows:
The automation set-up and test script preparation took us approximately 135 hours. By investing time in this effort, we saved around 440 hours, or 11 weeks, even while executing 3 rounds of exhaustive regression tests. This was a net saving of 33 man-weeks for the QA team.
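The sketch below is not the ITLS tooling itself, only a deliberately small illustration of the idea behind automated response comparison: check an actual service response against a baseline while ignoring volatile elements such as timestamps. The tag names and ignore list are placeholders.

    # Simple illustration of the kind of check an XML comparator automates during
    # regression runs: compare an actual service response against a baseline,
    # ignoring volatile elements such as timestamps.

    import xml.etree.ElementTree as ET

    IGNORED_TAGS = {"lastUpdateDate", "transactionId"}

    def normalize(element):
        """Recursively convert an element to a comparable (tag, text, children) tuple."""
        children = [normalize(child) for child in element
                    if child.tag not in IGNORED_TAGS]
        return (element.tag, (element.text or "").strip(), children)

    def responses_match(baseline_xml: str, actual_xml: str) -> bool:
        return normalize(ET.fromstring(baseline_xml)) == normalize(ET.fromstring(actual_xml))

    baseline = "<party><name>JANE DOE</name><lastUpdateDate>2016-01-01</lastUpdateDate></party>"
    actual   = "<party><name>JANE DOE</name><lastUpdateDate>2016-11-29</lastUpdateDate></party>"
    print(responses_match(baseline, actual))   # True: dates differ but are ignored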
Big Picture Thinking: 
One day a traveler, walking along a lane, came across three stonecutters working in a quarry. Each was busy cutting a block of stone. Interested to find out what they were working on, he asked the first stonecutter what he was doing, and the stonecutter said, "I am cutting a stone!" Still no wiser, the traveler turned to the second stonecutter and asked him what he was doing. He said, "I am cutting this block of stone to make sure that it is square and its dimensions are uniform, so that it will fit exactly in its place in a wall." A bit closer to finding out what the stonecutters were working on but still unclear, the traveler turned to the third stonecutter. He seemed to be the happiest of the three and, when asked what he was doing, replied: "I am building a cathedral."
The system under test had multiple work streams – MDM, ETL and BPM – that interacted with each other, and the QA team was split to work on the individual work streams. Like the third stonecutter, the team not only knew how its own work stream was expected to function but also how each stream would fit into the entire system.
Thus we were able to avoid writing unnecessary test cases that would have resulted from duplicating validations across multiple work streams, or from scenarios that are not realistic when the system is considered as a whole.
Our ability to see the big picture thus saved us 128 hours, or 3.2 weeks. To avoid such effort going down the drain, we have our QA leads participate in the scope and approach phase so that they grasp the "Big Picture" and educate their team members.
Conclusion:
Using our testing approach, we saved more than 16 weeks [48 man-weeks] of QA effort and were thus able to complete the project in 8 months. Without this approach, the project could easily have run for over 12 months. It also meant that we did not require the services of a team of 6 InfoTrellis resources [1 project manager, 0.5 architect, 0.5 dev lead, 1 developer, 1 QA lead and 2 QA analysts] for 4 additional months, i.e. 24 man-months, and avoided tying up the many client resources who would otherwise have stayed on this project.
What we have described in this blog is only common sense, well known to everyone in our industry. However, common sense is very uncommon. At InfoTrellis, we have made full use of this common sense and are able to deliver projects faster and with better quality. This has helped our clients realize value from their investments much sooner than anticipated, and at a much lower total cost of ownership.