Tuesday, 29 November 2016

Common Sense is very uncommon

“Effort is important, but knowing where to make an effort makes all the difference!”
A few days ago, at the end of a very intense release, one of our long-term clients asked what the secret was behind our team’s high-quality testing effort, despite the very aggressive timelines and vast scope of work that she sets for us. She was keen to understand what we do differently from the many large system integrators she had used in the past, who, according to her, were always struggling to survive in a highly time-conscious and fast-changing environment. We went back to the client’s delivery team with a presentation, which was highly appreciated by one and all. This blog provides a gist of the practices we follow to optimize our testing effort.
The fundamental principles that help us in managing an optimum balance between Scope, Time and Costs while ensuring high quality delivery are Build for Reuse, Automation and Big Picture Thinking.
To understand these principles better, let us consider the real project that we just concluded for this client. The project had three major work streams – MDM, ETL and BPM. It ran for 8 months and was executed using the InfoTrellis Smart MDM™ methodology. In total, 3 resources were dedicated to testing activities: 1 QA Lead and 2 QA Analysts. Of the allocated 8 months (36 weeks), we spent 6 weeks on discovery & assessment, 6 weeks on scope & approach and 4 weeks on the final deployment. The remaining 20 weeks, spent on Analysis, Design, Development and QA, were split into 3 iterations of 7, 7 and 6 weeks respectively. The QA activities in this project were spread over these 3 iterations.
 Build for Reuse:
While every project, and each iteration within a project, has its unique set of requirements, team members and activities, there will always be a few tasks that are repetitive and remain the same across iterations and across projects. Test design techniques, templates for test strategy, test cases and test reporting, and test execution processes are some assets that can be heavily reused.
Being experts in this field, we have built a rich repository of assets that can be reused across different projects. During the 1st iteration, the team utilized the whole 4 weeks, which included some time for tweaking the test assets to suit the specific project’s needs. Thanks to the effort put into the 1st iteration to set up reusable assets, the team was able to complete each of the next two iterations in 2 weeks. On the whole, we saved 2 weeks [6 man-weeks] of effort in the next two iterations with the help of reusable assets.
Automation:
The task of testing encompasses the following four steps.
  • Creation of test data
  • Converting data to appropriate input formats
  • Execution & validation of test cases
  • Preparation of reports based on the test results
With 500 test cases in the bucket, the manual method would have taken us around 675 hours, or approximately 17 weeks, to complete the testing. However, by using the various automation tools that we have built in-house, such as ITLS Service tester, ITLS XML Generator, ITLS Auto UI, ITLS XML Comparator and others, we were able to complete our testing within 235 hours. The split of the effort is as follows:
The automation setup and test script preparation took us approximately 135 hours. But by investing time in this effort, we saved around 440 hours, or 11 weeks, even while executing 3 rounds of exhaustive regression tests. This was a net saving of 33 man-weeks for the QA team.
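To make the comparator step concrete, below is a minimal Java sketch of the kind of check such a tool automates: load a baseline (expected) response and the actual response, walk both documents in parallel, and report every element or value that differs. The class and file names are hypothetical placeholders, not the actual ITLS XML Comparator, which additionally handles element ordering, ignorable fields and consolidated reporting.

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of automated XML response comparison.
// File and class names are placeholders; not the actual ITLS XML Comparator.
public class SimpleXmlComparator {

    public static void main(String[] args) throws Exception {
        Document expected = load(new File("expected_response.xml")); // baseline captured earlier
        Document actual   = load(new File("actual_response.xml"));   // response from the run under test
        compare(expected.getDocumentElement(), actual.getDocumentElement(), "");
    }

    private static Document load(File file) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(file);
        doc.getDocumentElement().normalize();
        return doc;
    }

    // Walk both trees in parallel and report element-name, value and child-count differences.
    private static void compare(Element expected, Element actual, String path) {
        String here = path + "/" + expected.getNodeName();
        if (!expected.getNodeName().equals(actual.getNodeName())) {
            System.out.println("ELEMENT MISMATCH at " + here + ": expected <" + expected.getNodeName()
                    + "> but found <" + actual.getNodeName() + ">");
            return;
        }
        List<Element> expectedChildren = childElements(expected);
        List<Element> actualChildren   = childElements(actual);
        if (expectedChildren.isEmpty() && actualChildren.isEmpty()) {
            // Leaf element: compare text values.
            String expText = expected.getTextContent().trim();
            String actText = actual.getTextContent().trim();
            if (!expText.equals(actText)) {
                System.out.println("VALUE DIFF at " + here + ": '" + expText + "' vs '" + actText + "'");
            }
            return;
        }
        if (expectedChildren.size() != actualChildren.size()) {
            System.out.println("CHILD COUNT DIFF at " + here + ": "
                    + expectedChildren.size() + " vs " + actualChildren.size());
        }
        int common = Math.min(expectedChildren.size(), actualChildren.size());
        for (int i = 0; i < common; i++) {
            compare(expectedChildren.get(i), actualChildren.get(i), here);
        }
    }

    private static List<Element> childElements(Element parent) {
        List<Element> result = new ArrayList<>();
        NodeList nodes = parent.getChildNodes();
        for (int i = 0; i < nodes.getLength(); i++) {
            if (nodes.item(i).getNodeType() == Node.ELEMENT_NODE) {
                result.add((Element) nodes.item(i));
            }
        }
        return result;
    }
}
```

In practice, much of the tool-building effort goes into handling volatile fields such as timestamps and generated IDs, which a comparator typically excludes from the diff before reporting.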
  
Big Picture Thinking: 
One day a traveler, walking along a lane, came across 3 stonecutters working in a quarry. Each was busy cutting a block of stone. Interested to find out what they were working on, he asked the first stonecutter what he was doing, and the stonecutter said, “I am cutting a stone!” Still no wiser, the traveler turned to the second stonecutter and asked him what he was doing. He said, “I am cutting this block of stone to make sure that it’s square and its dimensions are uniform, so that it will fit exactly in its place in a wall.” A bit closer to finding out what the stonecutters were working on but still unclear, the traveler turned to the third stonecutter. He seemed to be the happiest of the three and, when asked what he was doing, replied: “I am building a cathedral.”
The system under test had multiple work streams – MDM, ETL and BPM – that interacted with each other, and the QA team was split to work on the individual work streams. Like the 3rd stonecutter, the team knew not only how their own work streams were expected to function but also how each would fit into the entire system.
Thus we were able to avoid writing unnecessary test cases that could have resulted from duplicating validations across multiple work streams, or from scenarios that would not have been realistic when considering the system as a whole. This is captured in the table below.
Our ability to see the big picture thus saved us 128 hours, or 3.2 weeks. To avoid such effort going down the drain, we have our QA leads participate in the scope & approach phase so that they can grasp the “Big Picture” and educate their team members.
Conclusion:
Using our testing approach, we saved more than 16 weeks [48 man-weeks] of QA effort and were thus able to complete the project in 8 months. Without this approach, the project could easily have run for over 12 months. It also meant that we did not require the services of a team of 6 InfoTrellis resources [1 Project Manager, 0.5 Architect, 0.5 Dev Lead, 1 Developer, 1 QA Lead and 2 QA Analysts] for 4 additional months, i.e. 24 man-months, and we avoided tying up the many client resources who would otherwise have been on this project.
What we have described in this blog is only common sense, well known to everyone in our industry. However, common sense is very uncommon. At InfoTrellis, we have made full use of this common sense and are able to deliver projects faster and with better quality. This has helped our clients realize value from their investments much sooner than anticipated and at a much lower total cost of ownership.

Wednesday, 23 November 2016

Virtual and Physical MDM in the same box, best of both worlds!

With the introduction of Master Data Management v11, IBM created a new implementation style that combines the strengths of both the MDM Physical and Virtual editions. While MDM Physical is more suited to the “centralized” MDM style (system of record), and MDM Virtual is aligned with the “registry” MDM style (system of reference), MDM Hybrid uses a “coexistence” style to provide a mixed system of reference & record. This article gives an overview of the MDM Hybrid implementation style and a couple of interesting lessons learned during a recent InfoTrellis engagement.
MDM Hybrid was first introduced in MDM v11.0 in June 2013 to leverage capabilities of both MDM Virtual and MDM Physical which themselves have grown considerably in capability in recent years. However, MDM Hybrid is still not yet mainstream due to a handful of reasons. One, it does represent a relative increase in complexity and requires practitioners competent in both MDM Virtual and MDM Physical. Two, it can be a difficult migration from an existing MDM Physical or MDM Virtual implementation (although the transition from virtual to hybrid is the easier of the two). Hopefully this article can help alleviate some of those concerns! We at InfoTrellis believe that MDM Hybrid is a strong offering that gives us the capability to have both Virtual MDM and Physical MDM in the same box – the best of both worlds. Additionally, MDM Hybrid is excellent for new MDM implementations, and can be implemented relatively quickly in a basic manner. IBM has also provided a detailed implementation path in its Knowledge Center (see link below).
When describing MDM Hybrid to clients, I have been couching it in terms of a “Virtual Side” and a “Physical Side”, as the product is still mostly segregated. Between the two “sides” is a fence traversed by a physical MDM service. This MDM service, persistEntity, is one of the workhorses of any MDM Virtual implementation and will be the focus of much of the customization.

Member records (source data) are contained in the MDM Virtual side, processed through the powerful probabilistic matching process that MDM Virtual provides, and assembled into a “golden record” composite view that is then mapped into the MDM Physical schema and “thrown over the fence” to the MDM Physical side using the persistEntity service. The “golden record” is persisted on the physical side. Physical MDM services such as addParty and updateParty are disabled, and modifications to attributes mastered by the MDM Virtual side are not permitted. Other attributes, however, can be modified. For example, name types not in the golden record, privacy preferences, and product or contract data can be modified using standard Physical MDM services.
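As a rough illustration of that round trip, here is a small Java sketch that models the flow end to end. The types and the persistEntity stub are purely illustrative stand-ins, not the IBM MDM object model or service API.

```java
import java.util.List;

// Conceptual sketch of the Hybrid round trip. All types and method names are
// illustrative stand-ins, not the IBM MDM API.
public class HybridFlowSketch {

    record SourceRecord(String sourceSystem, String memberId, String name) {}
    record GoldenRecord(String entityId, String bestName, List<SourceRecord> members) {}
    record PhysicalParty(String partyId, String name) {}

    public static void main(String[] args) {
        // 1. Member records land on the Virtual side and are matched probabilistically.
        List<SourceRecord> members = List.of(
                new SourceRecord("CRM",    "C-100", "J. Smith"),
                new SourceRecord("CLAIMS", "CL-42", "John Smith"));
        GoldenRecord golden = matchAndCompose(members);

        // 2. The composite ("golden record") view is mapped to the Physical schema and
        //    "thrown over the fence" via a persistEntity-style call.
        persistEntity(mapToPhysical(golden));

        // 3. Virtually mastered attributes are not modified directly on the Physical side;
        //    only attributes outside the golden record (privacy preferences, contracts, etc.)
        //    would be maintained through standard Physical services.
    }

    // Stand-in for the Virtual side's probabilistic matching and composite view rules.
    static GoldenRecord matchAndCompose(List<SourceRecord> members) {
        return new GoldenRecord("E-1", members.get(1).name(), members);
    }

    // Stand-in for the mapping of the composite view into the Physical schema.
    static PhysicalParty mapToPhysical(GoldenRecord golden) {
        return new PhysicalParty(golden.entityId(), golden.bestName());
    }

    // Stand-in for the persistEntity service that persists the golden record.
    static void persistEntity(PhysicalParty party) {
        System.out.println("Persisting golden record " + party.partyId() + " (" + party.name() + ")");
    }
}
```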
Special care needs to be taken when implementing the other MDM domains such as contract or product. The persistEntity service could initiate a call to deleteParty if the golden record no longer exists in the system – this could cause issues if there are Contract Roles or Party Product Roles. And how would one establish these in the first place? InfoTrellis recently implemented MDM Hybrid with both the party and contract domains at a client, and we came away with some interesting lessons in how to accomplish this.
At our client, a large insurance corporation, we were charged with implementing MDM Hybrid using version 11.3 and using the Contract domain as well as the Party domain with both Persons and Orgs. While we implemented many pieces of the contract domain, this discussion will be simplified to contain the entities and attributes below.
In order to maintain contract role data, we had to create a “backpack” (and I’m sorry for the number of metaphors here – it helped us to explain this process to the client and has stuck in my mind as a method of explanation). This backpack would contain all the data needed to establish a contract role in Physical MDM and would accompany a party as it was processed by Virtual MDM and then get picked up by the persistEntity call on the round trip back into Physical MDM. On the virtual side, this data would not be used for searching or matching. On the Physical side, we had to create a transient data object (TDO) that would be mapped using the graphical data mapper (GDM) included in the workbench. This TDO is the backpack in the metaphor. Also, it needed to be added as an extension object under the TCRMOrganizationBObj & TCRMPersonBObj.
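Purely as an illustration of the backpack idea, here is a small Java sketch. The class and field names are hypothetical, not the generated TDO or the GDM mapping; the point is simply that the contract role data is carried alongside the party, ignored by Virtual matching, and unpacked when the persistEntity customization re-establishes the roles on the Physical side.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the "backpack" pattern. Class and field names are hypothetical;
// in the real implementation this is a transient data object (TDO) mapped with the
// graphical data mapper and attached as an extension under TCRMPersonBObj / TCRMOrganizationBObj.
public class BackpackSketch {

    // The backpack: everything needed to re-establish a contract role on the Physical side.
    static class ContractRoleBackpack {
        final String contractNumber;
        final String roleType;   // e.g. policy holder, beneficiary
        final String startDate;

        ContractRoleBackpack(String contractNumber, String roleType, String startDate) {
            this.contractNumber = contractNumber;
            this.roleType = roleType;
            this.startDate = startDate;
        }
    }

    // Stand-in for a party travelling through the Virtual side.
    static class PartyWithBackpack {
        String sourceSystem;
        String memberId;
        String fullName;  // used for search/match on the Virtual side
        final List<ContractRoleBackpack> backpack = new ArrayList<>(); // ignored by matching, just carried along
    }

    public static void main(String[] args) {
        PartyWithBackpack party = new PartyWithBackpack();
        party.sourceSystem = "POLICY_ADMIN";
        party.memberId = "P-7001";
        party.fullName = "Jane Doe";
        party.backpack.add(new ContractRoleBackpack("CN-2016-001", "POLICY_HOLDER", "2016-01-15"));

        // On the round trip back into Physical MDM, the persistEntity customization
        // unpacks the backpack and establishes the contract roles for the golden record.
        for (ContractRoleBackpack role : party.backpack) {
            System.out.println("Establish role " + role.roleType + " on contract "
                    + role.contractNumber + " for party " + party.fullName);
        }
    }
}
```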
I hope this overview of the MDM Hybrid system has been informative. Unfortunately, as I wrote it, I noticed a number of components that I left out in the interest of giving a (hopefully) better overview of the case study without droning on for 20 pages. These include – handling role locations, customizing deleteParty, modifying the Virtual algorithm, constructing the composite view, and the framework we constructed to interface between Physical and Virtual MDM. We can dive deeper into those topics in a future article.
MDM Hybrid provides a number of exciting new capabilities, and with the flexibility inherent in the IBM MDM product, there remain many unexplored avenues and multiple ways of doing the same thing. Between MDM Virtual, MDM Physical, and now MDM Hybrid, there’s no excuse to avoid creating a Master Data Management solution in your organization. If you’re considering an MDM Hybrid implementation (or any other IBM MDM solution), give us a call!

Monday, 21 November 2016

Initiate your Data Governance

Data Governance is an imperative for enterprises that want to realize the full value of the data available to them. InfoTrellis’ Data Governance Methodology follows a multi-phased, iterative approach with 4 stages – Initiate, Define, Deploy and Optimize. This article lists the important considerations that are part of the Initiate stage of our Data Governance Methodology.
The Debate – Small or Big? The very first step of Data Governance is also the most ambiguous for most enterprises. The most common debate is whether to start an independent Data Governance program across the enterprise or to start with the specific problem at hand and then scale up. Most enterprises that are successful with Data Governance start small, with a specific domain or business area, to solve a data issue and then expand. The few enterprises that aim for enterprise-wide Data Governance break their program into small iterative steps to achieve success. So whatever the approach – small or big – small iterative steps are the key. It is critical to resolve this debate and finalize the strategy before embarking on further activities.
Maturity Assessment – Assessing the current state of information management is essential to understand why things went wrong and how they can be fixed. Clients can adopt any of the leading maturity standards for their Data Governance program, including the one from InfoTrellis. These standards are referenced to mark the current maturity level of a particular business area. It is normal for different business areas of an enterprise to be at different maturity levels. We suggest that a specific, prescriptive roadmap be defined considering the individual maturity level of each area. The maturity assessment should be reviewed after each iterative deployment of Data Governance policies.
Data Governance Roadmap – The roadmap defines the steps to be taken to reach the desired state of Data Governance. This can be specific to a business area at the start and can be refined after each incremental Data Governance deployment.
Secure Sponsorship – A compelling business case that highlights the business problem, its impact on revenue and cost, and how Data Governance can resolve it is important for getting the attention of executives and securing their sponsorship. Many Data Governance programs lose steam midway due to inadequate sponsorship. Executives have to continuously send messages to the team, educating them and reinforcing the importance of achieving data governance goals.
Assign an effective Program Manager – Data Governance programs are complex, requiring top-notch process and people skills, and they need continuity. Hence, identifying the right program manager with clearly defined roles and responsibilities is very important for a successful program. We have seen many programs suffer due to ambiguous or frequently changing Program Managers.
In conclusion, each enterprise begins its data governance program differently. But addressing each of the above considerations and activities proactively helps set up a path to success.
Check out Part 2 of this 4-part series on Data Governance from InfoTrellis – Define your Data Governance.
Please send us a note with your queries and feedback.

Wednesday, 16 November 2016

InfoTrellis Adds Veteran Visionary to Executive Team

Complementing its deep expertise in master data management and big data strategy, InfoTrellis hires David Corrigan, a former marketing executive for IBM’s Big Data & Analytics business unit.
InfoTrellis, a leading provider of Big Data, MDM and Customer 360 software and services, announced the addition of David Corrigan as Chief Marketing Officer and Vice President of Product Management, rounding out its advanced team of technical experts.
InfoTrellis will leverage Corrigan’s experience with product strategy, marketing and messaging to inspire the company to reimagine customer data management and apply emerging technology to address new use cases for the full Customer 360 view. Monetate reported that 40% of consumers buy more from retailers who personalize the shopping experience across channels. InfoTrellis addresses the personalization demand head-on with cutting-edge technology in AllSight ConnectID.
Corrigan will drive product strategy and messaging for InfoTrellis and AllSight ConnectID. He will work with early adopter clients to shape and evangelize their big data strategy while building the business case for new data technologies. AllSight ConnectID is a “next generation” managed data hub as it integrates all data, centralizing internal data with external sources. Built-in analytics cleanse, match and build the complete customer context to share across the enterprise. Organizations can thus deeply understand their customers as individuals, providing a consistent, personalized customer experience and messaging across all channels and developing products they know will fill actual customer needs.
“The ability to provide the omnichannel experience consumers now demand is predicated on big data and no one does big data better than InfoTrellis,” said Sachin Wadhwa, COO and co-founder at InfoTrellis. “AllSight ConnectID is unique in its ability to learn everything about a customer and create actionable insights. With David joining our team, we are able to develop innovative product strategies that will solidify our leadership position in this growing market. Our value proposition is founded in the deep experience of our management team in master data, governance, big data and analytics.”
Corrigan brings 17 years of experience in marketing and product strategy to InfoTrellis. David led worldwide marketing teams at IBM for various business units, including analytics, big data, integration and governance, and master data management. He also helped create the Customer Data Integration market and evolve it into the Master Data Management market, driving a multi-entity strategy and vision.
“I have great respect for what the executive team at InfoTrellis has been able to do in the data management industry, truly pioneering a next-generation customer data and analytics offering,” said Corrigan. “I believe AllSight ConnectID has the potential to reshape the industry, helping organizations learn more about their customers and act on new customer insights that truly improve their customer relationships.”