Tuesday, 20 June 2017

Which is the best master data management company in the USA?

InfoTrellis has a rich legacy in Master Data Management (MDM) and was founded by a team that has been shaping the industry since 1999. When you work with InfoTrellis, you’re dealing with the industry’s most knowledgeable service provider and have access to the most experienced MDM resources available.

We provide strategic consulting and tactical expertise to help clients define and deploy Master Data Management (MDM) solutions in service of their business objectives. InfoTrellis has expertise in all editions of IBM MDM, including MDM Server Standard Edition (f.k.a. Initiate Master Data Services), MDM Server Advanced Edition and MDM Server Collaboration Edition (f.k.a. PIM), and has worked on the Customer, Product, Location, Vendor and Account domains. We were part of the team that architected the first multi-domain MDM Server covering the Product and Account domains, and we were one of the first companies to implement IBM’s Reference Data Management (RDM) solution.

Tuesday, 13 June 2017

InfoTrellis Data Integration Solutions Texas

InfoTrellis follows best-in-class practices and processes for data integration solutions. We have highly skilled consultants with rich experience in ETL, Integration, Data Quality, Data Profiling, Data Warehousing, Reporting, Data Analysis and Data Governance.

Several global giants across industry verticals have benefited from our data integration services, which include:


  • Scoping, Due Diligence, Implementation Strategy and Planning
  • Solution outline and macro design, including overall architecture, data mapping, data analysis and data processes
  • Design and Development of Data Warehouses, Data Marts, ETL and ESB-based solutions
  • Data profiling, cleansing and migration
  • Data quality assessment, reporting and monitoring
  • Data Governance roadmap planning, process design and implementation
  • Production roll-out and operational support



Read more at http://www.infotrellis.com/data-integration/

Saturday, 10 June 2017

Product Information Management and Global Data Synchronization

The digital era has fostered information transfer between systems, but communication flaws often lead to information loss. Consider product information shared between manufacturers and retailers: manufacturers often communicate new products, changes to existing products and price changes to retailers manually and in an ad hoc manner, leading to data quality and integrity issues in key retail systems. These problems result in lost revenue and dissatisfied consumers.
With these challenges in mind, the Global Data Synchronization Network (GDSN) evolved as a key data synchronization mechanism in the Product Information domain. IBM InfoSphere MDM Collaborative Edition, a compelling IBM offering for Product Information Management, leverages GDSN and provides out-of-the-box capabilities that enable trading partners to share trusted product data globally and automatically. This blog focuses on GDS, its inner workings, associated concepts and the GDS component in IBM MDM Collaborative Edition.
Business Scenario
A global provider of water quality solutions for residential and industrial settings had data governance issues: product information was scattered across multiple regions. InfoTrellis delivered a Global Data Synchronization solution that synchronized product information across regions and retailers, helping the business recover lost revenue and, ultimately, satisfy its customers.
GDS Overview
Global data synchronization (GDS) is an ongoing business process that enables the continuous exchange of data between trading partners and ensures that synchronized information is shared between them at any point in time. Each organization, whether a supplier or a retailer, must join a data pool certified and tested by GS1.
Associated Concepts:
Trading Partners – Parties that manufacture products, retail them, or do both are considered trading partners.
Subscriptions – A subscription is a message that establishes a standing request for trade item information on behalf of a trading partner, who then receives the data on a continuous basis.
GS1 messages – GS1 is the global organization responsible for the design and implementation of global standards and solutions to improve efficiency and visibility in the supply and demand chains across sectors. The GS1 system of standards is the most widely used supply-chain standards system in the world.
Global Location Number (GLN) – A global location number (GLN) is a unique 13-digit number that is used to identify a trade location. The first 7 digits represent the company prefix, the next 5 digits represent the trade location, and the last digit is a check digit.
Global Trade Item Number (GTIN) – A global trade item number (GTIN) is a unique 14-digit number that is used to identify trade items. The first 13 digits represent the product reference number and the last digit is a check digit (a sketch of the shared GLN/GTIN check-digit calculation appears after this list).
GDS Flow – GDS works on a publish/subscribe model. The supplier publishes product information to a data pool, and the data pool then matches the published data to known subscribers of that data, as the sketch below illustrates.
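
To make the publish/subscribe flow concrete, here is a highly simplified Python sketch of a data pool matching published items to subscriptions. The class and field names (and the GLN/GTIN values) are our own illustration, not the GDSN or IBM MDM API:

    from dataclasses import dataclass, field

    @dataclass
    class Subscription:
        retailer_gln: str                # who wants the data
        supplier_gln: str | None = None  # match everything this supplier publishes...
        gtin: str | None = None          # ...or one specific trade item

    @dataclass
    class DataPool:
        subscriptions: list = field(default_factory=list)

        def subscribe(self, sub: Subscription) -> None:
            self.subscriptions.append(sub)

        def publish(self, supplier_gln: str, gtin: str, item: dict) -> dict:
            """Match a published item against standing subscriptions and
            return the retailers it would be routed to."""
            matched = [s for s in self.subscriptions
                       if s.supplier_gln == supplier_gln or s.gtin == gtin]
            return {s.retailer_gln: item for s in matched}

    pool = DataPool()
    pool.subscribe(Subscription(retailer_gln="0614141000012",
                                supplier_gln="1234567000005"))
    print(pool.publish("1234567000005", "00614141999996",
                       {"description": "Water filter cartridge"}))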
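
As for the check digit mentioned above, both GLN and GTIN end in the same GS1 mod-10 check digit. The following minimal Python sketch (our own, not part of any GS1 or IBM tooling) shows how that digit is derived:

    def gs1_check_digit(body: str) -> int:
        """Mod-10 check digit shared by GLNs and GTINs: working right to
        left, digits are weighted 3, 1, 3, 1, ...; the check digit tops
        the weighted sum up to the next multiple of 10."""
        total = sum(int(d) * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(reversed(body)))
        return (10 - total % 10) % 10

    # Example: for the well-known EAN/GTIN-13 4006381333931, the check
    # digit of the 12-digit body "400638133393" is 1.
    assert gs1_check_digit("400638133393") == 1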

Monday, 5 June 2017

Define your Data Governance

InfoTrellis’ Data Governance Methodology follows a multi-phased, iterative approach with 4 stages – Initiate, Define, Deploy and Optimize. This article is the second part of the Data Governance Methodology series by InfoTrellis. The first part of this series – Initiate your Data Governance – listed the essential foundations of a successful Data Governance program.
The ‘Define’ stage primarily deals with defining effective policies to address Data Governance issues. This article lists the important considerations of this stage.
Understand your Data Governance problem
A detailed investigation into the root cause of a problem is essential to identify and solve Data Governance issues. For instance, a revenue discrepancy in a financial report may look like a calculation error at first glance. Upon deeper analysis, it could turn out that different users interpret the same business term, revenue, differently, leading them to apply different logic to arrive at the monthly figure.
Once we know the root cause of a problem, it is important to categorize it. In our experience, categorizing a business problem along three axes – Data Domain, Business Process and Data Management Governance area – acts as a high-level guide to understanding the nature and scope of a Data Governance problem. For example, the revenue discrepancy mentioned above falls into the Finance data domain, the Accounting business process and the Metadata Management governance area. This helps us focus on the problem from the correct perspective; a small illustration follows below.
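
As a trivial illustration of this three-way categorization (our own sketch, not an InfoTrellis deliverable), the revenue-discrepancy problem could be recorded as:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GovernanceIssue:
        description: str
        data_domain: str       # e.g. Finance, Customer, Product
        business_process: str  # e.g. Accounting, Order-to-Cash
        governance_area: str   # e.g. Metadata Management, Data Retention

    revenue_issue = GovernanceIssue(
        description="Monthly revenue figures differ across reports",
        data_domain="Finance",
        business_process="Accounting",
        governance_area="Metadata Management",
    )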
Assemble the team to define Policies
Data Governance is a wide domain and requires a varied skill set. For instance, Metadata Management skills are different from Data Retention skills. Categorizing the business problem as described above also helps in identifying the skill set required to resolve the issue. In our experience, a dynamic team composition based on the nature of the business problem works best. Typical members of this team are the Data Owners and Architects of the pertinent IT/business systems, along with Business Data Stewards and Technical Data Stewards who understand the business domain and the mapped Data Governance area.

Read more at http://www.infotrellis.com/define-data-governance/

Friday, 2 June 2017

“Effort is important, but knowing where to make an effort makes all the difference!”
A few days ago, at the end of a very intense release, one of our long-term clients asked what the secret was behind our team’s high-quality testing effort, despite the very aggressive timelines and vast scope of work she sets for us. She was keen to understand what we do differently from the many large SIs she has used in the past, who, according to her, were always struggling to survive in a highly time-conscious and fast-changing environment. We went back with a presentation to the client’s delivery team, which was highly appreciated by one and all. This blog provides a gist of the practices we follow to optimize our testing effort.
The fundamental principles that help us maintain an optimum balance between Scope, Time and Cost while ensuring high-quality delivery are Build for Reuse, Automation and Big Picture Thinking.

To understand these principles better, let us consider the real project we just concluded for this client. The project had three major work streams – MDM, ETL and BPM. It ran for 8 months and was executed using the InfoTrellis Smart MDM™ methodology. In total, 3 resources were dedicated to testing activities: 1 QA Lead and 2 QA Analysts. Of the allocated 8 months (36 weeks), we spent 6 weeks on discovery and assessment, 6 weeks on scope and approach, and 4 weeks on the final deployment. The remaining 20 weeks, spent on Analysis, Design, Development and QA, were split into 3 iterations of 7, 7 and 6 weeks respectively. The QA activities in this project were spread over these 3 iterations.
Build for Reuse:
While every project, and each iteration within a project, has its unique set of requirements, team members and activities, there are always a few tasks that are repetitive and remain the same across iterations and across projects. Test design techniques, templates for test strategy, test cases, test reporting and test execution processes are some assets that can be heavily reused.
As experts in this field, we have built a rich repository of assets that can be reused across projects. During the 1st iteration, the QA team used the full 4 weeks, which included some time for tweaking the test assets to suit this project’s specific needs. Thanks to the effort put into setting up reusable assets in the 1st iteration, the team was able to complete the next two iterations in 2 weeks each.
On the whole, the reusable assets saved us 2 calendar weeks’ [6 man-weeks across the 3-person QA team] worth of effort over the next two iterations.
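
As a rough illustration of what a reusable, data-driven test asset can look like (a generic pytest sketch with made-up scenario data, not our actual templates), the scenarios live in data rather than in code, so a new iteration adds rows instead of new scripts:

    import pytest

    # Scenario data would normally come from a shared asset repository;
    # it is inlined here (with made-up values) for illustration.
    SCENARIOS = [
        ({"action": "addParty", "name": "Acme"}, "SUCCESS"),
        ({"action": "addParty", "name": ""}, "VALIDATION_ERROR"),
    ]

    def call_service(request: dict) -> str:
        """Stand-in for the system under test; replace with a real call."""
        return "SUCCESS" if request.get("name") else "VALIDATION_ERROR"

    @pytest.mark.parametrize("request_payload,expected", SCENARIOS)
    def test_service(request_payload, expected):
        assert call_service(request_payload) == expected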
Automation:
The task of testing encompasses the following four steps.
  1. Creation of test data
  2. Converting data to appropriate input formats
  3. Execution & validation of test cases
  4. Preparation of reports based on the test results
With 500 test cases in the bucket, the manual method would have taken around 675 hours, or approximately 17 weeks, to complete the testing. However, by using the various automation tools we have built in-house – ITLS Service tester, ITLS XML Generator, ITLS Auto UI, ITLS XML Comparator and many others – we were able to complete our testing within 235 hours. The split of the effort is as follows:
The automation set-up and test script preparation took approximately 135 hours. But by investing in this effort, we saved around 440 hours (675 − 235), or 11 calendar weeks, even while executing 3 rounds of exhaustive regression tests – a net saving of 33 man-weeks for the 3-person QA team.
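
The ITLS tools themselves are proprietary, but to give a flavour of the kind of automation involved, here is a minimal sketch (our own, with hypothetical field names) of comparing an actual service response to an expected one while ignoring volatile fields:

    import xml.etree.ElementTree as ET

    # Fields that legitimately differ between runs (hypothetical names)
    IGNORED_TAGS = {"timestamp", "transactionId"}

    def canonical(elem):
        """Reduce an element to a comparable (tag, text, children) form,
        dropping ignored tags and child ordering."""
        tag = elem.tag.split("}")[-1]  # strip any XML namespace
        if tag in IGNORED_TAGS:
            return None
        children = tuple(sorted(c for c in (canonical(ch) for ch in elem)
                                if c is not None))
        return (tag, (elem.text or "").strip(), children)

    def responses_match(expected_xml: str, actual_xml: str) -> bool:
        return (canonical(ET.fromstring(expected_xml))
                == canonical(ET.fromstring(actual_xml)))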
Big Picture Thinking:
One day a traveler, walking along a lane, came across 3 stonecutters working in a quarry. Each was busy cutting a block of stone. Interested to find out what they were working on, he asked the first stonecutter what he was doing, and the stonecutter said, “I am cutting a stone!” Still no wiser, the traveler turned to the second stonecutter and asked him what he was doing. He said, “I am cutting this block of stone to make sure that it is square and its dimensions are uniform, so that it will fit exactly in its place in a wall.” A bit closer to finding out what the stonecutters were working on but still unclear, the traveler turned to the third stonecutter. He seemed to be the happiest of the three and, when asked what he was doing, replied: “I am building a cathedral.”
The system under test had multiple work streams – MDM, ETL and BPM – interacting with each other, and the QA team was split to work on the individual work streams. Like the 3rd stonecutter, the team knew not only how their own work streams were expected to function but also how each would fit into the entire system.
Thus we were able to avoid writing unnecessary test cases that could have resulted from duplicating validations across multiple work streams, or from scenarios that would not have been realistic when the system is considered as a whole.
Our ability to identify the big picture thus saved us 128 hours, or 3.2 weeks. To keep such effort from going down the drain, we have our QA Leads participate in the scope and approach phase so that they can grasp the “Big Picture” and educate their team members.
Conclusion:
Using our testing approach, we saved more than 16 weeks [48 man-weeks] of QA effort and were thus able to complete the project in 8 months. Without this approach, the project could easily have run for over 12 months. It also meant that we did not need a team of 6 InfoTrellis resources [1 Project Manager, 0.5 Architect, 0.5 Dev Lead, 1 Developer, 1 QA Lead and 2 QA Analysts] for 4 additional months, i.e. 24 man-months, and avoided tying up the many client resources who would otherwise have been on this project.
What we have described in this blog is only common sense, well known to everyone in our industry. However, common sense is very uncommon. At InfoTrellis we have made full use of it, and as a result we deliver projects faster and with better quality. This has helped our clients realize value from their investments much sooner than anticipated, and at a much lower total cost of ownership.

InfoTrellis Master Data Management Company Texas

Read more at http://www.infotrellis.com/master-data-management/