Big Data Governance – Part 1 – Why Do You Govern Data Outside of Databases?

April 12, 2016

I originally posted this blog in May 2013 on another blog site but it’s still relevant.

Drivers of Big Data Governance

What are the major drivers behind Big Data Governance? Both Big Data and Data Governance are very hot topics. Most organizations are implementing a Data Governance program, but these programs tend to focus on governing the data in relational databases, even though unstructured data vastly outnumbers structured data and projections show its volume increasing at an overwhelming rate.

Governing unstructured data in documents is another way of referring to “Enterprise Content Management (ECM),” an area of Data Management with mature tools. Management at many organizations is frustrated with the current implementations of their data management programs, including Enterprise Content Management. Frustration with the results of data management programs such as Data Warehousing, Business Intelligence, and Master Data Management has been the primary driver for implementing Data Governance programs at many of these organizations. Similarly, organizations are not getting the expected results from their Enterprise Content Management programs, frequently instantiated with Microsoft SharePoint, because they have not included sufficient Data Governance in the implementations. As has always been the case, implementing a technology solution without sufficient business process involvement rarely delivers the anticipated benefits.

Although the cost of storage has gone down dramatically, the amount of data that organizations wish to retain has risen even faster. Governance is needed to establish retention policies for all types of data (what data to store, when to archive it, when to delete it) in order to manage the cost of storing all this data and to ensure that important information can be retrieved. IT management is particularly focused on the cost of data storage.
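
As an illustration, here is a minimal sketch (in Python, with hypothetical document types and retention periods) of the kind of rules such a retention policy might define: what to keep, when to archive, and when to delete.

```python
from datetime import date

# Hypothetical retention rules: the document types, archive ages, and delete ages
# are invented for illustration; real policies come from legal and business owners.
RETENTION_POLICIES = {
    "invoice":        {"archive_after_days": 365, "delete_after_days": 365 * 7},
    "email":          {"archive_after_days": 90,  "delete_after_days": 365 * 3},
    "draft_document": {"archive_after_days": 30,  "delete_after_days": 365},
}

def retention_action(doc_type: str, created: date, today: date) -> str:
    """Return 'retain', 'archive', or 'delete' for a document of the given type and age."""
    policy = RETENTION_POLICIES.get(doc_type)
    if policy is None:
        return "retain"  # no policy defined: keep until governance decides otherwise
    age_days = (today - created).days
    if age_days >= policy["delete_after_days"]:
        return "delete"
    if age_days >= policy["archive_after_days"]:
        return "archive"
    return "retain"

print(retention_action("email", date(2013, 1, 1), date(2016, 4, 12)))  # -> 'delete'
```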

All parts of an organization would like to be able to retrieve the information they need from stored data, but the legal department is particularly focused on this driver. Legal would like to be able to point to the data retention policies and say definitively that certain data is no longer retained, or to be able to find data quickly and minimize the cost and duration of searches for data requested by regulatory bodies.

Tagging and filing unstructured data to aid in later retrieval must move from manual to automated processes, because the volumes involved in Big Data are beyond human capabilities to manage and hand-crafted solutions are no longer effective, if they ever were. Data analysis also needs to move beyond analyzing structured data (business intelligence) or unstructured data (search) in isolation, toward search and analysis that cut across data types. See my blog on Master Data in Big Data Management for more ideas on linking together structured and unstructured data and on automation techniques.
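
To make the idea of automated tagging concrete, here is a deliberately simplified sketch that suggests tags for a document from its most frequent meaningful words; production auto-classification would rely on an ECM platform's classification engine or proper text analytics rather than this word-count heuristic.

```python
import re
from collections import Counter

# Tiny stop-word list for illustration only.
STOP_WORDS = {"the", "and", "for", "that", "with", "this", "from", "are", "was"}

def suggest_tags(text: str, max_tags: int = 5) -> list:
    """Suggest tags for a document by picking its most frequent meaningful words."""
    words = re.findall(r"[a-z]{4,}", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(max_tags)]

print(suggest_tags("Retention policy for invoice documents: invoices must be archived after one year."))
```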


Drivers for Managing Data Integration – from Data Conversion to Big Data

April 25, 2013

Data management in an organization is focused on getting data to its data consumers (whether human or application). Whereas the goal of data quality and data governance is trusted data, the goal of data integration is available data – getting data to the data consumers in the format that is right for them.
My new book on Data Integration has been published and is now available: “Managing Data in Motion: Data Integration Best Practice Techniques and Technologies”. Of course, the first part of a book on data management techniques has to answer the question of why an organization should invest the time, effort, and money. The drivers for data integration solutions are very compelling.
Supporting Data Conversion
One very common use of data integration techniques is copying or moving data from one application or data store to another, either when replacing an application in a portfolio or when seeding the data needed for an additional application implementation. It is necessary to format the data as appropriate for the new application data store, addressing both the technical format and the semantic business meaning of the data.
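As a minimal sketch of what such a conversion can involve, the Python fragment below converts a hypothetical legacy customer record into a new target format, handling both a technical change (key and date formats) and a semantic one (mapping legacy status codes to the target's values). The field names and code values are invented for illustration.
```python
from datetime import datetime

# Hypothetical mapping from legacy status codes to the new application's values.
LEGACY_TO_TARGET_STATUS = {"A": "ACTIVE", "I": "INACTIVE", "P": "PENDING"}

def convert_customer(legacy: dict) -> dict:
    """Convert one legacy customer record to the new application's format."""
    return {
        "customer_id": int(legacy["CUST_NO"]),                      # technical: string key -> integer
        "full_name":   f'{legacy["FNAME"].strip()} {legacy["LNAME"].strip()}',
        "status":      LEGACY_TO_TARGET_STATUS[legacy["STAT_CD"]],  # semantic: code -> business value
        "created_on":  datetime.strptime(legacy["OPEN_DT"], "%m/%d/%Y").date().isoformat(),
    }

print(convert_customer(
    {"CUST_NO": "00042", "FNAME": "Ada ", "LNAME": "Lovelace", "STAT_CD": "A", "OPEN_DT": "03/15/2008"}
))
```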
Managing the Complexity of Data Interfaces by Creating Data Hubs – MDM, Data Warehouses & Marts, Hub & Spoke
This, I think, is the most compelling reason for an organization to have an enterprise data integration strategy and architecture: hubs of data significantly simplify the problem of managing the data flowing between the applications in an organization. The number of potential point-to-point interfaces grows with the square of the number of applications: n applications can require as many as n × (n − 1) / 2 interfaces. Thus, an organization with one thousand applications could have as many as half a million interfaces if every application had to talk to every other. By using hubs of data, an organization brings the potential number of interfaces down to a linear function of the number of applications.
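A quick calculation shows the difference. The sketch below (in Python, with illustrative application counts) compares the maximum number of point-to-point interfaces, n × (n − 1) / 2, with the one-interface-per-application count of a hub-based approach.
```python
def point_to_point_interfaces(n: int) -> int:
    """Maximum interfaces if every application exchanges data directly with every other."""
    return n * (n - 1) // 2

def hub_and_spoke_interfaces(n: int) -> int:
    """Interfaces if every application talks only to a central hub (one per application)."""
    return n

for apps in (10, 100, 1000):
    print(apps, point_to_point_interfaces(apps), hub_and_spoke_interfaces(apps))
# 10 -> 45 vs 10, 100 -> 4,950 vs 100, 1000 -> 499,500 vs 1,000
```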
Master Data Management hubs are created to provide a central place for all applications in an organization to get their Master Data. Similarly, Data Warehouses and Data Marts give an organization one place to obtain all the data needed for reporting and analysis.
Data hubs that are not visible to the human data consumers of the organization can also be used to significantly simplify the natural complexity of data interfaces. If data being passed around the organization is formatted, on leaving the application where it was updated, into a common data format for that type of data, then applications updating data only need to reformat it into one format, instead of a different format for every application that needs it. Applications that need to receive the updated data only need to reformat it from the one common format into the format they need. This approach to data integration architecture is called a “hub and spoke” approach. The structure of the common data format that all applications pass their data to and from is called the “canonical model.” Applications that want a certain kind of data “subscribe” to that data, and applications that provide a certain kind of data are said to “publish” it.
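Here is a minimal, illustrative sketch of the publish-and-subscribe pattern described above, with invented application names and a tiny canonical customer record; a real implementation would sit on messaging or integration middleware rather than in-process callbacks.
```python
# Minimal hub-and-spoke sketch: each application translates to and from one
# canonical format instead of knowing every other application's format.
subscribers = []  # callbacks registered by applications that want customer data

def subscribe(callback):
    subscribers.append(callback)

def publish(canonical_record: dict):
    """The hub delivers one canonical record to every subscriber."""
    for deliver in subscribers:
        deliver(canonical_record)

# Publishing application: translates its internal record into the canonical model.
def crm_publish_customer(crm_row: dict):
    publish({"customer_id": crm_row["id"], "name": crm_row["display_name"]})

# Subscribing application: translates the canonical model into its own format.
def billing_receive(canonical: dict):
    billing_row = {"CUST_KEY": canonical["customer_id"], "CUST_NM": canonical["name"].upper()}
    print("billing stored:", billing_row)

subscribe(billing_receive)
crm_publish_customer({"id": 42, "display_name": "Ada Lovelace"})
```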
Integrating Vendor Packages with an Organization’s Application Portfolio
Current best practice is to buy vendor packages rather than develop custom applications whenever possible. This exacerbates the data integration problem because each of these vendor packages has its own master data that must be integrated with the organization's master data, and each will have to send or receive transactional data for consolidated reporting and analytics.
Sharing Data Among Applications and Organizations
Some data just naturally needs to flow between applications to support the operational processes of the organization. These days, that flow of data usually needs to happen in real time or near real time, and it makes sense to solve the requirements across the enterprise, or across the applications that support the supply chain of data, rather than developing independent solutions for each application.
Archiving Data
The life cycle for data may not match the life cycle for the application in which it resides. Some data may get in the way if retained in the active operational application and some data may need to be retained after an application is retired, even if the data is not being migrated to another application. All enterprises should have an enterprise archiving solution available where data can be housed and from which it can still be retrieved, even if the application from which it was taken no longer exists.
Moving data out of an application data store and restructuring it for an enterprise archiving solution is an important data integration function.
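As a minimal illustration of that function, the sketch below (with a hypothetical record layout, status code, and cutoff date) pulls closed records out of an operational store, restructures them with enough context to be retrieved after the source application is gone, and writes them to an archive.
```python
import json
from datetime import date

# Illustrative operational data; a real source would be the application's data store.
operational_orders = [
    {"order_id": 1, "status": "CLOSED", "order_date": "2008-03-15", "amount": 120.00},
    {"order_id": 2, "status": "OPEN",   "order_date": "2013-02-01", "amount": 75.50},
]

cutoff = "2009-01-01"
to_archive = [o for o in operational_orders if o["status"] == "CLOSED" and o["order_date"] < cutoff]

archive = {
    "archived_on": date.today().isoformat(),
    "source_application": "order_management",   # context kept so data is retrievable after retirement
    "records": to_archive,
}

with open("orders_archive.json", "w") as f:
    json.dump(archive, f, indent=2)

# Once the archive is verified, the archived rows can be removed from the operational store.
operational_orders = [o for o in operational_orders if o not in to_archive]
```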
Leveraging External Available Data
There is a great deal of data now available from government and other sources external to a company's own, some free and some for a fee. In order to leverage the value of what is available, the external data needs to be made available, in an appropriate format, to the data consumers who can use it. The volume and velocity of what is available can be so great that it may not be warranted to store or persist the external data at all; an organization can instead use data virtualization and streaming techniques, or avoid storing the data internally by leveraging cloud solutions that are also external.
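A minimal sketch of consuming external data without persisting it might look like the following; the feed URL and column names are placeholders for whatever external source an organization actually uses, and rows are filtered and handed to the consumer in one pass with nothing written to local storage.
```python
import csv
import io
import urllib.request

# Placeholder URL for an external (e.g., government open-data) feed.
EXTERNAL_FEED = "https://example.gov/open-data/daily_rates.csv"

def stream_rates(min_rate: float):
    """Yield qualifying rows from the external feed without persisting the data."""
    with urllib.request.urlopen(EXTERNAL_FEED) as response:
        reader = csv.DictReader(io.TextIOWrapper(response, encoding="utf-8"))
        for row in reader:                       # rows are processed as they arrive
            if float(row["rate"]) >= min_rate:
                yield {"date": row["date"], "rate": float(row["rate"])}

for record in stream_rates(2.5):
    print(record)
```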
Integrating Structured and Unstructured Data
New tools and techniques allow analysis of unstructured data such as documents, web sites, social media feeds, audio, and video. The analysis is most meaningful when it is possible to integrate structured data (found in databases) with the unstructured data types listed above. Data integration techniques and newer technologies such as data virtualization servers enable the integration of structured and unstructured data.
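As a small illustration, the sketch below matches structured customer master records against unstructured social media text so the two can be analyzed together; the records, posts, and matching rule are all invented for the example.
```python
# Structured customer master records (what a database would hold).
customers = [
    {"customer_id": 42, "name": "Acme Retail", "segment": "Enterprise"},
    {"customer_id": 43, "name": "Globex",      "segment": "SMB"},
]

# Unstructured text (what a social media feed might provide).
social_posts = [
    "Great experience ordering from Acme Retail this week!",
    "Globex support kept me on hold for an hour.",
]

def mentions(post: str, customer: dict) -> bool:
    """Naive matching rule: the customer's name appears in the post text."""
    return customer["name"].lower() in post.lower()

combined = [
    {"customer_id": c["customer_id"], "segment": c["segment"], "post": p}
    for c in customers for p in social_posts if mentions(p, c)
]

for row in combined:
    print(row)
```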
Supporting Operational Intelligence and Management Decision Support
Using data integration to leverage big data means not just mashing different types of data together for analysis, but also using data streams with that big data analysis to trigger alerts and even automated actions. Example use cases exist in every industry, but some of the ones we're all aware of include monitoring for credit card fraud and recommending products.
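A minimal sketch of that kind of stream-triggered alerting is shown below; the transactions and the deviation threshold are hypothetical, and a real fraud system would use far richer models and data.
```python
from collections import defaultdict

# card_id -> recent transaction amounts, built up as the stream is processed.
history = defaultdict(list)

def process_transaction(card_id: str, amount: float, alert_threshold: float = 5.0):
    """Fire an alert when a transaction deviates sharply from the card's prior average."""
    prior = history[card_id]
    if prior:
        average = sum(prior) / len(prior)
        if amount > average * alert_threshold:
            print(f"ALERT: card {card_id} spent {amount:.2f}, ~{amount / average:.0f}x its average")
    history[card_id].append(amount)

for txn in [("card-1", 25.0), ("card-1", 30.0), ("card-1", 800.0)]:
    process_transaction(*txn)
```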


Is High Availability Sexy?

April 10, 2013

The subject of business continuity has grown in appeal for me as my years in IT have grown, especially as I have personally experienced disasters big and small and the need to recover systems and facilities. I became particularly interested during my training as an IT auditor.
The area of business continuity isn’t necessarily a “sexy” part of data management, but it is a franchise requirement for most organizations and corporations and certainly critical for financial services organizations. Interestingly, the responsibility for business continuity is a business responsibility and yet the knowledge and training for how to implement it is a specialty within information technology (IT). I call this “the tail wagging the dog” because the responsibility cannot be delegated to IT and yet IT needs to lead the process of how to implement it.
The way we implement business continuity is through techniques in disaster recovery and high availability. Disaster recovery is how systems and access are brought back up after the loss of power, services, or access to a facility. High availability is a similar concept, except that system continuity is maintained by switching automatically to alternative resources on the loss of any resource: a system, connection, facility, etc.
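As a toy illustration of the high availability idea, the sketch below switches automatically to a standby resource when the primary fails; the resource names and the simulated outage are hypothetical, and real solutions operate at the infrastructure level rather than in application code.
```python
# Simulated availability: the primary is down, the standby is up.
AVAILABLE = {"primary-datacenter": False, "standby-datacenter": True}

def call_resource(name: str) -> str:
    if not AVAILABLE[name]:
        raise ConnectionError(f"{name} unavailable")
    return f"response from {name}"

def call_with_failover(resources) -> str:
    """Try each resource in order, switching automatically with no manual recovery step."""
    for name in resources:
        try:
            return call_resource(name)
        except ConnectionError:
            continue
    raise RuntimeError("all resources unavailable")

print(call_with_failover(["primary-datacenter", "standby-datacenter"]))
# -> response from standby-datacenter
```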
The rule of thumb with business continuity is that the shorter the disruption allowed when a resource is lost, the higher the cost. Thus, a high availability solution that allows no disruption has the highest cost. Organizations that require high availability solutions are therefore frequently spending millions of dollars on their disaster recovery solutions and millions more on their high availability solutions.
EMC has recently released a new high availability services product and is now asking the question “why invest in both disaster recovery and high availability?” http://www.emc.com/about/news/press/2013/20130212-01.htm
Maybe organizations that require high availability should put their entire business continuity budgets there rather than dividing them between high availability and disaster recovery. Well, the answer may not be that simple. Should every single application in the organization be set up with high availability? And yet, dividing systems between continuity solutions makes testing and effecting business continuity much more difficult. If the organization can prove it has a high availability solution for everything that matters, that should serve any necessary disaster recovery requirements.
OK. High availability isn’t sexy. But to me, it is slightly sexier than disaster recovery. Certainly it is time to consider whether it is more cost effective to put the entire business continuity budget into high availability.


The start of a new week in Data Management

May 2, 2011

I tweet about Data Management @datagrrl, but it’s hard to explain an idea in 140 characters.  So I’ll use this space to write more fully. I also will preview ideas I’m working on for articles, presentations, etc.