Big Data Governance – Part 1 – Why Do You Govern Data Outside of Databases?

April 12, 2016

I originally posted this in May 2013 on another blog site, but it's still relevant.

Drivers of Big Data Governance

What are the major drivers behind Big Data Governance? Both Big Data and Data Governance are very hot topics. Most organizations are implementing a Data Governance program, but these programs tend to be focused on governing the data in relational databases, even though unstructured data vastly outnumbers structured data and projections of ever-increasing volumes seem overwhelming.

Governing unstructured data in documents is another way of referencing “Enterprise Content Management (ECM),” an area of Data Management with mature tools. Management at organizations is frustrated with the current implementations of many of their data management programs, including Enterprise Content Management. Frustration with results from data management programs such as Data Warehousing, Business Intelligence, and Master Data Management has been the primary driver of the implementation of Data Governance programs at many of these organizations. Similarly, organizations are not getting the expected results from their Enterprise Content Management programs, frequently instantiated with Microsoft SharePoint, because they have not included sufficient Data Governance in the implementations. As has always been the case, implementing a technology solution without sufficient business process involvement rarely ends with the anticipated benefits.

Although the cost of storage has gone down dramatically, the amount of data that organizations wish to retain has risen even faster. Governance is needed to establish retention policies for all types of data (what data to store, when to archive, when to delete) in order to manage the cost of storing all this data and to enable important information to be retrieved. IT management is particularly focused on the cost of data storage.

All parts of an organization would like to be able to retrieve their desired information from stored data, but the legal department is particularly focused on this driver. Legal would like to be able to point to its data retention policies and say definitively that certain data is no longer retained, or to be able to find data quickly and minimize the cost and duration of searches for data requested by regulatory bodies.

Tagging and filing unstructured data to aid in later retrieval and determination must move from manual to automated processes, because the volumes involved in Big Data are beyond human capabilities to manage and hand-crafted solutions are no longer effective, if they ever were. Data analysis also needs to move beyond analyzing structured data (business intelligence) or unstructured data (search) in isolation toward search and analysis that cuts across data types. See my blog on Master Data in Big Data Management for more ideas on linking together structured and unstructured data and on automation techniques.
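As a rough illustration of what automated tagging can look like at its simplest, here is a minimal Python sketch of rule-based tagging. The tag names, keywords, and sample document are my own illustrative assumptions, not a recommendation of any particular tool.

```python
# Minimal sketch of rule-based auto-tagging for unstructured documents.
# The tag rules and the sample text below are illustrative assumptions only.
TAG_RULES = {
    "contract": ["agreement", "party", "termination"],
    "invoice": ["invoice", "amount due", "remit"],
    "hr": ["employee", "benefits", "payroll"],
}

def auto_tag(text: str) -> set[str]:
    """Return the set of tags whose keywords appear in the document text."""
    lowered = text.lower()
    return {tag for tag, keywords in TAG_RULES.items()
            if any(keyword in lowered for keyword in keywords)}

if __name__ == "__main__":
    sample = "This Agreement is made between the parties for payroll services."
    print(auto_tag(sample))  # e.g. {'contract', 'hr'}
```

Real implementations would lean on text analytics or machine learning rather than hand-written keyword lists, but the principle is the same: the software, not a person, assigns the tags.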


Drivers for Managing Data Integration – from Data Conversion to Big Data

April 25, 2013

Data management in an organization is focused on getting data to its data consumers (whether human or application). Whereas the goal of data quality and data governance is trusted data, the goal of data integration is available data – getting data to the data consumers in the format that is right for them.
My new book on Data Integration has been published and is now available: “Managing Data in Motion: Data Integration Best Practice Techniques and Technologies”. Of course, the first part of a book on data management techniques has to answer the question of why an organization should invest time, effort, and money. The drivers for data integration solutions are very compelling.
Supporting Data Conversion
One very common need for data integration techniques arises when copying or moving data from one application or data store to another, either when replacing an application in a portfolio or when seeding the data needed for an additional application implementation. It is necessary to format the data as appropriate for the new application data store, in both the technical format and the semantic business meaning of the data.
Managing the Complexity of Data Interfaces by Creating Data Hubs – MDM, Data Warehouses & Marts, Hub & Spoke
This, I think, is the most compelling reason for an organization to have an enterprise data integration strategy and architecture: hubs of data significantly simplify the problem of managing the data flowing between the applications in an organization. The number of potential point-to-point interfaces between applications grows with the square of the number of applications. Thus, an organization with one thousand applications could have as many as half a million interfaces, if every application had to talk to every other. By using hubs of data, an organization brings the potential number of interfaces down to just a linear function of the number of applications.
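To make the arithmetic concrete, here is a small Python sketch; the one-thousand-application figure is the one used above.

```python
# Point-to-point integration: every application can talk to every other one,
# so the number of potential interfaces is n * (n - 1) / 2 -- it grows with
# the square of the number of applications. A hub keeps it linear: one
# interface per application into and out of the hub.
def point_to_point_interfaces(n_apps: int) -> int:
    return n_apps * (n_apps - 1) // 2

def hub_interfaces(n_apps: int) -> int:
    return n_apps  # one feed per application to/from the hub

print(point_to_point_interfaces(1000))  # 499500 -- roughly half a million
print(hub_interfaces(1000))             # 1000
```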
Master Data Management hubs are created to provide a central place for all applications in an organization to get their Master Data. Similarly, Data Warehouses and Data Marts give an organization one place to obtain all the data needed for reporting and analysis.
Data hubs that are not visible to the human data consumers of the organization can be used to significantly simplify the natural complexity of data interfaces. If data being passed around the organization is formatted, on leaving the application where it has been updated, into a common data format for that type of data, then applications updating data only need to reformat data into one format, instead of a different format for every application that needs it. Applications that need to receive the updated data only need to reformat it from the one common format into their own formats. This approach to data integration architecture is called a “hub and spoke” approach. The structure of the common data format that all applications pass their data to and from is called the “canonical model.” Applications that want a certain kind of data “subscribe” to that data, and applications that provide a certain kind of data are said to “publish” it.
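Here is a minimal Python sketch of the publish-and-subscribe, canonical-model idea. The “billing” and “CRM” applications and all of the field names are invented for illustration; the point is that each application maps only to and from the canonical model, never directly to another application.

```python
# Hub-and-spoke sketch: publishers translate their local format into one
# canonical format; subscribers translate from the canonical format into
# their own. All field names here are illustrative assumptions.

def billing_to_canonical(billing_row: dict) -> dict:
    """Publisher side: the billing app maps its local fields to the canonical model."""
    return {
        "customer_id": billing_row["CUST_NO"],
        "full_name": billing_row["NAME"],
        "country": billing_row["CTRY_CD"],
    }

def canonical_to_crm(canonical: dict) -> dict:
    """Subscriber side: the CRM app maps the canonical model to its own fields."""
    return {
        "id": canonical["customer_id"],
        "displayName": canonical["full_name"],
        "region": canonical["country"],
    }

if __name__ == "__main__":
    published = billing_to_canonical(
        {"CUST_NO": "C-42", "NAME": "Ada Lovelace", "CTRY_CD": "GB"})
    print(canonical_to_crm(published))
```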
Integrating Vendor Packages with an Organization’s Application Portfolio
Current best practice is to buy vendor packages rather than develop custom applications whenever possible. This exacerbates the data integration problem because each of these vendor packages has its own master data that has to be integrated with the organization's master data, and each will have to send or receive transactional data for consolidated reporting and analytics.
Sharing Data Among Applications and Organizations
Some data just naturally needs to flow between applications to support the operational processes of the organization. These days, that flow of data usually needs to happen in real time or near real time, and it makes sense to solve the requirements across the enterprise, or across the applications that support the supply chain of data, rather than developing independent solutions for each application.
Archiving Data
The life cycle for data may not match the life cycle for the application in which it resides. Some data may get in the way if retained in the active operational application and some data may need to be retained after an application is retired, even if the data is not being migrated to another application. All enterprises should have an enterprise archiving solution available where data can be housed and from which it can still be retrieved, even if the application from which it was taken no longer exists.
Moving data out of an application data store and restructuring it for an enterprise archiving solution is an important data integration function.
Leveraging Externally Available Data
There is now a great deal of data available from government and other sources external to a company's own, some free and some for a fee. In order to leverage its value, the external data needs to be made available to the data consumers who can use it, in an appropriate format. The volume and velocity of the data now available are so great that it may not be warranted to store or persist the external data at all; instead, techniques such as data virtualization and streaming can be used, or the data can be left outside the organization altogether by leveraging cloud solutions that are also external.
Integrating Structured and Unstructured Data
New tools and techniques allow analysis of unstructured data such as documents, web sites, social media feeds, audio, and video. The analysis is most meaningful when it is possible to integrate structured data (found in databases) with the unstructured data types listed above. Data integration techniques and new technologies such as data virtualization servers enable the integration of structured and unstructured data.
Supporting Operational Intelligence and Management Decision Support
Using data integration to leverage big data includes not just mashing different types of data together for analysis, but also being able to use data streams with that big data analysis to trigger alerts and even automated actions. Example use cases exist in every industry; some of the ones we're all aware of include monitoring for credit card fraud and recommending products.
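As a hedged illustration of the stream-plus-analytics idea, here is a minimal Python sketch that scores each event as it arrives and raises an alert when a threshold is crossed. The scoring rule, threshold, and sample transactions are toy assumptions, not a real fraud model.

```python
# Minimal sketch of stream-triggered alerting: score each event as it arrives
# and trigger an action when a threshold is crossed. The events, threshold,
# and scoring rule are illustrative assumptions only.
from typing import Iterable

FRAUD_THRESHOLD = 0.9

def score(transaction: dict) -> float:
    """Toy scoring rule: flag unusually large amounts in foreign currencies."""
    risk = 0.0
    if transaction["amount"] > 5000:
        risk += 0.6
    if transaction["currency"] != "USD":
        risk += 0.4
    return risk

def monitor(transactions: Iterable[dict]) -> None:
    for txn in transactions:
        if score(txn) >= FRAUD_THRESHOLD:
            # In a real system this would call an alerting or case-management service.
            print(f"ALERT: possible fraud on card {txn['card']}")

if __name__ == "__main__":
    monitor([
        {"card": "1234", "amount": 42.50, "currency": "USD"},
        {"card": "5678", "amount": 9800.00, "currency": "EUR"},
    ])
```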


Is High Availability Sexy?

April 10, 2013

The subject of business continuity has grown in appeal for me as my years in IT have grown, especially as I have personally experienced disasters big and small and the need to recover systems and facilities. I became particularly interested during my training as an IT auditor.
The area of business continuity isn't necessarily a “sexy” part of data management, but it is a franchise requirement for most organizations and corporations, and it is certainly critical for financial services organizations. Interestingly, the responsibility for business continuity is a business responsibility, and yet the knowledge and training for how to implement it is a specialty within information technology (IT). I call this “the tail wagging the dog” because the responsibility cannot be delegated to IT, and yet IT needs to lead the process of how to implement it.
The way we implement business continuity is with techniques in disaster recovery and high availability. Disaster recovery is how we bring systems and access back up after the loss of power, services, or access to a facility. High availability is a similar concept, except that it maintains system continuity by switching automatically to alternative resources at the loss of any resource, system, connection, facility, etc.
The rule of thumb with business continuity is that the shorter the allowable disruption when a resource is lost, the higher the cost. Thus, a high availability solution that tolerates no disruption has the highest cost. Organizations that require high availability solutions therefore frequently spend millions of dollars on their disaster recovery solutions and millions more on their high availability solutions.
EMC has recently released a new high availability services product and is now asking the question “why invest in both disaster recovery and high availability?” http://www.emc.com/about/news/press/2013/20130212-01.htm
Maybe organizations that require high availability should put their business continuity budgets there rather than dividing them between high availability and disaster recovery. Well, it may not be such a simple answer. Should every single application in the organization be set up with high availability? And yet, dividing systems between continuity solutions makes testing and effecting business continuity much more difficult. As long as the organization can prove it has a high availability solution for everything, that should serve any necessary disaster recovery requirements.
OK. High availability isn’t sexy. But to me, it is slightly sexier than disaster recovery. Certainly it is time to consider whether it is more cost effective to put the entire business continuity budget into high availability.



Don’t Get Caught in the Statistical Cobwebs of Data Quality

November 14, 2012

A couple of days ago, I heard about a data conversion project where the team was taking a statistical sample of the source master data and cleaning it. The discussion I heard was about how big a sample needed to be in order to have a 5% margin of error, and it continued through a series of issues about statistics. This discussion brought me up short, because sometimes we get so caught up in the mathematics and techniques of our processes that we lose the basic understanding of when different techniques are appropriate.

I applaud the fact that the data conversion project in question had enough foresight to include a data quality stream, which is certainly not always the case. Besides the fact that we don't always know the level of quality of our production data in systems that have been running for many years, it is frequently true that data may have to be additionally cleaned in order for it to be in a state sufficient for running the new system to which it is being migrated. The standard method to assess the quality level of the data in the source systems is to take a statistical sample of the source data and assess whether the quality level is sufficient for the target system. Once we've determined how much cleaning of the sample data is necessary to get it into proper shape, we can extrapolate the estimate across the entire set of master data in order to determine how much time and how many resources would be necessary to clean the master data.
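For readers who want the arithmetic behind “how big a sample for a 5% margin of error,” here is the standard sample-size formula for an estimated proportion, with a finite-population correction, as a Python sketch. The one-million-record population is an assumed figure for illustration.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion at the given margin of error.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most conservative
    assumption about the underlying defect rate.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                  # finite-population correction
    return math.ceil(n)

# Illustrative example: roughly one million master data records (an assumed figure).
print(sample_size(1_000_000))  # about 385 records
```

Note what the number is for: it tells you how many records to inspect in order to estimate the quality level and the cleansing effort, not how many records need to be cleaned.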

How does a method for statistically determining an estimate turn into the idea that we only need to clean the statistical sample of the data? And even if one person accidentally skipped a few steps in specifying the process, why hasn't anyone else realized that cleaning a statistical sample of data doesn't make the entire dataset clean? Somehow, an entire project team has been dazzled by the implementation of statistics, or no one really thought about it that hard because it wasn't their job. Either way, if you clean a statistical sample of data, then only that sample will be sufficiently clean for your target system; the rest of the data will remain at the same quality level as before.

How do we prevent a problem like this? I believe that a great deal of the problem is that most people like to stay as far away from theoretical mathematical discussions as possible because, as Barbie used to say, “Math is hard.” I think it is important, however, that people with common sense ask questions about project planning and process, even if they don't understand the complexity of a technical design or approach. Even very complex issues like encryption and business continuity need to make sense in their implementation and can easily be applied in slightly wrong ways that lose the intent. The economist John Kenneth Galbraith proposed in the 1960s that technicians would take over the running of companies because business people wouldn't understand what the technicians were talking about. That did not happen, because business managers with common sense insisted that the technicians explain the concepts sufficiently to their understanding, regardless of how long it took. We need to continue that practice even with the implementation of statistics and mathematical concepts in project planning and data management.


Big Data and NoSQL – the problem with relational databases

September 25, 2012

The NoSQL movement, where “NoSQL” stands for “Not Only SQL,” is based on the concept that relational databases are not the right database solution for all problems. Relational databases are so ubiquitous in most organizations these days that many people may not even be aware that there are other types of databases, let alone when using another type might be preferable. Relational databases perform transaction update functions very well, particularly handling the difficult issues of consistency during update. Production-strength relational databases can handle the complexity of two-phase commit, where one business transaction affects multiple databases and tables and all updates have to take effect at the same moment.

However, relational databases apply much of the same overhead required for complex update operations to every activity, and that can handicap them for other functions. Relational databases struggle with the efficiency of certain operations key to big data management. First, they don't scale well to very large sizes; although grid solutions can help with this problem, the creation of new clusters on the grid is not dynamic, and large data solutions become very expensive using relational databases. Second, they don't do unstructured data search very well (i.e., Google-type searching), nor do they handle data in unexpected formats well. Third, but not last, certain kinds of basic queries are difficult to implement using SQL and relational databases, such as finding the shortest path between two points.

Social networking and big data organizations such as Facebook, Yahoo, Google, and Amazon were among the first to decide that relational databases were not good solutions for the volumes and types of data that they were dealing with, hence the development of the Hadoop file system, the MapReduce programming model, and associated databases such as Cassandra and HBase. One of the key capabilities of a Hadoop-type environment is the ability to dynamically, or at least easily, expand the number of servers being used for data storage. Storing large amounts of data in a relational database gets very expensive: cost grows geometrically with the amount of data to be stored, reaching a limit in the petabyte range. The cost of storing data in a Hadoop solution grows linearly with the volume of data, and there is no ultimate limit.
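As an illustration of the MapReduce programming model mentioned above, here is the classic word-count example as a plain-Python sketch of the map/shuffle/reduce idea; it is not Hadoop's actual API, just the shape of the computation.

```python
# Plain-Python sketch of the MapReduce programming model (not Hadoop's API):
# map emits (key, value) pairs, the framework groups them by key, and
# reduce aggregates the values for each key.
from collections import defaultdict

def map_phase(document: str):
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

documents = ["big data big files", "data in motion"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(shuffle(pairs)))
# {'big': 2, 'data': 2, 'files': 1, 'in': 1, 'motion': 1}
```

The appeal of the model is that the map and reduce steps can be spread across as many servers as the data requires, which is exactly the easy expansion described above.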

I was a working programmer before relational databases were in common use. Yes, we did have electricity back then. And the databases I used were of the type called “hierarchical.” In fact, they were, in general, more efficient for high-volume individual transaction processing than relational databases, although, like relational databases, they were not good for data that was structured inconsistently. But what we considered “high volume” then could be handled reasonably by my laptop now, and those databases couldn't dynamically allocate unlimited additional space, either.

In my next blog post I’ll describe some of the new classes of NoSQL databases and what problems they solve well.


Big Data Modeling – part 2 – The Big Data Modeler

July 23, 2012

Continuing my discussion of “Big Data Modeling”: what is it, and is it any different from normal data modeling? Ultimately, the questions come down to: is there a role for a modeler on Big Data projects, and what does that role look like?

Modeling for Communication –

If modeling is the process of creating a simpler representation of something that does or might exist, we can use modeling to communicate information about something in a simpler way than presenting the thing itself. After all, in describing a computer system we aren't limited to presenting the system itself; we present various models to communicate different aspects of what is or what might be.

Modeling Semantics –

On Big Data projects, as with all data-oriented projects, it is necessary to communicate logical and semantic concepts about the data involved in the project. This may involve, but is not limited to, models presented in entity-relationship diagrams. The data modeling needs, in fact, are not even limited to the design of structures but certainly include data flows, process models, and other kinds of models. This would also include any necessary taxonomy and ontology models.

Modeling Design –

Prior to construction it is necessary to represent (design) the data structures needed for the persistent as well as transitory data used in the project. Persistent data structures include those in files or databases. Transitory data structures include the messages and streams of data passing into and out of the organization as well as between applications. For data being received from other organizations or groups, this may be a matter of receiving the layouts rather than designing them. This is, or is close to, the physical design level of the implementation, including the design of database tables and structures, file layouts, metadata tags, message layouts, data services, etc.

Modeling Virtual Layers –

There is a big movement in systems development toward virtualizing layers of the infrastructure, where the view presented to programmers or users may be different from the actual physical implementation. This move toward creating virtual layers that can change independently applies to data design as well. It is necessary to design, or model, the presentation of information to the system's users (the client experience) and to programmers independently of the modeling of the physical data structures. This is even more necessary for Big Data because it includes designing levels of virtualization for normalizing or merging data of different types into a consistent format. In addition to the modeling of the virtual data layers, there is a need to model the translation from the physical data structures to the virtual level, such as between relational database structures and web service objects.

Modeling Mappings and Transformations –

It is necessary in any design that involves the movement of data between systems, whether Big Data or not, to specify the lineage in the flow of data from physical data structure to physical data structure, including the mappings and transformation rules necessary from persistent data structure to message to persistent data structure, as needed. This level of design requires an understanding of both the physical implementation and the business meaning of the data. We don't usually call this activity modeling, but strictly design.
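Here is a minimal sketch of what a mapping-and-transformation specification can look like when expressed in code. The source and target field names and the transformation rules are illustrative assumptions; a real specification would also capture lineage, i.e., which source structure feeds which target structure.

```python
# Sketch of a source-to-target mapping specification driving a transformation.
# The field names and rules below are illustrative assumptions only.
MAPPING = [
    # (source_field, target_field, transformation rule)
    ("cust_no", "customer_id", lambda v: f"CUST-{v}"),
    ("dob",     "birth_date",  lambda v: v),  # straight move, no transformation
    ("ctry_cd", "country",
     lambda v: {"US": "United States", "GB": "United Kingdom"}.get(v, v)),
]

def transform(source_record: dict) -> dict:
    """Apply the mapping specification to one source record."""
    return {target: rule(source_record[source]) for source, target, rule in MAPPING}

print(transform({"cust_no": "42", "dob": "1970-01-01", "ctry_cd": "GB"}))
# {'customer_id': 'CUST-42', 'birth_date': '1970-01-01', 'country': 'United Kingdom'}
```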

Ultimately, there is a lot of work for a data modeler on Big Data projects, although little of it may look like creating entity-relationship models. There is the need to create models for communicating ideas, for designing physical implementation solutions, for designing levels of virtualization, and for mapping between these models and designs.


Big Data Modeling – part 1 – Defining “Big Data” and “Data Modeling”

July 15, 2012

Last month I participated in a DataVersity webinar on Big Data Modeling. There are a lot of definitions necessary in that discussion. What is meant by Big Data? What is meant by modeling? Does modeling mean entity-relationship modeling only, or something broader?

The term “Big Data” implies an emphasis on high volumes of data. What constitutes big volumes for an organization seems to depend on the organization and its history. The Wikipedia definition of “Big Data” says that an organization's data is “big” when it can't be comfortably handled by on-hand technology solutions. Since the current set of relational database software can comfortably handle terabytes of data and even desktop productivity software can comfortably handle gigabytes of data, “big” implies many terabytes at least.

However, the consensus on the definition of “Big Data” seems to be the Gartner Group definition, which says that “Big Data” implies large volume, variety, and velocity of data. Therefore, “Big Data” means not just data located in relational databases but files, documents, email, web traffic, audio, video, and social media as well. The various types of data provide the “variety,” and it includes not just data in an organization's own data center but also data in the cloud, data from external sources, and data on mobile devices.

The third aspect of “Big Data” is the velocity of data. The ubiquity of sensor and global positioning monitoring information means a vast amount of information is available at an ever-increasing rate from both internal and external sources. How quickly can this barrage of information be processed? How much of it needs to be retained, and for how long?

What is “data modeling”? Most people seem to picture this activity as synonymous with “entity relationship modeling”.  Is entity relationship modeling useful for purposes outside of relational database design?  If modeling is the process of creating a simpler representation of something that does or might exist, we can use modeling for communicating information about something in a simpler way than presenting the thing itself. So modeling is used for communicating.  Entity relationship modeling is useful to communicate information about the attributes of the data and the types of relationships allowed between the pieces of data.  This seems like it might be useful to communicate ideas outside of just relational databases.

Data modeling is also used to design data structures at various levels of abstraction from conceptual to physical. When we differentiate between modeling and design, we are mostly just differentiating between logical design and design closer to the physical implementation of a database. So data modeling is also useful for design.

In the next part of this blog I’ll get back to the question of “Big Data Modeling.”


Data Virtualization – Part 2 – Data Caching

June 10, 2012

Another type of Data Virtualization, less frequently discussed than the Business Intelligence type (see my previous blog), has to do with having data available in the computer's memory, or as close to it as possible, in order to tremendously speed up both data access and update.

A simplistic way of thinking about the relative time to retrieve data is that if retrieving something from memory takes on the order of nanoseconds, retrieving data from disk takes on the order of milliseconds, several orders of magnitude longer. Depending on the infrastructure configuration, retrieving data over a LAN or from the internet may be ten to 1,000 times slower than that. If I load my most heavily used data into memory in advance, or into something that behaves like memory, then my processing of that data should be sped up by multiple orders of magnitude. Using solid state disk for heavily used data can achieve access and update response times similar to having the data in memory. Computer memory, as well as solid state drives, although not as cheap as traditional disk, is certainly substantially less expensive than it used to be.
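Here is a minimal Python sketch of the caching idea: keep the hottest data in memory and pay the slow-store cost only on a miss. The “slow store” below is just a stand-in for a disk or network round trip, and the customer lookup is an invented example.

```python
import functools
import time

def _fetch_from_slow_store(customer_id: str) -> dict:
    time.sleep(0.01)  # stand-in for a disk or network round trip (milliseconds)
    return {"id": customer_id, "name": "example"}

@functools.lru_cache(maxsize=1024)  # in-memory cache of the hottest keys
def get_customer(customer_id: str) -> dict:
    return _fetch_from_slow_store(customer_id)

start = time.perf_counter()
get_customer("C-42")                # miss: pays the slow-store cost
first = time.perf_counter() - start

start = time.perf_counter()
get_customer("C-42")                # hit: served from memory
second = time.perf_counter() - start
print(f"first call {first * 1000:.1f} ms, cached call {second * 1000:.3f} ms")
```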

Memory caching of data can be done with traditional databases and sequential processing, and multiple orders of magnitude of response-time improvement can be achieved. However, really spectacular performance is possible if we combine memory caching with parallel computing and databases designed around data caching, such as GemFire. This does require that we develop systems using these new technologies and approaches in order to take advantage of parallel processing and optimized data caching, but the results can be blazingly fast.


People who Tweet about Data Management

April 30, 2012

Data Management & Architecture

Karen Lopez @datachick

Neil Raden @NeilRaden

Robin Bloor @robinbloor

M. David Allen @mdavidallen

Sue Geuens @suegeuens

Mehmet Orun @DataMinstrel

Alec Sharp @alecsharp

Loretta Mahon Smith @silverdata

Eva Smith @datadeva

Corine Jasonius @DataGenie

Peter Aiken @paiken

Tony Shaw @tonyshaw

Glenn Thomas @Warduke

Bonnie O’Neil @bonnieoneil

Rob Paller @RobPaller

Pete Rivett @rivettp

Charles T. Betz @CharlesTBetz

Tracie Larsen @RelatedStuff

Wayne Eckerson @weckerson

Julian Keith Loren @jkloren

Christophe @mydatanews

Steve Francia @spf13

Gorm Braavig @gormb

Jim Finwick @jimfinwick

Alexej Freund @alexej_freund

Corinna Martinez @Futureatti

Data Quality

Jim Harris @ocdqblog – blog

David Loshin @davidloshin – blog

Rich Murnane @murnane

Daragh O Brien @daraghobrien

Jacqueline Roberts @JackieMRoberts

Steve Tuck @SteveTuck

Vish Agashe @VishAgashe

Julian Schwarzenbach @jschwa1

Henrik L. Sorensen @hlsdk

MDM and Data Governance

Jill Dyche @jilldyche – blog

Charles Blyth @charlesblyth

Steve Sarsfield @stevesarsfield – blog

Dan Power @dan_power

Philip Tyler @tylep0

Business Intelligence and Analytics

Marcus Borba @marcusborba

Tamara Dull @tamaradull

Claudia Imhoff @Claudia_Imhoff – blog

Scott Wallask @BI_expert

Peter Thomas @PeterJThomas – blog

Barney Finucane @bfinucane

Matt Winkleman @mattwinkleman

Stray_Cat @Stray_Cat

Brett2point0 @Brett2point0

Risk Management

Peter Went @Bank_Risk

Joshua Corman @joshcorman

Michael Rasmussen @GRCPundit

Nenshad Bardoliwalla @nenshad

Gary Byrne @GRCexpert

Helmut Schindlwick @Schindwick

Technology Companies and Data Organizations

Oracle @Oracle

DAMA international @DAMA_I

McKinsey on BT @mck_biztech

SmartData Collective @SmartDataCo

DataFlux InSight @Datafluxinsight

Gartner @Gartner_inc

TDWI @TDWI

Scientific Computing @SciCom

Wearecloud @wearecloud

CloudCamp @cloudcamp

Panorama Software @PanoramaSW

Data Hole @datahole

BI Knowledge Base @biknowledgebase

EnterpriseArchitects @enterprisearchitects

DataQualityPro.com @dataqualitypro

RSA Archer eGRC @ArcherGRC

Exobox @Exobox_Security

EA_Consultant @EA_Consultant

Cloudbook @cloudbook

ID Experts @idexperts

IAIDQ @iaidq

EMC Forum @EMCForums

Data Junkies @datajunkies

True Finance Data @truefinancedata

Madam @TheMDMNetwork

IBM Initiate @IBMInitiate

Accelus_GRC @PaisleyGRC

DQ Asia Pacific @DQAsiaPacific

Data Guide @DataGuide

PCI PA-DSS Data @DataAssurant

DataFlux Corporation @DataFlux