Big Data Modeling – part 2 – The Big Data Modeler

July 23, 2012

Continuing my discussion of “Big Data Modeling”: what is it, and is it any different from normal data modeling?  Ultimately, the questions come down to these: is there a role for a modeler on Big Data projects, and what does that role look like?

Modeling for Communication –

If modeling is the process of creating a simpler representation of something that does or might exist, then we can use modeling to communicate information about something in a simpler way than presenting the thing itself.  After all, when describing a computer system we aren’t limited to presenting the system itself; we present various models to communicate different aspects of what is or what might be.

Modeling Semantics –

On Big Data projects, as with all data-oriented projects, it is necessary to communicate logical and semantic concepts about the data involved in the project.  This may involve, but is not limited to, models presented as entity-relationship diagrams.  The data modeling needs, in fact, are not limited to the design of structures but certainly include data flows, process models, and other kinds of models.  This would also include any necessary taxonomy and ontology models.

Modeling Design –

Prior to construction it is necessary to represent (design) the data structures needed for the persistent as well as the transitory data used in the project.  Persistent data structures include those in files or databases.  Transitory data structures include the messages and streams of data passing into and out of the organization as well as between applications.  For data received from other organizations or groups, this may be a matter of documenting the structures received rather than designing them. This is, or is close to, the physical design level of the implementation, including the design of database tables and structures, file layouts, metadata tags, message layouts, data services, etc.
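
To make this concrete, here is a minimal Python sketch of the two kinds of design artifacts, using hypothetical names: a persistent customer record standing in for a physical table layout, and a transitory message layout for passing that data between applications.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Persistent structure: a hypothetical customer table design,
# expressed here as a dataclass standing in for a physical table layout.
@dataclass
class CustomerRecord:
    customer_id: int          # primary key in the physical table
    name: str
    email: Optional[str] = None
    created_at: datetime = field(default_factory=datetime.now)

# Transitory structure: a hypothetical message layout for data
# passing between applications or organizations.
@dataclass
class CustomerUpdateMessage:
    message_id: str           # correlation/tracking id for the message
    source_system: str        # where the message originated
    payload: CustomerRecord   # the business data being carried
    sent_at: datetime = field(default_factory=datetime.now)
```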

Modeling Virtual Layers –

There is a big movement in systems development toward virtualizing layers of the infrastructure, where the view presented to programmers or users may be different from the actual physical implementation.  This move toward creating virtual layers that can change independently applies to data design as well. It is necessary to design, or model, the presentation of information to the system’s users (the client experience) and to programmers independently of the modeling of the physical data structures. This is even more necessary for Big Data because it includes designing levels of virtualization for normalizing or merging data of different types into a consistent format.  In addition to the modeling of the virtual data layers, there is a need to model the translation from the physical data structures to the virtual level, such as between relational database structures and web service objects.
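
As a rough sketch of the idea, assuming a couple of hypothetical relational rows, a virtual layer might translate normalized physical structures into the single object that programmers and service consumers actually see:

```python
# A minimal sketch of a virtual layer: two hypothetical relational rows
# are translated into the one denormalized object exposed to consumers.

def to_service_object(customer_row: dict, address_row: dict) -> dict:
    """Merge normalized relational rows into one service view object."""
    return {
        "customerId": customer_row["customer_id"],
        "displayName": customer_row["name"],
        "contact": {
            "email": customer_row.get("email"),
            "city": address_row.get("city"),
            "postalCode": address_row.get("postal_code"),
        },
    }

# Usage: the physical schema can change without changing the shape
# consumers see, as long as the translation function is updated.
customer = {"customer_id": 42, "name": "Acme Corp", "email": "info@acme.test"}
address = {"customer_id": 42, "city": "Denver", "postal_code": "80202"}
print(to_service_object(customer, address))
```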

Modeling Mappings and Transformations –

It is necessary in any design that involves the movement of data between systems, whether Big Data or not, to specify the lineage in the flow of data from physical data structure to physical data structure, including the mappings and transformation rules needed from persistent data structure to message to persistent data structure, as necessary.  This level of design requires an understanding of both the physical implementation and the business meaning of the data. We don’t usually call this activity modeling; we call it design.
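
As a rough illustration (with made-up field names), a mapping specification can be captured as data, with each rule recording the source field, the target field, and the transformation applied; a real specification would also carry lineage metadata such as the source system and load job:

```python
# A minimal sketch of a source-to-target mapping specification with
# simple transformation rules. Field names are hypothetical.

MAPPINGS = [
    # (source field, target field, transformation rule)
    ("cust_nm",    "customer_name", lambda v: v.strip().title()),
    ("cust_email", "email_address", lambda v: v.lower() if v else None),
    ("bal_amt",    "balance_usd",   lambda v: round(float(v), 2)),
]

def transform(source_record: dict) -> dict:
    """Apply the mapping rules to one source record."""
    return {target: rule(source_record.get(source))
            for source, target, rule in MAPPINGS}

print(transform({"cust_nm": "  acme corp ",
                 "cust_email": "INFO@ACME.TEST",
                 "bal_amt": "100.5"}))
```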

Ultimately, there is a lot of work for a data modeler on Big Data projects, although little of it may look like creating entity-relationship models.  There is the need to create models for communicating ideas, for designing physical implementation solutions, for designing levels of virtualization, and for mapping between these models and designs.


Big Data Modeling – part 1 – Defining “Big Data” and “Data Modeling”

July 15, 2012

Last month I participated in a DataVersity webinar on Big Data Modeling.  There are a lot of definitions necessary in that discussion. What is meant by Big Data? What is meant by modeling? Does modeling mean entity-relationship modeling only, or something broader?

The term “Big Data” implies an emphasis on high volumes of data. What constitutes big volumes for an organization seems to depend on the organization and its history.  The Wikipedia definition of “Big Data” says that an organization’s data is “big” when it can’t be comfortably handled by the technology solutions on hand.  Since the current set of relational database software can comfortably handle terabytes of data, and even desktop productivity software can comfortably handle gigabytes of data, “big” implies many terabytes at least.

However, the consensus on the definition of “Big Data” seems to be the Gartner Group definition, which says that “Big Data” implies large volume, variety, and velocity of data.  Therefore, “Big Data” means not just data located in relational databases but files, documents, email, web traffic, audio, video, and social media as well.  These various types of data provide the “variety,” and the data lives not just in an organization’s own data center but in the cloud, in external sources, and on mobile devices.

The third aspect of “Big Data” is the velocity of data.  The ubiquity of sensor and global positioning monitoring information means a vast amount of information is available at an ever-increasing rate from both internal and external sources.  How quickly can this barrage of information be processed?  How much of it needs to be retained, and for how long?

What is “data modeling”? Most people seem to picture this activity as synonymous with “entity-relationship modeling.”  Is entity-relationship modeling useful for purposes outside of relational database design?  If modeling is the process of creating a simpler representation of something that does or might exist, then we can use modeling to communicate information about something in a simpler way than presenting the thing itself. So modeling is used for communicating.  Entity-relationship modeling is useful for communicating information about the attributes of the data and the types of relationships allowed between the pieces of data.  This seems like it might be useful for communicating ideas outside of just relational databases.
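
As a rough illustration (in code rather than a diagram, and with made-up entities), the kind of information an entity-relationship model communicates is which attributes belong to which entities and how the entities are allowed to relate to each other:

```python
from dataclasses import dataclass

# Two hypothetical entities and a one-to-many relationship between them:
# an Order belongs to exactly one Customer; a Customer may have many Orders.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int      # foreign key: each order relates to one customer
    total: float

# The same facts a diagram would show: attributes per entity and the
# allowed cardinality of the relationship (one customer, many orders).
acme = Customer(customer_id=1, name="Acme Corp")
orders = [Order(order_id=10, customer_id=acme.customer_id, total=250.0),
          Order(order_id=11, customer_id=acme.customer_id, total=99.5)]
```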

Data modeling is also used to design data structures at various levels of abstraction from conceptual to physical. When we differentiate between modeling and design, we are mostly just differentiating between logical design and design closer to the physical implementation of a database. So data modeling is also useful for design.

In the next part of this blog I’ll get back to the question of “Big Data Modeling.”


Data Virtualization – Part 2 – Data Caching

June 10, 2012

Another type of Data Virtualization that is less frequently discussed than Business Intelligence (see my previous blog) has to do with having data available in the computer’s memory, or as close to it as possible, in order to dramatically speed up both data access and update.

A simplistic way of thinking about the relative time to retrieve data is that if retrieving something from memory takes on the order of nanoseconds, then retrieving it from disk takes on the order of milliseconds, several orders of magnitude longer. Depending on the infrastructure configuration, retrieving data over a LAN or from the internet may be ten to 1000 times slower than that. If I load my most heavily used data into memory in advance, or into something that behaves like memory, then my processing of that data should be sped up by multiple orders of magnitude.  Using solid state disk for heavily used data can achieve access and update response times similar to having the data in memory.  Computer memory and solid state drives, although not as cheap as traditional disk, are certainly substantially less expensive than they used to be.
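
As a minimal sketch of the idea, with the slow store simulated by an artificial delay, the pattern is simply to pay the slow retrieval once and serve repeated reads from memory:

```python
import time

# A minimal sketch of caching heavily used data in memory in front of a
# slower store. The slow_fetch function and its 5 ms delay are stand-ins
# for a disk or network read; the latency gap is what matters.

_cache: dict = {}

def slow_fetch(key: str) -> str:
    time.sleep(0.005)                 # simulate a disk/network round trip
    return f"value-for-{key}"

def get(key: str) -> str:
    if key not in _cache:             # miss: pay the slow retrieval once
        _cache[key] = slow_fetch(key)
    return _cache[key]                # hit: served from memory

get("hot-record")                     # first call populates the cache
start = time.perf_counter()
get("hot-record")                     # second call is a memory lookup
print(f"cached read took {(time.perf_counter() - start) * 1e6:.1f} µs")
```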

Memory caching of data can be done with traditional databases and sequential processing, and multiple-orders-of-magnitude response time improvements can be achieved.  However, really spectacular performance is possible if we combine memory caching with parallel computing and databases designed around data caching, such as GemFire.  This does require that we develop systems using these new technologies and approaches in order to take advantage of parallel processing and optimized data caching, but the results can be blazingly fast.
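
None of the following is GemFire’s API; it is just a generic sketch of the underlying idea, with the in-memory data split into partitions and each partition aggregated in a separate process:

```python
from concurrent.futures import ProcessPoolExecutor

# Generic sketch: partition in-memory data and aggregate the partitions
# in parallel, then combine the partial results.

def partition_total(partition: list) -> float:
    return sum(partition)

def parallel_sum(data: list, partitions: int = 4) -> float:
    chunk = max(1, len(data) // partitions)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=partitions) as pool:
        return sum(pool.map(partition_total, parts))

if __name__ == "__main__":            # guard required for process pools
    in_memory_data = [float(i) for i in range(1_000_000)]
    print(parallel_sum(in_memory_data))
```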


Data Virtualization – Part 1 – Business Intelligence

March 26, 2012

The big transformation that we’re all dealing with in technology today is virtualization.  There are many aspects to virtualization: infrastructure, systems, organization, office, applications. When you search on the internet for “data virtualization,” most of the references concern business intelligence and data warehousing uses.  In part 2 of this blog I will talk about data virtualization and transaction processing.

Back in the day, when I used to build data warehouses (the 1990s and on), there was a concept of “federated data warehouses,” where data in the logical warehouse would be separated physically, either with the same schemas in multiple instances or with different types of data in different locations.  The thought was that the data would be physically separate but brought together in real time for reporting.  We also used to call that “data warehouses that don’t work.”  After all, the reason we created data warehouses in the first place was that we needed to instantiate the data consolidation in order to make the response time reasonable when trying to report on millions of records. No, really, the response time on these “federated data warehouse” systems used to be many minutes or more.

Now, however, the technologies involved have made huge leaps in capabilities.  The vendors have put thousands and thousands of man hours into making real-time integration and reporting work.  Many techniques enable these capabilities: specialized software and hardware (data appliances), query optimization, distributed processing, other optimization techniques, and hybrid solutions that fall between pure virtualization and instantiation.  Specialized tuning is necessary, and the fastest solutions involve instantiating the consolidated data in a central place.

Ultimately, having to do a project to physically incorporate new data into the data warehouse isn’t responsive enough to the business need for information.  Better to have a short-term solution that allows for the quick incorporation of new data and then, if there is a continued need for the data in question and you want to speed up the response, possibly integrate the additional data into the physical data warehouse.

The problems being solved now for business intelligence and data virtualization include real-time data integration of multiple regional instances of a data warehouse, integrating data of different types and kinds, and integrating data from a data warehouse with big data and cloud data.  This enables much more responsive business intelligence and analytical solutions to business requests without always having to instantiate all data for analysis into a single, central enterprise data warehouse.