A data fabric is a data management architecture that connects all data sources and data management components to provide every consumer with frictionless access to enterprise data.
It stitches together the fundamental functions of a data management framework (such as data quality tools and a data catalog) with metadata, creating a user-friendly and largely autonomous interface to data across the enterprise.
A data fabric has six components:
- A data catalog. Captures the metadata that powers the automation delivered by the fabric.
- A knowledge graph. Stores your metadata and lets users and machines (e.g., the recommendation engine) explore relationships between metadata entities.
- Metadata activation. Uses existing metadata to infer new metadata, for example by profiling data, generating statistics, or evaluating data quality.
- A recommendation engine. Uses data from the knowledge graph to infer more metadata or to recommend how to process your data.
- Data preparation and delivery. The fabric understands the data's structure (through metadata in the knowledge graph) and the data consumer's intent, so it can apply or suggest different preparation and delivery methods based on all the metadata and intent available.
- Orchestration and DataOps. Robust data processing engines close to the data sources deliver data as quickly as possible; adherence to DataOps principles, such as reusable data pipelines, is equally important.
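To make the interplay between the knowledge graph and metadata activation concrete, here is a minimal Python sketch. The `KnowledgeGraph` class, the relation names, and the 0.9 completeness threshold are illustrative assumptions for this example, not part of any specific data fabric product: activating metadata here means profiling a column and writing the derived statistics back into the graph, where a recommendation engine could later pick them up.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # edges maps a metadata entity to a list of (relation, object) pairs
    edges: dict = field(default_factory=dict)

    def add(self, subject, relation, obj):
        self.edges.setdefault(subject, []).append((relation, obj))

    def related(self, subject):
        return self.edges.get(subject, [])

def activate_metadata(graph, column_name, values):
    """Infer new metadata (profile statistics) from existing data
    and record it in the knowledge graph."""
    non_null = [v for v in values if v is not None]
    completeness = len(non_null) / len(values) if values else 0.0
    graph.add(column_name, "has_completeness", round(completeness, 2))
    graph.add(column_name, "has_distinct_count", len(set(non_null)))
    # A simple data-quality flag that a recommendation engine could act on
    if completeness < 0.9:
        graph.add(column_name, "has_quality_issue", "low completeness")

kg = KnowledgeGraph()
activate_metadata(kg, "customers.email",
                  ["a@x.com", None, "b@y.com", "a@x.com"])
print(kg.related("customers.email"))
# → [('has_completeness', 0.75), ('has_distinct_count', 2),
#    ('has_quality_issue', 'low completeness')]
```

The point of the sketch is the feedback loop: each activation pass enriches the graph, and everything downstream (recommendations, preparation, delivery) reads from that same metadata store.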
As you can see, metadata drives almost everything within the data fabric: together, these components automate data delivery and suggest how to work more effectively with the data at your disposal.
Implementing a data fabric can benefit your organization's data management in several ways: faster and easier access to data, simpler data privacy and protection, and significant time savings on maintenance and configuration.