ABOUT THE EVENT
As technologies like cloud computing and big data analytics go mainstream, IT managers have realized that the independent storage silos they built to support each of their major applications have become islands of data. They've stood up HCI environments for VDI and server virtualization, plus a Hadoop cluster with local disks for HDFS, only to discover that they now need ways to move data from the VDI cluster to the Hadoop cluster, or to a public cloud provider, for further analysis.
A data fabric like MapR's MapR-XD allows organizations to store their data in a single logical repository that serves it up not only across application silos but also across their data centers and public cloud providers.
Join us Thursday, April 26th, at 10am PT as we explore three examples of how an integrated data fabric simplifies IT processes and increases agility:
- Bringing data services to cloud storage. Most public cloud storage offerings are severely limited, lacking many of the data services enterprise applications rely on. A data fabric allows users to lift and shift their current applications to the public cloud without sacrificing the data protection those applications were designed around.
- Eliminating the analytics data silo. Hadoop has traditionally used HDFS to provide storage from local drives on each node. Because other applications couldn't write directly to HDFS, users were forced into batch load, analyze, and unload workflows. With a data fabric, the Hadoop cluster can analyze data in the same place it was originally written.
- Simplifying data management within the data center. By presenting a single namespace across multiple nodes and multiple storage tiers, a data fabric lets users replace multiple scale-up and scale-out filers with a single point of management.
SPEAKERS
Howard Marks, Founder & Chief Scientist at DeepStorage.net
Suzy Visvanathan, Director of Product Management at MapR Technologies