Compression on existing directory
Question asked on Dec 15, 2011 · latest reply on Dec 15, 2011 by srivas
Can we use the `hadoop mfs -setcompression on` command on an existing directory that already has data inside, so that it starts compressing those files? Or do we need to create the compressed directory beforehand?
Dec 15, 2011 11:20 PM
Existing files are not compressed when the directory's compression property is changed. To compress them, copy them back into the directory after turning compression on.
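A minimal shell sketch of the workflow srivas describes, assuming a MapR cluster where the `hadoop` CLI is on the PATH; the paths `/data/logs`, `/tmp/staging`, and the file name `part-00000` are hypothetical examples, not from the thread:

```shell
# Turn compression on for an existing directory (hypothetical path /data/logs).
# Per the answer above, files already in the directory stay uncompressed
# after this step; only newly written files are compressed.
hadoop mfs -setcompression on /data/logs

# Re-write the existing data so it lands compressed: move a file out to a
# hypothetical staging location, then copy it back into the directory.
hadoop fs -mv /data/logs/part-00000 /tmp/staging/part-00000
hadoop fs -cp /tmp/staging/part-00000 /data/logs/part-00000

# Inspect the directory with the MapR-specific listing, which reports
# per-file attributes including the compression setting.
hadoop mfs -ls /data/logs
```

The move-then-copy step matters: a rename within the same filesystem does not rewrite the blocks, so only the copy back into the directory produces compressed data.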