
Large number of small files - optimal chunk size

Question asked by mapr_user62 on Sep 9, 2012
Latest reply on May 27, 2013 by srivas
In an environment with a large number of small files, what chunk size is optimal?

By "small" I mean most files are under 1 MB; very few files are larger than 256 MB.
I know the chunk size recommended for MapR is usually in the 64 MB-256 MB range.

To get high performance out of MapR-FS, whether reading/writing over NFS or through the Hadoop API client, what is recommended for this kind of environment?

What about SequenceFile, HAR, or HBase implementations for performance?
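For context, my understanding is that SequenceFile and HAR both address the small-files problem by packing many small files into one large container file plus an index keyed by file name. A minimal, language-agnostic sketch of that idea (made-up names, not actual Hadoop code):

```python
import io

def pack(files):
    """Pack a {name: bytes} mapping into a single blob plus an
    offset index, mimicking what SequenceFile/HAR do to avoid
    storing thousands of tiny files individually."""
    index = {}
    buf = io.BytesIO()
    for name, data in files.items():
        index[name] = (buf.tell(), len(data))  # record (offset, length)
        buf.write(data)
    return buf.getvalue(), index

def lookup(blob, index, name):
    """Random-access read of one packed file via the index."""
    offset, length = index[name]
    return blob[offset:offset + length]

# 1000 tiny "files" become one blob with an index.
files = {f"file{i}.txt": f"contents {i}".encode() for i in range(1000)}
blob, index = pack(files)
print(lookup(blob, index, "file42.txt"))  # b'contents 42'
```

Is this style of packing the recommended approach on MapR-FS, or does MapR handle small files well enough natively that it is unnecessary?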