We've tried this as well, using ES 5.4.0 (as a Kubernetes pod, writing to a MapR-FS volume mounted via the POSIX client) and ES 5.5.0 (as a regular system service on a different node, writing to a different cluster). Both ran against MapR 5.2.1.
In both cases things were fine at first, but ran into trouble after a few days to weeks.
On the 5.4 installation, ES decided some shards on the MapR volume were out of sync and refused to reallocate them. As a temporary fix, we reduced the number of replicas to 0 on the affected indices and raised it again later.
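For anyone hitting the same thing, the replica workaround is just the standard index settings API. A minimal sketch (the index name is hypothetical, and the curl calls assume ES is reachable on localhost:9200, so they're left commented):

```shell
INDEX="index-2017.08.27"   # hypothetical; substitute the affected index
BODY='{"index": {"number_of_replicas": 0}}'

# Sanity-check the payload before sending it anywhere
echo "$BODY" | python3 -m json.tool

# Drop replicas so ES stops fighting over the "out of sync" copies:
# curl -XPUT "localhost:9200/${INDEX}/_settings" -H 'Content-Type: application/json' -d "$BODY"

# Once the index is green again, restore replication:
# curl -XPUT "localhost:9200/${INDEX}/_settings" -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 1}}'
```

Note this only discards the replica copies; it doesn't explain why they went out of sync in the first place.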
On the 5.5 installation, ES stopped working altogether, reporting a variety of Java errors relating to the filesystem. Notably:
org.elasticsearch.ElasticsearchException: failed to refresh store stats
Caused by: java.nio.file.NoSuchFileException: /mapr/cluster/elasticsearch/nodes/0/indices/Y4dYavcSRvi8feqCcP-d2Q/1/index
[o.e.i.e.Engine ] [host] [index-2017.08.27] failed to rollback writer on close
java.io.IOException: Transport endpoint is not connected
java.lang.IllegalStateException: environment is not locked
org.elasticsearch.ElasticsearchException: failed to load started shards
[WARN ][o.e.e.NodeEnvironment ] [host] lock assertion failed
The same datadir copied to a local disk works properly.
My thinking at this point is that MapR-FS will eventually cause ES to fail once the data volume grows "large enough". The number of open files feels relevant, but that doesn't explain why the same settings cause no problems against a local directory yet fail with the same data set on a MapR-FS POSIX client mount.
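If anyone wants to rule the file-descriptor theory in or out, it's worth comparing the shell limit, the limit the ES JVM actually inherited, and what ES itself reports. A quick sketch (the pgrep pattern and the curl call are assumptions about your setup, so they're commented; ES 5.x does expose `process.max_file_descriptors` via the nodes stats API):

```shell
# Soft limit for the current shell; ES 5.x wants at least 65536
ulimit -n

# For a running ES JVM, check the limit the process actually has
# (pgrep pattern is a guess at how your ES was launched):
# cat /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/limits | grep "Max open files"

# What ES itself sees, via the nodes stats API (assumes localhost:9200):
# curl -s 'localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors,**.open_file_descriptors&pretty'
```

If the numbers diverge between the local-disk node and the MapR-FS node, that would at least narrow things down; if they match, the problem is more likely in the POSIX client itself (the "Transport endpoint is not connected" error above smells like the FUSE/POSIX mount dropping out from under ES).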