Author: Jonathan Bubier
Original Publication Date: November 14, 2014
When attempting to insert data into a MapR table, the operation may fail with an error/exception similar to the following:
java.io.IOException: Error: Argument list too long(7)
A corresponding message similar to the following can be found in /opt/mapr/logs/mfs.log-5 on the MFS node hosting the key range where the data is being inserted.
2014-11-14 15:31:35,1711 ERROR db/rpc/put.cc:380 put: largerow size: 21068502 bytes, allowed row size: 16777216 bytes
This error message indicates that MFS received a Put RPC from a client whose payload (20MB in this case) is larger than the default maximum row size for a MapR table (16MB). As a result, the RPC is rejected and the error 'Argument list too long' with error code 7 (E2BIG) is returned to the client.
Similarly, when attempting to read back a row that is larger than the maximum row size, whether via the get or the scan API, the operation fails with an exception similar to the following:
Error: java.io.IOException: Scan Error: File too large(27) at
A corresponding message similar to the following can be found in /opt/mapr/logs/mfs.log-5 on the MFS node hosting the key range from which the data is being retrieved.
2014-11-14 15:47:11,9698 ERROR db/rpc/get.cc:375 get: largerow for key user6284781860667377211, size: 21068502 bytes, allowed row size: 16777216 bytes, EFBIG 27
This error message indicates that MFS received a Get RPC and the requested row is larger than the configured maximum row size. As a result, the RPC is rejected and the error 'File too large' with error code 27 (EFBIG) is returned to the client. Note that a row can grow past the configured maximum through incremental puts: as long as each individual Put RPC inserts less than the maximum row size, it is accepted, even once the accumulated row exceeds the limit. For example, 20 individual put operations of 1MB each into a single row can create the situation described above.
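The arithmetic behind that scenario can be sketched as follows. This is a hypothetical illustration of the size check described above, not MapR source code: each Put RPC is validated only against its own payload, so the row itself can accumulate past the 16MB default and later fail on read.

```java
public class RowSizeDemo {
    // Default maximum row size enforced by MFS, per the log messages above.
    static final long MAX_ROW_SIZE = 16_777_216L; // 16 MB

    public static void main(String[] args) {
        long putSize = 1_048_576L; // each individual put is 1 MB
        long rowSize = 0;
        for (int i = 0; i < 20; i++) {
            // Each put is accepted: the RPC payload itself is under the limit.
            if (putSize <= MAX_ROW_SIZE) {
                rowSize += putSize;
            }
        }
        System.out.println("accumulated row size: " + rowSize + " bytes");
        // A subsequent get/scan of the whole row would now be rejected with
        // 'File too large' (EFBIG 27), since 20 MB exceeds the 16 MB default.
        System.out.println("read would fail: " + (rowSize > MAX_ROW_SIZE));
    }
}
```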
MapR tables support rows of up to 2GB, though for performance reasons the default maximum row size is set lower, to 16MB. Both cases described above can be resolved by increasing the maximum row size in CLDB, which is controlled by the 'mfs.db.max.rowsize' parameter. For example, to view the current setting:
# maprcli config load -json | grep rowsize
If the use case requires larger rows, for example to support individual puts of single cells or columns greater than 16MB, or to retrieve large rows, the maximum row size can be increased using maprcli. More information on how to set the maximum row size for MapR tables can be found at the following documentation link:
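A minimal sketch of checking and raising the limit with maprcli is shown below. The 32MB value is only an example; choose a value large enough for the biggest expected row, and consult the documentation linked below for the authoritative syntax.

```shell
# View the current maximum row size:
maprcli config load -json | grep rowsize

# Raise the limit cluster-wide to 32 MB (33554432 bytes) via 'config save':
maprcli config save -values '{"mfs.db.max.rowsize":"33554432"}'
```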
Configuring Maximum Row Sizes for MapR Tables