
Container replica create failed for container

Question asked by mshirley on Apr 15, 2013
Latest reply on Apr 16, 2013 by mshirley
We're seeing a flood of messages about the replication of two containers.

This is from cldb.log:

    2013-04-15 22:58:50,869 WARN Table [ReplicationManagerThread]: Container replica create failed for container 5188 with : status:11
    2013-04-15 22:58:50,869 WARN Containers [ReplicationManagerThread]: ContainerCreateCopy: Could not create replicas for container: 5188 attempt 0
    2013-04-15 22:58:50,870 WARN Table [ReplicationManagerThread]: Container replica create failed for container 5186 with : status:11
    2013-04-15 22:58:50,870 WARN Containers [ReplicationManagerThread]: ContainerCreateCopy: Could not create replicas for container: 5186 attempt 0

This is from one of our TaskTrackers:
    2013-04-15 22:59:35,8741 ERROR  create.cc:83 x.x.1.1:5660 create container 5188 : it already exists
    2013-04-15 22:59:35,8749 ERROR  create.cc:83 x.x.1.1:5660 create container 5186 : it already exists

This is the output of maprcli dump containerinfo for both containers:

    # maprcli dump containerinfo -ids 5188 -json
    {
     "timestamp":1366067441448,
     "status":"OK",
     "total":1,
     "data":[
      {
       "ContainerId":5188,
       "Epoch":8,
       "Master":"1.1.1.1:5660-1.1.2.1:5660--8-VALID",
       "ActiveServers":{
        "IP:Port":[
         "1.1.1.1:5660-1.1.2.1:5660--8-VALID",
         "2.2.2.2:5660-2.2.3.1:5660--8-VALID"
        ]
       },
       "InactiveServers":{
    
       },
       "UnusedServers":{
    
       },
       "OwnedSizeMB":"95 MB",
       "SharedSizeMB":"0 MB",
       "LogicalSizeMB":"95 MB",
       "TotalSizeMB":"95 MB",
       "NumInodesInUse":2730,
       "Mtime":"Mon Apr 15 23:10:38 GMT+00:00 2013",
       "NameContainer":"true",
       "VolumeName":"mapr.servername.local.metrics",
       "VolumeId":191354606,
       "VolumeReplication":2,
       "VolumeMounted":true
      }
     ]
    }

    # maprcli dump containerinfo -ids 5186 -json
    {
     "timestamp":1366067497869,
     "status":"OK",
     "total":1,
     "data":[
      {
       "ContainerId":5186,
       "Epoch":3,
       "Master":"1.1.1.1:5660-1.1.2.1:5660--8-VALID",
       "ActiveServers":{
        "IP:Port":[
         "1.1.1.1:5660-1.1.2.1:5660--8-VALID",
         "2.2.2.2:5660-2.2.3.1:5660--8-VALID"
        ]
       },
       "InactiveServers":{
    
       },
       "UnusedServers":{
    
       },
       "OwnedSizeMB":"0 MB",
       "SharedSizeMB":"0 MB",
       "LogicalSizeMB":"0 MB",
       "TotalSizeMB":"0 MB",
       "NumInodesInUse":256,
       "Mtime":"Wed Apr 10 22:24:29 GMT+00:00 2013",
       "NameContainer":"true",
       "VolumeName":"mapr.servername.local.logs",
       "VolumeId":238370896,
       "VolumeReplication":2,
       "VolumeMounted":true
      }
     ]
    }

It looks like these containers belong to this node's mapr.servername.local.metrics and mapr.servername.local.logs volumes.
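
For what it's worth, this is how I was planning to double-check the volume side (just maprcli volume info on the two names reported in the dumps above; I'm assuming -name takes the volume name exactly as shown there):

    # maprcli volume info -name mapr.servername.local.metrics -json
    # maprcli volume info -name mapr.servername.local.logs -json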

Can I remove those volumes for this server and restart the warden to recreate them, or should I do something else?
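
If removing and recreating is the right approach, something along these lines is what I'm picturing (a rough sketch only, assuming the node-local metrics/logs data can be safely discarded and that the warden recreates these local volumes on startup; the volume names are the ones from the dumps above):

    # remove the node-local volumes whose name containers are 5188 and 5186
    maprcli volume remove -name mapr.servername.local.metrics
    maprcli volume remove -name mapr.servername.local.logs

    # restart the warden on the affected node so the local volumes get recreated
    service mapr-warden restart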
