
MFS Throttling

Question asked by impermisha on May 9, 2013
Latest reply on May 20, 2013 by impermisha
While troubleshooting, I noticed messages in mfs.log about throttling requests appearing quite often (especially on a few nodes in particular). It has been happening on and off for some time, and for long stretches at a time (hours). Any advice on why this is happening? Is there a config setting that controls it?

<code>2013-04-25 11:21:00,8730 INFO  procqueue.h:96 x.x.0.103:54037 Throttling request 122 from 10.100.0.103:54037.
2013-04-25 11:21:00,8730 INFO  procqueue.h:96 x.x.0.100:55458 Throttling request 122 from 10.100.0.100:55458.
2013-04-25 11:21:00,8736 INFO  procqueue.h:96 x.x.0.116:37741 Throttling request 122 from 10.100.0.116:37741.
</code>

[cutting out time]

<code>2013-05-09 07:17:19,8660 INFO  procqueue.h:96 x.x.0.102:38741 Throttling request 21 from 10.100.0.102:38741.
2013-05-09 07:17:19,8706 INFO  procqueue.h:96 x.x.0.104:44795 Throttling request 122 from 10.100.0.104:44795.
2013-05-09 07:17:19,8707 INFO  procqueue.h:96 x.x.0.114:37755 Throttling request 122 from 10.100.0.114:37755.
2013-05-09 07:17:19,8707 INFO  procqueue.h:96 x.x.0.100:49685 Throttling request 122 from 10.100.0.100:49685.
2013-05-09 07:17:19,8708 INFO  procqueue.h:96 x.x.0.103:36549 Throttling request 122 from 10.100.0.103:36549.
2013-05-09 07:17:19,8711 INFO  procqueue.h:96 x.x.0.117:44155 Throttling request 122 from 10.100.0.117:44155.
</code>

Then here it is returning to normal:

<code>2013-05-09 07:58:13,0742 INFO  kvstoremultiop.cc:1943 x.x.0.0:0 Multiop 186242344 on cid 1 without logflush took 4025 msec
2013-05-09 07:58:13,0745 INFO  kvstoremultiop.cc:1943 x.x.0.0:0 Multiop 186242343 on cid 1 without logflush took 5034 msec
2013-05-09 07:58:13,0745 INFO  kvstoremultiop.cc:2137 x.x.0.0:0 Multiop 186242343 on cid 1 took 5034 msec
2013-05-09 07:58:13,0745 INFO  kvstoremultiop.cc:1943 x.x.0.0:0 Multiop 186242340 on cid 1 without logflush took 5394 msec
2013-05-09 07:58:13,0745 INFO  kvstoremultiop.cc:2137 x.x.0.0:0 Multiop 186242340 on cid 1 took 5394 msec
2013-05-09 07:58:49,4355 INFO  create.cc:1829 x.x.0.0:0 Waiting for replicas to consume VN 415840375 for cid 6476 after it completed with err 17 locally
2013-05-09 07:58:49,4355 INFO  replicate.cc:2505 x.x.0.0:0 Replicating consume vn (415840375) in vnSpace 0 op for container 6476
2013-05-09 07:58:53,1385 INFO  inoderestore.cc:694 x.x.0.0:0 Container restore for cid 4074772748, srccid 4112631779, replicacid 4500, maxinum on replica is 4095 and on src is 4095, no inode delete needed
2013-05-09 07:58:53,1385 INFO  containerrestore.cc:2774 x.x.0.0:0 ContainerRestore updating versioninfo for cid 4500 txnvn 14132838:14132838 writevn 14132838:14132838 snapvn 416:416 maxUniq 217404 updateSnapVnSpace 1
2013-05-09 07:58:53,1387 INFO  containerrestore.cc:2840 x.x.0.0:0 Updating mirror id 0 on container 4074772748
2013-05-09 07:58:53,1449 INFO  containerrestore.cc:3487 x.x.0.0:0 RestoreDataEnd Complete resync data WA 0x6374e90 srccid 4500 replicacid 4500
2013-05-09 07:58:53,2095 INFO  inoderestore.cc:694 x.x.0.0:0 Container restore for cid 4074772748, srccid 4500, replicacid 4500, maxinum on replica is 4095 and on src is 4095, no inode delete needed
2013-05-09 07:58:53,2095 INFO  containerrestore.cc:2774 x.x.0.0:0 ContainerRestore updating versioninfo for cid 4500 txnvn 14132838:14132838 writevn 14132838:14132838 snapvn 416:416 maxUniq 217404 updateSnapVnSpace 1
2013-05-09 07:58:53,2095 INFO  containerrestore.cc:2840 x.x.0.0:0 Updating mirror id 0 on container 4074772748
</code>
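
In case it helps narrow things down, here is a rough sketch of how I have been tallying which nodes and request types trigger the throttling most often. It just scans the log lines shown above; the log path and the regex are assumptions based on my install and the message format, so adjust as needed.

<code>#!/usr/bin/env python3
"""Tally MFS throttling messages per (request type, source node).

A minimal sketch, assuming mfs.log lines look like the ones above, e.g.
"... procqueue.h:96 x.x.0.103:54037 Throttling request 122 from 10.100.0.103:54037."
The log path below is an assumption; point it at your own mfs.log.
"""
import re
from collections import Counter

LOG_PATH = "/opt/mapr/logs/mfs.log"  # assumed location for illustration
PATTERN = re.compile(r"Throttling request (\d+) from ([\d.]+):\d+")

counts = Counter()  # (request type, source IP) -> number of throttling messages
with open(LOG_PATH) as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            req, src = match.groups()
            counts[(req, src)] += 1

# Print the noisiest (request, node) pairs first.
for (req, src), n in counts.most_common(20):
    print(f"request {req:>4}  from {src:<15}  {n} times")
</code>

Running this over the period above is what made it obvious that a few nodes dominate the throttling messages.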
