
Spark/MapR Streaming - not updating consumer info.

Question asked by john.humphreys on Aug 24, 2017
Latest reply on Sep 6, 2017 by cathy

I'm using Spark Streaming to read from one MapR Streams topic and write to another MapR Streams topic. So the first topic is consumed by Spark Streaming, and the second is consumed by a plain Java consumer.


If I query the MapR REST API endpoint "/rest/stream/cursor/list", the second topic correctly shows the consumer lag and the consumer timestamp, so I can see how far behind that consumer is.
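For reference, this is roughly the query I'm running. The host, port, credentials, stream path, and consumer-group name below are placeholders, not my real values:

```shell
# Sketch: list cursor info (committed offset, lag, timestamp) for a
# consumer group on a MapR stream. Cluster address, credentials, stream
# path, and group name are all hypothetical.
curl -s -u mapr:mapr \
  "https://maprcluster:8443/rest/stream/cursor/list?path=/streams/output&consumergroup=my-java-consumer"
```

The response is JSON with one entry per topic/partition cursor, including the "consumerlagmillis" and "consumertimestamp" fields mentioned below.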


The first topic, consumed by Spark Streaming via the Kafka direct approach, does not seem to update these statistics, though. I can see "committedoffset" catching up to "produceroffset" as a general indication of progress, but "consumerlagmillis" and "consumertimestamp" stay at their defaults (the 1970 epoch).
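For context, this is the general shape of the consuming job, a sketch using the spark-streaming-kafka-0-10 direct API rather than my exact code; the stream path, topic, group id, and batch interval are placeholders:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

// Sketch: direct-stream consumption of a MapR Streams topic, committing
// offsets back after each batch. Committing this way advances
// "committedoffset", which matches the behavior described above.
object DirectStreamExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("direct-stream-example")
    val ssc  = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example-spark-group",   // hypothetical group
      "auto.offset.reset"  -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Hypothetical MapR stream path and topic name.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](
        Seq("/streams/input:topic1"), kafkaParams))

    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process and write results to the output topic ...
      // Commit offsets only after the batch has been processed.
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that the direct API manages offsets itself on the executors rather than driving a long-lived consumer cursor the way a standalone client does, which may be related to why the cursor-level lag fields don't get populated.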


Any idea why Spark Streaming isn't updating these values? Is there a way to make it maintain them properly with MapR Streams?
