I have around 300 GB of data (around 127 million rows) sitting in MapR-DB JSON tables, and I want to import it into MapR Streams for one of my use cases. Is there a quick and efficient way to do this?
Hi Charan Thota,
I recommend checking out a similar question: How to import data from MySQL to a MapR-DB JSON table? Let us know if you have additional questions.
When you "publish" the documents from MapR-DB JSON to a MapR Streams topic, do you have to respect a specific order? (The exact use case is not clear to me.)
You can write either a Java application that reads the documents (using the document ID and ranges) or a Spark job; either one would push the data into the topic.
I do not see any way other than reading the table from an application.
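To make a single pass over 127 million rows tractable, one common approach is to split the ID space into contiguous ranges and hand each range to a parallel reader that scans its slice of the table and publishes into the topic (MapR Streams is consumed through the Kafka producer API). Below is a minimal, hedged sketch of just the range-splitting step in plain Java; the class and method names are hypothetical, and the OJAI table scan and Kafka `send()` calls per range are omitted.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSplitter {

    // Split the half-open interval [start, end) into `parts` contiguous
    // sub-ranges of near-equal size, one per parallel reader task.
    static List<long[]> split(long start, long end, int parts) {
        List<long[]> ranges = new ArrayList<>();
        long total = end - start;
        long base = total / parts;      // minimum rows per reader
        long rem = total % parts;       // first `rem` readers take one extra
        long cur = start;
        for (int i = 0; i < parts; i++) {
            long size = base + (i < rem ? 1 : 0);
            ranges.add(new long[]{cur, cur + size});
            cur += size;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Example: 127 million row IDs split across 8 parallel readers.
        // Each [lo, hi) pair would become one table scan condition whose
        // documents are pushed to the Streams topic by that reader.
        for (long[] r : split(0L, 127_000_000L, 8)) {
            System.out.println(r[0] + " .. " + r[1]);
        }
    }
}
```

A Spark job gives you the same partitioning for free: loading the table as a DataFrame/RDD partitions it across executors, and each partition writes to the topic in parallel.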