I think Sangita Satapathy actually wanted to compare writing raw MapReduce jobs versus using Hive/Pig, given the context.
Basically, Hive and Pig can save you a lot of coding effort compared to writing MapReduce jobs yourself, as long as your application does not involve too much custom logic.
Hive effectively generates and runs MapReduce jobs for you by exposing a SQL-like interface (HiveQL) to end users. This helps lower the barrier to entry for big data. If this is appealing to you, I would also encourage you to take a look at Apache Drill.
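To give a feel for how much boilerplate Hive removes, here is a sketch of the classic word count as a single HiveQL query, a job that takes dozens of lines of Java as a raw MapReduce program. The table name `docs` and its column `line` are hypothetical, assumed to already exist:

```sql
-- Hypothetical word count in HiveQL; assumes a table `docs`
-- with a single STRING column `line` already exists.
SELECT word, COUNT(*) AS cnt
FROM docs
LATERAL VIEW explode(split(line, '\\s+')) t AS word
GROUP BY word
ORDER BY cnt DESC;
```

Behind the scenes Hive compiles this declarative query into one or more MapReduce jobs (the GROUP BY becomes the shuffle/reduce phase).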
Pig effectively generates and runs MapReduce jobs for you by exposing a procedural end user language called Pig Latin. If this is appealing to you, I would also encourage you to look into Apache Spark.
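For contrast, the same word count written as a step-by-step Pig Latin script might look like the sketch below; the input path `input.txt` is a hypothetical file on HDFS with one line of text per record:

```pig
-- Hypothetical word count in Pig Latin; assumes an input file
-- 'input.txt' on HDFS with one line of text per record.
lines  = LOAD 'input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS cnt;
DUMP counts;
```

Note the difference in style: Hive states *what* result you want, while Pig Latin spells out *how* the data flows through each transformation step.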
From an end user's perspective these two packages feel drastically different, but behind the scenes they both optimize the MapReduce jobs they generate, and both run those jobs asynchronously.