When you write partitioned Parquet files with Spark, they take the form of Hive-style `key=value` directories:
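An illustrative layout (the table path, partition columns, and file names here are hypothetical, but the `key=value` directory shape is what Spark's `partitionBy` produces):

```text
/data/events/year=2017/month=01/part-00000-….snappy.parquet
/data/events/year=2017/month=02/part-00000-….snappy.parquet
/data/events/year=2018/month=01/part-00000-….snappy.parquet
```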
In Spark, if I query for year 2017, it uses the directory structure for partition pruning.
But when I try the same query in Drill, it fails: Drill does not seem to recognize the directory structure as partitions. I'm a bit surprised by this, since interoperability between Spark and Drill would seem natural. As far as I can tell, partitions created by Drill itself have a different on-disk structure. Will Drill ever recognize Spark-generated partition structures?
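For context, my understanding is that Drill's native mechanism exposes each directory level as a pseudo-column (`dir0`, `dir1`, …), so the closest workaround I've found is matching the literal `key=value` directory names rather than filtering on a real `year` column. A sketch, assuming a hypothetical Spark-written table at `/data/events` partitioned by year and month:

```sql
-- Drill exposes directory levels as dir0, dir1, ... and prunes on them,
-- but the Spark-written directory name is the literal string 'year=2017',
-- not a typed partition column.
SELECT *
FROM dfs.`/data/events`
WHERE dir0 = 'year=2017'   -- prunes to the year=2017 subdirectory
  AND dir1 = 'month=01';   -- prunes to the month=01 subdirectory
```

This works for pruning, but it loses the column semantics Spark gives you (the filter is on a string-valued directory name, not on an integer `year`).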