
Apache Parquet is a free and open-source column-oriented data storage format for the Apache Hadoop ecosystem. It is similar to the other columnar storage file formats available in Hadoop, namely RCFile and Optimized RCFile (ORC). It is compatible with most of the data processing frameworks in the Hadoop environment, and it provides efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk.

The open source project to build Apache Parquet began as a joint effort between Twitter and Cloudera. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF) project.


Features

Apache Parquet is implemented using the record shredding and assembly algorithm, which accommodates the complex data structures that can be used to store the data. Apache Parquet stores data so that the values in each column are physically stored in contiguous memory locations, similar to the data storage format of RCFile. Due to this columnar storage, Apache Parquet provides the following benefits:

  • Column-wise compression is efficient and saves storage space
  • Compression techniques specific to a type can be applied as the column values tend to be of the same type
  • Queries that fetch specific column values need not read the entire row data, thus improving performance (see the sketch after this list)
  • Different encoding techniques can be applied to different columns
  • Apache Parquet is implemented using the Apache Thrift framework, which increases its flexibility to work with a number of programming languages such as C++, Java, Python, PHP, etc.
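
As a concrete illustration of the column-pruning benefit above, the following sketch uses the pyarrow library, one of several Parquet implementations (the file name and example data are hypothetical):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Build a small in-memory table (made-up example data).
    table = pa.table({
        "user_id": [1, 2, 3, 4],
        "country": ["DE", "DE", "US", "FR"],
        "score":   [0.9, 0.7, 0.3, 0.5],
    })

    # Write it out as a Parquet file.
    pq.write_table(table, "users.parquet")

    # Read back only the 'country' column; the data pages of the
    # other columns are never read or decoded.
    countries = pq.read_table("users.parquet", columns=["country"])
    print(countries)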

As of August 2015, Parquet supports big data processing frameworks including Apache Hive, Apache Drill, Cloudera Impala, Apache Crunch, Apache Pig, Cascading, and Apache Spark.


Compression and encoding

In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. In addition, this strategy keeps the door open for newer and better encoding schemes to be implemented as they are invented.
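
Because each column chunk is compressed independently, a different codec can even be chosen per column. A minimal sketch with pyarrow (the file and column names are hypothetical, the codec pairing is illustrative, and these are compression codecs rather than Parquet's internal encodings):

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "text":  ["alpha", "beta", "gamma"],
        "count": [10, 20, 30],
    })

    # Choose a compression codec per column; columns not listed
    # would fall back to the writer's default codec.
    pq.write_table(
        table,
        "mixed.parquet",
        compression={"text": "gzip", "count": "snappy"},
    )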

Dictionary encoding

Parquet has automatic dictionary encoding, enabled dynamically for data with a small number of unique values (fewer than 10^5), which yields significant compression and boosts processing speed.
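
A minimal sketch, again with pyarrow, that writes a low-cardinality column and then inspects which encodings the writer recorded in the file metadata (the file name is hypothetical):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Many rows but only three distinct values: a good candidate
    # for dictionary encoding.
    table = pa.table({"color": ["red", "green", "blue"] * 10000})

    # Dictionary encoding is on by default; shown explicitly here.
    pq.write_table(table, "colors.parquet", use_dictionary=True)

    # Inspect the encodings chosen for the first column chunk.
    meta = pq.ParquetFile("colors.parquet").metadata
    print(meta.row_group(0).column(0).encodings)
    # Typically lists a dictionary encoding such as PLAIN_DICTIONARY
    # or RLE_DICTIONARY, depending on the writer version.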

Bit packing

Integers are usually stored using a dedicated 32 or 64 bits per value. For small integers, packing multiple values into the same space makes storage more efficient.
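
To make the saving concrete, here is a toy bit packer in Python; it is not Parquet's actual implementation, only an illustration of the idea:

    def bit_pack(values, width):
        """Pack non-negative integers, each < 2**width, into bytes."""
        buf, bits_used, out = 0, 0, bytearray()
        for v in values:
            buf |= v << bits_used          # append the value's bits
            bits_used += width
            while bits_used >= 8:          # flush completed bytes
                out.append(buf & 0xFF)
                buf >>= 8
                bits_used -= 8
        if bits_used:                      # flush any remainder
            out.append(buf & 0xFF)
        return bytes(out)

    # Eight values in 0..7 need 3 bits each: 3 bytes in total,
    # versus 32 bytes as four-byte integers.
    packed = bit_pack([1, 5, 3, 7, 0, 2, 6, 4], width=3)
    print(len(packed))  # -> 3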

Run-length encoding (RLE)

To optimize storage of multiple occurrences of the same value, the value is stored only once along with the number of occurrences.

Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which one produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.
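
The following toy encoder sketches that hybrid decision: long runs of one value become RLE pairs, while short, varied stretches are grouped as literals, which the real format would bit-pack. The run-length threshold of 8 is arbitrary here, and the actual Parquet encoding uses a compact varint header rather than Python tuples:

    def hybrid_encode(values, min_run=8):
        """Toy RLE/bit-packing hybrid: illustrative only."""
        out, i = [], 0
        while i < len(values):
            j = i
            while j < len(values) and values[j] == values[i]:
                j += 1
            if j - i >= min_run:            # long run: RLE wins
                out.append(("rle", j - i, values[i]))
                i = j
            else:                           # short runs: literal group
                k = j
                while k < len(values):
                    m = k
                    while m < len(values) and values[m] == values[k]:
                        m += 1
                    if m - k >= min_run:    # a long run starts here
                        break
                    k = m
                out.append(("literals", values[i:k]))
                i = k
        return out

    print(hybrid_encode([0] * 20 + [1, 5, 3, 7, 2]))
    # -> [('rle', 20, 0), ('literals', [1, 5, 3, 7, 2])]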




Comparison

Apache Parquet can be compared with the RCFile and Optimized RCFile (ORC) file formats, as all three fall under the category of columnar data storage within the Hadoop ecosystem. All three offer better compression and encoding with improved read performance, at the cost of slower writes. In addition to these features, Apache Parquet supports limited schema evolution, where the schema can be modified according to changes in the data, and it provides the ability to add new columns at the end of the file structure. At present, only Apache Hive and Cloudera Impala are able to query such newly added columns; other frameworks, such as Apache Pig, are working on support (a sketch of schema merging follows below).
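
Schema merging of the kind described above can be demonstrated with PySpark (the paths, column names, and data below are hypothetical); with the mergeSchema option, Spark unions the schemas of Parquet files written at different times:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("schema-evolution-demo").getOrCreate()

    # Two writes with compatible but different schemas:
    # the second batch adds a 'country' column.
    spark.createDataFrame([(1, "alice")], ["id", "name"]) \
         .write.mode("overwrite").parquet("/tmp/users_v1")
    spark.createDataFrame([(2, "bob", "DE")], ["id", "name", "country"]) \
         .write.mode("overwrite").parquet("/tmp/users_v2")

    # With mergeSchema enabled, rows from the first batch receive
    # NULL for the later-added 'country' column.
    df = (spark.read.option("mergeSchema", "true")
          .parquet("/tmp/users_v1", "/tmp/users_v2"))
    df.show()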




See also

  • Pig (programming tool)
  • Apache Hive
  • Cloudera Impala
  • Apache Drill
  • Apache Kudu
  • Apache Spark
  • Apache Thrift





External links

  • Official website

Source of the article: Wikipedia
