Tutorial - Apache Hive - Apache Software Foundation

Hive is a data warehousing infrastructure based on Apache Hadoop. Hadoop provides massive scale out and fault tolerance capabilities for data storage and processing on commodity hardware.

Hive is designed to enable easy data summarization, ad-hoc querying, and analysis of large volumes of data. It provides an SQL interface for these tasks, and at the same time Hive's SQL gives users multiple places to integrate their own functionality for custom analysis, such as User Defined Functions (UDFs).

What Hive Is NOT

Hive is not designed for online transaction processing. It is best used for traditional data warehousing tasks.

Getting Started

For details on setting up Hive, HiveServer2, and Beeline, please refer to the GettingStarted guide.

The Books about Hive page lists some books that may also be helpful for getting started with Hive.

In the following sections we provide a tutorial on the capabilities of the system. We start by describing the concepts of data types, tables, and partitions (which are very similar to what you would find in a traditional relational DBMS) and then illustrate the capabilities of Hive with the help of some examples.

Data Units

In order of granularity, Hive data is organized into:

- Databases: Namespaces that avoid naming conflicts for tables, views, partitions, columns, and so on. Databases can also be used to enforce security for a user or group of users.
- Tables: Homogeneous units of data which have the same schema. An example could be a page_views table, where each row comprises the following columns (schema):
  - timestamp (INT): a UNIX timestamp of when the page was viewed.
  - userid (BIGINT): identifies the user who viewed the page.
  - page_url (STRING): captures the location of the page.
  - referer_url (STRING): captures the location of the page from which the user arrived at the current page.
  - IP (STRING): captures the IP address from which the page request was made.
- Partitions: Each table can have one or more partition keys which determine how the data is stored. Partitions, apart from being storage units, also allow the user to efficiently identify the rows that satisfy specified criteria; for example, a date_partition of type STRING and a country_partition of type STRING. Each unique value of the partition keys defines a partition of the table. For example, all "US" data from "2009-12-23" is a partition of the page_views table. Therefore, if you run analysis on only the "US" data for 2009-12-23, you can run that query only on the relevant partition of the table, speeding up the analysis significantly. Note, however, that just because a partition is named 2009-12-23 does not mean that it contains all or only data from that date; partitions are named after dates for convenience, and it is the user's job to guarantee the relationship between partition name and data content. Partition columns are virtual columns: they are not part of the data itself but are derived on load.
- Buckets (or Clusters): Data in each partition may in turn be divided into buckets based on the value of a hash function of some column of the table. For example, the page_views table may be bucketed by userid, which is one of the columns, other than the partition columns, of the table. Buckets can be used to efficiently sample the data.
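As a sketch, the page_views table described above might be declared as follows. The partition column names (dt, country), bucket count, and the view_time column name (standing in for the timestamp column, since that word doubles as a type name) are illustrative assumptions, not part of the original description:

```sql
-- Hypothetical DDL for the page_views table described above:
-- partitioned by date and country, bucketed by userid for sampling.
CREATE TABLE page_views (
    view_time   INT     COMMENT 'UNIX timestamp of the page view',
    userid      BIGINT  COMMENT 'user who viewed the page',
    page_url    STRING,
    referer_url STRING,
    ip          STRING  COMMENT 'IP address of the request'
)
PARTITIONED BY (dt STRING, country STRING)
CLUSTERED BY (userid) INTO 32 BUCKETS;
```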

Note that it is not necessary for tables to be partitioned or bucketed, but these abstractions allow the system to prune large quantities of data during query processing, resulting in faster query execution.

Type System

Hive supports primitive and complex data types, as described below. See Hive Data Types for additional information.

Primitive Types

Types are associated with the columns in the tables. The following primitive types are supported:

- Integers
  - TINYINT: 1-byte integer
  - SMALLINT: 2-byte integer
  - INT: 4-byte integer
  - BIGINT: 8-byte integer
- Boolean type
  - BOOLEAN: TRUE/FALSE
- Floating point numbers
  - FLOAT: single precision
  - DOUBLE: double precision
- Fixed point numbers
  - DECIMAL: a fixed point value of user-defined scale and precision
- String types
  - STRING: sequence of characters in a specified character set
  - VARCHAR: sequence of characters in a specified character set, with a maximum length
  - CHAR: sequence of characters in a specified character set, with a defined length
- Date and time types
  - TIMESTAMP: a date and time without a timezone ("LocalDateTime" semantics)
  - TIMESTAMP WITH LOCAL TIME ZONE: a point in time measured down to nanoseconds ("Instant" semantics)
  - DATE: a date
- Binary types
  - BINARY: a sequence of bytes
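As a sketch, the primitive types above can all appear in a column declaration like the following (the table and column names are hypothetical):

```sql
-- Hypothetical table exercising the primitive types listed above.
CREATE TABLE primitive_demo (
    tiny_val  TINYINT,
    small_val SMALLINT,
    int_val   INT,
    big_val   BIGINT,
    flag      BOOLEAN,
    ratio     FLOAT,
    measure   DOUBLE,
    price     DECIMAL(10, 2),   -- user-defined precision and scale
    name      STRING,
    code      VARCHAR(50),      -- maximum length 50
    state     CHAR(2),          -- fixed length 2
    born      DATE,
    seen      TIMESTAMP,
    payload   BINARY
);
```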

The types are organized in the following hierarchy (where the parent is a super type of all the children instances):

[Type hierarchy diagram]

This type hierarchy defines how the types are implicitly converted in the query language. Implicit conversion is allowed for types from child to an ancestor. So when a query expression expects type1 and the data is of type2, type2 is implicitly converted to type1 if type1 is an ancestor of type2 in the type hierarchy. Note that the type hierarchy allows the implicit conversion of STRING to DOUBLE.
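As an illustration (a sketch, not from the original tutorial), both conversion styles in one place:

```sql
-- Implicit conversion: the STRING literal '2.5' is promoted to
-- DOUBLE before the addition (STRING is convertible to DOUBLE
-- in the type hierarchy), yielding 3.5.
SELECT 1 + '2.5';

-- Explicit conversion with the cast operator: parse a STRING as
-- an INT (non-numeric strings yield NULL).
SELECT CAST('100' AS INT) + 1;
```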

Explicit type conversion can be done using the cast operator, as shown in the Built In Functions section below.

Complex Types

Complex types can be built up from primitive types and other composite types using:

- Structs: the elements within the type can be accessed using the DOT (.) notation. For example, for a column c of type STRUCT {a INT; b INT}, the a field is accessed by the expression c.a.
- Maps (key-value tuples): the elements are accessed using ['element name'] notation. For example, in a map M comprising a mapping from 'group' -> gid, the gid value can be accessed using M['group'].
- Arrays (indexable lists): the elements in the array have to be of the same type. Elements can be accessed using the [n] notation, where n is a zero-based index into the array. For example, for an array A having the elements ['a', 'b', 'c'], A[1] returns 'b'.
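The three access notations can be sketched together in one hypothetical table (names are illustrative):

```sql
-- Hypothetical table with one column of each complex type.
CREATE TABLE complex_demo (
    c   STRUCT<a: INT, b: INT>,
    m   MAP<STRING, BIGINT>,
    arr ARRAY<STRING>
);

SELECT c.a,          -- struct field via DOT notation
       m['group'],   -- map value looked up by key
       arr[1]        -- zero-based index: 'b' for ['a', 'b', 'c']
FROM complex_demo;
```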

Using the primitive types and the constructs for creating complex types, types with arbitrary levels of nesting can be created. For example, a type User may comprise the following fields:

- gender: a STRING.
- active: a BOOLEAN.

Timestamp

Timestamps have been the source of much confusion, so we try to document the intended semantics of Hive.

Timestamp ("LocalDateTime" semantics)

Java's "LocalDateTime" timestamps record a date and time as year, month, date, hour, minute, and seconds without a timezone. These timestamps always have those same values regardless of the local time zone.

For example, the timestamp value "2014-12-12 12:34:56" is decomposed into year, month, day, hour, minute, and second fields, but with no time zone information available. It does not correspond to any specific instant; it will always be the same value regardless of the local time zone. When users say an event is at 10:00, it is always in reference to a certain timezone and means a point in time, rather than 10:00 in an arbitrary time zone. For this reason, unless your application uses UTC consistently, timestamp with local time zone is strongly preferred over timestamp for most applications.

Timestamp with local time zone ("Instant" semantics)

Java's "Instant" timestamps define a point in time that remains constant regardless of where the data is read. Thus, the timestamp will be adjusted by the local time zone to match the original point in time.

| Type | Value in America/Los_Angeles | Value in America/New_York |
| timestamp | 2014-12-12 12:34:56 | 2014-12-12 12:34:56 |
| timestamp with local time zone | 2014-12-12 12:34:56 | 2014-12-12 15:34:56 |

Comparisons with other tools

| | SQL 2003 | Oracle | Sybase | Postgres | MySQL | Microsoft SQL | IBM DB2 | Presto | Snowflake | Hive >= 3.1 | Iceberg | Spark |
| timestamp | Local | Local | Local | Local | Instant | Other | Local | Local | Local | Local | Local | Instant |
| timestamp with local time zone | | Instant | | | | | | | Instant | Instant | | |
| timestamp with time zone | Offset | Offset | Offset | Instant | | Offset | | Offset | Offset | | Instant | |
| timestamp without time zone | Local | | | Local | | | | Local | Local | | | |
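The difference between the two semantics can be sketched as follows. This is an illustrative example, not from the original tutorial; it assumes the hive.local.time.zone session property introduced with the Hive 3.1 timestamp work:

```sql
-- Hypothetical table holding both timestamp flavors.
CREATE TABLE ts_demo (
    t_local   TIMESTAMP,                       -- "LocalDateTime": same digits in every zone
    t_instant TIMESTAMP WITH LOCAL TIME ZONE   -- "Instant": rendered in the session zone
);

-- With the session zone switched to America/New_York, the t_instant
-- column shifts on display (a value stored at 12:34:56 Los Angeles
-- time reads 15:34:56), while t_local is unchanged.
SET hive.local.time.zone=America/New_York;
SELECT t_local, t_instant FROM ts_demo;
```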
