Your virtual cluster on your machine

Playing with the HDP sandbox is cool. It's a quick way to see how things work, try some Hive queries, and so on.

At some point, you may want to build your own virtual cluster on your machine : several virtual machines forming a “real” cluster, to get something closer to reality.

Of course you can start from scratch, but there’s a quicker and easier way : playing with structor.

Structor is a project hosted by Hortonworks, based on Vagrant (basically a tool that drives VMs on VirtualBox or VMware, dealing with OS images), and there are a lot of forks extending its functionality.
I based my work on Andrew Miller’s fork, with some minor modifications.

Ok, let’s start with getting structor :

~$ mkdir Repository && cd Repository
Repository~ $ git clone https://github.com/laurentedel/structor.git

This will create a structor directory in which you’ll basically find a Vagrantfile containing everything needed to build your cluster !
You just have to specify an HDP version, an Ambari version and a profile name to build the corresponding cluster.

To build my 3-node cluster, I just type

structor~ $ ./ambari-cluster hdp-2.1.3 ambari-1.6.1 3node-full

In the ./profiles directory you’ll find some predefined profiles : 1node-min, 1node-full, 3node-min and 3node-full. Feel free to create and submit other profiles, but remember that your VMs will need some memory to be usable ! I allocated 2GB to each VM, so a 3-node cluster will take 6GB (7GB with a proxy VM).

I added some slight modifications so the VirtualBox VMs are named after their configuration, letting you have multiple ambari-clusters in VirtualBox.

Things you have to know :

1. add the hosts to /etc/hosts on your Linux or Mac box, or the equivalent on Windows (look at the doc on my GitHub)
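
For the 3node profiles, the entries look roughly like the sketch below. nn.example.com is 240.0.0.11 on my cluster (you can spot it later in this post); the two other addresses are assumptions, so double-check them against your profile file :

240.0.0.10   gw.example.com   gw
240.0.0.11   nn.example.com   nn
240.0.0.12   dn1.example.com  dn1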

2. the private key is stored in each VM

3. to ssh in a machine, cd to your structor directory and do a vagrant ssh HOSTNAME

structor~ $ vagrant ssh gw
Loading profile /Users/ledel/Repository/structor/profiles/3node.profile
Last login: Tue Mar 10 12:12:38 2015 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.

[vagrant@gw ~]$

4. to suspend the cluster, just

structor~ $ vagrant suspend


to wake the cluster up :

structor~ $ vagrant up

5. To copy files from/to your local machine, the current Vagrant folder is shared with each VM on /vagrant
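
For instance, to push a file from your laptop into a VM, just drop it in the structor directory (dataset.csv is only a placeholder name here) :

structor~ $ cp ~/dataset.csv .
structor~ $ vagrant ssh gw
[vagrant@gw ~]$ cp /vagrant/dataset.csv /tmp/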

Enjoy !


HBase regions merge

HBase writes data to multiple servers, called Region Servers.

Each region server hosts one or several Regions, and data is allocated to these regions; HBase decides which region server serves which region(s).

The number of regions can be defined at table creation :

[hbase@gw vagrant]$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase
[hbase@gw vagrant]$ hbase shell
hbase(main):001:0> create 'table2', 'columnfamily1', {NUMREGIONS => 5, SPLITALGO => 'HexStringSplit'}

We previously determined that 5 regions would be appropriate, given the number of region servers and the desired region size. Two basic split algorithms are supplied, HexStringSplit and UniformSplit (but you can add your own).

Alternatively, you can provide your own split points instead of a split algorithm :

hbase(main):001:0> create 'table2', 'columnfamily1', {SPLITS => ['a', 'b', 'c']}

So this table2 has been created with our 5 regions; let’s go to the HBase web UI to see what it looks like :

hbase01

We do have our 5 regions; we can see the key distribution, and the region names follow the pattern table_name,start_key,end_key,timestamp.ENCODED_REGIONNAME.

So now, if we want to merge regions, we can use the merge_region command in the hbase shell.
The regions have to be adjacent.

hbase(main):010:0> merge_region '234a12e83e203f2e3158c39e1da6b6e7', '89dd2d5a88e1b2b9787e3254b85b91d3'
0 row(s) in 0.0140 seconds

Yeah.

Notice that the ENCODED_REGIONNAME of the resulting region is a new one.

hbase(main):012:0> merge_region 'bfad503057fca37bd60b5a83109f7dc6','e37d7ab5513e06268459c76d5e7335e4'
0 row(s) in 0.0040 seconds

Let’s finally merge all the remaining regions !

hbase(main):013:0> merge_region '0f5fc22bf0beacbf83c1ad562324c778','af6d7af861f577ba456cff88bf5e5e38','3f1e029afd907bc62f5e5fb8b6e1b5cf','3f1e029afd907bc62f5e5fb8b6e1b5cf'
0 row(s) in 0.0290 seconds

Then we can see that only one region remains :

hbase02


For the record, you can create an HBase table pre-split if you know the distribution of your keys : either by passing SPLITS, or by providing a SPLITS_FILE which contains the split points (so number of lines = number of regions - 1).
Be aware of the argument order : putting SPLITS_FILE before the {…} options won’t work.

[hbase@gw vagrant]$ echo -e "a\nb\nc" > /tmp/splits.txt
[hbase@gw vagrant]$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase
[hbase@gw vagrant]$ hbase shell
hbase(main):011:0> create 'test_split', { NAME=> 'cf', VERSIONS => 1, TTL => 69200 }, SPLITS_FILE => '/tmp/splits.txt'

And the result :

hbase03


Queues and capacity scheduler

The Capacity Scheduler allows you to create queues at different levels, allocating each of them a different share of the cluster.

At the beginning, you have only one queue, which is root.
All of the following is defined in conf/capacity-scheduler.xml (/etc/hadoop/conf.empty/capacity-scheduler.xml in my HDP 2.1.3) or in YARN Configs/Scheduler in Ambari.

scheduler_ambari

Let’s start with two queues : a “production” queue and a “development” queue, both subqueues of root.

Queues definition

yarn.scheduler.capacity.root.queues=prod,dev

Now, maybe we have 2 teams in the dev department : Engineers and Data scientists.
Let’s split the dev queue into two sub-queues :

yarn.scheduler.capacity.root.dev.queues=eng,ds

Queues capacity

We now have to define the percentage capacity of each of these queues. Note that at each level the children’s capacities must add up to 100 (the parent’s capacity); otherwise you won’t be able to start the scheduler.

yarn.scheduler.capacity.root.capacity=100

yarn.scheduler.capacity.root.prod.capacity=70
yarn.scheduler.capacity.root.dev.capacity=30

yarn.scheduler.capacity.root.dev.eng.capacity=60
yarn.scheduler.capacity.root.dev.ds.capacity=40

So prod will have 70% of the cluster resources and dev will have 30%.
Well, not exactly ! If a job runs in dev while prod is idle, dev will take up to 100% of the cluster.
This makes sense, because we don’t want the cluster to be under-utilized.

As you can imagine, eng will take 60% of dev capacity, and is able to reach 100% of dev if ds is empty.

We may want to limit dev to a maximum capacity (the default is 100%) because we never want this queue to use too many resources.
For that purpose, use the max-capacity parameter :

yarn.scheduler.capacity.root.dev.max-capacity=60

Queues status

Must be set to RUNNING; if set to STOPPED, you won’t be able to submit new jobs to that queue.

yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.dev.state=RUNNING

Queues ACLs

The most important thing to understand is that ACLs are inherited. That means you can’t restrict permissions, only extend them !
The most common mistake is leaving the ACL set to * (meaning all users) at the root level : consequently, any user will be able to submit jobs to any queue. The default is

yarn.scheduler.capacity.root.default.acl_submit_applications=*

ACLs format is a bit tricky :

<username>,<username>,[...]
<username>SPACE<group>,[...]

Then, on each queue, you can set 3 parameters : acl_submit_applications, acl_administer_queue and acl_administer_jobs.

yarn.scheduler.capacity.root.dev.acl_administer_jobs=nick dev
yarn.scheduler.capacity.root.dev.acl_administer_queue=john dev
yarn.scheduler.capacity.root.dev.acl_submit_applications=* dev

Any user of the dev group can submit jobs, but only John can administer the queue.

You can see the “real” authorizations in a terminal :

[vagrant@gw ~]$ su ambari-qa
[ambari-qa@gw ~]# mapred queue -showacls
Queue acls for user : ambari-qa
Queue Operations
=====================
root ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
dev ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
prod ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
[root@gw ~]#

Of course, yarn.acl.enable has to be set to true.
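
For reference, that switch lives in yarn-site.xml (YARN configs in Ambari) :

yarn.acl.enable=true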

Another thing : you don’t have to restart YARN for every scheduler modification, except when deleting existing queues. If you’re only adding queues or adjusting some settings, just run a simple

[root@gw ~]# kinit -kt /etc/security/keytabs/yarn.headless.keytab yarn@EXAMPLE.COM
[root@gw ~]# yarn rmadmin -refreshQueues

You can see the queues in two ways :
– in the CLI

[root@gw ~]# mapred queue -list
15/03/05 09:13:11 INFO client.RMProxy: Connecting to ResourceManager at nn.example.com/240.0.0.11:8050
======================
Queue Name : dev
Queue State : running
Scheduling Info : Capacity: 60.000004, MaximumCapacity: 100.0, CurrentCapacity: 0.0
======================
Queue Name : ds
Queue State : running
Scheduling Info : Capacity: 30.000002, MaximumCapacity: 100.0, CurrentCapacity: 0.0
======================
Queue Name : eng
Queue State : running
Scheduling Info : Capacity: 70.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0
======================
Queue Name : prod
Queue State : running
Scheduling Info : Capacity: 40.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0

– in the UI : go to the ResourceManager UI (Ambari YARN/Quick links), then click on Scheduler :

scheduler


Ambari tips & tricks

Restarting some components

(including clients, which you can’t put into any state other than “INSTALLED”) :

curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
"RequestInfo":{
"command":"RESTART",
"context":"Restart ZooKeeper Client and HDFS Client",
"operation_level":{
"level":"HOST",
"cluster_name":"hdp-cluster"
}
},
"Requests/resource_filters":[
{
"service_name":"ZOOKEEPER",
"component_name":"ZOOKEEPER_CLIENT",
"hosts":"gw.example.com"
},
{
"service_name":"ZOOKEEPER",
"component_name":"ZOOKEEPER_SERVER",
"hosts":"gw.example.com,nn.example.com,dn1.example.com"
}
]
}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/requests

As indicated in the wiki, RESTART also refreshes the configs.
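
The POST returns an href pointing at a request resource that you can poll to follow the progress; the id below is just an example :

curl -u admin:admin -X GET http://gw.example.com:8080/api/v1/clusters/hdp-cluster/requests/42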


Delete a host from Ambari
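
The commands below rely on a few shell variables; here is a quick sketch setting them first, with example values matching this post’s cluster (HOSTNAME being the host you want to remove) :

[root@host ~]# export AMBARI_HOST=gw.example.com
[root@host ~]# export CLUSTER=hdp-cluster
[root@host ~]# export HOSTNAME=dn1.example.com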

// get all COMPONENTS for the host

[root@uabdfes03 ~]# curl -u admin:admin -H "X-Requested-By:ambari" -i -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/hosts/$HOSTNAME/host_components

// delete all COMPONENTS for this HOST
[root@host ~]# for COMPONENT in ZOOKEEPER_CLIENT YARN_CLIENT PIG OOZIE_CLIENT NODEMANAGER MAPREDUCE2_CLIENT HIVE_CLIENT HDFS_CLIENT HCAT HBASE_REGIONSERVER HBASE_CLIENT GANGLIA_MONITOR DATANODE; do curl -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/hosts/$HOSTNAME/host_components/$COMPONENT; done
// delete HOST
[root@host ~]# curl -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/hosts/$HOSTNAME

Delete a service

(for example STORM)

// get the components for that service
[vagrant@gw ~]$ curl -u admin:admin -X GET  http://gw.example.com:8080/api/v1/clusters/hdp-cluster/services/STORM
// stop the service
[vagrant@gw ~]$ curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/services/STORM
//stop each component on each host
[vagrant@gw ~]$ for COMPONENT_NAME in DRPC_SERVER NIMBUS STORM_REST_API STORM_UI_SERVER SUPERVISOR; do curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Component"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts/gw.example.com/host_components/${COMPONENT_NAME}; done
// stop service components
[vagrant@gw ~]$ for COMPONENT_NAME in DRPC_SERVER NIMBUS STORM_REST_API STORM_UI_SERVER SUPERVISOR; do curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop All Components"},"Body":{"ServiceComponentInfo":{"state":"INSTALLED"}}}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/services/STORM/components/${COMPONENT_NAME}; done
// delete the service
[vagrant@gw ~]$ curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://gw.example.com:8080/api/v1/clusters/hdp-cluster/services/STORM

Add a component

For example, we want to add an HBase RegionServer.

// add the component
[vagrant@gw ~]$ curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts/gw.example.com/host_components/HBASE_REGIONSERVER

// then install
[vagrant@gw ~]$ curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo": {"context": "Install RegionServer","query":"HostRoles/component_name.in('HBASE_REGIONSERVER')"}, "Body":{"HostRoles": {"state": "INSTALLED"}}}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts/gw.example.com/host_components

Get host components for a service

[vagrant@gw ~]$ curl -u admin:admin -H "X-Requested-By:ambari" -i -X GET "http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts?host_components/HostRoles/service_name=HDFS&fields=host_components/HostRoles/service_name"



Kerberos Tips & Tricks

Read a keytab to see principals :

[root@gw ~]# ktutil
ktutil: read_kt /etc/security/keytabs/nn.service.keytab
ktutil: list
slot KVNO Principal

---- ---- ---------------------------------------------------------------------
1 3 nn/gw.example.com@EXAMPLE.COM
2 3 nn/gw.example.com@EXAMPLE.COM
3 3 nn/gw.example.com@EXAMPLE.COM
4 3 nn/gw.example.com@EXAMPLE.COM
5 3 nn/nn.example.com@EXAMPLE.COM
6 3 nn/nn.example.com@EXAMPLE.COM
7 3 nn/nn.example.com@EXAMPLE.COM
8 3 nn/nn.example.com@EXAMPLE.COM
ktutil: quit
[root@gw ~]#

Service keytabs are tied to a service on a specific machine.
Therefore, if you want to add an existing service to another node, you must create a principal for that service on the additional node.

[root@ ~]# ipa service-add zookeeper/newnode@MY_CLUSTER
[root@~]# ipa-getkeytab -s IPASERVER -p zookeeper/newnode@MY_CLUSTER -k zk.service.keytab.newnode
[root@~]# chmod 400 zk.service.keytab.newnode
[root@~]# scp zk.service.keytab.newnode NEWNODE:/etc/security/keytabs/.
[root@NEWNODE ~]# mv /etc/security/keytabs/zk.service.keytab{.newnode,}
[root@NEWNODE ~]# chown zookeeper:hadoop /etc/security/keytabs/zk.service.keytab

If you run ipa-getkeytab against an existing keytab, it will add the new entries to the keytab rather than replace it.
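
By the way, a quicker check than ktutil : klist -kt lists every principal present in a keytab, with timestamps, so you can verify that the new entries were indeed appended :

[root@NEWNODE ~]# klist -kt /etc/security/keytabs/zk.service.keytab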


If for some reason IPA doesn’t work :

// adding principal
[root@gw ~]# kadmin.local -q "addprinc -randkey hbase/nn.example.com@EXAMPLE.COM" -x ipa-setup-override-restrictions
// then get the keytab
[root@gw ~]# kadmin.local -q "xst -k /home/vagrant/tmp_keytabs/hbase.service.keytab.nn hbase/nn.example.com@EXAMPLE.COM"



Hadoop CLI tips & tricks

Here are some Hadoop CLI tips & tricks

For a manual switch between the Active and Standby NameNodes, you have to take into consideration the ServiceIds, which are nn1 and nn2 by default.

If nn1 is the Active and nn2 the Standby NameNode, switch nn2 to Active with

[vagrant@gw ~]$ sudo -u hdfs hdfs haadmin -failover nn1 nn2
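
To check which ServiceId is actually Active before and after the switch :

[vagrant@gw ~]$ sudo -u hdfs hdfs haadmin -getServiceState nn1
[vagrant@gw ~]$ sudo -u hdfs hdfs haadmin -getServiceState nn2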



sample table in Hive

When you want to test some “real” stuff, it can be useful to have a Hive table with some “big” data in it.

Let’s build a table based on Wikipedia traffic stats. It will contain page counts for every page since 2008, which also gives us partitions to play with.

These stats are available on dumps.wikimedia.org/other/pagecounts-raw, so let’s grab the files first.
That’s a lot of files; here we’re only getting the 2008/01 files, which are each between 20 and 30MB.

I installed axel, a download accelerator, available in the EPEL repo or via the rpm :

[vagrant@gw ~]$ sudo rpm -ivh http://pkgs.repoforge.org/axel/axel-2.4-1.el6.rf.x86_64.rpm
[vagrant@gw pagecounts]$ for YEAR in {2008..2008}; do for MONTH in {01..01}; do for DAY in {01..08}; do for HOUR in {00..23}; do axel -a http://dumps.wikimedia.org/other/pagecounts-raw/${YEAR}/${YEAR}-${MONTH}/pagecounts-${YEAR}${MONTH}${DAY}-${HOUR}0000.gz; done; done; done; done

These files contain lines like :

fr.b Special:Recherche/Achille_Baraguey_d%5C%27Hilliers 1 624
fr.b Special:Recherche/Acteurs_et_actrices_N 1 739
fr.b Special:Recherche/Agrippa_d/%27Aubign%C3%A9 1 743

fr.b is the project name (French books), followed by the title of the page, the number of requests and finally the size of the content returned.

Let’s copy that data into HDFS :

[vagrant@gw pagecounts]$ for DAY in {01..08}; do
sudo -u hdfs hadoop fs -mkdir -p /apps/hive/warehouse/pagecounts/dt=2008-01-${DAY};
sudo -u hdfs hadoop fs -put pagecounts-200801${DAY}-* /apps/hive/warehouse/pagecounts/dt=2008-01-${DAY};
done

Now we can create the Hive table

hive> CREATE EXTERNAL TABLE pagecounts (project String, title String, requests Int, size Int) PARTITIONED BY (dt String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ' LINES TERMINATED BY '\n' STORED AS TEXTFILE;

Then you can add the partitions manually :
hive> ALTER TABLE pagecounts ADD IF NOT EXISTS PARTITION(dt="2008-01-01") LOCATION "/apps/hive/warehouse/pagecounts/dt=2008-01-01";

or let Hive do it for you automagically :

hive> MSCK REPAIR TABLE pagecounts;
OK
Partitions not in metastore: pagecounts:dt=2008-01-02 pagecounts:dt=2008-01-03 pagecounts:dt=2008-01-04 pagecounts:dt=2008-01-05 pagecounts:dt=2008-01-06 pagecounts:dt=2008-01-07 pagecounts:dt=2008-01-08 pagecounts:dt=2008-01-09 pagecounts:dt=2008-01-10
Repair: Added partition to metastore pagecounts:dt=2008-01-02
Repair: Added partition to metastore pagecounts:dt=2008-01-03
Repair: Added partition to metastore pagecounts:dt=2008-01-04
Repair: Added partition to metastore pagecounts:dt=2008-01-05
Repair: Added partition to metastore pagecounts:dt=2008-01-06
Repair: Added partition to metastore pagecounts:dt=2008-01-07
Repair: Added partition to metastore pagecounts:dt=2008-01-08
Repair: Added partition to metastore pagecounts:dt=2008-01-09
Repair: Added partition to metastore pagecounts:dt=2008-01-10
Time taken: 0.762 seconds, Fetched: 9 row(s)
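
As a quick sanity check (and to see partition pruning at work), here is a sample aggregation limited to a single partition; the columns are the ones declared in the CREATE TABLE above :

hive> SELECT project, SUM(requests) AS total_requests FROM pagecounts WHERE dt='2008-01-01' GROUP BY project ORDER BY total_requests DESC LIMIT 10;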



enabling Application Timeline Server on a kerberized cluster

When you enable security on your HDP cluster, the wizard deletes the ATS (App Timeline Server), which is useful for following YARN application history.

Re-enabling it is not very difficult :

1. Installation through Ambari API

[vagrant@gw ~]# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts/gw.example.com/host_components/APP_TIMELINE_SERVER

HTTP/1.1 201 Created

2. Component activation

[vagrant@gw ~]# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo": {"context": "Install AppTimelineServer via REST","query":"HostRoles/component_name.in('APP_TIMELINE_SERVER')"}, "Body":{"HostRoles": {"state": "INSTALLED"}}}' http://gw.example.com:8080/api/v1/clusters/hdp-cluster/hosts/gw.example.com/host_components

HTTP/1.1 202 Accepted
Set-Cookie: AMBARISESSIONID=1dkx65ox0k0vx1vigxw7r0i4aw;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 170
Server: Jetty(7.6.7.v20120910)

{
"href" : "http://gw.example.com:8080/api/v1/clusters/hdp-cluster/requests/260",
"Requests" : {
"id" : 260,
"status" : "InProgress"
}
}

3. Create the ATS service principal

[vagrant@gw ~]# ipa service-add ats/gw.example.com@EXAMPLE.COM

4. Retrieve the keytab and set permissions

[vagrant@gw ~]# ipa-getkeytab -s nn.example.com -p ats/gw.example.com@EXAMPLE.COM -k /etc/security/keytabs/ats.service.keytab

Keytab successfully retrieved and stored in: /etc/security/keytabs/ats.service.keytab
[vagrant@gw ~]# chown yarn:hadoop /etc/security/keytabs/ats.service.keytab && chmod 400 /etc/security/keytabs/ats.service.keytab

5. Add parameters in YARN

yarn.timeline-service.principal = ats/_HOST@EXAMPLE.COM
yarn.timeline-service.keytab = /etc/security/keytabs/ats.service.keytab
yarn.timeline-service.http-authentication.type = kerberos
yarn.timeline-service.http-authentication.kerberos.principal = HTTP/_HOST@EXAMPLE.COM
yarn.timeline-service.http-authentication.kerberos.keytab = /etc/security/keytabs/spnego.service.keytab

6. Restart YARN + ATS

The UI is enabled by default on port 8188 : http://gw.example.com:8188/applicationhistory
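
To quickly check that the ATS answers on a kerberized cluster, you can hit its REST endpoint with SPNEGO; this assumes your curl supports --negotiate and that you have a valid ticket (here obtained with the ambari-qa smoke user keytab) :

[vagrant@gw ~]$ kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa
[vagrant@gw ~]$ curl --negotiate -u : http://gw.example.com:8188/ws/v1/timeline/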


containers, memory and sizing

In the Hadoop world, and especially in the YARN sub-world, it can be tricky to understand how to get memory settings right.

Based on this blogpost, here are the main concepts to understand :

– YARN is based on containers

– containers can be MAP containers or REDUCE containers

– total memory for YARN on each node : yarn.nodemanager.resource.memory-mb=40960

– min memory used by a container : yarn.scheduler.minimum-allocation-mb=2048

the combination of the above gives us the maximum number of containers per node : total memory / container memory = number of containers.

– memory used by a (map|reduce) container : mapreduce.(map|reduce).memory.mb=4096

each (map|reduce) container will spawn a JVM for the (map|reduce) task, so we limit the heap size of that JVM :

mapreduce.(map|reduce).java.opts=-Xmx3072m

The last parameter is yarn.nodemanager.vmem-pmem-ratio (default 2.1) which is the ratio between physical memory and virtual memory.

Here we’ll have for a Map task :

  • 4 GB RAM allocated
  • 3 GB JVM heap space upper limit
  • 4*2.1 = 8.4GB Virtual memory upper limit


You’ll get the maximum number of containers based on the Map or Reduce container size ! So here we’ll have 40/4 = 10 mappers per node.
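
To recap where these example values live, split between yarn-site.xml and mapred-site.xml :

# yarn-site.xml
yarn.nodemanager.resource.memory-mb=40960
yarn.scheduler.minimum-allocation-mb=2048
yarn.nodemanager.vmem-pmem-ratio=2.1

# mapred-site.xml
mapreduce.map.memory.mb=4096
mapreduce.reduce.memory.mb=4096
mapreduce.map.java.opts=-Xmx3072m
mapreduce.reduce.java.opts=-Xmx3072m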

source


Hive tips & tricks

In the Hive CLI, show the column headers when executing a query :

hive> select * from sample_07 limit 2;
OK
00-0000 All Occupations 134354250 40690
11-0000 Management occupations 6003930 96150
Time taken: 1.313 seconds, Fetched: 2 row(s)
hive> set hive.cli.print.header=true;
hive> select * from sample_07 limit 2;
OK
sample_07.code sample_07.description sample_07.total_emp sample_07.salary
00-0000 All Occupations 134354250 40690
11-0000 Management occupations 6003930 96150
Time taken: 1.199 seconds, Fetched: 2 row(s)

Include the current database name in the Hive prompt :

hive> set hive.cli.print.current.db=true;
hive (default)>


When you want to merge small files in a Hive partition, you have two simple solutions :

* from Hive 0.14, use CONCATENATE

ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])] CONCATENATE;

* or you can use the ARCHIVE command to merge them into a HAR (Hadoop ARchive) file

hive> set hive.archive.enabled=true;
hive> set har.partfile.size=1099511627776;
hive> ALTER TABLE tabname ARCHIVE PARTITION (dt_partition='2015-03-10');