
check file content with ASCII codes

If you want to check a file's content byte by byte (are those spaces really spaces? which kind of quotes and commas are in there?), you can look at the ASCII codes with the hexdump command:

$ hexdump -C /etc/passwd
00000000 72 6f 6f 74 3a 78 3a 30 3a 30 3a 72 6f 6f 74 3a |root:x:0:0:root:|
00000010 2f 72 6f 6f 74 3a 2f 62 69 6e 2f 62 61 73 68 0a |/root:/bin/bash.|
00000020 64 61 65 6d 6f 6e 3a 78 3a 31 3a 31 3a 64 61 65 |daemon:x:1:1:dae|
00000030 6d 6f 6e 3a 2f 75 73 72 2f 73 62 69 6e 3a 2f 62 |mon:/usr/sbin:/b|
00000040 69 6e 2f 73 68 0a 62 69 6e 3a 78 3a 32 3a 32 3a |in/sh.bin:x:2:2:|
00000050 62 69 6e 3a 2f 62 69 6e 3a 2f 62 69 6e 2f 73 68 |bin:/bin:/bin/sh|
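As a quick illustration (the file below is just a made-up example), a trailing space that is invisible in an editor shows up immediately in the hex column:

```shell
# Write a line with a sneaky trailing space, then inspect it
printf 'value \n' > /tmp/sample.txt
hexdump -C /tmp/sample.txt
# the '20' just before the final '0a' (newline) is the trailing space
```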

Sqoop Teradata password file extra control character

Sqoop is a great tool to move SQL data from/to Hadoop.
When using it with Teradata, you have the possibility to use a password file instead of a plaintext password:
  <arg>--password-file</arg>
  <arg>hdfs://NAMENODE/teradata.password</arg>
You may end up with this error:
3737 [main] ERROR org.apache.sqoop.teradata.TeradataSqoopExportHelper  - Exception running Teradata export job
com.teradata.connector.common.exception.ConnectorException: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.20] [Error 8017] [SQLState 28000] The UserId, Password or Account is invalid.
But you’re 100% sure of the password? If you created the password file with vi, it will end with a line feed (LF) control character.
To check whether an LF terminates the password file:
[root@localhost ~]# od -c teradata.password
0000000    P   a   s   s   w   o   r   d  \n
0000011
So you’ll have to delete the newline character using tr:
[root@localhost ~]# tr -d '\n' < teradata.password > teradata.password.new
[root@localhost ~]# od -c teradata.password.new
0000000    P   a   s   s   w   o   r   d
0000010
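To avoid the problem in the first place, you can write the file without a trailing newline at all (a sketch; 'Password' stands for your real password):

```shell
# printf without '\n' writes exactly the bytes given, unlike vi or echo
printf '%s' 'Password' > teradata.password
od -c teradata.password    # no trailing \n
```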

Feb 17, 2016

check Ambari or any http-backed app with telnet

For some reason I wanted to check whether Ambari was working correctly, but I didn’t have any browser access.

Checking if it is listening on its 8080 port is easy with:

$ netstat -anpe | grep 8080

If you really want to check that Ambari answers requests, telnet to the host and type:

GET / HTTP/1.1
Host: <AMBARI_FQDN>


localhost$ telnet ambari.mycluster.com 8080
Trying 10.195.196.48...
Connected to ambari.mycluster.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: ambari.mycluster.com

HTTP/1.1 200 OK
Content-Type: text/html
Last-Modified: Fri, 02 Oct 2015 18:12:56 GMT
Accept-Ranges: bytes
Content-Length: 2012
Server: Jetty(8.1.17.v20150415)

<!--
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
-->

[...]
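The same request can be sent non-interactively, which is handy in scripts. A sketch, reusing the host from the transcript above and assuming nc is available:

```shell
# Build the raw HTTP request with printf and pipe it through netcat;
# 'Connection: close' makes the server hang up so the command returns
printf 'GET / HTTP/1.1\r\nHost: ambari.mycluster.com\r\nConnection: close\r\n\r\n' \
  | nc ambari.mycluster.com 8080
```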

Jan 11, 2016

check if a port is open without using telnet

On several machines in a corporate IT environment, telnet is not installed, so your usual telnet HOST PORT command doesn’t work.

You can replace it with a netcat command:

[root@dn25 ~]# nc -z nn.fqdn.com 8020
Connection to nn.fqdn.com 8020 port [tcp/intu-ec-svcdisc] succeeded!

Hint: use the -u option to check a UDP port.
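If even nc is missing, bash itself can do a TCP check through its /dev/tcp pseudo-device (a bash feature, not a real file; a sketch using the same host as above):

```shell
# Open (and immediately close) fd 3 to the host/port; success means the port answers
if (exec 3<>/dev/tcp/nn.fqdn.com/8020) 2>/dev/null; then
  echo "port open"
else
  echo "port closed"
fi
```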


Dec 14, 2015

installing an NFS gateway on Sandbox

The NFS gateway is a neat way to access HDFS without an HDFS client: HDFS then appears mounted on the local filesystem like any other directory.

We have to start by allowing the NFS user to impersonate the users who will access our cluster, so let’s add hadoop.proxyuser.nfsserver.groups and hadoop.proxyuser.nfsserver.hosts in HDFS/configs/custom core-site.xml:

NFS proxyuser
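The two properties could look like this (a sketch; '*' is the most permissive setting, restrict groups and hosts in production):

```xml
<!-- custom core-site.xml -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
</property>
```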


Then, in custom hdfs-site.xml, we add our Kerberos credentials (of course your Sandbox is kerberized, isn’t it?):

NFS: Kerberos credentials


In the same custom hdfs-site.xml, add the following properties, which respectively indicate a temporary spool directory (to re-order sequential writes before writing to HDFS) and the access control policy (here anyone can read/write, but you could use another policy of the form MACHINE_NAME RW_POLICY, where the latter can be rw (read & write) or ro (read-only)):

NFS: mount points
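These could look like the following (a sketch; the property names are assumed from this HDP-era NFS gateway, so check the documentation for your release, and the values are examples):

```xml
<!-- custom hdfs-site.xml -->
<property>
  <name>dfs.nfs3.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
</property>
<property>
  <name>dfs.nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>
```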

Of course, we have to add a principal and get a keytab for our NFS gateway.
Notice I had to use dfs.nfs.keytab.file and dfs.nfs.kerberos.principal for the nfs3 gateway to launch.
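So the Kerberos part of custom hdfs-site.xml could look like this (the keytab path and principal below are placeholders for your own):

```xml
<property>
  <name>dfs.nfs.keytab.file</name>
  <value>/etc/security/keytabs/nfs.service.keytab</value>
</property>
<property>
  <name>dfs.nfs.kerberos.principal</name>
  <value>nfs/_HOST@EXAMPLE.COM</value>
</property>
```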

We then have to launch portmap and nfs3:

[root@sandbox ~]# hadoop-daemon.sh start portmap

[root@sandbox ~]# hadoop-daemon.sh start nfs3

and mount a new directory as the mount point for accessing HDFS:

[root@sandbox ~]# mkdir -p /media/hdfs

[root@sandbox ~]# mount -t nfs -o vers=3,proto=tcp,nolock 10.0.2.15:/ /media/hdfs/

We can check that NFS is functional:

[root@sandbox ~]# ls -l /media/hdfs/
total 5
drwxrwxrwx 3 yarn hadoop 96 2015-12-03 14:42 app-logs
drwxr-xr-x 5 hdfs hdfs 160 2015-04-24 15:11 apps
drwxr-xr-x 3 hdfs hdfs 96 2015-04-24 15:56 demo
drwxr-xr-x 3 hdfs hdfs 96 2015-04-24 14:53 hdp
drwxr-xr-x 3 mapred hdfs 96 2015-04-24 14:52 mapred
drwxrwxrwx 4 hdfs hdfs 128 2015-04-24 14:52 mr-history
drwxr-xr-x 3 hdfs hdfs 96 2015-04-24 15:41 ranger
drwxr-xr-x 3 hdfs hdfs 96 2015-04-24 14:57 system
drwxrwxrwx 14 hdfs hdfs 448 2015-12-14 15:24 tmp
drwxr-xr-x 11 hdfs hdfs 352 2015-04-24 15:33 user

[root@sandbox ~]# cp ./test01 /media/hdfs/tmp/
[root@sandbox ~]# ls -l /media/hdfs/tmp/
total 16
drwx------ 3 ambari-qa hdfs 96 2015-12-03 14:44 ambari-qa
drwx-wx-wx 6 ambari-qa hdfs 192 2015-04-24 15:32 hive
-rw-r--r-- 1 root hdfs 87 2015-12-14 16:24 test01
drwxrwxrwx 8 hdfs hdfs 256 2015-04-24 15:31 udfs
drwx------ 3 ambari-qa hdfs 96 2015-12-03 14:44 yarn

Perfect! :)


Dec 9, 2015

scp keeping rights and permissions with rsync

We’ve all had to scp something while keeping owner, group, permissions, etc.

There’s no such option in scp, so you may want to use rsync to copy the local 2.3.2.0-2950 directory content to machine1:

[root@localhost ~]# rsync -avI /etc/hadoop/2.3.2.0-2950/ machine1:/etc/hadoop/2.3.2.0-2950

Here are the chosen options:

-a = archive mode (equals -rlptgoD)
-v = verbose
-p = preserve permissions
-o = preserve owner
-g = preserve group
-r = recurse into directories
-I = don’t skip files that have already been transferred


Nov 30, 2015

remount without noexec attribute

The CentOS /var mount point often has the noexec attribute. This is annoying when executing scripts on that mount point, like Ambari scripts!

So if you get “permission denied” when executing a script there, simply remount the mount point:

$ sudo mount -o remount,exec /var

Don’t forget to modify your /etc/fstab file accordingly so that the modification is permanent and not lost at each reboot.
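The corresponding /etc/fstab line could look like this (the device name and filesystem type below are placeholders for your own setup):

```
/dev/mapper/vg_root-var   /var   ext4   defaults,exec   0 2
```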


Jul 15, 2015

bash profile for quick launch a VirtualBox instance

When testing Hadoop on virtual machines, you usually have to launch instances in VirtualBox and open a terminal to ssh in.

Since you only need a terminal and not those VirtualBox windows, you can update your .bash_profile like this:

$ cat ~/.bash_profile
function hdp22() {
  if [[ $1 == "start" ]]; then
    VBoxManage startvm "Hortonworks Sandbox with HDP 2.2 Preview" --type headless && ssh 127.0.0.1
  elif [[ $1 == "stop" ]]; then
    VBoxManage controlvm "Hortonworks Sandbox with HDP 2.2 Preview" savestate
  else
    echo "Usage: hdp22 start|stop"
  fi
}


Type source ~/.bash_profile to load the function in your current shell without opening a new terminal; you’ll then just have to type hdp22 start to launch the VM and ssh into it.
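The same idea generalizes to any VM by passing the name as an argument (a sketch; vbox is a made-up function name, not a VirtualBox command):

```shell
# Generic start/stop wrapper around VBoxManage
vbox() {
  local vm="$1" action="$2"
  case "$action" in
    start) VBoxManage startvm "$vm" --type headless ;;
    stop)  VBoxManage controlvm "$vm" savestate ;;
    *)     echo "Usage: vbox <vm-name> start|stop" ;;
  esac
}
```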


Jul 10, 2015

test if a UDP port is open

nc -u <server> <port>

 

-u is for “UDP”. Note that UDP is connectionless, so unlike TCP there is no handshake: nc can’t always tell reliably whether a UDP port is really open.


Jun 24, 2015

count process threads

To quickly count the threads of a process (replace PID with the actual process id):

ls /proc/PID/task | wc -l
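Two equivalent checks, on Linux (a sketch, using the current shell's PID $$ as an example):

```shell
# /proc/<pid>/status has a Threads: field with the same count
grep Threads /proc/$$/status
# procps ps can print the thread count directly (nlwp = number of lightweight processes)
ps -o nlwp= -p $$
```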

