If you get an error like the following when running an ls:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.StringBuffer.toString(StringBuffer.java:561)
at java.net.URI.toString(URI.java:1926)
at java.net.URI.<init>(URI.java:749)
at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
...
The listing runs in the client JVM, which can exhaust its default heap on a large directory tree. You can increase the client memory:
HADOOP_CLIENT_OPTS="-Xmx4g" hdfs dfs -ls -R /
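Setting the variable inline only affects that one command. To make the larger heap the default for all client commands, you can export HADOOP_CLIENT_OPTS in hadoop-env.sh (the exact path depends on your installation; $HADOOP_CONF_DIR/hadoop-env.sh is a common location):

```shell
# In $HADOOP_CONF_DIR/hadoop-env.sh (path is installation-specific):
# raise the client JVM heap so commands like `hdfs dfs -ls -R` on large
# trees do not hit "GC overhead limit exceeded"
export HADOOP_CLIENT_OPTS="-Xmx4g"
```

This only changes the heap of client-side tools (hdfs dfs, etc.), not the NameNode or DataNode daemons.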
So, what do you think?