Linux-related bulletin board

2015/07/03(23:19) from 58.125.30.140
Author : board owner | Views : 5695 | Lines : 56
Re: [Hadoop] 3. Error messages
[root@hadoop01 hadoop]# hadoop fs -ls
15/07/03 23:16:36 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:37 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:38 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:39 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:40 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:41 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:42 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:43 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:44 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/03 23:16:45 INFO ipc.Client: Retrying connect to server: hadoop01/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
ls: Call to hadoop01/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
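A quick way to confirm that the refusals above mean the NameNode simply is not running (rather than, say, a firewall problem) is to list the Java processes on the node. This check is a generic sketch, not part of the original transcript:

```shell
# If no NameNode JVM is listed, nothing is listening on port 9000 and
# "Connection refused" is the expected symptom, not a network fault.
if jps | grep -q NameNode; then
    echo "NameNode is up -- look elsewhere (firewall, wrong port in core-site.xml)"
else
    echo "NameNode is down -- check the logs under \$HADOOP_HOME/logs"
fi
```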


@ Solution
[root@hadoop01 hadoop]# hadoop namenode -format

[root@hadoop01 bin]# ./stop-all.sh

[root@hadoop01 bin]# ./start-all.sh

@ Success
[root@hadoop01 bin]# hadoop fs -ls
ls: Cannot access .: No such file or directory.
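That last message is expected, not a failure: on a freshly formatted HDFS the user's home directory does not exist yet, so a bare `fs -ls` has nothing to resolve `.` against. A sketch of the follow-up (the `/user/root` path is an assumption for a root login):

```shell
# After `hadoop namenode -format` the filesystem is empty; create the
# home directory that a bare `hadoop fs -ls` implicitly lists.
hadoop fs -mkdir /user/root   # Hadoop 1.x mkdir creates parent dirs
hadoop fs -ls                 # now returns an empty listing with no error
```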



[root@hadoop01 hadoop]# hadoop fs -copyFromLocal README.txt /input
15/07/04 00:15:26 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input/README.txt could only be replicated to 0 nodes, instead of 1
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
       at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:606)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:415)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)


@ Solution
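No fix was recorded here, so the following is only a hedged sketch of the usual Hadoop 1.x remedy. "Could only be replicated to 0 nodes" means the NameNode sees no live DataNodes; after a `namenode -format` this is commonly a stale namespaceID left in the DataNode storage directory. The data path below is an assumption (check `dfs.data.dir` in conf/hdfs-site.xml), and the procedure wipes existing block data, so it only suits a scratch cluster:

```shell
# ASSUMED dfs.data.dir -- verify against conf/hdfs-site.xml before running.
DATA_DIR=/home/hadoop/hdfs/data

stop-all.sh                 # stop every daemon before touching storage
rm -rf "$DATA_DIR"          # WARNING: destroys all HDFS block data
hadoop namenode -format     # reformat so namespaceIDs match again
start-all.sh
hadoop dfsadmin -report     # should now report at least one live datanode
```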




@ Error
INFO common.Storage: Cannot lock storage /home/hadoop/hdfs/name. The directory is already locked.

@ Solution

$ find / -name in_use.lock -exec ls -l {} \;   # locate any stale lock files first
$ find / -name in_use.lock -exec rm -i {} \;   # then remove them interactively
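One caution worth adding to the find/rm recipe: `in_use.lock` is held by a live daemon, so deleting it while a NameNode or DataNode is still running only hides the real problem (two processes pointed at the same storage directory). A hedged pre-check sketch:

```shell
# Make sure no Hadoop daemon still owns the lock before deleting it.
stop-all.sh
if jps | grep -Eq 'NameNode|DataNode|SecondaryNameNode'; then
    echo "daemons still running -- do not delete in_use.lock yet"
else
    find / -name in_use.lock -exec rm -i {} \;
fi
```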

Powered by Kang Jul Ki