{"id":1459,"date":"2024-05-08T11:13:52","date_gmt":"2024-05-08T03:13:52","guid":{"rendered":"http:\/\/oneai.eu.org\/?p=1459"},"modified":"2024-05-08T11:14:06","modified_gmt":"2024-05-08T03:14:06","slug":"hadoop%e5%8d%95%e6%9c%ba%e6%a8%a1%e5%bc%8f%e9%83%a8%e7%bd%b2","status":"publish","type":"post","link":"https:\/\/oneai.eu.org\/?p=1459","title":{"rendered":"hadoop\u5355\u673a\u6a21\u5f0f\u90e8\u7f72"},"content":{"rendered":"<pre><code class=\"language-shell\">1.\u6dfb\u52a0\u73af\u5883\u53d8\u91cf:\n\nvi etc\/hadoop\/hadoop-env.sh   \u53caprofile\u91cc\u914d\u7f6e\uff1a\nexport JAVA_HOME=\/opt\/jdk1.8.0_25\nexport HADOOP_PREFIX=\/home\/zjy\/hadoop\n\n2\uff0c\u4fee\u6539\u914d\u7f6e\u6587\u4ef6\uff1a\netc\/hadoop\/core-site.xml:\n\n&lt;configuration&gt;\n    &lt;property&gt;\n        &lt;name&gt;fs.defaultFS&lt;\/name&gt;\n        &lt;value&gt;hdfs:\/\/localhost:9000&lt;\/value&gt;\n    &lt;\/property&gt;\n&lt;\/configuration&gt;\netc\/hadoop\/hdfs-site.xml:\n\n&lt;configuration&gt;\n    &lt;property&gt;\n        &lt;name&gt;dfs.replication&lt;\/name&gt;\n        &lt;value&gt;1&lt;\/value&gt;\n    &lt;\/property&gt;\n&lt;\/configuration&gt;\n--3\u3002\u5efa\u8bae\u65e0\u5bc6\u7801\u767b\u5f55\u8ba4\u8bc1\uff1a\n#ssh-keygen -t dsa -P &#039;&#039; -f ~\/.ssh\/id_dsa\n#cat ~\/.ssh\/id_dsa.pub &gt;&gt; ~\/.ssh\/authorized_keys\n--4\u3002\u683c\u5f0f\u5316\u6587\u4ef6\u7cfb\u7edf\uff1a\nbin\/hdfs namenode -format\n\n--5\u3002\u542f\u52a8\nsbin\/start-dfs.sh\n     --6.\u67e5\u770b\n\u65e5\u5fd7\uff1a$HADOOP_HOME\/logs\n\u524d\u53f0\u67e5\u770bnamenode\u4fe1\u606f\uff1a http:\/\/localhost:50070\/\n\n       --7\u3002\u521b\u5efaHDFS \n $ bin\/hdfs dfs -mkdir \/user\n  $ bin\/hdfs dfs -mkdir \/user\/&lt;username&gt;          Copy the input files into the distributed filesystem: \n         --8\u3002\u8fd0\u884c\u81ea\u5e26\u7684\u4f8b\u5b50\uff1a\n $ bin\/hdfs dfs -put etc\/hadoop  input            \/\/\u5c06 \uff1a\/etc\/hadoop \u88c5\u5165\u6587\u4ef6\u7cfb\u7edf\n  $ 
8. Run the bundled example:
$ bin/hdfs dfs -put etc/hadoop input    # load etc/hadoop into the distributed filesystem as "input"
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -get output output       # fetch the output files from HDFS
$ cat output/*                          # view the fetched files
$ bin/hdfs dfs -cat output/*            # or view them directly in HDFS

9. Stop:
sbin/stop-dfs.sh                        # stop Hadoop

-- YARN (a newer-version feature), configured in single-node mode --
Edit the configuration files:
etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
-------
etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
---------
Start YARN:
sbin/start-yarn.sh
Stop YARN:
sbin/stop-yarn.sh
------------------------------------------------
Common problems:

1. /tmp/hadoop-zjy-secondarynamenode.pid: Permission denied
fix: chmod -R 777 tmp/

2. Java HotSpot(TM) Client VM warning: You have loaded library /home/zjy/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/11/09 23:09:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable
fix:
vi ~/.profile
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
source ~/.profile

3. When putting files into HDFS:
put: File /input/file1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
Cause: repeated formatting has left the dfs versions inconsistent.
fix: delete the filesystem and re-run: hdfs namenode -format
Or edit the filesystem version so it matches the one recorded under $HADOOP_PREFIX/tmp/dfs.

4. Name node is in safe mode
fix: bin/hadoop dfsadmin -safemode leave
Safe mode is controlled with dfsadmin -safemode <value>, where <value> is one of:
enter - enter safe mode
leave - force the NameNode out of safe mode
get   - report whether safe mode is on
wait  - block until safe mode ends
-------------
5. org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost/user/zjy/input
The input directory was never uploaded.
fix: hadoop fs -put conf input

6. 2014-11-10 01:01:53,107 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:9000
Cause: the "127.0.0.1 localhost" mapping in /etc/hosts.
fix: vi /etc/hosts
# 127.0.0.1 localhost
Then run stop-all.sh and re-format: hdfs namenode -format
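For problem 6, the offending line can be located before editing. A minimal sketch that checks a hosts-format file for the mapping, run here against a fabricated sample file rather than the live /etc/hosts:

```shell
# Fabricated sample in /etc/hosts format (illustration only;
# the hostname is borrowed from the error in problem 8 below).
cat > hosts.sample <<'EOF'
127.0.0.1 localhost
10.200.25.154 YFCS-S6-APP
EOF

# Show any line mapping 127.0.0.1 to localhost; this is the line
# the fix comments out before re-formatting the NameNode.
grep -nE '^127\.0\.0\.1[[:space:]]+localhost' hosts.sample
# prints: 1:127.0.0.1 localhost
```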
7. There are no datanodes in the cluster.
----------
Run the commands below to re-initialize and start the DataNode.
As the hadoop user:
hadoop namenode -format -clusterid clustername   # format the cluster
start-dfs.sh                                     # start the hdfs service
start-yarn.sh                                    # start the yarn resource-management service
httpfs.sh start                                  # start the httpfs service
-------------------------------------
8. Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to YFCS-S6-APP/10.200.25.154:9000. Exiting
Open the datanode and namenode directories configured in hdfs-site.xml and compare the VERSION files in their current/ folders: the clusterID values differ, exactly as the log records. Edit the datanode's VERSION so its clusterID matches the namenode's, restart dfs (start-dfs.sh), and jps will then show the datanode running normally.
Cause: after the first format, Hadoop was started and used; running the format command again (hdfs namenode -format) regenerates the namenode's clusterID while the datanode's clusterID stays unchanged.
If making them match does not fix it, delete the files under the datanode directory:
rm -rf /home/zjy/hadoop/tmp/dfs/data/current/*
then restart.
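Problem 8's diagnosis boils down to comparing the clusterID field of the two VERSION files. A sketch of that comparison; the VERSION files below are fabricated stand-ins for the real current/VERSION files under the namenode and datanode directories:

```shell
# Fabricated VERSION files (illustration only) standing in for the
# real ones under the directories configured in hdfs-site.xml.
cat > nn.VERSION <<'EOF'
namespaceID=12345
clusterID=CID-aaaa-bbbb
EOF
cat > dn.VERSION <<'EOF'
clusterID=CID-cccc-dddd
storageType=DATA_NODE
EOF

# Extract the clusterID value from each file.
nn_id=$(sed -n 's/^clusterID=//p' nn.VERSION)
dn_id=$(sed -n 's/^clusterID=//p' dn.VERSION)

if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match: $nn_id"
else
  # Mismatch: copy the namenode's value into the datanode's VERSION
  # file (as problem 8 describes), then restart dfs.
  echo "mismatch: namenode=$nn_id datanode=$dn_id"
fi
```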
9. Running custom Hadoop code remotely from Eclipse fails with: Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=WRITE, inode="/":zjy:supergroup:drwxr-xr-x
Cause: the local Windows development environment has no ssh installed (or ssh is misconfigured), so the user has no write permission on the relevant directories of the remote hdfs server.
fix:
1. Install and configure ssh locally; add the matching local user, ideally in the Administrators group.
2. On the hdfs server: hdfs dfs -chmod 777 -R /tmp ; hdfs dfs -chmod 777 -R /user/

Successful example run:

zjy@zjy:/home/zjy/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep /input /output 'dfs[a-z.]+'
Java HotSpot(TM) Client VM warning: You have loaded library /home/zjy/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/11/10 17:52:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/11/10 17:52:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/11/10 17:52:34 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/11/10 17:52:34 INFO input.FileInputFormat: Total input paths to process : 1
14/11/10 17:52:35 INFO mapreduce.JobSubmitter: number of splits:1
14/11/10 17:52:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1415670144299_0004
14/11/10 17:52:35 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
14/11/10 17:52:35 INFO impl.YarnClientImpl: Submitted application application_1415670144299_0004
14/11/10 17:52:35 INFO mapreduce.Job: The url to track the job: http://zjy:8088/proxy/application_1415670144299_0004/
14/11/10 17:52:35 INFO mapreduce.Job: Running job: job_1415670144299_0004
14/11/10 17:52:41 INFO mapreduce.Job: Job job_1415670144299_0004 running in uber mode : false
14/11/10 17:52:41 INFO mapreduce.Job:  map 0% reduce 0%
14/11/10 17:52:47 INFO mapreduce.Job:  map 100% reduce 0%
14/11/10 17:52:54 INFO mapreduce.Job:  map 100% reduce 100%
14/11/10 17:52:54 INFO mapreduce.Job: Job job_1415670144299_0004 completed successfully
14/11/10 17:52:54 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=194175
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=115
                HDFS: Number of bytes written=86
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3082
                Total time spent by all reduces in occupied slots (ms)=3570
                Total time spent by all map tasks (ms)=3082
                Total time spent by all reduce tasks (ms)=3570
                Total vcore-seconds taken by all map tasks=3082
                Total vcore-seconds taken by all reduce tasks=3570
                Total megabyte-seconds taken by all map tasks=3155968
                Total megabyte-seconds taken by all reduce tasks=3655680
        Map-Reduce Framework
                Map input records=1
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=104
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=6
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=162
                CPU time spent (ms)=1160
                Physical memory (bytes) snapshot=221036544
                Virtual memory (bytes) snapshot=629686272
                Total committed heap usage (bytes)=137498624
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=11
        File Output Format Counters
                Bytes Written=86
14/11/10 17:52:54 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/11/10 17:52:54 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/11/10 17:52:54 INFO input.FileInputFormat: Total input paths to process : 1
14/11/10 17:52:54 INFO mapreduce.JobSubmitter: number of splits:1
14/11/10 17:52:54 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1415670144299_0005
14/11/10 17:52:54 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
14/11/10 17:52:54 INFO impl.YarnClientImpl: Submitted application application_1415670144299_0005
14/11/10 17:52:54 INFO mapreduce.Job: The url to track the job: http://zjy:8088/proxy/application_1415670144299_0005/
14/11/10 17:52:54 INFO mapreduce.Job: Running job: job_1415670144299_0005
14/11/10 17:53:06 INFO mapreduce.Job: Job job_1415670144299_0005 running in uber mode : false
14/11/10 17:53:06 INFO mapreduce.Job:  map 0% reduce 0%
14/11/10 17:53:12 INFO mapreduce.Job:  map 100% reduce 0%
14/11/10 17:53:17 INFO mapreduce.Job:  map 100% reduce 100%
14/11/10 17:53:18 INFO mapreduce.Job: Job job_1415670144299_0005 completed successfully
14/11/10 17:53:18 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=193153
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=220
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=7
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3259
                Total time spent by all reduces in occupied slots (ms)=2955
                Total time spent by all map tasks (ms)=3259
                Total time spent by all reduce tasks (ms)=2955
                Total vcore-seconds taken by all map tasks=3259
                Total vcore-seconds taken by all reduce tasks=2955
                Total megabyte-seconds taken by all map tasks=3337216
                Total megabyte-seconds taken by all reduce tasks=3025920
        Map-Reduce Framework
                Map input records=0
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=134
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=6
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=157
                CPU time spent (ms)=1200
                Physical memory (bytes) snapshot=219996160
                Virtual memory (bytes) snapshot=628498432
                Total committed heap usage (bytes)=137498624
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=86
        File Output Format Counters
                Bytes Written=0
zjy@zjy:/home/zjy/hadoop$
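The counter dump above is plain text, so quick sanity checks can be scripted against a saved copy of the job output. A sketch; the log fragment below is a fabricated excerpt in the same "Counters" format, not the full log:

```shell
# A fabricated fragment in the same format as the job counters above.
cat > job.log <<'EOF'
        Map-Reduce Framework
                Map input records=1
                Map output records=0
                Reduce output records=0
EOF

# Pull individual counter values by name, stripping the leading indent.
map_in=$(sed -n 's/^ *Map input records=//p' job.log)
reduce_out=$(sed -n 's/^ *Reduce output records=//p' job.log)
echo "map input=$map_in reduce output=$reduce_out"
```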