Hadoop Installation

论坛徽章:
2
丑牛
日期:2013-09-29 09:47:222015七夕节徽章
日期:2015-08-21 11:06:17
跳转到指定楼层
1 [收藏(0)] [报告]
Posted on 2008-03-23 20:47
1. Extract the archive:

tar -zvxf hadoop-0.16.1.tar.gz

Then set the environment variables:
# vi /etc/profile

JAVA_HOME=/usr/local/jrockit
export JAVA_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH
CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export CLASSPATH
HADOOP_HOME=/usr/local/hadoop
export HADOOP_HOME
HADOOP_LOG_DIR=/var/log/hadoop
export HADOOP_LOG_DIR
HADOOP_SSH_OPTS="-p 2222"    # SSH port
export HADOOP_SSH_OPTS
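A minimal sketch of verifying these settings in a fresh shell, assuming the paths above match your installation:

source /etc/profile
echo $HADOOP_HOME $HADOOP_LOG_DIR $HADOOP_SSH_OPTS
$JAVA_HOME/bin/java -version    # should report the JRockit JVM configured above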
2. SSH public key authentication
ssh-keygen -t rsa
When prompted for a passphrase, leave it empty; this generates id_rsa.pub in the .ssh directory.
cp id_rsa.pub authorized_keys
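Because HADOOP_SSH_OPTS above uses port 2222, the start scripts will ssh to the nodes on that port. A quick sketch of confirming that passwordless login works, assuming sshd really listens on 2222 (the chmod is a common precaution, since sshd usually ignores key files with loose permissions):

chmod 600 ~/.ssh/authorized_keys
ssh -p 2222 localhost true    # should return without prompting for a password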
3. Add the following to hadoop-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>develop:9000</value>
    <description>The name of the default file system. Either the literal string "local"
    or a host:port for DFS.</description>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>develop:9001</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local",
    then jobs are run in-process as a single map and reduce task.</description>
  </property>
</configuration>
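Both properties refer to the host develop, so that name must resolve to the machine's real address on every node. A minimal check, assuming name resolution via /etc/hosts (the IP shown is only an example):

getent hosts develop
# if nothing is printed, add a line such as the following to /etc/hosts:
# 192.168.0.1    develop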
4. Format the NameNode
hadoop@develop:/usr/local/hadoop/bin> ./hadoop namenode -format
[JRockit] Local management server started.
08/03/16 18:47:29 INFO dfs.NameNode: STARTUP_MSG:
[color="darkgreen"]/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = develop/219.232.239.88
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.16.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16
-r 635123; compiled by 'hadoopqa' on Sun Mar  9 05:44:19 UTC 2008
************************************************************/
Re-format filesystem in /tmp/hadoop-hadoop/dfs/name ? (Y or N) Y
08/03/16 18:47:46 INFO fs.FSNamesystem: fsOwner=hadoop,users,dialout,video
08/03/16 18:47:46 INFO fs.FSNamesystem: supergroup=supergroup
08/03/16 18:47:46 INFO fs.FSNamesystem: isPermissionEnabled=true
08/03/16 18:47:46 INFO dfs.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name
has been successfully formatted.
08/03/16 18:47:46 INFO dfs.NameNode: SHUTDOWN_MSG:
[color="darkgreen"]/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at develop/*.*.*.*
************************************************************/
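Note that the log shows the metadata going to /tmp/hadoop-hadoop/dfs/name, the default when hadoop.tmp.dir is not set; /tmp is usually cleared on reboot, so for anything beyond a quick test you would point Hadoop at a persistent directory. A minimal sketch of confirming the format succeeded (exact file names can vary between versions):

ls -l /tmp/hadoop-hadoop/dfs/name/current
# a freshly formatted directory typically contains files such as fsimage, edits and VERSION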
5. Start
hadoop@develop:/usr/local/hadoop/bin> ./start-dfs.sh

2008-03-16 18:48:20,324 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
[color="darkgreen"]/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = develop/*.*.*.*
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.16.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16
-r 635123; compiled by 'hadoopqa' on Sun Mar  9 05:44:19 UTC 2008
************************************************************/
2008-03-16 18:48:20,856 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC
Metrics with hostName=NameNode, port=9000
2008-03-16 18:48:20,878 INFO org.apache.hadoop.dfs.NameNode: Namenode up at:
develop.chinajavaworld.com/219.232.239.88:9000
2008-03-16 18:48:20,886 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing
JVM Metrics with processName=NameNode, sessionId=null
2008-03-16 18:48:20,887 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing
NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2008-03-16 18:48:21,027 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=hadoop,users,
dialout,video
2008-03-16 18:48:21,028 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
2008-03-16 18:48:21,028 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true

2008-03-16 18:48:21,169 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading
FSImage in 235 msecs
2008-03-16 18:48:21,185 INFO org.apache.hadoop.fs.FSNamesystem: Leaving safemode
after 251 msecs
2008-03-16 18:48:21,190 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology
has 0 racks and 0 datanodes
2008-03-16 18:48:21,191 INFO org.apache.hadoop.dfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2008-03-16 18:48:21,207 INFO org.apache.hadoop.fs.FSNamesystem: Registered
FSNamesystemStatusMBean
2008-03-16 18:48:21,345 INFO org.mortbay.util.Credential: Checking Resource aliases
2008-03-16 18:48:21,433 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-03-16 18:48:21,449 INFO org.mortbay.util.Container: Started HttpContext[/static,
/static]
2008-03-16 18:48:21,449 INFO org.mortbay.util.Container: Started HttpContext[/logs,
/logs]
2008-03-16 18:48:22,700 INFO org.mortbay.util.Container: Started org.mortbay.jetty.
servlet.WebApplicationHandler@825e45e
2008-03-16 18:48:22,788 INFO org.mortbay.util.Container: Started WebApplicationContext
[/,/]
2008-03-16 18:48:22,809 INFO org.mortbay.http.SocketListener: Started SocketListener
on 0.0.0.0:50070
2008-03-16 18:48:22,809 INFO org.mortbay.util.Container: Started org.mortbay.jetty.
Server@81f2b56
2008-03-16 18:48:22,810 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at:
0.0.0.0:50070
2008-03-16 18:48:22,817 INFO org.apache.hadoop.ipc.Server: IPC Server Responder:
starting
2008-03-16 18:48:22,822 INFO org.apache.hadoop.ipc.Server: IPC Server listener on
9000: starting
2008-03-16 18:48:22,831 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on
9000: starting
2008-03-16 18:48:22,832 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on
9000: starting
2008-03-16 18:48:22,835 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on
9000: starting
2008-03-16 18:48:22,835 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on
9000: starting
2008-03-16 18:48:22,836 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on
9000: starting
2008-03-16 18:48:22,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on
9000: starting
2008-03-16 18:48:22,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on
9000: starting
2008-03-16 18:48:22,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on
9000: starting
2008-03-16 18:48:22,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on
9000: starting
2008-03-16 18:48:22,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on
9000: starting
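The transcript above only starts the HDFS daemons. Since the JobTracker is configured on port 9001 and its web UI on port 50030 is checked below, the MapReduce daemons presumably have to be started as well. A minimal sketch, assuming jps is available from the installed JDK:

cd /usr/local/hadoop/bin
./start-mapred.sh    # starts the JobTracker and TaskTrackers
jps                  # should list NameNode, DataNode, JobTracker and TaskTracker (plus SecondaryNameNode if configured)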
6. Test with files
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs -mkdir test
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs -put /home/hadoop/a.jpg a.jpg
08/03/16 19:30:19 INFO fs.DFSClient: Exception in createBlockOutputStream java.io.
IOException: Bad connect ack with firstBadLink 192.168.0.1:50010
08/03/16 19:30:19 INFO fs.DFSClient: Abandoning block blk_-7295075409727404902
08/03/16 19:30:19 INFO fs.DFSClient: Waiting to find target node: 192.168.1.2:50010
08/03/16 19:31:25 INFO fs.DFSClient: Exception in createBlockOutputStream java.io.
IOException: Bad connect ack with firstBadLink 192.168.0.1:50010
08/03/16 19:31:25 INFO fs.DFSClient: Abandoning block blk_4082747187388859381
08/03/16 19:31:25 INFO fs.DFSClient: Waiting to find target node: 192.168.1.2:50010
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs -ls
Found 2 items
/user/hadoop/a.jpg    144603    2008-03-16 19:29    rw-r--r--    hadoop    supergroup
/user/hadoop/test               2008-03-16 19:27    rwxr-xr-x    hadoop    supergroup
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs -put /home/hadoop/a.jpg b.jpg
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs -ls
Found 3 items
/user/hadoop/a.jpg    144603    2008-03-16 19:29    rw-r--r--    hadoop    supergroup
/user/hadoop/b.jpg    144603    2008-03-16 19:33    rw-r--r--    hadoop    supergroup
/user/hadoop/test               2008-03-16 19:27    rwxr-xr-x    hadoop    supergroup
Note: port 50010 must be open in the firewall on 192.168.0.1, otherwise the errors shown above will occur.
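A minimal sketch of opening that port, assuming the machine uses iptables directly (adjust to whatever firewall frontend your distribution ships, and make the rule persistent through its usual mechanism):

iptables -I INPUT -p tcp --dport 50010 -j ACCEPT    # DataNode data-transfer port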
hadoop@develop:/usr/local/hadoop/bin> ./hadoop dfs
Usage: java FsShell
           [-ls <path>]
           [-lsr <path>]
           [-du <path>]
           [-dus <path>]
           [-mv <src> <dst>]
           [-cp <src> <dst>]
           [-rm <path>]
           [-rmr <path>]
           [-expunge]
           [-put <localsrc> <dst>]
           [-copyFromLocal <localsrc> <dst>]
           [-moveFromLocal <localsrc> <dst>]
           [-get [-crc] <src> <localdst>]
           [-getmerge <src> <localdst> [addnl]]
           [-cat <src>]
           [-text <src>]
           [-copyToLocal [-crc] <src> <localdst>]
           [-moveToLocal [-crc] <src> <localdst>]
           [-mkdir <path>]
           [-setrep [-R] [-w] <rep> <path/file>]
           [-touchz <path>]
           [-test -[ezd] <path>]
           [-stat [format] <path>]
           [-tail [-f] <file>]
           [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
           [-chown [-R] [OWNER][:[GROUP]] PATH...]
           [-chgrp [-R] GROUP PATH...]
           [-help [cmd]]
Check the health of the Hadoop file system:
http://192.168.0.1:50070/
Check the Hadoop Map/Reduce status:
http://192.168.0.1:50030/
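As a command-line alternative to the DFS web page, the same health information can presumably be obtained with dfsadmin:

cd /usr/local/hadoop/bin
./hadoop dfsadmin -report    # prints capacity, used space and the list of live datanodes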
[Screenshots: hadoop-dfs-health.gif, hadoop-map-reduce-admin.gif, hadoop-dfs-files.gif]
This article is from the ChinaUnix blog. Original post: http://blog.chinaunix.net/u/4206/showart_505540.html