The start-all.sh and stop-all.sh scripts in the hadoop/bin directory use SSH to launch some of the Hadoop daemons. If SSH is not available on the server, follow the steps below to run Hadoop without it.
SOLUTION
The goal is to replace every call to "hadoop-daemons.sh" with "hadoop-daemon.sh". The "hadoop-daemons.sh" script simply runs "hadoop-daemon.sh" on each slave host through SSH, so calling "hadoop-daemon.sh" directly removes the SSH dependency for the local daemons.
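For reference, this is roughly where the SSH dependency lives in the Hadoop 1.x scripts. The excerpt below is a simplified sketch from memory of the stock bin/hadoop-daemons.sh and bin/slaves.sh; the exact wording varies by release, so treat it as illustrative rather than verbatim:

    # bin/hadoop-daemons.sh: delegates to slaves.sh, which runs
    # hadoop-daemon.sh on every host listed in conf/slaves
    exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR \
        cd "$HADOOP_HOME" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"

    # bin/slaves.sh: the loop that actually requires SSH
    for slave in `cat "$HOSTLIST"`; do
        ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }" &
    done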
- Modify start-dfs.sh script:
from:
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
to:
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
- Modify stop-dfs.sh script:
from:
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters stop secondarynamenode
to:
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR --hosts masters stop secondarynamenode
- Modify start-mapred.sh script:
from:"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start tasktracker
to:"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start tasktracker - Modify stop-mapred.sh script:
from:"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop jobtracker
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop tasktracker
to:"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop jobtracker
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop tasktracker
Note that after this change, start-all.sh and stop-all.sh will start and stop only the Hadoop daemons on this server; they will no longer reach any other node remotely. Daemons on all other slave nodes must be started and stopped manually, directly on those servers.
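For example, to manage the worker daemons on a remote slave, log in to that server and run hadoop-daemon.sh directly. This assumes the standard Hadoop 1.x daemon names used above and that HADOOP_HOME and HADOOP_CONF_DIR are set appropriately on that host:

    # run locally on each DataNode/TaskTracker host:
    "$HADOOP_HOME"/bin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" start datanode
    "$HADOOP_HOME"/bin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" start tasktracker

    # and to stop them:
    "$HADOOP_HOME"/bin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" stop datanode
    "$HADOOP_HOME"/bin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" stop tasktracker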