ayumu_ao's diary

An engineer who moved from an SIer to a product company, casually rambling about technology, organizational topics, books, and past experiences.

Notes from setting up a development environment on Windows while following various guides

I had a bit of free time recently, so I set up a development environment while referring to various sites.

Building a virtual environment with Vagrant

① Install VirtualBox and Vagrant

Download the installers from the official VirtualBox and Vagrant sites and install them (just keep clicking Next).

② Add a box

$ vagrant box add CentOS6.6 https://github.com/tommy-muehle/puppet-vagrant-boxes/releases/download/1.0.0/centos-6.6-x86_64.box
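
To confirm the box was registered under the name CentOS6.6, listing the installed boxes should show it:

$ vagrant box list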

③ Start the server

$ mkdir centos
$ cd centos
$ vagrant init CentOS6.6
$ vagrant up
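
As an optional aside, some of the services installed below expose web UIs (for example Hadoop's NameNode on port 50070 and JobTracker on port 50030), and forwarding those ports makes them reachable from the host browser. A minimal sketch of the relevant Vagrantfile entries, assuming Vagrant's v2 configuration syntax; run vagrant reload after editing:

Vagrant.configure("2") do |config|
  config.vm.box = "CentOS6.6"
  # Optional: forward the Hadoop web UIs to the host browser
  config.vm.network :forwarded_port, guest: 50070, host: 50070  # NameNode UI
  config.vm.network :forwarded_port, guest: 50030, host: 50030  # JobTracker UI
end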

 

④ Connect to the server

 

$ vagrant ssh

 

Install Java

 

$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.rpm" -O jdk-7u79-linux-x64.rpm
 
$ sudo rpm -ivh jdk-7u79-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        jfxrt.jar...
 
$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

 

Set environment variables

Add JAVA_HOME

$ sudo vi /etc/profile
[
export JAVA_HOME=/usr/java/default
]
$ source /etc/profile
$ echo $JAVA_HOME
/usr/java/default
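
Optionally, the JDK's bin directory can go on the PATH in the same file; this makes JDK tools like jps (handy for checking the Hadoop daemons later) usable without a full path. An optional extra line, following the same convention:

$ sudo vi /etc/profile
[
export PATH=$JAVA_HOME/bin:$PATH
]
$ source /etc/profile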

 

MongoDB

① Add the 10gen repository

$ sudo vim /etc/yum.repos.d/10gen.repo
[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=0
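
Note that enabled=0 leaves the repository disabled by default, so it is only consulted when passed explicitly with --enablerepo, as the install command below does. A quick way to check that yum can see it:

$ yum repolist --enablerepo=10gen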

② Install MongoDB

$ sudo yum install mongo-10gen-server.x86_64 mongo-10gen.x86_64 --enablerepo=10gen
Loaded plugins: fastestmirror
Setting up Install Process
Determining fastest mirrors
 * base: ftp.yz.yamagata-u.ac.jp
(snip)
 
Installed:
  mongodb-org.x86_64 0:2.6.11-1            mongodb-org-server.x86_64 0:2.6.11-1
 
Dependency Installed:
  mongodb-org-mongos.x86_64 0:2.6.11-1     mongodb-org-shell.x86_64 0:2.6.11-1     mongodb-org-tools.x86_64 0:2.6.11-1
 
Complete!

③ Confirm it starts

$ sudo /etc/init.d/mongod start
Starting mongod:                                           [  OK  ]

 

④ Set it to start automatically at boot

$ sudo chkconfig mongod on
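
As a quick smoke test, the mongo shell should now connect to the local server. Inserting and reading back a throwaway document (the collection name here is arbitrary):

$ mongo
> db.test.insert({ msg: "hello" })
> db.test.find()
> exit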

 

RabbitMQ

① Create the SSL certificate authority and certificates

$ cd /tmp
$ wget http://sensuapp.org/docs/0.16/tools/ssl_certs.tar
 
$ tar xvf ssl_certs.tar
ssl_certs/
ssl_certs/sensu_ca/
ssl_certs/ssl_certs.sh
ssl_certs/sensu_ca/openssl.cnf
 
$ cd ssl_certs
$ ./ssl_certs.sh generate
Generating SSL certificates for Sensu ...
Generating a 2048 bit RSA private key
(snip)
 
Write out database with 1 new entries
Data Base Updated
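
After generation, the working directory should hold per-role directories with the CA, server, and client certificate files; the server ones get copied into place in the SSL step below:

$ ls
client  sensu_ca  server  ssl_certs.sh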

② Install Erlang

$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.NeVGgn: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
 
$ sudo yum install erlang
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
(snip)
 
Dependency Installed:
(snip)
 
Complete!

③ Install RabbitMQ

$ sudo rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
$ sudo rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.2.1/rabbitmq-server-3.2.1-1.noarch.rpm
Retrieving http://www.rabbitmq.com/releases/rabbitmq-server/v3.2.1/rabbitmq-server-3.2.1-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:rabbitmq-server        ########################################### [100%]

④ Start the RabbitMQ server

$ sudo chkconfig rabbitmq-server on
$ sudo service rabbitmq-server start
Starting rabbitmq-server: SUCCESS
rabbitmq-server.

 

⑤ Put the SSL files in place (run from the /tmp/ssl_certs directory created above)

$ sudo mkdir -p /etc/rabbitmq/ssl
$ sudo cp -a sensu_ca/cacert.pem /etc/rabbitmq/ssl/
$ sudo cp -a server/cert.pem /etc/rabbitmq/ssl/
$ sudo cp -a server/key.pem /etc/rabbitmq/ssl/

⑥ Configure RabbitMQ for SSL

$ sudo vi /etc/rabbitmq/rabbitmq.config
[
    {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/cacert.pem"},
                   {certfile,"/etc/rabbitmq/ssl/cert.pem"},
                   {keyfile,"/etc/rabbitmq/ssl/key.pem"},
                   {verify,verify_peer},
                   {fail_if_no_peer_cert,true}]}
  ]}
].

⑦ Restart RabbitMQ

$ sudo service rabbitmq-server restart
Restarting rabbitmq-server: SUCCESS
rabbitmq-server.
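
To verify the SSL listener actually came up, check for port 5671 in rabbitmqctl's status output or among the listening sockets:

$ sudo rabbitmqctl status | grep 5671
$ sudo netstat -plnt | grep 5671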

Hadoop

① Register the repo

$ wget http://archive.cloudera.com/redhat/cdh/cloudera-cdh3.repo
--YYYY-MM-DD hh:mm:ss--  http://archive.cloudera.com/redhat/cdh/cloudera-cdh3.repo
(snip)
 
YYYY-MM-DD hh:mm:ss  (74.3 MB/s) - “cloudera-cdh3.repo” saved [295/295]
 
$ sudo mv cloudera-cdh3.repo /etc/yum.repos.d/
$ sudo yum update yum
Loaded plugins: fastestmirror
(snip)
 
Updated:
  yum.noarch 0:3.2.29-69.el6.centos
 
Complete!
 
$ yum search hadoop
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: ftp.riken.jp
 * epel: ftp.riken.jp
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
cloudera-cdh3                                                                                                                                                                                                                75/75
======================================================================================================= N/S Matched: hadoop =======================================================================================================
hadoop-0.20.noarch : Hadoop is a software platform for processing vast amounts of data
hadoop-0.20-conf-pseudo.noarch : Hadoop installation in pseudo-distributed mode
hadoop-0.20-datanode.noarch : Hadoop Data Node
hadoop-0.20-debuginfo.i386 : Debug information for package hadoop-0.20
hadoop-0.20-debuginfo.x86_64 : Debug information for package hadoop-0.20
hadoop-0.20-doc.noarch : Hadoop Documentation
hadoop-0.20-jobtracker.noarch : Hadoop Job Tracker
hadoop-0.20-libhdfs.i386 : Hadoop Filesystem Library
hadoop-0.20-libhdfs.x86_64 : Hadoop Filesystem Library
hadoop-0.20-namenode.noarch : The Hadoop namenode manages the block locations of HDFS files
hadoop-0.20-native.i386 : Native libraries for Hadoop Compression
hadoop-0.20-native.x86_64 : Native libraries for Hadoop Compression
hadoop-0.20-pipes.i386 : Hadoop Pipes Library
hadoop-0.20-pipes.x86_64 : Hadoop Pipes Library
hadoop-0.20-sbin.i386 : Binaries for secured Hadoop clusters
hadoop-0.20-sbin.x86_64 : Binaries for secured Hadoop clusters
hadoop-0.20-secondarynamenode.noarch : Hadoop Secondary namenode
hadoop-0.20-source.noarch : Source code for Hadoop
hadoop-0.20-tasktracker.noarch : Hadoop Task Tracker
hadoop-hbase.noarch : HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns --
                    : atop clusters of commodity hardware.
hadoop-hbase-master.noarch : The Hadoop HBase master Server.
hadoop-hbase-regionserver.noarch : The Hadoop HBase RegionServer server.
hadoop-hbase-thrift.noarch : The Hadoop HBase Thrift Interface
hadoop-hive.noarch : Hive is a data warehouse infrastructure built on top of Hadoop
hadoop-zookeeper-server.noarch : The Hadoop Zookeeper server
flume.noarch : Flume is a reliable, scalable, and manageable distributed log collection application for collecting data such as logs and delivering it to data stores such as Hadoop's HDFS.
flume-ng.noarch : Flume is a reliable, scalable, and manageable distributed log collection application for collecting data such as logs and delivering it to data stores such as Hadoop's HDFS.
hadoop-0.20-fuse.i386 : Mountable HDFS
hadoop-0.20-fuse.x86_64 : Mountable HDFS
hadoop-hbase-doc.noarch : Hbase Documentation
hadoop-hbase-rest.noarch : The Apache HBase REST gateway
hadoop-hive-hbase.noarch : Provides integration between Apache HBase and Apache Hive
hadoop-hive-metastore.noarch : Shared metadata repository for Hive.
hadoop-hive-server.noarch : Provides a Hive Thrift service.
hadoop-pig.noarch : Pig is a platform for analyzing large data sets
hadoop-zookeeper.noarch : A high-performance coordination service for distributed applications.
hue-common.i386 : A browser-based desktop interface for Hadoop
hue-common.x86_64 : A browser-based desktop interface for Hadoop
hue-filebrowser.noarch : A UI for the Hadoop Distributed File System (HDFS)
hue-jobbrowser.noarch : A UI for viewing Hadoop map-reduce jobs
hue-jobsub.noarch : A UI for designing and submitting map-reduce jobs to Hadoop
hue-plugins.noarch : Hadoop plugins for Hue
hue-shell.i386 : A shell for console based Hadoop applications
hue-shell.x86_64 : A shell for console based Hadoop applications
oozie.noarch : Oozie is a system that runs workflows of Hadoop jobs.
sqoop.noarch : Sqoop allows easy imports and exports of data sets between databases and the Hadoop Distributed File System (HDFS).
 
  Name and summary matches only, use "search all" for everything.

② Install Hadoop

$ su
$ yum install hadoop-0.20 -y
Setting up Install Process
Loading mirror speeds from cached hostfile
(snip)
 
Complete!

③ Add the single-node (pseudo-distributed) configuration

 

$ yum install hadoop-0.20-conf-pseudo -y
Loaded plugins: fastestmirror
(snip)
 
Complete!

④ Install Hive, Pig, and HBase

$ yum install hadoop-hive -y
Loaded plugins: fastestmirror
Setting up Install Process
(snip)
 
Complete!

$ yum install hadoop-pig -y
Loaded plugins: fastestmirror
Setting up Install Process
(snip)
 
Complete!
 
$ yum install hadoop-hbase -y
Loaded plugins: fastestmirror
Setting up Install Process
(snip)
 
Complete!
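
Each of these ships a CLI entry point, so a quick sanity check is to poke at each shell (HBase's status commands need its services running first):

$ hive -e 'show tables;'   # run a one-off Hive query
$ pig -x local             # open Pig's grunt shell in local mode
$ hbase shell              # open the HBase shell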

⑤ Start Hadoop

$ /etc/init.d/hadoop-0.20-datanode start
Starting Hadoop datanode daemon (hadoop-datanode): starting datanode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-localhost.localdomain.out
datanode (pid  6243) is running...                         [   OK  ]
 
$ /etc/init.d/hadoop-0.20-namenode start
Starting Hadoop namenode daemon (hadoop-namenode): starting namenode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-localhost.localdomain.out
[root@localhost vagrant]#                                  [   OK  ]
 
$ /etc/init.d/hadoop-0.20-tasktracker start
Starting Hadoop tasktracker daemon (hadoop-tasktracker): starting tasktracker, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-tasktracker-localhost.localdomain.out
[root@localhost vagrant]#                                  [   OK  ]
 
$ /etc/init.d/hadoop-0.20-jobtracker start
Starting Hadoop jobtracker daemon (hadoop-jobtracker): starting jobtracker, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-jobtracker-localhost.localdomain.out
[root@localhost vagrant]#                                  [   OK  ]
 
$ /etc/init.d/hadoop-0.20-secondarynamenode start # not needed at first
Starting Hadoop secondarynamenode daemon (hadoop-secondarynamenode): starting secondarynamenode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
[root@localhost vagrant]#                                  [   OK  ]
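
With everything up, the daemons should be visible to jps (run via the full JDK path, since root's PATH may not include it), and HDFS should answer filesystem commands. A small smoke test; the examples jar path is an assumption and may differ by CDH3 release:

$ sudo /usr/java/default/bin/jps   # should list NameNode, DataNode, JobTracker, TaskTracker
$ hadoop fs -ls /                  # HDFS responds to filesystem commands
$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar pi 2 100   # sample MapReduce job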

That's a lot installed in one go; now I'd like to make some time to play with it all and build something...

Reference sites

qiita.com
www.task-notes.com
qiita.com
qiita.com
yut.hatenablog.com