Tagged: google cloud

  • Wang 22:58 on 2021-07-04 Permalink | Reply
    Tags: google cloud

    MLOps from Google 

     
  • Wang 22:55 on 2020-02-16 Permalink | Reply
    Tags: google cloud

    Migrated blog from GCP → WordPress.com! 😀😀

     
  • Wang 23:22 on 2019-10-25 Permalink | Reply
    Tags: google cloud

    SpringOne Platform 2019 in Austin, https://springoneplatform.io/

     
  • Wang 23:42 on 2018-05-11 Permalink | Reply
    Tags: google cloud

    Website down 

    Today I suddenly received an alert email saying my blog site had gone down… 😂😂😂

    So I logged in to the server and checked the containers’ status; everything looked fine

    [root@blog xiaowang]# docker stack ps blog
    ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
    qwsjjol3jk2f        blog_mysql.1        mysql:5.7           blog                Running             Running 15 days ago                       
    n9gbil4zcavy        blog_nginx.1        nginx:1.13.8        blog                Running             Running 15 days ago                       
    hg778gcc35vz        blog_wordpress.1    wordpress:4.9.1     blog                Running             Running 15 days ago
    

    When I checked the listening ports, everything also looked fine

    [root@blog xiaowang]# netstat -tuapn | egrep '80|443'
    tcp6       4      0 :::80                   :::*                    LISTEN      12146/dockerd       
    tcp6       2      0 :::443                  :::*                    LISTEN      12146/dockerd       
    tcp6      74      0 ::1:80                  ::1:47352               CLOSE_WAIT  -                   
    tcp6       3      0 ::1:80                  ::1:47348               CLOSE_WAIT  -                   
    tcp6      74      0 ::1:80                  ::1:47402               CLOSE_WAIT  -                   
    tcp6      78      0 ::1:443                 ::1:56994               CLOSE_WAIT  -                   
    tcp6      78      0 ::1:443                 ::1:56944               CLOSE_WAIT  -                   
    tcp6      74      0 ::1:80                  ::1:47350               CLOSE_WAIT  -
    

    But when I executed “curl http://localhost”, the request was blocked, so I guessed something was wrong with the local network.

    After some investigation I executed “sysctl -w net.ipv4.ip_forward=1” to enable IP forwarding, and I could finally reach the ports. I then executed “echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf” to make the setting permanent.
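
    For reference, here is the whole fix in one place (the check and reload steps are my additions):

    sysctl net.ipv4.ip_forward                          # check the current value
    sysctl -w net.ipv4.ip_forward=1                     # enable IP forwarding at runtime
    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf    # persist it across reboots
    sysctl -p                                           # reload and verify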

    I’m using Google Cloud, so my guess is that they reset the network at some point, and I had never made this setting permanent before.

     
  • Wang 21:43 on 2018-03-02 Permalink | Reply
    Tags: google cloud

    [GCP] Install bigdata cluster 

    I signed up for a Google Cloud trial, which gives $300 in credit, so I initialized 4 servers for testing.

    Servers:

    Host                             OS        Memory   CPU                   Disk   Region
    master.c.ambari-195807.internal  CentOS 7  13 GB    Intel Ivy Bridge: 2   200G   asia-east1-a
    slave1.c.ambari-195807.internal  CentOS 7  13 GB    Intel Ivy Bridge: 2   200G   asia-east1-a
    slave2.c.ambari-195807.internal  CentOS 7  13 GB    Intel Ivy Bridge: 2   200G   asia-east1-a
    slave3.c.ambari-195807.internal  CentOS 7  13 GB    Intel Ivy Bridge: 2   200G   asia-east1-a

    1.prepare

    1.1.configure SSH keys so the master can log in to each slave without a password (see the sketch after this list)

    1.2.install JDK 1.8 on each server: download it and set JAVA_HOME in the profile

    1.3.configure the hostnames in /etc/hosts on each server

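    A minimal sketch of these prepare steps, run on the master (the JDK path is an assumption; the gizmo user is the one used in step 2.7):

    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                  # 1.1 generate a key pair once
    for host in slave1 slave2 slave3; do
        ssh-copy-id gizmo@${host}.c.ambari-195807.internal    # copy the public key to each slave
    done
    echo 'export JAVA_HOME=/opt/apps/jdk1.8.0_162' >>~/.bashrc   # 1.2 the exact path is an assumption
    # step 2.8 below adds $JAVA_HOME/bin to PATH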

    2.install hadoop

    2.1.download hadoop 2.8.3

    wget http://ftp.jaist.ac.jp/pub/apache/hadoop/common/hadoop-2.8.3/hadoop-2.8.3.tar.gz
    tar -vzxf hadoop-2.8.3.tar.gz && cd hadoop-2.8.3
    

    2.2.configure core-site.xml

    <property>
        <name>fs.default.name</name>
        <value>hdfs://master.c.ambari-195807.internal:9000</value> 
    </property>
    <property>
        <name>hadoop.tmp.dir</name>  
        <value>/data/hadoop/hdfs/tmp</value>
    </property>
    <property>
        <name>hadoop.http.filter.initializers</name>
        <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
    </property>
    

    2.3.configure hdfs-site.xml

    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    

    2.4.configure mapred-site.xml

    <property>  
        <name>mapred.job.tracker</name>  
        <value>master.c.ambari-195807.internal:49001</value>  
    </property>
    <property>
        <name>mapreduce.framework.name</name>  
        <value>yarn</value>  
    </property>
    <property>
        <name>mapred.local.dir</name>  
        <value>/data/hadoop/mapred</value>  
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <!-- heap must fit inside the 4096 MB container configured above -->
        <value>-Xmx3276m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx3276m</value>
    </property>
    

    2.5.configure yarn-site.xml

    <property>  
        <name>yarn.resourcemanager.hostname</name>  
        <value>master.c.ambari-195807.internal</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.address</name>  
        <value>${yarn.resourcemanager.hostname}:8032</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.scheduler.address</name>  
        <value>${yarn.resourcemanager.hostname}:8030</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.address</name>  
        <value>${yarn.resourcemanager.hostname}:8088</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.https.address</name>  
        <value>${yarn.resourcemanager.hostname}:8090</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.resource-tracker.address</name>  
        <value>${yarn.resourcemanager.hostname}:8031</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.admin.address</name>  
        <value>${yarn.resourcemanager.hostname}:8033</value>  
    </property>  
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.timeline-service.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.generic-application-history.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.hostname</name>
        <value>master.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
        <value>true</value>
    </property>
    <!-- moved from mapred-site.xml: the scheduler and nodemanager settings are read from yarn-site.xml -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    

    2.6.set slaves

    # run from the hadoop-2.8.3 root; the slaves file lives in etc/hadoop/
    echo slave1.c.ambari-195807.internal >>etc/hadoop/slaves
    echo slave2.c.ambari-195807.internal >>etc/hadoop/slaves
    echo slave3.c.ambari-195807.internal >>etc/hadoop/slaves
    

    2.7.copy hadoop from master to each slave

    scp -r hadoop-2.8.3/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/
    

    2.8.configure hadoop env profile

    echo 'export HADOOP_HOME=/opt/apps/hadoop-2.8.3' >>~/.bashrc
    echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >>~/.bashrc
    echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$JAVA_HOME/bin' >>~/.bashrc
    

    2.9.start hdfs/yarn

    hdfs namenode -format   # format the namenode once before the first start
    start-dfs.sh
    start-yarn.sh
    

    2.10.check

    hdfs, http://master.c.ambari-195807.internal:50070

    yarn, http://master.c.ambari-195807.internal:8088
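
    As a quick smoke test (the path and file name are my own example), confirm HDFS accepts writes:

    hdfs dfs -mkdir -p /tmp/smoke
    echo hello | hdfs dfs -put - /tmp/smoke/hello.txt
    hdfs dfs -cat /tmp/smoke/hello.txt    # should print "hello"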


    3.install hive

    3.1.download hive 2.3.2

    wget http://ftp.jaist.ac.jp/pub/apache/hive/hive-2.3.2/apache-hive-2.3.2-bin.tar.gz
    tar -zvxf apache-hive-2.3.2-bin.tar.gz && cd apache-hive-2.3.2-bin
    

    3.2.configure hive env profile

    echo 'export HIVE_HOME=/opt/apps/apache-hive-2.3.2-bin' >>~/.bashrc
    echo 'export PATH=$PATH:$HIVE_HOME/bin' >>~/.bashrc
    

    3.3.install mysql to store metadata

    rpm -ivh http://repo.mysql.com/mysql57-community-release-el7.rpm
    yum install -y mysql-server
    systemctl start mysqld
    mysql_password="pa12ss34wo!@d#"
    mysql_default_password=`grep 'temporary password' /var/log/mysqld.log | awk -F ': ' '{print $2}'`
    mysql -u root -p${mysql_default_password} -e "set global validate_password_policy=0; set global validate_password_length=4;" --connect-expired-password
    mysqladmin -u root -p${mysql_default_password} password ${mysql_password}
    mysql -u root -p${mysql_password} -e "create database hive default charset 'utf8'; flush privileges;"
    mysql -u root -p${mysql_password} -e "grant all privileges on hive.* to 'hive'@'localhost' identified by 'hive'; flush privileges;"
    

    3.4.download mysql driver

    wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -P $HIVE_HOME/lib
    

    3.5.configure hive-site.xml

    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <!-- the value was missing; this assumes the local MySQL set up in step 3.3 -->
            <value>jdbc:mysql://localhost:3306/hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>hive</value>
        </property>
    </configuration>
    

    3.6.initialize hive meta tables

    schematool -dbType mysql -initSchema
    

    3.7.test hive
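
    There is no transcript here, but something like this works as a sanity check (the table name is my own example):

    hive -e "create table smoke_test (id int); show tables; drop table smoke_test;"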


    4.install tez

    4.1.follow the instructions in “install tez on single server” on each server


    5.install hbase

    5.1.download hbase 1.2.6

    wget http://ftp.jaist.ac.jp/pub/apache/hbase/1.2.6/hbase-1.2.6-bin.tar.gz
    tar -vzxf hbase-1.2.6-bin.tar.gz && cd hbase-1.2.6
    

    5.2.configure hbase-site.xml

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master.c.ambari-195807.internal:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>slave1.c.ambari-195807.internal,slave2.c.ambari-195807.internal,slave3.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>  
        <name>hbase.master.info.port</name>  
        <value>60010</value>  
    </property>
    

    5.3.configure regionservers

    # run from the hbase-1.2.6 root; the regionservers file lives in conf/
    echo slave1.c.ambari-195807.internal >>conf/regionservers
    echo slave2.c.ambari-195807.internal >>conf/regionservers
    echo slave3.c.ambari-195807.internal >>conf/regionservers
    

    5.4.copy hbase from master to each slave
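
    Mirroring step 2.7, something like this should do:

    scp -r hbase-1.2.6/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/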

    5.5.configure hbase env profile

    echo 'export HBASE_HOME=/opt/apps/hbase-1.2.6' >>~/.bashrc 
    echo 'export PATH=$PATH:$HBASE_HOME/bin' >>~/.bashrc
    

    5.6.start hbase

    start-hbase.sh
    

    5.7.check, http://35.194.253.162:60010
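
    A quick smoke test from the HBase shell (the table name is my own example):

    echo "create 'smoke', 'cf'; list; disable 'smoke'; drop 'smoke'" | hbase shell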


    All done!

     
  • Wang 22:18 on 2018-01-26 Permalink | Reply
    Tags: google cloud, marathon, Mesos, Zookeeper

    Install Mesos/Marathon 

    I signed up for GCE recently, so I installed Mesos/Marathon to try it out.

    Compute Engine: n1-standard-1 (1 vCPU, 3.75 GB, Intel Ivy Bridge, asia-east1-a region)

    OS: CentOS 7

    10.140.0.1 master
    10.140.0.2 slave1
    10.140.0.3 slave2
    10.140.0.4 slave3
    

    Prepare

    1.install basic tools (tar, wget, git)

    sudo yum install -y tar wget git
    

    2.add the Apache Maven and WANdisco SVN repositories

    sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
    sudo yum install -y epel-release
    sudo bash -c 'cat > /etc/yum.repos.d/wandisco-svn.repo <<EOF
    [WANdiscoSVN]
    name=WANdisco SVN Repo 1.9
    enabled=1
    baseurl=http://opensource.wandisco.com/centos/7/svn-1.9/RPMS/$basearch/
    gpgcheck=1
    gpgkey=http://opensource.wandisco.com/RPM-GPG-KEY-WANdisco
    EOF'
    

    3.install tools

    sudo yum update systemd
    sudo yum groupinstall -y "Development Tools"
    sudo yum install -y apache-maven python-devel python-six python-virtualenv java-1.8.0-openjdk-devel zlib-devel libcurl-devel openssl-devel cyrus-sasl-devel cyrus-sasl-md5 apr-devel subversion-devel apr-util-devel
    

    Installation

    1.append hosts

    cat << EOF >>/etc/hosts
    10.140.0.1 master
    10.140.0.2 slave1
    10.140.0.3 slave2
    10.140.0.4 slave3
    EOF
    

    2.zookeeper

    2.1.install zookeeper on slave1/slave2/slave3
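
    Something like the following should work (the mirror matches the other downloads in these posts; the ZooKeeper version is my assumption):

    wget http://ftp.jaist.ac.jp/pub/apache/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
    tar -vzxf zookeeper-3.4.11.tar.gz && cd zookeeper-3.4.11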

    2.2.modify conf/zoo.cfg on slave1/slave2/slave3

    cat << EOF > conf/zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=./data
    clientPort=2181
    maxClientCnxns=0
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    leaderServes=yes
    skipAcl=no
    server.1=slave1:2888:3888
    server.2=slave2:2889:3889
    server.3=slave3:2890:3890
    EOF
    

    2.3.create the data folder and write the server id to myid on slave1/slave2/slave3; the id equals the server’s sequence number in zoo.cfg

    mkdir data && echo ${id} > data/myid
    
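    For example, on slave2 you would run “mkdir data && echo 2 > data/myid”.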

    2.4.start zookeeper on slave1/slave2/slave3, check zk’s status

    bin/zkServer.sh start
    bin/zkServer.sh status
    

    3.mesos

    3.1.install and import mesos repository on each server

    rpm -Uvh http://repos.mesosphere.io/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-mesosphere
    

    3.2.install mesos on each server

    yum install mesos -y
    

    3.3.modify mesos-master’s zk address on master/slave1

    echo "zk://slave1:2181,slave2:2181,slave3:2181/mesos" >/etc/mesos/zk
    

    3.4.modify quorum of mesos-master on master/slave1

    echo 2 > /etc/mesos-master/quorum
    

    3.5.start master and enable auto start on master/slave1

    systemctl enable mesos-master.service
    systemctl start mesos-master.service
    

    3.6.start slave and enable auto start on slave1/slave2/slave3

    systemctl enable mesos-slave.service
    systemctl start mesos-slave.service
    

    4.marathon

    4.1.install marathon on master

    yum install marathon -y
    

    4.2.configure the master/zk addresses on master

    cat << EOF >>/etc/default/marathon
    MARATHON_MASTER="zk://slave1:2181,slave2:2181,slave3:2181/mesos"
    MARATHON_ZK="zk://slave1:2181,slave2:2181,slave3:2181/marathon"
    EOF
    

    4.3.start marathon and enable auto start on master

    systemctl enable marathon.service
    systemctl start marathon.service
    

    Test

    mesos: http://master:5050

    marathon: http://master:8080
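
    To go beyond the UI, you can deploy a trivial app through Marathon’s REST API (the app definition is my own example):

    curl -X POST http://master:8080/v2/apps \
      -H 'Content-Type: application/json' \
      -d '{"id":"/hello","cmd":"while true; do echo hello; sleep 10; done","cpus":0.1,"mem":32,"instances":1}'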

     