Tagged: CentOS7

  • Wang 22:34 on 2019-05-10
    Tags: CentOS7

    Kubernetes node in “NotReady” status 

    Recently I found some k8s nodes had become “NotReady”. I checked disk and memory, and they both seemed fine.

    [xxx@xxx-xxx ~]# kubectl describe node xxx-xxx
    ...
    ...
    Conditions:
      Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                    Message
      ----             ------    -----------------                 ------------------                ------                    -------
      ...
      PIDPressure      False     Fri, 10 May 2019 09:24:43 +0900   Fri, 10 May 2018 00:10:12 +0900   KubeletHasSufficientPID   kubelet has sufficient PID available
      ...
    

    Then I restarted kubelet on the server and checked the logs, where I found:

    [xxx@xxx-xxx ~]# systemctl status kubelet
    ● kubelet.service - Kubernetes Kubelet Server
       Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    ...
    May 10 12:30:30 xxx-xxx kubelet[16776]: F0322 12:30:30.810434   16776 server.go:233] failed to run Kubelet: Running with swap on is not supported, plea...
    ...
    

    So I checked the server’s status and turned off swap, then restarted kubelet, and the nodes became Ready again.

    [xxx@xxx-xxx ~]# swapoff -a
    [xxx@xxx-xxx ~]# systemctl restart kubelet
    
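
    Note that swapoff -a only disables swap until the next reboot. To keep kubelet happy permanently, the swap entry in /etc/fstab also needs to be removed or commented out, roughly like this:

    # comment out any swap line so swap stays off after reboot
    sed -i '/ swap / s/^/#/' /etc/fstab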

     
  • Wang 23:42 on 2018-05-11
    Tags: CentOS7

    Website down 

    Today I suddenly received an alert email saying my blog site was down… 😂😂😂

    So I logged in to the server and checked the containers’ status; everything looked fine:

    [root@blog xiaowang]# docker stack ps blog
    ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
    qwsjjol3jk2f        blog_mysql.1        mysql:5.7           blog                Running             Running 15 days ago                       
    n9gbil4zcavy        blog_nginx.1        nginx:1.13.8        blog                Running             Running 15 days ago                       
    hg778gcc35vz        blog_wordpress.1    wordpress:4.9.1     blog                Running             Running 15 days ago
    

    When I checked the ports, everything also looked fine:

    [root@blog xiaowang]# netstat -tuapn | egrep '80|443'
    tcp6       4      0 :::80                   :::*                    LISTEN      12146/dockerd       
    tcp6       2      0 :::443                  :::*                    LISTEN      12146/dockerd       
    tcp6      74      0 ::1:80                  ::1:47352               CLOSE_WAIT  -                   
    tcp6       3      0 ::1:80                  ::1:47348               CLOSE_WAIT  -                   
    tcp6      74      0 ::1:80                  ::1:47402               CLOSE_WAIT  -                   
    tcp6      78      0 ::1:443                 ::1:56994               CLOSE_WAIT  -                   
    tcp6      78      0 ::1:443                 ::1:56944               CLOSE_WAIT  -                   
    tcp6      74      0 ::1:80                  ::1:47350               CLOSE_WAIT  -
    

    But when I executed “curl http://localhost”, the request just hung, so I guessed something was wrong with the local network.

    After some checking, I enabled IP forwarding with sysctl, and I could finally access the ports; then I appended the setting to /etc/sysctl.conf to make it permanent:
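
    sysctl -w net.ipv4.ip_forward=1
    echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
    sysctl -p    # optional: reload /etc/sysctl.conf to verify the persisted setting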

    I’m using Google Cloud, so I guess they may have reset the network at some point; since I hadn’t made the setting permanent before, it was lost.

     
  • Wang 21:43 on 2018-03-02
    Tags: CentOS7

    [GCP] Install bigdata cluster 

    I applied for the Google Cloud trial, which gives $300 of credit, so I initialized 4 servers for testing.

    Servers:

    Host                             OS        Memory  CPU                  Disk  Region
    master.c.ambari-195807.internal  CentOS 7  13 GB   Intel Ivy Bridge: 2  200G  asia-east1-a
    slave1.c.ambari-195807.internal  CentOS 7  13 GB   Intel Ivy Bridge: 2  200G  asia-east1-a
    slave2.c.ambari-195807.internal  CentOS 7  13 GB   Intel Ivy Bridge: 2  200G  asia-east1-a
    slave3.c.ambari-195807.internal  CentOS 7  13 GB   Intel Ivy Bridge: 2  200G  asia-east1-a

    1.prepare

    1.1.configure ssh keys so the master can log in to each slave without a password (see the sketch after this list)

    1.2.install jdk1.8 on each server: download it and set JAVA_HOME in the profile

    1.3.configure hostnames in /etc/hosts on each server
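
    A minimal sketch of step 1.1, assuming the same user exists on every node (hostnames come from the table above):

    # on the master, as the user that will launch hadoop
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    for host in slave1 slave2 slave3; do
        ssh-copy-id ${host}.c.ambari-195807.internal
    done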


    2.install hadoop

    2.1.download hadoop 2.8.3

    wget http://ftp.jaist.ac.jp/pub/apache/hadoop/common/hadoop-2.8.3/hadoop-2.8.3.tar.gz
    tar -vzxf hadoop-2.8.3.tar.gz && cd hadoop-2.8.3
    

    2.2.configure core-site.xml

    <property>
        <name>fs.default.name</name>
        <value>hdfs://master.c.ambari-195807.internal:9000</value> 
    </property>
    <property>
        <name>hadoop.tmp.dir</name>  
        <value>/data/hadoop/hdfs/tmp</value>
    </property>
    <property>
        <name>hadoop.http.filter.initializers</name>
        <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
    </property>
    

    2.3.configure hdfs-site.xml

    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    

    2.4.configure mapred-site.xml

    <property>  
        <name>mapred.job.tracker</name>  
        <value>master.c.ambari-195807.internal:49001</value>  
    </property>
    <property>
        <name>mapreduce.framework.name</name>  
        <value>yarn</value>  
    </property>
    <property>
        <name>mapred.local.dir</name>  
        <value>/data/hadoop/mapred</value>  
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <!-- heap must fit inside mapreduce.map.memory.mb (4096) above -->
        <value>-Xmx3276m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <!-- heap must fit inside mapreduce.reduce.memory.mb (4096) above -->
        <value>-Xmx3276m</value>
    </property>
    

    2.5.configure yarn-site.xml

    <property>  
        <name>yarn.resourcemanager.hostname</name>  
        <value>master.c.ambari-195807.internal</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.address</name>  
        <value>${yarn.resourcemanager.hostname}:8032</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.scheduler.address</name>  
        <value>${yarn.resourcemanager.hostname}:8030</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.address</name>  
        <value>${yarn.resourcemanager.hostname}:8088</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.https.address</name>  
        <value>${yarn.resourcemanager.hostname}:8090</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.resource-tracker.address</name>  
        <value>${yarn.resourcemanager.hostname}:8031</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.admin.address</name>  
        <value>${yarn.resourcemanager.hostname}:8033</value>  
    </property>  
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.timeline-service.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.generic-application-history.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.hostname</name>
        <value>master.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
        <value>true</value>
    </property>
    

    2.6.set slaves (the slaves file lives under etc/hadoop)

    echo slave1.c.ambari-195807.internal >>etc/hadoop/slaves
    echo slave2.c.ambari-195807.internal >>etc/hadoop/slaves
    echo slave3.c.ambari-195807.internal >>etc/hadoop/slaves
    

    2.7.copy hadoop from master to each slave

    scp -r hadoop-2.8.3/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/
    

    2.8.configure hadoop env profile

    echo 'export HADOOP_HOME=/opt/apps/hadoop-2.8.3' >>~/.bashrc
    echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >>~/.bashrc
    echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$JAVA_HOME/bin' >>~/.bashrc
    

    2.9.start hdfs/yarn
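
    Before the very first start, the NameNode has to be formatted (only once; it wipes HDFS metadata):

    hdfs namenode -format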

    start-dfs.sh
    start-yarn.sh
    

    2.10.check

    hdfs, http://master.c.ambari-195807.internal:50070

    yarn, http://master.c.ambari-195807.internal:8088
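
    Besides the web UIs, the cluster can also be checked from the command line:

    hdfs dfsadmin -report   # should report 3 live datanodes
    yarn node -list         # should list 3 running nodemanagers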


    3.install hive

    3.1.download hive 2.3.2

    wget http://ftp.jaist.ac.jp/pub/apache/hive/hive-2.3.2/apache-hive-2.3.2-bin.tar.gz
    tar -zvxf apache-hive-2.3.2-bin.tar.gz && cd apache-hive-2.3.2-bin
    

    3.2.configure hive env profile

    echo 'export HIVE_HOME=/opt/apps/apache-hive-2.3.2-bin' >>~/.bashrc
    echo 'export PATH=$PATH:$HIVE_HOME/bin' >>~/.bashrc
    

    3.3.install mysql to store metadata

    rpm -ivh http://repo.mysql.com/mysql57-community-release-el7.rpm
    yum install -y mysql-community-server
    systemctl start mysqld
    mysql_password="pa12ss34wo!@d#"
    mysql_default_password=`grep 'temporary password' /var/log/mysqld.log | awk -F ': ' '{print $2}'`
    mysql -u root -p${mysql_default_password} -e "set global validate_password_policy=0; set global validate_password_length=4;" --connect-expired-password
    mysqladmin -u root -p${mysql_default_password} password ${mysql_password}
    mysql -u root -p${mysql_password} -e "create database hive default charset 'utf8'; flush privileges;"
    mysql -u root -p${mysql_password} -e "grant all privileges on hive.* to hive@'' identified by 'hive'; flush privileges;"
    

    3.4.download mysql driver

    wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -P $HIVE_HOME/lib
    

    3.5.configure hive-site.xml

    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <!-- mysql from step 3.3 runs on this same server -->
            <value>jdbc:mysql://localhost:3306/hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>hive</value>
        </property>
    </configuration>
    

    3.6.initialize hive meta tables

    schematool -dbType mysql -initSchema
    

    3.7.test hive
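
    For example, a quick smoke test (the table name is just an example):

    hive -e "create table smoke_test(id int); show tables; drop table smoke_test;"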


    4.install tez

    4.1.follow the “install tez on single server” instructions on each server


    5.install hbase

    5.1.download hbase 1.2.6

    wget http://ftp.jaist.ac.jp/pub/apache/hbase/1.2.6/hbase-1.2.6-bin.tar.gz
    tar -vzxf hbase-1.2.6-bin.tar.gz && cd hbase-1.2.6
    

    5.2.configure hbase-site.xml

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master.c.ambari-195807.internal:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>slave1.c.ambari-195807.internal,slave2.c.ambari-195807.internal,slave3.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>  
        <name>hbase.master.info.port</name>  
        <value>60010</value>  
    </property>
    

    5.3.configure regionservers (the file lives under conf/)

    echo slave1.c.ambari-195807.internal >>conf/regionservers
    echo slave2.c.ambari-195807.internal >>conf/regionservers
    echo slave3.c.ambari-195807.internal >>conf/regionservers
    

    5.4.copy hbase from master to each slave
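
    Same as step 2.7 for hadoop, for example:

    scp -r hbase-1.2.6/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/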

    5.5.configure hbase env profile

    echo 'export HBASE_HOME=/opt/apps/hbase-1.2.6' >>~/.bashrc 
    echo 'export PATH=$PATH:$HBASE_HOME/bin' >>~/.bashrc
    

    5.6.start hbase

    start-hbase.sh
    

    5.7.check, http://35.194.253.162:60010
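
    Or check from the hbase shell:

    echo "status" | hbase shell    # expect something like: 1 active master, 3 servers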


    All done!

     
  • Wang 22:13 on 2018-02-21
    Tags: CentOS7

    Manage BDP by ambari 

    It’s tedious and complicated to manage bigdata platforms; so much software needs to be installed and coordinated to make everything work well together, so I tried ambari to manage it.

    1.run centos7 container

    docker run -dit --name centos7 --privileged --publish 8080:8080 centos:7 /usr/sbin/init
    

    2.operate container

    2.1.enter container

    docker exec -it centos7 bash
    

    2.2.update yum and install tools

    yum update -y && yum install -y wget
    

    2.3.download the ambari repository

    wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.0.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
    

    2.4.install the ambari

    yum install -y ambari-server
    yum install -y ambari-agent
    

    2.5.install mysql as metastore: create a mysql repo under /etc/yum.repos.d

    cat << 'EOF' >/etc/yum.repos.d/mysql.5.7.repo
    [mysql57-community]
    name=MySQL 5.7 Community Server
    baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/7/$basearch/
    enabled=1
    gpgcheck=0
    EOF
    

    2.6.install mysql server

    yum install -y mysql-community-server
    

    2.7.start mysql

    systemctl start mysqld
    

    2.8.create mysql user && init database

    mysql_password=ambari
    mysql_default_password=`grep 'temporary password' /var/log/mysqld.log | awk -F ': ' '{print $2}'`
    mysql -u root -p${mysql_default_password} -e "set global validate_password_policy=0; set global validate_password_length=4;" --connect-expired-password
    mysqladmin -u root -p${mysql_default_password} password ${mysql_password}
    mysql -u root -p${mysql_password} -e "create database ambari default charset 'utf8'; flush privileges;"
    mysql -u root -p${mysql_password} -e "grant all privileges on ambari.* to ambari@'' identified by 'ambari'; flush privileges;"
    mysql -u root -p${mysql_password} -e "use ambari; source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;"
    

    2.9.download mysql driver

    driver_path=/usr/share/java
    mkdir -p ${driver_path}
    wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -O ${driver_path}/mysql-connector.jar
    

    2.10.setup ambari server; pay attention to the database configuration, you need to select mysql manually

    ambari-server setup
    

    2.11.modify ambari database configuration

    echo "server.jdbc.driver.path=${driver_path}/mysql-connector.jar" >> /etc/ambari-server/conf/ambari.properties
    

    2.12.start ambari

    ambari-server start
    ambari-agent start
    ambari-server setup --jdbc-db=mysql --jdbc-driver=${driver_path}/mysql-connector.jar
    

    3.login, default account: admin/admin
    http://localhost:8080


    P.S.

    The above steps configure a single server. If you want to build a cluster with several servers, you also need to configure ssh keys (please google the specific steps, it’s simple) and start ambari-agent on the slave servers.


    Below are screenshots of a mini cluster built from 4 servers:

     
  • Wang 22:18 on 2018-01-26
    Tags: CentOS7, Marathon, Mesos, Zookeeper

    Install Mesos/Marathon 

    I applied for GCE recently, so I installed Mesos/Marathon for testing.

    Compute Engine: n1-standard-1 (1 vCPU, 3.75 GB, Intel Ivy Bridge, asia-east1-a region)

    OS: CentOS 7

    10.140.0.1 master
    10.140.0.2 slave1
    10.140.0.3 slave2
    10.140.0.4 slave3
    

    Prepare

    1.install git

    sudo yum install -y tar wget git
    

    2.add the apache maven and WANdisco SVN repositories

    sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
    sudo yum install -y epel-release
    sudo bash -c 'cat > /etc/yum.repos.d/wandisco-svn.repo <<EOF
    [WANdiscoSVN]
    name=WANdisco SVN Repo 1.9
    enabled=1
    baseurl=http://opensource.wandisco.com/centos/7/svn-1.9/RPMS/$basearch/
    gpgcheck=1
    gpgkey=http://opensource.wandisco.com/RPM-GPG-KEY-WANdisco
    EOF'
    

    3.install tools

    sudo yum update systemd
    sudo yum groupinstall -y "Development Tools"
    sudo yum install -y apache-maven python-devel python-six python-virtualenv java-1.8.0-openjdk-devel zlib-devel libcurl-devel openssl-devel cyrus-sasl-devel cyrus-sasl-md5 apr-devel subversion-devel apr-util-devel
    

    Installation

    1.append hosts

    cat << EOF >>/etc/hosts
    10.140.0.1 master
    10.140.0.2 slave1
    10.140.0.3 slave2
    10.140.0.4 slave3
    EOF
    

    2.zookeeper

    2.1.install zookeeper on slave1/slave2/slave3
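
    A minimal sketch, assuming ZooKeeper 3.4.10 from the Apache archive (the exact version is my assumption):

    wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
    tar -vzxf zookeeper-3.4.10.tar.gz && cd zookeeper-3.4.10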

    2.2.modify conf/zoo.cfg on slave1/slave2/slave3

    cat << EOF > conf/zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=./data
    clientPort=2181
    maxClientCnxns=0
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    leaderServes=yes
    skipAcl=no
    server.1=slave1:2888:3888
    server.2=slave2:2889:3889
    server.3=slave3:2890:3890
    EOF
    

    2.3.create the data folder and write the server id to myid on slave1/slave2/slave3; the id equals the server’s sequence in zoo.cfg (1 for slave1, 2 for slave2, 3 for slave3)

    mkdir data && echo ${id} > data/myid
    

    2.4.start zookeeper on slave1/slave2/slave3, check zk’s status

    bin/zkServer.sh start
    bin/zkServer.sh status
    

    3.mesos

    3.1.install and import mesos repository on each server

    rpm -Uvh http://repos.mesosphere.io/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-mesosphere
    

    3.2.install mesos on each server

    yum install mesos -y
    

    3.3.modify mesos-master’s zk address on master/slave1

    echo "zk://slave1:2181,slave2:2181,slave3:2181/mesos" >/etc/mesos/zk
    

    3.4.modify quorum of mesos-master on master/slave1

    echo 2 > /etc/mesos-master/quorum
    

    3.5.start master and enable auto start on master/slave1

    systemctl enable mesos-master.service
    systemctl start mesos-master.service
    

    3.6.start slave and enable auto start on slave1/slave2/slave3

    systemctl enable mesos-slave.service
    systemctl start mesos-slave.service
    

    4.marathon

    4.1.install marathon on master

    yum install marathon -y
    

    4.2.config master/zk address on master

    cat << EOF >>/etc/default/marathon
    MARATHON_MASTER="zk://slave1:2181,slave2:2181,slave3:2181/mesos"
    MARATHON_ZK="zk://slave1:2181,slave2:2181,slave3:2181/marathon"
    EOF
    

    4.3.start marathon and enable auto start on master

    systemctl enable marathon.service
    systemctl start marathon.service
    

    Test

    mesos: http://master:5050

    marathon: http://master:8080
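
    To verify Marathon end to end, you can post a trivial app through its REST API (the app definition here is just an example):

    cat << EOF > hello.json
    {
      "id": "hello",
      "cmd": "while true; do echo hello; sleep 5; done",
      "cpus": 0.1,
      "mem": 32,
      "instances": 1
    }
    EOF
    curl -X POST http://master:8080/v2/apps -H 'Content-Type: application/json' -d @hello.json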

     
  • Wang 23:27 on 2018-01-06
    Tags: CentOS7

    Build blog with Docker/WordPress with https 

    1.install docker

    1.1.add the docker yum repository

    sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
    [dockerrepo]
    name=Docker Repository
    baseurl=https://yum.dockerproject.org/repo/main/centos/7/
    enabled=1
    gpgcheck=1
    gpgkey=https://yum.dockerproject.org/gpg
    EOF
    

    1.2.install docker

    sudo yum update -y
    sudo yum install -y docker-engine
    sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    

    1.3.start docker

    sudo systemctl enable docker
    sudo systemctl start docker
    

    2.https/nginx configuration

    2.1.replace certificate

    replace domain.key/chained.pem with your own certificate; you can get a free certificate from Let’s Encrypt
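
    For example, with certbot (a sketch; the domain and file names are assumptions, and port 80 must be free while it runs):

    sudo certbot certonly --standalone -d wanghongmeng.com
    # certificates land under /etc/letsencrypt/live/wanghongmeng.com/
    # use fullchain.pem as chained.pem and privkey.pem as domain.key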

    2.2.nginx configuration

    replace wanghongmeng.com with your domain in nginx.conf

    3.initialize

    3.1.wordpress initialize

    open http://xxx.com and set up wordpress

    3.2.install https plugin

    install the Really Simple SSL plugin and configure it so the whole site is served over https

    3.3.test

    https://xxx.com

     
  • Wang 21:46 on 2018-01-06
    Tags: CentOS7

    Build blog with Docker/WordPress 

    1.install docker

    1.1.add the docker yum repository

    sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
    [dockerrepo]
    name=Docker Repository
    baseurl=https://yum.dockerproject.org/repo/main/centos/7/
    enabled=1
    gpgcheck=1
    gpgkey=https://yum.dockerproject.org/gpg
    EOF
    

    1.2.install docker

    sudo yum update -y
    sudo yum install -y docker-engine
    sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    

    1.3.start docker

    sudo systemctl enable docker
    sudo systemctl start docker
    

    2.start wordpress by docker-compose

    sudo docker-compose -f blog-compose.yml up -d
    
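
    For reference, a minimal sketch of what blog-compose.yml might contain, based on the docker run examples in the P.S. below (credentials come from there; port 80 is assumed to match the test URL):

    version: '3'
    services:
      mysql:
        image: mysql:5.7
        command: --character-set-server=utf8 --collation-server=utf8_general_ci
        volumes:
          - /var/lib/mysql:/var/lib/mysql
        environment:
          MYSQL_ROOT_PASSWORD: mysql
          MYSQL_DATABASE: blog
          MYSQL_USER: blog
          MYSQL_PASSWORD: blog
      wordpress:
        image: wordpress:4.9.1
        depends_on:
          - mysql
        ports:
          - "80:80"
        environment:
          WORDPRESS_DB_HOST: mysql
          WORDPRESS_DB_USER: blog
          WORDPRESS_DB_PASSWORD: blog
          WORDPRESS_DB_NAME: blog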

    3.test wordpress

    http://localhost

    P.S. start container by docker instead of docker-compose

    docker run --name blog-mysql -v /var/lib/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysql -e MYSQL_DATABASE=blog -e MYSQL_USER=blog -e MYSQL_PASSWORD=blog -d mysql:5.7 --character-set-server=utf8 --collation-server=utf8_general_ci
    docker run --name blog-wordpress --link blog-mysql:mysql -e WORDPRESS_DB_USER=blog -e WORDPRESS_DB_PASSWORD=blog -e WORDPRESS_DB_NAME=blog -p 8080:80 -d wordpress:4.9.1
    
     