Tagged: Mysql

  • Wang 22:27 on 2022-10-25 Permalink | Reply
    Tags: Mysql

    Install MySQL Operator in K8S 

    #install kind and create local K8S cluster
    #kind create cluster --name local
    helm repo add mysql-operator https://mysql.github.io/mysql-operator/
    helm repo update
    MYSQL_OPERATOR_NS=mysql-operator
    helm install mysql-operator mysql-operator/mysql-operator --namespace ${MYSQL_OPERATOR_NS} --create-namespace
    MYSQL_NAME=mysql
    MYSQL_NS=mysql
    MYSQL_USER=root
    MYSQL_PWD=root
    MYSQL_SERVER_INSTANCE=1
    MYSQL_ROUTER_INSTANCE=1
    helm install ${MYSQL_NAME} mysql-operator/mysql-innodbcluster \
    --namespace ${MYSQL_NS} \
    --create-namespace \
    --version 2.0.5 \
    --set credentials.root.user=${MYSQL_USER} \
    --set credentials.root.password=${MYSQL_PWD} \
    --set credentials.root.host='%' \
    --set serverInstances=${MYSQL_SERVER_INSTANCE} \
    --set routerInstances=${MYSQL_ROUTER_INSTANCE} \
    --set tls.useSelfSigned=true
    kubectl get service -n ${MYSQL_NS}
    kubectl port-forward service/${MYSQL_NAME} 3306:3306 -n ${MYSQL_NS}
    #run mysql client
    #kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never \
    #-- mysql -h${MYSQL_NAME}.${MYSQL_NS}.svc.cluster.local -P3306 -u${MYSQL_USER} -p${MYSQL_PWD}
    kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- /bin/bash
    mysql -hmysql.mysql.svc.cluster.local -P3306 -uroot -proot
    #mysql -h127.0.0.1 -P3306 -uroot -p
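
    A quick check that the cluster comes up (a sketch; innodbcluster is the custom resource registered by the operator):

    kubectl get pods -n ${MYSQL_NS}
    kubectl get innodbcluster -n ${MYSQL_NS}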
     
  • Wang 19:03 on 2020-03-06 Permalink | Reply
    Tags: Mysql

    How to configure a database connection pool 

    Configuring a connection pool is something that developers often get wrong. There are several principles, some of them counter-intuitive, that need to be understood when configuring the pool; see the article below and the quick sizing sketch that follows.

    https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
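
    The article's rough sizing rule of thumb can be sketched in shell (the spindle count below is an assumption for a single-disk host, and nproc is the Linux way to read the core count):

    # connections = (core_count * 2) + effective_spindle_count
    core_count=$(nproc)
    effective_spindle_count=1
    pool_size=$(( core_count * 2 + effective_spindle_count ))
    echo "suggested maximumPoolSize: ${pool_size}"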

     
  • Wang 20:24 on 2018-03-16 Permalink | Reply
    Tags: Mysql

    [Sqoop2] Notebook 

    Recently I tested Sqoop2, which has many new features compared to Sqoop1. For the comparison, you could check here and Stack Overflow; below I will walk through the operation manual.

    1.install

    wget http://ftp.jaist.ac.jp/pub/apache/sqoop/1.99.7/sqoop-1.99.7-bin-hadoop200.tar.gz
    tar -vzxf sqoop-1.99.7-bin-hadoop200.tar.gz && cd sqoop-1.99.7-bin-hadoop200
    

    2.replace @LOGDIR@/@BASEDIR@ in sqoop.properties
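
    For example (a sketch; the target directories are assumptions, use paths that suit your host):

    mkdir -p /opt/sqoop2/logs
    sed -i.bak 's|@LOGDIR@|/opt/sqoop2/logs|g; s|@BASEDIR@|/opt/sqoop2|g' conf/sqoop.properties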

    3.download mysql driver into server/lib
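
    For example (assuming MySQL Connector/J 5.1.45 from Maven Central; any 5.1.x jar should work):

    wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -P server/lib/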

    4.configure proxy user in core-site.xml

    <property>
        <name>hadoop.proxyuser.sqoop2.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.sqoop2.groups</name>
        <value>*</value>
    </property>
    

    5.verify & start sqoop2 server

    bin/sqoop2-tool verify
    bin/sqoop2-server start
    

    6.start client & test

    bin/sqoop2-shell
    

    7.show the connectors

    sqoop:000> show connector
    +------------------------+---------+------------------------------------------------------------+----------------------+
    | Name                   | Version | Class                                                      | Supported Directions |
    +------------------------+---------+------------------------------------------------------------+----------------------+
    | generic-jdbc-connector | 1.99.7  | org.apache.sqoop.connector.jdbc.GenericJdbcConnector       | FROM/TO              |
    | kite-connector         | 1.99.7  | org.apache.sqoop.connector.kite.KiteConnector              | FROM/TO              |
    | oracle-jdbc-connector  | 1.99.7  | org.apache.sqoop.connector.jdbc.oracle.OracleJdbcConnector | FROM/TO              |
    | ftp-connector          | 1.99.7  | org.apache.sqoop.connector.ftp.FtpConnector                | TO                   |
    | hdfs-connector         | 1.99.7  | org.apache.sqoop.connector.hdfs.HdfsConnector              | FROM/TO              |
    | kafka-connector        | 1.99.7  | org.apache.sqoop.connector.kafka.KafkaConnector            | TO                   |
    | sftp-connector         | 1.99.7  | org.apache.sqoop.connector.sftp.SftpConnector              | TO                   |
    +------------------------+---------+------------------------------------------------------------+----------------------+
    

    8.create links & show links

    sqoop:000> create link -connector generic-jdbc-connector
    sqoop:000> create link -connector hdfs-connector
    
    sqoop:000> show link
    +-------------+------------------------+---------+
    | Name        | Connector Name         | Enabled |
    +-------------+------------------------+---------+
    | mysql-local | generic-jdbc-connector | true    |
    | hdfs-local  | hdfs-connector         | true    |
    +-------------+------------------------+---------+
    
    sqoop:000> show link --all
    2 link(s) to show:
    link with name mysql-local (Enabled: true, Created by hongmeng.wang at 3/1/18 10:56 AM, Updated by hongmeng.wang at 3/1/18 12:51 PM)
    Using Connector generic-jdbc-connector with name {1}
    Database connection
    Driver class: com.mysql.jdbc.Driver
    Connection String: jdbc:mysql://localhost:3306
    Username: root
    Password:
    Fetch Size: 100
    Connection Properties:
    protocol = tcp
    useUnicode = true
    characterEncoding = utf-8
    autoReconnect = true
    SQL Dialect
    Identifier enclose: (leave blank; using the default will cause an error)
    link with name hdfs-local (Enabled: true, Created by hongmeng.wang at 3/1/18 10:58 AM, Updated by hongmeng.wang at 3/1/18 12:54 PM)
    Using Connector hdfs-connector with name {1}
    HDFS cluster
    URI: hdfs://localhost:9000
    Conf directory: /usr/local/Cellar/hadoop/2.8.2/libexec/etc/hadoop
    Additional configs::
    

    9.create job & show job

    sqoop:000> create job -f "mysql-local" -t "hdfs-local"
    
    sqoop:000> show job
    +----+----------------------+--------------------------------------+-----------------------------+---------+
    | Id | Name                 | From Connector                       | To Connector                | Enabled |
    +----+----------------------+--------------------------------------+-----------------------------+---------+
    | 1  | mysql-2-hdfs-t1      | mysql-local (generic-jdbc-connector) | hdfs-local (hdfs-connector) | true    |
    +----+----------------------+--------------------------------------+-----------------------------+---------+
    
    sqoop:000> show job --all
    1 job(s) to show:
    Job with name mysql-2-hdfs-segment (Enabled: true, Created by hongmeng.wang at 3/1/18 11:06 AM, Updated by hongmeng.wang at 3/1/18 11:39 AM)
    Throttling resources
    Extractors:
    Loaders:
    Classpath configuration
    Extra mapper jars:
    From link: mysql-local
    Database source
    Schema name: test
    Table name: t1
    SQL statement:
    Column names:
    Partition column: id
    Partition column nullable:
    Boundary query:
    Incremental read
    Check column:
    Last value:
    To link: hdfs-local
    Target configuration
    Override null value: true
    Null value:
    File format: TEXT_FILE
    Compression codec: NONE
    Custom codec:
    Output directory: /sqoop/mysql/test
    Append mode:
    

    10.start job & check job’s status

    sqoop:000> start job -name mysql-2-hdfs-segment
    Submission details
    Job Name: mysql-2-hdfs-segment
    Server URL: http://localhost:12000/sqoop/
    Created by: sqoop2
    Creation date: 2018-03-01 13:53:37 JST
    Lastly updated by: sqoop2
    External ID: job_1519869491258_0001
    http://localhost:8088/proxy/application_1519869491258_0001/
    2018-03-01 13:53:37 JST: BOOTING - Progress is not available
    
    sqoop:000> status job -n mysql-2-hdfs-segment
    Submission details
    Job Name: mysql-2-hdfs-segment
    Server URL: http://localhost:12000/sqoop/
    Created by: sqoop2
    Creation date: 2018-03-01 14:01:54 JST
    Lastly updated by: sqoop2
    External ID: job_1519869491258_0002
    http://localhost:8088/proxy/application_1519869491258_0002/
    2018-03-01 14:03:31 JST: BOOTING - 0.00 %
    

    Issues

    1.set org.apache.sqoop.submission.engine.mapreduce.configuration.directory to your Hadoop configuration directory in conf/sqoop.properties if you get the error below when executing bin/sqoop2-tool verify (a sketch follows the stack trace)

    Exception in thread "main" java.lang.RuntimeException: Failure in server initialization
    at org.apache.sqoop.core.SqoopServer.initialize(SqoopServer.java:68)
    at org.apache.sqoop.server.SqoopJettyServer.<init>(SqoopJettyServer.java:67)
    at org.apache.sqoop.server.SqoopJettyServer.main(SqoopJettyServer.java:177)
    Caused by: org.apache.sqoop.common.SqoopException: MAPREDUCE_0002:Failure on submission engine initialization - Invalid Hadoop configuration directory (not a directory or permission issues): /etc/hadoop/conf/
    at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.initialize(MapreduceSubmissionEngine.java:97)
    at org.apache.sqoop.driver.JobManager.initialize(JobManager.java:257)
    at org.apache.sqoop.core.SqoopServer.initialize(SqoopServer.java:64)
    ... 2 more
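
    For example, pointing the submission engine at the Hadoop conf directory used elsewhere in this post (a sketch; adjust the path to your install):

    sed -i.bak 's|^org.apache.sqoop.submission.engine.mapreduce.configuration.directory=.*|org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/usr/local/Cellar/hadoop/2.8.2/libexec/etc/hadoop|' conf/sqoop.properties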
    

    2.check $CLASSPATH and $HADOOP_CLASSPATH; some jars may conflict if you get the error below:

    Caused by: java.lang.SecurityException: sealing violation: package org.apache.derby.impl.services.locks is sealed
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:399)
    at java.net.URLClassLoader.definePackageInternal(URLClassLoader.java:419)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.derby.impl.services.monitor.BaseMonitor.getImplementations(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.getDefaultImplementations(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.runWithState(Unknown Source)
    at org.apache.derby.impl.services.monitor.FileMonitor.<init>(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.startMonitor(Unknown Source)
    at org.apache.derby.iapi.jdbc.JDBCBoot.boot(Unknown Source)
    at org.apache.derby.jdbc.EmbeddedDriver.boot(Unknown Source)
    at org.apache.derby.jdbc.EmbeddedDriver.<clinit>(Unknown Source)
    ... 11 more
    
     
  • Wang 21:33 on 2018-03-11 Permalink | Reply
    Tags: Mysql

    [Sqoop1] Interact MySQL with HDFS/Hive/HBase 

    install sqoop1 on mac

    brew install sqoop
    

    #if you have set env profiles, uncomment the corresponding lines in conf/sqoop-env.sh

    1.MySQL -> HDFS

    1.1.import table

    sqoop import --connect jdbc:mysql://localhost/test --direct --username root -P --table t1 --warehouse-dir /mysql/test --fields-terminated-by ','
    

    1.2.import schema

    sqoop import-all-tables --connect jdbc:mysql://localhost/test --direct --username root -P --warehouse-dir /mysql/test --fields-terminated-by ','
    

    2.MySQL -> Hive

    2.1.import definition

    sqoop create-hive-table --connect jdbc:mysql://localhost/test --table t1 --username root -P --hive-database test
    

    2.2.import table

    sqoop import --connect jdbc:mysql://localhost/test --username root -P --table t1 --hive-import --hive-database test --hive-table t1 --fields-terminated-by ','
    

    2.3.import schema

    sqoop import-all-tables --connect jdbc:mysql://localhost/test --username root -P --hive-import --hive-database test --fields-terminated-by ','
    

    3.MySQL -> HBase

    3.1.definition

    sqoop import --connect jdbc:mysql://localhost/test --username root -P --table t1
    

    3.2.import table (you need to create the table in HBase first; see the sketch below)

    sqoop import --connect jdbc:mysql://localhost/test --username root -P --table t1 --hbase-bulkload --hbase-table test.t1 --column-family basic --fields-terminated-by ','
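
    A minimal sketch of pre-creating the target table and column family in the HBase shell (names taken from the import command above):

    echo "create 'test.t1', 'basic'" | hbase shell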
    

    3.3.import table without creating the table in HBase first, but pay attention to HBase/Sqoop version compatibility

    sqoop import --connect jdbc:mysql://localhost/test --username root -P --table t1 --hbase-bulkload --hbase-create-table --hbase-table test.t1 --column-family basic --fields-terminated-by ','
    

    4.HDFS/Hive/HBase -> MySQL

    sqoop export --connect jdbc:mysql://localhost/test --username root -P --table t1 --export-dir /user/hive/warehouse/test.db/t1 --fields-terminated-by ','
    
     
  • Wang 21:43 on 2018-03-02 Permalink | Reply
    Tags: Mysql

    [GCP] Install bigdata cluster 

    I applied for a Google Cloud trial, which gave me $300, so I initialized 4 servers for testing.

    Servers:

    +---------------------------------+----------+--------+---------------------+------+--------------+
    | Host                            | OS       | Memory | CPU                 | Disk | Region       |
    +---------------------------------+----------+--------+---------------------+------+--------------+
    | master.c.ambari-195807.internal | CentOS 7 | 13 GB  | Intel Ivy Bridge: 2 | 200G | asia-east1-a |
    | slave1.c.ambari-195807.internal | CentOS 7 | 13 GB  | Intel Ivy Bridge: 2 | 200G | asia-east1-a |
    | slave2.c.ambari-195807.internal | CentOS 7 | 13 GB  | Intel Ivy Bridge: 2 | 200G | asia-east1-a |
    | slave3.c.ambari-195807.internal | CentOS 7 | 13 GB  | Intel Ivy Bridge: 2 | 200G | asia-east1-a |
    +---------------------------------+----------+--------+---------------------+------+--------------+

    1.prepare

    1.1.configure ssh keys on each slave so the master can log in without a password

    1.2.install jdk1.8 on each server: download it and set JAVA_HOME in the profile

    1.3.configure hostnames in /etc/hosts on each server (a sketch of these three steps follows)
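
    A sketch covering the three preparation steps (the slave address, JDK install directory, and internal IP below are assumptions to adapt):

    # 1.1 generate a key on the master and copy it to each slave
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    ssh-copy-id gizmo@slave1.c.ambari-195807.internal

    # 1.2 after unpacking a JDK 1.8 tarball, export JAVA_HOME
    echo 'export JAVA_HOME=/opt/apps/jdk1.8.0_162' >>~/.bashrc
    echo 'export PATH=$PATH:$JAVA_HOME/bin' >>~/.bashrc

    # 1.3 map internal IPs to hostnames on every server
    echo '10.140.0.2 master.c.ambari-195807.internal' | sudo tee -a /etc/hosts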


    2.install hadoop

    2.1.download hadoop 2.8.3

    wget http://ftp.jaist.ac.jp/pub/apache/hadoop/common/hadoop-2.8.3/hadoop-2.8.3.tar.gz
    tar -vzxf hadoop-2.8.3.tar.gz && cd hadoop-2.8.3
    

    2.2.configure core-site.xml

    <property>
        <name>fs.default.name</name>
        <value>hdfs://master.c.ambari-195807.internal:9000</value> 
    </property>
    <property>
        <name>hadoop.tmp.dir</name>  
        <value>/data/hadoop/hdfs/tmp</value>
    </property>
    <property>
        <name>hadoop.http.filter.initializers</name>
        <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
    </property>
    

    2.3.configure hdfs-site.xml

    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    

    2.4.configure mapred-site.xml

    <property>  
        <name>mapred.job.tracker</name>  
        <value>master.c.ambari-195807.internal:49001</value>  
    </property>
    <property>
        <name>mapreduce.framework.name</name>  
        <value>yarn</value>  
    </property>
    <property>
        <name>mapred.local.dir</name>  
        <value>/data/hadoop/mapred</value>  
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx6144m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx6144m</value>
    </property>
    

    2.5.configure yarn-site.xml

    <property>  
        <name>yarn.resourcemanager.hostname</name>  
        <value>master.c.ambari-195807.internal</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.address</name>  
        <value>${yarn.resourcemanager.hostname}:8032</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.scheduler.address</name>  
        <value>${yarn.resourcemanager.hostname}:8030</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.address</name>  
        <value>${yarn.resourcemanager.hostname}:8088</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.https.address</name>  
        <value>${yarn.resourcemanager.hostname}:8090</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.resource-tracker.address</name>  
        <value>${yarn.resourcemanager.hostname}:8031</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.admin.address</name>  
        <value>${yarn.resourcemanager.hostname}:8033</value>  
    </property>  
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.timeline-service.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.generic-application-history.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.hostname</name>
        <value>master.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master.c.ambari-195807.internal:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master.c.ambari-195807.internal:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master.c.ambari-195807.internal:8031</value>
    </property>
    

    2.6.set slaves

    echo slave1.c.ambari-195807.internal >>slaves
    echo slave2.c.ambari-195807.internal >>slaves
    echo slave3.c.ambari-195807.internal >>slaves
    

    2.7.copy hadoop from master to each slave

    scp -r hadoop-2.8.3/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hadoop-2.8.3/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/
    

    2.8.configure hadoop env profile

    echo 'export HADOOP_HOME=/opt/apps/hadoop-2.8.3' >>~/.bashrc
    echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >>~/.bashrc
    echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$JAVA_HOME/bin' >>~/.bashrc
    

    2.9.start hdfs/yarn

    start-dfs.sh
    start-yarn.sh
    

    2.10.check

    hdfs, http://master.c.ambari-195807.internal:50070

    yarn, http://master.c.ambari-195807.internal:8088
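
    Beyond the web UIs, a quick command-line check (standard Hadoop CLI commands, run on the master):

    hdfs dfsadmin -report
    yarn node -list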


    3.install hive

    3.1.download hive 2.3.2

    wget http://ftp.jaist.ac.jp/pub/apache/hive/hive-2.3.2/apache-hive-2.3.2-bin.tar.gz
    tar -zvxf apache-hive-2.3.2-bin.tar.gz && cd apache-hive-2.3.2-bin
    

    3.2.configure hive env profile

    echo 'export HIVE_HOME=/opt/apps/apache-hive-2.3.2-bin' >>~/.bashrc
    echo 'export PATH=$PATH:$HIVE_HOME/bin' >>~/.bashrc
    

    3.3.install mysql to store metadata

    rpm -ivh http://repo.mysql.com/mysql57-community-release-el7.rpm
    yum install -y mysql-server
    systemctl start mysqld
    mysql_password="pa12ss34wo!@d#"
    mysql_default_password=`grep 'temporary password' /var/log/mysqld.log | awk -F ': ' '{print $2}'`
    mysql -u root -p${mysql_default_password} -e "set global validate_password_policy=0; set global validate_password_length=4;" --connect-expired-password
    mysqladmin -u root -p${mysql_default_password} password ${mysql_password}
    mysql -u root -p${mysql_password} -e "create database hive default charset 'utf8'; flush privileges;"
    mysql -u root -p${mysql_password} -e "grant all privileges on hive.* to hive@'' identified by 'hive'; flush privileges;"
    

    3.4.download mysql driver

    wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -P $HIVE_HOME/lib
    

    3.5.configure hive-site.xml

    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://localhost:3306/hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>hive</value>
        </property>
    </configuration>
    

    3.6.initialize hive meta tables

    schematool -dbType mysql -initSchema
    

    3.7.test hive
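
    A minimal smoke test (a sketch; the database name is an assumption):

    hive -e "create database if not exists test; show databases;"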


    4.install tez

    4.1.please follow the instruction “install tez on single server” on each server


    5.install hbase

    5.1.download hbase 1.2.6

    wget http://ftp.jaist.ac.jp/pub/apache/hbase/1.2.6/hbase-1.2.6-bin.tar.gz
    tar -vzxf hbase-1.2.6-bin.tar.gz && cd hbase-1.2.6
    

    5.2.configure hbase-site.xml

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master.c.ambari-195807.internal:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>slave1.c.ambari-195807.internal,slave2.c.ambari-195807.internal,slave3.c.ambari-195807.internal</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>  
        <name>hbase.master.info.port</name>  
        <value>60010</value>  
    </property>
    

    5.3.configure regionservers

    echo slave1.c.ambari-195807.internal >>regionservers
    echo slave2.c.ambari-195807.internal >>regionservers
    echo slave3.c.ambari-195807.internal >>regionservers
    

    5.4.copy hbase from master to each slave
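
    A sketch mirroring the scp commands used for Hadoop in step 2.7 (same user and target directory assumed):

    scp -r hbase-1.2.6/ gizmo@slave1.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave2.c.ambari-195807.internal:/opt/apps/
    scp -r hbase-1.2.6/ gizmo@slave3.c.ambari-195807.internal:/opt/apps/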

    5.5.configure hbase env profile

    echo 'export HBASE_HOME=/opt/apps/hbase-1.2.6' >>~/.bashrc 
    echo 'export PATH=$PATH:$HBASE_HOME/bin' >>~/.bashrc
    

    5.6.start hbase

    start-hbase.sh
    

    5.7.check, http://35.194.253.162:60010
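
    Besides the master web UI, a quick check from the HBase shell:

    echo "status" | hbase shell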


    Things done!

     
  • Wang 22:13 on 2018-02-21 Permalink | Reply
    Tags: Mysql

    Manage BDP by ambari 

    It’s boring and complicated to manage big data platforms; there are so many pieces of software that need to be installed and coordinated to work well together, so I tried Ambari to manage them.

    1.run centos7 container

    docker run -dit --name centos7 --privileged --publish 8080:8080 centos:7 /usr/sbin/init
    

    2.operate container

    2.1.enter container

    docker exec -it centos7 bash
    

    2.2.update yum and install tools

    yum update -y && yum install -y wget
    

    2.3.download the ambari repository

    wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.0.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
    

    2.4.install the ambari

    yum install -y ambari-server
    yum install -y ambari-agent
    

    2.5.install mysql as metastore, create mysql repo under /etc/yum.repos.d

    cat << 'EOF' >/etc/yum.repos.d/mysql.5.7.repo
    [mysql57-community]
    name=MySQL 5.7 Community Server
    baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/7/$basearch/
    enabled=1
    gpgcheck=0
    EOF
    

    2.6.install mysql server

    yum install -y mysql-community-server
    

    2.7.start mysql

    systemctl start mysqld
    

    2.8.create mysql user && init database

    mysql_password=ambari
    mysql_default_password=`grep 'temporary password' /var/log/mysqld.log | awk -F ': ' '{print $2}'`
    mysql -u root -p${mysql_default_password} -e "set global validate_password_policy=0; set global validate_password_length=4;" --connect-expired-password
    mysqladmin -u root -p${mysql_default_password} password ${mysql_password}
    mysql -u root -p${mysql_password} -e "create database ambari default charset 'utf8'; flush privileges;"
    mysql -u root -p${mysql_password} -e "grant all privileges on ambari.* to ambari@'' identified by 'ambari'; flush privileges;"
    mysql -u root -p${mysql_password} -e "use ambari; source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;"
    

    2.9.download mysql driver

    driver_path=/usr/share/java
    mkdir ${driver_path}
    wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.45/mysql-connector-java-5.1.45.jar -O ${driver_path}/mysql-connector.jar
    

    2.10.setup the ambari server; pay attention to the database configuration, you need to select mysql manually (a non-interactive sketch follows the command)

    ambari-server setup
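
    If you prefer a non-interactive run, ambari-server setup also accepts silent-mode options; the flags below are my assumption of the relevant ones, so verify them against ambari-server setup --help first:

    ambari-server setup -s --database=mysql --databasehost=localhost --databaseport=3306 \
      --databasename=ambari --databaseusername=ambari --databasepassword=ambari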
    

    2.11.modify ambari database configuration

    echo "server.jdbc.driver.path=${driver_path}/mysql-connector.jar" >> /etc/ambari-server/conf/ambari.properties
    

    2.12.start ambari

    ambari-server start
    ambari-agent start
    ambari-server setup --jdbc-db=mysql --jdbc-driver=${driver_path}/mysql-connector.jar
    

    3.login, default account: admin/admin
    http://localhost:8080


    P.S.

    The above steps configure a single server. If you want to build a cluster with several servers, you also need to configure ssh keys (please google the specific steps, it’s simple) and start ambari-agent on the slave servers.


    Below are screenshots of a mini cluster built with 4 servers:

     
  • Wang 23:19 on 2018-02-13 Permalink | Reply
    Tags: Mysql

    Delete Hive Meta Error 

    I got this error when I deleted a Hive table, because the delete cascades through the Hive metastore in MySQL:

    com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; 
    check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
    • Mysql Version: 5.6
    • Hive Version: 2.3.1
    • Mysql Connector Version: 5.1.26

    After searching, I found it’s a bug in the MySQL connector: https://bugs.mysql.com/bug.php?id=66659.

    So I changed the MySQL connector to 5.1.39, and things were done 😀
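
    For reference, swapping the driver jar under Hive’s lib directory looks roughly like this (a sketch; I assume Hive picks the connector up from $HIVE_HOME/lib, and the Maven Central path is an assumption):

    rm $HIVE_HOME/lib/mysql-connector-java-5.1.26*.jar
    wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar -P $HIVE_HOME/lib/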

     