Linux

HowTo: Create HA Cluster on CentOS 6.7

Worked on versions

OS: CentOS 6.7
Building the cluster
To build this simple cluster, we need a few basic components:
A resource manager that can start and stop resources (like Pacemaker)
A messaging component which is responsible for communication and membership (like Corosync or Heartbeat)
Optionally: a cluster manager to easily manage the cluster settings on all nodes (like PCS)

Preparation
Start by configuring both cluster nodes with a static IP and a hostname, and make sure that they are in the same subnet and can reach each other by node name.

1, Local name binding using hosts

cat /etc/hosts
10.0.0.11 dir01 dir01.cluster.domain.com
10.0.0.12 dir02 dir02.cluster.domain.com
10.0.0.13 dir03 dir03.cluster.domain.com
10.0.0.10 ldap-ha ldap-ha.cluster.domain.com
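A quick way to confirm that each node can resolve and reach the others by name (a simple check, assuming the hosts entries above are present on every node):

for h in dir01 dir02 dir03; do ping -c1 $h >/dev/null && echo "$h reachable" || echo "$h NOT reachable"; done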

2, Disable SELinux

vi /etc/selinux/config
SELINUX=disabled
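The change in /etc/selinux/config only takes effect after a reboot. To switch SELinux to permissive mode immediately and confirm the active mode:

setenforce 0
getenforce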

3, Clean the yum cache and update the server

yum clean all
yum update

Basic firewall settings for all the nodes in the cluster:
When testing the cluster, we could temporarily disable the firewall to be sure that blocked ports aren’t causing unexpected problems.

1, Open UDP-ports 5404 and 5405 for Corosync:

iptables -I INPUT -m state --state NEW -p udp -m multiport --dports 5404,5405 -j ACCEPT

2, Open TCP-port 2224 for PCS

iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 2224 -j ACCEPT

3, Allow IGMP-traffic

iptables -I INPUT -p igmp -j ACCEPT

4, Allow multicast-traffic

iptables -I INPUT -m addrtype --dst-type MULTICAST -j ACCEPT

5, Save the changes made in iptables and restart

service iptables save
service iptables restart
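To verify that the rules are loaded and were saved correctly, list the INPUT chain and look for the cluster ports (5404/5405 for Corosync, 2224 for pcsd):

iptables -L INPUT -n --line-numbers | grep -E '5404|5405|2224'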

Installation
1, After setting up the basics, we need to install the packages for the components on all the servers:

yum install corosync pcs pacemaker cman

2, To manage the cluster nodes we will use PCS, which gives us a single interface for all cluster nodes. When the packages were installed, yum also created a user, hacluster, which is used together with PCS to configure the cluster nodes. Before we can use PCS, we need to set up public-key authentication or give this user a password on all the nodes:

echo "hapasswd" | passwd hacluster --stdin

3, Start pcsd and pacemaker on all the nodes

service pacemaker start
service pcsd start
chkconfig pacemaker on
chkconfig pcsd on

4, Create a new corosync multicast configuration as given below:

vi /etc/corosync/corosync.conf

compatibility: whitetank
totem {
 version: 2
 # Time (in ms) to wait for a token (1)
 token: 10000
 # How many token retransmits before forming a new
 # configuration
 token_retransmits_before_loss_const: 10
 # How long to wait for join messages in the membership protocol (ms)
 join: 1000
 # How long to wait for consensus to be achieved before starting a new
 # round of membership configuration (ms)
 consensus: 7500
 # Number of messages that may be sent by one processor on receipt of the token
 max_messages: 20
 # Stagger sending the node join messages by 1..send_join ms
 send_join: 45
 # Limit generated nodeids to 31-bits (positive signed integers)
 clear_node_high_bit: yes
 # Turn off the virtual synchrony filter
 vsftype: none
 # Enable encryption (2)
 secauth: on
 # How many threads to use for encryption/decryption
 threads: 0
 # This specifies the redundant ring protocol, which may be
 # none, active, or passive. (3)
 rrp_mode: active

# Multicast interface configuration for ring 0 (add a second interface block for a redundant ring). (4)
 interface {
 ringnumber: 0
 bindnetaddr: 10.0.0.11
 mcastaddr: 239.255.1.1
 mcastport: 5405
 }
}

amf {
 mode: disabled
}

service {
 # Load the Pacemaker Cluster Resource Manager (5)
 ver: 1
 name: pacemaker
}

aisexec {
 user: root
 group: root
}

logging {
 fileline: off
 to_stderr: yes
 to_logfile: yes
 logfile: /var/log/cluster/corosync.log
 to_syslog: yes
 syslog_facility: daemon
 debug: off
 timestamp: on
 logger_subsys {
 subsys: AMF
 debug: off
 tags: enter|leave|trace1|trace2|trace3|trace4|trace6
 }
}

5, Since we will configure all nodes from one point, we need to authenticate on all nodes before we are allowed to change the configuration. Use the previously configured hacluster user and password.

pcs cluster auth dir01 dir02 -u hacluster

From here, we can control the cluster by using PCS from dir01. It’s no longer required to repeat all commands on all the nodes.
Authorisation tokens are stored in the file /var/lib/pcsd/tokens.
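To confirm that the authentication succeeded, the token file can simply be inspected; it should contain an entry for every authenticated node (the exact format depends on the pcs version):

cat /var/lib/pcsd/tokens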

Create the cluster and add nodes
1, Start by adding all nodes to a cluster named LDAP-HA-Cluster:

pcs cluster setup --name LDAP-HA-Cluster dir01 dir02

2, After creating the cluster and adding the nodes, start the cluster services from this single point; this will start pacemaker and corosync on all the nodes.

pcs cluster start --all

3, Optionally, depending on requirements, we can enable cluster services to start on boot,

pcs cluster enable --all

To check the status of the cluster after starting it:

pcs status
service pacemaker status
service corosync status

To check the status of the nodes in the cluster

pcs status nodes
corosync-objctl runtime.totem.pg.mrp.srp.members
corosync-cfgtool -s
pcs status corosync

Cluster configuration
1, Check the configuration for errors; at this point there still are some:

crm_verify -L -V

The above command shows that there are still errors in the cluster. The first one concerns STONITH (Shoot The Other Node In The Head), a mechanism that ensures you don't end up with two nodes that both think they are active and claim to be the service and virtual IP owner, also called a split-brain situation. Since we have a simple cluster, we'll just disable the stonith option:

pcs property set stonith-enabled=false

2, Ignore a low quorum (with only two nodes, losing one node means losing quorum, and the default policy would then stop all resources)

pcs property set no-quorum-policy=ignore

The setting below is needed only if we have 3 servers in the cluster:

pcs property set expected-quorum-votes=3

3, Set the basic cluster properties

pcs property set pe-warn-series-max=1000 \
 pe-input-series-max=1000 \
 pe-error-series-max=1000 \
 cluster-recheck-interval=5min
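To confirm that the properties were applied and that the earlier STONITH errors are gone, list the configured properties and re-run the verification:

pcs property list
crm_verify -L -V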

4, HA-Proxy may already be configured on the servers; if not, start with a basic install and start haproxy, because we need to add it as an LSB resource in the cluster.

yum install haproxy
service haproxy start
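If HAProxy is not configured yet, a minimal TCP-mode section along these lines can be added to /etc/haproxy/haproxy.cfg. This is only a sketch: the bind address 10.0.0.10 is the floating IP created in the next step, and the backend servers and port 389 are assumptions based on the hosts file above. Note that haproxy can only bind this address on the node that currently holds the floating IP, which is exactly why the colocation and ordering constraints further down are needed:

listen ldap-cluster
    bind 10.0.0.10:389
    mode tcp
    balance roundrobin
    option tcplog
    server dir01 10.0.0.11:389 check
    server dir02 10.0.0.12:389 check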

5, Add a floating IP with the heartbeat IPaddr2 agent to monitor the servers. This IP is used to reach HA-Proxy and won't be assigned to any server where haproxy fails to start:

pcs resource create LDAPfrontendIP0 ocf:heartbeat:IPaddr2 ip=10.0.0.10 cidr_netmask=32 op monitor interval=30s

To check the status:

pcs status resources

Now we should get a response from the floating IP:

ping -c1 10.0.0.10

To see who is the current owner of the resource/virtual IP:

pcs status | grep LDAPfrontendIP0

Adding HA-Proxy to Pacemaker configuration
1, Because there is no OCF agent for HA-Proxy, we define an LSB resource haproxy. (Note: this must be the same name as the init script in /etc/init.d, and the script must comply with the LSB standard; the expected behaviour of init scripts is described in the Linux-HA documentation.) Fortunately the haproxy script can be used as-is, so a resource LDAP-HA-Proxy will be created:

pcs resource create LDAP-HA-Proxy lsb:haproxy op monitor interval=5s

The resource will start on the node with the LDAPfrontendIP0 resource but complain about the other hosts in the HA-Cluster:

pcs status

2, Obviously the haproxy service fails to start if the IP address of the load balancer does not exist on that node. By default Pacemaker spreads resources across all cluster nodes, but because the LDAPfrontendIP0 and LDAP-HA-Proxy resources depend on each other, LDAP-HA-Proxy may only run on the node holding LDAPfrontendIP0. To achieve this, a "colocation constraint" is needed; the score of INFINITY makes it mandatory to start LDAP-HA-Proxy on the node with the LDAPfrontendIP0 resource:

pcs constraint colocation add LDAP-HA-Proxy LDAPfrontendIP0 INFINITY

3, The start order of the resources is also important: LDAPfrontendIP0 provides the address that LDAP-HA-Proxy binds to, so LDAPfrontendIP0 must always start before LDAP-HA-Proxy. We therefore enforce the resource start/stop ordering:

pcs constraint order LDAPfrontendIP0 then LDAP-HA-Proxy
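Both constraints can be checked before restarting the cluster:

pcs constraint show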

After configuring the cluster with the correct constraints, restart it and check the status:

pcs cluster stop --all && pcs cluster start --all
pcs status

This completes the cluster setup with HA-Proxy. To operate it day to day, we also need to know how to switch, add and remove resources, as sketched below.
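A few commands cover the day-to-day cases: moving a resource to another node, putting a node into standby for maintenance, and removing a resource. This is a sketch that assumes the resource and node names used above:

# Move the virtual IP (and, through the constraints, HAProxy) to dir02
pcs resource move LDAPfrontendIP0 dir02
# Remove the location constraint created by the move so the cluster may place it freely again
pcs resource clear LDAPfrontendIP0
# Put dir01 into standby for maintenance, then bring it back
pcs cluster standby dir01
pcs cluster unstandby dir01
# Delete a resource that is no longer needed
pcs resource delete LDAP-HA-Proxy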

HowTo: Backup & Restore a Database in PostgreSQL (pg_dump, pg_restore)

How to backup and restore a database in PostgreSQL

1)Backup data with pg_dump

pg_dump -i -h localhost -p 5432 -U postgres -F c -b -v -f  "/home/anand/ltchiedb.backup" ltchiedb

To list all of the available options of pg_dump, issue the following command.

pg_dump -?
-p, --port=PORT database server port number
-i, --ignore-version proceed even when server version mismatches
-h, --host=HOSTNAME database server host or socket directory
-U, --username=NAME connect as specified database user
-W, --password force password prompt (should happen automatically)
-d, --dbname=NAME connect to database name
-v, --verbose verbose mode
-F, --format=c|t|p output file format (custom, tar, plain text)
-c, --clean clean (drop) schema prior to create
-b, --blobs include large objects in dump
-f, --file=FILENAME output file name
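For recurring backups the same pg_dump call can be wrapped in a small script and run from cron. A sketch, assuming the backup directory, the retention period and a non-interactive authentication method (for example a .pgpass entry for the postgres user):

#!/bin/bash
# Nightly custom-format dump of the ltchiedb database
BACKUP_DIR=/home/anand/backups
DB=ltchiedb
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"
pg_dump -h localhost -p 5432 -U postgres -F c -b -f "$BACKUP_DIR/${DB}_${DATE}.backup" "$DB"
# Keep only the last 7 days of dumps
find "$BACKUP_DIR" -name "${DB}_*.backup" -mtime +7 -delete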

2) Restore data with pg_restore

pg_restore -i -h localhost -p 5432 -U postgres -d ltchiedb -v "/home/anand/ltchiedb.backup"

To list all of the available options of pg_restore, issue the following command.

pg_restore -?
-p, --port=PORT database server port number
-i, --ignore-version proceed even when server version mismatches
-h, --host=HOSTNAME database server host or socket directory
-U, --username=NAME connect as specified database user
-W, --password force password prompt (should happen automatically)
-d, --dbname=NAME connect to database name
-v, --verbose verbose mode

Error: “ldap_bind: Can’t contact LDAP server (-1)” on nagios check

Nagios check_ldaps plugin working with SSL or TLS
Error:

[root@nagios libexec]# ./check_ldaps  -H 10.0.0.51  -w 10 -c 15 -b dc=tolven,dc=com -p 636 -v
ldap_bind: Can't contact LDAP server (-1)
 additional info: TLS error -8172:Peer's certificate issuer has been marked as not trusted by the user.
 Could not bind to the LDAP server

 

The cause of this issue is that the client side is not fully configured for TLS, so it cannot validate (or present) a certificate. Configuring the settings below fixes it; it works either way, by ignoring the SSL check or by adding the client CA certificate.

Create the configuration file /etc/openldap/ldap.conf if it does not exist.

To ignore the SSL certificate check, add the settings below:

TLS_REQCERT never
TLS_CACERT /etc/openldap/certs/ldap-client-ca.crt

Output:

[root@nagios libexec]# ./check_ldaps -H 10.0.0.51 -w 10 -c 15 -b dc=tolven,dc=com -p 636 -v
LDAP OK - 0.062 seconds response time|time=0.061526s;10.000000;15.000000;0.000000
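The other way mentioned above, trusting the server certificate instead of skipping the check, needs the CA that signed the LDAP server certificate on the Nagios host. The chain the server presents can be inspected first with openssl (using the same host and port as the check), and with the CA file in place TLS_REQCERT can be set to its stricter value:

# Inspect the certificate chain presented by the LDAP server
openssl s_client -connect 10.0.0.51:636 -showcerts </dev/null
# Then, in /etc/openldap/ldap.conf, verify against the CA instead of ignoring it
TLS_REQCERT demand
TLS_CACERT /etc/openldap/certs/ldap-client-ca.crt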

HowTo: Generate Certificate for OpenLDAP and using it for certificate authentication.

LDAPS Server Certificate Requirements

LDAPS requires a properly formatted X.509 certificate. This certificate lets an OpenLDAP service listen for and automatically accept SSL connections. The server certificate is used for authenticating the OpenLDAP server to the client during the LDAPS setup and for enabling the SSL communication tunnel between the client and the server. As an option, we can also use LDAPS for client authentication.

Having spent quite some time to make TLS work, I thought this may be useful to some:

Creating a self-signed CA certificate:

1, Create the ldapclient-key.pem private key:

openssl genrsa -des3 -out ldapclient-key.pem 1024

2, Create the ldapserver-cacerts.pem certificate:

openssl req -new -key ldapclient-key.pem -x509 -days 1095 -out ldapserver-cacerts.pem

Creating a certificate for the server:

1, Create the ldapserver-key.pem private key

openssl genrsa -out ldapserver-key.pem

2, Create a server.csr certificate request:

openssl req -new -key ldapserver-key.pem -out server.csr

3, Create the ldapserver-cert.pem certificate signed by your own CA:

openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem

4, Create a copy of the CA certificate for the client:

cp -rpf ldapserver-cacerts.pem   ldapclient-cacerts.pem
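Before copying the files around, it is worth checking that the server certificate really chains to the self-signed CA and looking at its validity dates:

openssl verify -CAfile ldapserver-cacerts.pem ldapserver-cert.pem
openssl x509 -in ldapserver-cert.pem -noout -subject -issuer -dates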

Now configure the certificates in slapd.conf; the correct files must be copied to each server:

TLSCACertificateFile /etc/openldap/certs/ldapserver-cacerts.pem
TLSCertificateFile /etc/openldap/certs/ldapserver-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/ldapserver-key.pem
TLSCipherSuite HIGH:MEDIUM:+SSLv2

# Personally, I only verify the server from the client side.
# If you do the same, add this:
TLSVerifyClient never

Configure certificates for LDAP clients

Key : ldapclient-key.pem
Crt : ldapclient-cert.pem
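For client-certificate authentication the key and certificate also have to be configured on the client. TLS_CERT and TLS_KEY are user-only options, so they belong in the user's ~/.ldaprc rather than in the global /etc/openldap/ldap.conf. A sketch, assuming the files were copied to /etc/openldap/certs:

# ~/.ldaprc
TLS_CACERT /etc/openldap/certs/ldapclient-cacerts.pem
TLS_CERT /etc/openldap/certs/ldapclient-cert.pem
TLS_KEY /etc/openldap/certs/ldapclient-key.pem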

HowTo: Manage sudo users, commands and privileges

If you want to prevent a user from executing a specific command, have a look at this (here the user ssh may run everything as user1 except passwd):

ssh ALL=(user1) ALL, !/usr/bin/passwd 

Add users and allow only specific commands:

#includedir /etc/sudoers.d

User_Alias JAVATEAM = fileupuser
Cmnd_Alias JUSERCMD =/etc/init.d/tomcat,/usr/bin/tail
JAVATEAM ALL = NOPASSWD : JUSERCMD
User_Alias ADMINTEAM = innouser
Cmnd_Alias SYSTEM =/sbin/service,/usr/sbin/ss,/bin/df,/usr/bin/du,/usr/bin/top,/bin/netstat,/usr/sbin/lsof,/bin/ps,/sbin/chkconfig
Cmnd_Alias FILEM =/bin/zcat,/usr/bin/tail,/bin/cat,/bin/grep
Cmnd_Alias COMPRESS =/usr/bin/unzip,/usr/bin/bzip2,/usr/bin/zip,/bin/tar
ADMINTEAM ALL = NOPASSWD : SYSTEM,FILEM,COMPRESS

This should make the logic easy to understand.
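Snippets like this are best kept in their own file under /etc/sudoers.d (which the #includedir line above already pulls in) and syntax-checked before use; sudo -l then shows what a given user is allowed to run. The file name teams below is just an example:

# Check the syntax of a drop-in file without activating a broken one
visudo -cf /etc/sudoers.d/teams
# List the commands fileupuser may run
sudo -l -U fileupuser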

HowTo: Enable rooting on an Android device

Download packages:

Kingo-compatible devices: http://www.kingoapp.com/android-root/devices.htm

Kingo ROOT download: http://www.kingoapp.com/index.htm

Step one: Download and install Kingo Android Root into the PC.

[Screenshot: ROOT-2]

Step two: Enable USB debugging mode on your phone. If it’s running Android 4.0 or 4.1, tap Settings, Developer Options, then tick the box for “USB debugging.” (You may need to switch “Developer options” to On before you can do so.) On Android 4.2, tap Settings, About Phone, Developer Options, and then tick “USB debugging.” Then tap OK to approve the setting change.
On Android 4.3 and later (and some versions of 4.2), tap Settings, About Phone, then scroll down to Build Number. Tap it seven times, at which point you should see the message, “You are now a developer!”

Step three: Run Android Root on your PC, then connect your phone via its USB cable. Make sure the device-compatible USB driver is installed.

[Screenshot: ROOT-1]

Step four: Click Root and wait a couple of minutes for it to complete, including the automated reboot at the end.

HowTo: Install OpenCV + Apache + MySQL + WSGI with ffmpeg and Qt support on Ubuntu 14.04

Install Java version “1.7.0_65” and Python 2.7.6 by whichever method you prefer.
##Update the current installed packages

 sudo apt-get update && sudo apt-get -y upgrade

## To build OpenCV from source on Ubuntu 14.04, first install a developer environment.

 sudo apt-get install build-essential cmake pkg-config
 sudo apt-get install curl qt-sdk unzip yasm checkinstall

##Install Image I/O libraries

 sudo apt-get install libjpeg62-dev libtiff4-dev libjasper-dev

##Install the GTK dev library

 sudo apt-get install libgtk2.0-dev

##Install Video I/O libraries

 sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

##Optional – install support for Firewire video cameras

 sudo apt-get install libdc1394-22-dev

##Install video streaming libraries

 sudo apt-get install libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev

##Install the Python development environment and the Python Numerical library

 sudo apt-get install python-dev python-numpy python-pip
 sudo apt-get install python-opencv python-software-properties python-mysqldb python-xml

##Install the parallel code processing library (the Intel tbb library)

 sudo apt-get install libtbb-dev

##Install the Qt dev library

 sudo apt-get install libqt4-dev

##Install OpenCV Additional support Video/Audio and SSL libraries

 sudo apt-get install zlib1g-dev libssl-dev libreadline-dev libyaml-dev libxml2-dev libxslt1-dev libcurl4-openssl-dev libopencv-dev libmp3lame-dev libopencore-amrnb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils

##Install Apache Server and dependencies

 sudo apt-get install libapache2-mod-wsgi apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 ssl-cert

##Install Mysql Database Server

 sudo apt-get install mysql-server libmysqlclient-dev

##Installing Python Modules

pip install numpy
pip install pyopencv
pip install Django==1.7.3
pip install django-admin-tools==0.5.2
pip install django-debug-toolbar==1.2.2
pip install django-extensions==1.4.9
pip install ipython==2.3.1
pip install six==1.9.0
pip install sqlparse==0.1.13
pip install wsgiref==0.1.2
pip install MySQL-python==1.2.5

##Download and Extraction OpenCV package

OPENCV_VER=2.4.10
curl "http://fossies.org/linux/misc/opencv-${OPENCV_VER}.zip" -o opencv-${OPENCV_VER}.zip
unzip "opencv-${OPENCV_VER}.zip" && cd "opencv-${OPENCV_VER}"
mkdir build && cd build

##Building OpenCV package from source

cmake -G "Unix Makefiles" -D PYTHON_LIBRARY=/usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so -D CMAKE_BUILD_TYPE=RELEASE -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_FFMPEG=ON -D WITH_OPENGL=ON ..

##Installing OpenCV Package

make -j2 && make install

## Providing Dummy Firewire Video camera device

sudo ln /dev/null /dev/raw1394

##Including Additional Library path

echo "/usr/local/lib" >> /etc/ld.so.conf.d/opencv.conf
sudo ldconfig

##Setting up environment variables

echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
PYTHONPATH=/usr/local/lib/python2.7/dist-packages:$PYTHONPATH
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
JAVA_BIN=$JAVA_HOME/bin
PATH=$PATH:$JAVA_BIN
export PKG_CONFIG_PATH PYTHONPATH JAVA_BIN JAVA_HOME PATH" >> /etc/profile.d/python_env.sh

## Load the environment variables into the current shell

source /etc/profile.d/python_env.sh

##To verify

python -c "import cv2; print(cv2.__version__)"
pkg-config --modversion opencv
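Because the build was configured with WITH_QT and WITH_FFMPEG, the build information reported by cv2 can be grepped to confirm that both were actually picked up:

python -c "import cv2; print(cv2.getBuildInformation())" | grep -i -E 'qt|ffmpeg'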

##Add a vhost for Apache

root@ip-10-184-30-74:~# pip --version #will show the dist-packages path
pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7)
vi /etc/apache2/sites-available/rasberry-pi.planetcure.in.conf

<VirtualHost *:80>
 ServerName rasberry-pi.planetcure.in
 DocumentRoot /opt/web-home/raspberrypi/facecount
 WSGIDaemonProcess rasberry-pi_demo user=anand group=www-data maximum-requests=10000 python-path=/opt/web-home/raspberrypi/facecount:/usr/lib/python2.7/dist-packages
 WSGIScriptAlias / /opt/web-home/raspberrypi/facecount/wsgi.py
WSGIScriptReloading On
WSGIPassAuthorization On
<Directory /opt/web-home/raspberrypi/facecount/>
 <Files wsgi.py>
 Require all granted
 </Files>
 </Directory>
 <Location />
 WSGIProcessGroup rasberry-pi_demo
 </Location>
Alias /static /opt/web-home/raspberrypi/facecount/static-assets/
<Directory /opt/web-home/raspberrypi/facecount/static-assets/>
 Require all granted
 </Directory>
ErrorLog /opt/web-home/raspberrypi/apache_logs/error.log
 # Possible values include: debug, info, notice, warn, error, crit,
 # alert, emerg.
 LogLevel warn
 CustomLog /opt/web-home/raspberrypi/apache_logs/access.log combined
</VirtualHost>
a2ensite rasberry-pi.planetcure.in

Now Restart Apache

service apache2 restart

Error: ctypes error: libdc1394 error: Failed to initialize libdc1394

This error appears while importing OpenCV in Python under the Django framework; when running the application, it throws the error below. libdc1394 is a library for controlling camera hardware. It is an optional part of the OpenCV installation, and in this case it is unable to find the hardware.

Error :

libdc1394 error: Failed to initialize libdc1394

If we don't need camera hardware, we could compile OpenCV without that part, or, if the server runs in VirtualBox or another virtualization system, simply enable the USB controller.
If it is a remote server and we don't need camera hardware, we can create a null link for the IO device:

 sudo ln /dev/null /dev/raw1394

Error: Fatal Python error: PyEval_AcquireThread: NULL new thread state

This can be caused by various issues:

1, mod_wsgi is compiled for a different Python version and/or a different Python installation than the Python virtual environment or Python installation it is trying to use at runtime.

2, mod_wsgi and mod_python are both enabled.

In my case it was the second cause: I disabled mod_python, because the website was running under the WSGI wrapper.

sudo a2dismod python
sudo service apache2 restart
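To check which of the two modules is actually loaded, before and after the change, list the modules Apache has enabled:

apache2ctl -M | grep -i -E 'wsgi|python'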

 

Error: Authz_core:error Client Denied by Server Configuration

I have upgraded Apache 2.2 to 2.4, and faced a strange error: the existing Apache authorization directives no longer work.

I have made a modification that fixed the issue.

Error :

[Wed Jan 28 04:29:51.468839 2015] [authz_core:error] [pid 29764:tid 139708675897088] [client 117.247.186.108:46348] AH01630: client denied by server configuration: /opt/web-home/raspberrypi/facecount/static-assets/images/detect.png

The fix changes the way that access control is declared, from:

  Order allow,deny
  Allow from all

to :

  Require all granted

This means that the total configuration for a Directory is now something like:

  <Directory /path/to/directory>
    Options FollowSymlinks
    AllowOverride none
    Require all granted
  </Directory>
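If there are too many old Order/Allow directives to convert at once, Apache 2.4 also ships mod_access_compat, which keeps the 2.2-style directives working as a stop-gap; converting to Require is still the cleaner fix:

sudo a2enmod access_compat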

Restart apache and it’ll all work nicely.