
HowTo: Generate Certificates for OpenLDAP and Use Them for Certificate Authentication


LDAPS Server Certificate Requirements

LDAPS requires a properly formatted X.509 certificate. This certificate lets an OpenLDAP service listen for and automatically accept SSL connections. The server certificate authenticates the OpenLDAP server to the client during LDAPS session setup and enables the SSL tunnel between the client and the server. Optionally, we can also use LDAPS for client authentication.

Having spent quite some time getting TLS to work, I thought this might be useful to others:

Creating a self-signed CA certificate:

1. Create the ldapclient-key.pem private key (it will also serve as the CA key below):

openssl genrsa -des3 -out ldapclient-key.pem 1024

2. Create the ldapserver-cacerts.pem CA certificate:

openssl req -new -key ldapclient-key.pem -x509 -days 1095 -out ldapserver-cacerts.pem

Creating a certificate for the server:

1. Create the ldapserver-key.pem private key:

openssl genrsa -out ldapserver-key.pem

2. Create a server.csr certificate signing request:

openssl req -new -key ldapserver-key.pem -out server.csr

3. Create the ldapserver-cert.pem certificate, signed by your own CA:

openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem

4. Create a copy of the CA certificate for the client:

cp -rpf ldapserver-cacerts.pem   ldapclient-cacerts.pem

Now configure the certificates in slapd.conf; the correct files must be copied to each server:

TLSCACertificateFile /etc/openldap/certs/ldapserver-cacerts.pem
TLSCertificateFile /etc/openldap/certs/ldapserver-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/ldapserver-key.pem
TLSCipherSuite HIGH:MEDIUM:+SSLv2

# Personally, I only verify the server from the client side.
# If you do the same, add this:
TLSVerifyClient never
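
To sanity-check the setup, you can verify the server certificate against the CA and test the LDAPS listener; a minimal sketch, assuming the server answers as ldap.example.com on port 636:

openssl verify -CAfile ldapserver-cacerts.pem ldapserver-cert.pem
openssl s_client -connect ldap.example.com:636 -CAfile ldapclient-cacerts.pem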

Configuring certificates for LDAP clients

Key : ldapclient-key.pem
Crt : ldapclient-cert.pem
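
On the client side these usually go into /etc/openldap/ldap.conf (the CA, system-wide) and ~/.ldaprc (the per-user certificate and key, only needed if the server verifies clients); a minimal sketch, assuming the files are kept under /etc/openldap/certs:

TLS_CACERT /etc/openldap/certs/ldapclient-cacerts.pem
TLS_CERT /etc/openldap/certs/ldapclient-cert.pem
TLS_KEY /etc/openldap/certs/ldapclient-key.pem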

Error: postfix: warning: SASL authentication failure: No worthy mechs found


After configuring a Postfix relay server, I found there was an issue with Gmail server authentication; it bounced the emails.

Error:
 postfix/smtp[25857]: 59BF721177: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.25.108]: no mechanism available
 postfix/smtp[25861]: warning: SASL authentication failure: No worthy mechs found

There are two likely reasons behind this:
1. The SASL plain-authentication package is missing:

yum install cyrus-sasl{,-plain}

2. Allow plaintext authentication (which is fine when using STARTTLS, since the connection is encrypted):

smtp_sasl_security_options = noanonymous

Make sure you have enabled all the options below (in /etc/postfix/main.cf):

smtp_sasl_auth_enable = yes
smtp_use_tls = yes
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtp_sasl_mechanism_filter = login
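
For completeness, the relay itself also needs a relayhost and credentials; a minimal sketch, assuming Gmail on port 587 and a hypothetical /etc/postfix/sasl_passwd map:

relayhost = [smtp.gmail.com]:587
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# /etc/postfix/sasl_passwd holds: "[smtp.gmail.com]:587 user@gmail.com:app-password"
postmap /etc/postfix/sasl_passwd
postfix reload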

 

HowTo: Passwordless login in Linux


Passwordless logins let you get into a server even after the password has been changed or expired.

It can be achieved with a single Unix command; use either it or the detailed steps given below. It will prompt for server2's password once, and after that, logins will be passwordless.

 [root@srv-51 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub  syncfuser@192.168.1.52

Detailed steps :

1. Generate a key pair on server1; skip this step if one already exists.

 [root@srv-51 ~]$ ssh-keygen
 Generating public/private rsa key pair.
 Enter file in which to save the key (/root/.ssh/id_rsa):
 Created directory '/root/.ssh'.
 Enter passphrase (empty for no passphrase):
 Enter same passphrase again:
 Your identification has been saved in /root/.ssh/id_rsa.
 Your public key has been saved in /root/.ssh/id_rsa.pub.
 The key fingerprint is:
 8f:99:9f:8f:ba:bf:15:ca:6b:1f:83:06:a2:1a:9c:59 root@srv-51
 The key's randomart image is:
 (randomart image not reproduced here)

2. Grab the key and add it to the authorized_keys file on server2:

[root@srv-51 ~]# cat ~/.ssh/id_rsa.pub
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAz9iTxsWIYZyLtGN47MQZkSrPqXoGwATAKD/ZqIyemFRvKnlkSllkEEQ7+MlMstz6HvONfTJuJROegELqTIA7PoR43LTTKw7zfqJtt1J4fUH/6mbYlB5bedXtl/7L9auRYr276d04CFUCKfINEG4KJXYlbuSM8Mr5ZiUvLCkiu4Jx77DSy0iWaDa90C6cEbl1vRX9yl1pdWQbAMuazYLfiDPOnbqb7JtcI9du5bNEuFuA26VahaYbaYtXFnKBbKrCUMzTHT2uuNesYpckUHT4f0T1fU9qNsAYBlyQBgMIu/2qdJ+Y8luMVCkydXx8ZJmSTmAp+yR+qaZDYCqujrvjdQ== root@localhost.localdomain

3. The authorized_keys entry on server2 looks like this:

[root@srv-52 ~]# cat /home/syncfuser/.ssh/authorized_keys
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAz9iTxsWIYZyLtGN47MQZkSrPqXoGwATAKD/ZqIyemFRvKnlkSllkEEQ7+MlMstz6HvONfTJuJROegELqTIA7PoR43LTTKw7zfqJtt1J4fUH/6mbYlB5bedXtl/7L9auRYr276d04CFUCKfINEG4KJXYlbuSM8Mr5ZiUvLCkiu4Jx77DSy0iWaDa90C6cEbl1vRX9yl1pdWQbAMuazYLfiDPOnbqb7JtcI9du5bNEuFuA26VahaYbaYtXFnKBbKrCUMzTHT2uuNesYpckUHT4f0T1fU9qNsAYBlyQBgMIu/2qdJ+Y8luMVCkydXx8ZJmSTmAp+yR+qaZDYCqujrvjdQ== root@localhost.localdomain
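
If the key still does not work, check the permissions; sshd ignores keys when the directory or file is too open (standard OpenSSH behaviour, using the same paths as above):

[root@srv-52 ~]# chmod 700 /home/syncfuser/.ssh
[root@srv-52 ~]# chmod 600 /home/syncfuser/.ssh/authorized_keys
[root@srv-52 ~]# chown -R syncfuser:syncfuser /home/syncfuser/.ssh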

Finally, the output will look like this:

 [root@srv-51 ~]# ssh syncfuser@192.168.1.52
 Last login: Wed Jun 25 17:08:25 2014 from 192.168.1.51
 [syncfuser@srv-52 ~]$

Now the root user on server1 can log in without a password as syncfuser on server2. 🙂

HowTo: Recover a RAID volume and mount it separately


My NAS storage crashed, and this time I was forced to move one of the RAID volumes to another server to bring the service up, because the volume contains all the VMs used by a Xen server; most probably it is an LVM disk.

Everybody knows we can't simply attach a RAID disk to another machine, so follow the procedure below.

Once I attached the HDD to another machine, I checked the disk availability:

root@ubuntu:~# mdadm --examine /dev/sdb
/dev/sdb:
 Magic : a92b4efc
 Version : 1.2
 Feature Map : 0x0
 Array UUID : ec2c6fb2:f211cfa5:8dfa8777:4f08bfed
 Name : openmediavault:storage
 Creation Time : Fri May 9 16:22:45 2014
 Raid Level : raid1
 Raid Devices : 2
Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
 Array Size : 976761424 (931.51 GiB 1000.20 GB)
 Used Dev Size : 1953522848 (931.51 GiB 1000.20 GB)
 Data Offset : 2048 sectors
 Super Offset : 8 sectors
 State : clean
 Device UUID : 3a9e90a0:ca0e458e:c48e1b34:f3aaf06f
Update Time : Tue Jun 24 16:20:00 2014
 Checksum : eaa54b02 - correct
 Events : 24468
 Device Role : Active device 1
 Array State : .A ('A' == active, '.' == missing)

That looks good; now move to the next step: assemble the block device md* so the partitions are revealed.

root@ubuntu:~# mdadm --assemble --force /dev/md127 /dev/sdb

You will get output like this:

root@ubuntu:~# ll  /dev/md127
 brw-rw---- 1 root disk 9, 127 Jun 24 14:27 /dev/md127

Now you can see the LVM volumes:

root@ubuntu:~# lvs
 LV   VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
 nfs  storage -wi-ao 931.51g
 root@ubuntu:~# pvs
 PV         VG      Fmt  Attr PSize   PFree
 /dev/md127 storage lvm2 a-   931.51g    0
 root@ubuntu:~# vgs
 VG      #PV #LV #SN Attr   VSize   VFree
 storage   1   1   0 wz--n- 931.51g    0
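
If the logical volume does not show up under /dev/mapper yet, activate the volume group first; a standard LVM step, using the VG name 'storage' from the output above:

root@ubuntu:~# vgchange -ay storage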

Mount the partition manually:

root@ubuntu:~# mount /dev/mapper/storage-nfs /export/
root@ubuntu:~# mount | grep nfs
 /dev/mapper/storage-nfs on /export type ext4 (rw)

That’s it; I got my files back.

HowTo: Change an instance-store AMI to an EBS-backed AMI


Amazon does not provide any feature for changing an AMI's root device type. Once we launch an instance from an instance-store AMI we can't upgrade it, because upgrading requires stopping the instance, and the stop option is disabled for instance-store AMIs. I followed the steps below; it can be done in two ways, using either rsync or dd.

Here is the steps:

  • Create an EBS volume of the same size or larger; I used 10G because my existing instance has 10G on root.

[Screenshot: EBS_fresh]

After creation it looks like this:

[Screenshot: EBS_new]

  • Attach the EBS volume to the existing instance-store-backed instance.

Right-click and select Attach Volume.

[Screenshot: EBS_attach]

  • Log in to the instance-store-backed server and stop all running services (optional), e.g., mysqld, httpd, xinetd.

Execute the disk mirroring commands below; they will take a few minutes to complete, depending on server performance.

[root@ip-10-128-5-222 ~]# dd bs=65536 if=/dev/sda1 of=/dev/sdf

or

mkfs.ext3 /dev/sdf                              #Create filesystem
mkdir /mnt/ebs                                  #New dir for mounting
mount /dev/sdf /mnt/ebs                         #Mount as a partition
rsync -avHx / /mnt/ebs                          #Synchronize root to the EBS volume
rsync -avHx /dev /mnt/ebs                       #Synchronize device information
tune2fs -L '/' /dev/sdf                         #Create partition label for the EBS volume
sync;sync;sync;sync && umount /mnt/ebs          #Sync and unmount the EBS volume

Check the EBS volume for consistency

[root@ip-10-128-5-222 ~]# fsck /dev/sdf
 fsck 1.39 (29-May-2006)
 e2fsck 1.39 (29-May-2006)
 /dev/sdf: clean, 126372/1310720 files, 721346/2621440 blocks

Mount the EBS volume on the instance, and remove the /mnt entry from the fstab on your EBS volume:

[root@ip-10-128-5-222 ~]# mount /dev/sdf /mnt/ebs-vol
[root@ip-10-128-5-222 ~]# vim /mnt/ebs-vol/etc/fstab

  • Create a snapshot of the EBS volume using the AWS management console.

Right-click the EBS volume and select Create Snapshot; it will take a few minutes to create.

[Screenshot: EBS_snapshot]

After creation, the snapshot will be listed under Snapshots.

[Screenshot: EBS_snapshot]

Now right-click the snapshot and select Create Image from Snapshot.

[Screenshot: EBS_create_image]
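
If you prefer the command line, the snapshot and image steps can also be done with the AWS CLI; a minimal sketch, assuming the CLI is configured and using placeholder IDs:

[root@ip-10-128-5-222 ~]# aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "root volume copy"
[root@ip-10-128-5-222 ~]# aws ec2 register-image --name "my-ebs-ami" --root-device-name /dev/sda1 \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-xxxxxxxx"}}]'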

  • Launch a new EC2 instance using the newly created AMI. While creating it you can select any instance type, and you may reuse the same keypair and Elastic IP.

Creating the new instance using the new AMI:

[Screenshot: NEW_EC2]

The running instance:

[Screenshot: EC2_new]

  • Now you can log in to the new server. If you selected a size larger than the snapshot, use the command below to grow the filesystem into the extra space:

#resize2fs /dev/sda1

  • The server is now successfully migrated to EBS-backed. Start the services again if necessary; this time we can upgrade the instance type.

HowTo: S3 bucket dynamic URI access


s3cmd is a command-line tool for uploading, retrieving, and managing data in Amazon S3. The wiki is still not updated; you may get the packages from the official SourceForge project.

The download repository is also available here: Download Now

It also supports Unix-style wildcard expansion; for example, we can use * to match all resources, or {dir1,file2} for specific resources.

The example below shows how to set a public ACL on dynamic subdirectories.

Installation:

root@planetcure:wget http://kaz.dl.sourceforge.net/project/s3tools/s3cmd/1.0.1/s3cmd-1.0.1.tar.gz
root@planetcure:tar -zxvf s3cmd-1.0.1.tar.gz
root@planetcure:export  PATH=$PATH:/opt/installer/s3cmd-1.0.1

Now we can access the binary from any location.

root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd setacl --acl-public s3://my-bucket-name/{dev,stg1,stg2}/*/dir5/*/3/*

This command covers the following patterns:

s3://my-bucket-name/ is my S3 bucket

* matches all subdirectories

{dev,stg1,stg2} matches the listed directories out of a group of directories

dir5/ and 3/ match specific subdirectories
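
To confirm the ACL took effect, s3cmd can show an object's ACL; for example (a hypothetical object path under the same bucket):

root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd info s3://my-bucket-name/dev/dir1/dir5/dir2/3/file.txt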

Enjoy the day, 🙂

HowTo: Increase The Maximum Number Of Open Files / File Descriptors (FD)


Sometimes we get an error message like “too many open files“; it means you have reached the open-file limit. You could always try doing a ulimit -n 2048. This only resets the limit for your current shell, and the number you specify must not exceed the hard limit.

Each operating system has a different hard limit setup in a configuration file. For instance, the hard open file limit on Solaris can be set on boot from /etc/system.

[anand@planetcure ~]$ cat /proc/sys/fs/file-max
172214

This shows the system-wide maximum number of open files; for the current shell's own limits you can use the commands below.

# ulimit -Hn
# ulimit -Sn

We can set this system-wide and per user. For global configuration we use the /etc/sysctl.conf file under Linux. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):

System-wide File Descriptors (FD) Limits

# sysctl -w fs.file-max=100000

The command raises the limit to 100000. To make it permanent, append "fs.file-max = 100000" to /etc/sysctl.conf; the value will then survive a reboot. Reload the settings with:

#sysctl -p

Verify using the command below:

#sysctl fs.file-max

User-level File Descriptors (FD) Limits

In some cases we need different settings for particular users. These per-user limits apply on top of the system-wide settings and give new limits to those users.

Set specific limits by editing the /etc/security/limits.conf file; we can also use this file to set limits for all users.

For apache:

httpd soft nofile 1024
httpd hard nofile 2048

All user limits

* soft nofile 1024
* hard nofile 2048

Save and close the file. You have to log in again for the new values to take effect.

su httpd -c "ulimit -Hn"
su httpd -c "ulimit -Sn"
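
If the new limits do not take effect after re-login, check that pam_limits is enabled for your login path; a hedged note, since file locations vary by distribution:

# /etc/pam.d/login (and /etc/pam.d/sshd for SSH logins) should contain:
session    required     pam_limits.so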

Script: HTTPS traffic block


This script blocks HTTPS traffic on the software router itself. I am using Squid, and it cannot handle HTTPS traffic because: 1. the URL is encrypted; 2. the routing table only handles traffic over port 80.

This script has two input files, which are created automatically on the first run. It supports private-IP-based restriction.

Editable area in the script :

DIST=192.168.1.6            #IP where the request has to be forwarded
DPORT=81                    #Port where the request has to be forwarded
BLOCKPORTS=443              #Outgoing + incoming port
RULE=forward                #Possible options: reject, drop, forward

If you have a web page that shows the user a message about the block, set it here.

Enter the domains and the local IPs separately in the input files; examples are shown below. Download here

[anand@planetcure ~]$ sh https_block.sh --help
This script is for block https outbound traffic using source based requests
 -s or --silent Silent execution
 ssl_domains  File for enter SSL domain names
 ip_users     File for enter localip list

You must enable IP forwarding, and execute the script as root.
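
Enabling forwarding is a single sysctl; append it to /etc/sysctl.conf to make it persistent:

[root@planetcure]# sysctl -w net.ipv4.ip_forward=1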

First run:

[root@planetcure]# sh https_block.sh 
Parent dir not found, Creating entire structure 
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files
[INFO]:We found empty input file. exiting..

Input files:

[root@planetcure]# ls /opt/installer/scripts/
ip_users  ssl_domains

The input files, one by one:

[root@planetcure scripts]# cat ip_users
192.168.1.100
192.168.1.245
[root@planetcure scripts]# cat ssl_domains
www.enlook.wordpress.com
facebook.com
www.facebook.com

Output:

[root@planetcure]# sh https_block.sh 
Validating file structure
checking ssl_domains Ok.
checking ip_users Ok.
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files

 Executing source Ip 192.168.1.100 

76.74.254.123 blocked for the domain www.enlook.wordpress.com
192.0.80.250 blocked for the domain www.enlook.wordpress.com
192.0.81.250 blocked for the domain www.enlook.wordpress.com
66.155.9.238 blocked for the domain www.enlook.wordpress.com
66.155.11.238 blocked for the domain www.enlook.wordpress.com
76.74.254.120 blocked for the domain www.enlook.wordpress.com
173.252.110.27 blocked for the domain facebook.com
31.13.79.128 blocked for the domain www.facebook.com

 Executing source Ip 192.168.1.245 

76.74.254.120 blocked for the domain www.enlook.wordpress.com
76.74.254.123 blocked for the domain www.enlook.wordpress.com
192.0.80.250 blocked for the domain www.enlook.wordpress.com
192.0.81.250 blocked for the domain www.enlook.wordpress.com
66.155.9.238 blocked for the domain www.enlook.wordpress.com
66.155.11.238 blocked for the domain www.enlook.wordpress.com
173.252.110.27 blocked for the domain facebook.com
31.13.79.128 blocked for the domain www.facebook.com

Now set this up as a cron job, like below:

*/05 * * * * /bin/sh /root/https_block.sh -s

If you run the script again, it will show the current status of the blocked domains.

[root@localhost bash]# sh https_block.sh 
Validating file structure
checking ssl_domains Ok.
checking ip_users Ok.
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files

 Executing source Ip 192.168.1.100 

Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        76.74.254.123       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        192.0.80.250        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        192.0.81.250        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        66.155.9.238        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        66.155.11.238       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.100        76.74.254.120       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:facebook.com      DNAT       tcp  --  192.168.1.100        173.252.110.27      tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
31.13.79.144 blocked for the domain www.facebook.com

 Executing source Ip 192.168.1.245 

Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        76.74.254.120       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        76.74.254.123       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        192.0.80.250        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        192.0.81.250        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        66.155.9.238        tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:www.enlook.wordpress.com      DNAT       tcp  --  192.168.1.245        66.155.11.238       tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
Domain:facebook.com      DNAT       tcp  --  192.168.1.245        173.252.110.27      tcp dpt:443 tcp dpt:443 to:192.168.1.6:81 
31.13.79.144 blocked for the domain www.facebook.com

Now you have control over the network traffic usage.

Bash: History appending for multiple sessions


I had a requirement to list the complete command history when a single user runs multiple terminal sessions from different places. I followed the steps below.

Step 1: Create a new file with the entries below

root@appserver:# cat /etc/profile.d/bash_history.sh
function share_history {
 history -a
 history -c
 history -r
}
HISTSIZE=99999
HISTCONTROL=ignoredups
HISTTIMEFORMAT=`echo -e "\033[1;34m%d/%h/%Y \033[1;31m%H:%M:%S \033[0m"`
PROMPT_COMMAND='share_history'
shopt -u histappend

Step 2: Activate it at runtime

root@appserver:# source /etc/profile.d/bash_history.sh

Now you can see the list of past commands with timestamps:

Sample Output :

1005 26/Dec/2013 14:23:08 vi /etc/profile.d/bash_history.sh
1006 26/Dec/2013 14:23:27 source /etc/profile.d/bash_history.sh
1007 26/Dec/2013 14:23:31 history

Script: Bash script to back up MySQL databases


#!/bin/bash 
# Simple script to backup MySQL databases 
# 
# You have to enter the credentials; the script will back up all the databases, 
# including information_schema and performance_schema, and store them in gzip format 
# in the backup directory. Each database is dumped as a separate file. 
# 
# This will keep 30 days of backups. If you need to extend that, edit WEIGHT as needed. 
# Website : https://enlook.wordpress.com , http://planetcure.info , http://xtermpro.com 
# Created by : Anandbabu 
# 
#################################################################################################
# Parent backup directory
backup_parent_dir="/backup/"
#Enter multiple email ID using space
Email="email@domain.com email@domain.com"
Email_Content="/tmp/Mail_db"
WEIGHT=30 # 30 days
# MySQL settings
mysql_user="my_database_user"
mysql_password='database_password'
mysql_databases="Default_database"
#Creating file for email
[ ! -f ${Email_Content} ] && touch ${Email_Content} || :> ${Email_Content}
E_mail(){
 for email in ${Email}
 do
 cat ${Email_Content} | mail -s "Notification: Mysql Database Backup $@ from MyServer " ${email} -aFrom:Backup\<backup@domain.com\>
 done
 }
# Read MySQL password from stdin if empty
if [ -z "${mysql_password}" ]; then
 # Prompt on the terminal (not in the mail body) and read the password silently
 echo -n "Enter MySQL ${mysql_user} password: "
 read -s mysql_password
 echo
fi
# Check MySQL password
echo exit | mysql --user=${mysql_user} --password=${mysql_password} -B 2>/dev/null
if [ "$?" -gt 0 ]; then
 echo "MySQL ${mysql_user} password incorrect" >> ${Email_Content}
 E_mail Failed
 exit 1
else
 echo "MySQL ${mysql_user} password correct." >> ${Email_Content}
fi
# Create backup directory and set permissions
backup_date=`date +%Y_%m_%d_%H_%M`
backup_dir="${backup_parent_dir}/${backup_date}"
echo "Backup directory: ${backup_dir}" >> ${Email_Content}
mkdir -p "${backup_dir}"
chmod 700 "${backup_dir}"
# Get MySQL databases
mysql_databases=`echo 'show databases' | mysql --user=${mysql_user} --password=${mysql_password} -B | sed '/^Database$/d'`
# Backup and compress each database
for database in $mysql_databases
do
if [[ "$database" =~ "information_schema" || "$database" =~ "performance_schema" ]] ; then
 additional_mysqldump_params="--skip-lock-tables"
else
 additional_mysqldump_params=""
fi
 echo "Creating backup of \"${database}\" database" >> ${Email_Content}
 mysqldump ${additional_mysqldump_params} --user=${mysql_user} --password=${mysql_password} ${database} | gzip > "${backup_dir}/${database}.sql.gz"
 chmod 600 "${backup_dir}/${database}.sql.gz"
done

## Remove backup folders older than 30 days
ECOUT=""
echo "" >> ${Email_Content}
ECOUT=`find ${backup_parent_dir} -type d -ctime +$WEIGHT`
if [ -z "$ECOUT" ]; then
 echo "No more older backups to remove" >> ${Email_Content}
 E_mail Success
 exit
else
 echo "Following older backups are removed" >> ${Email_Content}
 for i in $ECOUT
 do
 rm -rvf $i 1>>${Email_Content} 2>>${Email_Content}
 done
 E_mail Success
 exit
fi
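
To run the backup unattended, a cron entry like the one below works; the script path is hypothetical:

0 2 * * * /bin/bash /opt/scripts/mysql_backup.sh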