HowTo: Generate a certificate for OpenLDAP and use it for certificate authentication.

LDAPS Server Certificate Requirements

LDAPS requires a properly formatted X.509 certificate. This certificate lets an OpenLDAP service listen for and automatically accept SSL connections. The server certificate is used to authenticate the OpenLDAP server to the client during LDAPS setup and to enable the SSL communication tunnel between the client and the server. Optionally, we can also use LDAPS for client authentication.

Having spent quite some time getting TLS to work, I thought this might be useful to some:

Creating a self-signed CA certificate:

1, Create the ldapclient-key.pem private key:

openssl genrsa -des3 -out ldapclient-key.pem 1024

2, Create the ldapserver-cacerts.pem certificate:

openssl req -new -key ldapclient-key.pem -x509 -days 1095 -out ldapserver-cacerts.pem

Creating a certificate for the server:

1, Create the ldapserver-key.pem private key

openssl genrsa -out ldapserver-key.pem

2, Create a server.csr certificate request:

openssl req -new -key ldapserver-key.pem -out server.csr

3, Create the ldapserver-cert.pem certificate signed by your own CA:

openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem

4, Create a copy of the CA certificate for the client:

cp -rpf ldapserver-cacerts.pem   ldapclient-cacerts.pem

Now configure the certificates in slapd.conf; the correct files must be copied to each server:

TLSCACertificateFile /etc/openldap/certs/ldapserver-cacerts.pem
TLSCertificateFile /etc/openldap/certs/ldapserver-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/ldapserver-key.pem
TLSCipherSuite HIGH:MEDIUM:+SSLv2

# Personally, I only verify the server from the client side.
# If you do the same, add this:
TLSVerifyClient never
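
After restarting slapd with these settings, a quick way to confirm that the listener presents the new certificate is an openssl test connection (a sketch; the hostname ldap.example.com and the standard LDAPS port 636 are assumptions about your setup):

# Should print the certificate chain and end with "Verify return code: 0 (ok)"
openssl s_client -connect ldap.example.com:636 -CAfile /etc/openldap/certs/ldapserver-cacerts.pem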

Configure the certificates for LDAP clients

Key : ldapclient-key.pem
Crt : ldapclient-cert.pem
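
A minimal sketch of the client-side configuration, assuming the CA copy from step 4 has been placed under /etc/openldap/certs on the client. TLS_CACERT and TLS_REQCERT go in /etc/openldap/ldap.conf, while TLS_CERT and TLS_KEY are per-user options and normally belong in ~/.ldaprc:

# /etc/openldap/ldap.conf -- trust our own CA and insist on a valid server certificate
TLS_CACERT  /etc/openldap/certs/ldapclient-cacerts.pem
TLS_REQCERT demand

# ~/.ldaprc -- client certificate authentication (per-user options)
TLS_CERT /etc/openldap/certs/ldapclient-cert.pem
TLS_KEY  /etc/openldap/certs/ldapclient-key.pem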

HowTo: Allow SFTP access while chrooting the user and denying shell access.

Usually SFTP allows a system user to access their home directory to upload and download files with their account. However, the SFTP user can also navigate anywhere else on the server and in some cases download files, which creates a security vulnerability.

With a chroot, SFTP users are denied access to the rest of the system, as they are confined to the user's home directory. Thus users will not be able to snoop around the system in /etc or application directories. Logging in to a shell account is also denied.

The procedure below allowed me to enable this SFTP restriction:

1, Add a new group

2, Create a chroot directory in which the logins will land, which must be owned by root

3, Modify the sftp subsystem to internal-sftp and force the chroot directory

4, Reload the configuration

Steps :

Create the chroot directory so that others have no privileges on it:

mkdir /opt/chroot
chown root:root /opt/chroot
chmod 700 /opt/chroot

Create a common group for the chrooted users; the SSH rule will apply to this group:

groupadd sftpgroup
useradd -g sftpgroup -s /sbin/nologin  -d /opt/chroot/planetuser planetuser
passwd planetuser

Modify the SSH configuration:

vi /etc/ssh/sshd_config

Comment out the default sftp subsystem and add the new rule:

#Subsystem sftp /usr/lib/openssh/sftp-server

#Add the line 
Subsystem sftp internal-sftp

# Rules for sftp group
Match group sftpgroup
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp

Then restart the SSH service:

service sshd restart
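
A quick check that the restriction behaves as expected (the server address is an assumption): the SFTP login should land inside the chroot, while a normal SSH login should be refused.

# SFTP works, but the user only sees the chrooted home directory
sftp planetuser@192.168.1.50

# Shell login is refused because of /sbin/nologin and ForceCommand internal-sftp
ssh planetuser@192.168.1.50

Note that because the chroot directory itself must be owned by root and not writable by the user, uploads normally go into a user-owned sub-directory created inside it (for example /opt/chroot/planetuser/uploads).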

HowTo: Enable URL rewrite for Tomcat or other servlet containers

It is a URL rewrite feature very similar to Apache mod_rewrite, so we can apply similar rules. Ensure that the ‘UrlRewriteFilter‘ JAR file is on your web application’s classpath. Placing the JAR file in your webapp under ‘/WEB-INF/lib’ will do the trick, and if you’ve spent any time at all working with webapps you probably already have a preferred way of doing this. Alternatively, you may want to install the JAR file in your servlet container’s ‘/lib’ folder, particularly if you are deploying multiple webapps on your server and you want ‘UrlRewriteFilter‘ available to any or all of them automatically.

Download JAR from here

Read more Examples

Once you have the ‘UrlRewriteFilter‘ JAR on your webapp’s classpath, the real setup can begin. Open your application’s ‘web.xml‘ file and add the following filter configuration to your webapp:

<filter>
    <filter-name>UrlRewriteFilter</filter-name>
    <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
    <init-param>
        <param-name>logLevel</param-name>
        <param-value>WARN</param-value>
    </init-param>
    <init-param>
        <param-name>confPath</param-name>
        <param-value>/WEB-INF/urlrewrite.xml</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>UrlRewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

This makes the servlet container route traffic through the UrlRewriteFilter. Note that, although it is not discussed on the official site, the ‘logLevel‘ parameter is essential for the filter to be applied to the traffic.

Once you have finished adding the tags to web.xml, create urlrewrite.xml in the same directory as web.xml and configure the example rules for the URL rewrite.

<?xml version="1.0" encoding="utf-8"?>
 <!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 3.2//EN"
 "http://tuckey.org/res/dtds/urlrewrite3.2.dtd">
 <urlrewrite>
  <rule>
        <name>Domain Name Check</name>
        <condition name="host" operator="notequal">www.server.com</condition>
        <from>^(.*)$</from>
        <to type="redirect">http://www.server.com/$1</to>
    </rule>
    <rule>
        <from>/test</from>
        <to type="redirect">%{context-path}/examples</to>
    </rule>
</urlrewrite>

The first rule rewrites any request that reaches the application via the server’s IP address or an alternative alias domain name to www.server.com. It can also be used to rewrite URLs so that they include the www prefix.

The second rule redirects the invalid application path “test” to the examples application.

It looks like this: http://test.com/test -> http://www.server.com/examples/. Both test.com and server.com point to the same server and the same webapps.
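
A quick way to check the rules from the command line (host names as in the example; type="redirect" rules answer with an HTTP 302):

# The Location: header should point back at www.server.com
curl -I http://test.com/test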

Error: postfix: warning: SASL authentication failure: No worthy mechs found

After configuring a Postfix relay server, I found there was an issue with Gmail server authentication; it was bouncing the emails.

Error : 
 postfix/smtp[25857]: 59BF721177: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.25.108]: no mechanism available
 postfix/smtp[25861]: warning: SASL authentication failure: No worthy mechs found

There are two likely reasons behind this:
1, The SASL package for the plain module is missing

yum install cyrus-sasl{,-plain}

2, Allow plaintext authentication (which is fine when using STARTTLS, since the connection is encrypted)

smtp_sasl_security_options = noanonymous

Make sure you have enabled all the options below:

smtp_sasl_auth_enable = yes
smtp_use_tls = yes
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtp_sasl_mechanism_filter = login
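
For completeness, a minimal sketch of the relay settings that usually accompany these options in /etc/postfix/main.cf; the Gmail account and the password-map path are assumptions, and the map has to be rebuilt with postmap after editing:

# /etc/postfix/main.cf -- relay through Gmail with SASL authentication
relayhost = [smtp.gmail.com]:587
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# /etc/postfix/sasl_passwd -- credentials; then run: postmap /etc/postfix/sasl_passwd
# [smtp.gmail.com]:587    username@gmail.com:app-password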

HowTo: Password-less login in Linux

Password-less logins allow you to get into the server even after the password has been changed or expired.

It can be achieved with a single Unix command; you can use either this or the detailed steps given below. It will prompt for the password of server2 once, and from then on logins will be password-less.

 [root@srv-51 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub  syncfuser@192.168.1.52

Detailed steps :

1, Generate a key pair on server1; skip this step if one already exists

 [root@srv-51 ~]$ ssh-keygen
 Generating public/private rsa key pair.
 Enter file in which to save the key (/root/.ssh/id_rsa):
 Created directory '/root/.ssh'.
 Enter passphrase (empty for no passphrase):
 Enter same passphrase again:
 Your identification has been saved in /root/.ssh/id_rsa.
 Your public key has been saved in /root/.ssh/id_rsa.pub.
 The key fingerprint is:
 8f:99:9f:8f:ba:bf:15:ca:6b:1f:83:06:a2:1a:9c:59 root@srv-51
 The key's randomart image is:
 (RSA 2048 randomart image omitted)

2, Grab the public key and add it to the authorized_keys file on server2

[root@srv-51 ~]# cat ~/.ssh/id_rsa.pub
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAz9iTxsWIYZyLtGN47MQZkSrPqXoGwATAKD/ZqIyemFRvKnlkSllkEEQ7+MlMstz6HvONfTJuJROegELqTIA7PoR43LTTKw7zfqJtt1J4fUH/6mbYlB5bedXtl/7L9auRYr276d04CFUCKfINEG4KJXYlbuSM8Mr5ZiUvLCkiu4Jx77DSy0iWaDa90C6cEbl1vRX9yl1pdWQbAMuazYLfiDPOnbqb7JtcI9du5bNEuFuA26VahaYbaYtXFnKBbKrCUMzTHT2uuNesYpckUHT4f0T1fU9qNsAYBlyQBgMIu/2qdJ+Y8luMVCkydXx8ZJmSTmAp+yR+qaZDYCqujrvjdQ== root@localhost.localdomain

3, The authorized_keys entry on server2 looks like this

[root@srv-52 ~]# cat /home/syncfuser/.ssh/authorized_keys
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAz9iTxsWIYZyLtGN47MQZkSrPqXoGwATAKD/ZqIyemFRvKnlkSllkEEQ7+MlMstz6HvONfTJuJROegELqTIA7PoR43LTTKw7zfqJtt1J4fUH/6mbYlB5bedXtl/7L9auRYr276d04CFUCKfINEG4KJXYlbuSM8Mr5ZiUvLCkiu4Jx77DSy0iWaDa90C6cEbl1vRX9yl1pdWQbAMuazYLfiDPOnbqb7JtcI9du5bNEuFuA26VahaYbaYtXFnKBbKrCUMzTHT2uuNesYpckUHT4f0T1fU9qNsAYBlyQBgMIu/2qdJ+Y8luMVCkydXx8ZJmSTmAp+yR+qaZDYCqujrvjdQ== root@localhost.localdomain
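
If the password-less login still fails, the usual culprit is permissions on server2: sshd silently ignores authorized_keys when the .ssh directory or the file is writable by the group or by others. A quick fix (the paths assume the same syncfuser home directory):

chmod 700 /home/syncfuser/.ssh
chmod 600 /home/syncfuser/.ssh/authorized_keys
chown -R syncfuser: /home/syncfuser/.ssh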

Finally, the login will look like this:

 [root@srv-51 ~]# ssh syncfuser@192.168.1.52
 Last login: Wed Jun 25 17:08:25 2014 from 192.168.1.51
 [syncfuser@srv-52 ~]$

Now the root user on server1 can log in password-less as syncfuser on server2. 🙂

HowTo: Recover a RAID volume and mount it separately

My NAS storage crashed, and this time I was forced to move one of the RAID volumes to another server to bring the service back up, because the volume contains all the VMs used by the Xen server; it is an LVM disk.

Everybody knows that we can’t simply attach a RAID disk to another machine and mount it, so just follow the procedure below.

Once the HDD is attached to the other machine, check the disk availability:

root@ubuntu:~# mdadm --examine /dev/sdb
/dev/sdb:
 Magic : a92b4efc
 Version : 1.2
 Feature Map : 0x0
 Array UUID : ec2c6fb2:f211cfa5:8dfa8777:4f08bfed
 Name : openmediavault:storage
 Creation Time : Fri May 9 16:22:45 2014
 Raid Level : raid1
 Raid Devices : 2
Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
 Array Size : 976761424 (931.51 GiB 1000.20 GB)
 Used Dev Size : 1953522848 (931.51 GiB 1000.20 GB)
 Data Offset : 2048 sectors
 Super Offset : 8 sectors
 State : clean
 Device UUID : 3a9e90a0:ca0e458e:c48e1b34:f3aaf06f
Update Time : Tue Jun 24 16:20:00 2014
 Checksum : eaa54b02 - correct
 Events : 24468
 Device Role : Active device 1
 Array State : .A ('A' == active, '.' == missing)

That looks good; now move to the next step. We need to assemble the md* block device so that it exposes the LVM volume.

root@ubuntu:~# mdadm --assemble --force /dev/md127 /dev/sdb

You will get output like this:

root@ubuntu:~# ll  /dev/md127
 brw-rw---- 1 root disk 9, 127 Jun 24 14:27 /dev/md127

Now you can see the LVM volumes:

root@ubuntu:~# lvs
 LV   VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
 nfs  storage -wi-ao 931.51g
 root@ubuntu:~# pvs
 PV         VG      Fmt  Attr PSize   PFree
 /dev/md127 storage lvm2 a-   931.51g    0
 root@ubuntu:~# vgs
 VG      #PV #LV #SN Attr   VSize   VFree
 storage   1   1   0 wz--n- 931.51g    0
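
If /dev/mapper/storage-nfs does not show up yet, the volume group may simply be inactive after the move; activating it creates the device nodes (the volume group name is taken from the vgs output above):

root@ubuntu:~# vgchange -ay storage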

Mount the logical volume manually:

root@ubuntu:~# mount /dev/mapper/storage-nfs /export/
root@ubuntu:~# mount | grep nfs
 /dev/mapper/storage-nfs on /export type ext4 (rw)

That’s it; now I have my files back.

HowTo: Extend a volume in Windows

It is possible to resize the system partition with tools that are either commercially available or open source. Acronis ‘Disk Partition Manager’ is a good example of a commercial product (www.acronis.com). There is another tool that comes with a Linux ‘Live CD’ called “GParted”, which will also resize partitions without data loss. To extend a volume with the built-in diskpart utility, follow these steps.

Run –> cmd –> type diskpart.exe.

Type list volume to display the existing volumes on the computer.

Type select volume <number>, where <number> is the number of the volume that you want to extend.

Type extend [size=n] [disk=n] [noerr]. The parameters are:

size=n: the amount of space, in megabytes (MB), to add to the current partition. If you do not specify a size, the volume is extended to use all of the next contiguous unallocated space.

disk=n: the dynamic disk on which to extend the volume. Space equal to size=n is allocated on that disk. If no disk is specified, the volume is extended on the current disk.
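
A complete hypothetical diskpart session would look like this (the volume number and size are examples only):

C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend size=1024
DISKPART> exit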

HowTo: S3 bucket dynamic URI access

s3cmd is a command-line tool for uploading, retrieving, and managing data in Amazon S3. The wiki is still not up to date; you can get the packages from the official SourceForge project.

The download repository is also available here: Download Now

It also supports Unix-style dynamic resource access; for example, we can use * to match all resources, or {dir1,file2} for specific resources.

The example below shows how to set a public ACL on dynamic sub-directories.

Installation:

root@planetcure:wget http://kaz.dl.sourceforge.net/project/s3tools/s3cmd/1.0.1/s3cmd-1.0.1.tar.gz
root@planetcure:tar -zxvf s3cmd-1.0.1.tar.gz
root@planetcure:export  PATH=$PATH:/opt/installer/s3cmd-1.0.1

Now we can access the binary from any location.
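
Before running any commands against your buckets, s3cmd needs your AWS access and secret keys; a one-time interactive setup stores them in ~/.s3cfg:

root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd --configure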

root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd setacl --acl-public s3://my-bucket-name/{dev,stg1,stg2}/*/dir5/*/3/*

This command covers the following cases:

s3://my-bucket-name/  is my S3 bucket

* will represent all the subdirectories

{dev,stg1,stg2} will represent the specific directories from the group of directories

dir5/ and 3/ represent specific sub-directories

Enjoy the day, 🙂

Error: InfiniDB ‘DBRM in Read only mode’

I was using InfiniDB 2.11 Community Edition. After some use my data1 directory was growing rapidly, so I moved it to a NAS storage location, because the Community Edition does not support data compression. I realized that this would affect InfiniDB performance.

While using the NAS storage I faced many issues, such as data directory permission problems, and I found the error was “DBRM in Read only mode”. None of the solutions specified on the InfiniDB forum worked out, and I couldn’t restart the InfiniDB server for this issue because, basically, it is a busy server.

At last, after doing some research on InfiniDB, I found a solution for this error without restarting InfiniDB. Follow the steps below.

This error occurs because DBRM is unable to roll back a broken transaction.

Use the following commands to bring operation back to normal:

/usr/local/Calpont/bin/save_brm
/usr/local/Calpont/bin/dbrmctl reload
/usr/local/Calpont/bin/DMLProc

If everything is fine, the last command shows output like this:

[root@infinidb02 bin]# ./DMLProc
Locale is : C
terminate called after throwing an instance of 'std::runtime_error'
  what():  InetStreamSocket::bind: bind() error: Address already in use

Solution from Infinidb : http://infinidb.co/community/infinidb-not-starting

HowTo: Set Up Multiple SSL Certificates on One IP with Apache

As the Apache Web server grows and matures, new features are added and old bugs are fixed. Perhaps one of the most important new features added to recent Apache versions (2.2.12, to be specific) is the long-awaited support for multiple SSL sites on a single IP address.

Prerequisites:

  • The server, obviously, must use Apache 2.2.12 or higher.
  • It must also use OpenSSL 0.9.8f or later and must be built with the TLS extensions option.
  • Apache must be built against this version of OpenSSL, as SNI support is enabled when Apache detects an OpenSSL version that includes TLS extension support. (A default installation covers all of these requirements.)

Note:

SNI can only be used for serving multiple SSL sites from your web server and is not likely to work at all on other daemons, such as mail servers. There is also a small percentage of older web browsers that may still give certificate errors. Wikipedia has an updated list of software that does and does not support this TLS extension.

Here I am using a wildcard SSL certificate to host two sub-domains on a single server; similarly, we can also use different SSL certificates for different domains on the same IP.

Follow the basic installation of Apache.

Red Hat:

[root@ip-10-132-82-251 ~]# yum install httpd openssl openssl-devel mod_ssl

Ubuntu:

apt-get install apache2 openssl
a2enmod ssl

Get the certificate from a certificate authority or use a self-signed SSL certificate. Verify that the SSL module is enabled in the existing Apache installation:

[root@ip-10-132-82-251 ~]# httpd -M  |grep ssl

Add the following lines to the Apache main configuration file httpd.conf:

[root@ip-10-132-82-251 ~]#  vi /etc/httpd/conf/httpd.conf 
###FOR SSL
NameVirtualHost *:443
<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    Listen 443
</IfModule>
<IfModule mod_gnutls.c>
    Listen 443
</IfModule>

Create the Virtual Hosts

Once you have downloaded all the required SSL files, proceed to create the vhost.

Here is the vhost entry that I used:

[root@ip-10-132-82-251 ~]# vi /etc/httpd/conf.d/domain1-ssl.conf
<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName domain1.mydomain.com
        DocumentRoot "/opt/web-home/domain1/public_html"
        <Directory />
                Options FollowSymLinks
                AllowOverride all
        </Directory>
        <Directory /opt/web-home/domain1/public_html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /opt/web-home/domain1/public_html/cgi-bin/
        <Directory "/opt/web-home/domain1/public_html/cgi-bin/">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>
ErrorLog logs/ssl_error_log
TransferLog logs/ssl_access_log
LogLevel warn
SSLEngine on
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
SSLCertificateFile /etc/ssl/certs/planetcure.in.crt
SSLCertificateKeyFile /etc/ssl/certs/planetcure.in.key
SSLCertificateChainFile /etc/ssl/certs/planetcure.in.csr
SSLCACertificateFile /etc/ssl/certs/planetcure.in.ca
SetEnvIf User-Agent ".*MSIE.*" \
         nokeepalive ssl-unclean-shutdown \
         downgrade-1.0 force-response-1.0
CustomLog logs/ssl_request_log \
          "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

</VirtualHost>
</IfModule>
SSLPassPhraseDialog  builtin
SSLSessionCache         shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout  300
SSLMutex default
SSLRandomSeed startup file:/dev/urandom  256
SSLRandomSeed connect builtin
SSLCryptoDevice builtin

You can also create more vhost files based on this entry by changing the domain name and the SSL paths.

Now restart Apache:

[root@ip-10-132-82-251 ~]# service httpd restart

To verify the list of enabled vhosts, use the command below:

[root@ip-10-132-82-251 ~]# apachectl -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:443                  is a NameVirtualHost
         default server domain1.planetcure.in (/etc/httpd/conf.d/domain1-ssl.conf:2)
         port 443 namevhost domain1.planetcure.in (/etc/httpd/conf.d/domain1-ssl.conf:2)
         port 443 namevhost domain2.planetcure.in (/etc/httpd/conf.d/domain2-ssl.conf:2)
Syntax OK
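
You can also confirm from a client that SNI hands out the right certificate for each name (hostnames as in the vhosts above):

echo | openssl s_client -connect domain1.planetcure.in:443 -servername domain1.planetcure.in 2>/dev/null | openssl x509 -noout -subject
echo | openssl s_client -connect domain2.planetcure.in:443 -servername domain2.planetcure.in 2>/dev/null | openssl x509 -noout -subject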

Phew, these domains each have their own SSL certificate on a single IP 🙂