Hackz

Howto: Allowing SFTP access while chrooting the user and denying shell access.


By default, SFTP gives a system user access to their home directory for uploading and downloading files, but the same user can also navigate anywhere else on the server and, in some cases, download files they should not see, which is a security risk.

With a chrooted SFTP setup, users are confined to their home directory and denied access to the rest of the system, so they cannot snoop around /etc or the application directories. Shell login for these accounts is also denied.

The procedure below is what I used to enable this SFTP restriction:

1. Add a new group.

2. Create a chroot directory to hold the logins; it must be owned by root.

3. Switch the sftp subsystem to internal-sftp and force the chroot directory.

4. Reload the configuration.

Steps :

Create the chroot launch directory, with no privileges for others:

mkdir /opt/chroot
chown root:root /opt/chroot
chmod 700 /opt/chroot

Create a common group for the chrooted users; the SSH rule will match on this group:

groupadd sftpgroup
useradd -g sftpgroup -s /sbin/nologin  -d /opt/chroot/planetuser planetuser
passwd planetuser
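
On CentOS, useradd normally creates the home directory owned by the new user, but sshd's ChrootDirectory requires the chroot target (here the user's home, via %h) to be owned by root and not writable by anyone else, and the user then needs a writable sub-directory for uploads. A hedged sketch (the uploads name is only an example):

# ChrootDirectory target must be root-owned and not group/other writable
mkdir -p /opt/chroot/planetuser
chown root:root /opt/chroot/planetuser
chmod 755 /opt/chroot/planetuser
# writable area for the user inside the chroot
mkdir -p /opt/chroot/planetuser/uploads
chown planetuser:sftpgroup /opt/chroot/planetuser/uploads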

Modify ssh configuration

vi /etc/ssh/sshd_config

Comment out the default sftp subsystem and add the new rule:

#Subsystem sftp /usr/lib/openssh/sftp-server

#Add the line 
Subsystem sftp internal-sftp

# Rules for sftp group
Match group sftpgroup
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp

Then restart SSH service

service sshd restart
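
A quick way to verify the setup (a sketch; sftp should drop the user into the chrooted home, while shell access is refused by the nologin shell):

sftp planetuser@localhost
# sftp> pwd
# Remote working directory: /
ssh planetuser@localhost
# This account is currently not available.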

HowTo: Enable URL rewrite for Tomcat or other servlet containers


UrlRewriteFilter provides URL rewriting very similar to Apache's mod_rewrite, and similar rules can be used. Ensure that the 'UrlRewriteFilter' JAR file is on your web application's classpath. Placing the JAR in your webapp under '/WEB-INF/lib' will do the trick, and if you have spent any time working with webapps you probably already have a preferred way of doing this. Alternatively, you may want to install the JAR in your servlet container's '/lib' folder, particularly if you are deploying multiple webapps on the same server and want 'UrlRewriteFilter' available to all of them automatically.

Download JAR from here

Read more Examples

Once you have the 'UrlRewriteFilter' JAR on your webapp's classpath, the real setup can begin. Open your application's 'web.xml' file and add the following filter configuration to your webapp:

<filter>
    <filter-name>UrlRewriteFilter</filter-name>
    <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
    <init-param>
        <param-name>logLevel</param-name>
        <param-value>WARN</param-value>
    </init-param>
    <init-param>
        <param-name>confPath</param-name>
        <param-value>/WEB-INF/urlrewrite.xml</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>UrlRewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

This makes the servlet container pass traffic through UrlRewriteFilter. Note that although it is not discussed on the official site, the 'logLevel' parameter was essential in my setup for the filter to be applied to the traffic.

Once the tags are added to web.xml, create urlrewrite.xml in the same directory as web.xml and configure the example rewrite rules:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 3.2//EN"
    "http://tuckey.org/res/dtds/urlrewrite3.2.dtd">
<urlrewrite>
    <rule>
        <name>Domain Name Check</name>
        <condition name="host" operator="notequal">www.server.com</condition>
        <from>^(.*)$</from>
        <to type="redirect">http://www.server.com/$1</to>
    </rule>
    <rule>
        <from>/test</from>
        <to type="redirect">%{context-path}/examples</to>
    </rule>
</urlrewrite>

The first rule redirects any request that reaches the application via an IP address or an alternative alias domain configured on the server to www.server.com. It can also be used to rewrite URLs so that they include the www. prefix.

The second rule redirects the invalid application path "test" to the examples application.

It looks like this: http://test.com/test --> http://www.server.com/examples/. Both test.com and server.com point to the same server and the same webapps.
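
A quick way to exercise both rules from the command line (a sketch using the article's example hostnames; curl -I only shows the response headers):

curl -I http://test.com/test
# expect a Location header pointing at www.server.com (the Domain Name Check rule)
curl -I http://www.server.com/test
# expect a Location header pointing at the examples context (the second rule)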

 

 

Error: postfix: warning: SASL authentication failure: No worthy mechs found


After configuring a Postfix relay server I found there was an issue authenticating against the Gmail SMTP server, and it bounced the mails with the errors below.

Error : 
 postfix/smtp[25857]: 59BF721177: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.25.108]: no mechanism available
 postfix/smtp[25861]: warning: SASL authentication failure: No worthy mechs found

There are two likely reasons behind this:
1. The SASL package providing the plain mechanism is missing:

yum install cyrus-sasl{,-plain}

2. Plaintext authentication is not allowed. Plaintext is fine when using STARTTLS, since the connection is encrypted:

smtp_sasl_security_options = noanonymous

Make sure all of the options below are enabled:

smtp_sasl_auth_enable = yes
smtp_use_tls = yes
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtp_sasl_mechanism_filter = login
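
Once main.cf is updated, a quick way to check that the relay now authenticates (a sketch; mail assumes the mailx package is installed, and the recipient address is only an example):

postfix reload
echo "relay test" | mail -s "relay test" someone@example.com
tail -f /var/log/maillog
# look for "status=sent" instead of the "No worthy mechs found" warning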

 

HowTo: Two way file sync between three or more servers


I was working on a project with multiple servers, let's say more than two; all of the webservers sit behind a load balancer and there is no centralized storage. So one option is to sync the files between the servers.

The first priority is to avoid unnecessary processing and to sync a file only when an update actually happens; that change then has to reach all the servers.

I used the combination of Lsyncd + Unison.

Lsyncd

Source : https://code.google.com/p/lsyncd/

Lsyncd watches a local directory tree through an event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one or more processes to synchronize the changes; by default this is rsync. Lsyncd is thus a lightweight live mirror solution that is comparatively easy to install, does not require new filesystems or block devices, and does not hamper local filesystem performance.

Unison 

Source : http://olex.openlogic.com/packages/unison

Unison is a file-synchronization tool for Unix and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other. License GPLv3

Follow the implementation steps below on all of the servers, because each server has to watch for updates itself.

Server1 : 192.168.1.51

Server2: 192.168.1.52

Server3: 192.168.1.53

The shared folder is the same on all servers: /home/syncfuser/fileupload

SSH port : 10022

Web root writable user : syncfuser

Installation

Add the additional CentOS repository; here I used CentOS 6.4 64-bit. You may get a different version from the link below.

http://wiki.centos.org/AdditionalResources/Repositories/RPMForge

[root@srv-51 ~]# rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
 Retrieving http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
 warning: /var/tmp/rpm-tmp.iORM9p: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
 Preparing... ########################################### [100%]
 1:rpmforge-release ########################################### [100%]
[root@srv-51 ~]# rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

Installing Lsyncd and Unison

[root@srv-51 ~]# yum install pkgconfig lua.x86_64 lua-devel.x86_64 lua-static.x86_64 gcc unison -y
[root@srv-51 ~]# wget http://lsyncd.googlecode.com/files/lsyncd-2.0.6.tar.gz
[root@srv-51 ~]# tar -zxvf lsyncd-2.0.6.tar.gz
[root@srv-51 ~]# cd lsyncd-2.0.6 ; make ; make install
[root@srv-51 ~]# mkdir -p /var/log/lsyncd
[root@srv-51 ~]# touch /var/log/lsyncd/{lsyncd,lsyncd-status}.log
[root@srv-51 ~]# vi /etc/init.d/lsyncd

#!/bin/bash
#
# chkconfig: - 85 15
# description: Lightweight inotify based sync daemon
#
# processname:  lsyncd
# config:       /etc/lsyncd.conf
# config:       /etc/sysconfig/lsyncd
# pidfile:      /var/run/lsyncd.pid

# Source function library
. /etc/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

LSYNCD_OPTIONS="-pidfile /var/run/lsyncd.pid /etc/lsyncd.conf"

if [ -e /etc/sysconfig/lsyncd ]; then
  . /etc/sysconfig/lsyncd
fi

RETVAL=0
prog="lsyncd"
thelock=/var/lock/subsys/lsyncd

start() {
  [ -f /etc/lsyncd.conf ] || exit 6
  echo -n $"Starting $prog: "
  if [ $UID -ne 0 ]; then
    RETVAL=1
    failure
  else
    daemon /usr/local/bin/lsyncd $LSYNCD_OPTIONS
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $thelock
  fi
  echo
  return $RETVAL
}

stop() {
  echo -n $"Stopping $prog: "
  if [ $UID -ne 0 ]; then
    RETVAL=1
    failure
  else
    killproc lsyncd
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f $thelock
  fi
  echo
  return $RETVAL
}

reload() {
  echo -n $"Reloading $prog: "
  killproc lsyncd -HUP
  RETVAL=$?
  echo
  return $RETVAL
}

restart() {
  stop
  start
}

condrestart() {
  [ -e $thelock ] && restart
  return 0
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  reload)
    reload
    ;;
  condrestart)
    condrestart
    ;;
  status)
    status lsyncd
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart|condrestart|reload}"
    RETVAL=1
esac

exit $RETVAL
chmod +x /etc/init.d/lsyncd

Before configuring lsyncd, make sure passwordless SSH login is set up between all the servers.
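
A minimal sketch of setting that up from server1 (the unison commands in the config below run as root, so root's key must be accepted by the syncfuser account on the other servers; the port and IPs are the article's examples):

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh -p 10022 syncfuser@192.168.1.52 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
cat ~/.ssh/id_rsa.pub | ssh -p 10022 syncfuser@192.168.1.53 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'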

Create a new conf file /etc/lsyncd.conf

[root@srv-51 ~]# vi /etc/lsyncd.conf
settings = {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    maxDelays  = 3
}

runUnison2 = {
    maxProcesses = 1,
    delay = 3,
    onAttrib = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.52:10022//home/syncfuser/fileupload",
    onCreate = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.52:10022//home/syncfuser/fileupload",
    onDelete = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.52:10022//home/syncfuser/fileupload",
    onModify = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.52:10022//home/syncfuser/fileupload",
    onMove   = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.52:10022//home/syncfuser/fileupload",
}

runUnison3 = {
    maxProcesses = 1,
    delay = 3,
    onAttrib = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload",
    onCreate = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload",
    onDelete = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload",
    onModify = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload",
    onMove   = "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload",
}

sync{ runUnison2, source = "/home/syncfuser/fileupload" }
sync{ runUnison3, source = "/home/syncfuser/fileupload" }
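
With /etc/lsyncd.conf in place on every server, the daemon can be started with the init script created above (a sketch; chkconfig only registers it to start at boot):

chkconfig --add lsyncd
chkconfig lsyncd on
service lsyncd start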

 

Use -confirmbigdel=false only if you have a clear idea of what you are doing: it allows deletions to be propagated even when they would leave the directory empty.

Without -confirmbigdel=false, syncing stops (lsyncd effectively stalls) in the scenario where a replica is about to become empty. That behaviour protects the files from an accidental rm -rf *, but on a server where file removal is handled by the application you don't need that safety net.

Sample logs for the file syncs:

[root@srv-51 fileupload]# cat > samplefile-sv1.txt
This is the sample file from server1.
^C
tail -f /var/log/lsyncd/lsyncd.log
Fri Jun 27 18:41:44 2014 Normal: Event Delete spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload"
Fri Jun 27 18:41:44 2014 Normal: Event Delete spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.51:10022//home/syncfuser/fileupload"
Contacting server...
Contacting server...
Connected [//srv-51//home/syncfuser/fileupload -> //srv-52//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Reconciling changes
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Nothing to do: replicas have been changed only in identical ways since last sync.
Fri Jun 27 18:41:44 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//.unison.samplefile-sv1.txt.f629b5a5e1d6f2942bd1ec2ad54122b6.unison.tmp = 0
Fri Jun 27 18:41:44 2014 Normal: Event Create spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.51:10022//home/syncfuser/fileupload"
Reconciling changes
Nothing to do: replicas have been changed only in identical ways since last sync.
Fri Jun 27 18:41:44 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//.unison.samplefile-sv1.txt.f629b5a5e1d6f2942bd1ec2ad54122b6.unison.tmp = 0
Fri Jun 27 18:41:44 2014 Normal: Event Create spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload"
Contacting server...
Contacting server...
Connected [//srv-51//home/syncfuser/fileupload -> //srv-52//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Reconciling changes
Nothing to do: replicas have not changed since last sync.
Fri Jun 27 18:41:44 2014 Normal: Retrying Create on /home/syncfuser/fileupload//samplefile-sv1.txt = 0
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Reconciling changes
Nothing to do: replicas have not changed since last sync.
Fri Jun 27 18:41:44 2014 Normal: Retrying Create on /home/syncfuser/fileupload//samplefile-sv1.txt = 0
Fri Jun 27 18:41:54 2014 Normal: Event Delete spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.51:10022//home/syncfuser/fileupload"
Fri Jun 27 18:41:54 2014 Normal: Event Delete spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload"
Contacting server...
Contacting server...
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Connected [//srv-51//home/syncfuser/fileupload -> //srv-52//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Looking for changes
  Waiting for changes from server
Reconciling changes
Reconciling changes
Nothing to do: replicas have been changed only in identical ways since last sync.
Fri Jun 27 18:41:54 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//.unison.samplefile-sv1.txt.f629b5a5e1d6f2942bd1ec2ad54122b6.unison.tmp = 0
Fri Jun 27 18:41:54 2014 Normal: Event Create spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.51:10022//home/syncfuser/fileupload"
Nothing to do: replicas have been changed only in identical ways since last sync.
Fri Jun 27 18:41:54 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//.unison.samplefile-sv1.txt.f629b5a5e1d6f2942bd1ec2ad54122b6.unison.tmp = 0
Fri Jun 27 18:41:54 2014 Normal: Event Create spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false  /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload"
Contacting server...
Contacting server...
Connected [//srv-51//home/syncfuser/fileupload -> //srv-52//home/syncfuser/fileupload]
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Looking for changes
  Waiting for changes from server
Looking for changes
Reconciling changes
Nothing to do: replicas have not changed since last sync.
Fri Jun 27 18:41:54 2014 Normal: Retrying Create on /home/syncfuser/fileupload//samplefile-sv1.txt = 0
  Waiting for changes from server
Reconciling changes
Nothing to do: replicas have not changed since last sync.
Fri Jun 27 18:41:54 2014 Normal: Retrying Create on /home/syncfuser/fileupload//samplefile-sv1.txt = 0

[root@srv-52 fileupload]# cat >> samplefile-sv1.txt
File edited from server2.
^C
Log:
Fri Jun 27 18:43:54 2014 Normal: Event Modify spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.53:10022//home/syncfuser/fileupload"
Fri Jun 27 18:43:54 2014 Normal: Event Modify spawns shell "export HOME=/root ; /usr/bin/unison -batch -confirmbigdel=false /home/syncfuser/fileupload ssh://syncfuser@192.168.1.51:10022//home/syncfuser/fileupload"
Contacting server...
Contacting server...
Connected [//srv-51//home/syncfuser/fileupload -> //srv-52//home/syncfuser/fileupload]
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Looking for changes
Looking for changes
 Waiting for changes from server
 Waiting for changes from server
Reconciling changes
changed ----> samplefile-sv1.txt 
local : changed file modified on 2014-06-27 at 18:43:50 size 65 rw-r--r--
srv-53 : unchanged file modified on 2014-06-27 at 18:41:51 size 38 rw-r--r--
Propagating updates
UNISON 2.40.63 started propagating changes at 18:43:54.25 on 27 Jun 2014
[BGN] Updating file samplefile-sv1.txt from /home/syncfuser/fileupload to //srv-53//home/syncfuser/fileupload
Reconciling changes
changed ----> samplefile-sv1.txt 
local : changed file modified on 2014-06-27 at 18:43:50 size 65 rw-r--r--
srv-51 : unchanged file modified on 2014-06-27 at 18:41:51 size 38 rw-r--r--
Propagating updates
UNISON 2.40.63 started propagating changes at 18:43:54.25 on 27 Jun 2014
[BGN] Updating file samplefile-sv1.txt from /home/syncfuser/fileupload to //srv-51//home/syncfuser/fileupload
[END] Updating file samplefile-sv1.txt
UNISON 2.40.63 finished propagating changes at 18:43:54.25 on 27 Jun 2014
Saving synchronizer state
[END] Updating file samplefile-sv1.txt
UNISON 2.40.63 finished propagating changes at 18:43:54.25 on 27 Jun 2014
Saving synchronizer state
Synchronization complete at 18:43:54 (1 item transferred, 0 skipped, 0 failed)
Fri Jun 27 18:43:54 2014 Normal: Retrying Modify on /home/syncfuser/fileupload//samplefile-sv1.txt = 0
Synchronization complete at 18:43:54 (1 item transferred, 0 skipped, 0 failed)
Fri Jun 27 18:43:54 2014 Normal: Retrying Modify on /home/syncfuser/fileupload//samplefile-sv1.txt = 0

[root@srv-53 fileupload]# rm -rf samplefile-sv1.txt
[root@srv-53 fileupload]# ll
total 0

Log :
Connected [//srv-51//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Connected [//srv-52//home/syncfuser/fileupload -> //srv-53//home/syncfuser/fileupload]
Looking for changes
 Waiting for changes from server
Looking for changes
Reconciling changes
deleted ----> samplefile-sv1.txt 
local : deleted
srv-51 : unchanged file modified on 2014-06-27 at 18:53:14 size 0 rw-r--r--
Propagating updates
UNISON 2.40.63 started propagating changes at 18:53:31.85 on 27 Jun 2014
[BGN] Deleting samplefile-sv1.txt from //srv-51//home/syncfuser/fileupload
 Waiting for changes from server
[END] Deleting samplefile-sv1.txt
UNISON 2.40.63 finished propagating changes at 18:53:31.85 on 27 Jun 2014
Saving synchronizer state
Reconciling changes
deleted ----> samplefile-sv1.txt 
local : deleted
srv-52 : unchanged file modified on 2014-06-27 at 18:53:14 size 0 rw-r--r--
Propagating updates
UNISON 2.40.63 started propagating changes at 18:53:31.85 on 27 Jun 2014
[BGN] Deleting samplefile-sv1.txt from //srv-52//home/syncfuser/fileupload
[END] Deleting samplefile-sv1.txt
UNISON 2.40.63 finished propagating changes at 18:53:31.85 on 27 Jun 2014
Saving synchronizer state
Synchronization complete at 18:53:31 (1 item transferred, 0 skipped, 0 failed)
Fri Jun 27 18:53:31 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//samplefile-sv1.txt = 0
Synchronization complete at 18:53:31 (1 item transferred, 0 skipped, 0 failed)
Fri Jun 27 18:53:31 2014 Normal: Retrying Delete on /home/syncfuser/fileupload//samplefile-sv1.txt = 0

 

Now you have the same file on all the servers.

 

 

 

 

HowTo: Tomcat Logging – log customized with {X-Forwarded-For}


Tomcat lets us trace requests back with an enormous amount of information by customizing the access log pattern. Preset patterns are available, and a custom pattern can be defined in a single line.

I enabled a few extra fields such as execution time, request size, cookies, etc.

The default Valve entry looks like this:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"  
               prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>

Common : %{X-Forwarded-For}i %l %u %t "%r" %s %b
Combined : %{X-Forwarded-For}i %l %u %t %r %s %b %{User-Agent}i %{Referer}i %{Cookie}i

You can start from either the Common or the Combined pattern and adjust it.

I implemented my own pattern, shown below, so the log is more detailed:

pattern="%h %{X-Forwarded-For}i %l %u %t  &quot;%r&quot; %s %b  &quot;%{User-Agent}i&quot; &quot;%{Referer}i&quot; &quot;%{Cookie}i&quot; %T"

The access log with the new pattern looks like this:

-----------------------------
192.168.1.185 - - - [18/Mar/2014:10:52:06 +0530]  "GET /ajax/norm/list/status?ids=23%2C11%2C9%2C7%2C6%2C5%2C2%2C1%2C HTTP/1.1" 200 42  "Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0" "http://192.168.1.188/norm/list" "JSESSIONID=4FD1DBEB911CD2E19AA4798F9A26DCA8" 0.007
-----------------------------
Log Details:
192.168.1.185 : Remote host name (or IP address if resolveHosts is false)
- : X-Forwarded-For
- : Remote logical username
- : Remote user that was authenticated
[18/Mar/2014:10:52:06 +0530]  : Date and time, in Common Log Format
GET /ajax/norm/list/…… : First line of the request (method and request URI)
HTTP/1.1 : Request protocol
200 : HTTP status code of the response
42 : Bytes sent, excluding HTTP headers (Content size)
Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0: User Agent
http://192.168.1.188/norm/list : Referer
JSESSIONID=4FD1DBEB911CD2E19AA4798F9A26DCA8 : Cookie header
0.007 : Time taken to process the request, in seconds

Once everything is in place, restart Tomcat for the change to take effect. More pattern options are listed below, and a quick verification example follows the list.

%a – Remote IP address
%A – Local IP address
%b – Bytes sent, excluding HTTP headers, or ‘-‘ if zero
%B – Bytes sent, excluding HTTP headers
%h – Remote host name (or IP address if resolveHosts is false)
%H – Request protocol
%l – Remote logical username from identd (always returns ‘-‘)
%m – Request method (GET, POST, etc.)
%p – Local port on which this request was received
%q – Query string (prepended with a ‘?’ if it exists)
%r – First line of the request (method and request URI)
%s – HTTP status code of the response
%S – User session ID
%t – Date and time, in Common Log Format
%u – Remote user that was authenticated (if any), else ‘-‘
%U – Requested URL path
%v – Local server name
%D – Time taken to process the request, in millis
%T – Time taken to process the request, in seconds
%I – current request thread name (can compare later with stacktraces)
%f – X-Forwarded-For IP address
%F – X-Forwarded-For address
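
After the restart, a quick way to confirm the header really lands in the access log (a sketch; the spoofed address, the Tomcat path and the default date-stamped log name are assumptions):

curl -H "X-Forwarded-For: 203.0.113.7" http://localhost:8080/
tail -n 1 $CATALINA_HOME/logs/localhost_access_log.$(date +%Y-%m-%d).txt
# the last line should contain 203.0.113.7 in the %{X-Forwarded-For}i field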

HowTo: Change Instance store AMI to EBS-backend AMI


Amazon does not provide a feature for changing an AMI's root device type. Once an instance has been launched from an instance-store AMI we cannot upgrade it, because upgrading requires stopping the instance and the stop option is disabled for instance-store instances. I followed the steps below; the copy can be done in two ways, either with rsync or with dd.

Here is the steps:

  • Create an EBS volume of the same size or larger; I used 10G because my existing instance has 10G on its root device.

[Screenshot: EBS_fresh]

After it is created, it looks like this:

[Screenshot: EBS_new]

  • Attach the EBS volume to the existing instance-store-backed instance.

Right-click the volume and select Attach Volume.

[Screenshot: EBS_attach]

  • Log in to the instance-store-backed server and stop all the running services (optional), e.g. mysqld, httpd, xinetd.

Execute the disk mirroring commands below; it will take a few minutes to complete depending on the server performance.

[root@ip-10-128-5-222 ~]# dd bs=65536 if=/dev/sda1 of=/dev/sdf

or

mkfs.ext3 /dev/sdf                              #create filesystem
mkdir /mnt/ebs                                  #new dir for mounting
mount /dev/sdf /mnt/ebs                         #mount as a partition
rsync -avHx / /mnt/ebs                          #synchronize root onto the ebs vol
rsync -avHx /dev /mnt/ebs                       #synchronize device information
tune2fs -L '/' /dev/sdf                         #create the partition label for the ebs vol
sync;sync;sync;sync && umount /mnt/ebs          #sync and unmount the ebs vol

Check the EBS volume for consistency

[root@ip-10-128-5-222 ~]# fsck /dev/sdf
 fsck 1.39 (29-May-2006)
 e2fsck 1.39 (29-May-2006)
 /dev/sdf: clean, 126372/1310720 files, 721346/2621440 blocks

Mount the EBS volume on the instance and remove the /mnt entry from the fstab on your EBS volume:

[root@ip-10-128-5-222 ~]# mount /dev/sdf /mnt/ebs-vol
[root@ip-10-128-5-222 ~]# vim /mnt/ebs-vol/etc/fstab
  • Create a snapshot of the EBS volume using the AWS management console

Right-click the EBS volume -> select Create Snapshot; it will take a few minutes to complete.

[Screenshot: EBS_snapshot]

After it is created, the snapshot appears in the snapshot list.

[Screenshot: EBS_snapshotpng]

Now right-click the snapshot -> select Create Image from snapshot.

[Screenshot: EBS_create_image]

  • Launch a new EC2 instance using the newly created AMI. While creating the new instance you can select any instance type, and you may reuse the same keypair and Elastic IP.

Creating New instance using new AMI.

[Screenshot: NEW_EC2]

Running instance

[Screenshot: EC2_newpng]

  • Now you can log in to the new server. If the volume you created is larger than the snapshot, run the command below to grow the filesystem to the full volume size (see the quick check after this list).
#resize2fs /dev/sda1
  • The server has now been migrated to an EBS-backed AMI. Start the services again if necessary; this time the instance type can be upgraded.
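
After resize2fs, a quick check that the root filesystem now spans the whole volume (a sketch; the output is what I would expect, not captured from the original run):

df -h /
# the Size column should now show the full size of the new EBS volume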

HowTo: S3 bucket dynamic URI access


s3cmd is a command line tool for uploading, retrieving and managing data in Amazon S3. The wiki is still not fully updated.
You may get the packages from the official SourceForge project.

Also the download repository is available here : Download Now

It also supports Unix-style dynamic resource matching; for example we can use * to match all resources, or {dir1,file2} to match specific ones.

The example below sets a public ACL on dynamic sub-directories.

Installation:

root@planetcure:wget http://kaz.dl.sourceforge.net/project/s3tools/s3cmd/1.0.1/s3cmd-1.0.1.tar.gz
root@planetcure:tar -zxvf s3cmd-1.0.1.tar.gz
root@planetcure:export  PATH=$PATH:/opt/installer/s3cmd-1.0.1

Now the binary can be run from any location.

root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd setacl --acl-public s3://my-bucket-name/{dev,stg1,stg2}/*/dir5/*/3/*

This command covers the following cases:

s3://my-bucket-name/  is my S3 bucket

* will represent all the subdirectories

{dev,stg1,stg2} will represent the specific directories from the group of directories

dir5/ and 3/ match specific sub-directories
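
To confirm the ACL actually changed, s3cmd's info command shows the grants on an object (a sketch; the object path is just one example of what the wildcard pattern above matches):

s3cmd info s3://my-bucket-name/dev/somedir/dir5/sub/3/somefile
# the ACL lines should now show anonymous READ access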

Enjoy the day, 🙂