HowTo: Generate Certificates for OpenLDAP and Use Them for Certificate Authentication
LDAPS Server Certificate Requirements
LDAPS requires a properly formatted X.509 certificate. This certificate lets an OpenLDAP service listen for and accept SSL connections. The server certificate authenticates the OpenLDAP server to the client during LDAPS setup and enables the SSL tunnel between the client and the server. As an option, we can also use LDAPS for client authentication.
Having spent quite some time getting TLS to work, I thought this might be useful to some:
Creating a self-signed CA certificate:
1, Create the ldapclient-key.pem private key :
openssl genrsa -des3 -out ldapclient-key.pem 1024
2, Create the ldapserver-cacerts.pem certificate :
openssl req -new -key ldapclient-key.pem -x509 -days 1095 -out ldapserver-cacerts.pem
Creating a certificate for server:
1, Create the ldapserver-key.pem private key
openssl genrsa -out ldapserver-key.pem
2, Create a server.csr certificate request:
openssl req -new -key ldapserver-key.pem -out server.csr
3, Create the ldapserver-cert.pem certificate signed by your own CA :
openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem
4, Create a copy of the CA certificate for the client:
cp -rpf ldapserver-cacerts.pem ldapclient-cacerts.pem
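The whole sequence above can also be run non-interactively, which is handy in scripts. The sketch below uses -subj to skip the prompts (the CN values are placeholders, not from the post), drops -des3 so the CA key has no passphrase, and uses 2048-bit keys instead of 1024 since 1024-bit RSA is now considered weak; the final openssl verify checks the resulting chain:

```shell
# 1. CA private key (no -des3, so no passphrase prompt)
openssl genrsa -out ldapclient-key.pem 2048

# 2. Self-signed CA certificate (placeholder subject)
openssl req -new -key ldapclient-key.pem -x509 -days 1095 \
  -subj "/CN=Example-LDAP-CA" -out ldapserver-cacerts.pem

# 3. Server key and certificate request (placeholder hostname)
openssl genrsa -out ldapserver-key.pem 2048
openssl req -new -key ldapserver-key.pem \
  -subj "/CN=ldap.example.com" -out server.csr

# 4. Server certificate signed by our own CA
openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem \
  -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem

# 5. Copy the CA certificate for clients and verify the chain
cp -p ldapserver-cacerts.pem ldapclient-cacerts.pem
openssl verify -CAfile ldapserver-cacerts.pem ldapserver-cert.pem
```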
Now configure the certificates in slapd.conf; the correct files must be copied to each server:
TLSCACertificateFile /etc/openldap/certs/ldapserver-cacerts.pem
TLSCertificateFile /etc/openldap/certs/ldapserver-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/ldapserver-key.pem
TLSCipherSuite HIGH:MEDIUM:+SSLv2
# Personally, I only check servers from the client.
# If you do too, add this:
TLSVerifyClient never
Configure the certificates for LDAP clients
Key: ldapclient-key.pem
Crt: ldapclient-cert.pem
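On the client side these files are typically referenced from the OpenLDAP client configuration. A sketch, assuming the files are copied to the same certs directory (note: per the ldap.conf man page, TLS_CERT and TLS_KEY are honored only in per-user configuration such as ~/.ldaprc, while TLS_CACERT can go in the system-wide file):

```
# /etc/openldap/ldap.conf (or ~/.ldaprc for the cert/key lines)
TLS_CACERT /etc/openldap/certs/ldapclient-cacerts.pem
TLS_CERT   /etc/openldap/certs/ldapclient-cert.pem
TLS_KEY    /etc/openldap/certs/ldapclient-key.pem
```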
HowTo: Extend a Volume in Windows
It is possible to resize the system partition with tools that are either commercially available or open source. Acronis 'Disk Partition Manager' is a good example of a commercial product (www.acronis.com). There is another tool on the Linux 'Live CD' called "GParted", which will also resize partitions without data loss. To extend a volume with the built-in diskpart utility, follow these steps:
Run -> cmd -> type diskpart.exe.
Type list volume to display the existing volumes on the computer.
Type select volume <number>, where <number> is the number of the volume that you want to extend.
Type extend [size=n] [disk=n] [noerr]. The parameters are:
size=n: The space, in megabytes (MB), to add to the current partition. If you do not specify a size, the volume is extended to use all of the next contiguous unallocated space.
disk=n: The dynamic disk on which to extend the volume. Space equal to size=n is allocated on that disk. If no disk is specified, the volume is extended on the current disk.
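Putting the steps together, a session that extends volume 1 by 1024 MB might look like this (the volume number and size are illustrative, not from the post):

```
C:\> diskpart.exe
DISKPART> list volume
DISKPART> select volume 1
DISKPART> extend size=1024
```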
HowTo: Change an Instance-store AMI to an EBS-backed AMI
Amazon does not provide any feature for changing the AMI root device type. Once we generate an instance from an instance-store AMI we can't upgrade the instance, because upgrading requires the instance to stop, and the stop option is disabled for instance-store AMIs. I followed the steps below; it can be worked out in two ways, using either rsync or dd.
Here are the steps:
- Create an EBS volume of the same size or larger; I used 10G because my existing instance has 10G on its root device.
- Attach the EBS volume to the existing instance-store-backed instance:
Right-click the volume and select Attach Volume.
- Log in to the instance-store-backed server and stop all running services (optional), e.g. mysqld, httpd, xinetd.
Execute the disk mirroring commands below; it will take a few minutes to complete depending on server performance.
[root@ip-10-128-5-222 ~]# dd bs=65536 if=/dev/sda1 of=/dev/sdf
or
mkfs.ext3 /dev/sdf       # create filesystem
mkdir /mnt/ebs           # new dir for mounting
mount /dev/sdf /mnt/ebs  # mount as a partition
rsync -avHx / /mnt/ebs          # synchronize root to the EBS volume
rsync -avHx /dev /mnt/ebs       # synchronize device information
tune2fs -L '/' /dev/sdf         # create a partition label for the EBS volume
sync;sync;sync;sync && umount /mnt/ebs   # sync and unmount the EBS volume
Check the EBS volume for consistency
[root@ip-10-128-5-222 ~]# fsck /dev/sdf
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/sdf: clean, 126372/1310720 files, 721346/2621440 blocks
Mount the EBS volume in the instance, then remove the /mnt entry from the fstab on your EBS volume
[root@ip-10-128-5-222 ~]# mount /dev/sdf /mnt/ebs-vol
[root@ip-10-128-5-222 ~]# vim /mnt/ebs-vol/etc/fstab
- Create a snapshot of the EBS volume using the AWS management console
Right-click the EBS volume -> select Create Snapshot; it will take a few minutes to create.
After creation, the snapshot will be listed under the snapshot list.
Now right-click the snapshot -> select Create Image from Snapshot.
- Launch a new EC2 instance using the newly created AMI. While creating the new instance you can select any instance type, and you may use the same keypair and Elastic IP for the new instance.
- Now you can log in to the new server. If you selected a size larger than the snapshot, use the command below to reclaim the extra storage:
#resize2fs /dev/sda1
- The server has now been successfully migrated to EBS-backed storage. Start all the services if necessary. This time we can upgrade the instance type.
HowTo: S3 bucket dynamic URI access
s3cmd is a command line tool for uploading, retrieving and managing data in Amazon S3. The official wiki is still not updated for this usage.
You may get the package from the official SourceForge project.
It also supports Unix-style dynamic resource access: for example, we can use * to match all resources, or {dir1,file2} to pick specific resources.
The example below shows how to set a public ACL on dynamic sub-directories.
Installation:
root@planetcure: wget http://kaz.dl.sourceforge.net/project/s3tools/s3cmd/1.0.1/s3cmd-1.0.1.tar.gz
root@planetcure: tar -zxvf s3cmd-1.0.1.tar.gz
root@planetcure: export PATH=$PATH:/opt/installer/s3cmd-1.0.1
Now we can access the binary from any location.
root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd setacl --acl-public s3://my-bucket-name/{dev,stg1,stg2}/*/dir5/*/3/*
This command covers the following cases:
s3://my-bucket-name/ is my S3 bucket
* represents all the subdirectories
{dev,stg1,stg2} represents the specified directories out of a group of directories
dir5/ and 3/ represent specific sub-directories
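One detail worth knowing: the braces are expanded by the shell (bash/zsh) before s3cmd ever sees them, so one argument becomes three. A quick local demo of just the brace part:

```shell
# The shell turns one brace pattern into three separate URIs
# before the command (here echo, standing in for s3cmd) runs.
# In bash this prints three URIs, one per prefix, on one line.
echo s3://my-bucket-name/{dev,stg1,stg2}/dir5/3/
```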
Enjoy the day, 🙂
HowTo: Set Up Multiple SSL Certificates on One IP with Apache
As the Apache Web server grows and matures, new features are added and old bugs are fixed. Perhaps one of the most important new features added to recent Apache versions (2.2.12, to be specific) is the long-awaited support for multiple SSL sites on a single IP address.
Prerequisites:
- The server, obviously, must use Apache 2.2.12 or higher.
- It must also use OpenSSL 0.9.8f or later and must be built with the TLS extensions option.
- Apache must be built against this version of OpenSSL, as it will enable SNI support if it detects the right version of OpenSSL, i.e. the version that includes TLS extension support. (A default installation contains all of these.)
Note:
SNI can only be used for serving multiple SSL sites from your web server and is not likely to work at all on other daemons, such as mail servers, etc. There are also a small percentage of older web browsers that may still give certificate errors. Wikipedia has an updated list of software that does and does not support this TLS extension.
Here I am using a wildcard SSL certificate for hosting two sub-domains on a single server; similarly, we can also use different SSL certificates for different domains on the same IP.
Follow the basic installation of Apache.
Redhat :
[root@ip-10-132-82-251 ~]# yum install httpd openssl openssl-devel mod_ssl
Ubuntu:
apt-get install apache2 openssl
a2enmod ssl
Get the certificate from the authority or use a self-signed SSL certificate. Verify you have enabled the SSL module in the existing Apache installation:
[root@ip-10-132-82-251 ~]# httpd -M |grep ssl
Add the following lines to the Apache main configuration file httpd.conf
[root@ip-10-132-82-251 ~]# vi /etc/httpd/conf/httpd.conf

###FOR SSL
NameVirtualHost *:443
<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    Listen 443
</IfModule>
<IfModule mod_gnutls.c>
    Listen 443
</IfModule>
Create the Virtual Hosts
Once you have downloaded all the required files for SSL, proceed to creating the Vhost.
Here is the Vhost entry that I used
[root@ip-10-132-82-251 ~]# vi /etc/httpd/conf.d/domain1-ssl.conf

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName domain1.mydomain.com
    DocumentRoot "/opt/web-home/domain1/public_html"
    <Directory />
        Options FollowSymLinks
        AllowOverride all
    </Directory>
    <Directory /opt/web-home/domain1/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride all
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /opt/web-home/domain1/public_html/cgi-bin/
    <Directory "/opt/web-home/domain1/public_html/cgi-bin/">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog logs/ssl_error_log
    TransferLog logs/ssl_access_log
    LogLevel warn
    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCertificateFile /etc/ssl/certs/planetcure.in.crt
    SSLCertificateKeyFile /etc/ssl/certs/planetcure.in.key
    SSLCertificateChainFile /etc/ssl/certs/planetcure.in.csr
    SSLCACertificateFile /etc/ssl/certs/planetcure.in.ca
    SetEnvIf User-Agent ".*MSIE.*" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
    CustomLog logs/ssl_request_log \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>
</IfModule>

SSLPassPhraseDialog builtin
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout 300
SSLMutex default
SSLRandomSeed startup file:/dev/urandom 256
SSLRandomSeed connect builtin
SSLCryptoDevice builtin
You can create more Vhost files based on this entry by changing the domain name and the SSL paths.
Now restart Apache
[root@ip-10-132-82-251 ~]# service httpd restart
To verify the list of enabled vhosts, use the command below
[root@ip-10-132-82-251 ~]# apachectl -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:443                  is a NameVirtualHost
         default server domain1.planetcure.in (/etc/httpd/conf.d/domain1-ssl.conf:2)
         port 443 namevhost domain1.planetcure.in (/etc/httpd/conf.d/domain1-ssl.conf:2)
         port 443 namevhost domain2.planetcure.in (/etc/httpd/conf.d/domain2-ssl.conf:2)
Syntax OK
Phew, these domains now have their own SSL certificates on a single IP 🙂
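To confirm that SNI is actually serving the right certificate for each name, you can ask for each hostname explicitly with openssl s_client, run from any machine that can reach the server (the hostnames below are the ones from the apachectl output; each command prints the subject of the certificate that was presented):

```
echo | openssl s_client -connect domain1.planetcure.in:443 -servername domain1.planetcure.in 2>/dev/null | openssl x509 -noout -subject
echo | openssl s_client -connect domain2.planetcure.in:443 -servername domain2.planetcure.in 2>/dev/null | openssl x509 -noout -subject
```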
Script: Block HTTPS traffic
This script blocks HTTPS traffic on the software router itself. I am using Squid, and it is not capable of handling HTTPS traffic because 1) the URL is encrypted and 2) the routing table only handles traffic over port 80.
The script has two input files, which it creates automatically on the first run. It supports private-IP based restriction.
Editable area in the script :
DIST=192.168.1.6   # IP where the request has to be forwarded
DPORT=81           # Port where the request has to be forwarded
BLOCKPORTS=443     # Outgoing + incoming port
RULE=forward       # Possible options: reject, drop, forward
If you have a web page for showing the user a message about the block, set it here.
Enter the domains and local IPs separately in the files; examples are shown below.
[anand@planetcure ~]$ sh https_block.sh --help
This script is for block https outbound traffic using source based requests
 -s or --silent   Silent execution
 ssl_domains      File for enter SSL domain names
 ip_users         File for enter localip list
You have to enable forwarding and execute it as root.
First run :
[root@planetcure]# sh https_block.sh
Parent dir not found, Creating entire structure
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files
[INFO]:We found empty input file. exiting..
Input Files :
[root@planetcure]# ls /opt/installer/scripts/
ip_users  ssl_domains
Fill in the input files one by one:
[root@planetcure scripts]# cat ip_users
192.168.1.100
192.168.1.245
[root@planetcure scripts]# cat ssl_domains
www.enlook.wordpress.com
facebook.com
www.facebook.com
Output:
[root@planetcure]# sh https_block.sh
Validating file structure
checking ssl_domains Ok.
checking ip_users Ok.
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files
Executing source Ip 192.168.1.100
76.74.254.123 blocked for the domain www.enlook.wordpress.com
192.0.80.250 blocked for the domain www.enlook.wordpress.com
192.0.81.250 blocked for the domain www.enlook.wordpress.com
66.155.9.238 blocked for the domain www.enlook.wordpress.com
66.155.11.238 blocked for the domain www.enlook.wordpress.com
76.74.254.120 blocked for the domain www.enlook.wordpress.com
173.252.110.27 blocked for the domain facebook.com
31.13.79.128 blocked for the domain www.facebook.com
Executing source Ip 192.168.1.245
76.74.254.120 blocked for the domain www.enlook.wordpress.com
76.74.254.123 blocked for the domain www.enlook.wordpress.com
192.0.80.250 blocked for the domain www.enlook.wordpress.com
192.0.81.250 blocked for the domain www.enlook.wordpress.com
66.155.9.238 blocked for the domain www.enlook.wordpress.com
66.155.11.238 blocked for the domain www.enlook.wordpress.com
173.252.110.27 blocked for the domain facebook.com
31.13.79.128 blocked for the domain www.facebook.com
Now set this up as a cron job like below
*/05 * * * * /bin/sh /root/https_block.sh -s
If you run the script again it will show the current status of the blocked domains
[root@localhost bash]# sh https_block.sh
Validating file structure
checking ssl_domains Ok.
checking ip_users Ok.
/opt/installer/scripts
|-- ip_users
`-- ssl_domains

0 directories, 2 files
Executing source Ip 192.168.1.100
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 76.74.254.123 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 192.0.80.250 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 192.0.81.250 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 66.155.9.238 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 66.155.11.238 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.100 76.74.254.120 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:facebook.com
DNAT tcp -- 192.168.1.100 173.252.110.27 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
31.13.79.144 blocked for the domain www.facebook.com
Executing source Ip 192.168.1.245
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 76.74.254.120 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 76.74.254.123 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 192.0.80.250 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 192.0.81.250 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 66.155.9.238 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:www.enlook.wordpress.com
DNAT tcp -- 192.168.1.245 66.155.11.238 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
Domain:facebook.com
DNAT tcp -- 192.168.1.245 173.252.110.27 tcp dpt:443 tcp dpt:443 to:192.168.1.6:81
31.13.79.144 blocked for the domain www.facebook.com
Now you have control over the network traffic usage.
Info: Configure Redmine on a cPanel hosting account with sending and receiving emails
Wiki : http://en.wikipedia.org/wiki/Redmine
Redmine is a free and open source, web-based project management and bug-tracking tool. It includes a calendar and Gantt charts to aid visual representation of projects and their deadlines. It handles multiple projects. Redmine provides integrated project management features, issue tracking, and support for various version control systems.
The design of Redmine is significantly influenced by Trac, a software package with some similar features.
Redmine is written using the Ruby on Rails framework. It is cross-platform and cross-database. It is part of the Bitnami app library that provides an installer and virtual machine for ease of deployment.
Before starting the installation, make sure that Ruby on Rails is working in your environment; if not, you can follow the installation document for more help.
Installing Ruby on Rails with cPanel: https://enlook.wordpress.com/2013/11/19/howto-install-ruby-on-rails-with-cpanel/
Once that is done, start the Redmine installation steps.
Log in to the terminal using the primary account credentials.
#ssh myaccount@mydomain.com
- Create a rails_apps folder and a redmine folder within it, then go inside that folder
# mkdir -p ~/rails_apps/redmine/
# cd ~/rails_apps/redmine/
- Download redmine-2.3.3 (or the latest stable version), extract it, move the contents out of the extracted folder, then delete the leftover files.

# wget http://files.rubyforge.vm.bytemark.co.uk/redmine/redmine-2.3.3.tar.gz
# tar -zxvf redmine-2.3.3.tar.gz
# mv redmine-2.3.3/* ./
# rm -rf redmine-2.3.3/
- Move example files where they can be used
# cd config
# mv database.yml.example database.yml
# mv configuration.yml.example configuration.yml
- Creating the MySQL Database/User/Password
Log in to the cPanel account; create a database and a user, and grant the new user full privileges on that database.
- Modifying your database.yml file.
# vi database.yml

production:
  adapter: mysql
  database: redmine
  host: localhost
  username: myaccount_databaseuser
  password: newpassowd
  encoding: utf8
- Updating the ~/rails_apps/redmine/public/.htaccess file
# cd ../public/
# pwd
- You should see something similar to this.
/home/myaccountuser/rails_apps/redmine/public
- Add these lines
Options -MultiViews
PassengerResolveSymlinksInDocumentRoot on
# Set this to whatever environment you'll be running in
RailsEnv production
RackBaseURI /
SetEnv GEM_HOME /home/myaccountuser/rails_apps/redmine/public

# Set to keep Rails from controlling the folder, for image resolution
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/images.*
RewriteRule .* - [L]
- Create a subdomain eg: projects.mydomain.com
Follow the cPanel procedure to create the subdomain.
- Remove the projects folder inside public_html and create a symbolic link.
# rm -rf ~/public_html/projects
- Creating the symlink
# ln -s ~/rails_apps/redmine/public ~/public_html/projects
- Updating Environment variables in ~/.bashrc file
- Add these lines to the bottom of your ~/.bashrc file
export HPATH=$HOME
export GEM_HOME=$HPATH/ruby/gems
export GEM_PATH=$GEM_HOME:/lib64/ruby/gems/1.9.3
export GEM_CACHE=$GEM_HOME/cache
export PATH=$PATH:$HPATH/ruby/gems/bin
export PATH=$PATH:$HPATH/ruby/gems
- After that, source your .bashrc file
# source ~/.bashrc
- You will then need to check your rails version
rails -v && rake --version && gem -v
- You should get this message
Rails 4.0.1
rake, version 0.9.2.2
1.8.23
- Running bundle install
# cd ~/rails_apps/redmine/
# bundle install
# rake generate_session_store
- Running generate_session_store or generate_secret_token

# rake generate_session_store

- If you get an error saying that command is deprecated, run this command instead:

# rake generate_secret_token
- Start the site session
# rake db:migrate RAILS_ENV=production
- Configuring outgoing emails: update the settings in configuration.yml
default:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      address: localhost
      port: 25
      domain: mydomain.com
      authentication: :none
      enable_starttls_auto: false
Now Redmine is able to send emails using the Exim installed on the cPanel server.
- Configuring incoming email over IMAP: create a cron job for the script so email is fetched continuously.
The first time, execute this script from the terminal, so it will display any errors.
/usr/bin/rake -f /home1/innovat4/rails_apps/redmine/Rakefile --silent redmine:email:receive_imap RAILS_ENV="production" port=143 host=mydomain.com username=projects@mydomain.com password=myemailpassword
For more help follow the official link http://www.redmine.org/projects/redmine/wiki/RedmineReceivingEmails#Enabling-unknown-users-to-create-issues-by-email
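Once the rake task runs cleanly from the terminal, a crontab entry along these lines keeps fetching mail (the 10-minute interval and the paths/credentials are examples; reuse the exact command that worked for you above):

```
*/10 * * * * /usr/bin/rake -f $HOME/rails_apps/redmine/Rakefile --silent redmine:email:receive_imap RAILS_ENV="production" port=143 host=mydomain.com username=projects@mydomain.com password=myemailpassword
```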
Note: each configuration change requires a Rails environment restart; the simple way to trigger one is:
# touch ~/rails_apps/redmine/tmp/restart.txt
Howto: Install Ruby on Rails with Cpanel
Start the installation steps with root privileges or a sudo user, or you will have to submit a ticket to your hosting provider to enable Ruby on Rails in your hosting account.
For detailed information about RubyGems: commands and system, read their User Guide Manuals at: www.rubygems.org/
– To install Ruby on Rails:
SSH to the server and run this command:
- /scripts/installruby
If LIBSAFE is installed on your server, you need to add /usr/bin/ruby to the exception list to prevent buffer overflow errors. SSH to the server and run this command:
- echo "/usr/bin/ruby" >> /etc/libsafe.exclude
The local path to the binary package is:
/usr/bin/gem
To check on the current version installed on your server:
- /usr/bin/gem -v
To list all installed gems:
- /usr/bin/gem list
– To uninstall Ruby on Rails:
- List all the gems installed on your server and remove them all using the following command:
- /usr/bin/gem uninstall NAME_OF_GEM
The cPanel/WHM, by default, installs the following Gems:
rails, mongrel, fastthread, actionmailer, actionpack, activerecord, activeresource, activesupport, cgi_multipart_eof_fix, daemons, gem_plugin, rake.
For example, to uninstall the rails gem, we'll run this command:
- /usr/bin/gem uninstall rails
Sample output:
Successfully uninstalled rails version 0.1.6
- Remove the Gem directories and the binary package using the following commands (in that order):
- /bin/rm -rf /usr/lib/ruby
- /bin/rm -rf /home/cprubygemsbuild
- /bin/rm -fv /root/.gem
- /bin/rm -fv /usr/bin/gem
- Remove all ruby directories added to a client’s root directory. The local path is: /home/USER/ruby/
- Restart cPanel (unnecessary, but do it anyway)
- /sbin/service cpanel restart
Info: NFS Server & Client Setup
Server Side
This is an NFS share with read/write privileges only for the specified UID and GID, so even root will be denied read or write access on that particular mount point, keeping it completely secure.
Install the required packages for the NFS server.
apt-get install nfs-kernel-server nfs-common portmap
After the installation of the NFS server, edit the /etc/exports file and add a line as follows.

/mnt/nfs 192.168.0.0/24(rw,sync,anonuid=106,anongid=114,no_subtree_check)

Here /mnt/nfs is the NFS share path, 192.168.0.0/24 is the allowed network, and the options set the user id and group id of the tomcat user.
Restart nfs server after making necessary changes in the exports file.
#service nfs-kernel-server restart
Client side Linux
Install the NFS client packages on the NFS client machine, then mount the NFS share on the client.
apt-get install portmap nfs-common
Make the following entry in /etc/fstab:

192.168.1.175:/mnt/nfs /home/nfs nfs rsize=8192,wsize=8192,timeo=14,intr

Here 192.168.1.175:/mnt/nfs is the network share, /home/nfs is the mount point, and nfs is the filesystem type.
Client Side Windows
Install NFS services for Windows through the Control Panel Add or Remove Windows Components wizard.
Edit the Windows registry and make the following changes under this key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default
1, Create two DWORD values namely AnonymousUid and AnonymousGid
2, Set these values to the UID and GID set on the NFS server for the tomcat user (e.g. 106, 114)
3, Restart NFS service.
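As a sketch, the two DWORD values from steps 1 and 2 can also be created from an elevated command prompt instead of regedit (using the UID/GID 106/114 from the exports line above):

```
reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousUid /t REG_DWORD /d 106 /f
reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousGid /t REG_DWORD /d 114 /f
```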
Go to All Programs -> Administrative Tools -> Services for Network File System and start the Client for NFS service.
Select the properties of Client for NFS and set permissions as per the requirement (e.g. read & write permission for the tomcat user).
Error: 500 OOPS: vsftpd: refusing to run with writable root inside chroot()
Every time I install VSFTPD on Ubuntu and enable chroot for the users, it refuses to log users in to their home directories because of write permission on the parent dir. To fix this I used the command
chmod a-w /path/to/the/ftp/home
but it was a most annoying and frustrating problem. I decided instead to update the vsftpd package with the security fix; the steps are below.
wget http://ftp.us.debian.org/debian/pool/main/v/vsftpd/vsftpd_3.0.2-3_amd64.deb
dpkg -i vsftpd_3.0.2-3_amd64.deb
echo "allow_writeable_chroot=YES" >> /etc/vsftpd.conf
echo "seccomp_sandbox=NO" >> /etc/vsftpd.conf
service vsftpd reload
Now the FTP service runs calmly on my server.