HowTo: Allow SFTP access while chrooting the user and denying shell access.
Usually SFTP gives a system user access to their home directory to upload and download files. By default, though, an SFTP user can navigate anywhere on the server and may be able to download files they should not see, which is a security risk.

With a chroot, SFTP users are denied access to the rest of the system: they are confined to their home directory, so they cannot snoop around /etc or the application directories. Shell logins for these accounts are denied as well.

The procedure below is what I used to lock SFTP down:
1. Add a new group.
2. Create a chroot directory to launch the logins into, owned by root.
3. Modify sshd_config to force internal-sftp with a chroot directory.
4. Reload the configuration.
Steps:

Create the chroot launch directory, owned by root, with no privileges for group or other:
mkdir /opt/chroot
chown root:root /opt/chroot
chmod 700 /opt/chroot
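As a sanity check: sshd will refuse to chroot into a path that is not root-owned or that is writable by group or other, failing the login with "bad ownership or modes for chroot directory". You can verify the modes with stat (output shown assumes the directory was created as above):

```shell
# Every component of the chroot path must be root-owned and
# not writable by group/other, otherwise sshd aborts the login.
stat -c '%U %a' /opt/chroot   # expect: root 700
```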
Create a common group for the chrooted users; the SSH rule below will match on this group:
groupadd sftpgroup
useradd -g sftpgroup -s /sbin/nologin -d /opt/chroot/planetuser planetuser
passwd planetuser
mkdir -p /opt/chroot/planetuser #make sure the home directory exists
chown root:root /opt/chroot/planetuser #the chroot target itself must also be root-owned, or sshd refuses the login
Modify the SSH configuration:
vi /etc/ssh/sshd_config
Comment out the default sftp subsystem and add the new rules:
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp

# Rules for sftp group
Match Group sftpgroup
    ChrootDirectory %h
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp
Then restart the SSH service:
service sshd restart
HowTo: Change an Instance-store AMI to an EBS-backed AMI
Amazon does not provide any feature for changing an AMI's root device type. Once an instance is launched from an instance-store AMI it cannot be upgraded, because upgrading requires stopping the instance, and the stop option is disabled for instance-store AMIs. I followed the steps below; the copy can be done in either of two ways, using rsync or dd.

Here are the steps:
- Create an EBS volume of the same size or larger. I used 10 GB because my existing instance has 10 GB on its root device.
- Attach the EBS volume to the existing instance-store backed instance: right-click the volume and select Attach Volume.
- Log in to the instance-store backed server and stop all running services (optional), e.g. mysqld, httpd, xinetd.

Execute one of the disk-mirroring methods below; it will take a few minutes to complete, depending on server performance.
[root@ip-10-128-5-222 ~]# dd bs=65536 if=/dev/sda1 of=/dev/sdf
or
mkfs.ext3 /dev/sdf #create the filesystem
mkdir /mnt/ebs #new dir for mounting
mount /dev/sdf /mnt/ebs #mount as a partition

rsync -avHx / /mnt/ebs #synchronize root to the EBS volume
rsync -avHx /dev /mnt/ebs #synchronize device information
tune2fs -L '/' /dev/sdf #create the partition label for the EBS volume
sync && umount /mnt/ebs #flush buffers and unmount the EBS volume
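The dd method clones the device byte-for-byte, filesystem included, while the rsync method builds a fresh filesystem and copies files onto it. The same dd invocation works on ordinary files, which makes its behaviour easy to demonstrate (the /tmp paths here are only an illustration):

```shell
# dd copies the input byte-for-byte; bs=65536 merely sets the I/O chunk size.
echo "example data" > /tmp/src.img
dd bs=65536 if=/tmp/src.img of=/tmp/dst.img 2>/dev/null
cmp /tmp/src.img /tmp/dst.img && echo "copies are identical"
```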
Check the EBS volume for consistency
[root@ip-10-128-5-222 ~]# fsck /dev/sdf
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/sdf: clean, 126372/1310720 files, 721346/2621440 blocks
Mount the EBS volume on the instance and remove the /mnt entry from the fstab on your EBS volume:

[root@ip-10-128-5-222 ~]# mount /dev/sdf /mnt/ebs-vol
[root@ip-10-128-5-222 ~]# vim /mnt/ebs-vol/etc/fstab
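If you prefer a non-interactive edit, the ephemeral /mnt entry can be dropped with sed instead of vim (the fstab path matches the mount point used above; the .bak suffix keeps a backup copy):

```shell
# Delete any fstab line whose mount-point field is exactly /mnt,
# saving the original file as fstab.bak.
sed -i.bak '/[[:space:]]\/mnt[[:space:]]/d' /mnt/ebs-vol/etc/fstab
```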
- Create a snapshot of the EBS volume using the AWS Management Console:

Right-click the EBS volume –> select Create Snapshot; it will take a few minutes to complete.

Once created, the snapshot will be listed under Snapshots.

Now right-click the snapshot –> select Create Image from Snapshot.
- Launch a new EC2 instance using the newly created AMI. While creating the new instance you can select any instance type, and you may reuse the same key pair and Elastic IP.
- Now you can log in to the new server. If the volume you created is larger than the snapshot, run the command below to grow the filesystem into the extra space:

# resize2fs /dev/sda1

- The server has been successfully migrated to EBS-backed storage. Start the services again if necessary. This time the instance type can be upgraded.
HowTo: S3 bucket dynamic URI access
s3cmd is a command-line tool for uploading, retrieving and managing data in Amazon S3. Its wiki is still not up to date.

You can get the package from the official SourceForge download repository.
It also supports Unix-style dynamic resource access: for example, * matches all resources, and {dir1,file2} selects specific ones.

The example below sets a public ACL on dynamic sub-directories.
Installation:
root@planetcure:wget http://kaz.dl.sourceforge.net/project/s3tools/s3cmd/1.0.1/s3cmd-1.0.1.tar.gz
root@planetcure:tar -zxvf s3cmd-1.0.1.tar.gz
root@planetcure:export PATH=$PATH:/opt/installer/s3cmd-1.0.1
Now the s3cmd binary can be run from any location.
root@planetcure:/opt/installer/s3cmd-1.0.1# s3cmd setacl --acl-public s3://my-bucket-name/{dev,stg1,stg2}/*/dir5/*/3/*
This command covers the following scenarios:

s3://my-bucket-name/ is my S3 bucket

* represents all the subdirectories at that level

{dev,stg1,stg2} represents the specific directories chosen from a group of directories

dir5/ and 3/ represent a specific sub-directory
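Note that the {dev,stg1,stg2} part is expanded by the shell (bash) before s3cmd ever runs, producing one URI argument per alternative; the * parts are passed to s3cmd as-is, since no local files match them. You can preview the expansion with echo:

```shell
# The shell turns the brace pattern into one argument per alternative:
echo s3://my-bucket-name/{dev,stg1,stg2}/dir5
# → s3://my-bucket-name/dev/dir5 s3://my-bucket-name/stg1/dir5 s3://my-bucket-name/stg2/dir5
```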
Enjoy the day, 🙂