Thursday, September 24, 2015

How to Create Encrypted and Bandwidth-efficient Backups Using ‘Duplicity’ in Linux

Experience shows that you can never be too paranoid about system backups. When it comes to protecting and preserving precious data, it is best to go the extra mile and make sure you can depend on your backups if the need arises.
Duplicity – Create Encrypted Linux File System Backups
Even today, when some cloud and hosting providers offer automated backups for VPSs at a relatively low cost, you will do well to create your own backup strategy using your own tools, in order to save some money and perhaps use it to buy extra storage or get a bigger VPS.
Sound interesting? In this article we will show you how to use a tool called Duplicity to back up and encrypt files and directories. In addition, using incremental backups for this task will help us save space.
That said, let’s get started.

Installing Duplicity

To install duplicity in Fedora-based distros, you will have to enable the EPEL repository first (you can omit this step if you’re using Fedora itself):
# yum update && yum install epel-release
Then run,
# yum install duplicity
For Debian and derivatives:
# aptitude update && aptitude install duplicity
In theory, duplicity supports many methods of connecting to a file server, although only ssh/scp/sftp, local file access, rsync, ftp, HSI, WebDAV, and Amazon S3 have been tested in practice so far.
In this article we will use the ssh/scp/sftp backends in various scenarios, both to back up and to restore the data.
Our test environment consists of a CentOS 7 box (to be backed up) and a Debian 8 machine (backup server).

Creating SSH keys to access remote servers and GPG keys for encryption

Let’s begin by creating the SSH keys in our CentOS box and transfer them to the Debian backup server.
The commands below assume the sshd daemon is listening on port XXXXX on the Debian server. Replace AAA.BBB.CCC.DDD with the actual IP of the remote server.
# ssh-keygen -t rsa
# ssh-copy-id -p XXXXX root@AAA.BBB.CCC.DDD
Then you should make sure that you can connect to the backup server without using a password:
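For example, the following should drop you into a shell on the backup server without asking for a password (same port and IP placeholders as before):
# ssh -p XXXXX root@AAA.BBB.CCC.DDD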
Create SSH Keys
Now we need to create the GPG keys that will be used for encryption and decryption of our data:
# gpg --gen-key
You will be prompted to enter:
  1. Kind of key
  2. Key size
  3. How long the key should be valid
  4. A passphrase
Create GPG Keys
To create the entropy needed for the creation of the keys, you can log on to the server via another terminal window and perform a few tasks or run some commands to generate entropy (otherwise you will have to wait for a long time for this part of the process to finish).
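For instance, either of the following, run as root in a second session, is usually enough: the first simply walks the filesystem to generate disk activity, while the second (assuming the rng-tools package is available in your repositories) feeds the kernel entropy pool directly:
# du -sx / 2>/dev/null
# yum install rng-tools && rngd -r /dev/urandom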
Once the keys have been generated, you can list them as follows:
# gpg --list-keys
List Generated GPG Keys
The string highlighted in yellow above is known as the public key ID, and is a required argument for encrypting your files.

Creating a backup with Duplicity

To start simple, let's back up only the /var/log directory, with the exception of /var/log/anaconda and /var/log/sa.
Since this is our first backup, it will be a full one. Subsequent runs will create incremental backups (unless we add the full action, with no dashes, right after duplicity in the command below):
PASSPHRASE="YourPassphraseHere" duplicity --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7
Make sure you don't miss the double slash in the above command! It indicates an absolute path to a directory named /backups/centos7 on the backup box, which is where the backup files will be stored.
Replace YourPassphraseHere, YourPublicKeyIdHere, and RemoteServer with the passphrase you entered earlier, the GPG public key ID, and the IP or hostname of the backup server, respectively.
Your output should be similar to the following image:
Create Backup using Duplicity
The image above indicates that a total of 86.3 MB was backed up into a 3.22 MB archive at the destination. Let's switch to the backup server to check on our newly created backup:
Confirm Backup File
A second run of the same command yields a much smaller backup size and time:
Compress Backup
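If at any point you want to force a new full backup instead of another incremental one, the same command with the full action in front will do it (a sketch using the same placeholders as before):
PASSPHRASE="YourPassphraseHere" duplicity full --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7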

Restoring backups using Duplicity

To successfully restore a file, a directory with its contents, or the whole backup, the destination must not exist (duplicity will not overwrite an existing file or directory). To demonstrate, let's delete the cron log on the CentOS box:
# rm -f /var/log/cron
Delete Cron Logs
The syntax to restore a single file from the remote server is:
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore filename sftp://root@RemoteHost//backups/centos7 /where/to/restore/filename
where,
  1. filename is the file to be extracted, given as a path relative to the directory that was backed up
  2. /where/to/restore is the directory in the local system where we want to restore the file to.
In our case, to restore the cron main log from the remote backup we need to run:
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore cron sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/cron
The cron log should be restored to the desired destination.
Likewise, feel free to delete a directory from /var/log and restore it using the backup:
# rm -rf /var/log/mail
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore mail sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/mail
In this example, the mail directory should be restored to its original location with all its contents.

Other features of Duplicity

At any time you can display the list of archived files with the following command:
# duplicity list-current-files sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7
Delete backups older than 6 months (note that remove-older-than only lists the matching backup sets unless --force is added):
# duplicity remove-older-than 6M --force sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7
Restore myfile inside directory gacanepa as it was 2 days and 12 hours ago:
# duplicity -t 2D12h --file-to-restore gacanepa/myfile sftp://root@AAA.BBB.CCC.DDD:XXXXX//remotedir/backups /home/gacanepa/myfile
In the last command, we can see an example of the usage of the time interval (as specified by -t): a series of pairs, each consisting of a number followed by one of the characters s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days, weeks, months, or years, respectively).
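Since every step above is non-interactive, the backup can easily be scheduled with cron. A minimal sketch, assuming the same placeholders as before and a hypothetical nightly 2 AM schedule:
# crontab -e
0 2 * * * PASSPHRASE="YourPassphraseHere" duplicity --encrypt-key YourPublicKeyIdHere /var/log scp://root@RemoteServer:XXXXX//backups/centos7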

Wednesday, July 29, 2015

PHP gettext extension missing (Zabbix)

PHP gettext extension missing (PHP configuration parameter --with-gettext)

PHP uses gettext (a set of tools and libraries that help programmers and translators develop multi-language software) to implement i18n.

It turned out that the gettext extension had not been built when PHP was installed, so it has to be added after the fact.

This error is mainly caused by two things. First, the --with-gettext option was not added when PHP was compiled.
Solutions are as follows:
In that case, build the gettext extension as a dynamic module from the ext/gettext directory of the PHP source tree:
cd /path/to/php-source/ext/gettext
/usr/local/php/bin/phpize
./configure --with-php-config=/usr/local/php/bin/php-config
make && make install
Second, the system lacks the gettext-devel library, or gettext was compiled and installed but PHP could not find it; install gettext-devel and rebuild the module. In either case, once the module is built, enable it in php.ini:
vi /usr/local/php/etc/php.ini
Find the extension options and add the following entry:
extension = "gettext.so"
If the module is not in PHP's default extension directory, remember to give the full path here.

Finally, restart the Apache service so the new extension is loaded.
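For example, on CentOS:
# service httpd restart
You can then confirm that the extension was loaded:
# php -m | grep gettext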
That's it.......

Tuesday, June 30, 2015

How to Mount AWS S3 Bucket on CentOS/RHEL and Ubuntu using s3fs

S3FS is a FUSE (File System in User Space) based solution for mounting Amazon S3 buckets. Once a bucket is mounted, we can use it just like another hard disk in the system, running basic Unix commands such as cp, mv, and ls exactly as we would on a locally attached disk.
If you would like to access S3 buckets without mounting them on the system, use the s3cmd command line utility to manage them instead; s3cmd also provides faster upload and download speeds than s3fs. See the earlier articles on installing s3cmd on Linux and Windows systems.

This article will help you to install S3FS and Fuse by compiling from source, and also help you to mount S3 bucket on your CentOS/RHEL and Ubuntu systems.

Step 1: Remove Existing Packages

First, check whether any existing s3fs or fuse packages are installed on your system. If so, remove them to avoid file conflicts.

CentOS/RHEL Users:
 # yum remove fuse fuse-s3fs

Ubuntu Users:
 $ sudo apt-get remove fuse
 

Step 2: Install Required Packages

After removing the above packages, install all the dependencies for fuse and s3fs. Use the following commands to install the required packages:
 
CentOS/RHEL 7 Users:
 # yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake make
 # yum install openssl-devel
 # yum install s3fs-fuse
 Note: if the s3fs-fuse package installs cleanly from EPEL, you can skip Steps 3 and 4 below.
 
 Ubuntu Users:
 $ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support

 

Step 3: Download and Compile s3fs-fuse from Git

git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --prefix=/usr --with-openssl
make
sudo make install

Step 4: Download and Compile a Released S3FS Version (alternative to Step 3)

wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.84.tar.gz
tar zxvf v1.84.tar.gz
cd s3fs-fuse-1.84/
./autogen.sh
 ./configure --prefix=/usr
 make
 make install

 s3fs --version

Step 5: Setup Access Key

In order to configure s3fs we need the Access Key and Secret Key of your Amazon S3 account. You can get these security keys from AWS IAM.
# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
For system-wide use, write the same line to /etc/passwd-s3fs instead.
# chmod 600 ~/.passwd-s3fs
Note: Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.

 

Step 6: Mount S3 Bucket

Finally, mount your S3 bucket using the following set of commands. For this example, we are using mybucket as the bucket name and /s3mnt as the mount point.

# mkdir /s3mnt
# chmod 777  /s3mnt
# s3fs mybucket /s3mnt   -o passwd_file=/etc/passwd-s3fs
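If the command returns silently, you can confirm that the bucket is mounted like any other filesystem:
# df -h /s3mnt
# grep s3fs /etc/mtab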
 
If you hit any errors, enable debug mode:
# s3fs mybucket /s3mnt -o passwd_file=/etc/passwd-s3fs -o dbglevel=info -f -o curldbg

To mount the bucket permanently, add this entry to /etc/fstab:
mybucket /s3mnt  fuse.s3fs _netdev,allow_other,nonempty  0 0
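To test the fstab entry without rebooting, unmount the bucket and remount everything from fstab:
# umount /s3mnt
# mount -a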

That's it!
Now you can create and test some files and folders under the mount point.
 

Wednesday, April 22, 2015

How to Increase root volume on AWS instance

Increase root / volume size (for a CLI alternative to the console steps, see the sketch after this list):
    Stop the instance
    Create a snapshot from the volume
    Create a new volume based on the snapshot increasing the size
    Check and remember the current volume's attachment point (e.g. /dev/sda1)
    Detach current volume
    Attach the recently created volume to the instance, setting the exact mount point
    Restart the instance
    Access the instance via SSH and run fdisk /dev/sda
    Hit p to show current partitions
    Hit d to delete current partitions (if there are more than one, you have to delete one at a time)
NOTE: Don't worry data is not lost
    Hit n to create a new partition
    Hit p to set it as primary
    Hit 1 to set the first cylinder
    Set the desired new space (if empty the whole space is reserved)
    Hit a, then 1, to make partition 1 bootable
    Hit w to write the changes
    Reboot instance
    Log in via SSH and run resize2fs /dev/sda1
    Finally, check the new space by running df -h
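If you prefer the command line over the AWS console, the snapshot and volume steps above can also be scripted with the AWS CLI. A minimal sketch, assuming the CLI is configured and that the IDs, availability zone, and 100 GB size below are placeholders for your own values:

aws ec2 create-snapshot --volume-id vol-xxxx --description "root volume backup"
aws ec2 create-volume --snapshot-id snap-xxxx --size 100 --availability-zone us-east-1a
aws ec2 detach-volume --volume-id vol-xxxx
aws ec2 attach-volume --volume-id vol-yyyy --instance-id i-xxxx --device /dev/sda1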

That's it

How to add EBS Volume on AWS/VPC instance

ADD EBS Volume on AWS/VPC instance:

check partitions
# cat /proc/partitions

format the new volume with ext4 (use the device name of the newly attached volume shown above; here we assume /dev/xvdf)
# mkfs.ext4 /dev/xvdf

make directory
# mkdir /newdrive

mount the drive on the new directory
# mount /dev/xvdf /newdrive
# cd /newdrive/
# ls

check disks
# df -ah

add the device to fstab
# vi /etc/fstab
add
/dev/xvdf  /newdrive    ext4    noatime,nodiratime        0   0

# mount -a


That's it!

How to create extra swap space on a Linux machine

To create an 8 GB swap file on a Linux machine:
stop swap first
# swapoff -a

then create the 8 GB swap file (create its directory first if it does not exist)
# mkdir -p /var/swapdir
# dd if=/dev/zero of=/var/swapdir/swapfile bs=1024 count=8388608
# mkswap /var/swapdir/swapfile

change ownership to root on swap file
# chown root:root /var/swapdir/swapfile

change permissions
# chmod 0600 /var/swapdir/swapfile

then start swap
# swapon /var/swapdir/swapfile

now we need to add the swap file entry to fstab
# vi /etc/fstab
/var/swapdir/swapfile swap swap defaults 0 0

check swap
$ free -m

clear memory cache
# sync; echo 3 > /proc/sys/vm/drop_caches

That's it!

Friday, December 12, 2014

How to Install a MongoDB Cluster




1: Add the MongoDB Repository
vi /etc/yum.repos.d/mongodb.repo

[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1

Then exit and save the file with the command :wq

2: Install MongoDB
yum install mongo-10gen mongo-10gen-server
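The server package ships with an init script, so you can start the service and enable it at boot (assuming the stock package layout):
service mongod start
chkconfig mongod on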


Config server
The config server processes are mongod instances that store the cluster’s metadata.

Replica Set
A MongoDB replica set is a cluster of mongod instances that replicate amongst one another and ensure automated failover.

Mongos
mongos, short for "MongoDB Shard," is a routing service for MongoDB shard configurations. It processes queries from the application layer and determines the location of the data in the sharded cluster in order to complete these operations.


Server Setup
Add a new user
Create a new user named mongodb on each server; this user will be the one who starts the mongodb processes.

adduser mongodb
su - mongodb

Prepare directories:
We need to prepare all the data and log directories with proper privileges.

# Commands used to set up the server.
# Create directories for the data path and the logs.
sudo mkdir -p /var/lib/mongodb/dbs
sudo chown mongodb:mongodb -R /var/lib/mongodb/dbs
sudo mkdir -p /var/log/mongodb
sudo chown mongodb:mongodb -R /var/log/mongodb
cd /etc/
sudo mkdir mongodb
sudo chown mongodb:mongodb -R /etc/mongodb
sudo cp mongodb.conf mongodb/
sudo mv mongodb.conf mongodb.conf.default
We do this on all the servers in the mongodb cluster.

Configuration Servers:
Make a configuration file for mongodb’s config server.

cd /etc/mongodb/
vi config_db.conf
The configuration file of the mongod running on the config servers should contain:

fork=true
dbpath=/var/lib/mongodb/dbs/config_db
logpath=/var/log/mongodb/config_db.log
logappend=true
port=27020
Finally, start the config server:

sudo mongod --configsvr --config /etc/mongodb/config_db.conf
Do the same thing on all 3 of your config servers.


Mongos
First we need to create a configuration file for mongos.

cd /etc/mongodb/
vi mongos.conf
The mongos configuration file should contain:

fork = true
port = 27017
configdb = xxx.xxx.xxx.xxx:port,xxx.xxx.xxx.xxx:port,xxx.xxx.xxx.xxx:port # Here you should put the domain name of your 3 configuration servers.
logpath=/var/log/mongodb/mongos.log
Now we start our mongos process.

mongos --config /etc/mongodb/mongos.conf

Replica Sets
First we need to create configuration files for our mongod processes.
We have 3 replica sets, and each set has 3 mongod instances, one of which is an arbiter. We create 3 configuration files on each of our data servers:

cd /etc/mongodb/
touch set0_db.conf
touch set1_db.conf
touch set2_db.conf

Each file should contain:

fork = true
port = 27017
dbpath=/var/lib/mongodb/dbs/set<index of this set>_db
logpath=/var/log/mongodb/set<index of this set>_db.log
logappend = true
journal = true
replSet = set<index of this set>
As usual, we start each mongod process using:

mongod --config set<index of this set>_db.conf
Last step: we need to initiate these 3 sets separately.

set0

rs.initiate({_id:'set0', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");

set1

rs.initiate({_id:'set1', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");

set2

rs.initiate({_id:'set2', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");
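Before adding the shards, it is worth confirming that each set has elected a primary. Connect to any member of a set and check its status from the mongo shell:

mongo --host xxx.xxx.xxx.xxx --port <port>
rs.status()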

Add Shards:
Now we can connect to ‘mongos’ and add our 3 Replica Sets as 3 shards.

mongo --host <domain name of mongos> --port <port of mongos running>
connecting to: <domain name of mongos>/test
mongos> sh.addShard("set0/<primary of set0>:<port>");
{ "shardAdded" : "set0", "ok" : 1 }
mongos> sh.addShard("set1/<primary of set1>:<port>");
{ "shardAdded" : "set1", "ok" : 1 }
mongos> sh.addShard("set2/<primary of set2>:<port>");
{ "shardAdded" : "set2", "ok" : 1 }

Enable database sharding:
In order to make use of sharding in MongoDB, we need to manually choose the database and collections that we want to be sharded.
Take our system as an example.
First connect to mongos,

mongo --host <host> --port <port>
Then type the following commands in mongo shell.
Here we enable sharding for the 'students' collection in the 'test' database.

use admin
sh.enableSharding("test")
sh.shardCollection("test.students", { "grades": 1 })
That’s it, we have successfully set up our MongoDB Sharding Cluster!

Verify Sharding:
Now you need to find out if your cluster is really working.
You can use the following code to verify the sharding we currently have.

mongo --host 198.211.98.146 --port 27017
use admin
db.runCommand( { listshards : 1 } );
You should get a result like the one below:

{
 "shards" : [
  {
   "_id" : "set0",
   "host" : "set0/198.211.100.130:27018,198.211.100.172:27017"
  },
  {
   "_id" : "set1",
   "host" : "set1/198.211.100.130:27017,198.211.100.158:27018"
  },
  {
   "_id" : "set2",
   "host" : "set2/198.211.100.158:27017,198.211.100.172:27018"
  }
 ],
 "ok" : 1
}

Other Settings

Copy DB
Sometimes, as we encountered once, you need to move one of your config servers to another machine.
In this case, we need to do the following things.
  • Shutdown all processes (mongod, mongos, config server).
  • Copy the data subdirectories (dbpath tree) from the config server to the new config servers.
  • Start the config servers.
  • Restart mongos processes with the new --configdb parameter.
  • Restart mongod processes.
You can use this command to copy a database from another server.

mongo --port 27020
use config
db.copyDatabase("config", "config", "xxx.xxx.xxx.xxx:27020");

Logrotate
Since MongoDB generates a lot of logs every day, we need a way to compress them and delete them after a period of time.
We created 2 crontab jobs to achieve this goal.
This script runs daily at midnight to collect the old logs and compress them:

#! /bin/sh
killall -SIGUSR1 mongod
killall -SIGUSR1 mongos # This line only applicable on swordfish
find /var/log/mongodb -type f \( -iname "*.log.*" ! -iname "*.gz" \) -exec gzip -f {} \;
This script runs on the first day of every month and removes all the compressed logs from the previous month:

#! /bin/sh
find /var/log/mongodb -type f -name "*.gz" -exec rm -f {} \;
We also need to add crontab entries for these two shell scripts:

crontab -e

0  0 * * * /path/to/your/mongodb_logrotate.sh
0 10 1 * * /path/to/your/mongodb_clearlog.sh

Deploy MMS Agent
We are now using 10gen's MMS as our monitoring system. In order to use it, we need to run their agent on our mongos server.
Here is how we set it up.
First download the agent from your hosts dashboard.
Then

# prereqs
sudo apt-get install python python-setuptools
sudo easy_install pip
sudo pip install pymongo

#set up agent
cd /path/to/your/dir
mkdir mms-agent
unzip name-of-agent.zip -d mms-agent
cd mms-agent
mkdir -p logs   # make sure the log directory exists before redirecting output to it
nohup python agent.py > logs/agent.log 2>&1 &
And we're finished!
The agent will auto-discover the other servers in your cluster. It still needs some manual work from you in the dashboard, but it is really helpful for monitoring the whole system in real time.