Monday, October 12, 2015

How to change the network interface name from enp0s3 to eth0 on CentOS 7


Step 1
# The ifconfig command is provided by the net-tools package on CentOS 7, so install it first, then check the current network interface information, type:
# yum install net-tools

[root@localhost Desktop]$ sudo ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.129  netmask 255.255.255.0  broadcast 192.168.42.255
        inet6 fe80::20c:29ff:fec7:25ae  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c7:25:ae  txqueuelen 1000  (Ethernet)
        RX packets 200948  bytes 253071365 (241.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56043  bytes 3420351 (3.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Step 2
# Edit the "/etc/sysconfig/grub" configuration file (a symlink to /etc/default/grub) using the "vim" command:
Before:

[root@localhost Desktop]$ sudo vim /etc/sysconfig/grub
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto  vconsole.keymap=us rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Then search for the "GRUB_CMDLINE_LINUX" line and append "net.ifnames=0 biosdevname=0" inside the quotes, like below:


GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto  vconsole.keymap=us rhgb quiet net.ifnames=0 biosdevname=0"
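Instead of editing the file by hand, the append can also be scripted with sed. This is a sketch exercised against a temporary copy; on a real system you would point it at /etc/sysconfig/grub (and back that file up first):

```shell
# Work on a temporary copy of the grub defaults file for this demonstration.
grub_file=$(mktemp)
cat > "$grub_file" <<'EOF'
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet"
EOF

# Append net.ifnames=0 biosdevname=0 just before the closing quote of
# the GRUB_CMDLINE_LINUX line.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 net.ifnames=0 biosdevname=0"/' "$grub_file"

cat "$grub_file"
```

The capture group keeps everything up to the closing quote, so the two parameters land inside the existing quoted string rather than after it.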

Step 3
# Use the "grub2-mkconfig" command to regenerate the grub configuration file, type:

[root@localhost Desktop]$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Warning: Please don't use old title `CentOS Linux, with Linux 3.10.0-123.el7.x86_64' for GRUB_DEFAULT, use `Advanced options for CentOS Linux CentOS Linux, with Linux 3.10.0-123.el7.x86_64' (for versions before 2.00) or `gnulinux-advanced-dbedd8fa-5d86-4ea0-8551-8444a48cd44f gnulinux-3.10.0-123.el7.x86_64-advanced-dbedd8fa-5d86-4ea0-8551-8444a48cd44f' (for 2.00 or later)
Found linux image: /boot/vmlinuz-0-rescue-3303e35a730a41e3b4e99b544acea205
Found initrd image: /boot/initramfs-0-rescue-3303e35a730a41e3b4e99b544acea205.img
done

Step 4
# Rename the "enp0s3" interface configuration file using the "mv" command, type:

$ sudo mv /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-eth0

Step 5
# Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0" configuration file and set the value of the "NAME" field to "eth0":

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=5ae10208-855b-41af-99e7-0673d3792d15
ONBOOT=yes
HWADDR=00:0C:29:C7:25:AE
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
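The NAME= change can also be done non-interactively. A hedged sketch, exercised here on a temporary copy rather than the real ifcfg file:

```shell
# Demonstrate the NAME= edit on a temporary copy of an ifcfg-style file.
conf=$(mktemp)
printf 'TYPE=Ethernet\nNAME=enp0s3\nONBOOT=yes\n' > "$conf"

# Replace the NAME value; on a real system the target would be
# /etc/sysconfig/network-scripts/ifcfg-eth0.
sed -i 's/^NAME=.*/NAME=eth0/' "$conf"
grep '^NAME=' "$conf"
```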

Step 6
# Reboot the system. After rebooting, use the "ifconfig" command to check the network interface information again:


[root@localhost Desktop]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.129  netmask 255.255.255.0  broadcast 192.168.42.255
        inet6 fe80::20c:29ff:fec7:25ae  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c7:25:ae  txqueuelen 1000  (Ethernet)
        RX packets 49  bytes 5285 (5.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76  bytes 8540 (8.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Done: the interface now appears as eth0.

Tuesday, October 6, 2015

Monthly Bandwidth Usage monitoring with Zabbix


Monitoring a server's used bandwidth is not something that comes integrated with Zabbix by default.

First of all we deploy vnstat on the target server(s). vnstat is a simple program to monitor network traffic:

[root@centos ~]# yum install vnstat

Then we need to initialize the vnstat database for the interface we want to monitor bandwidth on, e.g. for eth0 (change this to the interface that you want to monitor!):

[root@centos ~]# vnstat -u -i eth0

We are done with vnstat. After a while we should be able to start retrieving stats for the given interface, e.g. to get the monthly used bandwidth (you can also get daily, hourly and more; check the man page):

[root@centos ~]# vnstat -m

eth0  /  monthly
       month        rx      |     tx      |    total    |   avg. rate
    ------------------------+-------------+-------------+---------------
      Nov '14    201.01 GiB |   82.29 GiB |  283.30 GiB |  916.87 kbit/s
      Dec '14    282.30 GiB |  219.39 GiB |  501.69 GiB |    1.57 Mbit/s
      Jan '15    637.36 GiB |  521.07 GiB |    1.13 TiB |    3.63 Mbit/s
      Feb '15    608.20 GiB |  608.40 GiB |    1.19 TiB |    4.22 Mbit/s
      Mar '15    559.96 GiB |  419.99 GiB |  979.95 GiB |    3.07 Mbit/s
      Apr '15    420.36 GiB |  355.42 GiB |  775.78 GiB |    2.51 Mbit/s
      May '15    155.58 GiB |  110.97 GiB |  266.55 GiB |  834.81 kbit/s
      Jun '15    184.76 GiB |  122.08 GiB |  306.84 GiB |  993.05 kbit/s
      Jul '15    288.87 GiB |  143.55 GiB |  432.42 GiB |    1.35 Mbit/s
      Aug '15    268.43 GiB |  266.29 GiB |  534.71 GiB |    1.67 Mbit/s
      Sep '15     93.30 GiB |  235.18 GiB |  328.48 GiB |    1.06 Mbit/s
      Oct '15     15.69 GiB |    6.72 GiB |   22.40 GiB |  419.62 kbit/s
    ------------------------+-------------+-------------+---------------
    estimated     93.79 GiB |   40.18 GiB |  133.97 GiB |


With vnstat in place and working correctly, we now move to creating a Zabbix UserParameter that will retrieve the total bandwidth used in a given month and send it to Zabbix via the Zabbix agent installed on our server. Visit this link for more information on Zabbix UserParameters.

For this we first create a script that will get from vnstat the necessary information (so the total bandwidth used up to now in the current month): 

#!/bin/bash
# Current month's total bandwidth, normalized to MiB
i=$(vnstat --oneline | awk -F\; '{ print $11 }')
bandwidth_number=$(echo "$i" | awk '{ print $1 }')
bandwidth_unit=$(echo "$i" | awk '{ print $2 }')
case "$bandwidth_unit" in
KiB)    # bc truncates integer division by default, so set a scale
        bandwidth_number_MB=$(echo "scale=2; $bandwidth_number/1024" | bc)
    ;;
MiB)    bandwidth_number_MB=$bandwidth_number
    ;;
GiB)    bandwidth_number_MB=$(echo "$bandwidth_number*1024" | bc)
    ;;
TiB)    bandwidth_number_MB=$(echo "$bandwidth_number*1024*1024" | bc)
    ;;
esac
echo "$bandwidth_number_MB"


And now we add the aforementioned UserParameter to the Zabbix agent configuration file (in Debian it is located at /etc/zabbix/zabbix_agentd.conf). We add this line under the ####### USER-DEFINED MONITORED PARAMETERS ####### section:
UserParameter=system.monthlybandwidth,/home/zabbix/zabbix_total_month_bandwidth.sh

Please modify the script path and name according to your settings. Note that the name we have used for this new UserParameter is system.monthlybandwidth. Again, you can customize this according to your needs.

We are mostly done. 

We just need now to go to the Zabbix admin interface and add the new parameter as an item to the desired server or template. You will typically want to deploy this new parameter to a template so it gets automatically distributed to new servers using that template.

Here you can see the definition of the new item, named Bandwidth Month. We have also used a multiplier to convert the final unit to GB.

Now we can check how a graph looks for one server with this item configured:

As expected, we get a line rising each month from 0 to the total used bandwidth. This lets us spot sudden rises in bandwidth consumption, the maximum during the period for which we have data, and so on.

Monday, October 5, 2015

[Docker] x509: certificate signed by unknown authority - Docker

Issue:

# docker run hello-world
Unable to find image 'hello-world:latest' locally
Trying to pull repository docker.io/hello-world ... failed
Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority


Solution:

# echo -n | openssl s_client -connect index.docker.io:443 -showcerts | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p'

depth=1 C = IN, ST = Gujarat, L = Ahmedabad, O = Cyberoam, OU = Cyberoam Certificate Authority, CN = Cyberoam SSL CA_C202314364571, emailAddress = support@cyberoam.com
verify error:num=19:self signed certificate in certificate chain
verify return:0
DONE
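The sed filter's only job is to keep the PEM blocks from the openssl output. A small offline check with dummy data (no real certificate involved) shows the effect:

```shell
# Feed fake openssl-style output through the same sed range expression and
# confirm that only the certificate block survives.
input='depth=1 C = IN, O = Cyberoam
-----BEGIN CERTIFICATE-----
MIIBfakebase64payload
-----END CERTIFICATE-----
verify return:0'

echo "$input" | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p'
```

Only the three lines from BEGIN to END (inclusive) are printed; the depth/verify chatter is dropped.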

Save the certificate block printed above to a file (e.g. cyberoam_ssl_ca.pem) and put it in
/usr/share/pki/ca-trust-source/anchors/:

# cp cyberoam_ssl_ca.pem /usr/share/pki/ca-trust-source/anchors/

# update-ca-trust extract

# systemctl restart docker

# docker run hello-world

Thursday, September 24, 2015

How to Create Encrypted and Bandwidth-efficient Backups Using ‘Duplicity’ in Linux

Experience shows that you can never be too paranoid about system backups. When it comes to protecting and preserving precious data, it is best to go the extra mile and make sure you can depend on your backups if the need arises.
Create Encrypted Linux File System Backups
Even today, when some cloud and hosting providers offer automated backups for VPS’s at a relatively low cost, you will do well to create your own backup strategy using your own tools in order to save some money and then perhaps use it to buy extra storage or get a bigger VPS.
Sound interesting? In this article we will show you how to use a tool called Duplicity to back up and encrypt files and directories. In addition, using incremental backups for this task will help us save space.
That said, let’s get started.

Installing Duplicity

To install duplicity in Fedora-based distros, you will have to enable the EPEL repository first (you can omit this step if you’re using Fedora itself):
# yum update && yum install epel-release
Then run,
# yum install duplicity
For Debian and derivatives:
# aptitude update && aptitude install duplicity
In theory, many methods for connecting to a file server are supported, although only ssh/scp/sftp, local file access, rsync, ftp, HSI, WebDAV and Amazon S3 have been tested in practice so far.
Once the installation completes, we will exclusively use sftp in various scenarios, both to back up and to restore the data.
Our test environment consists of a CentOS 7 box (to be backed up) and a Debian 8 machine (backup server).

Creating SSH keys to access remote servers and GPG keys for encryption

Let’s begin by creating the SSH keys in our CentOS box and transfer them to the Debian backup server.
The commands below assume the sshd daemon is listening on port XXXXX on the Debian server. Replace AAA.BBB.CCC.DDD with the actual IP of the remote server.
# ssh-keygen -t rsa
# ssh-copy-id -p XXXXX root@AAA.BBB.CCC.DDD
Then you should make sure that you can connect to the backup server without using a password:
Create SSH Keys
Now we need to create the GPG keys that will be used for encryption and decryption of our data:
# gpg --gen-key
You will be prompted to enter:
  1. Kind of key
  2. Key size
  3. How long the key should be valid
  4. A passphrase
Create GPG Keys
To create the entropy needed for the creation of the keys, you can log on to the server via another terminal window and perform a few tasks or run some commands to generate entropy (otherwise you will have to wait for a long time for this part of the process to finish).
Once the keys have been generated, you can list them as follows:
# gpg --list-keys
List Generated GPG Keys
The string highlighted in yellow above is known as the public key ID, and it is a required argument for encrypting your files.

Creating a backup with Duplicity

To start simple, let's only back up the /var/log directory, with the exception of /var/log/anaconda and /var/log/sa.
Since this is our first backup, it will be a full one. Subsequent runs will create incremental backups (unless we add the full action, with no dashes, right after duplicity in the command below):
PASSPHRASE="YourPassphraseHere" duplicity --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7
Make sure you don't miss the double slash in the above command! It indicates an absolute path to a directory named /backups/centos7 on the backup box, which is where the backup files will be stored.
Replace YourPassphraseHere, YourPublicKeyIdHere and RemoteServer with the passphrase you entered earlier, the GPG public key ID, and the IP or hostname of the backup server, respectively.
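To run the backup unattended, a cron entry along these lines could be used. The schedule and the cron.d filename are assumptions, not part of the original setup, and the placeholders must be replaced as described above:

```shell
# /etc/cron.d/duplicity-backup (hypothetical): nightly incremental backup at 02:30.
# Placeholders (passphrase, key ID, host, port XXXXX) must be filled in as above.
30 2 * * * root PASSPHRASE="YourPassphraseHere" duplicity --encrypt-key YourPublicKeyIdHere --exclude /var/log/anaconda --exclude /var/log/sa /var/log scp://root@RemoteServer:XXXXX//backups/centos7
```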
Your output should be similar to the following image:
Create Backup using Duplicity
The image above indicates that a total of 86.3 MB was backed up into 3.22 MB at the destination. Let's switch to the backup server to check on our newly created backup:
Confirm Backup File
A second run of the same command yields a much smaller backup size and time:
Compress Backup

Restoring backups using Duplicity

To successfully restore a file, a directory with its contents, or the whole backup, the destination must not exist (duplicity will not overwrite an existing file or directory). To clarify, let’s delete the cron log in the CentOS box:
# rm -f /var/log/cron
Delete Cron Logs
The syntax to restore a single file from the remote server is:
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore filename sftp://root@RemoteHost//backups/centos7 /where/to/restore/filename
where,
  1. filename is the file to be extracted, with a relative path to the directory that was backed up
  2. /where/to/restore is the directory in the local system where we want to restore the file to.
In our case, to restore the cron main log from the remote backup we need to run:
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore cron sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/cron
The cron log should be restored to the desired destination.
Likewise, feel free to delete a directory from /var/log and restore it using the backup:
# rm -rf /var/log/mail
# PASSPHRASE="YourPassphraseHere" duplicity --file-to-restore mail sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7 /var/log/mail
In this example, the mail directory should be restored to its original location with all its contents.

Other features of Duplicity

At any time you can display the list of archived files with the following command:
# duplicity list-current-files sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7
Delete backups older than 6 months:
# duplicity remove-older-than 6M sftp://root@AAA.BBB.CCC.DDD:XXXXX//backups/centos7
Restore myfile inside directory gacanepa as it was 2 days and 12 hours ago:
# duplicity -t 2D12h --file-to-restore gacanepa/myfile sftp://root@AAA.BBB.CCC.DDD:XXXXX//remotedir/backups /home/gacanepa/myfile
In the last command, we can see an example of the usage of the time interval (as specified by -t): a series of pairs, each consisting of a number followed by one of the characters s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days, weeks, months, or years respectively).

Wednesday, July 29, 2015

PHP gettext extension missing (Zabbix)

PHP gettext extension missing (PHP configuration parameter --with-gettext)

PHP provides i18n support through its gettext extension (gettext is a set of tools and libraries that helps programmers and translators develop multi-language software), and the Zabbix frontend relies on it.

In this case PHP was compiled without the gettext extension, so it has to be added after the fact.

This error ("PHP gettext extension missing (PHP configuration parameter --with-gettext)") is mainly caused in one of two ways. First, PHP was compiled without the --with-gettext option.
Solutions are as follows:
Build gettext as a dynamic PHP extension module from the ext/gettext directory of the PHP source tree:
cd /path/to/php-source/ext/gettext
/usr/local/php/bin/phpize
./configure --with-php-config=/usr/local/php/bin/php-config
make && make install
Second, the system lacks the gettext-devel tool library, or gettext was compiled and installed but PHP could not find it at build time.
vi /usr/local/php/etc/php.ini
Find the extension options and add the following entry:
extension = "gettext.so"
Remember to add the extension path here if gettext.so is not in the default extension directory.

Restart the Apache service.
That's it.

Tuesday, June 30, 2015

How to Mount AWS S3 Bucket on CentOS/RHEL and Ubuntu using s3fs

S3FS is a FUSE (File System in User Space) based solution to mount Amazon S3 buckets. We can use system commands on such a mount just as with another hard disk in the system. On s3fs-mounted file systems we can simply run cp, mv, ls and the other basic Unix commands, just as on locally attached disks.
If you would rather access S3 buckets without mounting them on the system, use the s3cmd command-line utility to manage S3 buckets. s3cmd also provides faster upload and download speeds than s3fs, and is available for both Linux and Windows systems.

This article will help you to install S3FS and Fuse by compiling from source, and also help you to mount S3 bucket on your CentOS/RHEL and Ubuntu systems.

Step 1: Remove Existing Packages

First check whether any existing s3fs or fuse package is installed on your system. If so, remove it to avoid file conflicts.

CentOS/RHEL Users:
 # yum remove fuse fuse-s3fs

Ubuntu Users:
 $ sudo apt-get remove fuse
 

Step 2: Install Required Packages

After removing the above packages, we will install all the dependencies for fuse and s3fs. Install the required packages with the following commands.
 
CentOS/RHEL 7 Users:
 # yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake make
 # yum install openssl-devel
 # yum install s3fs-fuse
 
Ubuntu Users:
 $ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support

 

Step 3: Download and Compile the Latest s3fs from Git

As an alternative to the packaged s3fs-fuse, build the latest code from source:
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --prefix=/usr --with-openssl
make
sudo make install

Step 4: Download and Compile an s3fs Release Tarball

wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.84.tar.gz
tar zxvf v1.84.tar.gz
cd s3fs-fuse-1.84/
./autogen.sh
./configure --prefix=/usr
make
make install

s3fs --version

Step 5: Setup Access Key

In order to configure s3fs we need the Access Key and Secret Key of your Amazon S3 account. Get these security keys from AWS IAM.
# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
For system-wide use, write the keys to /etc/passwd-s3fs instead.
# chmod 600 ~/.passwd-s3fs
Note: Change AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your actual key values.
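A small offline sketch of the credentials-file setup, using placeholder keys and a temporary path, verifying the 600 permissions s3fs requires:

```shell
# Build the credentials file with placeholder keys (swap in real IAM keys and
# use ~/.passwd-s3fs or /etc/passwd-s3fs on a real system).
pw=$(mktemp)
echo 'AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY' > "$pw"
chmod 600 "$pw"

# s3fs refuses credential files readable by other users.
stat -c '%a' "$pw"
```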

 

Step 6: Mount S3 Bucket

Finally, mount your S3 bucket using the following set of commands. For this example, we are using the S3 bucket named mybucket and the mount point /s3mnt.

# mkdir /s3mnt
# chmod 777  /s3mnt
# s3fs mybucket /s3mnt   -o passwd_file=/etc/passwd-s3fs
 
If you hit any errors, enable debug mode:
# s3fs mybucket /s3mnt -o passwd_file=/etc/passwd-s3fs -o dbglevel=info -f -o curldbg

# mount permanently with an /etc/fstab entry:
mybucket /s3mnt  fuse.s3fs _netdev,allow_other,nonempty  0 0

That's it!
Now you can create and test some files and folders in the mount point.
 

Wednesday, April 22, 2015

How to Increase root volume on AWS instance

Increase root / volume size:
    Stop the instance
    Create a snapshot from the volume
    Create a new volume based on the snapshot increasing the size
    Check and note the current volume's device name (e.g. /dev/sda1)
    Detach current volume
    Attach the recently created volume to the instance, using the exact same device name
    Restart the instance
    Access via SSH to the instance and run fdisk /dev/sda
    Hit p to show current partitions
    Hit d to delete current partitions (if there are more than one, you have to delete one at a time)
NOTE: Don't worry data is not lost
    Hit n to create a new partition
    Hit p to set it as primary
    Hit 1 to set the first cylinder
    Set the desired new size (if left empty, the whole available space is used)
    Hit a to make it bootable
    Hit 1 and w to write changes
    Reboot instance
    Log via SSH and run resize2fs /dev/sda1
    Finally check the new space running df -h

That's it

How to add EBS Volume on AWS/VPC instance

ADD EBS Volume on AWS/VPC instance:

check partitions
# cat /proc/partitions

format drive with ext4
# mkfs.ext4 /dev/sda

make directory
# mkdir /newdrive

mount drive to new directory
# mount /dev/sda /newdrive
# cd /newdrive/
# ls

check disks
# df -ah

add device to fstab
# vi /etc/fstab
add
/dev/sda  /newdrive    ext4    noatime,nodiratime        0   0

# mount -a


That's it!

How to create extra swap space on linux machine

To create an 8 GB swap file on a Linux machine
stop swap first
# swapoff -a

then create the 8 GB swap file
# dd if=/dev/zero of=/var/swapdir/swapfile bs=1024 count=8388608
# mkswap /var/swapdir/swapfile
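The count value 8388608 above follows directly from the block size: with bs=1024, a swap file of N GiB needs N × 1024 × 1024 blocks. A tiny helper makes the arithmetic explicit:

```shell
# Blocks of 1024 bytes needed for a swap file of $1 GiB.
swap_blocks() { echo $(( $1 * 1024 * 1024 )); }

swap_blocks 8   # matches count=8388608 in the dd command above
```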

change ownership to root on swap file
# chown root:root /var/swapdir/swapfile

change permissions
# chmod 0600 /var/swapdir/swapfile

then start swap
# swapon /var/swapdir/swapfile

now need to create the swap file entry in fstab
# vi /etc/fstab
/var/swapdir/swapfile swap swap defaults 0 0

check swap
$ free -m

clear memory cache
# sync; echo 3 > /proc/sys/vm/drop_caches

That's it!