Wednesday, July 29, 2015

PHP gettext extension missing (Zabbix)

PHP gettext extension missing (PHP configuration parameter --with-gettext)

gettext is a set of tools and libraries that help programmers and translators develop multi-language software; PHP uses it to implement i18n (internationalization).

It turned out that the gettext extension was not included when PHP was originally compiled, so it has to be added afterwards.

Two things mainly cause the "PHP gettext extension missing (PHP configuration parameter --with-gettext)" error. First, PHP was compiled without the --with-gettext option.
Solutions are as follows:
Build the gettext extension as a dynamic module from the ext/gettext directory of the PHP source tree (adjust the source path below to wherever your PHP source is unpacked):
cd /path/to/php-source/ext/gettext
/usr/local/php/bin/phpize
./configure --with-php-config=/usr/local/php/bin/php-config
make && make install
Second, the system lacks the gettext-devel library, or the gettext module was compiled and installed but PHP cannot find it. In that case, enable it in php.ini:
vi /usr/local/php/etc/php.ini
Find extensions option, add the following entry:
extension = "gettext.so"
Remember to include the full path to the extension (or set extension_dir correctly) if gettext.so is not in PHP's default extension directory.

Restart the Apache service so PHP picks up the new module.
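To confirm the module is loaded, a quick check (the PHP binary path follows the /usr/local/php prefix used above; adjust if yours differs):
/usr/local/php/bin/php -m | grep -i gettext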
That's it.......

Tuesday, June 30, 2015

How to Mount AWS S3 Bucket on CentOS/RHEL and Ubuntu using s3fs

S3FS is a FUSE (File System in User Space) based solution for mounting Amazon S3 buckets. Once mounted, the bucket behaves like another disk in the system: on an s3fs-mounted file system we can use basic Unix commands such as cp, mv, and ls just as we would on locally attached disks.
If you want to access S3 buckets without mounting them, use the s3cmd command line utility to manage them. s3cmd also provides faster upload and download speeds than s3fs. See the s3cmd installation articles for Linux and Windows to get started.
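As a point of comparison, a minimal s3cmd sketch (assuming s3cmd is already configured with your keys via s3cmd --configure; the bucket name mybucket and file backup.tar.gz are just example names):
s3cmd ls s3://mybucket
s3cmd put backup.tar.gz s3://mybucket/
s3cmd get s3://mybucket/backup.tar.gz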

This article will help you to install S3FS and Fuse by compiling from source, and also help you to mount S3 bucket on your CentOS/RHEL and Ubuntu systems.

Step 1: Remove Existing Packages

First check whether an existing s3fs or fuse package is installed on your system. If one is installed, remove it to avoid file conflicts.

CentOS/RHEL Users:
 # yum remove fuse fuse-s3fs

Ubuntu Users:
 $ sudo apt-get remove fuse
 

Step 2: Install Required Packages

After removing the above packages, we will first install all the dependencies for fuse and s3fs. Use the following commands to install the required packages.
 
CentOS/RHEL 7 Users:
 # yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap git automake make
 # yum install openssl-devel
 # yum install s3fs-fuse
 
 Ubuntu Users:
 $ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support

 

Step 3: Download and Compile s3fs-fuse from Git

git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --prefix=/usr --with-openssl
make
sudo make install

Step 4: Download and Compile Latest S3FS

wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.84.tar.gz
tar zxvf v1.84.tar.gz
cd s3fs-fuse-1.84/
./autogen.sh
 ./configure --prefix=/usr
 make
 make install

 s3fs --version

Step 5: Setup Access Key

In order to configure s3fs, we need the Access Key and Secret Key of your AWS account. Get these security keys from AWS IAM.
# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
For system-wide use, put the same line in /etc/passwd-s3fs instead.
# chmod 600 ~/.passwd-s3fs
Note: Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.

 

Step 6: Mount S3 Bucket

Finally, mount your S3 bucket using the following set of commands. For this example, we are using the bucket name mybucket and the mount point /s3mnt.

# mkdir /s3mnt
# chmod 777  /s3mnt
# s3fs mybucket /s3mnt   -o passwd_file=/etc/passwd-s3fs
 
If you hit any errors, run s3fs in the foreground with debug output enabled:
s3fs mybucket /s3mnt -o passwd_file=/etc/passwd-s3fs -o dbglevel=info -f -o curldbg

# mount permanently with an /etc/fstab entry
mybucket /s3mnt  fuse.s3fs _netdev,allow_other,nonempty  0 0
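To test the fstab entry without rebooting, a quick check (uses the mount point created above):
# mount -a
# df -h /s3mnt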

That's it...........
Now you can create and test some files and folders in the mount point.
 

Wednesday, April 22, 2015

How to Increase root volume on AWS instance

Increase root / volume size:
    Stop the instance
    Create a snapshot from the volume
    Create a new volume based on the snapshot increasing the size
    Check and remember the current volume's device name (i.e. /dev/sda1)
    Detach the current volume
    Attach the newly created volume to the instance, using the exact same device name
    Restart the instance
    Access via SSH to the instance and run fdisk /dev/sda
    Hit p to show current partitions
    Hit d to delete current partitions (if there are more than one, you have to delete one at a time)
NOTE: Don't worry data is not lost
    Hit n to create a new partition
    Hit p to set it as primary
    Hit 1 to set the first cylinder
    Set the desired new size (if left empty, the whole available space is used)
    Hit a to make it bootable
    Hit 1 and w to write changes
    Reboot instance
    Log in via SSH and run resize2fs /dev/sda1
    Finally, check the new space by running df -h (see the command sketch below)
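A minimal sketch of the in-instance commands, assuming the root device is /dev/sda1 with an ext3/ext4 filesystem (the fdisk step is interactive; the keys are the ones listed above):
# fdisk /dev/sda      (p, d, n, p, 1, <enter>, a, 1, w)
# reboot
# resize2fs /dev/sda1
# df -h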

That's it

How to add EBS Volume on AWS/VPC instance

ADD EBS Volume on AWS/VPC instance:

check partitions
# cat /proc/partitions

format the drive with ext4 (here /dev/sda stands for the new EBS volume's device name; check /proc/partitions above for the actual name, e.g. /dev/xvdf on many instances)
# mkfs.ext4 /dev/sda

make directory
# mkdir /newdrive

mount drive to new directory
# mount /dev/sda /newdrive
# cd /newdrive/
# ls

check disks
# df -ah

add device to fstab
# vi /etc/fstab
add
/dev/sda  /newdrive    ext4    noatime,nodiratime        0   0

# mount -a
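To double-check that the fstab entry mounted correctly, a quick verification:
# df -h /newdrive
# mount | grep newdrive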


That's it!

How to create extra swap space on linux machine

To create an 8 GB swap file on a Linux machine
stop swap first
# swapoff -a

then create the 8 GB swap file (bs=1024 bytes × count=8388608 blocks = 8 GiB); make sure the directory exists first
# mkdir -p /var/swapdir
# dd if=/dev/zero of=/var/swapdir/swapfile bs=1024 count=8388608
# mkswap /var/swapdir/swapfile

change ownership to root on swap file
# chown root:root /var/swapdir/swapfile

change permissions
# chmod 0600 /var/swapdir/swapfile

then start swap
# swapon /var/swapdir/swapfile

now need to create the swap file entry in fstab
# vi /etc/fstab
/var/swapdir/swapfile swap swap defaults 0 0
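To confirm the fstab entry works without a reboot, a quick check (using the paths above):
# swapoff /var/swapdir/swapfile
# swapon -a
# swapon -s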

check swap
$ free -m

clear memory cache
sync; echo 3 > /proc/sys/vm/drop_caches

That's it!

Friday, December 12, 2014

How to Install a MongoDB Cluster (Guide)




1: Add the MongoDB Repository
vi /etc/yum.repos.d/mongodb.repo

[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1

Then exit and save the file with the command :wq

2: Install MongoDB
yum install mongo-10gen mongo-10gen-server
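A quick check that the packages installed correctly (the exact version output will vary):
mongod --version
mongo --version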


Config server
The config server processes are mongod instances that store the cluster’s metadata.

Replica Set
A MongoDB replica set is a cluster of mongod instances that replicate amongst one another and ensure automated failover.

Mongos
mongos, short for “MongoDB Shard,” is a routing service for MongoDB shard configurations: it processes queries from the application layer and determines the location of the data in the sharded cluster in order to complete these operations.

Example:



Server Setup
Add a new user
Create a new user named mongodb on each server; this user will be the one that starts the MongoDB processes.

adduser mongodb
su - mongodb

Prepare directories:
We need to prepare all the data and log directories with proper privileges.

# Commands used to set up the server.
# Creating a directory for data path.
sudo mkdir /var/lib/mongodb/dbs
sudo chown mongodb:mongodb -R /var/lib/mongodb/dbs
cd /etc/
sudo mkdir mongodb
sudo chown mongodb:mongodb -R /etc/mongodb
sudo cp mongodb.conf mongodb/
sudo mv mongodb.conf mongodb.conf.default
We did this on all the servers in the MongoDB cluster.

Configuration Servers:
Make a configuration file for mongodb’s config server.

cd /etc/mongodb/
vi config_db.conf
The configuration file for the mongod running on each config server should contain:

fork=true
dbpath=/var/lib/mongodb/dbs/config_db
logpath=/var/log/mongodb/config_db.log
logappend=true
port=27020
Last step: start the config server with

sudo mongod --configsvr --config /etc/mongodb/config_db.conf
Do the same thing on all 3 of our config servers.
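A quick sanity check that each config server is answering (assuming port 27020 from the config above):
mongo --port 27020 --eval "db.runCommand({ ping: 1 })"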


Mongos
First we need to create a configuration file for mongos.

cd /etc/mongodb/
vi mongos.conf
The content of the mongos configuration file is:

fork = true
port = 27017
configdb = xxx.xxx.xxx.xxx:port,xxx.xxx.xxx.xxx:port,xxx.xxx.xxx.xxx:port # Here you should put the host:port of your 3 configuration servers.
logpath=/var/log/mongodb/mongos.log
Now we start our mongos process.

mongos --config /etc/mongodb/mongos.conf

Replica Sets
First we need to create configuration files for our mongod.
We have 3 replica sets and each set has 3 mongod processes running, one of which is an arbiter. We create 3 configuration files on each of our data servers:

cd /etc/mongodb/
touch set0_db.conf
touch set1_db.conf
touch set2_db.conf

Each file should contain the following (replace <index of this set> with 0, 1 or 2):

fork = true
port = 27017
dbpath=/var/lib/mongodb/dbs/set<index of this set>_db
logpath=/var/log/mongodb/set<index of this set>_db.log
logappend = true
journal = true
replSet = set<index of this set>
And as usual, we start the mongod process using command:

mongod --config set<index of this set>_db.conf
Last step, we need to initialize these 3 sets separately.

set0

rs.initiate({_id:'set0', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");

set1

rs.initiate({_id:'set1', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");

set2

rs.initiate({_id:'set2', members:[{_id: 0, host: 'xxx.xxx.xxx.xxx:port'}, {_id: 1, host: 'xxx.xxx.xxx.xxx:port'}]});
rs.addArb("xxx.xxx.xxx.xxx:port");
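To verify each set after initiation, connect to one of its members with the mongo shell and check the replica set status (a quick check):
rs.status()
rs.conf()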

Add Shards:
Now we can connect to ‘mongos’ and add our 3 Replica Sets as 3 shards.

mongo --host <domain name of mongos> --port <port of mongos running>
connecting to: <domain name of mongos>/test
mongos> sh.addShard("set0/<primary of set0>:<port>");
{ "shardAdded" : "set0", "ok" : 1 }
mongos> sh.addShard("set1/<primary of set1>:<port>");
{ "shardAdded" : "set1", "ok" : 1 }
mongos> sh.addShard("set2/<primary of set2>:<port>");
{ "shardAdded" : "set2", "ok" : 1 }

Enable database sharding:
In order to make use of sharding in MongoDB, we need to manually choose the databases and collections that we want to be sharded.
Take our system as an example.
First connect to mongos,

mongo --host <host> --port <port>
Then type the following commands in mongo shell.
Here we enable sharding for the collection ‘students’ in the database ‘test’, using ‘grades’ as the shard key.

use admin
sh.enableSharding("test")
sh.shardCollection("test.students", { "grades": 1 })
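As a quick usage sketch (the document shape and values below are just illustrative; they assume a numeric grades field matching the shard key):
use test
db.students.insert({ name: "alice", grades: 85 })
db.students.getShardDistribution()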
That’s it, we have successfully set up our MongoDB Sharding Cluster!

Verify Sharding:
Now you need to find out if your cluster is really working.
You can use the following code to verify the sharding we currently have.

mongo --host 198.211.98.146 --port 27017
use admin
db.runCommand( { listshards : 1 } );
You should see a result like the one below:

{
 "shards" : [
  {
   "_id" : "set0",
   "host" : "set0/198.211.100.130:27018,198.211.100.172:27017"
  },
  {
   "_id" : "set1",
   "host" : "set1/198.211.100.130:27017,198.211.100.158:27018"
  },
  {
   "_id" : "set2",
   "host" : "set2/198.211.100.158:27017,198.211.100.172:27018"
  }
 ],
 "ok" : 1
}

Other Settings

Copy DB
Sometimes, as we encountered once, we need to move one of our config servers to another machine.
In this case, we need to do the following things.
  • Shutdown all processes (mongod, mongos, config server).
  • Copy the data subdirectories (dbpath tree) from the config server to the new config servers.
  • Start the config servers.
  • Restart mongos processes with the new --configdb parameter.
  • Restart mongod processes.
You can use this command to copy a database from another server.

mongo --port 27020
use config
db.copyDatabase("config", "config", "xxx.xxx.xxx.xxx:27020");

Logrotate
Since MongoDB generates a lot of logs every day, we need a way to compress them and delete them after a period of time.
So we created 2 crontab jobs to achieve this goal.
This script runs daily at midnight (per the crontab entries below) to rotate the old logs and compress them:

#! /bin/sh
killall -SIGUSR1 mongod
killall -SIGUSR1 mongos # This line only applicable on swordfish
find /var/log/mongodb -type f \( -iname "*.log.*" ! -iname "*.gz" \) -exec gzip -f {} \;
This script runs on the first day of every month (at 10:00 per the crontab entries below) and removes all the compressed logs from the previous month:

#! /bin/sh
find /var/log/mongodb -type f -name "*.gz" -exec rm -f {} \;
We also need to add crontab entries for these two scripts:

crontab -e

0  0 * * * /path/to/your/mongodb_logrotate.sh
0 10 1 * * /path/to/your/mongodb_clearlog.sh

Deploy MMS Agent
We are now using 10gen’s MMS as our monitoring system. In order to use it, we need to run their agent on our mongos server.
Here is how we set it up.
First download the agent from your hosts dashboard.
Then

# prereqs
sudo apt-get install python python-setuptools
sudo easy_install pip
sudo pip install pymongo

#set up agent
cd /path/to/your/dir
mkdir mms-agent
unzip name-of-agent.zip -d mms-agent
cd mms-agent
nohup python agent.py > logs/agent.log 2>&1 &
And we finished!
The agent will auto-discover the other servers in your cluster. It still needs some manual work in the dashboard, but it is really helpful for monitoring the whole system in real time.