Thursday, July 6, 2017

How to install Web Log Analyzer - GoAccess

GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal or through your browser.

It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.

Installation instructions:

$ wget http://tar.goaccess.io/goaccess-1.1.1.tar.gz
$ tar -xzvf goaccess-1.1.1.tar.gz
$ cd goaccess-1.1.1/
$ ./configure --enable-geoip --enable-utf8
$ make
# make install
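Note: the build needs the ncurses development headers, and the GeoIP headers when --enable-geoip is used. On CentOS these can usually be installed first with something like this (GeoIP-devel typically comes from EPEL; package names may vary):

# yum install ncurses-devel GeoIP-devel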

Command to analyze a log file:
# goaccess -f /var/log/httpd/access_log



You will be prompted to select the log format. If you are using a default server with the standard log file output, select the NCSA combined log format.

Press the Enter key and GoAccess will begin to analyze your log file. Once it is fully parsed, you will be redirected to the following command line interface.
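If you prefer to skip the interactive prompt, recent GoAccess versions also accept a predefined format name on the command line (a sketch, assuming the standard Apache combined log):

# goaccess -f /var/log/httpd/access_log --log-format=COMBINED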

Configure unattended startup

Find the parameters that correspond to your log format and uncomment them in /usr/local/etc/goaccess.conf:

vi /usr/local/etc/goaccess.conf
time-format %H:%M:%S
date-format %d/%b/%Y
log-format %h %^[%d:%^] "%r" %s %b

Start GoAccess with HTML Generation
To enable live reporting, simply issue the following command. This will output statsreport.html at the root of the /var/www/html directory. You may output it in any folder served by your Apache instance so that you can view the HTML file:

# goaccess -f /var/log/httpd/access_log -a -o /var/www/html/statsreport.html



For real-time HTML output, add the --real-time-html option:
# goaccess -f /var/log/httpd/access_log -a -o /var/www/html/statsreport.html  --real-time-html

In order to view the generated HTML report, simply navigate to statsreport.html using your web browser.




To generate a CSV file:
# goaccess access.log --no-csv-summary -o report.csv

That's it....

Friday, February 24, 2017

Linux commands and tricks



1. Running the last command as root
sudo !!

2. To find your external IP address.
host myip.opendns.com resolver1.opendns.com
lynx --dump http://ipecho.net/plain
curl ifconfig.me

3. Empty a file without removing it
> file.txt

4. Execute a command without saving it in the history
<space>command
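Note: in bash this usually only works when HISTCONTROL includes ignorespace (or ignoreboth), for example:

export HISTCONTROL=ignoreboth
 ls -l /tmp     # leading space: this command is not saved to history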

5. A slick way to copy or back up a file before you edit it.
cp filename{,.bak}
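The shell's brace expansion turns this into two arguments; for example:

cp /etc/ssh/sshd_config{,.bak}    # expands to: cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak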

6. Traceroute is a nice command but how about a single network diagnostic tool that can combine traceroute with ping? mtr is your command.
mtr efytimes.com

7. Clear your terminal's screen
ctrl-l

8. List the commands you use most often
history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head

9. Save the file you edited in vim/vi without the required permissions
:w !sudo tee %
------------------------------------------------------
1. List all the files that are in current and sub directories

$ find
.
./abc.txt
./subdir
./subdir/how.php
./cool.php

You can also use:

$ find .
$ find . -print

2. Search through a particular directory or path

Check for files in test directory and sub directories.

$ find ./test
./test
./test/abc.txt
./test/subdir
./test/subdir/how.php
./test/cool.php

Alternatively, if you want to search by name, then use:

$ find ./test -name abc.txt
./test/abc.txt

You could easily make a mistake and end up searching through the entire file system, so be careful and press Ctrl+C if you happen to make this mistake.

3. Limit how many levels the find command should go

When traversing directories, you can choose how many levels the find command should go within directories.

$ find ./test -maxdepth 2 -name '*.php'
./test/subdir/how.php
./test/cool.php

$ find ./test -maxdepth 1 -name '*.php'
./test/cool.php

This is useful when you want to do a limited search in a directory and not the entire directory.

4. Invert match

If you know which files to exclude from your search, or want to find files that do not follow a certain pattern, use this.

$ find ./test -not -name '*.php'
./test
./test/abc.txt
./test/subdir

Here, it will exclude files with .php extensions.

5. Combine multiple criteria for search

You can also put in multiple criteria in some cases.

$ find ./test -name 'abc*' ! -name '*.php'
./test/abc.txt
./test/abc

This shows you files that have abc in their name and do not have .php extension.

You can also use the OR operator by using the '-o' switch.

$ find -name '*.php' -o -name '*.txt'
./abc.txt
./subdir/how.php
./abc.php
./cool.php

This will show you files that have .php or .txt extensions.

6. Search only files or only directories

You can also search for only files or only directories.

For files

$ find ./test -type f -name abc*
./test/abc.txt

For directories

$ find ./test -type d -name abc*
./test/abc

7. Search through more than one directory at once

Searching inside two separate directories.

$ find ./test ./dir2 -type f -name abc*
./test/abc.txt
./dir2/abcdefg.txt

8. Find files that are hidden

In Linux, hidden file names begin with a period, so searching for names that start with a period finds all hidden files.

$ find ~ -type f -name ".*"

9. Find files with specific permissions

Use the -perm option if you need to find files that have specific permissions.

$ find . -type f -perm 0664
./abc.txt
./subdir/how.php
./abc.php
./cool.php

10. Find files that are readable by their owner

This finds all such files under /etc (one level deep):

$ find /etc -maxdepth 1 -perm /u=r
/etc
/etc/thunderbird
/etc/brltty
/etc/dkms
/etc/phpmyadmin
... output truncated ... 

tar command

The following command creates a new tar archive:

$ tar cvf archive_name.tar dirname/

Use this when you need to extract from an existing archive:

$ tar xvf archive_name.tar

This is the command that is used to view a tar archive:

$ tar tvf archive_name.tar

grep command

This command searches for a given string within a file:

$ grep -i "the" demo_file

This command prints a matched line and three lines after it:

$ grep -A 3 -i "example" demo_text

Recursively search for a string in all files:

$ grep -r "ramesh" *

find command

Use this to find files when the filename is known:

# find -iname "MyCProgram.c"

This command is used to execute command on files that have been found using find:

$ find -iname "MyCProgram.c" -exec md5sum {} \;

Empty files in the directory:

# find ~ -empty

ssh command
This command allows you to login to a remote host.
# ssh -l jsmith remotehost.example.com

Use this to debug ssh clients:
# ssh -v -l jsmith remotehost.example.com

For displaying the ssh client version:
$ ssh -V

sed command
Convert the DOS file format into Unix format:
$ sed 's/.$//' filename

Print the contents of a file in reverse order:
$ sed -n '1!G;h;$p' thegeekstuff.txt

Add a line number for the non-empty lines in a particular file
$ sed '/./=' thegeekstuff.txt | sed 'N; s/\n/ /'

awk command
Remove duplicate lines:
$ awk '!($0 in array) { array[$0]; print }' temp

Print all lines from a file which have the same uid and gid:
$ awk -F ':' '$3==$4' passwd.txt

Printing specific fields from a particular file:
$ awk '{print $2,$5;}' employee.txt

vim command
Go to the file's 143rd line.
$ vim +143 filename.txt

Go to the first found match of the file specified:
$ vim +/search-term filename.txt

diff command
Ignoring white spaces when comparing files:
# diff -w name_list.txt name_list_new.txt

sort command
Ascending order:
$ sort names.txt

Descending order:
$ sort -r names.txt

Sort a file (passwd) by the third field:
$ sort -t: -k 3n /etc/passwd | more

export command
Use this for viewing oracle related environment variables:
$ export | grep ORACLE
declare -x ORACLE_BASE="/u01/app/oracle"
declare -x ORACLE_HOME="/u01/app/oracle/product/10.2.0"
declare -x ORACLE_SID="med"
declare -x ORACLE_TERM="xterm"

Export environment variable:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0
------------------------------------------
Tar files from multiple directories to a single tar file
cat listoffiles.txt | xargs tar czvf yourbackupname.tgz
or
tar zcvf yourbackupname.tgz $(cat listoffiles.txt)
cat filePath_2004.txt | xargs cp -t compressedfiles/filePath_2004/
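Note: with a very long file list, xargs may invoke tar more than once and later runs would overwrite the archive; tar's own -T/--files-from option avoids this, for example:

tar czvf yourbackupname.tgz -T listoffiles.txt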
--------------------

Replace a string in all files in the current directory, then list the files that contain the new string:
perl -pi -e 's/\[PDRrr_v3]/\[PDR_v3_pilot]/g' *
grep -rinl "\[PDR_v3_pilot" *
--------------------------
Random password generator
-------------------------
# egrep -ioam1 '[a-z0-9]{10}' /dev/urandom

Trace a configure run to debug build problems:
# sh -x ./configure ... configure_options ... 

network copy with ssh and tar
# ssh bsmith@apple tar cf - -C /home/bsmith . | tar xvf - 

download only with yum command
# yum update httpd -y --downloadonly --downloaddir=/opt
# rpm -Uvh /opt/*.rpm

Undo your changes even after quitting the VIM editor
:set undofile
:set undodir=/tmp

This is to be done every time you start editing a file. In case you need the configuration to be there for all files that you open in VIM, create a file called '.exrc' or '.vimrc' in your $HOME directory. In my case, it is /myhome.
Open the just created file and add the following commands:
# vi /myhome/.exrc
set undofile
set undodir=/tmp
Save and close the file.
:wq
From now onwards, the Undo history is maintained in the background for all files that you edit with VIM. 


Check the PHP version of the web server from a browser
Create a file named phpinfo.php in the web root:
<?php
// Show all information, defaults to INFO_ALL
phpinfo();
// Show just the module information.
// phpinfo(8) yields identical results.
phpinfo(INFO_MODULES);
?>
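The PHP version can also be checked directly from the command line:
# php -v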

Library dependency missing error - clean the yum cache and rebuild metadata:
-----------------
# yum clean all   
# yum clean metadata
# yum list all   
# yum grouplist


Check your external IP address:
# lynx http://whatismyip.com

Check port 443:
# netstat -anp | grep 443


To check and flush postfix mail queue:
to check
# mailq
to flush queue
# postfix flush
to delete queue  
# postsuper -d ALL

To trace/debug configure errors:
# sh -x ./configure

Starting apache fails (could not bind to address 0.0.0.0:80)
# fuser -k -n tcp 80
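To see which process is holding the port before killing it (assuming net-tools is installed):
# netstat -tlnp | grep ':80'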


Check CPU usage from command line:
# top -b -n 1 | grep "Cpu(s)\:"
# ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
# sar
# mpstat


Search and replace text in multiple files in a directory:
#perl -pi -e 's/PDR_v3/PDRrr_v3/g' *
#find . -name "*.*" -print | xargs sed -i 's/PDR_v3/PDR_v3_qa/g'
#grep -rl 'PDR_v3' ./ | xargs sed -i 's/PDR_v3/PDR_v3_qa/g'


To find repeated words/lines in a text file
# cat /var/httpd/logs/web-access_log | sort | uniq -c | sort -nr | grep '11/Jul' | less


Device eth0 does not seem to be present:
# rm -f /etc/udev/rules.d/70-persistent-net.rules 
# reboot

Delete files older than 60 days
# find . -mtime +60 | xargs rm
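If filenames may contain spaces, a safer variant is:
# find . -type f -mtime +60 -print0 | xargs -0 rm -f
or simply use find's built-in -delete action.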

Saturday, January 14, 2017

How to increase or reduce volume size on CentOS


Increase virtual hard disk space on an ext4 file system

1. Identify the partition type:
# fdisk -l
Device Boot         Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200  1953523711   975712256   8e  Linux LVM

Disk /dev/mapper/cl-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

2. Check Disk information
# df -ah
/dev/mapper/cl-root   50G  6.4G   44G  13% /


3. Add/create the additional volume, then rescan the partition table with this command:
# partprobe -s

# fdisk -l
Device Boot         Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200  1953523711   975712256   8e  Linux LVM
/dev/sda3      1953523711  1953524102    10485760   8e  Linux LVM

4. Create a physical volume on the new partition
# pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created

5. Confirm the name of the current volume group
# vgdisplay
--- Volume group ---
  VG Name               cl
VG Size               53.7 GiB

6. Extend the 'cl' volume group by adding physical volume /dev/sda3
# vgextend cl /dev/sda3
  Volume group "cl" successfully extended

7. Scan all disks for physical volumes
# pvscan
PV /dev/sda2   VG cl              lvm2 [53.7 GiB / 4.00 MiB free]
PV /dev/sda3   VG cl              lvm2 [10.00 GiB / 4.00 MiB free]
Total: 2 [63.7 GiB] / in use: 2 [63.7 GiB] / in no VG: 0 [0   ]


8. Confirm the path of the logical volume
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cl/root

9. Extend the logical volume with lvextend command
# lvextend /dev/cl/root /dev/sda3
  Extending logical volume root to 63.7 GiB
  Logical volume root successfully resized
Alternatively, you can extend the logical volume by a specific size:
# lvextend -L +10G /dev/cl/root
 
10. Resize the file system using the resize2fs command (for ext-based file systems).
# resize2fs /dev/cl/root
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/cl/root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 2
Performing an on-line resize of /dev/cl/root to 7576576 (4k) blocks.
The filesystem on /dev/cl/root is now 7576576 blocks long.
 
Note : if you are using XFS file system (default on RHEL7/CentOS7) you can extend the file system with this command:
# xfs_growfs /dev/cl/root
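Tip: recent LVM versions can grow the file system in the same step with lvextend's -r/--resizefs flag, for example:
# lvextend -r -l +100%FREE /dev/cl/root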




Reduce logical volume device size

To reduce a logical volume we need to be careful and take a backup of any data first.
1. Check file system information
# lvs
LV    VG  Attr        LSize   Pool Origin Data%  Meta%  
data  cl  -wi-ao----  876.63g                                                    
root  cl   -wi-ao----  50.00g                                                    
swap  cl  -wi-ao----  3.88g                                                    

# df -ah
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-data  877G  8.4G  868G   1% /data

2. Unmount the mount point of the volume which needs to be reduced
# umount -v /data

3. Check for file-system errors using this command:
# e2fsck -ff /dev/mapper/cl-data
e2fsck 1.42.9 (28-Dec-2013)
/dev/mapper/cl-data

Note: The check must pass all 5 passes of the file-system check; if it does not, there may be an issue with your file system.

To check hard drive related information and logs run this command
# smartctl -a /dev/sda

4. Now reduce the file system
# resize2fs /dev/mapper/cl-data 10G

Reduce the logical volume using this command:
# lvreduce -L -8G /dev/mapper/cl-data
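Note: recent LVM versions can combine the file-system shrink and the LV reduction with lvreduce's -r/--resizefs flag, for example:
# lvreduce -r -L 10G /dev/mapper/cl-data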


5. Resize the file system to match the reduced logical volume
# resize2fs /dev/cl/data

6. Mount the file system back to same mount point
# mount  /dev/cl/data  /data

7. Check the size of the partition
# lvdisplay
  --- Logical volume ---
LV Path                /dev/cl/data
LV Size   866.73 GiB

That's it.

Saturday, January 7, 2017

IPTables basic security



To list rules in iptables:
# iptables -L

Allow multiple ports for a network:
# iptables -A INPUT -s 123.176.0.0/255.255.0.0 -p tcp -m multiport --dport 22,1521,80 -j ACCEPT

To rate-limit new TCP connections (SYN flood protection):
# iptables -A INPUT -p tcp --syn -m limit --limit 5/s -i eth0 -j ACCEPT

To restrict ping (test with: ping -c 3 -i .005 ipaddress):

# iptables -A INPUT -s 192.168.1.80 -p icmp --icmp-type echo-request -j REJECT/DROP/ACCEPT   (restrict ping from a single IP)

# iptables -A INPUT -s 0.0.0.0/0.0.0.0 -p icmp --icmp-type echo-request -j REJECT/DROP/ACCEPT   (restrict ping from any host)

# iptables -A INPUT -s 192.168.0.0/255.255.0.0 -p icmp --icmp-type echo-request -j DROP   (restrict ping from a specific network segment)

To flush all rules (or a specific chain):
# iptables -F

Allow or reject ports for specific IP addresses:
# iptables -A INPUT -s 192.168.1.80 -j REJECT/DROP/ACCEPT

# iptables -A INPUT -s 192.168.1.80 -p tcp --dport 80 -j REJECT

# iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j REJECT

# iptables -A INPUT -s 123.176.47.0/255.255.255.0 -p tcp --dport 22 -j ACCEPT



To delete a rule from an iptables chain (by rule number):
# iptables -D INPUT 5


To block a specific IP address:
# iptables -A OUTPUT -d 67.215.241.234 -j DROP

# iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP   (drop new connections that are not SYN)

# iptables -A INPUT -f -j DROP   (drop fragmented packets)
# iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP   (drop XMAS packets)
# iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP   (drop NULL packets)
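To make the rules survive a reboot on CentOS (assuming the iptables-services init scripts are in use):
# iptables-save > /etc/sysconfig/iptables
or
# service iptables save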


Rules to route/NAT IP addresses:
# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to 1.2.3.1

# iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -j SNAT --to 123.176.40.60
# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to 1.2.3.1:1-1024


# iptables -t nat -A PREROUTING -d 1.2.4.1 -j DNAT --to 192.168.1.50
# iptables -t nat -A PREROUTING -s 1.2.5.17 -d 1.2.4.2 -p tcp --dport 80 -j DNAT --to 192.168.1.100

# iptables -t nat -A PREROUTING -d 1.2.4.2 -p tcp --dport 65521 -j DNAT --to 192.168.1.100:22

# iptables -t nat -A PREROUTING -d 1.2.4.5 -p tcp --dport 80 -j DNAT --to 192.168.1.200


Transparent proxying is a way to force users through a proxy server even if their browsers are configured not to use one. You probably know the benefits of using a proxy server: bandwidth savings for cached pages and access control (e.g. denying downloads of files that have dangerous extensions).
We can apply transparent proxying to all or some users to prevent them from bypassing the proxy whenever they want. This is especially useful on children's computers to deny them access to sexually explicit sites, for example.
On our Linux router, we installed a Squid proxy server to cache some content from the Web. We also want to deny access to sex sites and malicious downloads. The users are not very pleased about using our proxy server, and they usually remove it from their browser configuration. We can force them to use the proxy server anyway. If the proxy server listens on port 3128, we do the following:

# iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 -j REDIRECT --to-port 3128

If we want to allow the manager (who has the IP address 192.168.1.50) to bypass the proxy server, we do so like this:

# iptables -t nat -I PREROUTING -s 192.168.1.50 -p tcp --dport 80 -j ACCEPT

Because the rule is inserted at the top of the PREROUTING chain, it matches before the REDIRECT rule, and the manager's traffic is then SNATed as usual in the POSTROUTING chain.

Saturday, October 8, 2016

How to install Chef server, workstation and node environment



 
Edit hosts file entries on 3 servers:
# vim /etc/hosts
192.168.0.100 chefserver.example.com
192.168.0.101 chefwork.example.com
192.168.0.102 chefnode.example.com  
 
 
 
Chef server installation:

Download Chef server RPM package and install
# rpm -ivh chef-server-11.1.6-1.el6.x86_64.rpm

configure chef server
# chef-server-ctl reconfigure

check service status
# chef-server-ctl status


Chef workstation installation:

Download and install chef RPM package and install
# rpm -ivh chef-12.0.3-1.x86_64.rpm

verify package
# rpmquery chef

create chef directory
# mkdir /root/.chef
# cd /root/.chef

copy chef validation certificates from chef server
# scp root@chef-server:/etc/chef-server/admin.pem .
# scp root@chef-server:/etc/chef-server/chef-validator.pem .
# scp root@chef-server:/etc/chef-server/chef-webui.pem .

fetch ssl certificates
# knife ssl fetch

verify ssl certificates
# knife ssl check

configure workstation and details
# knife configure -i
/root/.chef/knife.rb
https://chefserver.example.com:443
/root/.chef/admin.pem
/root/.chef/chef-validator.pem

verify client list
# knife client list
chef-validator
chef-webui

verify user list
# knife user list
admin
user

Chef node installation:

Download chef package and install
# rpm -ivh chef-12.0.3-1.x86_64.rpm
# rpmquery chef

create chef directory
# mkdir /etc/chef
# cd /etc/chef

copy chef validation key from chef server
# scp root@chef-server:/etc/chef-server/chef-validator.pem .

Fetch chef SSL certificates
# knife ssl fetch -s https://chefserver.example.com
# ll /root/.chef/trusted_certs
chefserver_example_com.crt
# knife ssl check -s https://chefserver.example.com
# cd /etc/chef

create a file to validate with chef server
# vim client.rb
log_level :info
log_location STDOUT
chef_server_url "https://chefserver.example.com:443"
trusted_certs_dir "/root/.chef/trusted_certs"

Add node to server (node side)
# chef-client -S https://chefserver.example.com -K /etc/chef/chef-validator.pem

Verify client on workstation
# knife client list
chef-validator
chef-webui
chefnode.example.com
# knife user list
admin
user

Now open browser and type chef server url
https://chefserver.example.com
login with default login credentials, then change password and verify node exists.


Go to workstation and create sample apache cookbook.
# knife cookbook create apache
# cd /var/chef/cookbooks/apache
# ll

Edit recipe default.rb and add
# vim recipes/default.rb
package 'httpd' do
 action :install
end
cookbook_file '/var/www/html/index.html' do
 source 'index.html'
end

template 'httpd.conf' do
 path '/etc/httpd/conf/httpd.conf'
 source 'httpd.conf.erb'
end

service 'httpd' do
 action [:restart, :enable]
end
:wq

# cd ../apache/files/default
# vim index.html
<html>
<title>Welcome to chef training by infostork </title>
<h1> Welcome to Chef </h1>
<h2> Using templates and attributes </h2>
</html>
:wq

Create template
# cd ../../attributes/
# vim default.rb
default['apache']['Listen'] = '80'

# cd ../templates/default/
# cp /etc/httpd/conf/httpd.conf httpd.conf
# mv httpd.conf httpd.conf.erb
# vim httpd.conf.erb
Listen <%= node['apache']['Listen'] %>

Test cookbook
# knife cookbook test apache

Upload cookbook to chef server
# knife cookbook upload apache
Uploaded 1 cookbook

List cookbooks and verify
# knife cookbook list
apache 0.1.0

Add the cookbook to the node's run-list
# knife node run_list add chefnode.example.com apache

You can also do it in GUI mode:
Go to the node tab, drag the 'apache' cookbook recipe to the run-list and save.
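You can also verify the node's run-list from the workstation (assuming knife is configured as above):
# knife node show chefnode.example.com -r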


Apply the run-list to node (node-side)
# cat /etc/apache

now apply the run-list with
# chef-client
# cat /etc/apache

open browser and type node url
http://chefnode.example.com 
Welcome to chef

That's it, the run-list has been applied to the node.

note: path to find cookbooks on chef server
# cd /var/opt/chef-server/bookshelf/data/bookshelf/
# grep -R -i "httpd.conf.erb" *
<path to recipe file>
# cat <path to recipe file>

Thursday, September 29, 2016

How to setup Jenkins for Continuous Integration and build automation on CentOS 7


Introduction
Jenkins is a popular open source tool to perform continuous integration and build automation. The basic functionality of Jenkins is to execute a predefined list of steps, e.g. to compile java source code and build a JAR from the resulting classes. The trigger for this execution can be time or event based. For example, every 20 minutes or after a new commit in a Git repository.
 
Merging code. Coordinating releases. Determining build status. Maintaining updates. If you know the frustration of these processes well enough that the words themselves threaten a headache, you might want to look into Jenkins CI.
Maintaining any project, especially one developed by several team members concurrently and one that might incorporate many functions, components, languages, and environments, is a struggle at the best of times — and at the worst requires a superhuman feat to stay afloat.
Jenkins is here to help. Fundamentally a solution for continuous integration — i.e. the practice of merging all code continually into one central build — Jenkins acts as a headquarters for the operations of your project. It can monitor, regulate, compare, merge, and maintain your project in all its facets.
At its core, Jenkins does two things: automated integration and external build monitoring. This means that it can greatly simplify the process of keeping your code maintainable and keep a close and untiring eye on the quality of your builds, ensuring you don’t end up with nasty surprises when a few of your developers merge their code before it’s ready.

Prerequisites
To follow this tutorial, you will need the following:
·         CentOS 7 Droplet
·         A non-root user with sudo privileges

All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo.

Step 1 — Installing Jenkins
There are two basic ways to install Jenkins on CentOS: through a repository, or repo, and via the WAR file. Installing from a repo is the preferred method, and it's what we'll outline first.
You'll need Java to run Jenkins (either method), so if your server doesn't yet have Java, install it with:
# sudo yum -y install java

In general, if you need a service or tool but you're not sure what package provides it, you can always check by running:
# yum whatprovides service

Where service is the name of the service or tool you require.
Installing from the Repo
Now, run the following to download Jenkins from the RedHat repo:
# sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
The wget tool downloads files into the filename specified after the "-O" flag (that's a capital 'O', not a zero).

Then, import the verification key using the package manager RPM:
# sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

Finally, install Jenkins by running:
# sudo yum install jenkins

That's it! You should now be able to start Jenkins as a service:
# sudo systemctl start jenkins.service
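To have Jenkins start automatically at boot, also enable the service:
# sudo systemctl enable jenkins.service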

Once the service has started, you can check its status:
# sudo systemctl status jenkins.service

This will give you a fairly lengthy readout with a lot of information on how the process started up and what it's doing, but if everything went well, you should see two lines similar to the following:
Loaded: loaded (/etc/systemd/system/jenkins.service; disabled)
Active: active (running) since Tue 2015-12-29 00:00:16 EST; 17s ago
This means that the Jenkins service completed its startup and is running. You can confirm this by visiting the web interface at http://ip-of-your-machine:8080.
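If the page does not load, the firewall may be blocking port 8080; on CentOS 7 with firewalld, one way to open it is:
# sudo firewall-cmd --permanent --add-port=8080/tcp
# sudo firewall-cmd --reload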

Step 2 — Creating Users
Once Jenkins is running smoothly, establishing good security is the next step. From here on out, your exact actions will largely depend on your purposes for Jenkins. However, the following are general guidelines of how Jenkins can best be set up and used, along with some examples to pave the way.
Jenkins provides settings for security and role management, useful for controlling access and defining user actions. We’ll visit that briefly to introduce those concepts. To get to those settings, return to the Jenkins interface via your browser once your service is running (http://ip-of-your-machine:8080). You will see a menu on the left – choose Manage Jenkins from within that. This will take you to a page containing a number of options for customization. You may also notice an an alert at the top: Unsecured Jenkins allows anyone on the network to launch processes on your behalf. Consider at least enabling authentication to discourage misuse. This is Jenkins’ directive to get you to introduce some element of security to your system.















 







The first step to take here is to go to Configure Global Security, near the top of the list of links on the Manage Jenkins page. Check the option box for Enable security to bring up a group of options for this purpose. There are any number of ways to configure security on Jenkins – you can read the in-depth explanation in the Standard Security Setup section of the Use Jenkins documentation.
The most straightforward of these options, and the one we will lay out today, has Jenkins use its own database to store user configurations. Under the Access Control section that appeared when we flagged the checkbox, select Jenkins' own user database. Briefly, the other options are to link Jenkins to existing Unix users and groups, to use an organization-wide login (LDAP option), or to allow a Java servlet to manage access. Other options can be added through plugins (we’ll discuss plugins in a bit).
Whether you should allow new users to sign up largely depends on your own needs. In general, however, it pays to restrict access, and allowing users to sign up as they wish can allow a level of openness that can potentially be dangerous. To restrict this, deselect the checkbox marked Allow users to sign up. Once this setting has been turned off, only administrators can create new accounts. In a moment, you'll supply administrative privileges for a user you'll create, and we'll go into detail on adding new users, as well.
Under Authorization, select the Matrix-based security option. This allows some fine-tuning of the controls without resorting to complex setups. You'll see a user named Anonymous is already present. An anonymous user is anybody from anywhere, even when they're not logged in, which is why by default the anonymous user has no abilities. Since this is the initial setup of the Jenkins instance, you must give this user full permissions: there are no users other than anonymous right now, and you're not logged in, so turning off anonymous permissions would effectively cut you off from accessing Jenkins at all.
Use the small button to the right of the Anonymous row to select all permissions. Next, use the User/group to add input field to specify a new user for which to add permissions. Note that this does not actually create a user, but rather specifies permissions for the user you will create shortly.
Normally, you would create a new user first and then specify permissions for them in this part of the form. Since no user exists yet, you'll set up permissions and then create the user.
Enter a username and press Add. Due to a known bug, it is recommended that you keep the usernames lowercase. Give the new user all permissions the same way you did for the anonymous user. This essentially sets up a new administrator.
When you're done, press Apply and then Save.
You will be taken automatically to a signup page, from which you can create a new account. The username of the account you create should correspond to the one for which you specified permissions earlier:




When you finish, you should find yourself automatically logged in.
Return to the security page (Manage Jenkins -> Configure Global Security) and scroll down to the security matrix. Now that you've created an administrative user, you can restrict the permissions for the anonymous user. Deselect all the permissions in the anonymous row, and then click Apply and Save. Your new user will now be the only user with access to Jenkins.
If you turned off the automatic sign up earlier, you might need to manually create additional new users. Here's how:
Return to the Manage Jenkins page, scroll down to near the bottom and click on Manage Users. On the left you'll see a sidebar with links; click on Create User. Enter the information for the new user the same way as you created the first user, and click Sign up. You'll be redirected to the list of users, which will now include the new user. This user will have no permissions, so you will need to repeat the permissions process, going to Configure Global Security, using the User/group to add field to add a row to the matrix, specifying permissions, and clicking Apply and Save. For simplicity's sake, if you have multiple users to create, create them all before moving on to adding permissions.
When creating new users, keep in mind that restrictiveness can be a major security asset. You can learn more about the specific ins and outs of matrix-based security in the Matrix-based Security section of the Use Jenkins documentation.
Typically, the next step is to assign roles to your users, controlling their exact abilities. We won’t go into details in this article, but this is a good article on the subject. Be sure to save your changes after you assign roles.

Step 3 — Installing Plugins
Once Jenkins is installed, minimally configured, and reasonably secured, it's time to make it fit your needs. As found when it is first installed, Jenkins has relatively few abilities. In fact, Jenkins typifies a credo of many software developers: do one thing, and do it well. Jenkins "does one thing" by acting as a middleman for your software projects and “does it well” by providing plugins.
Plugins are add-ons that allow Jenkins to interact with a variety of outside software or otherwise extend its innate abilities. As with many areas of the Jenkins setup, the exact plugins you install will be significantly dependent on your projects.
From the main left hand side menu in Jenkins, click Manage Jenkins -> Manage Plugins. The page you land on shows plugins that are already installed but need updating – you can perform this easily by selecting the plugins you want to update and clicking the button at the bottom.

If you click on Available from this page, you will be taken to a colossal list of available plugins. Obviously, you don't want to install all possible plugins, so the next question is how to select those you will need.
As mentioned, your choice in this matter will depend on your needs and goals. Fortunately, the Jenkins wiki provides a nice rundown of plugins by topic.
This list is definitely worth perusing, but no matter your project, there are a few plugins which you almost certainly should include. Here are a few — some generic, some specific:

Source control
Git, SVN, and Team Foundation Server are some of the more common source control systems. All three of these have plugins in the Jenkins list, and others exist for less common systems as well. If you don't know what source control is, you should really learn about it and start incorporating it in your projects. Be sure to install the plugin for your source control system, so Jenkins can run builds through it and control tests.

Copy Artifact
This plugin allows you to copy components between projects, easing the pain of setting up similar projects if you lack a true dependency manager.

Throttle Concurrent Builds
If you have multiple builds running which might introduce a conflict (due to shared resources, etc), this will easily allow you to alleviate this concern.

Dependency Graph Viewer
A nifty plugin providing a graphic representation of your project dependencies.

Jenkins Disk Usage
Jenkins may be fairly lightweight, but the same can't always be said for the projects with which it integrates. This plugin lets you identify how much of your computing resources any of your jobs are consuming.

Build tools
If your project is large, you probably use a build manager, such as Maven or Ant. Jenkins provides plugins for many of these, both to link in their basic functionality and to add control for individual build steps, project configuration, and many other aspects of your builds.
Reporting
While Jenkins provides its own reports, you can extend this functionality to many reporting tools.

Additional Authentication
If the default Jenkins abilities for security don't suit you, there are plenty of plugins to extend this – from Google logins, to Active Directory, to simple modifications of the existing security.

In general, if your project requires a certain tool, search the plugin list page on the wiki for the name of it or for a keyword regarding its function – chances are such a plugin exists, and this is an efficient way to find it.
Once you have selected those plugins you want to install on the Available tab, click the button marked Download now and install after restart.
Now that Jenkins is up and running the way you want it, you can start using it to power your project integration. Jenkins' capabilities are nearly endless within its domain, but the following example should serve to demonstrate both the extent of what Jenkins can do and the beginnings of how to get a Jenkins job started.

Step 4 — Creating a Simple Project
There are a lot of interesting uses you can get out of Jenkins, and even playing around with the settings can be informative. To get started, though, it helps to understand how to set up a basic task. Follow the example in this section to learn how to establish and run a straightforward job.
From the Jenkins interface home, select New Item. Enter a name and select Freestyle project.



This next page is where you specify the job configuration. As you'll quickly observe, there
are a number of settings available when you create a new project. Generally, one of the more important controls is to connect to a source repo. For purposes of this introductory example, we'll skip that step.
On this configuration page you also have the option to add build steps to perform extra actions like running scripts.




This will provide you with a text box in which you can add whatever commands you need. Use this to run various tasks like server maintenance, version control, reading system settings, etc.
We'll use this section to run a script. Again, for demonstration purposes, we'll keep it extremely simple.
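For instance, a minimal "Execute shell" build step (purely illustrative) might be:

echo "Build #$BUILD_NUMBER running on $(hostname)"
uname -a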



If you want, you can add subsequent build steps as well. Keep in mind that if any segment or individual script fails, the entire build will fail.
You can also select post-build actions to run, such as emailing the results to yourself.
Save the project, and you'll be taken to its project overview page. Here you can see information about the project, including its build history, though there won’t be any of that at the moment since this is a brand-new project.





Click Build Now on the left-hand side to start the build. You will momentarily see the build history change to indicate it is working. When done, the status icon will change again to show you the results in a concise form.
To see more information, click on that build in the build history area, whereupon you’ll be taken to a page with an overview of the build information:





The Console Output link on this page is especially useful for examining the results of the job in detail — it provides information about the actions taken during the build and displays all the console output. Especially after a failed build, this can be a useful place to look.
If you go back to Jenkins home, you'll see an overview of all projects and their information, including status (in this case there's only the one):





Status is indicated two ways, by a weather icon (on the home page dashboard, seen above) and by a colored ball (on the individual project page, seen below). The weather icon is particularly helpful as it shows you a record of multiple builds in one image.
In the image above, you see clouds, indicating that some recent builds succeeded and some failed. If all of them had succeeded, you'd see an image of a sun. If all builds had recently failed, there would be a poor weather icon.
These statuses have corresponding tooltips with explanations on hover and, coupled with the other information in the chart, cover most of what you need in an overview.
You can also rebuild the project from here by clicking (Build Now).

Of course, implementing a full-scale project setup will involve a few more steps and some fine-tuning, but it’s clear that without much effort, you can set up some very useful, very pragmatic monitors and controls for your projects. Explore Jenkins, and you’ll quickly find it to be an invaluable tool.

Conclusion
It's highly worthwhile to seek out other tutorials, articles, and videos — there are plenty out there, and the wealth of information makes setting up project integration with Jenkins practically a breeze. The tutorials hosted by the Jenkins team are worth a look.
In particular, bridging the gap between basics and fully fledged projects is a great way to improve your Jenkins skills. Try following these examples as a way to ease that transition.
Additionally, many templates exist for common types of projects, such as PHP applications and Drupal, so chances are strong you won’t even need to set up everything from scratch. So go out there, learn all you dare about Jenkins, and make your life that much easier!