Monday, August 31, 2009

Install a Linux Mail Server in 5 Minutes

iRedMail is a shell script that lets you quickly deploy a full-featured mail solution in less than 5 minutes on CentOS 5.x, Debian 5.0.1 (Lenny), and Ubuntu (it supports both i386 and x86_64). Its goal is to make Linux mail server installation and configuration simple and easy. iRedMail supports both OpenLDAP and MySQL as backends for storing virtual domains and users. This tutorial shows how to use MySQL as the backend.

The mail server components: http://code.google.com/p/iredmail/wiki/Main_Components

The discussion forum: http://www.iredmail.org/forum/

1/Preliminary Note

In this tutorial I use:

Hostname: mail.test.vn
Admin account: postmaster@test.vn
Mail domain: test.vn
Mail delivery (mailboxes) path: /home/vmail/domains

These settings might differ for you, so you have to replace them where appropriate.
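iRedMail expects the server to have a fully qualified hostname before the installer runs. A hedged sketch of the relevant CentOS files, using the example names above (the IP 192.168.0.10 is a placeholder for your server's address):

```
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=mail.test.vn

# /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.0.10   mail.test.vn mail
```

After editing, `hostname -f` should print mail.test.vn.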

Requirements

Install CentOS 5.x. I suggest a minimal install; make sure Apache, PHP, and MySQL are not installed (you can remove them with yum if they are). Yum must be working, because the installation pulls packages from the CentOS repositories.

Installation

Download the iRedMail script:

wget http://iredmail.googlecode.com/files/iRedMail-0.4.0.tar.bz2

tar xjf iRedMail-0.4.0.tar.bz2

Run the script to download all mail-server-related RPM packages (it only fetches packages not shipped on the RHEL/CentOS ISOs):

cd iRedMail-0.4.0/pkgs/
sh get_all.sh

Run the script to install:

cd ..
sh iRedMail.sh


Step 1: Welcome page.

Step 2: Mail delivery (mailboxes) path; all emails will be stored in this directory.

Step 3: Choose the backend for storing virtual domains and users. Note: choose the one you are familiar with. Here we use MySQL as the example.

Step 4: Set the MySQL 'root' password.

Step 5: Set the MySQL 'vmailadmin' password. Note: vmailadmin is used to manage all virtual domains and users, so you don't need MySQL root privileges.

Step 6: Set the first virtual domain, e.g. test.vn, botay.com, etc.

Step 7: Set the admin user for the first virtual domain, e.g. postmaster.

Step 8: Set the password for the admin user.

Step 9: Set the first normal user, e.g. www.

Step 10: Set the password for the normal user.

Step 11: Enable SPF validation and DKIM signing/verification, or not.

Step 12: Enable the managesieve service, or not.

Step 13: Enable the POP3, POP3S, IMAP, and IMAPS services, or not.

Step 14: Choose your preferred webmail program.

Step 15: Choose optional components. It's recommended to choose them all.

Step 16: If you chose PostfixAdmin above, set a global admin user; it can manage all virtual domains and users.

Step 17: If you chose Awstats as the log analyzer, you will be prompted for a username and password.

Step 18: Set the mail alias address for the operating system's root user.

Step 19: Drink coffee :D and wait a few minutes.

Step 20: Reboot and enjoy.
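The mailbox location follows from the delivery path chosen in Step 2. A small sketch of the mapping from address to mailbox directory (the flat domain/user/Maildir layout is an assumption here; iRedMail's actual scheme can differ between versions):

```shell
#!/bin/sh
# Hedged sketch: map a mail address to its mailbox directory under the
# delivery path chosen in Step 2. The flat <domain>/<user>/Maildir
# layout is an assumption -- check your iRedMail version's scheme.
mailbox_path() {
    addr=$1
    base=${2:-/home/vmail/domains}
    user=${addr%@*}        # part before the @
    domain=${addr#*@}      # part after the @
    echo "$base/$domain/$user/Maildir"
}
```

For example, `mailbox_path www@test.vn` prints /home/vmail/domains/test.vn/www/Maildir.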

Step 21: After the reboot, create a mailbox with PostfixAdmin:

https://mail.test.vn/postfixadmin



After logging in, I choose Virtual List -> Add Mailbox.

Step 22: Create a mailing list

I choose Virtual List -> Add Alias.

Step 23: Test the configuration and accounts

Configure an account in Outlook Express and check mail.




You can also use webmail to check mail: https://mail.test.vn/mail/



In addition, you can build a clustered mail server with MySQL replication, or configure everything manually following http://www.postfix.org/.


Friday, August 28, 2009

SYNC DATA USING RSYNC

Configure Rsync to copy files.

The following example is based on an environment where HostA is [192.168.0.19] and HostB is [192.168.0.20].

[1] Install xinetd first. It's necessary on HostA.


[root@www ~]#yum -y install xinetd

[root@www ~]#vi /etc/xinetd.d/rsync

# default: off
# description: The rsync server is a good addition to an ftp server, as it \
#       allows crc checksumming etc.
service rsync
{
        disable         = no    # changed from yes
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}

[root@www ~]#/etc/rc.d/init.d/xinetd start
Starting xinetd:[ OK ]
[root@www ~]#chkconfig xinetd on

[2] Config for HostA. This example based on a configuration to copy files under /var/www/html to HostB.


[root@www ~]#vi /etc/rsyncd.conf

[site]                  # module name
path = /var/www/html    # directory to copy
hosts allow = 192.168.0.20
hosts deny = *
list = true
uid = root
gid = root

[3] Config for HostB.
[root@lan ~]#vi /etc/rsync_exclude.lst

# directories or files you don't want to copy
test
test.txt

[4] Run Rsync.

[root@lan ~]#rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst 192.168.0.19::site /home/backup

# add this to cron if you'd like to run rsync periodically

[root@lan ~]#crontab -e

00 06 * * * rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst 192.168.0.19::site /home/backup


LOAD BALANCE WEB SERVERS USING POUND

This example is based on the environment below.

(1) cluster.test.vn [192.168.0.17] Pound server
(2) www.test.vn [192.168.0.18] Web server #1
(3) www2.test.vn [192.168.0.21] Web server #2

In this example, the Pound server listens for HTTP requests; requests for jpg or gif files are forwarded to server (2), and all other requests are forwarded to server (3). It's also necessary to configure the gateway router so that HTTP requests reach the Pound server first.
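The jpg/gif-versus-everything-else split can be sanity-checked locally with an equivalent regex. This is just a sketch; the $ anchor makes it slightly stricter than an unanchored Pound URL pattern:

```shell
#!/bin/sh
# Sketch: classify a request path the way the two Service rules do.
# jpg/gif requests -> web server #1, everything else -> web server #2.
route() {
    if echo "$1" | grep -Eq '\.(jpg|gif)$'; then
        echo "192.168.0.18"
    else
        echo "192.168.0.21"
    fi
}
```

For example, `route /images/logo.jpg` prints 192.168.0.18 and `route /index.html` prints 192.168.0.21.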

[1] Install and configure Pound

[root@cluster ~]# yum -y install pound # or download the pound RPM from http://rpm.pbone.net
[root@cluster ~]#useradd -s /sbin/nologin -d /root pound
[root@cluster ~]#vi /etc/pound.cfg


# an example
# see "man pound" if you'd like to know more

# Global settings
# run as this user
User "pound"
# run as this group
Group "pound"
# log level (max = 5)
LogLevel 1
# backend check interval in seconds
Alive 30
# run as a daemon
Daemon 1

# Pound listener settings
ListenHTTP
    # IP of the Pound server
    Address 192.168.0.17
    # listen port
    Port 80
End

# Backend server #1: requests for jpg or gif files
Service
    URL ".*\.(jpg|gif)"
    BackEnd
        # backend server's IP
        Address 192.168.0.18
        # backend port
        Port 80
    End
End

# Backend server #2: all other requests
Service
    URL ".*"
    BackEnd
        # backend server's IP
        Address 192.168.0.21
        # backend port
        Port 80
    End
End
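Pound can also terminate SSL in front of the same backends. A hedged fragment, assuming a PEM file containing the private key and certificate at /etc/pound/pound.pem (the path and file are assumptions):

```
# SSL listener (key + certificate concatenated in one PEM file)
ListenHTTPS
    Address 192.168.0.17
    Port 443
    Cert "/etc/pound/pound.pem"
End
```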


[root@cluster ~]#vim /etc/init.d/pound

#!/bin/bash
#
# pound: Starting Pound
#
# chkconfig: 345 98 91
# description: HTTP/HTTPS reverse-proxy and load-balancer
# processname: pound

. /etc/rc.d/init.d/functions

pound="/usr/sbin/pound"
lockfile="/var/lock/subsys/pound"
prog="pound"
RETVAL=0

start() {
    echo -n $"Starting $prog: "
    daemon $pound
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $lockfile
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $pound
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $lockfile
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        status $pound
        ;;
    *)
        echo "Usage: $prog {start|stop|restart|status}"
        exit 1
        ;;
esac

exit $?


[root@cluster ~]#chmod 755 /etc/init.d/pound
[root@cluster ~]#/etc/init.d/pound start
Starting pound: starting...[ OK ]
[root@cluster ~]#chkconfig --add pound
[root@cluster ~]#chkconfig pound on

[2] Verify whether the load is balanced. Upload a jpg or gif file to Web Server #1 and create an HTML file on Web Server #2 that references the file on Web Server #1.

[root@www ~]#vim /var/www/html/index.html


[3] Access it with a web browser. Pound works normally.


Thursday, August 27, 2009

High-Availability HTTP using Heartbeat

I/ Install Heartbeat

I set up 2 systems as cluster servers in this example. Their environment is as below; each has 2 NICs.

(1) www1.test.vn [eth0:192.168.0.21] [eth1:10.0.0.21]
(2) www2.test.vn [eth0:192.168.0.22] [eth1:10.0.0.22]

[1] Install Heartbeat first. This is necessary on both systems.

[root@www1 ~]# yum -y install heartbeat #install heartbeat by yum
[root@www1 ~]# vi /etc/ha.d/authkeys # create the authentication key

auth 1
1 crc

[root@www1 ~]# chmod 600 /etc/ha.d/authkeys
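The crc method provides integrity checking only, no real authentication; on a network that isn't fully trusted, a keyed hash is safer. A hedged alternative for /etc/ha.d/authkeys (the passphrase is a placeholder):

```
auth 1
1 sha1 ReplaceWithYourOwnSecret
```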

[2] Config for server (1).

[root@www1 ~]# vi /etc/ha.d/ha.cf

crm on
# debug log
debugfile /var/log/ha-debug
# log file
logfile /var/log/ha-log
# syslog facility
logfacility local0
# heartbeat interval (seconds)
keepalive 2
# how long until a node is declared dead (seconds)
deadtime 30
# how long until a ping node is declared dead (seconds)
deadping 40
# warn after this long without a heartbeat (seconds)
warntime 10
# deadtime for the first heartbeat after boot (seconds)
initdead 60
# UDP port for heartbeat traffic
udpport 694
# interface and IP address of the other host
ucast eth1 10.0.0.22
# fail back automatically when the primary returns
auto_failback on
# node names (must match "uname -n")
node www1.test.vn
node www2.test.vn
respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s -a default_ping_set

[3] Config for server (2). The only difference is the ucast line.

[root@www1 ~]#vi /etc/ha.d/ha.cf
crm on
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
deadping 40
warntime 10
initdead 60
udpport 694
# interface and IP address of another Host

ucast eth1 10.0.0.21
auto_failback on
node www1.test.vn
node www2.test.vn
respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s -a default_ping_set

[4] Start Heartbeat on both servers.

[root@www1 ~]#/etc/rc.d/init.d/heartbeat start
Starting High-Availability services: [ OK ]
[root@www1 ~]# chkconfig heartbeat on

[5] Run crm_mon on both servers; if the following result is shown, Heartbeat is running normally. This completes the basic configuration of Heartbeat.

[root@www1 ~]# crm_mon -i 3
Defaulting to one-shot mode
You need to have curses available at compile time to enable console mode
============
Last updated: Sun Jun 15 05:04:34 2008
Current DC: www2.test.vn (f8719a77-70b4-4e5f-851b-dafa7d65e2d3a2)
2 Nodes configured.
0 Resources configured.
============
Node: www2.test.vn (f8719a77-70b4-4e5f-851b-dafa7d65e2d3a2): online
Node: www1.test.vn (2bbd6408-ec01-4b8c-bb8e-20723ee7af3a99): online

II/ Configure the 2 web servers for the cluster (httpd is also needed)

The environment of the 2 web servers is as below. In addition, I set a virtual IP address [192.168.0.100].

(1) www1.test.vn [eth0:192.168.0.21] [eth1:10.0.0.21]
(2) www2.test.vn [eth0:192.168.0.22] [eth1:10.0.0.22]
(3) cluster.test.vn [Virtual IP:192.168.0.100]

[1] Configure as below on both hosts. If httpd is running, stop it, because it will be controlled by Heartbeat.
[root@www1 ~]#/etc/rc.d/init.d/heartbeat stop
Stopping High-Availability services:[ OK ]
[root@www1 ~]# cd /var/lib/heartbeat/crm
[root@www1 crm]#rm -f cib.xml.*
[root@www1 crm]#vi cib.xml
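A rough, hedged sketch of a Heartbeat 2.x cib.xml that defines the group_apache group with the IPaddr and apache resources; the ids and attribute names here are assumptions, so consult the Heartbeat documentation for the exact schema:

```
<cib>
  <configuration>
    <crm_config/>
    <nodes/>
    <resources>
      <group id="group_apache">
        <primitive id="ipaddr" class="ocf" provider="heartbeat" type="IPaddr">
          <instance_attributes id="ipaddr_attr">
            <attributes>
              <nvpair id="ipaddr_ip" name="ip" value="192.168.0.100"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="apache" class="ocf" provider="heartbeat" type="apache">
          <instance_attributes id="apache_attr">
            <attributes>
              <nvpair id="apache_conf" name="configfile" value="/etc/httpd/conf/httpd.conf"/>
            </attributes>
          </instance_attributes>
        </primitive>
      </group>
    </resources>
    <constraints/>
  </configuration>
  <status/>
</cib>
```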










[root@www1 crm]# cd
[root@www1 ~]# vi /etc/httpd/conf/httpd.conf
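What matters in httpd.conf is that Apache answers requests on the virtual IP once Heartbeat brings it up. A minimal hedged fragment (the ServerName value is the example hostname; listening on all addresses is one simple way to cover the virtual IP):

```
# listen on all addresses so the node holding the
# virtual IP 192.168.0.100 can serve requests
Listen 80
ServerName www1.test.vn:80
```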










[root@www1 ~]# /etc/rc.d/init.d/heartbeat start
Starting High-Availability services: [ OK ]

[2] Run crm_mon again after some time has passed; if the following result is shown, it's OK: httpd is running on the primary server.

[root@www1 ~]#crm_mon -i 3
Defaulting to one-shot mode
You need to have curses available at compile time to enable console mode
============
Last updated: Sun Jun 15 05:58:18 2008
Current DC: www2.test.vn (f8719a77-70b4-4e5f-851b-dafa7d65e2d3a2)
2 Nodes configured.
1 Resources configured.
============
Node: www1.test.vn (2bbd6408-ec01-4b8c-bb8e-20723ee7af3a99): online
Node: www2.test.vn (f8719a77-70b4-4e5f-851b-dafa7d65e2d3a2): online
Resource Group: group_apache
    ipaddr (heartbeat::ocf:IPaddr): Started www1.test.vn
    apache (heartbeat::ocf:apache): Started www1.test.vn
[3] Make a test page on both servers and access the virtual IP. The primary server replies normally, as below.




[4] Stop Heartbeat on the primary server and verify that failover works.

[root@www1 ~]# /etc/rc.d/init.d/heartbeat stop
Stopping High-Availability services:[ OK ]

Access the virtual IP address you set; the running server switches over normally, as below.

[5] Start Heartbeat again on the primary server and verify that it fails back (auto_failback is on).

[root@www1 ~]#/etc/rc.d/init.d/heartbeat start
Starting High-Availability services:[ OK ]



Similarly, you can use Heartbeat to cluster FTP and other services.