Mail queue management – Postfix

Using qshape:

qshape ships with the Postfix source distribution under auxiliary/qshape/qshape.pl (and in many distributions' Postfix packages); an older copy can also be fetched from:

ftp://ftp.kfki.hu/pub/packages/mail/postfix/experimental/postfix-2.2-20050222-newdb-nonprod/auxiliary/qshape/qshape.pl

For example, in the output below we see the top 10 lines of the (mostly forged) sender domain distribution for captured spam in the “hold” queue:

$ qshape -s hold | head
                         T  5 10 20 40 80 160 320 640 1280 1280+
                 TOTAL 486  0  0  1  0  0   2   4  20   40   419
             yahoo.com  14  0  0  1  0  0   0   0   1    0    12
  extremepricecuts.net  13  0  0  0  0  0   0   0   2    0    11
        ms35.hinet.net  12  0  0  0  0  0   0   0   0    1    11
      winnersdaily.net  12  0  0  0  0  0   0   0   2    0    10
           hotmail.com  11  0  0  0  0  0   0   0   0    1    10
           worldnet.fr   6  0  0  0  0  0   0   0   0    0     6
        ms41.hinet.net   6  0  0  0  0  0   0   0   0    0     6
                osn.de   5  0  0  0  0  0   1   0   0    0     4

If one looks at the two queues separately, the incoming queue is empty or perhaps briefly has one or two messages, while the active queue holds more messages and for a somewhat longer time:

$ qshape incoming

                 T  5 10 20 40 80 160 320 640 1280 1280+
          TOTAL  0  0  0  0  0  0   0   0   0    0     0

$ qshape active

                 T  5 10 20 40 80 160 320 640 1280 1280+
          TOTAL  5  0  0  0  1  0   0   0   1    1     2
  meri.uwasa.fi  5  0  0  0  1  0   0   0   1    1     2

This is from a server where recipient validation is not yet available for some of the hosted domains. Dictionary attacks on the unvalidated domains result in bounce backscatter. The bounces dominate the queue, but with proper tuning they do not saturate the incoming or active queues. The high volume of deferred mail is not a direct cause for alarm.

$ qshape deferred | head

                         T  5 10 20 40 80 160 320 640 1280 1280+
                TOTAL 2234  4  2  5  9 31  57 108 201  464  1353
  heyhihellothere.com  207  0  0  1  1  6   6   8  25   68    92
  pleazerzoneprod.com  105  0  0  0  0  0   0   0   5   44    56
       groups.msn.com   63  2  1  2  4  4  14  14  14    8     0
    orion.toppoint.de   49  0  0  0  1  0   2   4   3   16    23
          kali.com.cn   46  0  0  0  0  1   0   2   6   12    25
        meri.uwasa.fi   44  0  0  0  0  1   0   2   8   11    22
    gjr.paknet.com.pk   43  1  0  0  1  1   3   3   6   12    16
 aristotle.algonet.se   41  0  0  0  0  0   1   2  11   12    15

Important commands:

  • Print the queue: postqueue -p
  • Delete all messages from the queue: postsuper -d ALL
  • Read a message: postcat -q <queue file id> (see the example below)
  • See what shape the queue is in: qshape
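For instance, a typical session (the queue ID below is hypothetical; take a real one from the first column of the queue listing):

postqueue -p                  # note a queue ID in the first column
postcat -q 9C5F41E1D3 | less  # dump that message's envelope and body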

Release messages from hold

mailq | awk '{ if ($1 ~ /^[A-F0-9]+!$/) { gsub(/!/, "", $1); print $1; system(sprintf("postsuper -H %s", $1)) } }'
postqueue -f
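If you simply want to release every held message, postsuper also accepts the special value ALL, which is simpler than the awk loop above:

postsuper -H ALL    # release everything from the hold queue
postqueue -f        # then flush the queue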

Requeue hold messages to force delivery

mailq | awk '{ if ($1 ~ /^[A-F0-9]+!$/) { gsub(/!/, "", $1); print $1; system(sprintf("postsuper -r %s", $1)) } }'

(Note -r rather than -H here: -r requeues the message as new, forcing a fresh delivery attempt, while -H merely moves it back to the normal queue.)

Flush the queue

postqueue -f

Clean all MAILER-DAEMON error messages


Normal Messages

mailq | tail -n +2 | awk '{ if ($7 == "MAILER-DAEMON") print $1 }' | postsuper -d -

For me, mailq returns the message ID with a trailing !, so I use:

mailq | awk '{ if ($7 == "MAILER-DAEMON") print substr($1, 1, length($1)-1) }' | postsuper -d -


Messages with errors

mailq | grep MAILER-DAEMON | cut -d " " -f 1 | sed -e 's/!$//' | postsuper -d -

or

mailq | tail -n +2 | awk '{ if ($7 == "MAILER-DAEMON") print $1 }' | sed -e 's/!$//' | postsuper -d -

If you want to delete messages with the ! sign on the end, use

mailq | tail -n +2 | awk '{ if ($7 == "MAILER-DAEMON") print $1 }' | cut -d'!' -f 1 | postsuper -d -

If you want to delete messages with the * sign on the end, use

mailq | tail -n +2 | awk '{ if ($7 == "MAILER-DAEMON") print $1 }' | cut -d'*' -f 1 | postsuper -d -
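A single variant that strips either suffix in one pass (a sketch):

mailq | tail -n +2 | awk '$7 == "MAILER-DAEMON" { gsub(/[!*]$/, "", $1); print $1 }' | postsuper -d -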


NOTE

Sometimes, depending on your mailq output format, you may need to omit the

tail -n +2

Courtesy: http://maia.deec.uc.pt/Computers/Operating_Systems/Linux/Servers/Mail/Postfix/Postfix_Queue_Man

ffmpeg-php error

While compiling ffmpeg-php, the build fails with:

/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c: In function ‘zif_ffmpeg_frame_toGDImage’:
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: ‘PIX_FMT_RGBA32’ undeclared (first use in this function)
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: (Each undeclared identifier is reported only once
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:336: error: for each function it appears in.)
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c: In function ‘zif_ffmpeg_frame_ffmpeg_frame’:
/usr/src/ffmpeg-php-0.6.0/ffmpeg_frame.c:421: error: ‘PIX_FMT_RGBA32’ undeclared (first use in this function)


Fix: Newer ffmpeg releases dropped the PIX_FMT_RGBA32 alias, so with the latest version of ffmpeg-php (0.6.0), edit ffmpeg_frame.c and replace every instance of PIX_FMT_RGBA32 with PIX_FMT_RGB32:

vi ffmpeg_frame.c

:%s/PIX_FMT_RGBA32/PIX_FMT_RGB32/g

:wq
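Alternatively, the same edit as a non-interactive one-liner (a sketch using GNU sed):

sed -i 's/PIX_FMT_RGBA32/PIX_FMT_RGB32/g' ffmpeg_frame.c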

Then rebuild and enable the extension:

./configure
make
make install

and add extension="ffmpeg.so" inside php.ini.

TCP Wrapper Services

Services linked against the libwrap library can use /etc/hosts.allow and /etc/hosts.deny to control access. Check with ldd:
ldd  /usr/sbin/vsftpd    |grep libwrap
ldd  /usr/sbin/sendmail  |grep libwrap
ldd  /usr/sbin/sshd      |grep libwrap

To restrict a host or network's access to a service:

1.  Using hostname/domain name
vim /etc/hosts.deny
vsftpd : .example.com                     -> all hosts in the example.com domain are denied FTP access
vsftpd : server.example.com               -> the host server.example.com is denied access

2.  Using IP address/network
vim /etc/hosts.deny
vsftpd : 192.168.1.0/255.255.255.0        -> all hosts in the 192.168.1.0/24 network are denied
vsftpd : 192.168.1.4                      -> the host 192.168.1.4 is denied

3.  To deny all except a few
vim /etc/hosts.deny
sshd : ALL EXCEPT matrix.com              -> all hosts except matrix.com are denied SSH access

4.  To allow all except a few
vim /etc/hosts.allow
ALL : .example.com EXCEPT cracker.example.com   -> all example.com hosts except cracker.example.com may connect to all services

Both allow and deny rules can in fact be written in either file by appending an explicit ALLOW or DENY option to the rule, as sketched below.
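For example, both policies can live in hosts.allow alone (a sketch; adjust the daemon and network to taste):

# /etc/hosts.allow
sshd : 192.168.1.        : ALLOW
sshd : ALL               : DENY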

Configure a Linux-HA high availability heartbeat cluster

In this example we will configure an Apache web server and cluster it. The setup can be implemented on CentOS, Fedora, and other Red Hat flavors.

Pre-Configuration Requirements

Following are the hostnames and IPv4 addresses that will be used:

  • 192.168.1.15 prime (webserver)
  • 192.168.1.16 calc (webserver)
  • 192.168.1.20 sigma (HA address)

Configuration

1. Download and install the heartbeat package. In our case we are using CentOS, so we will install heartbeat with yum:

yum install heartbeat

or download these packages:

heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08

2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files:

  1. /etc/ha.d/ha.cf: protocol, server options and node list
  2. /etc/ha.d/authkeys: shared authentication keys
  3. /etc/ha.d/haresources: resource definitions

ha.cf

For the example setup the ha.cf file looks like the following:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility     local0
keepalive 2
deadtime 10
udpport 694
bcast     eth0
node    prime
node    calc
auto_failback on

The above options are pretty straightforward: where the debug log and log file live, the syslog facility, the interval between heartbeats in seconds (keepalive), how many seconds of silence before a node is declared dead (deadtime), the UDP port, the interface to broadcast on, and finally the nodes in the cluster.

authkeys

The documentation explains the various options, but for this example we are using the sha1 algorithm:

#vi authkeys
edit as follows
auth 2
#1 crc
2 sha1 test-ha
#3 md5 Hello!

Also, the authkeys file must be readable only by root:

chmod 0600 authkeys
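Rather than a fixed string like test-ha, a random shared key can be generated and pasted into authkeys as the sha1 secret (a sketch):

dd if=/dev/urandom count=4 2>/dev/null | sha1sum | awk '{ print $1 }'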

haresources

The haresources file dictates the shared address and the init services to start up (or shut down, as the case may be):

prime 192.168.1.20 apache2

The starting (primary) server is put as the first argument. (Note: apache2 is the init script name on Debian-style systems; on CentOS the Apache init script is called httpd, so use whichever name exists on your distribution.) Now that the configuration is done on the primary server, the exact same settings can be used on the secondary one.

Copy the /etc/ha.d/ directory from prime to calc:

scp -r /etc/ha.d/ root@calc:/etc/

3.  Now exchange and save authorized keys between node1 and node2.
Key exchange:

On node1:

Generate the key:

[root@prime ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
9f:5d:47:6b:2a:2e:c8:3e:ee:8a:c2:28:5c:ad:57:79 root@prime

Pass the key to node2 (note that this overwrites any existing authorized_keys on calc; append instead if the file already exists):
[root@prime ~]# scp .ssh/id_dsa.pub calc:/root/.ssh/authorized_keys

On node2:

Generate the key:

[root@calc ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
40:66:t8:bd:ac:bf:68:38:22:60:d8:9f:18:7d:94:21 root@calc

Pass the key to node1 (same caveat about overwriting authorized_keys):
[root@calc ~]# scp .ssh/id_dsa.pub prime:/root/.ssh/authorized_keys

NOTE: We don’t need to create a virtual network interface and assign the shared IP address (192.168.1.20) to it. Heartbeat will do this for us, and start the web service itself. So don’t worry about this.

4. A basic Apache server is required for the test as well:

 #yum install httpd*

To illustrate the test, put a simple page containing the host's name into /var/www/html/index.html on each webserver:

<html><head></head><body>prime</body></html>
<html><head></head><body>calc</body></html>

Next, start the web servers and set them to start at boot (run on both systems):

service apache2 start
chkconfig apache2 on

Now it's time to test the systems separately with lynx --dump:

# lynx --dump prime
   prime

# lynx --dump calc
   calc

5. On both nodes, make Apache listen on the shared address (heartbeat brings this address up before starting the service, so Apache binds correctly on whichever node is active):

#vi /etc/httpd/conf/httpd.conf
 Listen 192.168.1.20:80

Firing it Up

Starting up is pretty simple:

# chkconfig heartbeat on
# service heartbeat start
Starting High-Availability services2009/07/25_21:04:30 INFO:  \
        Resource is stopped
heartbeat[4071]: 2009/07/25_21:04:30 info: Version 2 support: false
heartbeat[4071]: 2009/07/25_21:04:30 info: **************************
heartbeat[4071]: 2009/07/25_21:04:30 info: \
        Configuration validated. Starting heartbeat 2.99.3

Now a litmus test of the shared address:

#  lynx --dump 192.168.1.20
   prime

Testing

Testing can be a little tricky. The simplest way is to stop the heartbeat service on the active node and let the other one take over.
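For example, on the currently active node:

[root@prime ~]# service heartbeat stop

Then observe the log entries on the calc node: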

IPaddr[5106]:   2009/07/25_21:32:55 INFO: eval \
        ifconfig eth0:0 192.168.1.20 netmask 255.255.255.0 broadcast 192.168.1.255
IPaddr[5089]:   2009/07/25_21:32:55 INFO:  Success
ResourceManager[5006]:  2009/07/25_21:32:55 \
        info: Running /etc/init.d/apache2  start
mach_down[4980]:        2009/07/25_21:32:58 info: \
        mach_down takeover complete for node prime.
heartbeat[4241]: 2009/07/25_21:33:05 WARN: node prime: is dead
heartbeat[4241]: 2009/07/25_21:33:05 info: Dead node prime gave up resources.
heartbeat[4241]: 2009/07/25_21:33:05 info: Resources being acquired from prime.
heartbeat[4241]: 2009/07/25_21:33:05 info: Link prime:eth0 dead.
harc[5258]:     2009/07/25_21:33:06 info: Running /etc/ha.d/rc.d/status status
heartbeat[5259]: 2009/07/25_21:33:06 info: \
        No local resources [/usr/share/heartbeat/ResourceManager \
        listkeys calc] to acquire.
mach_down[5287]:        2009/07/25_21:33:06 info: \
        Taking over resource group 192.168.1.20
ResourceManager[5313]:  2009/07/25_21:33:06 \
        info: Acquiring resource group: prime 192.168.1.20 apache2
IPaddr[5340]:   2009/07/25_21:33:06 INFO:  Running OK
mach_down[5287]:        2009/07/25_21:33:07 \
        info: mach_down takeover complete for node prime.

And a quick check with lynx:

#  lynx --dump 192.168.1.20
   calc

Note that once prime is back online, calc gives control back (thanks to auto_failback on):

ResourceManager[5515]:  2009/07/25_21:33:43 info: \
        Releasing resource group: prime 192.168.1.20 apache2
ResourceManager[5515]:  2009/07/25_21:33:43 info: \
        Running /etc/init.d/apache2  stop
ResourceManager[5515]:  2009/07/25_21:33:44 info: \
        Running /etc/ha.d/resource.d/IPaddr 192.168.1.20 stop
IPaddr[5592]:   2009/07/25_21:33:44 INFO: ifconfig eth0:0 down
IPaddr[5575]:   2009/07/25_21:33:44 INFO:  Success

Don’t use the IP addresses 192.168.1.15 and 192.168.1.16 for services. These addresses are used by heartbeat for communication between prime and calc; if either is used for services/resources, it will disrupt heartbeat and the cluster will not work. Be careful!

Recover Deleted Linux Files With lsof

To try this out, create a test file called deleted.txt, save it, and then open it with less deleted.txt. In another terminal window, type rm -f deleted.txt. If you now try ls deleted.txt you’ll get an error message.

But less still has a reference to the file:

> lsof | grep deleted.txt
less	4607	nithins 4r  REG 254,4   21
           8880214 /home/nithins/deleted.txt (deleted)

Take the PID of the process that has the file open from the second column (4607), and the file descriptor from the fourth column (4r, i.e. fd 4, open for reading). Now look in /proc, where you will see a reference to this inode, from which we can copy the file back:

> ls -l /proc/4607/fd/4
lr-x------ 1 nithins nithins 32 Apr  18 02:59
             /proc/4607/fd/4 -> /home/nithins/deleted.txt (deleted)
> cp /proc/4607/fd/4 deleted.txt.bak

Note: don’t use the -a flag with cp, as this will copy the (broken) symbolic link, rather than the actual file contents.

In the same way you can recover deleted Apache files (config/log) via the parent process PID if one was deleted accidentally; see the sketch below. Try it out!
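A sketch of the Apache case (assuming the Apache binary is named httpd; the fd number 7 is hypothetical, read the real one from the lsof output):

# find deleted files still held open by the Apache parent process
lsof -p $(pgrep -o httpd) | grep '(deleted)'
# suppose fd 7 is the deleted access_log; copy it back out of /proc
cp /proc/$(pgrep -o httpd)/fd/7 /tmp/access_log.recovered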

Linux Mint 12 ‘Lisa’ with Gnome Shell extensions called “MGSE”

Linux Mint 12 ‘Lisa’ will come with its own customized desktop, based on Gnome 3. The core desktop will be built on a series of Gnome Shell extensions called “MGSE” (Mint Gnome Shell Extensions) that provide a layer on top of Gnome 3.



The main features of MGSE are:

  • The bottom panel
  • The application menu
  • The window list
  • A task-centric desktop (i.e. you switch between windows, not applications)
  • Visible system tray icons

MGSE also includes additional extensions, such as a media player indicator, and multiple enhancements to Gnome 3. Linux Mint 12 will thus be more of a hybrid desktop, balancing traditional desktop features with new, modern technologies.

Dealing with Exim Spammers

There are two aspects to dealing with spam for a server administrator:

1. Inbound spam to users

2. Outbound spam from compromised scripts

Each needs a very different approach to detect, remove and resolve.

Inbound spam is the scourge of the modern internet and, the inconvenience to users aside, can cause serious performance and resource issues on the server. These affect both the server overall and the timely delivery of clean email in particular.

The best way to tackle inbound spam is at its entry point into the server: the MTA, i.e. exim, the SMTP server of choice for cPanel. By blocking spam before it has even entered the server, you save both the server resources used to deliver the email and the third-party processing otherwise needed to detect spam further along the relay process.

To do this you need to work at the RCPT stage of the SMTP protocol. This stage occurs during the transaction between the sending and receiving SMTP servers, before the actual body of an email arrives on the server.

The primary form of spam attack is the dictionary attack, a common spammer technique: within a single SMTP connection, the spam source attempts to send email to a random set of names on a domain, e.g. bob@ourdomain.com, fred@ourdomain.com, harry@ourdomain.com, in the hope that one of the many hundreds tried will get a hit and deliver the spam.

Spammers use this technique mainly because most people don’t advertise their email addresses (due to spam!), and spammers want to reach this untapped market.

To prevent this type of spam getting through, it is essential that, wherever possible, you do not use the Default Address (catchall) feature within cPanel to receive email. You should always set up specific Forwarders (aliases) for any email addresses you use and set the Default Address to :fail: for each domain.
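Behind the scenes, cPanel stores these settings per domain in /etc/valiases/<domain>; the resulting entries look roughly like this (a sketch; the alias and failure text are illustrative):

# /etc/valiases/ourdomain.com
bob@ourdomain.com: realuser
*: :fail: No such address here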

By using :fail:, exim automatically rejects email at the SMTP RCPT stage, making dictionary attacks redundant. Additionally, you can use exim ACLs to block spammers who repeatedly perform dictionary attacks, further relieving the server of the load of dealing with them. See: http://www.configserver.com/free/eximdeny.html

From a server performance perspective, it is essential that you use :fail: and not :blackhole: with email addresses or the Default Address to block such spam (with :blackhole: the server still accepts and processes the full message before discarding it). More information about the reasoning for this is presented here.

Another preventative measure is to enable the WHM options:

WHM > Exim Configuration Editor > Verify the existence of email senders.
WHM > Exim Configuration Editor > Use callouts to verify the existence of email senders.

These two options have exim check that any server attempting to relay email to yours can actually receive email in reply. This is part of the RFC requirements for an SMTP server, and the inability of a server to do so indicates a likely spammer.

There are numerous other checks that you can also perform at the SMTP RCPT stage in exim ACLs. One example is using RBL checks to reject email from IP addresses known to harbour spammers, e.g.:

deny message = Message rejected - $sender_fullhost is in an RBL, see $dnslist_text
     !hosts = +relay_hosts
     !authenticated = *
     dnslists = bl.spamcop.net : sbl-xbl.spamhaus.org

You can also check the format of email headers to ensure they’re RFC compliant, which many spam servers are not. A typical example is checking the SMTP HELO/EHLO command to ensure it’s correctly structured, e.g.:

deny message = HELO/EHLO set to my IP address
     condition = ${if match {$sender_helo_name}{11.22.33.44} {yes}{no}}

(where 11.22.33.44 is your server’s main IP address)

deny message = EHLO/HELO does not contain a dotted address
     condition = ${if match{$sender_helo_name}{\\.}{no}{yes}}

Finally, once an email has passed through these hoops, you can implement a third-party application to scan it and tag it as likely spam. cPanel has an inbuilt solution that uses SpamAssassin to score email likely to be spam. You can then have such emails filtered to a special account, or the client can filter them based on the header modifications made by SpamAssassin.

An alternative is to use a more thorough tool such as MailScanner, which can be very effective at scoring spam emails. A free installation tool is available for cPanel servers from us here, or as a paid service here.

However, a cPanel server using such a tool is not supported by cPanel, and it would have to be removed/disabled before cPanel would investigate any email-related issues should you need support.

Outbound spam from compromised scripts

Outgoing spam is likely to come from two sources:

1. Indirectly, from a compromised web script in a client’s account

2. Directly, from a client

The starting point for both will be the exim mainlog:

/var/log/exim_mainlog (Linux)

/var/log/exim/mainlog (FreeBSD)

For the purpose of this document I am going to assume a Linux OS.

The most laborious way to track messages down is to trawl the exim mainlog looking for anomalous behaviour. This is actually very difficult to do, and you really need to narrow down exactly what you are looking for.

Tracking down spammers is a difficult affair, but it can be made easier with some preparation of your server’s environment. I would strongly advise adding the following to the exim configuration to enable extended logging that greatly improves the ease of tracking down on-server spammers:

In WHM > Exim Configuration Editor > Switch to Advanced Mode > in the first textbox add the following line and then Save:

log_selector = +arguments +subject

This tells exim to log the arguments and the directory on disk from which the email was submitted, as well as the subject of the email. You can then interrogate the exim mainlog much more easily. The best way to do this is to obtain the original email header of the spam originating from your server. You should receive this either from the person reporting the spam, or from remnants of a spam attack in the exim mail queue. The part required is the exim message id in the Received: header line within the email header of the spam.

As an example, take the following email header:

Return-path:
Received: from [11.22.33.44] (helo=barfoo.com)
        by foobar.com with esmtps (TLSv1:AES256-SHA:256) (Exim 4.52)
        id 1FZ8z3-0006M4-Do
        for fred@foobar.com; Thu, 27 Apr 2006 17:04:49 +0100
Received: from forums by barfoo.com with local (Exim 4.43)
        id 1FZ8zt-0005lz-E7
        for fred@foobar.com; Thu, 27 Apr 2006 12:05:41 -0400
To: fred@foobar.com
Subject: Buy Me!
From: bob@barfoo.com

Received: header lines are prepended as the email passes through each server, so the original Received: line that we’re interested in is the bottom one:

Received: from forums by barfoo.com with local (Exim 4.43) id 1FZ8zt-0005lz-E7 for fred@foobar.com; Thu, 27 Apr 2006 12:05:41 -0400

And the id we want is 1FZ8zt-0005lz-E7

This is the unique identifier for this email that has originated from the server. With this, we can follow the exim transaction on the server to see how it was processed using:

grep 1FZ8zt-0005lz-E7 /var/log/exim_mainlog

(be aware that the exim_mainlog files may have been rotated, so you may have to search the compressed archives instead)
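Rotated archives can be searched without unpacking them first (a sketch; archive names vary with the log rotation setup):

zgrep 1FZ8zt-0005lz-E7 /var/log/exim_mainlog*.gz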

This transaction may look something like this:

2006-04-27 17:43:41 1FZ8zt-0005lz-E7 => fred@foobar.com R=lookuphost T=remote_smtp H=foobar.com [44.33.22.11] X=TLSv1:AES256-SHA:256
2006-04-27 17:43:53 1FZ8zt-0005lz-E7 Completed

In this example, the receive line of the transaction (trimmed from the excerpt above) shows that the email originated from the nobody user locally on the server. This means that the likely spam was sent from a script on the server. The nobody user is used to run the Apache web server and is the default username and group that Apache executes web scripts as. Two things can affect this:

1. suexec, if enabled, will run CGI scripts as the owner of the script file, typically the cPanel account name

2. phpsuexec, if enabled, will run PHP scripts in the same manner as CGI scripts

suexec is typically enabled on web servers; phpsuexec may or may not be. If phpsuexec is not enabled, then in all likelihood a script run under the nobody account will be a PHP script.

The cwd= entry logged with the transaction (thanks to +arguments, trimmed from the excerpt above) shows that a script was run from within the /home/ClientX/public_html/phpBB/ directory on the server, which would suggest a compromised PHP script within that directory.
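Once the directory is known, other mail submitted from the same place can be pulled out of the mainlog directly (a sketch; the path is illustrative):

grep 'cwd=/home/ClientX/public_html/phpBB' /var/log/exim_mainlog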

Here’s another example, of spam originating from a client instead of a script. This can happen either with malicious intent, or when the client’s PC has been compromised by a virus or worm:

2006-04-27 17:54:51 1FZ9lT-000707-O2 => fred@foobar.com R=boxtrapper_autowhitelist T=boxtrapper_autowhitelist
2006-04-27 17:54:52 1FZ9lT-000707-O2 => fred@foobar.com R=lookuphost T=remote_smtp H=foobar.com [44.33.22.11] X=TLSv1:AES256-SHA:256
2006-04-27 17:54:52 1FZ9lT-000707-O2 Completed

In this example, the key part (found on the receive line of the transaction, trimmed from the excerpt above) is: A=fixed_plain:bob@barfoo.com

This shows that the email was authenticated for relaying using SMTP AUTH (i.e. the fixed_plain authenticator) with the username bob@barfoo.com from that client’s PC.

As you can see, a great deal of work is needed to track down spammers on a server, plus there’s the additional work of closing the holes in insecure scripts if they are the cause.

Some instances can be much more complex and require trawling through the Apache logs for domains in /usr/local/apache/domlogs/*, which is not a trivial matter. The best protection from such exploitation is to keep your server secure and to be aware of who and what you allow on your server.