Some months ago I set up an email server to host the email service for all users on my domains, and for whichever domains I choose to incorporate later. The service currently runs on Linux, with Debian stable as the distribution of choice. At the time of writing, it hosts 4 domains with 5 end users, and each domain has a set of default aliases for the email addresses that are expected to exist on every domain for administration purposes.
The email server hosts this service for three relatively new and “unknown” domains and one relatively old and “known” domain. As seen in the table below, the four domains produced 412 legitimate emails and 3438 rejected emails in the specified period. A total of 89% of all delivery attempts, to both known and unknown users, were rejected before the email even entered the system. To accomplish this, Postfix uses a set of tools and services to identify emails that should be rejected.
Grand Totals
------------
messages

    413   received
    412   delivered
     34   forwarded
      0   deferred
      4   bounced
   3438   rejected (89%)
      0   reject warnings
      0   held
      0   discarded (0%)

 11993k   bytes received
 12009k   bytes delivered
    101   senders
     58   sending hosts/domains
     14   recipients
      9   recipient hosts/domains

Per-Day Traffic Summary
-----------------------
    date          received  delivered   deferred    bounced    rejected
    -------------------------------------------------------------------
    Dec 14 2014        30         29          0          1          43
    Dec 15 2014        73         73          0          0         547
    Dec 16 2014        67         67          0          0         280
    Dec 17 2014        33         33          0          0         759
    Dec 18 2014        43         43          0          0         451
    Dec 19 2014        51         51          0          1         278
    Dec 20 2014        40         40          0          1         135
    Dec 21 2014        45         45          0          1         153
    Dec 22 2014        31         31          0          0         792
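As a quick sanity check of the percentage pflogsumm reports, the rejection rate can be recomputed from the totals above:

```python
# Totals taken from the pflogsumm report above.
received = 413   # messages accepted into the system
rejected = 3438  # messages rejected at the SMTP level

# pflogsumm's "rejected (89%)" is rejections as a share of all attempts.
reject_rate = rejected / (received + rejected)
print(f"{reject_rate:.0%}")  # prints "89%"
```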
- Greylisting: any host that tries to deliver an email and is not previously known to the system is temporarily rejected. The reason for doing this is that spammers are known to be in a hurry and therefore rarely come back.
- SPF: if the sending server comes back after being greylisted, the domain it claims to be sending for is checked against SPF (TXT) records in DNS. If an SPF record exists, it is compared against the IP address of the sending host. Depending on the policy defined in the DNS record, the email is rejected if there is no match.
- DNSBL: if the SPF record matches the sending host's IP address, the IP address is checked against online DNSBLs. If the IP address is listed in one or more of the configured DNSBLs, the email is automatically rejected.
- ClamAV: the email is then scanned for viruses; if the content is infected, the email is rejected.
- OpenDKIM: using asymmetric cryptography, the MTA verifies the signature over specific header fields of the delivered email against the public key published in a TXT record in DNS. Again, if the signature does not validate, the email is rejected.
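In Postfix, a chain of checks like the one above is typically wired together through SMTP-level restrictions and milters. A minimal sketch of what that can look like in main.cf (the socket addresses and ports are assumptions that depend on how postgrey, policyd-spf, OpenDKIM and ClamAV are installed on a given system; this is not my exact configuration):

```
# /etc/postfix/main.cf -- illustrative excerpt only.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023,      # greylisting (postgrey default port)
    check_policy_service unix:private/policyd-spf,  # SPF validation
    reject_rbl_client zen.spamhaus.org              # DNSBL lookup

# DKIM verification (and virus scanning, if using clamav-milter)
# is commonly hooked in as a milter; 8891 is OpenDKIM's usual port.
smtpd_milters = inet:localhost:8891
```

The order matters: the cheap checks (greylisting, DNS lookups) run first at SMTP time, so most spam is rejected before any message content is transferred.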
It is also possible to implement DMARC, an initiative from several large companies to standardise how DKIM and SPF should be implemented on an MTA. DMARC does not add any new tools for excluding recognised spam, but it does, in theory at least, help email providers recognise when there is a configuration error in their DKIM/SPF setup.
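For concreteness, SPF, DKIM and DMARC are all published as DNS TXT records. A hedged example for a hypothetical domain (the selector name, key and policy values below are placeholders, not records from my actual zones):

```
; Illustrative DNS records for example.org -- values are placeholders.
example.org.                      TXT  "v=spf1 mx -all"
selector._domainkey.example.org.  TXT  "v=DKIM1; k=rsa; p=<public key>"
_dmarc.example.org.               TXT  "v=DMARC1; p=none; rua=mailto:postmaster@example.org"
```

Here `p=none` asks receivers only to report DKIM/SPF alignment failures (to the `rua` address) rather than reject on them, which is what makes DMARC useful for spotting configuration errors.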
Note that my setup never accepts an email into the system and then decides to reject it afterwards. If the initial checks do not trigger a rejection, the email is simply delivered to whomever it is addressed. In my experience this is better than generating a bounce after the email has been received and the delivering server has already disconnected: if a bounce is created at that point, any spam with forged headers will produce bounce messages sent to the owner of the forged sender address, who has no prior knowledge of the original email (so-called backscatter).
I estimate that for my current system and the state of the domains hosted on it, less than 5% of the email delivered to end users is spam after passing through the previously mentioned tools and services. The problem is that 5%, even on my relatively small server, still amounts to a noticeable amount of spam. Using the numbers from earlier, about 21 emails in the specified period would still be spam. Now imagine administrating a system that provides email for 50, 500 or 5000 users: even with as little as 1%, 0.5% or 0.1% of delivered email being spam, that is still a significant amount of unwanted email.
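Spelling that arithmetic out (the per-user volume is taken from my own numbers above and is only an illustration, not a claim about typical installations):

```python
# Residual spam on my server: ~412 delivered emails, ~5% still spam.
delivered = 412
print(round(delivered * 0.05))  # prints 21

# Scaling the same per-user volume (412 emails / 5 users over the
# period) to larger user bases with lower residual spam rates.
per_user = delivered / 5
for users, rate in [(50, 0.01), (500, 0.005), (5000, 0.001)]:
    spam = users * per_user * rate
    print(f"{users} users at {rate:.1%}: ~{round(spam)} spam emails")
```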
Therefore any improvement one can make to the existing spam filters, however minor, can make a big difference on the receiving end.
Quantifying spam and implementing new tools on the server
I am currently looking into two specific things concerning this topic.
- Quantifying spam, for easy comparison before and after implementing new spam-reducing tools and services on the server.
- Looking for tools and services that I can implement and that will reduce spam or ease reporting in one way or another.
So far in my quest to improve spam recognition, I have come across the following tools and services that can reduce the amount of spam and be implemented straight into my existing system.
I have chosen to test out Amavisd-new, since the four other options I mentioned can be implemented through it. The installation, configuration and results of using Amavisd-new will be posted later.
I am very interested in discovering tools, services, tips and tricks that improve reporting or reduce the amount of spam on my server. I would be very happy to see emails from you in my mailbox with input on how I can do exactly that.
Send me an email at firstname.lastname@example.org and make my day.