LXC and CentOS 7

Posted 18/11/2018 15:41

In my earlier post I experimented with LXC on Debian 9, with good success, as part of my project to move away from OpenVZ containers.

However at work we run exclusively CentOS servers, and have a lot of custom built RPMs and team knowledge around CentOS. To switch to Debian just to move away from OpenVZ containers would be a dramatic change and require the team to be re-trained.

We have no issues with CentOS 6 beyond its age, so moving to a newer system container technology (like LXC) while simply migrating to CentOS 7 would be ideal.

It was with this mindset that I set about trying LXC (and LXCFS) on CentOS 7.


In summary, I can say that LXC and LXCFS work great on CentOS 7. However, I initially had a lot of trouble with LXCFS, because the CentOS kernel does not have cgroup namespaces, and LXCFS's emulation of cgroup namespaces caused containers to hang intermittently when starting and stopping them.

The solution was to move to an ELRepo-style vanilla upstream kernel, rebuilt as my own package called kernel-llt (Latest Long Term), which provides a more current kernel that still has long-term stability and support.

LXC 3.x package for CentOS 7

The first step was to produce an RPM to install the latest stable LXC version (3.0.2 at time of writing).

Thankfully, there was already a SPEC file in the official LXC repo, which I tried using initially. However, I found that it pulled in some additional network dependencies (like dnsmasq) that I didn't require.

LXC is simple to compile, so I produced a new minimal LXC SPEC file and built my RPM from that.

This worked like a charm, and I was now able to start creating containers on CentOS 7.

LXCFS 3.x package for CentOS 7

Next up, the LXC project provides another application called LXCFS. This runs as a service on the node and provides emulation of /proc and other virtual Linux system directories that expose information about the running machine.

The idea behind LXCFS is to emulate some of these virtual system files so that applications running inside the containers appear to be running inside a proper virtual machine. This allows commands like free to show the memory allowed to the container, for example.
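As a toy illustration of the idea (the numbers and 1 GiB limit here are invented; this is not LXCFS code): LXCFS reads the container's cgroup memory limit and presents it as MemTotal, so free inside the container reports the container's allowance rather than the host's RAM.

```shell
#!/bin/sh
# Toy sketch of the LXCFS /proc/meminfo trick: take a cgroup memory limit
# in bytes and present it in the kB units that /proc/meminfo uses.
limit_bytes=1073741824                  # e.g. memory.limit_in_bytes = 1 GiB
mem_total_kb=$((limit_bytes / 1024))
echo "MemTotal: ${mem_total_kb} kB"     # prints: MemTotal: 1048576 kB
```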

There was no RPM SPEC file available for LXCFS, so I again created my own.

At first this appeared to be working well. Containers started and were able to see the emulated system files, such that applications like free and top functioned as if they were running inside a virtual machine (showing memory limits and container uptime).

LXCFS Trouble

For a few days this appeared to be working well. However, during my routine starting and stopping of containers, I noticed that they would intermittently hang during shutdown. The lxc-stop command would never finish (unless forcefully killed), leaving the container running with systemd waiting inside it.

After some diagnosis I found that LXCFS itself was crashing and leaving errors in the syslog, so I opened an issue with the LXCFS team. Christian Brauner quickly fixed a potential memory-freeing issue, but sadly this did not solve the problem. He then asked for a backtrace from a core dump using GDB (a learning experience in itself for me), which I provided for several instances of the crash.

Unusually, in this instance the LXCFS devs were stumped and no solution was forthcoming. I suspect their focus is on ensuring LXC works well on vanilla and Ubuntu kernels, not on the rather strange CentOS kernels, which tend to be very old with lots of features back-ported into them.

Undeterred, I tried an alternative approach, at least to diagnose where the problem was coming from (even if I couldn't fix it). From my experience creating the RPM for LXCFS, I knew that LXCFS installs a mount hook script that runs for all containers. The script is broadly split into two sections: one sets up the emulation of the /proc directory, and the other sets up emulation of the cgroup filesystem (when it is not provided directly by kernels with cgroup namespaces).

I tried adding an exit statement in various places in the hook script to see if the container would still start, whether the LXCFS functionality was broken, and whether or not this fixed the shutdown hangs.

I found that adding an exit statement after the /proc emulation part but before the cgroup emulation part allowed the majority of the LXCFS functionality to work, and it meant the shutdown hangs no longer occurred.

This gave me a clue, as the comments in the hook script suggested that if the kernel had cgroup namespaces available, it would exit and not run the latter half of the script.
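The experiment can be sketched like this (an illustrative mock-up, not the real hook script; the function contents are placeholders for its two halves):

```shell
#!/bin/sh
# Mock-up of the LXCFS mount hook's structure: the early return below plays
# the role of the exit statement I added between the two halves.
run_hook() {
    echo "proc emulation set up"        # first half: /proc emulation
    return 0  # <- the early exit: skip the cgroup emulation half
    echo "cgroup emulation set up"      # second half: never reached now
}
run_hook                                # prints: proc emulation set up
```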

A newer kernel was needed

I needed a newer kernel, one that supported cgroup namespaces. Support wasn't added until kernel 4.6 (which probably explains why Debian's 4.9 kernel doesn't experience these issues).
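A quick way to check whether a node's kernel has cgroup namespace support is to look for the cgroup entry under /proc/self/ns, which only exists on kernels with the feature:

```shell
#!/bin/sh
# Kernels 4.6+ expose a "cgroup" namespace entry under /proc/self/ns.
if [ -e /proc/self/ns/cgroup ]; then
    echo "cgroup namespaces: supported"
else
    echo "cgroup namespaces: not supported, LXCFS must emulate cgroups"
fi
```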

After some searching, I found a third-party YUM repo, ELRepo, whose kernel-lt packages are built from the vanilla upstream kernel at kernel.org but packaged for compatibility with CentOS 7. Awesome!

However, there was another snag: the kernel-lt (Long Term) version supplied by ELRepo was 4.4, but I needed at least 4.6. I could see at kernel.org that the current long-term kernel was 4.14, which would be ideal.

So I set about using the newer SPEC file from the kernel-ml repo and backporting it to build kernel 4.14. I am now maintaining this as a package called kernel-llt (Latest Long Term) for use with CentOS 7. I plan to switch this to 4.19 once it becomes the latest long-term version at kernel.org.

With the kernel 4.14 installed, LXC and LXCFS have been rock solid for months now!

Encrypted calls with Zoiper 5 with Freeswitch 1.6

Posted 21/07/2018 12:58

Zoiper 5 and Freeswitch 1.6 don't allow encrypted calls to work out of the box due to a bug in Freeswitch with some of the newer RTP/SDES encryption suites.

After much time spent with the Zoiper support team (who are awesome by the way!), they suggested changing the cipher preference order in Freeswitch to disable some of the new suites that Freeswitch does not support fully.

From Zoiper Support team:

Basically the current Zoiper for Android and the legacy products for desktop like Zoiper 3 are still using the old library which offers the proper RFC naming but not the FS one. This causes the FS to ignore the 256 and 192 bit offers because it doesn't recognize them due to FS naming and falls back to 128 bit since it's the only one that it accepts. Essentially the 192 and 256 bit encryption never worked before because they don't match the names. In the new library we offer the FS naming and the issue is fixed, but then the FS has another issue and sends the wrong packet size for 192 and 256. There is a workaround you can use until and if this issue is fixed by FS. You may try to rearrange the priority on your FS (put 128 on top instead of 256). Or you can simply keep using older version of Zoiper with 128 bit.

Changing RTP/SDES cipher suites in Freeswitch

In order to change the cipher suites in Freeswitch, you need to add the following variable to your dialplans, such that the outbound leg to your handsets has this set:

<action application="export" data="nolocal:rtp_secure_media=optional:AES_CM_128_HMAC_SHA1_80"/>

Then for outbound calls from your users, you should modify their directory entry so that the same variable is set on the leg from their handset to your server. E.g.

<variable name="rtp_secure_media" value="optional:AES_CM_128_HMAC_SHA1_80"/>

This will allow Freeswitch and Zoiper 5 to work together with encrypted media.

Building LXC and LXCFS 3.0.1 for Debian Stretch (9)

Posted 09/06/2018 01:05

I've been keen to try the new LXC 3.0.1 release on Debian 9, and have got it packaged by re-using the official Debian LXC and LXCFS packages, and tweaking the build process as follows:


mkdir lxc
cd lxc
apt-get source lxc
wget https://linuxcontainers.org/downloads/lxc/lxc-3.0.0.tar.gz
cd lxc-2.0.7/
uupdate ../lxc-3.0.0.tar.gz
cd ../lxc-3.0.0
rm debian/patches/*
rm debian/lua-lxc* -Rvf
  • Remove references to python in debian/control and debian/rules files.


mkdir lxcfs
cd lxcfs
apt-get source lxcfs
wget https://linuxcontainers.org/downloads/lxcfs/lxcfs-3.0.1.tar.gz
cd lxcfs-2.0.7
uupdate ../lxcfs-3.0.1.tar.gz
cd ../lxcfs-3.0.1
rm debian/lxcfs.install
  • Now remove the override_dh_install section from debian/rules.
  • Modify override_dh_installinit section from debian/rules to remove cp lines.
  • Remove libpam-cgfs section from debian/control file.
dpkg-buildpackage or dpkg-buildpackage -nc (if rebuilding from a failed build)

And that's all there is to it!

Some big changes in 2016 and 2017

Posted 30/10/2017 19:30

I can't believe it's been over two years since my last post! Where has the time gone?

Well, for one thing, I got engaged near the end of 2016, so much of 2017 has been taken up with planning the wedding!

But that's not to say that I haven't been working on lots of Linux and open source related tech in the last couple of years too!

Farewell OpenVZ. Hello LXC

Probably the open source tech that was newest to me and has had the most impact this year is my switch from OpenVZ to LXC Linux containers.

I have been a big fan of OpenVZ Linux containers for the past 10 years, right back when using it with CentOS 5, and containers were called "virtual environments" or VEs. I found both OpenVZ and the CentOS distribution to be very stable and easy to use, and it was fantastic for separating physical servers into several logical virtual roles.

One of the things I liked most about OpenVZ containers was that it supported SAN-free migration of containers across different machines, using the rsync command under the hood. It allowed maintenance to be performed on the physical servers by easily migrating the services running on them to another machine.

The other great feature was called "simfs", and it allowed you to store all of the container file systems in one big file system directory on the physical machine, and have soft quotas and file system separation implemented in OpenVZ rather than having to mess around with disk images (like my colleagues who were using 'proper' virtualisation had to).

I continue to use OpenVZ to this day at work, on CentOS 6. However, CentOS 6 is really starting to show its age, and I have been keen to switch to CentOS 7 for several years. OpenVZ depends on a custom Linux kernel to work, though, and the OpenVZ team did not support CentOS 7 properly until mid-2016.

Sadly, this year I spent some time trying out OpenVZ 7 running on CentOS 7, and my disappointment with it is what got me looking for alternative Linux container technologies and led me to LXC.

OpenVZ underwent an ownership change in 2016 too, and I believe this is what has caused some of the issues with OpenVZ 7.

Here is a list of issues I have found with OpenVZ 7:

Doesn't use vanilla CentOS 7

It requires its own variant Linux distribution called "vzlinux", which is based on CentOS 7, but you cannot install OpenVZ 7 directly on a pre-installed CentOS 7 server. Instead the official way to install it is via an ISO image.

There is an unofficial script that will convert a CentOS 7 server to vzlinux and then install OpenVZ 7, but depending on that for a solution seems risky.

Still requires a custom kernel

OK, this isn't a major issue, as it's been this way for the last 10 years, but it is still unfortunate that you can't use OpenVZ containers without a custom kernel.

Custom kernel updates are infrequent

This is a big problem. The open source OpenVZ 7 kernel does not get regular security and bug-fix updates. Instead, the team plans to release infrequent updates, with regular updates only available to their commercial ReadyKernel subscribers, or via the automatically generated nightly builds. At the time of writing, the latest open source kernel was vzkernel-3.10.0-514.26.1.vz7.33.22 from July 2017, whereas their latest ReadyKernel patch was from today. There is also the nightly build version in their "factory" repo, which was at version vzkernel-3.10.0-693.1.1.vz7.37.19.

Now clearly they need to make money as a company, but for an open source project to withhold security updates from all but paying customers is, in my view, irresponsible. Here is a more thorough post on their kernel release process and here too.

There does appear to be some potential for movement on this issue, though, as more recently there was a post on the mailing list suggesting that paid-for ReadyKernel updates could be made available for OpenVZ without having to buy a Virtuozzo license.

SIMFS is removed from OpenVZ 7

One of the best features (in my opinion) from OpenVZ 6 has been removed from OpenVZ 7, and that is SIMFS, which gave the ability to apply per-container 'soft' quotas using a single directory of container file systems on the host. This was great as it made backing up the containers easy.

In its place we now have to use disk-based images called "ploop", and the backup process is not clear and straightforward.

These issues started to lead me toward alternative solutions. Initially I explored using OpenVZ with LVM thin volumes, however the tooling did not support it well, and it felt rather hacky.

Enter LXC

So I started looking for another "system" container solution and remembered I had heard about LXC a couple of years earlier, but had never seriously looked into it. It appears that I was not alone in starting to look for alternatives and finding LXC.

It is worth pointing out at this stage that I had discounted Docker for the time being, because I wanted something that replaced OpenVZ without needing to re-structure all my current services into "process" or "application" containers, which is what Docker is more suited for. OpenVZ and LXC containers, by contrast, provide more of a full-system virtualisation environment.

I will continue my LXC journey in my next post, but suffice to say it ultimately led me away from both OpenVZ and CentOS, and I have now switched over to Debian 9 and LXC, which is a massive change for someone who has been exclusively using CentOS for 10 years!

Running Freeswitch ESL Event Viewer on Fedora 22

Posted 09/07/2015 15:57

My job involves me doing a lot of work with Freeswitch and for that it is especially useful to be able to debug the internal event stream via the Event Socket Library (ESL).

Thankfully Paul Labedan has written a GUI in Qt that runs on Linux and allows me to see the events in real time.

Here is how to install it on Fedora 22:

First download the latest ZIP file from Github and unpack.

sudo dnf install unzip
unzip ESLViewer-master.zip

Next install the QT components and C++ compiler:

sudo dnf install gcc-c++ cmake qt5-qtbase-devel

Now proceed to build the application:

cd ESLViewer-master
mkdir build
cd build
cmake ..
make

If all goes well you will now have an executable file called ESLViewer in the build directory.

Run it like so:

./ESLViewer

Optionally copy it into your /usr/bin directory for easier use in the future.

sudo cp ESLViewer /usr/bin/

Running OpenVZ with IPv4 and IPv6 on Digital Ocean

Posted 28/06/2015 19:38

With containers all the rage at the moment (LXC/LXD, Docker, Rocket etc), I thought it would be interesting to see if it was possible to get a mature container implementation (OpenVZ) running on the cloud provider Digital Ocean.

I have been running OpenVZ with CentOS 5 & 6 in production for over 5 years now, and I have found it to be rock solid, with a simple set of management tools.

First Steps

Firstly you need to sign up for a CentOS 6 Droplet in one of the many data centers that Digital Ocean provide.

I chose the following settings:
  • Hostname: testopenvz
  • Size: $5/mo
  • Region: London 1
  • Image: CentOS 6.5 x64
  • Settings: IPv6

Once the Droplet is up and running, SSH into it and enable a SWAP file.

Then ensure your Droplet is all up to date:

yum upgrade -y

Install some useful tools:

yum install wget nano -y

OpenVZ Installation

Now the time has come to install OpenVZ kernel and management utilities.

wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo
rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ
yum install vzkernel vzctl vzquota ploop -y

Enable OpenVZ IPv4 and IPv6 network settings in /etc/sysctl.conf by adding the following lines:

# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv6.conf.all.proxy_ndp = 1

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

Modify /etc/sysconfig/network and add these lines:


Because we will be running NAT on the Droplet firewall we need to ensure that OpenVZ doesn't disable connection tracking by modifying /etc/modprobe.d/openvz.conf as follows:

options nf_conntrack ip_conntrack_disable_ve0=0

Setting up KEXEC

Digital Ocean annoyingly do not allow you to boot your Droplet with a custom kernel. As OpenVZ requires this, the only way to boot the custom kernel is to use the kexec utility, which allows you to boot a standard kernel and then replace it with a custom kernel.

Install kexec:

yum install -y kexec-tools

Then create the following init file /etc/init.d/kexecvz which will start the OpenVZ kernel on boot:

#!/bin/sh
# kexecvz
# chkconfig: 2345 90 60
# Provides:          localkexec
# Required-Start:
# Required-Stop:
# Should-Start:
# Default-Start:     S
# Default-Stop:
# X-Interactive:     true
# Short-Description: kexec

case "$1" in
  start|"")
        # Already kexeced into the OpenVZ kernel? Then do nothing.
        if grep -q kexeced /proc/cmdline; then
                exit 0
        fi
        /sbin/kexec --load `ls -1t /boot/vmlinuz-*stab* | head -n 1` --initrd=`ls -1t /boot/initramfs-*stab* | head -n 1` --command-line="`cat /proc/cmdline` kexeced"
        /sbin/kexec -e
        ;;
  restart|reload|force-reload)
        echo "Error: argument '$1' not supported" >&2
        exit 3
        ;;
  stop)
        # No-op
        ;;
  *)
        echo "Usage: $0 [start|stop]" >&2
        exit 3
        ;;
esac

Run the following commands to start it on boot:

chmod +x /etc/init.d/kexecvz
chkconfig kexecvz on
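The backtick expressions in the init script pick the newest OpenVZ kernel by modification time. The selection trick can be demonstrated safely with throwaway files (temporary paths here, not /boot):

```shell
#!/bin/sh
# Demonstrate the "ls -1t | head -n 1" newest-file selection used by the
# init script, against dummy kernel file names in a temp directory.
dir=$(mktemp -d)
touch "$dir/vmlinuz-2.6.32-042stab104.1"
sleep 1                                   # ensure distinct timestamps
touch "$dir/vmlinuz-2.6.32-042stab108.5"
newest=$(ls -1t "$dir"/vmlinuz-*stab* | head -n 1)
echo "would kexec: $(basename "$newest")" # prints: would kexec: vmlinuz-2.6.32-042stab108.5
rm -rf "$dir"
```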

Configure OpenVZ Default Container Settings

Modify the /etc/vz/vz.conf file and add the following lines:


Checking OpenVZ is running

Now reboot your Droplet and when it comes back check it is running the OpenVZ kernel.

uname -a

The output should have the word "stab" in the version which indicates the OpenVZ kernel is running:

Linux testopenvz 2.6.32-042stab108.5
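This check can be scripted for use in monitoring; the sketch below runs against the sample version string above rather than the live output:

```shell
#!/bin/sh
# OpenVZ kernel versions contain "stab"; flag anything that doesn't.
kernel="2.6.32-042stab108.5"   # on the node, use: kernel="$(uname -r)"
case "$kernel" in
    *stab*) echo "OpenVZ kernel running" ;;       # prints this branch
    *)      echo "WARNING: stock kernel still running" ;;
esac
```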

Enable NAT on the Droplet firewall

As Digital Ocean only allows a single IPv4 address on their Droplets, we need to use NAT in order to allow the containers to access the Internet.

/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
service iptables save

Create First Container

Now we can create our first container with Container ID 1000 and hostname testcontainer1.localdomain:

vzctl create 1000 --hostname testcontainer1.localdomain

We will add a private IPv4 address to each of our containers so they can communicate with each other and use NAT to access the IPv4 Internet. For this example I am using the subnet.

vzctl set 1000 --ipadd --save

Now we add an IPv6 address to your container from the range of 16 IPv6 addresses that Digital Ocean provide with each Droplet. To find your range go into your Digital Ocean control panel and go to Settings > Networking.

vzctl set 1000 --ipadd 2a03:xxxx:1:d0::424:6002 --save

Copy your SSH public key from your Droplet into the container so you can SSH into your container.

mkdir /vz/private/1000/root/.ssh/
chmod og= /vz/private/1000/root/.ssh/
cp /root/.ssh/authorized_keys /vz/private/1000/root/.ssh/

Now you can start the container and then login to it:

vzctl start 1000
vzctl enter 1000

Test Container Connectivity

Now you are logged into your container, you should test your IPv4 and IPv6 network connectivity.

ping google.com
ping6 google.com

If both succeed, then you're good to go, so exit out of your container back to your Droplet.


Add NAT SSH Port Forward

In order to reach your container from the Internet you can either use the IPv6 address that you assigned to your container, or if you don't have IPv6 connectivity then you need to setup a NAT port forward for SSH from your Droplet's public IP to your container.

iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8882 -j DNAT --to-destination
service iptables save

You should now be able to SSH to your Droplet's public IP using port 8882.

Final Thoughts

You should now be all set up to add additional containers and IP addresses as needed. Remember to set up a firewall to protect your Droplet and your containers though!

Setting up Stunnel for secure communication on CentOS 5 and 6

Posted 15/03/2015 12:03

Sometimes you need to secure communication for an internet service that does not support TLS functionality. For example, I needed to perform secure file synchronization over the Internet using rsync, but it does not support TLS. I didn't want to use SSH tunneling as that requires additional security lockdown to prevent the remote user from running shell commands.

To solve this problem the tool Stunnel provides an encrypted TCP tunnel back to your un-encrypted service.

So the flow of communication would be as follows:

rsync client -> stunnel on client machine -> encrypted TLS link -> stunnel on server machine -> rsync server

My requirements for this project are as follows:

  • Support CentOS 5 and 6 clients (server would be running on CentOS 6)
  • Use at least TLSv1, but support TLSv1.1 and TLSv1.2 if client OS supports it
  • Disable SSLv3
  • Disable RC4 cipher

Installing Stunnel

The version of Stunnel that comes with CentOS 5 and 6 does not support TLSv1.1 or TLSv1.2. So straight off the bat I needed to build the latest Stunnel into an RPM. The SPEC file to do this is here. It works with both CentOS 5 and CentOS 6, although CentOS 5's OpenSSL library does not support higher than TLSv1.

To install Stunnel:

yum install stunnel

Configure a Certificate Authority

Using the EasyRSA tool from the OpenVPN project you can create your own Certificate Authority (CA).

Download the latest from EasyRSA Releases and unpack into /root.

cd /root
wget https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.0-rc2/EasyRSA-3.0.0-rc2.tgz
tar zxvf EasyRSA-3.0.0-rc2.tgz

Now create your new CA.

cd EasyRSA-3.0.0-rc2
./easyrsa init-pki
./easyrsa build-ca

Configure Stunnel Server

First let's create a certificate for this server. You will need to enter the password you set for your CA key in the previous step.

./easyrsa build-server-full {your server hostname} nopass

Now copy the CA cert, server key and server cert into /etc/stunnel and secure the permissions on the private key file.

cp pki/ca.crt /etc/stunnel/
cp pki/private/{your server hostname}.key /etc/stunnel/
cp pki/issued/{your server hostname}.crt /etc/stunnel/
chmod 600 /etc/stunnel/*.key

Now create /etc/stunnel/stunnel.conf file containing:

client = no
foreground = yes
syslog = yes
debug = 6

accept = :::1873
connect =
CAfile = /etc/stunnel/ca.crt
cert = /etc/stunnel/{your server hostname}.crt
key = /etc/stunnel/{your server hostname}.key
sslVersion = all
options = NO_SSLv2
options = NO_SSLv3

The config file can also specify a list of ciphers currently deemed secure, generated using Mozilla's SSL Config Generator, although by the time you read this that list may have changed.
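If you want to see what a given cipher string actually permits, openssl ciphers will expand it (the string below is just an example, not necessarily the exact list from my config; note RC4 is excluded):

```shell
#!/bin/sh
# Expand a cipher spec into the individual ciphers it allows, one per line;
# show the first few. The spec excludes anonymous and RC4 ciphers.
openssl ciphers 'HIGH:!aNULL:!RC4' | tr ':' '\n' | head -n 3
```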

You should now be able to start stunnel in the foreground for testing by running:

stunnel /etc/stunnel/stunnel.conf

Use your favorite process manager (upstart, supervisord etc.) to run the program in the background.

Configure Stunnel Client

On your client machine install stunnel and copy the ca.crt file into /etc/stunnel, then create the stunnel.conf file:
client = yes
foreground = yes

accept = :::873
connect = {your server hostname}:1873
CAfile = /etc/stunnel/ca.crt
sslVersion = all
options = NO_SSLv2
options = NO_SSLv3
verify = 2

You should now be able to start stunnel in the foreground for testing by running:

stunnel /etc/stunnel/stunnel.conf

To test this is working try connecting rsync to and it should copy the files from the remote server.

IPv6 Privacy Extensions in Fedora 20

Posted 17/02/2014 19:41

Previously I blogged about Enabling IPv6 Privacy Extensions in Fedora 18. Unfortunately in Fedora 20, the Network Manager has a bug in it that means that the setting is not used.

Thankfully there has been an issue logged already, and a fixed Network Manager can be installed from the testing repo. Here's how:

sudo yum update --enablerepo=updates-testing NetworkManager

Now restart, and when you run ifconfig, you should see an additional randomly generated IPv6 address.
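You can also confirm that privacy extensions are active by reading the use_tempaddr sysctl; a value of 2 means temporary addresses are generated and preferred:

```shell
#!/bin/sh
# Read the IPv6 privacy-extensions setting; fall back to a message on
# systems where the IPv6 sysctl tree is unavailable.
sysctl net.ipv6.conf.all.use_tempaddr 2>/dev/null \
    || echo "net.ipv6.conf.all.use_tempaddr not available"
```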

Fedora 19 Gnome 3 OpenVPN default route bug workaround

Posted 27/10/2013 14:10

In Fedora 19 and Gnome 3 there is a rather annoying bug when using OpenVPN: the 'Use this connection only for resources on its network' tick box does not remain ticked, which causes the default route to be updated to point through the OpenVPN tunnel.

In some situations (mine) I do not want the default route to go down the OpenVPN tunnel, and so this was a problem.

Luckily there is a simple workaround until it gets fixed. Open the relevant file for your VPN connection, for example /etc/NetworkManager/system-connections/Work.

Then find the section:

[ipv4]

And add the line:

never-default=true

Hey presto, the tick box is now ticked!

Using Fail2Ban to block bruteforce Wordpress login attacks

Posted 26/10/2013 23:45

A friend of mine hosts a lot of Wordpress sites, and we regularly see a lot of brute force attempts from many different IP addresses repeatedly trying to log in to the admin section of the site at wp-login.php:


mysite.com mysite.com - - [26/Oct/2013:17:42:16 +0000] "POST /wp-login.php HTTP/1.0" 200 3747 "-" "-"
mysite.com mysite.com - - [26/Oct/2013:17:42:16 +0000] "POST /wp-login.php HTTP/1.0" 200 3747 "-" "-"

This affects the load on the web server and we start to get load alerts from the ISP that provides the server.

To try and counter this I have setup the Fail2Ban tool on the server, which automatically blocks any IP that tries to login too many times repeatedly.

Configuring Fail2Ban

First install fail2ban, for CentOS it is available in the EPEL repository.

yum install fail2ban

Next, copy the jail.conf to jail.local for editing:

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now edit /etc/fail2ban/jail.local and add the following lines:

[wp-login]
enabled = true
action   = iptables[name=wplogin, port=http, protocol=tcp]
           sendmail-whois[name=wplogin, dest=root, sender=fail2ban@example.com]
filter  = apache-wp-login
logpath = /var/log/httpd/access_log
maxretry = 5

This file tells fail2ban what to do when the apache-wp-login filter matches: it sets up an action using iptables that blocks the offending IP on port http (80) and sends an alert e-mail.

The maxretry line tells fail2ban to only allow 5 repeat login attempts before blocking. The default time window is 5 minutes, which is specified at the top of that file.

Next define a filter to match wp-login.php requests by creating the file /etc/fail2ban/filter.d/apache-wp-login.conf


[Definition]
# Option:  failregex
# Notes.:  Regexp to catch Apache dictionary attacks on Wordpress wp-login
# Values:  TEXT
failregex = [\w\.\-]+ [\w\.\-]+ .*] "POST /wp-login.php
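As a quick smoke test, the pattern can be checked against the sample access_log line from earlier, using grep -E as a rough stand-in for fail2ban's Python regex engine:

```shell
#!/bin/sh
# Feed the sample log line through a grep -E equivalent of the failregex
# (character classes translated to POSIX form).
line='mysite.com mysite.com - - [26/Oct/2013:17:42:16 +0000] "POST /wp-login.php HTTP/1.0" 200 3747 "-" "-"'
if printf '%s\n' "$line" | grep -Eq '^[[:alnum:]_.-]+ [[:alnum:]_.-]+ .*\] "POST /wp-login\.php'; then
    echo match          # prints: match
else
    echo "no match"
fi
```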

The regex above matches a custom log format I use that shows both the requested HTTP Host header and the matched vhost server name fields.

Now start the service and ensure it starts on boot:

service fail2ban start
chkconfig fail2ban on

Now your server will block repeat requests for wp-login.php and E-mail you when a block occurs.