Updating a cert on the Cisco 11500 Series Content Services Switches (CSS)

Having recently moved some of our hosting infrastructure to the excellent Rackspace Platform group, we inherited the management of the Cisco 11500 Series Content Services Switches (CSS), which we use for general load balancing and SSL termination.

As a side note, it’s really powerful, fast, and, well, plain nice.  Not having to manage SSL certs on each Apache instance is a big win, and all the LAN communication is done over plain old HTTP.

This blog post is a regurgitation of some notes I took internally.  Perhaps someone who finds themselves managing this device will benefit…


The task at hand was re-issuing and updating one of our primary wildcard certificates that powers a lot of subdomains.

The first step is to generate the key, CSR, and CRT…

All these files should be:

  • Named the same as the domain that SSL is being generated for.
  • Use WILD in place of the asterisk for a wildcard subdomain.
  • Use the format “www.domain.com-0810.key”, where 08 is the from year and 10 is the to year.
  • (The short form is because of name-length limits on the CSS.)

Start by generating the key and CSR.

This should be done as the ciscoftp user, under the ~/load directory:

# openssl genrsa -out WILD.vosecure.com-0810.key 1024
# openssl req -new -key WILD.vosecure.com-0810.key -out WILD.vosecure.com-0810.csr
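Before sending the CSR off to the CA, it’s worth sanity-checking that the key and CSR actually belong together by comparing their RSA moduli.  A quick sketch, using a throwaway key so the commands are runnable anywhere (the demo.* filenames are stand-ins for the real WILD.* files):

```shell
# generate a throwaway key and CSR (stand-ins for the real files)
openssl genrsa -out demo.key 1024 2>/dev/null
openssl req -new -key demo.key -subj "/CN=*.vosecure.com" -out demo.csr

# the two moduli must be identical, or the CSR came from a different key
openssl rsa -noout -modulus -in demo.key > key.mod
openssl req -noout -modulus -in demo.csr > csr.mod
cmp -s key.mod csr.mod && echo "key and CSR match"
```

Run the same two `-modulus` commands against the real .key and .csr before submitting anything.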

Then get the certificate issued by the CA (GlobalSign, in our case).

Put the certificate into the ~/load directory.  When done, it should look like:

-rw-rw-r-- 1 ciscoftp ciscoftp  3139 Apr  6 15:59 WILD.vosecure.com-0810.crt
-rw-rw-r-- 1 ciscoftp ciscoftp   773 Apr  6 15:49 WILD.vosecure.com-0810.csr
-rw-rw-r-- 1 ciscoftp ciscoftp   883 Apr  6 15:47 WILD.vosecure.com-0810.key

Put the crt and key onto the load balancer

To do this, use the “copy” command on the load balancer:

20132-201292# copy ssl ftp base import WILD.vosecure.com-0810.crt PEM "rack"
20132-201292# copy ssl ftp base import WILD.vosecure.com-0810.key PEM "rack"

Then make the associations...

20132-201292# config
20132-201292(config)# ssl associate cert WILD.vosecure.com-0810.crt WILD.vosecure.com-0810.crt
20132-201292(config)# ssl associate rsakey WILD.vosecure.com-0810.key WILD.vosecure.com-0810.key

Now, it’s time to install it.  Requires SSL downtime!

  1. Suspend the SSL content rule
  2. Suspend the SSL service
  3. Suspend the SSL proxy list
  4. Run the updates
  5. Activate the SSL proxy list
  6. Activate the SSL service
  7. Activate the SSL content rule

Here are the exact commands:

20132-201292# config
20132-201292(config)# owner vosecure.com
20132-201292(config-owner[vosecure.com])# content 74.205.111.161-ssl
20132-201292(config-owner-content[vosecure.com-74.205.111.161-ssl])# suspend

20132-201292# config
20132-201292(config)# service ssl-service
20132-201292(config-service[ssl-service])# suspend

20132-201292# config
20132-201292(config)# ssl-proxy-list ssl-proxy

In the following commands, we remove the whole ssl-server so that it shows up at the bottom in one concise unit. Otherwise, the startup-config and running-config become fragmented.

20132-201292(config-ssl-proxy-list[ssl-proxy])# suspend
20132-201292(config-ssl-proxy-list[ssl-proxy])# no ssl-server 6
20132-201292(config-ssl-proxy-list[ssl-proxy])# ssl-server 6
20132-201292(config-ssl-proxy-list[ssl-proxy])# ssl-server 6 rsakey WILD.vosecure.com-0810.key
20132-201292(config-ssl-proxy-list[ssl-proxy])# ssl-server 6 rsacert WILD.vosecure.com-0810.crt
20132-201292(config-ssl-proxy-list[ssl-proxy])# ssl-server 6 vip address 192.168.1.161
20132-201292(config-ssl-proxy-list[ssl-proxy])# ssl-server 6 cipher rsa-with-rc4-128-sha 192.168.1.161 81
20132-201292(config-ssl-proxy-list[ssl-proxy])# active

20132-201292# config
20132-201292(config)# service ssl-service
20132-201292(config-service[ssl-service])# active

20132-201292# config
20132-201292(config)# owner vosecure.com
20132-201292(config-owner[vosecure.com])# content 74.205.111.161-ssl
20132-201292(config-owner-content[vosecure.com-74.205.111.161-ssl])# active

Test test test.  Firefox, IE, Chrome...

20132-201292# copy running-config ftp base running-config

Review changes with git diff

20132-201292# write memory

20132-201292# copy startup-config ftp base startup-config

And… Here is the git diff

diff --git a/load/startup-config b/load/startup-config
index 7042490..36fbbaa 100644
--- a/load/startup-config
+++ b/load/startup-config
@@ -1,4 +1,4 @@
-!Generated on 04/06/2009 16:05:48
+!Generated on 04/06/2009 21:51:02
!Active version: sg0810205

@@ -64,6 +64,8 @@ configure
+  ssl associate rsakey WILD.vosecure.com-0810.key WILD.vosecure.com-0810.key
+  ssl associate cert WILD.vosecure.com-0810.crt WILD.vosecure.com-0810.crt

!*********************** SSL PROXY LIST ***********************
ssl-proxy-list ssl-proxy
-  ssl-server 6
-  ssl-server 6 rsakey vosecure.com(080421-04300)-key
-  ssl-server 6 rsacert vosecure.com(080421-04300)-cert
-  ssl-server 6 vip address 192.168.1.161
-  ssl-server 6 cipher rsa-with-rc4-128-sha 192.168.1.161 81
@@ -146,6 +141,11 @@ ssl-proxy-list ssl-proxy
+  ssl-server 6
+  ssl-server 6 rsakey WILD.vosecure.com-0810.key
+  ssl-server 6 rsacert WILD.vosecure.com-0810.crt
+  ssl-server 6 vip address 192.168.1.161
+  ssl-server 6 cipher rsa-with-rc4-128-sha 192.168.1.161 81
active

XHProf, a PHP profiler

Worth noting:

XHProf is a hierarchical profiler for PHP. It reports function-level call counts and inclusive and exclusive metrics such as wall (elapsed) time, CPU time and memory usage. A function’s profile can be broken down by callers or callees. The raw data collection component is implemented in C as a PHP Zend extension called xhprof. XHProf has a simple HTML based user interface (written in PHP). The browser based UI for viewing profiler results makes it easy to view results or to share results with peers. A callgraph image view is also supported.

Read more at http://mirror.facebook.com/facebook/xhprof/doc.html

I highly recommend yum + createrepo + rpmbuild

As I was discussing briefly before, I have recently been involved in building quite a few RPMs for our server clusters at AppCove.


Where we have arrived:

Our (new) primary production cluster consists of multiple RedHat Enterprise Linux 5 boxes in different capacities (webserver, appserver, database master, database slave, etc…).

Each machine is registered with 3 yum repositories:

  1. RHEL (RedHat Enterprise Linux)
  2. EPEL (Extra Packages for Enterprise Linux)
  3. ACN (AppCove Network)

All of our custom software packages and custom builds of open source software are placed into individual RPMs, and entered into our ACN repository.

From there, it is a snap to update any given server with the correct version of the software that server needs.

We have a dedicated build area, versioned with git, that is used to build and package all of the custom software that is needed.

(note, RPMs are not used for web application deployment — rsync via ssh is used for that)
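The client side of this is simple.  After dropping the built RPMs into a directory and running `createrepo` against it, each machine only needs a small .repo file to register the repository.  A sketch of what ours looks like (the name, URL, and settings here are hypothetical):

```ini
# /etc/yum.repos.d/acn.repo -- hypothetical AppCove Network repo definition
[acn]
name=AppCove Network
baseurl=http://yum.example.com/acn/rhel5/$basearch/
enabled=1
gpgcheck=0
```

With that file in place, `yum install` and `yum upgrade` see the custom packages right alongside RHEL and EPEL.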


Recommendation:

Having worked through the process from start to finish, I must say that I would highly recommend the following tools to anyone who is responsible for RedHat Enterprise, Centos, or Fedora system administration.

  • git – to keep your .spec files versioned
  • rpmbuild – to build the rpms
  • createrepo – to create your very own yum repository
  • apache – to serve the yum repository
  • yum – to obtain, install, and upgrade your rpms

Additionally, if you are using RedHat Enterprise or Centos, I would highly recommend using Extra Packages for Enterprise Linux (EPEL) to get a few of those “other” packages that don’t come with your OS (git, for example).


Learning how to build RPMs was a fairly steep curve, but a short one.  It is one of those things where, if you know it, you say “that’s easy”, and if you don’t, you say “what the ???”

yum+rpm was invented (I assume) to make life easier for countless system administrators and software publishers.  But building packages is not the kind of thing that everyone is involved in.

It was a bit tough to figure out the caveats of how to correctly build RPMs that work.  The documentation is a bit sparse: a bit here and a bit there.


What are the benefits?

Many.  Let me list a few.

Your system stays really clean. With RPMs, you can uninstall everything you installed without leaving extra files lying around.

Upgrades are a snap. Once you have registered your own yum repository on a system, you can upgrade a given package by running:

yum upgrade your-package

All your systems can be on the same “page”. It is very easy, using yum, to ensure that all of your systems are using the exact same version of software.

Custom builds are super easy to maintain. We custom-compile php, python, and various other software.  Once the .spec files are in place, all of your software can be re-packaged with a single command.
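For anyone who has not seen one, here is a minimal .spec skeleton just to show the shape of the thing.  Every name, path, and value here is hypothetical, but this is roughly what each file under our git-versioned build area looks like:

```spec
Name:           acn-hello
Version:        1.0
Release:        1%{?dist}
Summary:        Example custom package (hypothetical)
License:        Proprietary
Source0:        %{name}-%{version}.tar.gz

%description
A minimal example of the kind of .spec file we keep under git.

%prep
%setup -q

%build
./configure --prefix=/usr
make %{?_smp_mflags}

%install
rm -rf %{buildroot}
make install DESTDIR=%{buildroot}

%files
/usr/bin/hello
```

Once a file like this exists, `rpmbuild -ba acn-hello.spec` produces both the binary and source RPMs in one shot.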

In our specific case, we wanted to have the memcached client statically compiled into PHP.  With a few extra commands in the .spec file, it was a snap to pull in the source from pecl, and update `configure` to take it into account.

All builds take place in one location, with one set of documentation, one consistent set of development tools, etc…  We have a user called `build` on one of the hosts that is specifically used for building all of the RPMs.


Where to learn?

The best way to learn, as usual, is to jump in and figure it out.  There is some really good documentation buried in the rpm.org site.  It is a book called Maximum RPM, originally published by Red Hat.  The current snapshot of the book is available online:

http://www.rpm.org/max-rpm-snapshot/

Google is another good resource, depending on what it is you are looking for.

Installing Source RPMs to your home directory

I’ve been involved in an ongoing project to build RPMs for all of the “custom” software installs we use on RedHat Enterprise Linux 5 (RHEL5) at AppCove.

By default (on RHEL), source RPMs are installed to /usr/src/redhat. This is nice, except that I don’t want to be running as root when building software.

rpm -i --relocate /usr/src/redhat=/home/build/RPMBUILD setuptools-0.6c9-1.src.rpm

The previous command will install the specified source rpm to a local directory under the “build” user.  That makes it easy to tweak the .spec file, and then build the desired RPM.
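For the relocation to be useful at build time as well, rpmbuild needs to know about the per-user topdir.  One way to do that (assuming the same /home/build/RPMBUILD layout as above) is a ~/.rpmmacros entry:

```
# ~/.rpmmacros -- point rpmbuild at the per-user build tree
%_topdir /home/build/RPMBUILD
```

With that in place, something like `rpmbuild -ba SPECS/setuptools.spec` runs entirely under the build user’s home directory, no root required.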

Basics of telnet and HTTP

Say you want to request a webpage…  Normally, one would use a web browser, right?  But sometimes you just need to see what is really going on…  In this blog post I will show the basics of using the telnet command to work with the HTTP protocol.

For reference: http://www.w3.org/Protocols/rfc2616/rfc2616.html

Most of these commands were run on Linux, but telnet on Windows should work too.

telnet <ip-or-host> <port>

Background…

If you are talking to a web server, which typically listens on port 80, then you must follow the HTTP protocol conventions (which are simple).  HTTP has two primary versions at this point: 1.0 and 1.1.

In the HTTP 1.0 days, a single website was bound to a single IP address.  What this means is that an HTTP request sent to a given IP address would return content from only one site.  This is quite limiting and inconvenient.  To have to assign a new IP for every different domain name… What a bother.  Not to mention that the current internet protocol standard, IPv4, is limited to several billion addresses and is quickly running out.

More recently, HTTP 1.1 has become the standard.  This enables something called Name Based Virtual Hosting.  By requiring a “Host” header to be sent along with the request, HTTP servers can in turn “look up” the correct website and return it based on the name.  Hundreds or even thousands of different domains can now be hosted on a single IP address.

(keep in mind that SSL certificates each require a separate IP address.  Because the SSL handshake completes before any HTTP headers are sent, the server must choose the certificate based on the IP address alone…)

So with that introduction, allow me to show you the basics of HTTP…

Using HTTP over Telnet

The telnet utility is a simple (but useful) utility that allows one to establish connections to a remote server.  From my perspective, it is most useful with plain text protocols (like HTTP), but my knowledge of telnet is not very deep…

Here is an example (the lines you would type are marked with <press enter>):

[jason@neon ~]$ telnet gahooa.com 80
Trying 74.220.208.72…
Connected to gahooa.com (74.220.208.72).
Escape character is ‘^]’.
GET /       <press enter>
<html>
   <body>
      Hi, you have reached Gahooa!
   </body>
</html>
Connection closed by foreign host.

Because it was an old-style HTTP 1.0 request, the server DID NOT wait for additional headers.  Again, quite limiting – we only got to send a single request line.

And… HTTP 1.1

Here is an example of an Apache Virtual Host configuration directive.

<VirtualHost 74.220.208.72:80>
   # Defines the main name by which this VirtualHost responds to
   ServerName gahooa.com

   # Additional names (space delimited) which this VirtualHost will respond to.
   ServerAlias www.gahooa.com 

   # Apache will append the requested URI to this path in order to find the resource to serve.
   DocumentRoot /home/gahooa/sites/gahooa.com/docroot

</VirtualHost>

When we issue the following HTTP 1.1 request, we are in effect asking for the file at:

/home/gahooa/sites/gahooa.com/docroot/index.html

Keep in mind that because this is HTTP 1.1, the web server will continue to accept header lines until it encounters a blank line:

[jason@neon ~]$ telnet gahooa.com 80
Trying 74.220.208.72…
Connected to gahooa.com (74.220.208.72).
Escape character is ‘^]’.
GET /index.html HTTP/1.1       <press enter>
Host: www.gahooa.com           <press enter>
                               <press enter again>
HTTP/1.1 200 OK
Date: Wed, 03 Sep 2008 21:00:46 GMT
Server: Apache/2.2.9 (Unix)
Transfer-Encoding: chunked
Content-Type: text/html
                               <take note of blank line here>
<html>
   <body>
      Hi, you have reached Gahooa!
   </body>
</html>
Connection closed by foreign host.

A couple notes:

  • HTTP 1.1 continues to accept header lines until it receives a blank line
  • HTTP 1.1 sends a number of header lines in the response.  Then a blank line.  Then the response content.

Redirects

One of the main points of writing this article was to describe how to debug strange redirect problems.   Redirects are done by sending a “Location” header in the response.  For more information on the Location header, please see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.30

[jason@neon ~]$ telnet gahooa.com 80
Trying 74.220.208.72…
Connected to gahooa.com (74.220.208.72).
Escape character is ‘^]’.
GET /test-redirect.php HTTP/1.1 <press enter>
Host: www.gahooa.com            <press enter>
                                <press enter again>
HTTP/1.1 302 Found
Date: Wed, 03 Sep 2008 21:00:46 GMT
Server: Apache/2.2.9 (Unix)
Transfer-Encoding: chunked
Content-Type: text/html
Location: http://www.google.com <take note of this line>

The Location header in the response instructs the requester to re-request the resource from the URI specified in the Location header.  In the above example, if you were debugging redirect issues, you would simply initiate another HTTP request to http://www.google.com

Python instead of telnet

Finally, I’d like to illustrate a really simple python program that would facilitate playing around with the same:

import socket

# open a TCP connection to the web server
S = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
S.connect(("www.gahooa.com", 80))

# send the request line, the mandatory Host header, and the blank line
S.send("GET / HTTP/1.1\r\n")
S.send("Host: www.gahooa.com\r\n")
S.send("\r\n")

# print the first 1000 bytes of the response
print S.recv(1000)

S.close()
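When poking at responses this way, it quickly gets tedious to eyeball the raw output.  Here is a small (hypothetical) helper that splits a raw HTTP response into its status code, headers, and body, following the blank-line rule described above:

```python
def parse_response(raw):
    """Split a raw HTTP response into (status_code, headers, body).

    The header section ends at the first blank line, exactly as
    described above for HTTP 1.1.
    """
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status_code = int(lines[0].split()[1])   # "HTTP/1.1 302 Found" -> 302
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return status_code, headers, body

raw = ("HTTP/1.1 302 Found\r\n"
       "Location: http://www.google.com\r\n"
       "Content-Type: text/html\r\n"
       "\r\n"
       "<html></html>")
status, headers, body = parse_response(raw)
print(status)                # -> 302
print(headers["Location"])   # -> http://www.google.com
```

Feed it whatever `recv()` returned and the redirect debugging described above becomes a matter of checking `headers.get("Location")`.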

Conclusion

When you are not familiar with protocols such as HTTP, understanding “how things work” can be daunting.  But like many technologies out there, they really are simple (once understood).

The more truth and understanding you can fit into your perspective, the better you will be able to make informed decisions.

Gahooa!

File Extensions and Apache, a win-win solution

Here is the problem…  Either the developer loses, or the end user loses.  What could I possibly be talking about?  Allow me to explain…

Long ago, websites were authored using .html files.  Developers would hand code them to make sites which served their purposes quite nicely.  But as time went on, more was demanded of the web.  Server side languages, such as PHP, ASP, Java, Perl, Python, and more began to surface and become quite popular.

The file extension shown in the browser *usually* matches the file extension used on the server.  At least under Apache’s default configurations (and IIS, I believe).

http://www.site.com/home/index.html

But now, it is quite common to see this:

[screenshot: apache-win-win-1]

Or this:

[screenshot: apache-win-win-2]

Or even this (whatever it’s doing…)

[screenshot: apache-win-win-3]

But in reality…

They are all really returning a file with:

Content-type: text/html

That’s a pretty common approach to using server side languages.  There are a couple other approaches also, such as:

  1. Don’t use files at all, only directories:
    http://www.example.com/about
  2. Auto generate the files on the site (but then you lose the “interactive” nature of a server site language)
    http://www.example.com/about.html

The problems with the above are:

  • It gives the developers an “incorrect” file extension to work with (ie, embedding PHP in a .html file)
  • Or, it gives the end user a file like “about.asp”, but in reality, there is not a single character of ASP in the file they receive.

(“quit complaining”, you may say…  oh well… I do like things to be “optimal” when possible)

So I identified a way to suit both purposes nicely. We now name our scripts names like:

  • /home/about.html.php
  • /render/image.jpg.php
  • /foo/bar.xhtml.php

HOWEVER, when they are referenced via HTTP, the last extension is always omitted.

  • /home/about.html
  • /render/image.jpg
  • /foo/bar.xhtml

(doesn’t that look nice?)

To pull it off, we implemented an interesting Apache mod_rewrite rule:

RewriteCond %{REQUEST_FILENAME} (\.html|\.xhtml)$
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php

In plain English: if the request ends in “.html” or “.xhtml”, and the file (REQUEST + “.php”) exists, then use that file instead.
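To make that decision concrete, here is a little Python sketch of the same logic, with the filesystem stubbed out as a set of paths (the paths are the hypothetical examples from above):

```python
import re

# hypothetical stand-in for the files that exist under DocumentRoot
EXISTING_FILES = {
    "/home/about.html.php",
    "/foo/bar.xhtml.php",
}

def rewrite(request_path, existing_files=EXISTING_FILES):
    """Emulate the mod_rewrite rule above: if the request ends in
    .html or .xhtml and a matching .php twin exists, serve the .php
    file instead; otherwise pass the request through unchanged."""
    if re.search(r"(\.html|\.xhtml)$", request_path):
        candidate = request_path + ".php"
        if candidate in existing_files:
            return candidate
    return request_path

print(rewrite("/home/about.html"))   # the .php twin exists, so it is used
print(rewrite("/home/plain.html"))   # no twin: the request passes through
```

Note that, just like the rule itself, paths without an .html/.xhtml extension are never touched; serving things like /render/image.jpg would need a separate rule.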

In this way, the end user simply receives an “.html” file.  The developers are still looking at a “.php” file.  And everyone is happy.

Observations and Questions:

Developers at AppCove have taken to this quite readily.  There was a little confusion at first about linking to “.html.php”, but that was quickly resolved.

Does it impact performance?  I’m sure it has some impact, however small, but I have not tested it.  It would make an interesting benchmark.  My guess is that it would be negligible.

Useful?  Sure!  I think it is more “correct” to return a file with an extension that appropriately describes its content type.


Thoughts?