Sunday, January 27, 2013

Top 10 Technologies I Want My Software Engineers to Know In 2013

Last year I blogged about the top ten technologies I wanted my engineers at Berico Technologies to learn in 2012.  The post was so popular, I've decided to make it a tradition.  In addition to providing my new top ten list, I want to provide a little retrospective on the technologies that made the list and those that fell off this year, explaining why I've increased or devalued their importance.

Without further ado, I present:

Top 10 Technologies I want my Software Engineers to Know in 2013.


1.  The Modern Client-Side Web Development Stack
2.  Node.js
3.  Modern Messaging Frameworks
4.  Hadoop + HBase
5.  Clojure + Leiningen
6.  Twitter Storm
7.  Lucene/Solr
8.  Graph Databases
9.  A Platform as a Service Offering
10. Apache ZooKeeper

Rationale.


1.  The Modern Client-Side Web Development Stack:  Let's face it, the fastest-growing "sector" of the Enterprise stack is the Client-Side.  We have not only seen an explosion of new client-side Application/MVC frameworks, but also the adoption of a whole new process for composing and building web applications.  And we should emphasize the term "application"!  Modern websites are as complex as their desktop predecessors (perhaps even more so), as browsers continue to become more capable and user expectations grow.

Unfortunately, no single framework or library is worthy of being in the top ten on its own.  I will say, however, that the combination of these frameworks represents a fundamental shift in our community away from Server-side MVC and GWT-like frameworks towards hyper-capable clients.

There are a number of frameworks/libraries of note:

 

-  Application/MVC Frameworks:  Ember.js, Batman.js, Angular.js, and Knockout.js.

Please note that I've left Backbone and Sammy.js off the list, in part because I think their popularity is starting to wane as the newer breed of frameworks offers more capabilities.  Another framework generating a lot of excitement is Meteor.js, which attempts to provide a seamless stack (client-server-database) with a simple API.  I have only glanced at Meteor's documentation, but it looks promising.


-  Application Composition:  Require.js.


-  Build Tools:  Mimosa.js, Yeoman.

Mimosa.js is a new build/deployment framework developed by one of my friends, David Bashford.  While it hasn't gotten a lot of attention from the community, our company uses it for nearly all of its projects, and the number of people forking/starring it on GitHub seems to double each month.
      

-  Languages:  CoffeeScript, ClojureScript, TypeScript.

At this point, I don't think there is any dominant JavaScript alternative.  My opinion is that a team (not an individual) should adopt the one that best fits their collective personality.

-  Visualization:  D3.

D3 scored us a number of big wins in the visualization department last year.  Our company is also beginning to cultivate a number of D3-proficient engineers.  I see us continuing to spread the D3 goodness around our team indefinitely.


2.  Node.js:  In my mind, last year Node.js (well, Express and other web frameworks) proved it was a viable alternative to Ruby on Rails.  I believe this will be the year in which Node.js unseats Rails.  Numerous PaaS vendors began supporting the platform last year (OpenShift, Cloud Foundry, Heroku, Azure) and I'm sure many more will follow suit (where are you, Google App Engine?).

More importantly, Node.js IS NOT JUST A WEB FRAMEWORK!  Applications and libraries in a number of different domains are being written on top of the platform, and I can't wait to see what comes next (a NoSQL database?, a distributed processing framework?). 



3.  Modern Messaging Frameworks:  One of the top ten technologies last year was RabbitMQ.  Let me start by saying that my respect, admiration, and appreciation of RabbitMQ only grew during 2012 (there is no other AMQP broker in my book).  This year, I'm broadening the scope by including "modern messaging frameworks" as one of the top ten competencies in 2013.

The shift to include more messaging technologies came from the realization that there is a need in modern cloud architectures to have multiple messaging platforms.  AMQP, and specifically RabbitMQ, is representative of the "reliable messaging" tier of brokers (like JMS, MSMQ, TIBCO).  However, there is a distinct need for "high-throughput" and "batching" message brokers that sacrifice security and reliability for performance.  The dominant brokers I'm looking at in each tier are:

Reliable Messaging:  AMQP/RabbitMQ
High-Throughput:  ZeroMQ
Batching:  Apache Kafka

  

4.  Hadoop + HBase:  If it isn't obvious that the Hadoop ecosystem is incredibly important right now, you are living under a rock.  Almost every RDBMS vendor is embracing Hadoop with tie-ins to their database.  Many, including Oracle and SQL Server, will allow you to ship SQL tables to Hadoop, execute a MapReduce job, and pull the results back into the database.  As for HBase, it remains critically important as an operationally-proven petabyte scale NoSQL database.

This year, I don't expect any revolutionary changes to occur on the Hadoop + HBase platform, though I think the software will continue to mature as vendors adopt the platform and create commercial extensions.  The thing to watch for is the frameworks built on top of Hadoop, like Impala (released by Cloudera last year).  There's also the potential for the start of an Apache project attempting to clone "Spanner", Google's "Globally-Distributed Database".

 

5.  Clojure + Leiningen:  This will probably be the most upsetting item on this list, particularly for the Scala crowd.  Last year I wanted my engineers to learn a JVM language.  This year, I only want them to learn one: Clojure.  The decision to drop all other languages from this list came from the collective frustration our engineers faced last year using both Scala and JRuby (and our great experiences with Clojure).

So what happened with Scala and JRuby?

I think the success of JavaScript last year may have deemphasized Ruby's importance, which in turn deemphasized JRuby.  I personally like the Ruby language, but I constantly found myself struggling to find a good use-case for its application.  Another problem I think Java developers have with Ruby is needing to learn Ruby Gems in addition to dealing with Java dependency management.  Frankly, not too many people wanted to learn JRuby, favoring Scala or Clojure.

Scala, on the other hand, is a language I think many of us learned to hate last year.  On the surface, it appears to be a decent language.  However, the more we learned its syntax, the more we realized how needlessly complex and obtuse it could be.  In the hands of a really great engineer, it can be very elegant.  In the hands of everyone else, it can be difficult to read and understand.  More importantly, we didn't particularly enjoy Scala's mechanisms for interoperability with Java (they seemed strange in many cases).

The big eye-opener was Clojure.  Once they got past all of those parentheses, I think many people realized how simple and elegant the language is.  Interop with Java is extremely clean.  I personally found myself up and running in a couple of hours, using all of my favorite Java libraries without any issues.  This year we will continue to evangelize the language, pushing people and projects toward the platform.

6.  Twitter Storm:  Storm is doing for real-time computing what Hadoop did for batch processing.  I think we are just now seeing Storm rise in popularity, and I expect it will become even more popular as developers start building frameworks on top of it.  Our company, Berico Technologies, already has plans for building a data ingestion framework and a real-time data manipulation framework on top of it.  I would imagine many other developers are actively doing the same as I write this.
  

7.  Lucene/Solr:  This entry was at risk of falling off the top ten, but stayed on the list primarily because of the promise of SolrCloud this year.  Search is no longer a feature, but rather a requirement for many applications, and Lucene-based indexes will remain the dominant implementation.
 

8.  Graph Databases:  As graphs become more mainstream, I think engineers are starting to realize the value of databases optimized for joins.  The clear leader in this market is Neo4j, but I think it will start to see some competition from highly-scalable, distributed implementations like Titan.

More importantly, there has been a trend towards polyglot architectures (combining a graph database [for joins] with a relational database [for queries]).  Frameworks like Spring Data Neo4j simplify the development of these systems by providing annotation-driven Data Access Layer functionality similar to Hibernate and JPA.

In terms of usability, a framework to highlight is Tinkerpop's Blueprints, an abstraction for graph databases.  In addition to Blueprints, Tinkerpop has also written a number of complementary frameworks (Pipes, Gremlin, Frames, Furnace, and Rexster) enhancing the usability of your graph.
 
  

9.  A Platform as a Service Offering:  Platforms like RedHat OpenShift, VMWare Cloud Foundry, Windows Azure, Heroku and Google App Engine are the way of the future, in my opinion.  Being able to compose an application and not worry about installing and maintaining the services it relies upon is quite liberating for an engineer.  There is a clear cost and time savings in employing such solutions.  More importantly, I want my engineers thinking about PaaS design and its implications for applications so they can build and/or employ them for our customers who don't have the luxury of using a commercial offering.

10.  Apache ZooKeeper:  This probably seems like an odd addition to the list.  ZooKeeper is a framework that enables reliable communication and coordination between nodes in a cluster.  The framework is used to perform centralized configuration, key-value storage, distributed queues, leadership election, synchronization, failure detection and recovery, etc.  ZooKeeper is a key component in a number of important distributed platforms like Apache Hadoop, Apache Solr, Apache HBase, Twitter Storm, Neo4j HA, and Apache Kafka, to name a few.  Simply put, ZooKeeper is the underpinning of a number of important applications, and knowledge of it couldn't hurt.  More importantly, there's nothing that prevents you from building your own distributed application with ZooKeeper.
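To make that concrete, here is a minimal sketch using the zkCli.sh shell that ships with ZooKeeper, assuming an instance running locally on the default port (the znode path and value are just illustrative):

# Connect to a local ZooKeeper instance with the bundled CLI
bin/zkCli.sh -server 127.0.0.1:2181

# Inside the shell: store and read back a piece of centralized configuration
create /app-config "db.host=10.0.0.5"
get /app-config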

Favorable mention.

Finally, I wanted to give a "favorable mention" to a number of frameworks that didn't make the list:

- Spring Framework:  Still the trusted backbone of all of our Java applications, and I am just as dependent on it as I was three years ago.
- Redis: We're using it successfully on a couple of projects; the only downside is the lack of security, which prevents us from using it on all of our projects.
- MongoDB:  We use MongoDB on a couple of projects.  It's certainly proven itself to be the document database of choice for our company.
- Riak:  Incredibly interesting NoSQL offering.  We aren't using it at the moment, but a couple of our engineers have used it at other companies and we're genuinely fascinated by it.
- Datomic:  Another incredibly interesting database from the key contributors to, and creator of, the Clojure language.  Datomic offers the ability to keep a temporal history of mutations to the records stored within, making it uniquely suitable for some of our clients' problem sets.
- LMAX Disruptor:  A framework for performing multithreaded, lock-free operations on a ring buffer.  Developers at LMAX have optimized the framework to work with the underlying hardware, like ensuring variables are cached effectively in L1.

Sliding off the list from last year.

These are the frameworks that fell off the list this year, and why.

- Spring Framework:  Spring is still incredibly important, but it's at the point where we take it for granted.  Knowledge of the framework is practically mandatory in our company, so it's not as important a technology to learn this year.
- Ruby on Rails:  We are no longer building on Rails.  Rails is still a great framework, but it's being overshadowed by the prospect of an end-to-end JavaScript web stack.  We've also had some issues with Ruby's thread model, making it incredibly difficult for us to integrate messaging into the Rails stack.
- Redis:  Redis is still a great key-value store, but its lack of security features makes it difficult for our company to use in our clients' architectures.  We still love it, however.
- CoffeeScript:  I still write in CoffeeScript all the time, but it's time to acknowledge that there are a number of great new compile-to-JavaScript languages out there.  For my .NET developers, I can't in good conscience recommend they learn CoffeeScript when they get great support for TypeScript in Visual Studio.
- OSGi:  OSGi became a great frustration for us last year.  The API is antiquated (i.e.: no use of Generics, registration of components is not straightforward), containers function inconsistently (we gave up on JBoss 7's OSGi support), and it's a real pain in the ass to have to bundle libraries you didn't write.  I think the idea of a component architecture is awesome, but I think it needs to be a part of the core Java platform and not an external framework.
- RabbitMQ:  We still love and use RabbitMQ all the time.  In fact, I just wrote 8 posts on the topic!  RabbitMQ was not so much demoted as expanded to include ZeroMQ and Kafka.
- A new JVM Language:  I don't advocate staying with the Java language, but I want to warn you that you may be frustrated with your options.  As you've already seen, I'm advocating you learn Clojure above all other languages.  If you don't learn Clojure, give Scala a try.  We may even be surprised this year with a resurgence in Groovy's popularity as optimizations of the JVM make the language much faster.  Outside of those choices, I think you will find learning a language off the platform more rewarding.


RabbitMQ Configuration and Management Series

Lately, I've been working heavily with RabbitMQ and wanted to share many of the things I have learned in the process.  Specifically, I wanted to demonstrate how to configure and administer RabbitMQ from the point of installing the OS, to configuring the firewall, clustering the brokers, enabling SSL, and load-balancing the cluster.  Basically, what you would want in a cluster if you had to deliver a messaging solution in a hostile environment.

I will continue to write more articles about RabbitMQ, so please come back to this page to get the latest index of articles.  I hope you find these articles helpful.

I recommend reading the documentation in the following order:
  1. Installing RabbitMQ on CentOS 6.3 Minimal - demonstrates how to perform a complete install of RabbitMQ on CentOS 6.3 Minimal (headless, no unnecessary packages), including two ways to install Erlang.
  2. Enabling RabbitMQ Management Console - an essential component for managing RabbitMQ, this is probably the second thing you should install once you have a broker.
  3. Configuring iptables for a single instance of RabbitMQ - don't just turn off your firewall!  Configure iptables to allow clients to connect to the broker without exposing the rest of your system.
  4. Configuring iptables for a RabbitMQ Cluster - the process for allowing a cluster to communicate with the firewall enabled is a little more involved.  This post will show you how to do it.
  5. Clustering RabbitMQ - will show you how to cluster RabbitMQ brokers.
  6. Configuring SSL for RabbitMQ - lock your RabbitMQ instances down by configuring them to use SSL.  I'll show you how to do this, including the generation of certificates.
  7. Securing the RabbitMQ Management Console with SSL - What's the point of locking down the AMQP port if you don't lock down the Management Console?  This post will show you how to enable SSL-authentication for the Management Console.
  8. Binding non-SSL-capable AMQP Clients to SSL RabbitMQ - some clients do not support AMQP/SSL.  I'll show you a generic solution for using a non-SSL-capable client (Node.js in the example) with a RabbitMQ instance protected by SSL.

Binding non-SSL-capable AMQP Clients to SSL RabbitMQ

This is an article in the RabbitMQ Configuration and Management Series.  For more articles like this, please visit the series' index.

Many languages support AMQP, but not all of their client libraries support SSL. Probably the most popular platform whose AMQP client does not support SSL is Node.js. Fortunately, there is a very easy solution that does not involve rewriting a client library, and it works for every language.
The solution is to utilize stunnel, a process that will initiate an SSL connection (via OpenSSL), wrapping the underlying TCP-based communication initiated by the client. More plainly, your application connects to a port on localhost using AMQP, stunnel connects to the broker via SSL, and then pipes your local request to the AMQP broker through the SSL tunnel.
stunnel can be installed on most major operating systems (Windows, UNIX, OSX, Linux), but we will talk primarily about how to do it on OSX and CentOS.
  1. Install stunnel.
    On OSX: sudo port install stunnel
    On CentOS: sudo yum install stunnel
  2. Next, we need to configure stunnel.
Create a local file and name it something like stunnel.conf.
vi stunnel.conf
Add the following to the stunnel.conf:
client = yes
foreground=yes
cert = {path to cert}/{client name}.keycert.pem

[amqp]
accept = {local port the client will communicate on}
connect = {broker IP address}:{broker SSL port}
Dissecting the file:
  • client: Indicates that we are a client contacting a server protected by SSL.
  • foreground:  Whether stunnel should run in the console instead of in the background.
  • cert:  A file with both the certificate and private key.
  • [amqp]:  This is a header for a route registration (we are calling it "amqp").
  • accept:  The incoming port to accept communications.
  • connect:  The port, or optionally, host and port to establish an SSL connection with.
Using our previous example:
client = yes
foreground=yes
cert = app01.keycert.pem

[amqp]
accept = 5673
connect = 192.168.192.155:5673
  3. Generate the client certificate.
    Using the certificate scripts in CMF-AMQP-Configuration:
    sh create_client_cert.sh {client} {password}
    For example:
    sh create_client_cert.sh app01 password123
    stunnel requires our key and certificate to be collocated in the same file. To do this, let's create a new file and append the contents of {client}.cert.pem and {client}.key.pem generated by create_client_cert.sh:

    cd {path/to/cert/dir}
    cat {client}.cert.pem >> {client}.keycert.pem
    cat {client}.key.pem >> {client}.keycert.pem

    For example:
    cd client/
    cat app01.cert.pem >> app01.keycert.pem
    cat app01.key.pem >> app01.keycert.pem
    
    The "keycert" file is now ready to be used with stunnel.
  4. Start stunnel.
    stunnel {path/to/configuration}/stunnel.conf
    For example, assuming we are in the directory of stunnel.conf:
    stunnel stunnel.conf
  5. Now, simply start your client, binding to the port on localhost you chose stunnel to accept connections on.  A quick way to confirm the tunnel is up before starting the client is sketched below.
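The following is a minimal sketch for checking the tunnel, assuming the example addresses above, that the app01 certificate files are in the current directory, and a Linux host (the netstat flags differ on OSX):

# Confirm stunnel is listening on the local port declared in the [amqp] section (5673)
netstat -tln | grep 5673

# Optionally, exercise the SSL side of the tunnel directly against the broker
openssl s_client -connect 192.168.192.155:5673 \
  -CAfile cacert.pem \
  -cert app01.cert.pem \
  -key app01.key.pem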

Verifying stunnel Works

The CMF-AMQP-Configuration repository contains a test client to demonstrate this capability.

Install Node.js if you don't already have it installed.

Clone the CMF-AMQP-Configuration repository on GitHub:
git clone https://github.com/Berico-Technologies/CMF-AMQP-Configuration.git

Change into the node-test-client directory:
cd node-test-client

Install the project's dependencies:
npm install

Edit the config.js file and add the correct stunnel and connection settings:
vi config.js
Using the settings from the example:
module.exports = { 
  host: "localhost", 
  port: 5673, 
  vhost: "/", 
  login: "guest", 
  password: "guest", 
  publishingInterval: 50 
};
Now run the example:
node run_test.js
If everything works as prescribed, the client will connect through the tunnel and you should see messages being published in the console output.

Securing the RabbitMQ Management Console with SSL

This is an article in the RabbitMQ Configuration and Management Series.  For more articles like this, please visit the series' index.

The process for securing the RabbitMQ Management Console with SSL is very similar to securing the AMQP port. Instead of rehashing how to do certificates, we will assume that you had followed the tutorial on Configuring SSL for RabbitMQ and are now securing the console.
In Configuring SSL for RabbitMQ, we took the convention of using the /etc/rabbitmq/ssl directory for storing certificates. If you followed the directions in that post, you should already have the certificates you need for securing the console. Alternatively, if you choose to use a separate certificate for the Management Console than for the AMQP port, simply create a new certificate and key using the make_server_cert.sh script.

Configure RabbitMQ to use SSL for the RabbitMQ Management Console.

  1. Edit the rabbitmq.config file in the /etc/rabbitmq directory:
    sudo vi /etc/rabbitmq/rabbitmq.config
  2. Add a configuration entry:
      [{rabbitmq_management,
        [{listener, 
          [{port, 15672},
           {ssl, true},
           {ssl_opts, 
             [{cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem"},
              {certfile,   "/etc/rabbitmq/ssl/server/{hostname}.cert.pem"},
              {keyfile,    "/etc/rabbitmq/ssl/server/{hostname}.key.pem"}]}
           ]}
        ]}
      ].
    
    And of course, using our example:
      [{rabbitmq_management,
        [{listener, 
          [{port, 15672},
           {ssl, true},
           {ssl_opts, 
             [{cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem"},
              {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem"},
              {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem"}]}
           ]}
        ]}
      ].
    
    Putting it all together, here is the config with AMQP/SSL, the iptables port range for clustering, and the Management Console using SSL:
    [{rabbit, [ {tcp_listeners, [5672] },
               {ssl_listeners, [5673] },
               {ssl_options, [
                 {cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem" },
                 {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem" },
                 {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem" },
                 {verify, verify_peer},
                 {fail_if_no_peer_cert, true }]}
    ]},
    {rabbitmq_management,
      [{listener, 
        [{port, 15672},
         {ssl, true},
         {ssl_opts, 
           [{cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem"},
            {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem"},
            {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem"}]}
         ]}
     ]},
    {kernel, [ {inet_dist_listen_min, 9100}, 
                {inet_dist_listen_max, 9105} ]}
    ].
    
  3. Restart RabbitMQ.
    sudo service rabbitmq-server restart

Verify SSL on Management Console

Open your browser to the RabbitMQ Management Console, but don't forget to use "https".
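If you prefer to verify from the command line, here is a minimal sketch using curl against the management HTTP API, assuming the example host "rabbit3", the default guest account, and a local copy of the CA certificate:

# Request the management API overview over HTTPS, validating the server
# certificate against our Certificate Authority
curl --cacert cacert.pem -u guest:guest https://rabbit3:15672/api/overview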

Forcing the Browser to Authenticate using Certificates.

  1. Edit the rabbitmq.config file in the /etc/rabbitmq directory.
    Add {verify, verify_peer}, {fail_if_no_peer_cert, true } to the ssl_opts of the rabbitmq_management listener.
      [{rabbitmq_management,
        [{listener, 
          [{port, 15672},
           {ssl, true},
           {ssl_opts, 
             [{cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem"},
              {certfile, "/etc/rabbitmq/ssl/server/{hostname}.cert.pem"},
              {keyfile, "/etc/rabbitmq/ssl/server/{hostname}.key.pem"},
              {verify, verify_peer},
              {fail_if_no_peer_cert, true }]}
           ]}
        ]}
      ].
    
    Which now looks like:
    [{rabbit, [ {tcp_listeners, [5672] },
               {ssl_listeners, [5673] },
               {ssl_options, [
                 {cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem" },
                 {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem" },
                 {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem" },
                 {verify, verify_peer},
                 {fail_if_no_peer_cert, true }]}
    ]},
    {rabbitmq_management,
      [{listener, 
        [{port, 15672},
         {ssl, true},
         {ssl_opts, 
           [{cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem"},
            {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem"},
            {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem"},
            {verify, verify_peer},
            {fail_if_no_peer_cert, true }]}
         ]}
     ]},
    {kernel, [ {inet_dist_listen_min, 9100}, 
                {inet_dist_listen_max, 9105} ]}
    ].
    
  2. Restart RabbitMQ.
    sudo service rabbitmq-server restart

Verify Certificate Authentication

Visit the RabbitMQ Management Console without a certificate from the broker's CA installed in your browser.  The connection should be refused, because the browser cannot present a client certificate during the TLS handshake.
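The same rejection can be observed from the command line; a minimal sketch with curl, assuming the example host "rabbit3" and a local copy of the CA certificate:

# No client certificate is presented, so the broker should refuse the TLS
# handshake (verify_peer with fail_if_no_peer_cert set to true)
curl -v --cacert cacert.pem https://rabbit3:15672/api/overview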

Generate and Install Client Certificate for Browser-based Authentication.

Each browser and sometimes browser-OS pairing has a different way of installing certificates.
  1. Generate a Client Certificate.
    Using the create_client_cert.sh script, generate a certificate for your user.
    sh create_client_cert.sh {username} {password}
    For example:
    sh create_client_cert.sh jdoe password123
    In the ssl/client directory, you will see a couple of new files:
  • jdoe.key.pem: John Doe's private key
  • jdoe.cert.pem: John Doe's public key
  • jdoe.req.pem: John Doe's certificate signing request (CSR)
  • jdoe.keycert.p12: John Doe's Public-Private Key pair that can be used by the operating system or browser.
  2. Install the Certificate for Chrome and Safari.
    Browsers like Chrome and Safari use the underlying OS's keystore, so the instructions will depend on the OS. This is how to do it in OSX.
    a. Double-click the jdoe.keycert.p12, and OSX, at the very least, will install the certificate into its Keychain.
    b. You will be prompted for the password you entered in the create_client_cert.sh command.
    c. Visit the RabbitMQ Management Console again.
    d. You will be prompted to select a certificate. In this case, there will only be one to select.
    e. Chrome will prompt you for permission to sign the request with your private key.
    And you should be in!
  3. Install the Certificate for Firefox.
    a. Navigate in the menu to Preferences -> Advanced -> Encryption.
    b. Press the "View Certificates" button.
    c. Press the "Import..." button, and select the certificate to install (in our case joe.keycert.p12). Enter the password used for the creation of the certificate when you executed the create_client_cert.sh command ("password123" for the example).
    d. If the password is correct, Firefox will congratulate you.
    e. You will now see the certificate in the "Your Certificates" tab of the "Certificate Manager" window. Our example certificate for "jdoe" is at the bottom.
    f. Visit the RabbitMQ Management Console again.
    g. You will be prompted for a certificate; Firefox will automatically select the best certificate based on the Certificate Authority.
    h. After you've pressed "OK", you should be in!  (A command-line equivalent is sketched below.)
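For completeness, here is a minimal command-line equivalent using curl, assuming the example host "rabbit3", local copies of jdoe's certificate and key, and the default guest account:

# Present jdoe's certificate and key during the TLS handshake, then
# authenticate to the management API with a RabbitMQ user as usual
curl --cacert cacert.pem \
  --cert jdoe.cert.pem \
  --key jdoe.key.pem \
  -u guest:guest \
  https://rabbit3:15672/api/overview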

Configuring SSL for RabbitMQ

This is an article in the RabbitMQ Configuration and Management Series.  For more articles like this, please visit the series' index.

Configuring RabbitMQ for SSL is a fairly straightforward process. It involves generating the certificates and key file for the server to perform SSL, and registering those files with RabbitMQ via the rabbitmq.config file.

Generating Certificates

The difficult part of this process is knowing how to generate and manage Public-Key Infrastructure. If you are unfamiliar with this process, or work in an organization that already has its own infrastructure, we would recommend you consult with whomever manages that infrastructure to get the correct keys and certificates.
In the event that you need to do this work on your own, we have created a set of scripts that will simplify the process. These scripts literally automate the process documented by RabbitMQ on this page.
  1. Install git on a machine of your choice. This does not have to be on one of the cluster nodes, but you will have to copy files to each broker if you do not.
    sudo yum install git
  2. Clone the CMF-AMQP-Configuration repository.
    git clone https://github.com/Berico-Technologies/CMF-AMQP-Configuration.git
    Change into the CMF-AMQP-Configuration/ssl/ directory:
    cd CMF-AMQP-Configuration/ssl/
    In this directory you will find the following files:
  • openssl.cnf: This is the OpenSSL configuration file.
  • setup_ca.sh: This will set up the Certificate Authority you will need to generate and issue client and server certificates from.
  • make_server_cert.sh: This will generate a certificate for a server (like a RabbitMQ Broker).
  • create_client_cert.sh: This will generate a certificate for a client application (connecting to a RabbitMQ Broker).
  • implode.sh: This will remove all directories and content generated by the other scripts, but it will not delete the scripts or configuration.
    If you choose to fork this configuration in GitHub, the .gitignore file in this directory will prevent the inclusion of the certificate files.
  3. Edit the openssl.cnf file (as needed).
    Here you can specify default values for the certificate. Please see the OpenSSL documentation to get a full list of values.
    By default, this file will force a SHA1 2048-bit key good for 1 year.
  4. Generate a Certificate Authority.
    You may need to use sudo depending on where you put the project.
    sh setup_ca.sh [certificate authority common name (CN)]
    For example:
    sh setup_ca.sh OfficeMagiCA
  5. Generate a Server Certificate.
    You may need to use sudo depending on where you put the project.
    sh make_server_cert.sh [hostname] [password]
    For example:
    sh make_server_cert.sh rabbit3 rabbit
  6. (Optional) Generate a Client Certificate.
    This will not be used during this portion of the tutorial.
    You may need to use sudo depending on where you put the project.
    sh create_client_cert.sh [client name] [password]
    For example:
    sh create_client_cert.sh rabbit-client1 rabbit
  7. Copy the certificates to your RabbitMQ Broker (a quick sanity check on the copied files is sketched after this step).
    The previous processes will have generated a lot of files that you will not need at the Broker. Many of these files were created in the process of generating certificate signing requests (CSR), where a server or client certificate is "stamped" by the CA to establish a "chain of trust".
    For this tutorial, we will need the following certificates:
  • ca/cacert.pem: The Certificate Authority's certificate.
  • server/{hostname}.cert.pem: The Server/Broker's certificate.
  • server/{hostname}.key.pem: The Server/Broker's private key.
    Protect the *.key.pem at all costs! This is literally the password to the certificate (i.e.: don't give this out).
    We have taken the convention of storing the certificates in a folder called ssl located in the RabbitMQ configuration directory (/etc/rabbitmq/).
    Preserving the structure created by the CMF-AMQP-Configuration project, copy those files into the directory:
    sudo mkdir -p /etc/rabbitmq/ssl/ca
    sudo mkdir /etc/rabbitmq/ssl/server
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}ca/cacert.pem \
      /etc/rabbitmq/ssl/ca/cacert.pem
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}server/{hostname}.key.pem \
      /etc/rabbitmq/ssl/server/{hostname}.key.pem
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}server/{hostname}.cert.pem \
      /etc/rabbitmq/ssl/server/{hostname}.cert.pem

    Using our example, it looks like:
    sudo mkdir -p /etc/rabbitmq/ssl/ca
    sudo mkdir /etc/rabbitmq/ssl/server
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}ca/cacert.pem \
      /etc/rabbitmq/ssl/ca/cacert.pem
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}server/rabbit3.key.pem \
      /etc/rabbitmq/ssl/server/rabbit3.key.pem
    sudo cp {/path/to/CMF-AMQP-Configuration/ssl/}server/rabbit3.cert.pem \
      /etc/rabbitmq/ssl/server/rabbit3.cert.pem
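Before wiring these files into RabbitMQ, it is worth sanity-checking them. A minimal sketch using OpenSSL, assuming the example hostname "rabbit3" and the /etc/rabbitmq/ssl layout above:

# Inspect the broker certificate's subject and validity window
openssl x509 -in /etc/rabbitmq/ssl/server/rabbit3.cert.pem -noout -subject -dates

# Confirm the broker certificate chains back to our Certificate Authority
openssl verify -CAfile /etc/rabbitmq/ssl/ca/cacert.pem \
  /etc/rabbitmq/ssl/server/rabbit3.cert.pem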
    

Configuring RabbitMQ to support SSL Connections

To configure RabbitMQ to support SSL, you simply need to add some minor configuration options to the rabbitmq.config in the /etc/rabbitmq directory.
If the file doesn't exist, we simply need to create it and RabbitMQ will pick up those changes upon start/restart.
  1. Edit the rabbitmq.config file:
    sudo vi /etc/rabbitmq/rabbitmq.config
  2. Add the following configuration:
    [{rabbit, [ {tcp_listeners, [5672] },
               {ssl_listeners, [5673] },
               {ssl_options, [
                 {cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem" },
                 {certfile, "/etc/rabbitmq/ssl/server/{hostname}.cert.pem" },
                 {keyfile, "/etc/rabbitmq/ssl/server/{hostname}.key.pem" },
                 {verify, verify_peer},
                 {fail_if_no_peer_cert, false }]}
    ]}
    ].
    
    Where our configuration looks like:
    [{rabbit, [ {tcp_listeners, [5672] },
               {ssl_listeners, [5673] },
               {ssl_options, [
                 {cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem" },
                 {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem" },
                 {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem" },
                 {verify, verify_peer},
                 {fail_if_no_peer_cert, true }]}
    ]}
    ].
    
    And if you are already clustered with iptables configured:
    [{rabbit, [ {tcp_listeners, [5672] },
               {ssl_listeners, [5673] },
               {ssl_options, [
                 {cacertfile, "/etc/rabbitmq/ssl/ca/cacert.pem" },
                 {certfile, "/etc/rabbitmq/ssl/server/rabbit3.cert.pem" },
                 {keyfile, "/etc/rabbitmq/ssl/server/rabbit3.key.pem" },
                 {verify, verify_peer},
                 {fail_if_no_peer_cert, true }]}
    ]},
    {kernel, [ {inet_dist_listen_min, 9100}, 
                {inet_dist_listen_max, 9105} ]}
    ].
    
    There are some important options to note:
  • tcp_listeners: This is the clear-text port to answer requests. Remove this if you want your broker to only accept SSL.
  • ssl_listeners: This is the port to accept SSL connections. Make sure you have enabled that port in iptables.
    The SSL-specific options (ssl_options):
    • cacertfile: The certificate file of the CA.
    • certfile: The certificate of this broker.
    • keyfile: The private key of this broker.
    • verify: If "verify_peer" is set, the client must present a certificate that will be verified by the broker.
    • fail_if_no_peer_cert: This is supposed to mean that the connection will fail if the peer does not present a certificate and this property is set to true. But due to a bug in Erlang, if the {verify, verify_peer} option is set, fail_if_no_peer_cert is ignored if set to false (i.e.: the client will have to supply a validated certificate).
  3. Restart RabbitMQ.
    sudo service rabbitmq-server restart

Verifying SSL

  1. We have supplied a test client in the CMF-AMQP-Configuration repository to verify that the SSL connection works.
  2. Alternatively, the RabbitMQ Management Console lists the available ports for clients to connect on its main page.  A quick handshake test using OpenSSL is also sketched below.
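Here is a minimal sketch of that handshake test, assuming the example host "rabbit3", the optional rabbit-client1 certificate generated earlier, and that you are running from the CMF-AMQP-Configuration/ssl directory:

# Attempt a full TLS handshake against the broker's SSL listener (5673 in the example).
# A client certificate is supplied because fail_if_no_peer_cert is set to true.
openssl s_client -connect rabbit3:5673 \
  -CAfile ca/cacert.pem \
  -cert client/rabbit-client1.cert.pem \
  -key client/rabbit-client1.key.pem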