Developer-Driven Database Design

Databases designed and maintained collectively by teams of developers using mostly migration tools and ORMs display certain defining characteristics:

  • All columns except for primary keys are nullable.
  • No indexes except on primary and maybe foreign keys.
  • Lots of bit columns.
  • ‘Type’ columns which determine for a row which sets of columns will have data and which will all be null.
  • Wide tables with lots of columns.
  • Tables and columns named after specific features.
  • Very few or no stored procedures, functions, views, or triggers.
  • Very few complex queries; lots of simple no-join queries.
  • Transactional boundaries are set in code, if anywhere, but are often not explicitly or consistently defined.

Old arguments between developers and DBAs about denormalization seem almost quaint when you think back on them now. Developers don’t even bother with ACID if they’re using NoSQL.

But I have to admit, I’m not sorry to see the backside of triggers. And DBAs and the rest of the database world (to use an indefensibly vague term) were probably too conservative in many cases, which often put them in conflict with developers. It’s not hard to see whose values are dominating right now.

Posted in software | Leave a comment

Setting up Puppet on Windows

Our system configuration strategy consists mainly of saved VMs, a hodgepodge of scripts and notes scattered about, a lot of manual effort, and good intentions. We have a mixed linux/windows environment so we’ve never been sure if the new DevOps tools offered us much. But we decided to take the plunge and start trying out Puppet’s relatively new Windows offerings.

Install puppet agent on Windows

You can’t run your central Puppet server on Windows, but you can install a puppet agent, which can run standalone or take marching orders from a puppet master running on linux. Even if you’re not interested in standalone, it’s helpful to make sure it works before trying it with your master.

The Puppet Windows instructions are pretty straightforward. Before you start, though, think about whether you’ll want to run puppet as administrator or as a regular user. On Windows 2008/Windows 7, if you run puppet as administrator, data will be stored at C:\ProgramData\PuppetLabs\puppet. For other users, data will get stored relative to %HOMEDRIVE%%HOMEPATH%\.puppet.

Download and run the ruby 1.8.7 installer. If you have a different version of ruby already installed, you obviously need to figure out how you want to handle the PATH variable.

Install some gems:

gem install sys-admin win32-process win32-dir win32-taskscheduler --no-rdoc --no-ri
gem install win32-service --platform=mswin32 --no-rdoc --no-ri --version 0.7.1

Download facter and puppet from github, unzip each, and in each uncompressed folder run ruby install.rb (do facter first).

Test Puppet standalone on Windows

At this point, you should be able to run puppet standalone, which isn’t terribly interesting, but you can test that things are okay so far. Here’s a simple manifest, for example, that ensures that a test file stored in a source folder exists in a target folder and that my MySQL service is running:

file { "c:/test/puppet/target/win_test_file.txt":
  ensure => 'file',
  owner  => 'Administrator',
  source => 'c:/test/puppet/source/win_test_file.txt',
}

service { 'MySQL':
  ensure => 'running',
  enable => true,
}
Save this into a file named init.pp, and run puppet apply init.pp (puppet.bat is now in your ruby install’s bin folder). It should turn on the MySQL service if it’s off and copy the test file over from source to target if it’s missing from target.

Puppet can control a fair number of things on Windows, as listed in the docs, but it’s not very compelling unless you add in puppet’s ability to control agents from a central puppet server.

Install Puppetmaster on Linux

There are many other, better instructions for installing puppet on linux, but just to keep documenting what I did for this case…I installed the master on an old Oracle Enterprise Linux vm. Again, you need ruby. The puppet instructions recommend using the ruby package that comes with your OS, but I used rvm and matched the same version of ruby, 1.8.7, as on Windows, though it probably doesn’t matter.

Install rvm by first downloading its rvm-installer script, then running it:
sudo bash rvm-installer

I started installing puppet from a gem first, but the latest gem on the download site was only 2.7.6. There are a few warnings floating around about not using a higher version on agents than on the master, so I got the source from github instead. Run rvmsudo ruby install.rb to install it.

Start it up with rvmsudo puppet master --mkusers. The --mkusers flag creates a puppet user if it doesn’t exist yet. The first time you run this, it should also create the master security certificate, I believe.

Make sure the firewall allows 8140/tcp.

Introduce your Windows agent to your Linux master

Back on your Windows agent, add the server to your puppet.conf (ie. C:\ProgramData\PuppetLabs\puppet\etc\puppet.conf):
server = <>

Set up puppet with your DNS, or add to your hosts file at C:\WINDOWS\system32\drivers\etc\hosts a line like <> puppet.

Try puppet agent --test --verbose. You can also specify the master explicitly by adding --server <>.

You should get output like

info: Creating a new SSL key for
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for
info: Certificate Request fingerprint (md5):
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled

This attempts to connect to the puppet master and sends a certificate request. Back on the puppet master, check out the waiting requests with puppet cert --list. Your client host should show up. Accept it with puppet cert --sign <>.

Go back to your Windows agent and try puppet agent --test again. It should now output something like notice: Finished catalog run with no errors.

Judging from online comments, getting the certificates working is a common stumbling block for setting up puppet clients in general. The error messages are often not very informative. For example, I kept getting this error:

err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

It took a while to realize that I had an old, incorrect server key stored on the client. Removing the files from C:\ProgramData\PuppetLabs\puppet\etc\ssl then trying again worked.

Set up a manifest to execute

I set up more or less the same manifest on the master as I had run for standalone. It would be nice to store source files on the master, but puppet currently requires that Windows files and packages live on the Windows agent (or some share or location reachable by the agent, of course).

If you try to define a *nix file location in a manifest on the linux master to copy over to Windows, you will get an error like: err: Failed to apply catalog: Parameter source failed: Cannot use relative URLs.

Once it works from the command line, you can set up a Windows service to check in with the master regularly with nssm:

nssm.exe install puppet-agent <FULLPATH>\puppet.bat agent --server <>

The server arg shouldn’t really be necessary if you have it defined in your puppet.conf.

And there you are! Sit back and let your puppet master control your services, set up users and groups, and run msi packages or other executables on your Windows instances.

Posted in software | 9 Comments

Configuring Spring Integration channels without XML

I’ve been looking at some messaging frameworks lately and trying to find something that’s not too obtrusive. Spring Integration seems straightforward, plays nicely with our other Spring stuff, and runs in your application, which is a plus for us, but it’s hard to find simple examples to get started with, especially if you want to avoid xml configuration.

Spring makes it hard to totally skip all xml, but here’s an attempt to configure a pollable channel and a subscribable channel with as little xml as I can get away with.

First, the Maven pom has these dependencies:

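At minimum that means spring-integration-core; something like this (the version shown is an assumption from that era):

```xml
<dependency>
  <groupId>org.springframework.integration</groupId>
  <artifactId>spring-integration-core</artifactId>
  <version>2.0.5.RELEASE</version>
</dependency>
```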

The configuration xml is just a pointer to the java configuration (the TestAppConfig class).

In java, we configure the context with a QueueChannel, a PublishSubscribeChannel, and a TaskScheduler.
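A sketch of what that TestAppConfig could look like (bean names here are illustrative; Spring Integration looks for a TaskScheduler bean named taskScheduler to drive its pollers):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class TestAppConfig {

    // A pollable channel: messages queue up until a consumer asks for them.
    @Bean
    public QueueChannel pollableChannel() {
        return new QueueChannel();
    }

    // A subscribable channel: messages get pushed to every subscriber.
    @Bean
    public PublishSubscribeChannel subscribableChannel() {
        return new PublishSubscribeChannel();
    }

    // Pollers need a scheduler; Spring Integration expects the bean name 'taskScheduler'.
    @Bean
    public TaskScheduler taskScheduler() {
        return new ThreadPoolTaskScheduler();
    }
}
```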

I didn’t see a simple implementation of MessageHandler, so I wrote a test handler that just prints out messages.
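Something like this works as a minimal printing handler (the class name is mine, and the imports assume Spring Integration 2.0’s package layout):

```java
import org.springframework.integration.Message;
import org.springframework.integration.core.MessageHandler;

// A trivial MessageHandler that just prints whatever it receives.
public class PrintingHandler implements MessageHandler {

    private final String name;

    public PrintingHandler(String name) {
        this.name = name;
    }

    public void handleMessage(Message<?> message) {
        System.out.println(name + "; [Payload=" + message.getPayload()
                + "][Headers=" + message.getHeaders() + "]");
    }
}
```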

For this demo, the channels and their consumers are squeezed into one main method. We get the channels out of the Spring context, set up consumers to listen to them, then send out simple string messages to each of the channels.
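A rough version of that main method; PrintingHandler stands in for the printing test handler mentioned above, and the context file name is a placeholder:

```java
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.endpoint.PollingConsumer;
import org.springframework.integration.message.GenericMessage;
import org.springframework.scheduling.support.PeriodicTrigger;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        QueueChannel pollable = ctx.getBean("pollableChannel", QueueChannel.class);
        PublishSubscribeChannel subscribable =
                ctx.getBean("subscribableChannel", PublishSubscribeChannel.class);

        // Subscribers on the pub-sub channel get every message pushed to them.
        subscribable.subscribe(new PrintingHandler("Handler B"));
        subscribable.subscribe(new PrintingHandler("Handler C"));

        // The queue channel needs a polling consumer to pull messages off it.
        PollingConsumer consumer =
                new PollingConsumer(pollable, new PrintingHandler("Handler A"));
        consumer.setTrigger(new PeriodicTrigger(500));
        consumer.setBeanFactory(ctx.getBeanFactory());
        consumer.start();

        pollable.send(new GenericMessage<String>("Message on the pollablechannel"));
        subscribable.send(new GenericMessage<String>("Message on the subscribablechannel"));

        Thread.sleep(2000); // give the poller a chance to fire before shutting down
        ctx.close();
    }
}
```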

Try running it and you should see the output of all the consumers as they receive the messages and let the message handler process them.

Handler A; [Payload=Message on the pollablechannel][Headers={timestamp=1318972120696,id=9db9d54a-a2d3-4396-bfac-632c3b4b861f}]
Handler B; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]
Handler C; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]

Woohoo! Not that useful in itself, but it shows the basic idea of channels and consumers.

Posted in java, software | Tagged | 4 Comments

Adding more users to your EC2 instance with IAM

Amazon makes it easy to try out AWS with a free micro instance. As you start using it more, one of the first things you might want to do is let others have access to your instance. Using the IAM service, it’s easy to add more users and set up a management console address that’s not directly tied to your own Amazon account.

Go to your AWS Console (I’m assuming you’ve already set up an account). The screen at this point should have the message, ‘Sign In or Create an AWS Account’, and prompt you for your Amazon account username and password. Sign in.

At the AWS Management Console prompt, select ‘AWS Identity and Access Management (IAM)’. You should be at the IAM dashboard. 

At the bottom of the screen, under AWS Account Alias, notice the IAM User sign-in URL. We want to change that url, which starts with a bunch of random numbers, to something easier to deal with. Think of an alias you want to use. We’re going to use ‘acme-app’.

This is only a url for the AWS console. Your end-users won’t see it so it’s not a huge deal. Click the ‘Create Account Alias’ button at the bottom and put in your alias, eg. ‘acme-app’. The url for the console would then become ‘’. You can go back later and change it if you want.

Next click the big ‘Create a new group of users’ button.

We’ll just do an admin group, but there are a lot of different authorization settings you can use. Type in a group name of ‘admin’. On the next screen, ‘Permissions’, click the ‘Administrator Access’  select button. Just hit ‘Continue’ through the edit permissions screen.

On the users screen, create user ids for all your admins.

On the next screen there is a ‘Download credentials’ link. Download it somewhere you won’t forget.

To set initial passwords for your users, click the users link under IAM Resources. Then go to the Security Credentials tab below, and click ‘Manage Password’. Assign or auto-generate a password.

Your users will also need their access key id and secret access key from the credentials.csv you downloaded in order to connect to AWS through api calls, ie. command-line.

And that’s it! Other users can now get to your instance. If they go to the url you defined, they should see ‘AWS Account acme-app’, rather than the initial basic AWS screen we noted above, and only the user accounts you defined for it will be valid.

The owner of the account will always be able to get into the console with either a user account defined for the new url, or their normal Amazon logon. If you’re on the new app-specific login screen but want to use your AWS account instead, click the ‘Sign in using AWS Account credentials’ link.

Posted in cloud, software | Tagged | Leave a comment

Adding an https connector to embedded Tomcat 7

The api to add https support to an embedded tomcat server follows the corresponding server.xml elements pretty closely. For example:

       Connector httpsConnector = new Connector();
       httpsConnector.setPort(8443);
       httpsConnector.setSecure(true);
       httpsConnector.setScheme("https");
       httpsConnector.setAttribute("keyAlias", keyAlias);
       httpsConnector.setAttribute("keystorePass", password);
       httpsConnector.setAttribute("keystoreFile", keystorePath);
       httpsConnector.setAttribute("clientAuth", "false");
       httpsConnector.setAttribute("sslProtocol", "TLS");
       httpsConnector.setAttribute("SSLEnabled", true);

       Tomcat tomcat = new Tomcat();
       Service service = tomcat.getService();
       service.addConnector(httpsConnector);

To add a redirect from your http port to the https port:

       Connector defaultConnector = tomcat.getConnector();
       defaultConnector.setRedirectPort(8443);

If you only want an https port with no other port open, you can call setConnector() to make your connector the default on the tomcat object instead of adding new ones to the service.

Posted in java, software | Tagged | 6 Comments

Building Flex 4.1 projects with Maven and Flexmojos

Adobe relies on ant for its main flex build tool, but Sonatype now has a very functional Maven 3 plugin. It’s still in beta but seems stable enough, and it has the usual advantages over ant.

To try it out, make sure you have the Sonatype repository configured, and add the flexmojos plugin to your pom: org.sonatype.flexmojos:flexmojos-maven-plugin:4.0-beta-7.
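Spelled out as a plugin declaration, that would look something like this (extensions must be enabled so maven recognizes the swc/swf packaging types):

```xml
<plugin>
  <groupId>org.sonatype.flexmojos</groupId>
  <artifactId>flexmojos-maven-plugin</artifactId>
  <version>4.0-beta-7</version>
  <extensions>true</extensions>
</plugin>
```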

Since I was retrofitting old projects to use maven, I didn’t try out the flexmojos archetypes.

The default flex sdk of flexmojos seems to be 4.5. For flex 4.1, I found I had to add a dependency on the flex compiler with the version I wanted (see pom below). The sdks are in Sonatype’s repo, as are other basic libraries like flexunit. Most third-party libraries, like Swiz, for example, will need to be installed manually into your repository.

You can move over most compile settings from your ant script to corresponding maven options for compiling a swc or swf.  If your swc has namespaces used by another component, you will need to declare and include them in your pom (see below).

To run unit tests, you need to add a flex.flashPlayer.command property pointing to your debug flashplayer. To add code coverage, you supposedly just have to include the coverage tag. I’ve been unable to get it to work successfully, however. It always hangs on instrumenting the TestRunner.swf.

Here is a pom for a swc:

For the swf, you may also want to generate html wrapper files. Flexmojos does not generate them for you directly. Instead, you supply a set of files you want (swfobject.js, etc.) in a directory, use some replacement parameters, and then define the path to the files with templateURI. A line in the html like <title>${title}</title> will get the value of the title parameter defined in your pom substituted in. The docs say that the htmlName parameter is deprecated, but I found I had to add it to get the output html file named correctly.

To store all these files in a repository, it’s convenient to zip them up with the maven assembly plugin. It can then be added as a dependency to another project and unzipped if necessary, into a war, for example. Here’s a pom for a swf:

The documentation mentions a flexbuilder goal to generate Eclipse/Flashbuilder files, but it doesn’t seem to be in the current latest version. Better IDE support and working code coverage (assuming it’s not the fault of my own misconfiguration) would be great to have in future versions of flexmojos. What is in place already, though, works fine, and if you like working with maven for java, it’s worth looking at converting your flex projects over.

Posted in flex, software | Tagged , | 7 Comments

Developing for the Blackberry Playbook with Flex

In early March, I noticed RIM’s offer to develop an app for their new tablet, sportily named ‘The Playbook’, by March 31 to win a free device. Because I wanted to try out Adobe’s new 4.5 mobile SDK anyway and because I am very cheap, I decided to give it a try. The quick summary: RIM’s tools and processes are very much still in beta, but they show promise, and for the most part I enjoyed working with them.

Getting Started

Coverage of RIM’s offer and development for the Playbook in general has been pretty much dominated by one negative blog post. I’m glad the author wrote it, because he seems to have single-handedly motivated RIM to change some of their ridiculous rules, like requiring a notarized identification. But still, it’s a little strange to see that one and only blog post cited in article after article. I guess it fit well into tech writers’ favorite memes on RIM.

The author complains a lot about setting up your development environment and it is definitely a chore. It’s not one clean install process and you do fill out a lot of forms. RIM cobbled together its own sdk and configuration with an IDE from Adobe and a simulator from VMWare and it shows. There are many little steps that all have to be done to get everything up and working. On the other hand, both main pieces of the environment are solid and already have a wide base of users.


For the most part, I liked the simulator. I’ve only tried Blackberry’s old simulator for phones a few times, but it seemed almost unusable. You can see why RIM went with a real virtualization tool this time. The Playbook simulator is a lot faster and more responsive for me than Android’s virtual device. Apparently at one time you had to buy VMWare’s Fusion product, but when I went through installation, you just had to get the free VMWare Player, which I happened to have already installed.

One downside is that the connection between the IDE and the simulator is not as integrated as you’d like. You have to set up an ip address and password for it. And there are a few bugs. Once in a while during debugging, the simulator window will hold focus even though the Flash Builder window has been brought to the front. And you have to hit Ctrl-Alt to move your cursor off the simulator. Or maybe there’s a solution for that, like vmware-tools for running a Linux vm, but I didn’t see it documented.

Adobe Flash Builder is, of course, just an Eclipse plugin and has the strengths and weaknesses of Eclipse. Most importantly, it’s well-known to many developers. This version, 4.5, is in beta and that does show at times, but I found it to be pretty stable. Adobe is also hoping to build momentum for using flex to build Android apps, which I might try out next.

I don’t know how feasible it will be in reality to develop an application to run on Playbook, Android, and AIR, but at the very least, there should be a lot of opportunities to share code, as long as performance on Android is decent. The biggest downer about Flash Builder is just that it’s not free. My 60-day free subscription runs out in 26 days and I somehow doubt revenue from my app will cover the cost.


When I first started doing some test apps, I didn’t realize that the new QNX actionscript APIs were not entirely integrated with the flex mobile SDK. The docs lead you to create a mobile actionscript project, but anyone accustomed to flex development immediately misses mxml. If you create a regular Flex Mobile Project, though, it’s hard to use the QNX libraries, which provide some key functionality.

For example, RIM wants to establish that a swipe gesture from the top bezel down brings up an application menu, but that’s a QNX event and you can’t listen for it with just flex. Fortunately, the forums are quite helpful (more so than the official docs) in coming up with work-arounds for this and other problems. I would expect that RIM will continue to work on presenting a more seamless SDK for developers (they better).

I do like, though, the choice of actionscript. With its nearness to javascript, it’s a far more appropriate language for this kind of development than java. And the language itself is essentially open-source with a free-of-charge spec and open-source compilers and virtual machines available.

I found the QNX information lacking enough that I just treated my app as an AIR project then added in Playbook events, like swipe down, where necessary. I ended up with something that really feels too much like a web app. That might be inevitable when you’re developing without a real device. Not all the touch events can be simulated with the simulator so there’s a limit right now to what you can test.

The vendor application and app submission process was, like setting up the development environment, tedious but not hard. I almost didn’t get my vendor registration done in time. But I did end up getting everything submitted with two hours to spare on March 31.

So did I qualify for a free Playbook? I don’t know. I’ve gotten confirmation that I completed the first step, but not yet heard if my app has been approved for App World. I have to admit, my app seems more like the result of a learning exercise (which it was, after all) than a polished application. But then, with no device, a simulator that can’t simulate all the real gestures and actions, a  beta IDE, and minimal documentation, it’s bound to be a little rough. At least that’s what I tell myself.

Posted in flex, software | Tagged | Leave a comment

How to find a file’s image type by looking at its bytes (in Java)

Byte arrays and byte streams get used a lot in java, but it’s rare to do anything with individual bytes. Many file types start with certain bytes, though, and can be quickly identified by them.

In my case, I had a byte array, extracted from a file with the guava library’s Files.toByteArray(file), and wanted to make sure it was a jpeg before sending it down to a flex front-end. Jpegs start with the two bytes, 0xFFD8. So I printed out the first two bytes in my array and found… -1 and -40.

Java stores the byte type signed, that is, -128 to 127, rather than 0 to 255. So what do the values of -1 and -40 mean? Java uses 2’s complement for its negative numbers. Take the number -1. 1 is 0000 0001 in binary, so to calculate its negative, -1, with 2’s complement, you invert 0000 0001, getting 1111 1110, then increment by 1, yielding 1111 1111. That’s the binary value represented by a -1 java byte.

In your code, to get a positive int from a 2’s complement negative byte, you can do a bitwise & with 0xFF (1111 1111):

int unsignedByte = myByte & 0xFF;

So, if you had, say, -3, you would take the value of -3 (3 in binary is 0000 0011; invert to get 1111 1100, then increment by 1 for 1111 1101) and evaluate it with & 0xFF (in binary, 1111 1111). The bitwise & operator evaluates the bits like:

  1111 1101
& 1111 1111
  ---------
  1111 1101

The java byte value -3 equals 1111 1101, or 253 in decimal. If you try to simply cast your byte to an int

int wrongByte = (int)b;

it will just convert it straight from a -3 byte to a -3 int.

Going back to the bytes I needed, a -1 java byte equals 1111 1111, or 255 in decimal. For the second byte of my file, I got -40, which is 1101 1000, or 216. The simplest, laziest way to convert an int to a hex string in java is probably Integer.toHexString, which gives us 0xFF and 0xD8 for 255 and 216. So my test file was, in fact, a jpeg.
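Putting this together, here’s a small sketch of the whole check. It reads the file with plain java.nio rather than guava, and the class name is my own:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ImageTypeSniffer {

    // JPEG files begin with the two-byte marker 0xFF 0xD8.
    // Java bytes are signed, so mask with 0xFF before comparing.
    public static boolean isJpeg(byte[] bytes) {
        return bytes.length >= 2
                && (bytes[0] & 0xFF) == 0xFF   // -1 as a signed byte
                && (bytes[1] & 0xFF) == 0xD8;  // -40 as a signed byte
    }

    public static void main(String[] args) throws IOException {
        byte[] jpegHeader = { -1, -40, -1 };   // what Files.toByteArray hands back for a jpeg
        System.out.println(isJpeg(jpegHeader));    // true

        if (args.length > 0) {
            byte[] fileBytes = Files.readAllBytes(Paths.get(args[0]));
            System.out.println(isJpeg(fileBytes) ? "jpeg" : "not a jpeg");
        }
    }
}
```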

Posted in java, software | Tagged | 6 Comments

Tomcat and Servlet 3.0 Web Configuration

Tomcat 7 is now in production, and one of its new features is support for Servlet 3.0. Servlet 3.0 is…well, hard to remember what’s in there. But actually, it does have some ease of deployment features that open up the boring old world of web.xml, and they might even be useful in some cases.

For java libraries and frameworks these days, if you don’t have annotations, the sun does not shine down upon you. Annotations get used for just about everything, and Servlet 3.0 lets you use them to configure your web components (ie, servlet stuff) instead of putting them into web.xml. In fact, if you create a new dynamic web project in Eclipse, it will not even create a web.xml file. Use the New… wizard to create a servlet and it will go ahead and add the @WebServlet annotation for you.

@WebServlet("/MainServlet")
public class MainServlet extends HttpServlet

So now you can configure your servlets in annotations…is it a good idea? Keep in mind, the annotations are only skin-deep. You still have the same rules about what interfaces need to be implemented (HttpServlet, ServletContextListener, etc.) and what methods overridden. If you have much to configure, you’ll end up with something like this…

@WebServlet(
  urlPatterns = { "/AnotherTest" },
  initParams = {
    @WebInitParam(name = "testparm1", value = "test1"),
    @WebInitParam(name = "testparm2", value = "test2")
  })
public class AnotherTest extends HttpServlet {

So, pick your poison, annotations or xml. The annotations can get hideous, but for simple cases, they do reduce some clutter. If you aren’t going to use annotations, you probably want to set metadata-complete to true in your web.xml (<web-app metadata-complete="true"…) in order to avoid the time during start-up spent scanning classes for annotations.

Another enhancement in the Servlet 3.0 spec is the ability to define web-fragments separately from the main web.xml. You define a web-fragment.xml, normally in the META-INF folder. It looks mostly the same as the web.xml except that its root element is web-fragment and it has a different xml schema. In Eclipse you can create a Web Fragment Project that will generate the fragment file for you and also add your new project as a dependency to your main web project.
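A skeletal web-fragment.xml might look like this (the fragment name, servlet class, and url pattern are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-fragment xmlns="http://java.sun.com/xml/ns/javaee"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                  http://java.sun.com/xml/ns/javaee/web-fragment_3_0.xsd"
              version="3.0">
  <name>myFragment</name>
  <servlet>
    <servlet-name>FragmentServlet</servlet-name>
    <servlet-class>com.example.FragmentServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>FragmentServlet</servlet-name>
    <url-pattern>/fragment</url-pattern>
  </servlet-mapping>
</web-fragment>
```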

Web-fragments give you some flexibility and make your project components more modular. They’ll make it a little easier and tidier to use frameworks as they start getting added.

One note: if you are as lazy as I am and use Eclipse to create your new Servlet 3 project then copy over a web.xml file from an older project (remember, it won’t create one for you if you’ve selected Servlet 3), be sure to change the xsd and version from 2.5 to 3.0. Otherwise, the web-fragments you add in dependent projects will not work at all and you will feel like a dope when it occurs to you why.

Posted in java, software | Tagged | 3 Comments

How to create QR Codes in Java

Bar codes aren’t just for cereal boxes any more. 2-D barcodes like QR Codes are easily read by smart phones and are showing up everywhere from magazines to restaurant fronts to business cards. With some of the open-source libraries out there, it’s not hard to create your own QR Codes with java.

To generate a quick code for standard content, there are several online generators.

Urls, contacts, calendar events, even wifi network info are getting put into QR Codes. But you don’t have to stick to standard content or to provided generators. So how would you encode ‘hello world’ in a QR Code?

The good folks working on ZXing provide many tools for working with bar codes in java. The project does not seem to be in maven repositories, so to get started, download from their site. I didn’t notice any built jars in there, but it’s easy to build from the source with ant or maven (build.xml and pom.xml files are provided).

Build the ‘core’ and ‘javase’ projects and put the jars into your classpath. Then create a string, use the zxing writer to encode it in a matrix, and save it to a png file:
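A sketch of that encoding step, using zxing’s QRCodeWriter from the core jar and the MatrixToImageWriter helper from the javase jar (the output file name and dimensions are my own choices):

```java
import java.io.File;
import java.util.EnumMap;
import java.util.Map;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.EncodeHintType;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

public class QrCodeDemo {
    public static void main(String[] args) throws Exception {
        String text = "hello world";

        // Many barcode readers only handle ISO-8859-1, so encode with that charset.
        Map<EncodeHintType, Object> hints =
                new EnumMap<EncodeHintType, Object>(EncodeHintType.class);
        hints.put(EncodeHintType.CHARACTER_SET, "ISO-8859-1");

        // Encode the string into a bit matrix...
        QRCodeWriter writer = new QRCodeWriter();
        BitMatrix matrix = writer.encode(text, BarcodeFormat.QR_CODE, 200, 200, hints);

        // ...and write the matrix out as a png.
        MatrixToImageWriter.writeToFile(matrix, "png", new File("qrcode.png"));
    }
}
```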

Note that the string is encoded in ISO-8859-1. A lot of barcodes don’t support unicode. The resulting QR Code png file is:

Theoretically, thousands of characters can be encoded in QR Codes, but at least one source recommended not going above 800 characters or so. Here’s some text of about 700 characters:
Why would you encode a big blob of text in a barcode? Well, you probably wouldn’t. But there are plenty of useful scenarios for QR Codes and with more and more people buying barcode readers, aka smart phones, they’re only going to keep growing.

Posted in java, software | Tagged | 16 Comments