Book Review: Dealers of Lightning

Michael Hiltzik wrote his account of Xerox PARC (Palo Alto Research Center) years ago, but Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age never seemed to get as much attention as it should have, and it’s worth reminding folks of every so often. Hiltzik not only describes the technological accomplishments of PARC, but also narrates the business history of how Xerox had the vision to found PARC, but lacked the ability to take advantage of all the brilliant innovations its engineers created.

Consider some of the technical achievements to come out of Xerox PARC:

  • First personal computer, the Alto; famously shown to Steve Jobs (along with other innovations).
  • Graphical windowing system, the forerunner of Mac and Windows.
  • Laser printer; maybe the only invention that Xerox capitalized on.
  • Ethernet; one of its creators, Bob Metcalfe, later founded 3Com.
  • Wysiwyg word processor; one of its creators, Charles Simonyi, left for Microsoft and started MS Word.
  • Smalltalk programming language.
  • Superpaint, one of the first computer paint programs; one of its creators, Alvy Ray Smith, went on to help found Lucasfilm’s computer division, then co-founded Pixar.
  • Advances in integrated circuit design, which Jim Clark later used in founding Silicon Graphics.
  • A page description language to handle wysiwyg printing, which John Warnock and Chuck Geschke later used as the basis for Postscript when they started their company, Adobe.

Even this many years later, it must hurt to be Xerox. But the book doesn’t settle for mocking Xerox. It gives many great entertaining details about how a traditional, monolithic, slow-moving company simply could not deal with the revolutionary stuff coming out of its own research center.

PARC, after all, helped define programmer culture as individualistic, fun-loving, profane, and yes, even cool, getting mainstream exposure through a famous Rolling Stone article that enraged Xerox executives. The article presented PARC’s Alan Kay as the archetypal rebellious hacker. Later, the Jeff Bridges character in Tron would be based on him (Kay’s wife, Bonnie MacBird, co-wrote the screenplay).

PARC even had beanbag chairs in its office, which for Xerox and for most businesses, was downright radical. One Dallas exec visited PARC and grumbled, “I went out there and I sat in their beanbags, but I just couldn’t get anything out of them. They were only interested in their own thing.” That exec later helped shoot down the idea of bringing the Alto to market (three years before IBM’s PC) and instead threw his weight behind another electronic word processor as Xerox’s flagship product.

It’s easy to ridicule Xerox, but Dealers of Lightning avoids simplistic explanations and shows the tremendous cultural and business difficulties they faced. How many companies even today are capable of totally remaking themselves? Xerox was used to getting paid by the copy. By the copy! How hard was it for them to conceive of a business model for technology that let customers look at a page on a screen and then fax it without ever printing a thing? Hiltzik presents Xerox’s failings in excruciating detail but refuses to indulge in easy hindsight second-guessing.

The common thread running through most of PARC’s accomplishments is the vision of interactive computing. The most renowned manager at PARC, Bob Taylor, gathered together like-minded computer scientists interested in a new model of computing, different from the time-shared mainframes that were the standard of the day. Taylor is credited with fostering, at both ARPA and PARC, many of the ideas behind the internet and personal computing. He wasn’t even a computer scientist. His only advanced degree was in experimental psychology and he never programmed himself. Some of the most interesting stories in Dealers of Lightning are about Taylor and his management ability. And his inability to be managed. While passionately supported by the engineers he managed, Taylor was himself a colossal headache to his managers.

Just one more story from the book…in Alan Kay’s job interview, he was asked what his greatest achievement at PARC would be. He answered, “It’ll be the personal computer.” “What’s that?” his future boss asked. Kay sketched out the design of a small, hinged, clamshell-shaped computer with a keyboard on the bottom half and a screen on the top, what we recognize now as a laptop. This is 1970, kids! Computers in 1970 filled entire rooms.

When the Mac first came out, Kay commented that it was the first computer worth criticizing. When the iPhone was unveiled, Jobs asked Kay if the iPhone was worth criticizing. He replied, “Make the screen five inches by eight inches, and you’ll rule the world.” That last bit is later than what appears in Dealers of Lightning, but if you want to understand today’s technology, reading the book is a good place to start.

Posted in ruminations, software | 3 Comments

Introduction to Amazon’s SimpleDB

Amazon’s SimpleDB is a NoSql datastore with a whole lot of no: no sql, no datatypes (except utf-8 strings), no transactions, no joins, no indexes, no schema, no administration, and no cost for minimal usage. But when you google it, you find Amazon’s docs, a lot of bold predictions about it from 2007 and 2008… and not much else. SimpleDB seems like an interesting solution in search of a problem, but its ease of use and lack of administration effort make it worth at least checking out.

SimpleDB (and Amazon’s other web services offerings) can be accessed through a number of different APIs. For java, they have an eclipse plugin; when you create a new aws project, it gives you an option screen for picking which services to include.

If you want a quick example of how to interact with SimpleDB through java, check the Amazon SimpleDb Sample and you’ll get a runnable class that includes basic functions.

SimpleDB has ‘domains’ that are roughly like tables in a relational model, and ‘items’ within each domain, which are kind of like rows. Domains can’t be joined or related together at the database level. Each item has ‘attributes’, which are key/value pairs, but an item can have multiple attributes with the same key. For example, this is fine (using the java sdk):

ReplaceableItem item = new ReplaceableItem("The Java Programming Language")
    .withAttributes(
        new ReplaceableAttribute("category", "technical", true),
        new ReplaceableAttribute("title", "The Java Programming Language", true),
        new ReplaceableAttribute("price", "15", true),
        new ReplaceableAttribute("author", "Ken Arnold", true),
        new ReplaceableAttribute("author", "James Gosling", true),
        new ReplaceableAttribute("author", "David Holmes", true));

The ReplaceableItem constructor takes a name or key. Attribute ‘keys’ are not unique for a given item- only the combination of key and value has to be unique. That produces an odd situation where this query matches our item:

select * from `myDomain` where author = 'James Gosling'

So does this:

select * from `myDomain` where author = 'Ken Arnold'

But this matches nothing:

select * from `myDomain` where author = 'Ken Arnold' and author = 'James Gosling'

To select items with both these authors, you need:

select * from `myDomain` where author = 'Ken Arnold' intersection author = 'James Gosling'

To select items with only the given author, it’s

select * from `myDomain` where every(author) in ('Ken Arnold')

It’s not difficult, but it might have been clearer if they hadn’t used sql-like syntax. There must be a nicer way to define a key that has a list for its value.

You can compare and sort, but remember that everything is a string. Amazon has suggestions for dealing with numbers and dates, but they will not impress you terribly. Think of the tedious things you have to do in a regular relational database when you are stuck storing a number or date as a string for some reason.
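For numbers, Amazon’s suggestion boils down to zero-padding (plus an offset if negative values are possible) so that string order lines up with numeric order. A plain-java sketch of the idea — the width and offset here are assumptions you’d tune to your own data, not anything from the SimpleDB api:

```java
public class SimpleDBNumbers {
    static final int WIDTH = 10;   // assumed max digits for our data
    static final long OFFSET = 0;  // add an offset here if negatives are possible

    // Zero-pad so that lexicographic (string) order matches numeric order.
    static String encode(long value) {
        return String.format("%0" + WIDTH + "d", value + OFFSET);
    }

    static long decode(String stored) {
        return Long.parseLong(stored) - OFFSET;
    }

    public static void main(String[] args) {
        // Unpadded, "15" sorts after "120" as a string; padded, order is numeric again.
        System.out.println("15".compareTo("120") > 0);              // true
        System.out.println(encode(15).compareTo(encode(120)) < 0);  // true
        System.out.println(encode(15));                             // 0000000015
    }
}
```

Dates get the same treatment with a sortable format like ISO 8601 strings.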

You can do a count(), but Amazon has this note: ‘If the count request takes more than five seconds, Amazon SimpleDB returns the number of items that it could count and a next token to return additional results. The client is responsible for accumulating the partial counts.’ It’s a good reminder of the fact that you’re getting your data through a web service and need to plan accordingly. Likewise, there are also some limits you need to consider when you’re scoping out your requirements.
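The accumulation loop itself is simple. Here’s a plain-java sketch of the pattern, with a made-up PagedCounter interface standing in for the SDK’s select call and its next-token handling (the real request/response types are in the aws sdk):

```java
import java.util.AbstractMap;
import java.util.Map;

public class CountAccumulator {
    // Stand-in for a SimpleDB "select count(*) ..." call: returns a partial
    // count plus a next token, or a null token when there are no more pages.
    public interface PagedCounter {
        Map.Entry<Long, String> count(String nextToken);
    }

    // The client keeps adding partial counts until no next token comes back.
    public static long totalCount(PagedCounter source) {
        long total = 0;
        String token = null;
        do {
            Map.Entry<Long, String> page = source.count(token);
            total += page.getKey();
            token = page.getValue();
        } while (token != null);
        return total;
    }

    public static void main(String[] args) {
        PagedCounter fake = new PagedCounter() {
            private int calls = 0;
            public Map.Entry<Long, String> count(String token) {
                calls++;
                return calls == 1
                        ? new AbstractMap.SimpleEntry<Long, String>(5L, "token-1")
                        : new AbstractMap.SimpleEntry<Long, String>(3L, null);
            }
        };
        System.out.println(totalCount(fake)); // prints 8
    }
}
```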

To insert, you perform a put-attributes operation; a batch version for multiple items looks like this in the java sdk:

sdb.batchPutAttributes(new BatchPutAttributesRequest(myDomain, listOfReplaceableAttributes));

Each ReplaceableAttribute can be defined with a boolean replace flag. If you have an item with an existing key value pair of ‘category’/’technical’ and do a put operation with a new pair with the same key but a different value, say ‘category’/’programming’, it will replace the old pair if replace is true, or add an additional pair if false. Attribute keys, again, are not unique.
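To make the replace semantics concrete, here’s a plain-java model — not the sdk, just an illustration of how the flag behaves:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of one item's attributes and the replace flag.
public class AttributeModel {
    private final Map<String, Set<String>> attrs = new HashMap<>();

    // replace=true wipes out any existing values for the key;
    // replace=false adds another value alongside them.
    public void put(String key, String value, boolean replace) {
        if (replace) {
            Set<String> fresh = new LinkedHashSet<>();
            fresh.add(value);
            attrs.put(key, fresh);
        } else {
            attrs.computeIfAbsent(key, k -> new LinkedHashSet<>()).add(value);
        }
    }

    public Set<String> get(String key) {
        return attrs.getOrDefault(key, Set.of());
    }

    public static void main(String[] args) {
        AttributeModel item = new AttributeModel();
        item.put("category", "technical", true);
        item.put("category", "programming", false); // adds a second value
        System.out.println(item.get("category"));   // [technical, programming]
        item.put("category", "programming", true);  // replaces both
        System.out.println(item.get("category"));   // [programming]
    }
}
```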

Amazon helped pioneer the idea of eventual consistency and it’s still the default in SimpleDB, but they brought out the option of immediate consistency and conditional puts in 2010. They don’t seem to be available directly in the java sdk, but they are in the web service apis.

There are also some third-party libraries you can try out, like typica and SimpleJPA (which tries to wrangle SimpleDB into a jpa implementation), and even a simpledb-jdbc library. Overall, though, you wouldn’t want to treat SimpleDB like a normal database that you can access behind a typical java interface. SimpleDB won’t replace most folks’ databases, but it could still be right for any number of different situations.

Posted in cloud, java, software | Tagged | 7 Comments

Asynchronous testing with events in Flexunit

Testing of actionscript code is not very useful if your tests can’t wait for and react to events. Flexunit is the main unit testing framework for Flex and it has some nice tools for setting up asynchronous tests.

The code I’m testing for this example comes from the Swiz Cafe Townsend project. Swiz has its own unit testing framework to go along with flexunit, but I’m not going to use it here. It does not appear to have been updated to run with the latest version of flexunit (though calling constructSwizContext() in your own test class’s [Before]-annotated method fixes the main issue).

More importantly, though, using a dependency injection framework like Swiz in your unit tests means having to set up all the relevant dependencies and tying your tests to the framework. You’re testing not just your target methods but also your Swiz configuration. That’s great for integration tests, but here I want my unit tests to just test the methods’ functionality. Therefore I’ll manually create the objects that Swiz injects in the production system.

The code under test is the updateEmployee() method in EmployeeDetailPresentationModel, which basically dispatches an update event for an employee. I’m testing both for the event dispatch and for correct employee values. Async.handleEvent can be used to listen for the event and to provide a result method that will be called when the event is registered.

Here is the test:
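Roughly, as a sketch rather than the exact code (the names — model, employee, verifyUpdate, dispatchingEventNeverOccurred — follow the description below):

```actionscript
[Test(async, description="updateEmployee dispatches an update event")]
public function updateEmployee():void {
    // Listen for the update event, with a 500 ms timeout and the employee
    // passed through to the result handler for assertions.
    Async.handleEvent(this, model.dispatcher, EmployeeEvent.UPDATE,
        verifyUpdate, 500, employee, dispatchingEventNeverOccurred);

    // Kick things off.
    model.updateEmployee(employee);
}
```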

I’m adding this test to com.cafetownsend.presentation.test.EmployeeDetailPresentationModelTest in Cafe Townsend, and making use of its existing setup. Before each test, it creates a basic employee, defines a presentation model, and importantly, creates a new dispatcher for the model. When the application actually runs, Swiz will inject that dispatcher, but for a test we can just create a plain old EventDispatcher.

In handleEvent, we pass in the test itself, the dispatcher, the event we’re listening for (EmployeeEvent.UPDATE), the method to call when the event is detected (verifyUpdate), the max time we’ll wait for the event (500 ms), the ‘passThroughObject’ (employee), and the method to call if the time-out is reached (dispatchingEventNeverOccurred). The call to updateEmployee() is what all the setup has been for and what kicks things off.

The handleEvent parameters are pretty straightforward except for the passThroughObject. This is an object containing data from the test that you can pass through to your result handler (verifyUpdate) in order to make assertions. It’s easier when you see verifyUpdate:
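A sketch of what that handler looks like (the Employee field names are assumptions based on Cafe Townsend):

```actionscript
// Flexunit passes the event plus the passThroughObject from handleEvent.
private function verifyUpdate(event:EmployeeEvent, passThroughData:Object):void {
    var expected:Employee = passThroughData as Employee;
    Assert.assertEquals(expected.firstName, model.tempEmployee.firstName);
    Assert.assertEquals(expected.lastName, model.tempEmployee.lastName);
}
```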

The employee object we created in the first part of our test is passed through as the second argument to verifyUpdate, and we can use it to check whether expected changes took place somewhere else in the code, in this case, the model’s tempEmployee. The passThroughObject is just a convenient way to preserve information from the beginning of the test for assertions at the end.

Looking a little more at Swiz Cafe Townsend’s EmployeeDetailPresentationModelTest, there are already tests like dispatchUpdateEvent, which makes sure that when updateEmployee() is called, the appropriate event is dispatched. If all you need to test is that an event is dispatched, without checking any data, all you need is Async.proceedOnEvent() with a timeoutHandler that fails the test if it actually gets called, ie. if the test times out without registering the expected event.

Another useful trick in Flexunit is using the asyncResponder to add a result handler to a service call, but that will have to wait until later.

Posted in flex, software | Tagged , | 1 Comment

The Incredibles revisited, or how we like technology

Watching The Incredibles again six years after it came out, I enjoyed it just as much as the first time, and I again admired the way it played off typical American themes, but this time another thought stood out. The film is in love with high tech tools and vehicles and gadgetry, but the heroes are passive users of technology while the villain is a brilliant engineer.

Maybe that’s just a coincidence. Villains have to be distinctive in some way, and Buddy’s genius is just his. But look at the beginning of the movie, when Buddy is desperately trying to get Mr. Incredible’s approval. He’s been pleading for acceptance and finally blurts out about his rocket boots, ‘I invented these. I can fly!’ Shouldn’t Mr. Incredible’s response (and the audience’s) have been something like  ‘Oh my God, that’s awesome!’ or ‘How could a kid who looks like he’s in junior high possibly do something like that!’? Instead, it’s portrayed as part of Buddy’s pathetic attempt to be someone he’s not, a superhero.

Later, Syndrome vows to share his technology with the world so that everyone will be super. Cue ominous music! But wait, what’s wrong with that? He really should have said, ‘no one will be super just because they’re born with gifts others don’t have.’ He’s going to level the playing field. There’s a sense throughout the movie that Buddy is cheating by using technology. “Not every superhero has powers,” he claims early on, but in the movie, that sentiment just appears uppity.

This isn’t new, it’s your basic Lex Luthor vs. Superman conflict. It’s evil smart guy vs. noble strong guy. How many A-list superheroes are there who rely on their technological prowess? Okay, Iron Man, but he’s the Ayn Randian superhero. A glamorous playboy born into great wealth, he’s a long way from pimply, geeky Buddy. Buddy has only his own engineering genius, and he uses it to get the good life- hot babe, tropical island, all kinds of toys. He’s like a cartoon John Carmack (no offense to John).

In the geek classic Cryptonomicon, Neal Stephenson argues that you can look at a society’s trickster gods, or rather, at the clever characters in their stories, to gauge their attitude towards technology. How a society reacts to and portrays technology, in turn, reveals the principles of its civilization. The marauding Vikings had Loki, clever and evil enemy of the gods; the Greeks had Athena, embodiment of wisdom and justice combined with strength. Technology is not just a path to power, Stephenson suggests, it usually goes along with freedom of thought and humane values.

Cryptonomicon creates heroes out of geeky technologists. It exults in the role of scientists and engineers in helping to defeat Nazi Germany. The Nazis, on the other hand, are weakened because their totalitarianism and prejudice leads them to purge the universities and dismiss the ‘Jewish mathematics’ of Einstein and others. Free societies produce better technology while less open societies tend to be fearful of it.

The Incredibles, like American society, is ambivalent. Tech stuff is cool, but the engineers that make it don’t seem to get much credit. Worse, there’s a fear that they’re doing something sinister with their arcane knowledge. Facebook is immensely popular throughout American society, yet it strikes a chord when The Social Network portrays its creator as a socially awkward misfit we really shouldn’t trust.

All this is not to say that Buddy isn’t evil. He’s a murderer and his genius doesn’t change that. But then, this is the story of Mr. Incredible, not Syndrome; of Goliath, not David; of Achilles, not Odysseus. In this story, innate strength equals innate goodness. Brainy attempts to use tools or technology to make up for physical shortcomings are downright dishonorable.

But if the Wicked Witch of the West’s reputation can be resuscitated; if the story of Grendel, the original villain of the English language, can be retold; if Milton can make even Satan a sympathetic character, surely poor little Buddy deserves to have his say. His moral failings can’t simply be rationalized away, but let’s at least hear the geek’s side of things. After all, not many of us are born with the physical talents of the Incredibles- we’re better off trying to figure out how to make those rocket boots.

Posted in ruminations | Leave a comment

Java Code Coverage: Cobertura vs. Emma vs Clover

The last time I looked for free java code coverage tools, I found EclEmma for Eclipse and Cobertura for automated builds through ant or maven. They’re both good tools, but they come up with slightly different numbers. Maybe I just missed it before, but there’s now an Eclipse plugin for Cobertura and a maven plugin for Emma. Both have Hudson and even Sonar plugins, so I thought I’d spend a little more time comparing them. Just for kicks, I also looked at Atlassian’s Clover even though it’s fairly expensive (but free for open-source projects).

Units of coverage

Cobertura focuses on lines and branches (ie. if and case statements); EclEmma concentrates on bytecode instructions, though you can also get line metrics. Emma has a good write-up of their methodology, but I didn’t see as much for Cobertura. Clover looks at statements, branches, and methods to come up with a total score. All these approaches result in fairly similar coverage results (unless your code does things like put multiple statements on the same line separated by semi-colons) but they’ll almost always be a little different.

Eclipse plugin

All the tools will highlight covered and uncovered code in Eclipse for you. The Emma and Clover plugins have a nice graphic display and let you drill down.



Cobertura’s display in Eclipse is a bit weak, no drill-down into classes and no visual graph of coverage:

Full report

Probably more important than the Eclipse view are the reports that each tool can create, eg. in an automated build. I didn’t take much time to explore different configuration options here, just took the defaults for each.

Emma’s report actually has less visual appeal than its Eclipse plugin. If you invest some time, you can probably make it look better but this is what you get out of the box:

Cobertura, on the other hand, has some nice graphics. If you just configure report output as xml, and turn on the Hudson plugin, it looks like:

And here’s Clover:

The Emma and Cobertura reports include metrics on classes, lines, and methods. Cobertura also has conditional statements and files while Emma includes blocks. Conditionals could be interesting, I guess, if you notice a big coverage gap. Clover’s ‘elements’ appears to be lines and methods. Cobertura has nice drill-downs and Clover has not just drill-downs but tree map and word cloud charts as well.

Aside from graphics, I also looked a little closer at how some specific code elements are handled.

Static nested classes

The code under test for the result page below has an internal builder class. If you’ve never noticed, the compiler creates an extra $1 class for you behind the scenes. Emma reports the SampleBuilder$1 class as uncovered:

So somehow you have 100% coverage of blocks, lines, and methods, but only 67% coverage of classes… Cobertura excludes SampleBuilder$1 in its class coverage to give you 100%, but if you drill down, you see the class with N/A in its line and method coverage columns, which seems like a good solution. Clover does not show the $1 class at all, which is also fine.


Enums

Emma dings you if your tests don’t explicitly test the built-in enum valueOf() and values() methods- they show up as 0% coverage. As with the inner classes, Cobertura will show those methods but mark them as N/A and not include them in your method coverage percentage. Clover will not show them at all.


Annotations

Cobertura will show you an annotation (an @interface that you’ve created) with N/A. It won’t appear at all in Emma. It also won’t appear in Clover, but curiously, I did notice it shows up in a word cloud chart of top project risks.

Default no-arg constructor

If your class doesn’t define its own constructor, but a test calls the implicit no-arg constructor, does it count in the coverage? For Emma, yes. It lists the constructor as classname(), eg. Sample() and gives you credit for covering one method and one line. Cobertura will show the constructor as <init>() but give it an N/A for coverage. Clover won’t show an undeclared default constructor at all.


These are all good tools. In Eclipse, I’m mostly concerned with finding untested lines and getting a general sense of how much code is covered so I think I’ll keep using EclEmma. For a full report, Cobertura has a nicer look and fewer coverage oddities to note. Also, even though new plugins have come out recently for Emma, it doesn’t appear that the main project is still being actively developed. As far as speed goes, we try to keep our unit tests very fast anyway, so it was not a big concern and I didn’t bother investigating it.

Clover looks good visually, has lots of options, and I like how it handles results. If money were no object, I’d definitely look more closely at it, but for our situation, the free tools work just fine.

Posted in java, software | Tagged , | 10 Comments

Google’s guava java: the easy parts

Google’s guava is one of the best, most useful all-purpose java libraries to come out in a long time. It’s surprising that it isn’t standard in most projects now. Maybe there’s too much in there. If the functional and concurrency packages seem like too big an investment of time to understand and use in your code, just focus first on the easy parts. Think of guava as the next generation of apache commons utilities, with lots more stuff if and when you want to explore it.

Here are some of my favorite simple, yet handy routines in guava:

List<String> lines = Files.readLines(file, charset);

This is the same as FileUtils.readLines() in apache commons io, but updated to use generics. Files doesn’t quite have all the things that FileUtils has, but there are many other helpful methods like deleteRecursively(), createParentDirs(), and move().

boolean isEmpty = Strings.isNullOrEmpty(yourString);

You’ve probably written something like this already, perhaps even several versions scattered in different projects. Why not just use one from an established api? You might also find emptyToNull() or repeat() useful while you’re looking around the Strings class.

Preconditions.checkArgument(expression);

Preconditions provides convenient ways to test arguments and throw illegal argument exceptions when necessary. There’s also checkNotNull(), checkState(), etc. If your code validates arguments haphazardly, Preconditions can be a standard, concise way to deal with them.

Map<String, Date> sampleMap = Maps.newHashMap();

Try creating your generic collections without having to duplicate the generic types and you won’t want to go back. Joshua Bloch helped design guava’s collections package so it might as well be part of the official java collections api now.

String relativeFilePath = Joiner.on("/").join(filePieceArray);

Joiner feels so natural to use. It’s quite handy for making file paths, comma-separated string lists, etc. without worrying about that trailing separator. Guava’s splitter also has nice advantages over String.split().

boolean targetIsInArray = Ints.contains(array, target);

Not earth-shattering, but again, very convenient. Ints (and Doubles, Floats, etc.) also has concat(), lastIndexOf(), and other methods that let you deal with arrays as a whole without manually iterating through them to find what you want.

The guava api is just fun to browse. Its methods are intuitive to use and understand, and once you’ve got guava on your classpath, they might also be gateway drugs to guava’s more advanced features. But you can just stick with the basics if you want. They’re worth it on their own.

Posted in java, software | Tagged | 11 Comments

How to set up a flex unit test with ant and Hudson

I’ve done a lot of unit testing in java before, but none in actionscript and not really much on front-end code in general, so I thought I’d give it a try. Before I tried anything too tricky, I wanted to focus first on defining a simple test, running it through the IDE, and then automating it on a build server.

I’m working with Flex 4.1, Windows, and the FlashBuilder Eclipse plug-in. For automated builds, I’m using ant and Hudson. By default, FlashBuilder will include FlexUnit 4 libraries for you to use; the bundled version is not too old, but the FlexUnit site has a version 4.1 release candidate if you want the latest and greatest.

My class for this example is Sample, and it has one wee method-

package com.copperykeenclaws {
  public class Sample {
    public static function add(a:int, b:int):int {
      return a + b;
    }
  }
}
Now, set up a place for your tests. You can put your tests wherever you want, but I tend to follow Maven conventions and put source code in src/main/flex and test code in src/test/flex. Create your test folder and add it to your flex build path so that you can run tests in your IDE. Create your test by clicking New->Test Case Class, then enter the name of your class (here, it’s SampleTestCase). Check Select class and browse to your class to test (Sample). Now add a test-

package com.copperykeenclaws {
  import org.flexunit.Assert;

  public class SampleTestCase {
    [Test(description="Test the add method")]
    public function add():void {
      var a:int = 1;
      var b:int = 2;
      var result:int = Sample.add(a, b);

      Assert.assertEquals(a + b, result);
    }
  }
}
At this point, you can right-click your project and select ‘Execute FlexUnit Tests’. Your tests will be run and the results displayed in the FlexUnit Results view. Take a look in your main source code folder (here, it’s src/main/flex) and refresh if necessary. There are two new files- FlexUnitApplication.mxml and FlexUnitCompilerApplication.mxml. Open up the first- it’s a test runner that was created for you behind the scenes.

Those are the basics for running FlexUnit from your IDE. Now, for an ant script. For this, I’m using the latest FlexUnit. Move the flexunit swcs and the flexUnitTasks jar into the libs folder. You’ll also need the regular flexTasks.jar if you don’t have it already.

A test suite can help manage your tests, though I don’t think it’s strictly necessary. Create one by selecting New->FlexUnitSuite, naming it ‘SampleTestSuite’, and selecting the test you made.

You need a testrunner like the one Eclipse created. Here’s a simple one, saved at src/main/test/FlexUnitApplication.mxml:
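Something like the following, a sketch using FlexUnit 4’s FlexUnitCore and CIListener (adjust the suite name to yours):

```xml
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               creationComplete="runTests()">
    <fx:Script>
        <![CDATA[
            import org.flexunit.listeners.CIListener;
            import org.flexunit.runner.FlexUnitCore;

            private function runTests():void {
                var core:FlexUnitCore = new FlexUnitCore();
                // CIListener reports results back over a socket to the ant task
                core.addListener(new CIListener());
                core.run(SampleTestSuite);
            }
        ]]>
    </fx:Script>
</s:Application>
```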

I’m not going to cover using ant for flex in general; there are good articles elsewhere on ant and FlexUnit. Here, I have a stripped-down build script for this example. It builds the FlexUnitApplication.mxml into a swf, then passes it as an argument to the flexunit task. I’m going to output test results to target/reports.
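The heart of it is the flexunit task. A sketch of that target — the target name and paths are assumptions for this project layout:

```xml
<target name="test" depends="compile-test-runner">
    <!-- launches the swf in the standalone player and collects results -->
    <flexunit swf="${build.dir}/FlexUnitApplication.swf"
              toDir="${basedir}/target/reports"
              haltonfailure="false"
              verbose="true"/>
</target>
```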

Try running it in your IDE and look for output in the reports folder- there should be an xml results file there. Assuming your project is already set up in Hudson to run an ant build script, the only configuration you have to do there is to check ‘Publish JUnit test result report’ and enter the path to your results xml files: ‘target/reports/*.xml’. Hudson will find them and set up the usual good unit test stuff- a Test Result Trend graph, a link to Test Results, etc.

If you see this error in your hudson console output:

[flexunit] Starting server …
[flexunit] Opening server socket on port [1024].
[flexunit] Waiting for client connection …
[flexunit] Stopping server …
[flexunit] End of test data reached, sending acknowledgement to player …
[flexunit] Closing client connection …
[flexunit] Closing server on port [1024] …

You need to download the standalone FlashPlayer (also called the ‘Projector’ version) on your build server. It needs to be associated with running swf files (remember, we’re on windows), which should happen when you install it.

That’s it. Of course, FlexUnit’s most interesting capabilities involve asynchronous tests, and code coverage would be nice, but those will have to be later posts.

Posted in flex, software | Tagged | 4 Comments

What is the Cognos SDK?

The Cognos SDK occupies a strange position in the Cognos development ecosystem. It gets thrown around as a solution to a number of problems, but a lot of people who work all the time in Report Studio or Framework Manager really don’t know what it is. People working with Cognos usually have a database or business analysis background, not programming, and often are not familiar with the term ‘SDK‘. A lot of the materials on the SDK immediately dive into details without a big-picture overview, so here’s my take on what it is and why you might be interested in it.

‘SDK’ stands for software development kit. It’s a set of code libraries, ie. jars or dlls. The SDK comes in three different programming languages: java, C#, and visual basic. It’s not a studio or plugin or executable program. It’s nothing that you’d use inside the main Cognos applications. It’s code that a programmer can use to write a script or application that talks to a Cognos server.

In general, it’s for administrative purposes rather than writing reports or queries. A few different uses:

  • Write your own custom security provider.
  • Integrate Cognos into your own application.
  • Script out administrative tasks.
  • Programmatically do just about anything you can do manually in Cognos Administration.

You can automate backups, installs, permission grants, or whatever. If you’re an OEM provider of Cognos, using the SDK is essential. It will allow you to seamlessly integrate a huge amount of Cognos functionality into your own application. Your users may not even know or care that you’re relying on Cognos for your reports.

How to get started

Get a license (which is expensive, unfortunately), then download the installer from the usual IBM Cognos site. By default, it will be installed into an existing Cognos installation. Under the sdk folder, you’ll see the different versions. I’ve only used the java one and that’s what I’m going to talk about, but the csharp and vb versions should be equivalent in most ways.

A few important things to look at…there is a ‘lib’ folder with jars, which contain the SDK code. You’ll need to include these jars in your app. A lot of small sample applications like ViewCMReports, ReportDelete, and AuthenticationProvider are included. There are also some useful folders containing shared code, like Common and Security. There is a ‘doc’ folder that contains javadoc. This is mostly worthless. The user guide has much better documentation. It’s outside of the sdk folder, in webcontent\documentation\en\dg_sdk.html. It has good write-ups on how to set up your environment, how to run samples, what you can do, etc.

When you’re ready to start coding, look at a sample for guidance and build off that. Most applications start by connecting to a running Cognos server, then logging in. The code in common\ handles the connection plumbing; feel free to copy it. For authentication, see Security\ (and copy it, too!).
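For reference, the logon step in the samples boils down to handing the content manager service an XML credentials block shaped roughly like this (the namespace ID and account values are placeholders; check the Security samples and the developer guide for the exact form your install expects):

```xml
<credential>
    <!-- ID of the authentication namespace configured on the server -->
    <namespace>myNamespaceID</namespace>
    <username>someUser</username>
    <password>somePassword</password>
</credential>
```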

The quickest way to figure out how to do something is usually to find a relevant sample and copy it. Studying the API is not very useful, unfortunately. Important objects usually have generic property arrays to hold useful information, and instead of well-named, self-documenting methods, you get a lot of ‘setValues’ and ‘adds’. The names of the properties you need don’t seem to be documented anywhere except in the samples.

That said, the code is not particularly hard to deal with and the samples are quite good. It does take some time to get familiar with the API, but the difficulty just depends on what you’re trying to do. Something like automating package deployments is trivial, but writing your own security provider is a lot more challenging. If you need to extend the functionality of Cognos and you have programming resources on your team, the SDK is worth looking at.

The SDK relies on web services to communicate with the Cognos server. It’s helpful to know about web services and SOAP, but not really necessary.
Posted in software | Tagged | 5 Comments

How to make custom-shaped tabs in Flex

Flex gives you decent tab components that are simple to use, but if you want a shape more interesting than a rectangle with rounded corners, it takes a little extra work.

Start with a basic TabBar:
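(The original snippet is missing here; a minimal Spark TabBar looks something like this, with the tab labels assumed:)

```xml
<s:TabBar>
    <s:dataProvider>
        <s:ArrayList>
            <fx:String>Tab One</fx:String>
            <fx:String>Tab Two</fx:String>
            <fx:String>Tab Three</fx:String>
        </s:ArrayList>
    </s:dataProvider>
</s:TabBar>
```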

which gets you this…

If you just want the corners to look different, you can easily change the corner radii on the TabBar. To get a totally different shape, you need to use skins.

In Flash Builder, select New->MXML Skin and choose TabBar as the host component. Now create one more, selecting ButtonBarButton as the host this time. You’ll get the default skins for each, which makes an easy starting point.

In the TabBar tag, add a skinClass attribute that points at your new TabBar skin. In the TabBar skin, change the skinClass of the ButtonBarButton (in the data group’s item renderer) to point to your new button skin. Try running it. If everything works, you can start customizing the skins.
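As a sketch, the wiring looks roughly like this (the skin class names here are made up; use whatever you named your skins):

```xml
<!-- In the application: point the TabBar at the custom skin -->
<s:TabBar skinClass="skins.CustomTabBarSkin">
    <s:dataProvider>
        <s:ArrayList>
            <fx:String>Tab One</fx:String>
            <fx:String>Tab Two</fx:String>
        </s:ArrayList>
    </s:dataProvider>
</s:TabBar>

<!-- In CustomTabBarSkin's data group item renderer:
     point each tab button at the custom button skin -->
<fx:Component>
    <s:ButtonBarButton skinClass="skins.CustomTabSkin"/>
</fx:Component>
```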

Most of the work happens in the button skin. Notice the Rect components under the States. Remove them all. We’re going to simplify things for this example.

To get a different shape for your tabs, you can define a vector shape with a Path. Here we put the Path inside a Group just to give a little more flexibility in the display. Add it to the skin where the Rects were. The Path’s shape is determined by its data attribute: M moves to a starting position, L draws lines to the given coordinates, and Z closes the path.
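For example, a trapezoid-shaped tab could be drawn like this (the coordinates and fill color are illustrative, not the post’s original values):

```xml
<s:Group left="0" right="0" top="0" bottom="0">
    <!-- M: move to (0,20); L: draw lines through the corners; Z: close the path -->
    <s:Path data="M 0 20 L 10 0 L 70 0 L 80 20 Z">
        <s:fill>
            <s:SolidColor color="0xCCCCCC"/>
        </s:fill>
    </s:Path>
</s:Group>
```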

Now the tabs will look like this:

A different shape! But not terribly attractive. In a perfect world, you or a designer creates your graphics in Illustrator, imports them into Catalyst, then copies the data attribute from its code view. A simple path is easy enough to freestyle, but paths quickly get complicated if you’re doing anything more interesting.

Posted in flex, software | Tagged | Leave a comment

Developers’ Learned Helplessness

In the 1960s, psychologists experimenting on dogs found that subjects given electric shocks with no hope of relief learned to be helpless. Even in subsequent experiments that did provide a means of escaping the pain, the dogs passively submitted without trying to help themselves. They had been trained not to learn. Some software developers are like dogs who’ve been tortured by sadistic psychologists: in the face of new challenges, they’re simply helpless.

Why? Where do they learn that? Writing software consists mostly of solving problems, being confronted with puzzles to figure out. Why would you choose this kind of career if you didn’t enjoy learning new things?

One answer is that developers are trained to be helpless. For some shops, their entire code base is a big ball of mud floating in a toxic pool of backbiting and finger-pointing. When you try to fix some bad code, you own it. Then you get blamed for it. You inherit new responsibilities without getting the time or resources to fulfill them. Developers are rewarded for keeping their heads down.

Or, some shops are extraordinarily territorial. Even well-meaning, diplomatic attempts to take on issues outside of prescribed boundaries are punished. Either way, enough negative reinforcement teaches developers to stop trying.

Learned helplessness also comes up in the context of children with developmental challenges, especially an inability to communicate. Parents may anticipate a child’s needs too much, doing everything for them without seeking their input, to such an extent that they inadvertently teach them to be passive.

Most developers who are uninterested in learning new things seem more like this to me. They accept help too easily without making an effort to learn things themselves. They settle into a pattern of expecting others to be responsible for anything they don’t currently understand, so they never learn anything new.

In really large shops, you may also have very narrowly prescribed responsibilities on a very big project that make it difficult to break out. I once worked on a six-person team that was tasked with querying data from five (and only five) database tables and making it available to other system components. Even for that small task, we were required to use frameworks developed by other teams. Our exposure to different technologies and problems was painfully small and it took real extra effort to learn something new at work.

But whether your work duties are limited or you’re just lazy, learning to be helpless as a software developer is a game-ender. Few professions demand constant retraining the way programming does. If you learn to be passive and not take on new challenges, you’ve set the boundaries of your career: you will last as long as the technology you’re currently familiar with lasts. Who knows, that might be a long time. More likely, though, you’ll eventually need to find yourself a new line of work.

Posted in ruminations, software | Tagged | 5 Comments