Simple client-side validation with JSF and Primefaces

The conventional wisdom for JSF seems to be that server-side form validation is good enough and client-side validation should be avoided. Even if your component library makes it relatively easy, the usual arguments go: raw JavaScript is hard to maintain in a JSF environment, an Ajax update is fast enough, inputs will need to be validated on the server anyway, and so on.

Seeing JavaScript in JSF templates may suggest ill-advised hacks and browser work-arounds that will be hard to support, but with JavaScript being used everywhere and frameworks like backbone.js and knockout.js gaining popularity, it’s hard to pass up the advantages it can give you. It’s hard to justify a round trip to the server just to validate an email format or enable a button. Ajax requests can be pretty inexpensive and responsive to the user, but no request at all is cheaper and more responsive still.

Does mixing JavaScript and JSF really have to be such a maintenance burden? Primefaces provides a solid client-side API. It lets you define JavaScript event handlers (onblur, onclick, etc.) for many of its components, and for some, extra client-side hooks and a widgetVar attribute that exposes the component’s widget object directly to JavaScript.

Here are a few simple examples. The full code is available at https://github.com/thwick/clientside-jsf-validation.git.

You might have an email text field that you want to evaluate for validity and display a message if the field loses focus and the input is not valid.

<p:inputText id="emailInput" onblur="validateEmail();" />

And the function:

function validateEmail() {
    var emailReg = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,4})?$/;
    var email = $("#testForm\\:emailInput").val();
    if (emailReg.test(email)) {
        $("#emailInputValidation").text("valid!");
    } else {
        $("#emailInputValidation").text("invalid!");
    }
}

where the emailInput field is in a form called testForm, and the function updates a span tag with the id emailInputValidation. Dealing with the generated ids of JSF components can be painful, unfortunately, but Primefaces does give you a simpler handle in some cases.

For example, say you want to enable or disable a button depending on whether an input text field is empty or longer than eight characters. Primefaces lets us define a ‘widgetVar’ value, a name under which the component’s widget object is exposed directly in JavaScript, with none of JSF’s generated id prefixes tacked on. The button looks like this:

<p:commandButton id="button" widgetVar="buttonWidget" disabled="true" />

The Primefaces commandButton lets us simply call enable/disable to alter its state directly without calling back to the server.

In your javascript, just call buttonWidget.enable(); to enable it:

function validateContent() {
    var currentValue = $("#testForm\\:nonNullInput").val();
    if (currentValue) {
      if (currentValue.length < 9) {
        $("#fieldInputValidation").text("");
        buttonWidget.enable();          		
      } else {
        $("#fieldInputValidation").text("Field must be 8 characters or less!");
        buttonWidget.disable();
      }
    } else {
      buttonWidget.disable();
    }
}

For important data, validation on the server will still be required. But in a lot of cases, using the client-side API can improve responsiveness without adding too much complication.
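For completeness, the server-side check that this post defers to could be as simple as the same regex applied in Java. Here is a minimal sketch; the class and method names are my own, not taken from the post’s repo, and a real JSF app would wire this into a Validator:

```java
import java.util.regex.Pattern;

public class EmailCheck {
    // Same shape as the client-side regex: word characters, dots, and
    // dashes, an @, and a 2-4 character top-level domain.
    private static final Pattern EMAIL =
        Pattern.compile("^[\\w.-]+@([\\w-]+\\.)+[\\w-]{2,4}$");

    public static boolean isValidEmail(String email) {
        return email != null && EMAIL.matcher(email).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidEmail("user@example.com")); // true
        System.out.println(isValidEmail("not-an-email"));     // false
    }
}
```

Running the same pattern on both sides keeps the client’s instant feedback and the server’s check from drifting apart.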


Setting up an Android ContentProvider with a join

Android’s ContentProvider and related APIs allow for joins, but you usually just see simple single-table reads being used. I assume most people just create big denormalized tables in Android, but I still wanted to try out a content provider based on a join.

In this example, I have just two tables, sport and team, with these columns.

sport: _ID, name, periodType, updatedDt
team: _ID, name, sportId, updatedDt

The full code for this, by the way, is up at https://github.com/thwick/android-provider-join using the android-support-v4 compatibility package.

In my TeamProvider, extending ContentProvider, I use SQLiteQueryBuilder and set the join with:

String tables = "team LEFT OUTER JOIN sport ON (team.sportId = sport._id)";
queryBuilder.setTables(tables);

The tables have some column names in common, like _ID and name, so a projection map is needed to create aliases for the columns. The columns need to be referenced with their respective tables, e.g., team.name, sport.periodType. For the aliases, I set the team columns to their simple form, e.g., name, but prefixed the sport columns with the table name plus an underscore, so sport.name maps to sport_name (it might have been clearer to use the same convention for both).

Adding values to the projection map looks like this (using literals for clarity):

mColumnMap.put("team.name", "team.name AS name");
mColumnMap.put("sport.name", "sport.name AS sport_name");

Then the map is set on the query builder with

queryBuilder.setProjectionMap(mColumnMap);
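The full map has to cover every projected column, aliasing at least the ones that collide. The convention described above can be sketched in plain Java without any Android classes; the helper class and method names here are mine, not the repo’s:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProjectionMapBuilder {
    /**
     * Builds a projection map for the team/sport join. Team columns
     * keep their simple names; sport columns get a "sport_" prefix,
     * matching the convention in the post.
     */
    public static Map<String, String> buildColumnMap() {
        Map<String, String> map = new LinkedHashMap<>();
        for (String col : new String[] {"_id", "name", "sportId", "updatedDt"}) {
            map.put("team." + col, "team." + col + " AS " + col);
        }
        for (String col : new String[] {"_id", "name", "periodType", "updatedDt"}) {
            map.put("sport." + col, "sport." + col + " AS sport_" + col);
        }
        return map;
    }
}
```

With this in place, SQLiteQueryBuilder rewrites each requested table-qualified column into its aliased form when it builds the SELECT clause.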

To set up a cursor, the projection in the onCreateLoader() method of your LoaderManager implementation should use the table-qualified column names, e.g., sport.name. However, use the aliases when retrieving the results. The SimpleCursorAdapter set up as the list adapter, for example, binds the UI to the aliases of the team and sport name columns (again with literals):

String[] uiBindFrom = { "name", "sport_name" };

For the detail fragment, the cursor is set with a standard cursor set-up:

Cursor cursor = getActivity().getContentResolver().query(Uri.withAppendedPath(TeamProvider.CONTENT_URI, String.valueOf(teamId)), projection, null, null, null);

where the projection is all the table-qualified column names from each table. However, results need to be retrieved later from the cursor by alias,
eg. cursor.getColumnIndex("sport_name").

Since we’re using an outer join, it’s fine if a team does not have a sport. Such teams will just have a null sport name and period type.

That’s it. Maybe there are easier ways to accomplish the same thing (let me know!), but this isn’t too hard. Android does have a CursorJoiner class but I didn’t see it used much, and its API is more limited than a regular Cursor.


Developer-Driven Database Design

Databases designed and maintained collectively by teams of developers using mostly migration tools and ORMs display certain defining characteristics:

  • All columns except for primary keys are nullable.
  • No indexes except on primary and maybe foreign keys.
  • Lots of bit columns.
  • ‘Type’ columns that determine, for each row, which sets of columns hold data and which are all null.
  • Wide tables with lots of columns.
  • Tables and columns named after specific features.
  • Very few or no stored procedures, functions, views, or triggers.
  • Very few complex queries; lots of simple no-join queries.
  • Transactional boundaries are set in code, if anywhere, but are often not explicitly or consistently defined.

Old arguments between developers and DBAs about denormalization seem almost quaint when you think back on them now. Developers don’t even bother with ACID if they’re using NoSQL.

But I have to admit, I’m not sorry to see the back of triggers. And DBAs and the rest of the database world (to use an indefensibly vague term) were probably too conservative in many cases, which often put them in conflict with developers. It’s not hard to see whose values are dominating right now.


Setting up Puppet on Windows

Our system configuration strategy consists mainly of saved VMs, a hodgepodge of scripts and notes scattered about, a lot of manual effort, and good intentions. We have a mixed linux/windows environment, so we’ve never been sure whether the new DevOps tools offered us much. But we decided to take the plunge and start trying out Puppet’s relatively new Windows offerings.

Install puppet agent on Windows

You can’t run your central Puppet server on Windows, but you can install a puppet agent, which can run standalone or take marching orders from a puppet master running on linux. Even if you’re not interested in standalone, it’s helpful to make sure it works before trying it with your master.

The Puppet Windows instructions are pretty straightforward. Before you start, though, think about whether you’ll want to run puppet as administrator or as a regular user. On Windows 2008/Windows 7, if you run puppet as administrator, data will be stored at C:\ProgramData\PuppetLabs\puppet. For other users, data will get stored relative to %HOMEDRIVE%%HOMEPATH%\.puppet.

Download and run the ruby 1.8.7 installer. If you have a different version of ruby already installed, you obviously need to figure out how you want to handle the PATH variable.

Install some gems:

gem install sys-admin win32-process win32-dir win32-taskscheduler --no-rdoc --no-ri
gem install win32-service --platform=mswin32 --no-rdoc --no-ri --version 0.7.1

Download facter and puppet from github, unzip each, go into the uncompressed folder and run ruby install.rb (do facter first) for each.

Test Puppet standalone on Windows

At this point, you should be able to run puppet standalone, which isn’t terribly interesting, but you can test that things are okay so far. Here’s a simple manifest, for example, that ensures that a test file stored in a source folder exists in a target folder and that my MySQL service is running:

file { "c:/test/puppet/target/win_test_file.txt":  
  ensure => 'file', 
  owner => 'Administrator', 
  source => 'c:/test/puppet/source/win_test_file.txt', 
} 

service { 'MySQL': 
  ensure => 'running', 
  enable => true, 
} 

Save this into a file named init.pp, and run puppet apply init.pp (puppet.bat is now in your ruby install’s bin folder). It should turn on the MySQL service if it’s off and copy the test file over from source to target if it’s missing from target.

Puppet can control a fair number of things on Windows, as listed in the docs, but it’s not very compelling unless you add in puppet’s ability to control agents from a central puppet server.

Install Puppetmaster on Linux

There are many other, better instructions for installing puppet on linux, but just to keep documenting what I did for this case…I installed the master on an old Oracle Enterprise Linux vm. Again, you need ruby. The puppet instructions recommend using the ruby package that comes with your OS, but I used rvm and matched the same version of ruby, 1.8.7, as on Windows, though it probably doesn’t matter.

Install rvm by first getting the installer:
wget https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer
sudo bash rvm-installer

I started installing puppet from a gem first, but the latest gem on the download site was only 2.7.6. There are a few warnings floating around about not using a higher version on agents than on the master so I got the source from github. Run rvmsudo ruby install.rb to build it.

Start it up with rvmsudo puppet master --mkusers. The --mkusers flag creates a user for puppet if one doesn’t exist yet. The first time you run this, it should also create the master security certificate, I believe.

Make sure the firewall allows 8140/tcp.

Introduce your Windows agent to your Linux master

Back on your Windows agent, add the server to your puppet.conf (ie. C:\ProgramData\PuppetLabs\puppet\etc\puppet.conf):
server = <yourmaster.com>

Set up puppet with your DNS, or add a line like <your master's ip> <yourmaster.com> puppet to your hosts file at C:\WINDOWS\system32\drivers\etc\hosts.

Try puppet agent --test --verbose. You can also specify the master explicitly by adding --server <yourmaster.com>.

You should get output like

info: Creating a new SSL key for yourwindowsagent.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca/etc/puppet/manifests/site.pp
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for yourwindowsagent.com
info: Certificate Request fingerprint (md5):
BB:D7:30:C4:12:B9:13:33:F3:DE:E6:E8:71:7A:60:E7
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled

This attempts to connect to the puppet master and sends a certificate request. Back on the puppet master, check out the waiting requests with puppet cert --list. Your client host should show up. Accept it with puppet cert --sign <yourclient.com>.

Go back to your Windows agent and try puppet agent --test again. It should now output something like notice: Finished catalog run with no errors.

Judging from online comments, getting the certificates working is a common stumbling block for setting up puppet clients in general. The error messages are often not very informative. For example, I kept getting this error:

err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

It took a while to realize that I had an old, incorrect server key stored on the client. Removing the files from C:\ProgramData\PuppetLabs\puppet\etc\ssl then trying again worked.

Set up a manifest to execute

I set up more or less the same manifest on the master as I had run for standalone. It would be nice to store source files on the master, but puppet currently requires that Windows files and packages live on the Windows agent (or some share or location reachable by the agent, of course).

If you try to define a *nix file location in a manifest on the linux master to copy over to Windows, you will get an error like: err: Failed to apply catalog: Parameter source failed: Cannot use relative URLs.

Once it works from the command line, you can set up a Windows service that checks in with the master regularly, using nssm:

nssm.exe install puppet-agent <FULLPATH>\puppet.bat agent --server <yourpuppetmaster.com>

The server arg shouldn’t really be necessary if you have it defined in your puppet.conf.

And there you are! Sit back and let your puppet master control your services, set up users and groups, and run msi packages or other executables on your Windows instances.


Configuring Spring Integration channels without XML

I’ve been looking at some messaging frameworks lately and trying to find something that’s not too obtrusive. Spring Integration seems straightforward, plays nicely with our other Spring stuff, and runs in your application, which is a plus for us, but it’s hard to find simple examples to get started with, especially if you want to avoid xml configuration.

Spring makes it hard to totally skip all xml, but here’s an attempt to configure a pollable channel and a subscribable channel with as little xml as I can get away with.

First, the Maven pom has these dependencies:

org.springframework.integration:spring-integration-core:2.0.5.RELEASE
org.springframework:spring-context:3.0.3.RELEASE
org.springframework:spring-context-support:3.0.3.RELEASE
cglib:cglib-nodep:2.2

The configuration xml is just a pointer to the java configuration (the TestAppConfig class).

In java, we configure the context with a QueueChannel, a PublishSubscribeChannel, and a TaskScheduler.
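The configuration class itself isn’t shown inline in this post. As a rough sketch of what such a TestAppConfig might look like against the Spring Integration 2.0 API (the bean names and pool size are my guesses, not the repo’s actual code):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class TestAppConfig {

    // A pollable channel: messages queue up until a consumer polls them.
    @Bean
    public QueueChannel pollableChannel() {
        return new QueueChannel();
    }

    // A subscribable channel: every subscriber receives each message.
    @Bean
    public PublishSubscribeChannel subscribableChannel() {
        return new PublishSubscribeChannel();
    }

    // A scheduler bean for the pollers that drive pollable channels.
    @Bean
    public TaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(2);
        return scheduler;
    }
}
```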


I didn’t see a simple implementation of MessageHandler, so I wrote a test handler that just prints out messages.

For this demo, the channels and their consumers are squeezed into one main method. We get the channels out of the Spring context, set up consumers to listen to them, then send out simple string messages to each of the channels.

Try running it and you should see the output of all the consumers as they receive the messages and let the message handler process them.

Handler A; [Payload=Message on the pollablechannel][Headers={timestamp=1318972120696,id=9db9d54a-a2d3-4396-bfac-632c3b4b861f}]
Handler B; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]
Handler C; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]

Woohoo! Not that useful in itself, but it shows the basic idea of channels and consumers.
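The difference between the two channel types, one consumer polling messages off a queue versus every subscriber receiving each message, can also be illustrated in plain Java with no Spring at all. This is a toy model of the semantics, not the Spring Integration API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

public class ChannelDemo {
    /** Pollable semantics: messages wait in a queue until someone polls. */
    static class PollableChannel {
        private final Queue<String> queue = new ArrayDeque<>();
        void send(String msg) { queue.add(msg); }
        String poll() { return queue.poll(); }
    }

    /** Subscribable semantics: every subscriber is handed each message. */
    static class SubscribableChannel {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        void subscribe(Consumer<String> handler) { subscribers.add(handler); }
        void send(String msg) { subscribers.forEach(s -> s.accept(msg)); }
    }

    public static void main(String[] args) {
        PollableChannel pollable = new PollableChannel();
        pollable.send("Message on the pollable channel");
        System.out.println("Handler A; " + pollable.poll()); // one consumer takes it

        SubscribableChannel pubsub = new SubscribableChannel();
        pubsub.subscribe(m -> System.out.println("Handler B; " + m));
        pubsub.subscribe(m -> System.out.println("Handler C; " + m));
        pubsub.send("Message on the subscribable channel"); // both handlers fire
    }
}
```

Spring Integration adds the real machinery on top of this idea: message headers, pollers, thread handoff, and error handling.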


Adding more users to your EC2 instance with IAM

Amazon makes it easy to try out AWS with a free micro instance. As you start using it more, one of the first things you might want to do is let others have access to your instance. Using the IAM service, it’s easy to add more users and set up a management console address that’s not directly tied to your own Amazon account.

Go to your AWS Console (I’m assuming you’ve already set up an account). The screen at this point should have the message, ‘Sign In or Create an AWS Account’, and prompt you for your Amazon account username and password. Sign in.

At the AWS Management Console prompt, select ‘AWS Identity and Access Management (IAM)’. You should be at the IAM dashboard. 

At the bottom of the screen, under AWS Account Alias, notice the IAM User sign-in URL. We want to change that url, which starts with a bunch of random numbers, to something easier to deal with. Think of an alias you want to use. We’re going to use ‘acme-app’.

This is only a url for the AWS console; your end-users won’t see it, so it’s not a huge deal. Click the ‘Create Account Alias’ button at the bottom and put in your name, e.g., ‘acme-app’. The url for the console then becomes ‘https://acme-app.signin.aws.amazon.com/console’. You can go back and change it later if you want.

Next click the big ‘Create a new group of users’ button.

We’ll just do an admin group, but there are a lot of different authorization settings you can use. Type in a group name of ‘admin’. On the next screen, ‘Permissions’, click the ‘Administrator Access’ select button. Just hit ‘Continue’ through the edit permissions screen.

On the users screen, create user ids for all your admins.

On the next screen there is a ‘Download credentials’ link. Download it somewhere you won’t forget.

To set initial passwords for your users, click the users link under IAM Resources, go to the Security Credentials tab below, and click ‘Manage Password’. Assign or auto-generate a password.

Your users will also need their access key id and secret access key from the credentials.csv you downloaded in order to connect to AWS through api calls, i.e., from the command line.

And that’s it! Other users can now get to your instance. If they go to the url you defined, they should see ‘AWS Account acme-app’, rather than the initial basic AWS screen we noted above, and only the user accounts you defined for it will be valid.

The owner of the account will always be able to get into the console with either a user account defined for the new url or their normal Amazon logon. If you’re on the new app-specific login screen but want to use your AWS account instead, click the ‘Sign in using AWS Account credentials’ link.


Adding an https connector to embedded Tomcat 7

The api to add https support to an embedded tomcat server follows the corresponding server.xml elements pretty closely. For example:

       Connector httpsConnector = new Connector();
       httpsConnector.setPort(443);
       httpsConnector.setSecure(true);
       httpsConnector.setScheme("https");
       httpsConnector.setAttribute("keyAlias", keyAlias);
       httpsConnector.setAttribute("keystorePass", password);
       httpsConnector.setAttribute("keystoreFile", keystorePath);
       httpsConnector.setAttribute("clientAuth", "false");
       httpsConnector.setAttribute("sslProtocol", "TLS");
       httpsConnector.setAttribute("SSLEnabled", true);

       Tomcat tomcat = new Tomcat();
       //...
       Service service = tomcat.getService();
       service.addConnector(httpsConnector);

To add a redirect to your http port:

       Connector defaultConnector = tomcat.getConnector();
       defaultConnector.setRedirectPort(443);

If you only want an https port with no other port open, you can call setConnector() to make your connector the default on the tomcat object instead of adding new ones to the service.


Building Flex 4.1 projects with Maven and Flexmojos

Adobe relies on ant for its main flex build tool, but Sonatype now has a very functional Maven 3 plugin. It’s still in beta but seems stable enough, and it has the usual advantages over ant.

To try it out, make sure you have the Sonatype repository, http://repository.sonatype.org/content/groups/forge/, and add the flexmojos plugin to your pom: org.sonatype.flexmojos:flexmojos-maven-plugin:4.0-beta-7.

Since I was retrofitting old projects to use maven, I didn’t try out the flexmojos archetypes.

The default flex sdk of flexmojos seems to be 4.5. For flex 4.1, I found I had to add a dependency on the flex compiler with the version I wanted (see pom below). The sdks are in Sonatype’s repo, as are other basic libraries like flexunit. Most third-party libraries, like Swiz, for example, will need to be installed manually into your repository.

You can move over most compile settings from your ant script to corresponding maven options for compiling a swc or swf.  If your swc has namespaces used by another component, you will need to declare and include them in your pom (see below).

To run unit tests, you need to add a flex.flashPlayer.command property pointing to your debug flashplayer. To add code coverage, you supposedly just have to include the coverage tag. I’ve been unable to get it to work successfully, however. It always hangs on instrumenting the TestRunner.swf.

Here is a pom for a swc:

For the swf, you may also want to generate html wrapper files. Flexmojos does not generate them for you directly. Instead, you supply a set of files you want (swfobject.js, etc.) in a directory, use some replacement parameters, and then define the path to the files with templateURI. A line in the html like <title>${title}</title> will get the value of the title parameter defined in your pom substituted in. The docs say that the htmlName parameter is deprecated, but I found I had to add it to get the output html file named correctly.

To store all these files in a repository, it’s convenient to zip them up with the maven assembly plugin. It can then be added as a dependency to another project and unzipped if necessary, into a war, for example. Here’s a pom for a swf:

The documentation mentions a flexbuilder goal to generate Eclipse/Flashbuilder files, but it doesn’t seem to be in the current latest version. Better IDE support and working code coverage (assuming it’s not the fault of my own misconfiguration) would be great to have in future versions of flexmojos. What is in place already, though, works fine, and if you like working with maven for java, it’s worth looking at converting your flex projects over.


Developing for the Blackberry Playbook with Flex

In early March, I noticed RIM’s offer to develop an app for their new tablet, sportily named ‘The Playbook’, by March 31 to win a free device. Because I wanted to try out Adobe’s new 4.5 mobile SDK anyway and because I am very cheap, I decided to give it a try. The quick summary: RIM’s tools and processes are very much still in beta but they show promise and for the most part, I enjoyed working with them.

Getting Started

Coverage of RIM’s offer and development for the Playbook in general has been pretty much dominated by one negative blog post. I’m glad the author wrote it, because he seems to have single-handedly motivated RIM to change some of their ridiculous rules, like requiring a notarized identification. But still, it’s a little strange to see that one and only blog post cited in article after article. I guess it fit well into tech writers’ favorite memes on RIM.

The author complains a lot about setting up your development environment and it is definitely a chore. It’s not one clean install process and you do fill out a lot of forms. RIM cobbled together its own sdk and configuration with an IDE from Adobe and a simulator from VMWare and it shows. There are many little steps that all have to be done to get everything up and working. On the other hand, both main pieces of the environment are solid and already have a wide base of users.

Tools

For the most part, I liked the simulator. I’ve only tried Blackberry’s old simulator for phones a few times, but it seemed almost unusable. You can see why RIM went with a real virtualization tool this time. The Playbook simulator was a lot faster and more responsive for me than Android’s virtual device. Apparently at one time you had to buy VMWare’s Fusion product, but when I went through installation, you only needed the free VMWare Player, which I happened to have already installed.

One downside is that the connection between the IDE and the simulator is not as integrated as you’d like; you have to set up an ip address and password for it. And there are a few bugs. Once in a while during debugging, the simulator window will hold focus even though the Flash Builder window has been brought to the front. And you have to hit Ctrl-Alt to move your cursor off the simulator. Or maybe there’s a solution for that, like vmware-tools for running a Linux vm, but I didn’t see it documented.

Adobe Flash Builder is, of course, just an Eclipse plugin and has the strengths and weaknesses of Eclipse. Most importantly, it’s well-known to many developers. This version, 4.5, is in beta and that does show at times, but I found it to be pretty stable. Adobe is also hoping to build momentum for using flex to build Android apps, which I might try out next.

I don’t know how feasible it will be in reality to develop an application that runs on Playbook, Android, and AIR, but at the very least, there should be a lot of opportunities to share code, as long as performance on Android is decent. The biggest downer about Flash Builder is just that it’s not free. My 60-day free trial runs out in 26 days, and I somehow doubt revenue from my app will cover the cost.

Development

When I first started doing some test apps, I didn’t realize that the new QNX actionscript APIs were not entirely integrated with the flex mobile SDK. The docs lead you to create a mobile actionscript project, but anyone accustomed to flex development immediately misses mxml. If you create a regular Flex Mobile Project, though, it’s hard to use the QNX libraries, which provide some key functionality.

For example, RIM wants to establish that a downward swipe from the top bezel brings up an application menu, but that’s a qnx.events.QNXApplicationEvent and you can’t listen for it with just flex. Fortunately, the forums are quite helpful (more so than the official docs) in coming up with work-arounds for this and other problems. I would expect RIM to keep working on presenting a more seamless SDK for developers (they better).

I do like, though, the choice of actionscript. With its nearness to javascript, it’s a far more appropriate language for this kind of development than java. And the language itself is essentially open-source with a free-of-charge spec and open-source compilers and virtual machines available.

I found the QNX information lacking enough that I just treated my app as an AIR project then added in Playbook events, like swipe down, where necessary. I ended up with something that really feels too much like a web app. That might be inevitable when you’re developing without a real device. Not all the touch events can be simulated with the simulator so there’s a limit right now to what you can test.

The vendor application and app submission process was, like setting up the development environment, tedious but not hard. I almost didn’t get my vendor registration done in time. But I did end up getting everything submitted with two hours to spare on March 31.

So did I qualify for a free Playbook? I don’t know. I’ve gotten confirmation that I completed the first step, but I haven’t yet heard whether my app has been approved for App World. I have to admit, my app seems more like the result of a learning exercise (which it was, after all) than a polished application. But then, with no device, a simulator that can’t simulate all the real gestures and actions, a beta IDE, and minimal documentation, it’s bound to be a little rough. At least that’s what I tell myself.


How to find a file’s image type by looking at its bytes (in Java)

Byte arrays and byte streams get used a lot in java, but it’s rare to do anything with individual bytes. Many file types start with certain bytes, though, and can be quickly identified by them.

In my case, I had a byte array, extracted from a file with the guava library’s Files.toByteArray(file), and wanted to make sure it was a jpeg before sending it down to a flex front-end. Jpegs start with the two bytes, 0xFFD8. So I printed out the first two bytes in my array and found… -1 and -40.

Java stores the byte type signed, that is, -128 to 127, rather than 0 to 255. So what do the values -1 and -40 mean? Java uses 2’s complement for its negative numbers. Take -1: 1 is 0000 0001 in binary, so to get -1 in 2’s complement, you invert 0000 0001, getting 1111 1110, then add 1, yielding 1111 1111. That’s the binary value represented by a -1 java byte.

In your code, to get a positive int from a 2’s complement negative byte, you can do a bitwise & with 0xFF (1111 1111):

int unsignedByte = myByte & 0xFF;

So, if you had, say, -3, you would take its 2’s complement representation (3 in binary is 0000 0011; invert to get 1111 1100, then add 1 for 1111 1101) and evaluate it with & 0xFF (in binary, 1111 1111). The bitwise & operator evaluates the bits like:

  1111 1101
& 1111 1111
-----------
  1111 1101

The java byte value -3 equals 1111 1101, which is 253 when read as an unsigned value. If you instead simply cast your byte to an int,

int wrongByte = (int) b;

it will just convert it straight from a -3 byte to a -3 int.

Going back to the bytes I needed, a -1 java byte equals 1111 1111, or 255 in decimal. For the second byte of my file, I got -40, which is 1101 1000, or 216. The simplest, laziest way to convert an int to a hex string in java is probably Integer.toHexString, which gives us 0xFF and 0xD8 for 255 and 216. So my test file was, in fact, a jpeg.
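Putting the pieces together, a minimal check for the JPEG magic bytes might look like the following. The class and method names are mine, and the guava file-reading step is left out:

```java
public class JpegCheck {
    // JPEG files begin with the two bytes 0xFF 0xD8, which land in a
    // Java byte[] as -1 and -40 because byte is signed.
    public static boolean isJpeg(byte[] bytes) {
        if (bytes == null || bytes.length < 2) {
            return false;
        }
        // Mask with 0xFF to read each signed byte as an unsigned int.
        return (bytes[0] & 0xFF) == 0xFF && (bytes[1] & 0xFF) == 0xD8;
    }

    public static void main(String[] args) {
        byte[] header = { -1, -40 };                               // 0xFF, 0xD8
        System.out.println(isJpeg(header));                        // true
        System.out.println(Integer.toHexString(header[1] & 0xFF)); // d8
    }
}
```

The same shape works for other formats with known magic numbers, such as PNG (0x89 0x50 0x4E 0x47) or GIF ("GIF8").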
