Simple client-side validation with JSF and Primefaces

The conventional wisdom for JSF seems to be that server-side form validation is good enough and client-side validation should be avoided: even if your component library makes it relatively easy, raw javascript is hard to maintain in a JSF environment, an ajax update is fast enough, inputs will need to be validated on the server anyway, etc.

Seeing javascript in JSF templates may suggest to you ill-advised hacks and browser work-arounds that will be hard to support, but with javascript in use everywhere and frameworks like backbone.js and knockout.js gaining popularity, it’s hard to pass up the advantages it can give you. It’s hard to justify making a call back to the server just to validate an email format or enable a button. Ajax requests can be pretty inexpensive and responsive to the user, but no request at all is even cheaper and more responsive.

Does mixing javascript and JSF really have to be such a burden to maintain? Primefaces provides a solid client-side api. It lets you define javascript event handlers (onblur, onclick, etc.) for many of its components, and for some it adds extra client-side hooks and a widgetVar attribute that makes the component available directly in javascript.

Here are a few simple examples. The full code is available at https://github.com/thwick/clientside-jsf-validation.git.

You might have an email text field that you want to evaluate for validity and display a message if the field loses focus and the input is not valid.

<p:inputText id="emailInput" onblur="validateEmail();" />

And the function:

function validateEmail() {
    var emailReg = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,4})?$/;
    var email = $("#testForm\\:emailInput").val();
    if (emailReg.test(email)) {
        $("#emailInputValidation").text("valid!");
    } else {
        $("#emailInputValidation").text("invalid!");
    }
}

where the emailInput field is in a form called testForm and it updates a span tag with the id emailInputValidation. Dealing with ids of JSF components can sometimes be painful, unfortunately, but Primefaces does let you define a simpler id in some cases.

For example, say you want to enable or disable a button depending on whether an input text field is empty or longer than eight characters. Primefaces lets us define a ‘widgetVar’ value, a name that exposes the component’s client-side widget directly in javascript, with no generated id tacked onto the end. The button looks like this-

<p:commandButton id="button" widgetVar="buttonWidget" disabled="true" />

The Primefaces commandButton lets us simply call enable/disable to alter its state directly without calling back to the server.

In your javascript, just call buttonWidget.enable(); to enable it:

function validateContent() {
    var currentValue = $("#testForm\\:nonNullInput").val();
    if (currentValue) {
        if (currentValue.length < 9) {
            $("#fieldInputValidation").text("");
            buttonWidget.enable();
        } else {
            $("#fieldInputValidation").text("Field must be 8 characters or less!");
            buttonWidget.disable();
        }
    } else {
        buttonWidget.disable();
    }
}

For important data, validation on the server will still be required. But in a lot of cases, using the client-side api can improve responsiveness without adding too much complication.

Posted in software | 1 Comment

My neighborhood is first in line for Google Fiber

Woohoo! My neighborhood association is #1 in line for Google Fiber. You can suck it, Wornall Homestead, in your lame second-place position! Of course, we’re only first in Kansas City, MO (KCMO). Google will actually start installing in Kansas City, Kansas (KCK) first, but really, does anyone care about KCK?

You have to admire the finesse with which a leviathan like Google is managing local relations in a place far from its home base. Engaging small neighborhood associations in Kansas City to compete against each other to see who has the most sign-ups and therefore gets first in line for the precious, precious fiber seems to be working quite well. You even see signs with the fiber mascot, a rainbow bunny, in people’s yards.

A few malcontents have grumbled about Google backing off of its commitment to maintain open access on the fiber. It’s not ideal…but on the other hand, they give us ice cream! Look, here’s the Google ice cream truck.

And it wasn’t just some generic brand ice cream. It was from a gourmet dairy nearby. They are playing the local angle very nicely.

I have Time Warner right now and usually get a download speed around 16 Mbps in the evenings, worse during the workday, better in the early mornings. Upload speeds are usually about 1 Mbps. I honestly don’t have a lot of complaints about Time Warner, other than they are not providing me with anything near gigabit speeds. Also, they have not given me any ice cream.

Posted in software | Leave a comment

Setting up an Android ContentProvider with a join

Android’s ContentProvider and related APIs allow for joins, but you usually just see simple single-table reads being used. I assume most people just create big denormalized tables in Android, but I still wanted to try out a content provider based on a join.

In this example, I have just two tables, sport and team, with these columns.

sport: _ID, name, periodType, updatedDt
team: _ID, name, sportId, updatedDt

The full code for this, by the way, is up at https://github.com/thwick/android-provider-join using the android-support-v4 compatibility package.

In my TeamProvider, extending ContentProvider, I use SQLiteQueryBuilder and set the join with:

String tables = "team LEFT OUTER JOIN sport ON (team.sportId = sport._id)";
queryBuilder.setTables(tables);

The tables have some column names that are the same, like _ID and name, so a projection map is needed to create aliases for the columns. The columns need to be referenced with their respective tables, e.g. team.name, sport.periodType. For aliases, I just set the team columns to their simple form, e.g. name, but prefixed the sport columns with the table name plus an underscore, so sport.name maps to sport_name (it might have been clearer if I had used the same convention for both).

Adding values to the projection map looks like this (using literals for clarity):

mColumnMap.put("team.name", "team.name AS name");
mColumnMap.put("sport.name", "sport.name AS sport_name");
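Filling out the rest of the map follows the same pattern. One detail worth noting: Android’s CursorAdapter requires a column named _id in the cursor it’s given, so the team primary key should be aliased to plain _id. Here’s a sketch of the complete map as I’d build it (the exact aliases in the repo may differ slightly):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the complete projection map. A CursorAdapter expects a
// column named "_id", so team's primary key is aliased to plain _id.
public class ProjectionMapSketch {

    static Map<String, String> buildColumnMap() {
        Map<String, String> map = new HashMap<String, String>();
        map.put("team._id", "team._id AS _id");
        map.put("team.name", "team.name AS name");
        map.put("team.sportId", "team.sportId AS sportId");
        map.put("team.updatedDt", "team.updatedDt AS updatedDt");
        map.put("sport.name", "sport.name AS sport_name");
        map.put("sport.periodType", "sport.periodType AS sport_periodType");
        map.put("sport.updatedDt", "sport.updatedDt AS sport_updatedDt");
        return map;
    }

    public static void main(String[] args) {
        for (Map.Entry<String, String> e : buildColumnMap().entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```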

Then the map is set on the query builder with

queryBuilder.setProjectionMap(mColumnMap);

To set up a cursor, the projection in the onCreateLoader() method of your LoaderManager.LoaderCallbacks implementation should use the table-qualified column names, e.g. sport.name. However, use the aliases when you’re retrieving the results. The SimpleCursorAdapter that is set up as the list adapter, for example, binds the ui to the aliases of the team and sport name columns (again with literals):

String[] uiBindFrom = { "name", "sport_name" };
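Expanded into a full list-adapter set-up, that binding might look something like this sketch (the layout and view ids here are hypothetical, not from the repo; it assumes the support-library SimpleCursorAdapter inside a ListFragment):

```java
// Sketch only: R.layout.team_row, R.id.teamName, and R.id.sportName
// are made-up resource names.
String[] uiBindFrom = { "name", "sport_name" };
int[] uiBindTo = { R.id.teamName, R.id.sportName };

SimpleCursorAdapter adapter = new SimpleCursorAdapter(
        getActivity(),       // context
        R.layout.team_row,   // row layout
        null,                // no cursor yet; the loader delivers it
        uiBindFrom,          // column aliases to read from the cursor
        uiBindTo,            // views to bind the values to
        0);                  // no flags
setListAdapter(adapter);
```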

For the detail fragment, the cursor is set with a standard cursor set-up:

Cursor cursor = getActivity().getContentResolver().query(Uri.withAppendedPath(TeamProvider.CONTENT_URI, String.valueOf(teamId)), projection, null, null, null);

where the projection is all the table-qualified column names from each table. However, results need to be retrieved later from the cursor by alias,
e.g. cursor.getColumnIndex("sport_name").

Since we’re using an outer join, it’s fine if a team does not have a sport. They’ll just have a null sport name and period type.
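Since those joined columns can come back null, reading them defensively looks something like this (a sketch; variable names are mine):

```java
// Null-safe read of an aliased sport column for a team row that
// may not have a matching sport.
int sportNameIdx = cursor.getColumnIndex("sport_name");
String sportName = cursor.isNull(sportNameIdx)
        ? null
        : cursor.getString(sportNameIdx);
```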

That’s it. Maybe there are easier ways to accomplish the same thing (let me know!), but this isn’t too hard. Android does have a CursorJoiner class but I didn’t see it used much, and its API is more limited than a regular Cursor.

Posted in software | 2 Comments

Developer-Driven Database Design

Databases designed and maintained collectively by teams of developers using mostly migration tools and ORMs display certain defining characteristics:

  • All columns except for primary keys are nullable.
  • No indexes except on primary and maybe foreign keys.
  • Lots of bit columns.
  • ‘Type’ columns which determine for a row which sets of columns will have data and which will all be null.
  • Wide tables with lots of columns.
  • Tables and columns named after specific features.
  • Very few or no stored procedures, functions, views, or triggers.
  • Very few complex queries; lots of simple no-join queries.
  • Transactional boundaries are set in code, if anywhere, but are often not explicitly or consistently defined.

Old arguments between developers and DBAs about denormalization seem almost quaint when you think back on them now. Developers don’t even bother with ACID if they’re using NoSQL.

But I have to admit, I’m not sorry to see the backside of triggers. And DBAs and the rest of the database world (to use an indefensibly vague term) were probably too conservative in many cases, which often put them in conflict with developers. It’s not hard to see whose values are dominating right now.

Posted in software | Leave a comment

The Internet can’t decide how old I am

Several articles came out a little while ago on Google’s ad preference settings and the fairly specific profile they keep on you, but I had already become intrigued by another online age measurement almost as explicit- match.com ads.

Now I have no interest in match.com. I’ve never used it, I’m not single, and, as far as I can recall, I’ve never clicked on one of their ads even accidentally. Yet I see their ads all the time, especially on Pandora and on a Yahoo email account I have. On Yahoo, it suggests I might want to date women aged 20-29.

On Pandora, however, the match ads give me a suggested age of 30-39. 

Of course, that’s not a direct measure of how old they think I am. Given dating patterns and male stereotypes, those sites no doubt think I’m that old or older, but it still suggests a relative difference in their perception of my age.

Clearly, Yahoo and Pandora have different information sets available to them. Pandora has some direct user choices to consider in the form of the channels its users set up. Reviewing my set-up, I found channels on Olivier Messiaen and John Coltrane. Boom goes my online age!

Who wants to be older? I decided to try adding a Justin Bieber channel. But I couldn’t really commit to it. I never played it and after a week or so, there was no change in the ads. I reluctantly started playing the Bieber channel with the volume off. It took a few weeks, but it worked. I started seeing ads for 25-34 year-olds on Pandora. I was young again!

I couldn’t keep it up, though. I got tired of minimizing Pandora whenever someone came over to my desk just in case some Bieber cover art was visible. And I feared that I would start seeing Tiger Beat ads. I deleted the Bieber channel and before long, the ads went back to 30-39.

But just a few days later, I pulled up weather.com, and lo and behold, it had match.com ads waiting for me suggesting women aged…40-49. Son of a bitch! It’s not like I was reading feature articles there or anything, honest. I just wanted to know the hourly forecast.

Clearly, the Hive Mind at the bottom of the Intertubes is not as all-knowing as it no doubt wishes to be. It has not yet encapsulated my demographic attributes into a single unified avatar. Its vision is still fragmented and broken, but it’s getting better every day.

Maybe it’s better to try to keep your online activity as private as possible and stick to sites that have tighter privacy and advertising policies, but anything that requires much effort is probably doomed. Besides, concealing your online activities could have unintended consequences. You don’t want to meticulously erase your online persona only to find you’ve raised suspicions that you have something to hide, wound up on some government watch list, and now need several hours to make it through airport security.

Celebrities and politicians assiduously cultivate their image in the press, and while they can directly influence how they’re perceived, there are many forces outside of their control. Their words and images can take on almost a life of their own. Maybe that’s what our online profiles will be like- semi-malleable entities that we have to monitor like credit scores. I’m sure someone will write an app for it.

Posted in ruminations | Leave a comment

Setting up Puppet on Windows

Our system configuration strategy consists mainly of saved VMs, a hodgepodge of scripts and notes scattered about, a lot of manual effort, and good intentions. We have a mixed linux/windows environment so we’ve never been sure if the new DevOps tools offered us much. But we decided to take the plunge and start trying out Puppet’s relatively new Windows offerings.

Install puppet agent on Windows

You can’t run your central Puppet server on Windows, but you can install a puppet agent, which can run standalone or take marching orders from a puppet master running on linux. Even if you’re not interested in standalone, it’s helpful to make sure it works before trying it with your master.

The Puppet Windows instructions are pretty straightforward. Before you start, though, think about whether you’ll want to run puppet as administrator or as a regular user. On Windows 2008/Windows 7, if you run puppet as administrator, data will be stored at C:\ProgramData\PuppetLabs\puppet. For other users, data will get stored relative to %HOMEDRIVE%%HOMEPATH%\.puppet.

Download and run the ruby 1.8.7 installer. If you have a different version of ruby already installed, you obviously need to figure out how you want to handle the PATH variable.

Install some gems:

gem install sys-admin win32-process win32-dir win32-taskscheduler --no-rdoc --no-ri
gem install win32-service --platform=mswin32 --no-rdoc --no-ri --version 0.7.1

Download facter and puppet from github, unzip each, and in each uncompressed folder run ruby install.rb (do facter first).

Test Puppet standalone on Windows

At this point, you should be able to run puppet standalone, which isn’t terribly interesting, but you can test that things are okay so far. Here’s a simple manifest, for example, that ensures that a test file stored in a source folder exists in a target folder and that my MySQL service is running:

file { "c:/test/puppet/target/win_test_file.txt":  
  ensure => 'file', 
  owner => 'Administrator', 
  source => 'c:/test/puppet/source/win_test_file.txt', 
} 

service { 'MySQL': 
  ensure => 'running', 
  enable => true, 
} 

Save this into a file named init.pp, and run puppet apply init.pp (puppet.bat is now in your ruby install’s bin folder). It should turn on the MySQL service if it’s off and copy the test file over from source to target if it’s missing from target.

Puppet can control a fair number of things on Windows, as listed in the docs, but it’s not very compelling unless you add in puppet’s ability to control agents from a central puppet server.

Install Puppetmaster on Linux

There are many other, better instructions for installing puppet on linux, but just to keep documenting what I did for this case…I installed the master on an old Oracle Enterprise Linux vm. Again, you need ruby. The puppet instructions recommend using the ruby package that comes with your OS, but I used rvm and matched the same version of ruby, 1.8.7, as on Windows, though it probably doesn’t matter.

Install rvm by first getting the installer:
wget https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer
sudo bash rvm-installer

I started installing puppet from a gem first, but the latest gem on the download site was only 2.7.6. There are a few warnings floating around about not using a higher version on agents than on the master, so I got the source from github instead. Run rvmsudo ruby install.rb to install it.

Start it up with rvmsudo puppet master --mkusers. The --mkusers flag creates a puppet user if it doesn’t exist yet. The first time you run this, it should also create the master security certificate, I believe.

Make sure the firewall allows 8140/tcp.

Introduce your Windows agent to your Linux master

Back on your Windows agent, add the server to your puppet.conf (ie. C:\ProgramData\PuppetLabs\puppet\etc\puppet.conf):
server = <yourmaster.com>

Set up puppet with your DNS, or add a line like <yourmaster.com> puppet to your hosts file at C:\WINDOWS\system32\drivers\etc\hosts.

Try puppet agent --test --verbose. You can also specify the master explicitly by adding --server <yourmaster.com>.

You should get output like

info: Creating a new SSL key for yourwindowsagent.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for yourwindowsagent.com
info: Certificate Request fingerprint (md5):
BB:D7:30:C4:12:B9:13:33:F3:DE:E6:E8:71:7A:60:E7
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled

This attempts to connect to the puppet master and sends a certificate request. Back on the puppet master, check out the waiting requests with puppet cert --list. Your client host should show up. Accept it with puppet cert --sign <yourclient.com>.

Go back to your Windows agent and try puppet agent --test again. It should now output something like notice: Finished catalog run with no errors.

Judging from online comments, getting the certificates working is a common stumbling block for setting up puppet clients in general. The error messages are often not very informative. For example, I kept getting this error:

err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

It took a while to realize that I had an old, incorrect server key stored on the client. Removing the files from C:\ProgramData\PuppetLabs\puppet\etc\ssl then trying again worked.

Set up a manifest to execute

I set up more or less the same manifest on the master as I had run for standalone. It would be nice to store source files on the master, but puppet currently requires that Windows files and packages live on the Windows agent (or some share or location reachable by the agent, of course).

If you try to define a *nix file location in a manifest on the linux master to copy over to Windows, you will get an error like: err: Failed to apply catalog: Parameter source failed: Cannot use relative URLs.

Once it works from the command line, you can set up a Windows service to check in with the master regularly with nssm:

nssm.exe install puppet-agent <FULLPATH>\puppet.bat agent --server <yourpuppetmaster.com>

The server arg shouldn’t really be necessary if you have it defined in your puppet.conf.

And there you are! Sit back and let your puppet master control your services, set up users and groups, and run msi packages or other executables on your Windows instances.

Posted in software | 9 Comments

Book Review: Programming Android

Programming Android provides a good, comprehensive view of Android application architecture, but for someone already familiar with java, it starts slowly…reeeeeeeeaallly slowly. There are sections on installation of the Android SDK, basic concepts of standard java (like its type system), a long introduction to Eclipse…even a section on the preferred location to store java source code.

To be fair, the book proclaims at the very beginning that it’s written for people of all backgrounds, not just java, and it’s got to cover the basics for those who might, say, know iOS but not server-side java. But for me, the book didn’t really get interesting until it launched into a detailed description of concurrency and serialization on Android. From there, it kept going at a fast clip all the way into advanced topics, like NFC, sensors, and audio and video.

Layout, which some Android references get bogged down in, is explained conceptually in the context of MVC architecture. The book doesn’t spend time introducing all the standard view classes or going through their properties. You’ll find a good description of how Android measures and arranges UI components, but you won’t find simplistic code examples for the onMeasure() method.

The book goes through the Android framework and advocates how it thinks a non-trivial app should be organized. It keeps mobile issues like battery life, connectivity, and asynchronicity in the forefront of all its discussions, and it provides extended examples on things like how to write your own content provider and how to incorporate Google maps.

Programming Android is really not for beginners. If you want simple code examples to get up to speed on basic concepts, you’re better off starting with the online dev guide and other resources. But if you’re interested in deeper discussions of architectural issues as well as of more advanced Android APIs, this book should have what you want. Just don’t be afraid to skip the beginning if you’ve done much java before.

[Note: free review copy provided by O’Reilly]

Posted in software | Leave a comment

Looking for HTML5 Nirvana

This is a dumb question, but does HTML5 support true cross-browser compatibility? Or, I should say, is it browser-agnostic? I kinda thought the latest versions of the major browsers were all converging on the same standards anyway, so I tried doing a simple layout with absolutely no browser-specific elements. Here’s what it looked like in some different browsers.

UPDATE: I put the site back online at http://bigmath.herokuapp.com/ for folks to look at. If I had, ahem, known I’d be making this public, I’d have cleaned it up more, but this is the state it was in when I took the screenshots.

Firefox 8

Chrome

IE 9

Opera 11.5

The numbers should really all be the same for comparison, but still, you get the idea. I don’t do much front-end design at all, and my html and css here are childishly simple. So is there a way to HTML5-ify them to get this mess straightened out?

I googled “HTML5 layout” and the first few results promise better cross-browser support with HTML5, but include parallel ‘moz’ and ‘webkit’ CSS elements, as well as huge swaths of IE-specific junk. I didn’t even bother checking out earlier versions of these browsers or mobile browsers.

There’s a wiki page on HTML5 for different browser engines, and it makes me think there’s not a simple HTML5 solution for all browsers out there. Or at least if there is, it’s well-hidden from front-end noobs like me.

A lot of articles and tutorials claim that HTML5 is great for multiple browsers, but then set forth the principle of ‘progressive enhancement’ or ‘graceful degradation’, which is the idea that it’s fine for different browsers to display the same website differently with it possibly looking much worse on some than others. This smells a little like a rationalization.

I have done a little GUI work in flex (Adobe’s SDK for the Flash Player) and I have to say, you do not encounter the concept of ‘graceful degradation’ when you write your UI with flex. For the most part, you don’t worry about browsers at all (except for things like ssl and tab behavior).

Now, I don’t want to be on the wrong side of history or put myself in the path of the pitchfork-wielding mob seeking to occupy flash. Flash is proprietary, has never been stable enough on macs or linux, and uses a plugin architecture that has always been problematic, so I’m okay to see it go if there’s a better alternative.

Instead of a plugin architecture, I guess you just rely on repetitive CSS and a toolkit architecture when you use HTML5/CSS3/javascript. Do most shops just use tools like jQuery and Sencha instead of raw javascript? It reminds me a little of all the java frameworks and design patterns that sprouted up to deal with its underlying deficiencies. It can work fine, but it doesn’t leave you feeling like the technology underneath is very solid or well thought out.

And we’re talking about HTML5 here- the One True Standard to Rule Them All. HTML5 has been marketed and hyped as heavily as any new technology I can think of.  I thought writing a simple, horizontally-centered equation would be like pressing the easy button. For all the HTML5 headlines I’ve had to wade through, I want it to not just be possible to make consistent sites, but totally simple. It should be hard NOT to write a browser-agnostic site.

Surely I don’t have to write parallel pieces of my site for different browsers and even then have to accept differences, right? Do people just accept that, even with a technology so new its spec isn’t even done yet? Has everyone drunk the ‘graceful degradation’ kool-aid?

Not to mention the fact that there’s no agreement on the HTML5 video format; or that WebGL, which provides some of the cooler Flash alternative examples people display, is not accepted by Microsoft; or that Google has put forward a javascript replacement, Dart, that may wind up being their preferred platform. This doesn’t really feel like a triumphant, unified new world of web development…

When people say HTML5 is the future, I believe them. Only, it looks like the future that Joe Hewitt warned about. There is no single caretaker, and the big players are just pushing and pulling the web to their best advantage. That’s not exactly news, but given the hopes attached to HTML5, it’s a bit more disappointing than usual.


Posted in software | Tagged | 15 Comments

Configuring Spring Integration channels without XML

I’ve been looking at some messaging frameworks lately and trying to find something that’s not too obtrusive. Spring Integration seems straightforward, plays nicely with our other Spring stuff, and runs in your application, which is a plus for us, but it’s hard to find simple examples to get started with, especially if you want to avoid xml configuration.

Spring makes it hard to totally skip all xml, but here’s an attempt to configure a pollable channel and a subscribable channel with as little xml as I can get away with.

First, the Maven pom has these dependencies:

org.springframework.integration:spring-integration-core:2.0.5.RELEASE
org.springframework:spring-context:3.0.3.RELEASE
org.springframework:spring-context-support:3.0.3.RELEASE
cglib:cglib-nodep:2.2

The configuration xml is just a pointer to the java configuration (the TestAppConfig class).

In java, we configure the context with a QueueChannel, a PublishSubscribeChannel, and a TaskScheduler.
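Here’s a sketch of what that TestAppConfig class can look like (the bean names and details here are my guesses, not necessarily what’s in my project). Note the taskScheduler bean, which polling consumers look for by default, and remember that the cglib dependency in the pom is what lets Spring proxy the @Configuration class:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

// Sketch of the java configuration; bean names are assumptions.
@Configuration
public class TestAppConfig {

    @Bean
    public QueueChannel pollableChannel() {
        return new QueueChannel();
    }

    @Bean
    public PublishSubscribeChannel subscribableChannel() {
        return new PublishSubscribeChannel();
    }

    // Polling consumers look up a bean named "taskScheduler" by default.
    @Bean
    public TaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(2);
        return scheduler;
    }
}
```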


I didn’t see a simple implementation of MessageHandler, so I wrote a test handler that just prints out messages.
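Mine looked something like this (the class name is mine; in Spring Integration 2.0 the interface to implement is org.springframework.integration.core.MessageHandler):

```java
import org.springframework.integration.Message;
import org.springframework.integration.core.MessageHandler;

// Trivial handler: print each message, prefixed with a name so the
// output shows which consumer received it.
public class TestMessageHandler implements MessageHandler {

    private final String name;

    public TestMessageHandler(String name) {
        this.name = name;
    }

    public void handleMessage(Message<?> message) {
        System.out.println(name + "; " + message);
    }
}
```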

For this demo, the channels and their consumers are squeezed into one main method. We get the channels out of the Spring context, set up consumers to listen to them, then send out simple string messages to each of the channels.
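Roughly like this sketch — the xml file name, bean names, and handler class are all assumptions here, and the polling consumer is wired up by hand rather than through xml:

```java
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.integration.channel.PublishSubscribeChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.endpoint.PollingConsumer;
import org.springframework.integration.message.GenericMessage;
import org.springframework.scheduling.support.PeriodicTrigger;

// Sketch: pull the channels from the context, attach consumers, send messages.
public class ChannelDemo {

    public static void main(String[] args) throws Exception {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        QueueChannel pollable =
                context.getBean("pollableChannel", QueueChannel.class);
        PublishSubscribeChannel subscribable =
                context.getBean("subscribableChannel", PublishSubscribeChannel.class);

        // A polling consumer drains the queue channel on a timer...
        PollingConsumer consumerA =
                new PollingConsumer(pollable, new TestMessageHandler("Handler A"));
        consumerA.setTrigger(new PeriodicTrigger(100));
        consumerA.setBeanFactory(context.getBeanFactory());
        consumerA.afterPropertiesSet();
        consumerA.start();

        // ...while subscribers on the pub-sub channel get every message.
        subscribable.subscribe(new TestMessageHandler("Handler B"));
        subscribable.subscribe(new TestMessageHandler("Handler C"));

        pollable.send(new GenericMessage<String>("Message on the pollablechannel"));
        subscribable.send(new GenericMessage<String>("Message on the subscribablechannel"));

        Thread.sleep(1000); // let the poller fire before shutting down
        context.close();
    }
}
```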

Try running it and you should see the output of all the consumers as they receive the messages and let the message handler process them.

Handler A; [Payload=Message on the pollablechannel][Headers={timestamp=1318972120696,id=9db9d54a-a2d3-4396-bfac-632c3b4b861f}]
Handler B; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]
Handler C; [Payload=Message on the subscribablechannel][Headers={timestamp=1318972120697,id=5c918d28-4044-4b52-9e36-60f2498c5616}]

Woohoo! Not that useful in itself, but it shows the basic idea of channels and consumers.

Posted in java, software | Tagged | 4 Comments

Adding more users to your EC2 instance with IAM

Amazon makes it easy to try out AWS with a free micro instance. As you start using it more, one of the first things you might want to do is let others have access to your instance. Using the IAM service, it’s easy to add more users and set up a management console address that’s not directly tied to your own Amazon account.

Go to your AWS Console (I’m assuming you’ve already set up an account). The screen at this point should have the message, ‘Sign In or Create an AWS Account’, and prompt you for your Amazon account username and password. Sign in.

At the AWS Management Console prompt, select ‘AWS Identity and Access Management (IAM)’. You should be at the IAM dashboard. 

At the bottom of the screen, under AWS Account Alias, notice the IAM User sign-in URL. We want to change that url that starts with a bunch of random numbers to something easier to deal with. Think of an alias you want to use. We’re going to use ‘acme-app’.

This is only a url for the AWS console. Your end-users won’t see it so it’s not a huge deal. Click the ‘Create Account Alias’ button at the bottom and put in your name, e.g. ‘acme-app’. The url for the console then becomes ‘https://acme-app.signin.aws.amazon.com/console’. You can go back later and change it if you want.

Next click the big ‘Create a new group of users’ button.

We’ll just do an admin group, but there are a lot of different authorization settings you can use. Type in a group name of ‘admin’. On the next screen, ‘Permissions’, click the ‘Administrator Access’ select button. Just hit ‘Continue’ through the edit permissions screen.

On the users screen, create user ids for all your admins.

On the next screen there is a ‘Download credentials’ link. Download it somewhere you won’t forget.

To set initial passwords for your users, click the Users link under IAM Resources. Then go to the Security Credentials tab below, and click ‘Manage Password’. Assign or auto-generate a password.

Your users will also need their access key id and secret access key from the credentials.csv you downloaded in order to connect to AWS through api calls, ie. command-line.

And that’s it! Other users can now get to your instance. If they go to the url you defined, they should see ‘AWS Account acme-app’, rather than the initial basic AWS screen we noted above, and only the user accounts you defined for it will be valid.

The owner of the account will always be able to get into the console with either a user account defined for the new url, or their normal Amazon logon. If you’re on the new app-specific login screen but want to use your AWS account instead, click the ‘Sign in using AWS Account credentials’ link.

Posted in cloud, software | Tagged | Leave a comment