Sunday, May 23, 2010

Eclipse Plugins, Acceptance Testing, Jigloo

We have decided to not attempt to unit test SWT UI pieces - a decision, by the way, that is orthogonal to our choice of using Maven 2. Regardless of your build environment, Eclipse plugins are usually tested using JUnit and trying to automate SWT events etc. in that environment does not seem productive. By convention, we try to keep the UI layer as thin as humanly possible and keep all the business logic squirreled away in classes that don't interact directly with SWT and only sparsely so with JFace - making it much more natural to test with JUnit. Also, we tend to not directly unit test code that is generated (like EMF Ecore models or Hibernate POJOs) and since we use the very excellent Jigloo GUI builder from Cloudgarden to create all the Composites for our views, wizard pages etc. the amount of direct SWT code we write ourselves is fairly limited. Our company uses dedicated UI testing tools to do automated functional testing of the GUI.
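The thin-UI pattern above can be sketched roughly like this (a minimal illustration; the `LoginView` and `LoginPresenter` names are hypothetical, not from our codebase). The SWT Composite would implement the dumb view interface, while all the business logic lives in a plain class that JUnit can drive directly, with no SWT on the classpath:

```java
// Sketch: the SWT Composite would implement this interface, but the
// presenter below never touches SWT and is testable with plain JUnit.
interface LoginView {
    void showError(String message);
    void close();
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) {
        this.view = view;
    }

    // All the decision-making lives here, not in the widget code.
    void onLogin(String user, String password) {
        if (user == null || user.isEmpty() || password == null || password.isEmpty()) {
            view.showError("User and password are required");
        } else {
            view.close();
        }
    }
}

public class ThinUiSketch {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // A test double stands in for the real Composite.
        LoginPresenter presenter = new LoginPresenter(new LoginView() {
            public void showError(String message) { log.append("error;"); }
            public void close() { log.append("closed"); }
        });
        presenter.onLogin("", "secret");      // invalid input -> error path
        presenter.onLogin("alice", "secret"); // valid input -> close path
        System.out.println(log);
    }
}
```

In a test, a stub view like the one above records what the presenter asked it to do, which is exactly the kind of check that is painful to automate against live SWT widgets.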

We realized that the Eclipse plugins we write fall fundamentally into two categories. Some plugins contained code, which we called Source Plugins; others existed solely to expose jars that other plugins depended on, which we called Binary Plugins. To be completely honest, we did have plugins that contained jars as well as code, but to keep things simple (as we like to do) we decided it was easy enough to keep each Eclipse plugin purely source or purely binary.
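A binary plugin in this sense is little more than a MANIFEST.MF that bundles and re-exports its jars. As a sketch (the bundle name, jar names, and packages below are invented for illustration), it might look like:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.thirdparty.libs
Bundle-Version: 1.0.0
Bundle-ClassPath: lib/commons-lang.jar,
 lib/commons-io.jar
Export-Package: org.apache.commons.lang,
 org.apache.commons.io
```

Source plugins then simply declare a dependency on this bundle, and the wrapped jars never need to appear on their own classpaths.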

Use Jigloo or Matisse?

Please don't use either! It's my strong opinion (after writing Swing GUIs for 10 years) that using GUI builders is, in all but the most extreme edge cases, a bad idea. HAND CODE YOUR GUI!
  • Whether you choose Matisse or Jigloo, it is not a standard, will fall out of favour, and a better tool will come along. At that point, you will have legacy code that is nigh on impossible to maintain. This has already happened several times in the history of Java GUI builders.
  • You should avoid forcing your developers to use one IDE, and it is a huge overhead to expect devs to switch to a particular IDE when looking at the GUI code. They'll be frustrated because they can't remember key bindings, the project setup is out of date, they have the wrong version installed, etc. People will make quick fixes without the builder. At that point your code is unmaintainable both in your IDE of choice and in the GUI builder itself! The whole thing is a mess.
  • Designing a GUI is not, in my experience, a particularly onerous task and probably accounts for no more than 5-10% of the total development time of an application. Even if initially using Matisse or Jigloo gives you a 50% time advantage over hand-coding the GUI, this is insignificant in the grand scheme of things. It is certainly not worth the hidden costs and impending maintenance disasters that lie ahead.
  • GridBagLayout is not hard. It just isn't! It's really simple, in fact. It will take you a few minutes to learn and after that you'll never look back. Your GUIs will look how you want them to look and your code will be more maintainable as a result. Use GridBagLayout!
I have spent a good deal of time warning people about this before and been proven correct.
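To show how little code a hand-written GridBagLayout form actually takes, here is a minimal sketch of a two-column Swing form (labels right-aligned in column 0, fields stretching in column 1); the form contents are invented for illustration:

```java
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.Insets;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

// Hand-coded two-column form: labels in column 0, fields in column 1.
public class HandCodedForm {
    static JPanel buildForm() {
        JPanel panel = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.insets = new Insets(4, 4, 4, 4);      // uniform padding around each cell
        c.anchor = GridBagConstraints.LINE_END; // right-align the labels

        c.gridx = 0; c.gridy = 0;
        panel.add(new JLabel("Name:"), c);
        c.gridy = 1;
        panel.add(new JLabel("Email:"), c);

        // Fields stretch horizontally and soak up any extra width.
        c.gridx = 1; c.gridy = 0;
        c.fill = GridBagConstraints.HORIZONTAL;
        c.weightx = 1.0;
        panel.add(new JTextField(20), c);
        c.gridy = 1;
        panel.add(new JTextField(20), c);
        return panel;
    }

    public static void main(String[] args) {
        JPanel form = buildForm();
        System.out.println(form.getComponentCount() + " components laid out");
    }
}
```

The trick is that one reusable GridBagConstraints object is mutated as you go; once you internalize gridx/gridy, fill, and weightx, the layout reads top to bottom like the form itself.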

Sunday, May 2, 2010

Empirical Process Control

The empirical model of process control provides and exercises control through frequent inspection and adaptation for processes that are imperfectly defined and generate unpredictable and unrepeatable outputs. See statistical process control.
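To make the statistical-process-control idea concrete, here is a toy sketch (the velocity numbers and 3-sigma threshold are my own illustration, not part of Scrum) that flags a sprint velocity falling outside the historical mean plus or minus three standard deviations:

```java
import java.util.Arrays;

// Toy statistical-process-control check: flag sprint velocities that fall
// outside mean +/- 3 standard deviations. The numbers are invented.
public class VelocityControlChart {
    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(0);
    }

    static double stddev(double[] xs) {
        double m = mean(xs);
        double var = Arrays.stream(xs).map(x -> (x - m) * (x - m)).average().orElse(0);
        return Math.sqrt(var);
    }

    // A point inside the control limits is "in control"; one outside them
    // signals that the process has changed and needs inspection.
    static boolean inControl(double[] history, double latest) {
        double m = mean(history);
        double s = stddev(history);
        return Math.abs(latest - m) <= 3 * s;
    }

    public static void main(String[] args) {
        double[] velocities = {21, 24, 19, 23, 22, 20};
        System.out.println(inControl(velocities, 22)); // typical sprint: true
        System.out.println(inControl(velocities, 5));  // clearly out of control: false
    }
}
```

Frequent inspection in Scrum plays the same role as the control chart here: you look at the latest output, compare it to what the process has produced before, and adapt when it drifts.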

For many years software development methodologies have been based on the defined process control model. But software development isn’t a process that generates the same output every time given a certain input.
The agile software development method Scrum is based on the empirical process control model.

Saturday, May 1, 2010

Running Tested Features (RTF)

Agile Projects: Early Features

An Agile project really does focus on delivering completed features from the beginning. Assuming that the features are tested as you go, the RTF metric for an agile project rises steadily from day one.
This is a very simple metric, often plotted as a "burn-up" chart. Real live features, done, tested, delivered consistently from the very beginning of the project to the very end. Demand this from any project, and it must become agile in response to the demand.
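The counting rule behind the metric is strict: a feature scores only when it is both running in the integrated product and passing its tests. A minimal sketch (the feature records below are invented for illustration):

```java
import java.util.List;

// Toy RTF counter: a feature counts toward Running Tested Features only
// when it both runs in the integrated build and has passing tests.
public class RtfMetric {
    record Feature(String name, boolean running, boolean tested) {}

    static long rtf(List<Feature> features) {
        return features.stream().filter(f -> f.running() && f.tested()).count();
    }

    public static void main(String[] args) {
        List<Feature> sprint = List.of(
            new Feature("login", true, true),
            new Feature("search", true, false),   // runs but untested: doesn't count
            new Feature("export", false, false)); // designed only: doesn't count
        System.out.println("RTF = " + rtf(sprint)); // prints RTF = 1
    }
}
```

Sampling this count at the end of every iteration and plotting the running total gives exactly the burn-up chart described above.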

Waterfall: Features Later On

Waterfall-style projects “deliver” things other than RTF early on. They deliver analysis documents, requirements documents, design documents, and the like, for a long time before they start delivering features. The theory is that when they start delivering features, it will be the right features, done right, because they have the right requirements and the right design. Then they code, which might be considered Delivered Software, but not Running Tested Features, because the features aren’t tested, and often don’t really run at all. Finally they test, which mostly discovers that the RTF progress wasn’t what they thought.
On a chart, such a project is mostly fuzz: fuzzy, unpredictable overhead in requirements and design and testing. We can time-box these phases, and often do, but we don’t know how much work is really done, or how good it is.
And the apparent feature progress itself may also be fuzzy, if the project is planning to do post-hoc testing. We think that we have features done, but there is some fuzzy amount of defect generation going on. After a bout of testing — itself vaguely defined — we get a solid defect list, and do some rework. Only then do we really know what the RTF curve looked like.
The bottom line is that a non-agile project simply cannot produce meaningful, consistent RTF metrics from day one until the end. The result, if we demand them, is that a non-agile project will look bad. Everyone will know it, and will push back against the metric. If we hold firm, they’ll have no choice but to become more agile, so as to produce decent RTF.

Professional Scrum Developer

Professional Scrum Developer - PSD

Pluralcast #12 : The Future of Scrum with Ken Schwaber

Video Interview with Ken Schwaber

Assessments

Open Assessment (60 minutes, no certification, 75% score)
The open assessment of Scrum knowledge continues to be available free to anyone interested in testing their knowledge of Scrum, or in preparing for certification.  The assessment has been shortened to 50 questions (randomly selected from a pool) to reduce the amount of time it takes to complete.  The passing score is the established average achieved by the 1,000 test-takers during the development of this assessment.
PSM I (90 minutes, PSM I Certification, 90% minimum score)
The fundamental assessment of Scrum knowledge is available to anyone interested in demonstrating their knowledge of Scrum, and in achieving certification.  A $100 fee will be charged for this assessment.  Individuals demonstrating an acceptable level of knowledge will be issued a certificate and listed as a Professional Scrum Master I.
PSM II (120 minutes, PSM II Certification, 85% minimum score)
The intermediate assessment of Scrum knowledge and skill is available to anyone who wishes to demonstrate their capabilities in applying Scrum to solve complex problems, and in achieving certification.  As many of the answers require written responses (and consequently must be graded by a human) a fee of $500 will be charged for this assessment.  Individuals demonstrating an acceptable level of knowledge and skill will be issued a certificate and listed as a Professional Scrum Master II.
PSD I (.NET) (90 minutes, PSD I, 90% minimum score)
The fundamental assessment of skills needed for developing software using Scrum will be available, for the first year, only to individuals who attend a Professional Scrum Developer course.  Individuals demonstrating an acceptable level of knowledge and skill will be issued a certificate and listed as a Professional Scrum Developer I.