Friday, December 7, 2012

Reading: BDD vs TDD, Estimation, Real Options

Challenging comment:

Dan North · June 6, 2012
  • I’ve seen teams burn insane amounts of time trying to automate UI interactions, for instance, at huge cost and with almost no benefit
  • The opportunity cost, in terms of all the other things they could have done with that time, is considerable, and they’re usually doing it on someone else’s dime. 
  • I think there’s a duty of care involved in these kind of decisions. 
  • You shouldn’t automate “because we do” but because there is an identifiable benefit in the automation that outweighs its cost in this case
  • Sometimes that investment is worth it, sometimes it isn’t, so it’s always worth asking the question.

J.B. Rainsberger: TDD/BDD and Queuing Theory

Other Reading:

Perils of Estimation
  • move beyond this cargo cult approach to inception where we slavishly trot out hundreds of stories with their associated estimates, 
  • remember we are engaging in a process of deliberate discovery

Real Options
Real Options:
  • Options have value.
  • Options expire.
  • Never commit early unless you know why.
  • Defer Commitments
  • The Last Responsible Moment
  • Pull

Tuesday, November 20, 2012

Getting Puppet master and agent running on a single Vagrant box

Ensure you have "lucid32" box:

vagrant box add lucid32 http://files.vagrantup.com/lucid32.box

Add a Vagrantfile in a new directory:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant::Config.run do |config|
  config.vm.box = "lucid32"
end

vagrant up

ssh into the new Vagrant box (vagrant ssh, or ssh to localhost port 2222):

sudo su -
echo -e "deb http://apt.puppetlabs.com/ lucid main\ndeb-src http://apt.puppetlabs.com/ lucid main" >> /etc/apt/sources.list.d/puppet.list
apt-key adv --keyserver keyserver.ubuntu.com --recv 4BD6EC30
apt-get update
apt-get install puppet puppetmaster
apt-cache policy puppet
puppet --version
vi /etc/hosts

      # add: 127.0.0.1  puppet
vi /etc/puppet/puppet.conf

     # add to [master] section:
touch /etc/puppet/manifests/site.pp
iptables -A INPUT -p tcp -m state --state NEW --dport 8140 -j ACCEPT
puppet master --no-daemonize --verbose --debug

Start another ssh session to same box
sudo su -
puppet agent --verbose --debug


Sunday, November 18, 2012

Cryptic Ruby Global Variables

$!         The exception information message set by 'raise'.
$@         Array of backtrace of the last exception thrown.
$&         The string matched by the last successful match.
$`         The string to the left  of the last successful match.
$'         The string to the right of the last successful match.
$+         The highest group matched by the last successful match.
$1..$9     The Nth group of the last successful match.
$~         The information about the last match in the current scope.
$=         The flag for case-insensitive matching; false by default.
$/         The input record separator, newline by default.
$\         The output record separator for the print and IO#write. Default is nil.
$,         The output field separator for the print and Array#join.
$;         The default separator for String#split.
$.         The current input line number of the last file that was read.
$<         The virtual concatenation file of the files given on command line (or from $stdin if no files were given).
$>         The default output for print, printf. $stdout by default.
$_         The last input line read by gets or readline.
$0         Contains the name of the script being executed. Assignable.
$*         Command-line arguments given for the script (same as ARGV).
$$         The process number of the Ruby running this script.
$?         The status of the last executed child process.
$:         Load path for scripts and binary modules by load or require.
$"         The array of module names loaded by require.
$DEBUG     The status of the -d switch.
$FILENAME  Current input file from $<. Same as $<.filename.
$LOAD_PATH The alias to the $:.
$stderr    The current standard error output.
$stdin     The current standard input.
$stdout    The current standard output.
$VERBOSE   The verbose flag, which is set by the -v switch.
$-0        The alias to $/.
$-a        True if option -a is set. Read-only variable.
$-d        The alias to $DEBUG.
$-F        The alias to $;.
$-i        In in-place-edit mode, this variable holds the extension, otherwise nil.
$-I        The alias to $:.
$-l        True if option -l is set. Read-only variable.
$-p        True if option -p is set. Read-only variable.
$-v        The alias to $VERBOSE.
$-w        True if option -w is set.

Environmental Global Variables

$: (Dollar Colon)

$: is basically a shorthand version of $LOAD_PATH. $: contains an array of paths that your script will search through when using require.

$0 (Dollar Zero)

$0 contains the name of the ruby program being run. This is typically the script name.

$* (Dollar Splat)

$* is basically shorthand for ARGV. $* contains the command line arguments that were passed to the script.

$? (Dollar Question Mark)

$? returns the exit status of the last child process to finish.

$$ (Dollar Dollar)

$$ returns the process number of the program currently being run.
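
A quick sketch exercising the globals above (output values vary per machine and invocation):

```ruby
require 'rbconfig'

# $0 is the running script's name; $* mirrors ARGV; $$ is the PID.
puts $0
p $* == ARGV            # true
p $$ == Process.pid     # true

# $: ($LOAD_PATH) is the array of directories that require searches.
p $: == $LOAD_PATH      # true

# $? holds the Process::Status of the last child process to finish.
system(RbConfig.ruby, "-e", "exit 3")
p $?.exitstatus         # 3
```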

Regular Expression Global Variables

$~ (Dollar Tilde)

$~ contains the MatchData from the previous successful pattern match.

$1, $2, $3, $4 etc

$1-$9 contain the capture groups of the previous successful pattern match.

$& (Dollar Ampersand)

$& contains the matched string from the previous successful pattern match.

$+ (Dollar Plus)

$+ contains the last match from the previous successful pattern match.

$` (Dollar Backtick)

$` contains the string before the actual matched string of the previous successful pattern match.

$' (Dollar Apostrophe)

$' contains the string after the actual matched string of the previous successful pattern match. 
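
All of the regex globals above can be seen from a single match:

```ruby
"hello world" =~ /l(o)\s(w)/

p $~.class   # MatchData
p $&         # "lo w"  - the matched string
p $`         # "hel"   - everything before the match
p $'         # "orld"  - everything after the match
p $1         # "o"     - first capture group
p $+         # "w"     - last capture group
```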

Exceptional Global Variables

$! (Dollar Bang)

$! contains the Exception that was passed to raise.

$@ (Dollar At Symbol)

$@ contains the backtrace for the last Exception raised. 
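
Both are visible inside a rescue clause:

```ruby
begin
  raise ArgumentError, "boom"
rescue
  p $!.class    # ArgumentError - the exception object itself
  p $!.message  # "boom"
  p $@.class    # Array - the backtrace, same as $!.backtrace
end
```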

Other Global Variables

$_ (Dollar Underscore)

$_ contains the last input line read by gets or readline.

$, (Dollar Comma)

$, is the (global) default separator for Array#join and the output field separator for print.
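
The separator globals in action (assigning $, is deprecated in recent Rubies, so the explicit-argument form is shown alongside the defaults):

```ruby
# $; defaults to nil, so split breaks on runs of whitespace.
p "a b  c".split          # ["a", "b", "c"]

# $, defaults to nil, so join concatenates with no separator;
# passing the separator explicitly is the modern alternative to setting $,.
p [1, 2, 3].join          # "123"
p [1, 2, 3].join("-")     # "1-2-3"
```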

Thursday, November 15, 2012

Event Sourcing Yow Night with Greg Young

·         Current state:
·         Is awful
·         Requires large amounts of versioning
·         1st level derivative of facts that have happened
·         Look at systems from perspective of no current state
·         Banking, insurance, gambling, etc
·         We don’t have current state, we have a series of facts
·         Driving point is from business perspective
·         E.g.
·         Purchase order
·         Line items(n)
·         Shipping information
·         Models represent our current state
·         Document stores are awesome - until you need to change your schema
·         Problem is we want to go and change our previous representations of data
·         E.g. Cart created -> 3 items added -> shipping information added
·         At any time can replay 3 events to get data model
·         Events: append only model
·         How do you scale immutable data?  Copy it
·         Immutable data is awesome
·         Once “Cart created” is created it will never change
·         Append-only model, with everything immutable, what about updates/deletes?
·         Update/delete = lost valuable data
·         Code with a magic 8-ball to predict what business is going to want in 2 years?
·         Strategic design with DDD
·         Don’t apply ES globally
·         ES/CQRS is not an architecture
·         Small things you apply within a service/component
·         Not losing information is valuable
·         2 sets of use cases in different orders that end up with same ending state?
·         Lost info
·         Hash collision – non-perfect – lost info coming into system
·         One rule: we don’t lose any data – generating 100Gb per day
·         How do you predict value of data?
·         Humans have history of making bad predictions about future
·         Bigger the expert = worse predictive analysis
·         Can only say: “I cannot price this option”
·         Therefore I should keep it
·         When business ask for unexpected data, can say yes
·         Could be something that makes or breaks company – competitive advantage
·         Accounting is not done with a pencil
·         If make a mistake, do a reversal
·         Partial reversal: posted $10,000 instead of $1,000, so post a -$9,000 correction
·         Accountants don’t like doing this – too complicated across 8 accounts
·         Do a full reversal instead and then redo
·         E.g. Cart created -> 3 items added -> 1 item removed -> shipping information added
·         Same as 2 items added?
·         As a series of facts, very different from each other
·         Want to know about how many items removed?
·         Most businesses are not just create, read, update, delete…. Many verbs
·         ES gives semantics associated back down to verbs
·         Business value comes from fact that we’re not losing information
·         E.g. Large POS, Amazon
·         Items removed from a cart are more likely to be purchased in the future – the customer still wants them but can’t afford them now
·         Old model
·         Add RemovedLineItems object or flag & date on line items
·         Query, subquery – time correlation – 3 nested subqueries
·         (Try using a Stream database instead)
·         ES model
·         Write projection with state inside
·         If item found in carts
·         Business person can go back into past and see things at that point in time with a deterministic perception we have today
·         Huge win for business
·         Useful for predicting future  - “Back testing” in finance
·         BI reverse engineer CRUD databases into events (imperfectly)
·         Temporal data model
·         Smoke testing
·         Rerun commands since day 1 every Friday and compare results from last time
·         Won’t protect you from black swans
·         Append-only good for hard drives (even SSDs that burn out rewriting)
·         E.g. Secure system
·         Gambling
·         Chris Harn – edited his bets on hard drive
·         How to prevent a super user attack
·         E.g. Pick 6 tickets
·         CSU/DSU
·         Prevent by putting log on “write-once” media – physically can’t modify data
·         Easier to physically secure a machine than to secure software
·         200 partitions within logs
·         Every aggregate has its own stream
·         Partition
·         Rolling snapshot
·         20,000 requests per sec if all in memory
·         Events represent functions
·         Current state = left fold
·         Snapshots = memoisation
·         ES = functional way of storing data
·         Pattern match functions to events
·         ES = FP
·         Balance of bank account not a column in db but a function of account history
·         Provable
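
The “current state = left fold” point can be sketched in Ruby (event names here are made up for illustration, not from the talk’s code):

```ruby
# Events are immutable facts; they are appended, never updated or deleted.
CartCreated = Struct.new(:cart_id)
ItemAdded   = Struct.new(:cart_id, :sku)
ItemRemoved = Struct.new(:cart_id, :sku)

# Current state is a left fold (reduce) over the event stream.
def cart_state(events)
  events.reduce({ items: [] }) do |state, event|
    case event
    when ItemAdded   then { items: state[:items] + [event.sku] }
    when ItemRemoved then { items: state[:items] - [event.sku] }
    else state
    end
  end
end

events = [
  CartCreated.new(1),
  ItemAdded.new(1, "book"),
  ItemAdded.new(1, "pen"),
  ItemRemoved.new(1, "pen"),  # the removal is itself a preserved fact
]
p cart_state(events)  # only "book" remains, but the "pen" history is kept
```

A snapshot is just the memoised value of this fold at some event number, so replay can resume from there instead of from day 1.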

·         Natural fits for ES
·         Accounting
·         Pubsub
·         Don't have to build your own Event Store
·         Cassandra - stream per column
·         Scales well
·         Medical system

·         How to justify cost of storing everything because you don’t know what you will need
·         Cost of data is low – roughly a can of Coke buys 5 GB of storage
·         Hard to justify not storing data
·         What is it not used for?
·         Lots of things
·         Things outside of core domain
·         Events represent use cases
·         Some use cases might not be high value
·         E.g claims more valuable than sales
·         Only used for competitive advantage – requires analysis
·         Pitfalls?
·         ES architecture
·         Monolithic - systems of systems instead
·         Expensive to do analysis
·         Does every projection read every event?
·         Projection pattern match, function
·         Only look at events interested in
·         Map reduce
·         I asked which databases other than Cassandra were a good fit for ES?
·         Consistency is important
·         Need CA for writes, AP for reads
·         Hard to find system that can be tuned like that
·         Riak but slow, quorum writes
·         Event Store has BSD license
·         SQL server for small systems
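
A projection, as described above, pattern-matches only the events it is interested in and skips the rest – e.g. the removed-items question from the Amazon example (class and event names invented for illustration):

```ruby
ItemAdded   = Struct.new(:sku)
ItemRemoved = Struct.new(:sku)

# A projection folds its own state out of the stream, reacting only to
# the event types it cares about and ignoring everything else.
class RemovedItemsProjection
  attr_reader :removals

  def initialize
    @removals = Hash.new(0)
  end

  def apply(event)
    case event
    when ItemRemoved then @removals[event.sku] += 1
    end
    self
  end
end

stream = [ItemAdded.new("book"), ItemRemoved.new("pen"),
          ItemAdded.new("pen"), ItemRemoved.new("pen")]
report = stream.reduce(RemovedItemsProjection.new) { |proj, e| proj.apply(e) }
p report.removals["pen"]   # 2 - answered by replay, no nested subqueries
```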

Saturday, November 10, 2012

RunDeck and Jenkins

RunDeck and Jenkins can be used together to provide a deployment pipeline.
How is RunDeck different from Jenkins?
  • Rundeck not a CI server
  • Both are able to:
    • provide a self serve job interface to automate routine procedures. 
    • execute shell scripts on remote nodes to facilitate deployment tasks. 
  • Differentiator: Rundeck's built-in support for pluggable remote command execution
  • Comes down to use case. 
    • Rundeck == job console for Ops and geared to work with that ecosystem of tools.
    • jenkins-rundeck plugin demonstrates how complementary they are in the continuous deployment tool chain. 
    • Jenkins handling build end of CI loop and triggering Rundeck to provide distributed orchestration across deployment management tool chain.

How is RunDeck different from Puppet mcollective or Chef knife?
  • Some overlap between rundeck and mcollective and knife
    • Allow administrators to execute commands in distributed environment, offering a form of real time control
    • Use metadata-level searches for targeting remote nodes. 
    • Levels of authorization, authentication and auditing
  • Rundeck has a few goals of its own though:
    • Easy way to define routine sequences as "Job workflows" as a basis for runbook automation solutions.
    • Integration of node and environment metadata sources as RunDeck "resource model providers". In this way, Rundeck can use Puppet or Chef node data to drive remote execution.
    • Evolve role-based access control definitions into a high level DSL that ties privilege level to resource model and workflow actions
    • Plugin system supporting concept of "dispatch providers" to delegate to tools like mcollective, knife, func, fabric, PsExec and others for cross tool execution.
  • Ultimate Goal: Simple to use yet flexible enough to complement existing tool chains
Puppet-Rundeck resource provider for Rundeck

Example/Musings on using Rundeck, Puppet, Jenkins, Fabric together

Bamboo-RunDeck Plugin

Saturday, October 27, 2012

Thoughtworks Tech Radar Oct 2012


  • micro-services (Dropwizard, declarative provisioning)
  • Edge Side Includes (ESI) for page composition (Varnish)
  • Configuration in DNS 
  • aggregates as documents 
  • automated deployment pipeline (first class in build tool)
  • work-in-progress limits 
  • declarative provisioning. (Pallet)
  • Mobile first 
  • responsive web design 
  • advanced analytics 
  • logs as data 
  • guerrilla user testing, remote usability testing
  • Semantic monitoring (continuously test app in prod through test-execution/real-time monitoring)
  • In-process acceptance testing 
  • Recommend against exhaustive browser based testing.  


  • Rake for Java and .Net projects.
  • Gradle 
  • GemJars 
  • immutable servers (‘phoenix servers’), Chef/Puppet, software designed to withstand failure
  • Jasmine paired with Node.js
  • Zipkin (monitoring)
  • Zucchini (Cucumber for iOS)
  • JetBrains AppCode IDE (iOS and OS X)
  • Light Table
  • Apache Pig (Hadoop MR pipelines)
  • Crazy Egg (heat maps), Gaze, Silverback
  • Graphite  
  • Riemann (aggregates and relays events in real time)
  • Highcharts
  • D3
  • Dependency Structure Matrices (DSM)
  • embedded servlet containers (SimpleWeb and Webbit)
  • Locust (in-line automated performance testing) Python, better than JMeter or Grinder
  • SaaS performance testing tools ( and Tealeaf) 


  • Hybrid clouds
  • open source IaaS (OpenStack or CloudStack)
  • Google BigQuery
  • Microsoft’s Azure 
  • Continuous integration in the cloud (no local software and minimal configuration)
  • mobile payment systems (M-Pesa, Square)
  • MongoDB
  • Neo4j
  • Riak
  • Datomic
  • Couchbase
  • Vert.x 
  • Calatrava (cross-platform mobile application development)
  • Meteor.js (client- and server-side JavaScript application framework backed by MongoDB)
  • Demoted: Windows Phone
  • Demoted: Singleton infrastructure  

    Languages & Frameworks

  • JavaScript as a platform
  • Require.js.
  • Twitter Bootstrap
  • Scratch, Alice, and Kodu (programming languages for kids)
  • Lua
  • Sinatra, Flask, Scalatra and Compojure
  • Dropwizard (embedded HTTP server, RESTful endpoints, built-in metrics and health-checks, and straightforward deployments)
  • Gremlin (imperative graph traversal language)
  • Jekyll (“microization” of web publishing framework)
  • RubyMotion (Ruby compiler and toolchain for developing iOS applications)
  • HTML5 for offline application
  • AngularJS and Knockout 
  • Demoted: Backbone.js
  • Demoted: component-based web frameworks (don't attempt to make web development into something that it fundamentally is not)

Further Kanban: Classes of Service, Expedited

Classes of service:

Lean Kanban Conference May 2012:

Henrik Kniberg: Kanban vs Scrum

David Anderson intro
  • Visual control mechanism tracks work flow through stages of value stream. 
    • Whiteboard with sticky notes, or electronic card wall system
    • Best practice == do both
    • Generates transparency that contributes to cultural change
  • Exposes bottlenecks/queues/variability/waste – which impact performance of organization in terms of quantity of work delivered and cycle time required
  • Changes behavior and encourages greater collaboration within the workplace
  • Encourages discussion about improvements, and teams quickly start implementing improvements to their process.

Kanban in a nutshell
  1. Visualize the workflow
    • Split the work into pieces, write each item on a card and put on the wall.
    • Use named columns to illustrate where each item is in the workflow.
  2. Limit WIP – assign explicit limits to how many items may be in progress at each workflow state.
  3. Measure lead time & optimize process to make lead time as small and predictable as possible
    • lead time == average time to complete one item (“cycle time”)
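
Measuring lead time is just averaging (done − started) per completed item; a minimal sketch, assuming each card records those two timestamps:

```ruby
require 'date'

# Each completed card records when it entered the workflow and when it
# reached "Done" (hypothetical data).
items = [
  { started: Date.new(2012, 11, 1), done: Date.new(2012, 11, 5)  },  # 4 days
  { started: Date.new(2012, 11, 2), done: Date.new(2012, 11, 10) },  # 8 days
  { started: Date.new(2012, 11, 3), done: Date.new(2012, 11, 6)  },  # 3 days
]

# Average lead time = mean of (done - started), in days.
def average_lead_time(items)
  items.sum { |i| (i[:done] - i[:started]).to_i } / items.size.to_f
end

puts average_lead_time(items)  # (4 + 8 + 3) / 3 = 5.0
```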

On sliding scale from prescriptive to adaptive:
  • Scrum is more prescriptive (iterations, cross-functional teams) than Kanban.  
  • Kanban is more adaptive than Scrum

Roles not prescribed
  • Doesn't mean you shouldn't have them
  • But make sure they add value and don't conflict with other elements of process
  • In a small project, unnecessary roles could lead to waste (or sub-optimisation & micromanagement)
  • Less is more - start with less
  • E.g. "Product Owner" == sets priorities of team
Timeboxes not prescribed - can choose any cadences necessary
  • E.g. Single Cadence (Scrumlike)
    • Plan & Commit every second Monday
    • Demo/release every second Friday
    • Retrospective every second Friday
  • E.g. Three Cadences
    • Every week release whatever is ready for release. 
    • Every second week planning meeting and update our priorities/release plans. 
    • Every fourth week retrospective to tweak and improve our process
  • E.g. Event-driven
    • Trigger planning meeting whenever start running out of stuff to do. 
    • Trigger release whenever set of Minimum Marketable Features (MMFs) ready for release.
    • Trigger spontaneous quality circle whenever bump into same problem second time. 
    • Do in-depth retrospective every fourth week
Kanban limits WIP per workflow state (Scrum limits per iteration)
  • Scrum limits WIP indirectly (theoretical max per column is max of iteration driven by velocity)
  • Kanban limits WIP per column directly
  • Choose what limit to apply to which workflow states
  • Limit WIP of all workflow states
    • Starting as early as possible and ending as late as possible along value stream
    • E.g. consider adding WIP limit to “To do” state as well
  • Once WIP limits in place, can start measuring/predicting lead time
    • Allows to commit to SLAs & make realistic plans
  • If item sizes vary, consider defining WIP limits in terms of story points/whatever unit you use. 
    • Some teams break down items to roughly same size to reduce time spent estimating (estimation can be waste). 
    • Easier to create smooth-flowing system if items are roughly equal-sized
Both are empirical
  • You have to experiment with process and customize it to your environment.
    • Scrum and Kanban just give you basic set of constraints to drive process improvement.
    • Kanban says you should limit WIP. So what should the limit be? Don’t know, experiment
  • Don't have knobs for Capacity/Lead Time/Quality/Predictability
    • Do have indirect controls:
      • Few people <-> Many people
      • Few large teams <-> Many small teams
      • Low WIP limits <-> High WIP limits
      • No iterations <-> Long iterations
      • Little planning <-> Lots of planning
    • E.g. reduce WIP limit
      • Then observe how Capacity/Lead Time/Quality/Predictability change
      • Draw conclusions
      • Change some more things
      • Continuously improve
    • Continuous improvement = Kaizen, Inspect & Adapt, Empirical Process Control, Scientific Method
    • Feedback loop == most critical element
      • Scrum + XP loops: Sprint, Scrum, CI, UT, Pairing
        • "Are we building the right stuff?" down to "are we building the stuff right?"
      • Kanban: Should use all of the above
      • Kanban adds useful real-time metrics:
        • Average lead-time: Updated when item reaches "Done"
        • Bottlenecks: E.g. Column X crammed with entries, X+1 empty
      • With real-time metrics: Can choose length of feedback loops based on how often you want to analyse and make changes
        • Too long: improvement will be slow
        • Too short: process might not have time to stabilise between each change == thrashing
      • Can experiment with feedback loop itself (meta-feedback loop).
  • E.g. Experimenting with WIP
    • E.g. 4 person team, start with WIP of 1
      • E.g. not feasible for all 4 to work on same item ==  people sitting idle (ok occasionally) == avg lead time increases
      • items will get through “Ongoing” really fast once they get in, but they will be stuck in “To Do” longer than necessary, so the total lead time across the whole workflow will be unnecessarily high
    • E.g. increase WIP to 8
      • E.g. problems with integration server prevents cards from being "done"
      • Cards start piling up in "Ongoing" as new work is taken on
      • When WIP 8 is reached, must fix integration server - WIP limit prompted us to react and fix bottleneck instead of piling up unfinished work
      • Good. But if WIP limit was 4 would have reacted earlier, giving better avg lead time. 
      • It’s a balance. Measure avg lead time and keep optimizing WIP limits to optimize lead time
  • Why need a "To-Do" column?
    • Gives team a small buffer to pull work from in absence of customer
    • Not needed if customer always available to tell team what to do
Kanban allows change within iteration
  • Scrum wouldn't allow E to be added to A+B+C+D after committed to sprint.
  • Kanban would allow it but say there's a limit to how many can be added to "To-Do" column so you have to choose to remove one
  • E could be prioritised above other To-Do but work would only start on E once it can be pulled into "Ongoing" column
  • Kanban response time (time to respond to a change of priorities) == however long it takes for capacity to become available, “one item out = one item in” (driven by WIP limits).
  • Scrum response time == avg half sprint length
Kanban board is persistent - doesn't need to be reset every iteration
Cross-functional teams are optional
  • Board is related to workflow
  • Board not necessarily owned by one team
  • Establish some ground rules as to who uses the board and how then experiment to optimise flow
  • E.g. Whole board served by one cross-functional team - like Scrum
  • E.g. PO sets priorities in column 1
    • Cross-functional dev team does dev (column 2) and test (column 3)
    • Release (column 4) done by specialist team
    • Slight overlap in competencies, so if release team becomes bottleneck one of the devs will help them
Items do not have to fit into a single iteration
  • Kanban teams try to minimize lead time and level flow
    • Indirectly creates incentive to break items into relatively small pieces
  • No explicit rule stating that items must be small enough to fit into a specific time box
  • Same board might have both items that take 1 month and 1 day
Estimation is not prescribed
  • If you need to make commitments you need to decide how to provide predictability
  • Some teams make estimates and measure velocity just like Scrum
  • Other teams skip estimation, but break each item into roughly same size pieces
    • Can simply measure velocity in terms of how many items completed per unit of time (e.g. features per week)
  • Some teams group items into MMFs (Minimum Marketable Features) and measure avg lead time per MMF, and use that to establish SLAs
    • E.g. “when we commit to an MMF it will always be delivered within 15 days”
  • Lots of interesting techniques for Kanban-style release planning and commitment management
  • Best practices will emerge over time
Both allow working on multiple products simultaneously
  • What if one team maintains multiple products? ("Team Backlog" instead of "Product Backlog")
    • Merge both products into one list. 
    • Forces us to prioritize between products, which is useful in some cases
    • One strategy: focus on one product per sprint
    • Other strategy: work on features from both products each sprint
      • Distinguish with different colours
      • Or separate horizontal swimlanes
Both are lean and agile
  • Pull scheduling systems - JIT inventory management
  • Based on continuous and empirical process improvement *
  • Emphasize responding to change over following a plan (Kanban typically allows faster response than Scrum)
Kanban doesn't prescribe prioritised backlog
  • Can choose any/none prioritisation scheme (don't need to prioritise in advance of timebox)
  • Left-most column typically fills same purpose as backlog
  • Need some kind of decision rule as to which to pull first:
    • Top item
    • Oldest item (each item needs timestamp)
    • Any item
    • 20% maintenance, 80% new features
    • Split between product A and product B
    • Red items first
Daily standups not prescribed
  • Most Kanban teams do it anyway
  • More board-oriented, focusing on bottlenecks/visible problems
  • More scalable - can have 4 teams looking at same board - not everyone needs to speak as long as focus is on bottlenecks
Burndown charts not prescribed
  • No charts prescribed but can use any you want
  • Cumulative Flow Diagram (CFD)
    • Every day, total up items in each column and stack on Y axis
    • E.g. day 4, there are 9 items == 1 Production, 1 Test, 2 Dev, and 5 Backlog
    • Plot these points every day and connect the dots
    • Vertical and horizontal arrows illustrate relationship between WIP and lead time. 
    • Horizontal arrow shows that items added to backlog on day 4 took avg 6 days to reach production
    • About half of that time was Test. 
    • Can see that limiting WIP in Test and Backlog would significantly reduce total lead time
    • Slope of dark-blue area shows velocity (i.e. number of items deployed per day). 
    • Over time we can see how higher velocity reduces lead time, while higher WIP increases lead time.
  • Most organizations want to get stuff done faster (= reduce lead time). 
    • Many fall into trap of assuming this means getting more people in or working overtime. 
    • Most effective way to get stuff done faster == smooth out flow and limit work to capacity, not add more people or work harder. 
    • CFD shows why, and increases likelihood team & management will collaborate effectively
  • Even more clear if distinguish between queuing states (such as “waiting for test”) and working states (such as “testing”). 
    • Want to absolutely minimize number of items sitting around in queues, CFD helps provide right incentives for this
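
The relationship the arrows illustrate is Little's law: average lead time = average WIP ÷ average throughput. A sketch with invented daily CFD samples:

```ruby
# Daily CFD samples (invented numbers): items in all in-progress columns,
# and items reaching "Done" each day.
wip_per_day       = [6, 8, 7, 9, 10]
completed_per_day = [1, 2, 2, 1, 2]

avg_wip    = wip_per_day.sum / wip_per_day.length.to_f              # 8.0 items
throughput = completed_per_day.sum / completed_per_day.length.to_f  # 1.6 items/day

# Little's law: lead time grows with WIP and shrinks with throughput.
avg_lead_time = avg_wip / throughput                                # 5.0 days
puts avg_lead_time
```

This is why limiting WIP (the numerator) is a cheaper lever than adding people to raise throughput.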
Scrum vs Kanban Example
  • Sprint backlog == just one part of picture
  • Why split “Dev” column into “Ongoing” and “Done”? 
    • Gives production team chance to know which items they can pull into production.
  • Why share "Dev" limit of 3 among the two sub-columns?
    • Creates excess capacity
    • Developers who could start a new item, but aren’t allowed to because of the Kanban limit.
    • Gives strong incentive to focus efforts and help get stuff into production, to clear the “Done” column and maximize flow. 
    • Nice and gradual effect – the more stuff in “Done”, the less stuff is allowed in “Ongoing” – helps the team focus on right things.
  • One piece flow
    • “perfect flow” scenario, where an item flows across the board without ever getting stuck in a queue. 
    • At every moment there is somebody working on that item
    • Can get rid of backlog and selected columns for a really short lead time
    • Cory Ladas: “Ideal work planning process should always provide dev team with best thing to work on next, no more and no less”
    • WIP limits are there to stop problems from getting out of hand - if things are flowing smoothly WIP limits aren’t really used
  • Only thing that Kanban prescribes is that work flow is visual, and WIP is limited
  • Purpose == create smooth flow through system and minimize lead time. 
  • Need to regularly bring up questions such as:
    • Which columns should we have?
      • Each column represents:
        • one workflow state
        • a queue (buffer) between two workflow states
      • Start simple and add columns as necessary
    • What should the Kanban limits be?
      • When the Kanban limit for “your” column has been reached and you don’t have anything to do, start looking for a bottleneck downstream (i.e. items piling up to the right on the board) and help fix the bottleneck. 
      • No bottleneck == Kanban limit might be too low - reason for having limit was to reduce risk of feeding bottlenecks downstream. 
      • Many items sit still for a long time without being worked on == Kanban limit might be too high
      • Too low kanban limit => idle people => bad productivity 
      • Too high kanban limit => idle tasks => bad lead time

* empirical process improvement 
  • Empirical process control provides and exercises control through frequent inspection and adaptation for processes that are imperfectly defined and generate unpredictable and unrepeatable outputs
  • Requires three basic elements: 
    • Transparency: ensures all elements in a process are openly observable
    • Inspection: taking the observations enabled by transparency and critically evaluating how work flows through the process (cross-functional team)
    • Adaptation: takes insights gleaned from that inspection as basis for making incremental ongoing improvements to process