Agile Planet

June 22, 2011

Darren Hobbs

Lessons from memory

Every now and then I take a stroll down memory lane and read some old blog posts, both mine and colleagues’.

After I stop cringing at how naive and earnest we all were, I try to contrast my current opinions with those of my younger self and his young friends.

Some thoughts from tonight’s history lesson:

  • Be aware of your biases, preconceptions and beliefs. Consider how they might colour your interpretation of events.
  • Throwing away an acceptance test suite representing ten person-years of work because you want to use a different test tool is almost always a shockingly irresponsible thing to do.
  • A major high street bank based in Poole could have had an award-winning business banking website if they hadn’t (ostensibly) cancelled the project because it was going to deliver 3 months late. That was 5 years ago. It’s still not finished.
  • Technical excellence is meaningless without effective governance.
  • High performing teams are messy, loud, exhausting, chaotic fun and will still be discussed fondly whenever 3 or more members get together, even 5 years later.

by darren at 2011-06-22T22:43:44Z

Mark Levison

Scrum is Simple and Incomplete

I've heard several references recently to the Scrum Canon. I went searching and I've not been able to find it. Is it one of Ken's (et al.) three books? Ken and Jeff's most recent Scrum Guide? Does it include the ideas that Mike documented in Agile Estimating and Planning? … Is it what the majority of CSTs are teaching at this moment in time? In the end I don't think there is a Canon, and its absence doesn't matter. We can all agree on the basics: three roles (PO, Scrum Master and Team Member), four meetings (Planning, Daily Standup, Demo, Retrospective) and three artefacts (Product Backlog, Sprint Backlog, Burndown Chart). Beyond that lies the art of what makes a team truly successful. Scrum practices (in no particular order):

  • User Stories
  • Planning Poker
  • Release Planning
  • Engineering Practices
  • Cross Training
  • ….even approaches to Scaling

I teach about all of these in my CSM courses, but none of these is core to Scrum. None is required. Why not? Scrum is incomplete: it gives you enough information to get started and says you should improve from there. It's not a straitjacket, and it welcomes other ideas and practices. I get concerned when people seek a complete methodology, because complete methodologies discourage diversity, outside thought and even thinking for yourself. So practice Scrum, but don't assume it or any other toolbox has all the answers. Sample from the buffet.

by Mark Levison at 2011-06-22T17:56:35Z

Agile Quick Links #20

This edition of quick links is a bit of a potpourri:

Building an Agile Environment – Rachel Davies describes her experiences helping people model their work environments using craft materials, Playmobil characters, etc. This helps surface the annoyances, impediments and attractors. Sounds like a great workshop. Next time you ask me whether a proposed work area is Agile, I will try this.

But We Need a Database … Don't We? – Ron Jeffries tackles one way to design a database in a test-driven fashion, without actually having the database. Remember, it's not the only way; it's another tool.

Pair Design: Better together; the practice of successful creative collaboration – Stefan Klocek describes Cooper design’s approach to pairing in design. Some valuable ideas even for pure coders.

Estimating Non-Functional Requirements – Mike Cohn offers another approach for estimating the costs/tax of non-functional requirements.

Improving Names in Code – JB has a cute drawing to help us see how to evolve better names in code.

12 Tips to be a better coach – Martin Proulx gives us some things to remember when acting as a coach. My favourites:

Code Cleaning: A Refactoring Example In 50 Easy Steps – Wouter Lagerweij provides a step by step example of a refactoring he did.  He’s since re-examined using tests to drive the refactoring: Code Cleaning: How tests drive code improvements (part 1).

In From Months to Minutes (1hr presentation on InfoQ) – Dan North examines what can be done with Agile if you turn the dials to 12 (it's better than 11 :-). It's a very interesting and provocative take on how far you can take Agile. My only concern is that some will listen to this presentation and start throwing away practices because Dan didn't need them anymore. Remember, context is important; Dan's approach may not work for you.

by Mark Levison at 2011-06-22T16:06:42Z

Sammy Larbi

Auto-running shell scripts based on which directory you're in, forgetting about bundle exec but still running it

Yesterday I got sick of typing rake test and rake db:migrate and being told You have already activated rake 0.9.2, but your Gemfile requires rake 0.8.7. Consider using bundle exec. I know you should always run bundle exec, but my unconscious memory has not caught up with my conscious one on that aspect, so I always forget to run rake under bundle exec. So I wondered aloud on Twitter if I could just alias rake to bundle exec rake, but confine that setting to specific directories (with bash being my shell). Turns out, it is ...
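
For what it's worth, here is one sketch of the general idea (not necessarily the approach Sammy landed on): a small Ruby shim named rake, saved in a directory that comes earlier in PATH than the real rake, which adds bundle exec whenever the current directory has a Gemfile. The file name and location are illustrative assumptions.

#!/usr/bin/env ruby
# Hypothetical ~/bin/rake shim; assumes ~/bin is listed before the real rake in PATH.
shim_dir = File.expand_path(File.dirname(__FILE__))

if File.exist?("Gemfile")
  # A Gemfile is present, so run rake under bundler.
  exec "bundle", "exec", "rake", *ARGV
else
  # Drop the shim's own directory from PATH so we exec the real rake, not ourselves.
  ENV["PATH"] = ENV["PATH"].split(File::PATH_SEPARATOR).
    reject { |dir| File.expand_path(dir) == shim_dir }.
    join(File::PATH_SEPARATOR)
  exec "rake", *ARGV
end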

by Sammy Larbi at 2011-06-22T11:42:58Z

Simon Baker

Turning the showcase on its head

We learned in the very early days to treat everything as a PR opportunity with the customer. So the showcase is a big thing for us. We run showcases every Tuesday. We prepare the demo environment with real-world data, put together an entertaining narrativ...

2011-06-22T10:38:43Z

Less is more, more or less

CSS! Just the name elicits groans from the backend developer bench. Googling “css sucks” brings back a wealth of comedy in about 10,100,000 results. Cascading as in a neutron fission reaction. Style as in every project does it differently. S...

2011-06-22T10:06:33Z

June 21, 2011

James Shore

Let's Play TDD #116: You Gotta Know What You're Doing

21 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-21T08:01:00Z

June 20, 2011

Mark Levison

NeuroAgile Quick Links #1

For some time I've been publishing Agile Quick Links: links to articles that Agile folk will be interested in. Now I'm starting NeuroAgile Quick Links; these will be references to articles, or summaries of papers, that I think are of interest to members of the Agile community.

The Conversation is Over. Long Live the Conversation – reacting to an article about the use of Twitter-like tools in high school, David Rock examines how our interactions through Facebook, Twitter, etc. are reducing our empathy and damaging the art of conversation. David talks about the silence in cubicle mazes as people ignore each other and fail to collaborate. Talk to your teammates; don't IM or Skype them. Real collaboration happens through face-to-face conversation, not over a computer.

When We're Cowed by the Crowd and The Web and the Wisdom of Crowds – we're all familiar with the Wisdom of Crowds (James Surowiecki) and the idea that a diverse group of people can make some very accurate estimates/guesses. In these articles, Jonah Lehrer describes:

The scientists then gave their subjects access to the guesses of the other members of the group. As a result, they were able to adjust their subsequent estimates based on the feedback of the crowd. The results were depressing. All of a sudden, the range of guesses dramatically narrowed; people were mindlessly imitating each other. Instead of cancelling out their errors, they ended up magnifying their biases, which is why each round led to worse guesses. Although these subjects were far more confident that they were right—it’s reassuring to know what other people think—this confidence was misplaced.

I wonder what effect this has in planning poker, where we get more information than just the raw numbers; we also hear the ideas behind the numbers.

“That's the Way We (Used to) Do Things Around Here” (strategy+business – free registration required) – a long article on understanding how change works and the brain's involvement. If you're helping to organize change (e.g. coaches), this article is well worth reading.

Happiness on the Job is tied to Autonomy (see: Happiness: The Neglected Role of Job Design). In addition the study notes:

The study also shows that performance-related pay, one widely-used management tenet of high performance work systems, makes no difference to satisfaction or stress. Performance-related pay includes bonuses given to City workers and other employees

In the same vein, It's All About Control: you either need power or choice (i.e. autonomy). This is a recurring theme: autonomy is very important to us.

by Mark Levison at 2011-06-20T15:11:07Z

June 16, 2011

George Dinwiddie

Video interview: Overcoming Agile Obstacles

Here's another video interview recorded by Yvette Francino of SearchSoftwareQuality.com at the ADP/West 2011 Conference.

by George Dinwiddie at 2011-06-16T15:48:47Z

Simon Baker

Continuous Integration for the Last Mile

Here’s Gus talking about Continuous Integration for the Last Mile. The session looks at leveraging continuous integration techniques to deploy and operate software all the way to the end user, exploring some of the difficulties and gotchas along the...

2011-06-16T10:01:58Z

James Shore

Let's Play TDD #115: Wrapping Up the Icon Spike

16 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-16T08:01:00Z

June 15, 2011

Mark Levison

NeuroAgile

NeuroAgile – the intersection of neuroscience, cognitive science, psychology and agile/lean software development. For some time now I've had an interest in understanding how and why Agile works (see: Does Scrum Work? Hell Yes!!! Why, and Why Scrum Works??); while poorly written, it was my first attempt at articulating a deeper why.

For some time now I've been reading about neuroscience (Norman Doidge, "The Brain That Changes Itself"; John Medina, "Brain Rules: 12 Principles for Surviving and Thriving"; James Zull, "The Art of Changing the Brain"; Torkel Klingberg, "The Overflowing Brain: Information Overload and the Limits of Working Memory"; and David Rock, "Your Brain at Work"), cognitive science and psychology (mostly via well-researched blogs). What I'm learning is that these fields have a lot to tell us about people, their motivations and how they work together. I think these things are at the heart of making projects successful.

So why am I coming up with a new term? I think we need a label to help us categorize some of these great ideas and to provoke us into finding more. In my case I'm pushing it further. In the past few years Roger and I have written several articles that look at applications of neuroscience: The Science of Learning: Best Approaches for Your Brain and Multitasking Gets You There Later. Now we're crawling our way towards at least a mini-book. In the next few weeks I will start publishing a series of interesting links, and sometime early in the fall we will publish a paper on Creativity.

by Mark Levison at 2011-06-15T02:32:56Z

June 14, 2011

Willem van den Ende

Why offshoring government IT is stupid

I'm live blogging from the UK government IT meeting on agile at the SPA conference. Nothing beats a blogpost fired off in anger ;)

Here are some tweets I would like to elaborate on:

Me: "Offshoring government IT is stupid"

Eric Lefevre (@elefevre): “why? on paper, if there is any way to commoditize software for government, it would sound like a win, wouldn’t it?”

Me: "Software is executable knowledge. Offshoring software is giving a potential enemy exclusive possession of that knowledge."

Imagine, if you will, the UK code breakers in the 1920s outsourcing some of their work to Germany, because labour there in the crisis after the First World War was cheap. Manual computations, but nevertheless. What would have happened by 1938?

This is a bit of a straw man, of course. However, the problem with outsourcing and/or offshoring is that you put executable knowledge, plus the expertise to grow that knowledge and build new software from it, in the hands of one party. That leads to a dependency on that party. As long as things go fine, that is swell. When things go bad, not so much.

If you think that documentation is going to help: it might, if you are lucky. Most of the knowledge that goes into software is tacit, and very difficult and costly to transfer.

So if you are a government agency (or a company for that matter), don’t be stupid. Keep some good technical people on permanent staff, who know how your crucial systems work, and can make modifications. If you add some external staff to that to be flexible in capacity or bring in specialist knowledge, great!

Just don’t forget to let them work side by side with your permanent staff on a daily basis, so your organisation does not leak knowledge that it can’t recover on its own.

Just remember, knowledge is power. If you’re not careful, you will learn that lesson the hard way.

by Willem at 2011-06-14T19:24:50Z

George Dinwiddie

Podcast: Acceptance Test Driven Development and the 3 Amigos

Also while in Las Vegas for the ADP/West Conference, Bob Payne and I sat in the Agile Philanthropy booth and recorded a podcast on Acceptance Test Driven Development and the 3 Amigos. This is the latest in a series of Tips and Advice podcasts that Bob and I have done.

by George Dinwiddie at 2011-06-14T14:53:13Z

James Shore

Let's Play TDD #114: Icon Challenges

14 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-14T08:01:00Z

June 13, 2011

George Dinwiddie

Video Interview: On Cultural Change

At the SQE Agile Development Practices (ADP/West) Conference last week in Las Vegas, Yvette Francino interviewed me on the topic of cultural change.  Here is the video of that interview.

by George Dinwiddie at 2011-06-13T19:56:25Z

Sammy Larbi

I need your help with Todoxy

I have a job where in any given week I might be working on any one of 30 projects spread across a half dozen product lines. I freelance, sometimes with a single company, but I also work a lot through another company for several different customers. I have my personal projects too, of course, and then there's non-work type things like getting a haircut, building a garden, or changing the air filters around the house. Problem My list of things to do is too complex for me to keep in my head. It doesn't fit in a typical list because ...

by Sammy Larbi at 2011-06-13T00:22:41Z

June 11, 2011

George Dinwiddie

Don’t You Have to LOGIN first?

In my previous post, Avoiding Iteration Zero, I suggested starting with “the one obvious thing that needs to be done? (Hint: it’s not ‘login.’)” As Jon Kern has recently mentioned, this same topic has come up elsewhere. I was also in that list discussion.

Jon is, of course, right in a narrow sense. You can start with login, if you want. You can also start with an Iteration Zero. (Or an Iteration Minus One, as I've seen one organization do when their list of pre-planning tasks outgrew one iteration.) I've observed that you can generally get better software, faster, if you start somewhere else.

There are some very good reasons for this. For one thing, it’s unlikely that you’ll find much business value in delivering a system that allows people to login, but do nothing else. Unless you’re writing a login package for others to integrate into their code, something else is the central idea of the system. Surely there are some things about that central idea that we don’t yet know in detail. By contrast, “login” is fairly widely known and understood (even if sometimes implemented poorly). Even if we decide we can’t deliver the system without “login,” we can learn important things earlier if we work on the central idea first. (And I can imagine, in a pinch, having a usable system where access was controlled by putting the computer in a secure room where only those with authority could touch it.)

Learning important information sooner is one of the subtle but powerful benefits of working in an Agile fashion. It can help the business make better decisions about priority, or even about the direction of the effort. That’s one of the tenets of the Lean Startup focus, but it works for established organizations, too. There’s almost always something new about what we’re doing, or we wouldn’t be spending money doing it.

This pattern of accelerating learning is powerful for businesses that use it. But it’s also powerful for programmers working on the code. As we build our application, we can learn about better ways to structure the code, and we can use that information when we’re writing further code. I’ve found this to allow me to do a better job when developing code in an iterative-incremental fashion than when creating an up-front design and then following it.

Let’s look in more detail at Jon’s suggested beginning:

For example, to start, you can simply check that the response has "Login Succeeded" (or "Login Failed" for testing that a bogus login attempt does indeed fail). …simply:

Scenario: Successful Login
    When I login as "admin"
    Then I should be logged in
Scenario: Failed Login
    When I login as "asdf56ghasdkfh"
    Then I should not be logged in

And your steps would hide the logic for filling in the login form and checking for success:

Given /^I login as "([^"]*)"$/ do |login|
  @login_name = login
  visit login_path
  fill_in "login", :with => login
  fill_in "password", :with => "password"
  click_button "login_button"
end
Then /^I should be logged in$/ do
  response.should contain "Login Succeeded"
end
Then /^I should not be logged in$/ do
  response.should contain "Login Failed. Please try again."
end

Given this simple start, where will we put our access control logic? Most likely in our Controller of MVC or Presenter of MVP. Then we’ll decide to which page we direct the user. In effect, we’re protecting the “Login Succeeded” page from the unauthenticated. Is that page the asset whose access we want to control?

I once worked with a client who had an application that allowed authorized users to access the documents to which they were authorized. It was not an Agile shop, and they were not working in short iterations using small stories. They had, however, been delivering new functionality in each release for a number of years. And they were very thorough programmers, carefully checking that they were not delivering bugs. Having started with the idea of access control, they had ended up with their authorization check in the Action class of their Struts app. Over time, there was a need for further authorization concerning some documents. Only an authorized user could see any document, but some documents were only available to a subset of users. The Action class checked that the user was logged in, retrieved the list of documents, and then filtered out the ones that particular user was not allowed to see.

Do you see any problem with this scheme?

It works well enough, but it causes some maintenance headaches. Even though there was not a link to the unauthorized documents, there’s nothing to stop a savvy user from guessing the URL and retrieving it anyway–so there had to be an authorization check there, too. And there were multiple lists of documents, so each one had to make this check. Sometimes when the authorization logic changed, one of the Action classes would be overlooked. Extracting this authorization filtering code to a method would reduce much of this duplication and make that particular mistake less likely, but some lists had different filtering rules. Also, the Action classes would retain the duplication of the order of steps that each one had to apply in order to properly protect the assets. (That’s a more subtle flavor of duplication that many overlook.) And there was a requirement to show some documents in the list to people who were not yet authorized to retrieve them.

It was pretty complicated and error-prone. Enforcing the security at the outer layer, instead of directly around the protected assets, resulted in more places and more variability in the implementation. I’m sure that you and I and Jon Kern would not end up with such an error-prone design. Jon is an excellent software designer–much better than I am. He can explain why you’d want to design in a particular way and describe the reasons why. I, on the other hand, have trained myself to be sensitive to duplication. Test-driving code with an aversion to duplication would lead me to a different design.

I helped these programmers implement a new feature, one that provided government-mandated access to certain documents even if the user wasn't logged in at all. Since this required a change to all of these access controls, it provided the impetus to change the design. While adding this new requirement using Test Driven Design, I also refactored code to remove the duplication. As I did so, I pushed the access check lower in the code, passing along an "AuthenticationToken" object that could be treated as a black box by intervening layers. Ultimately the access check was made in the database queries themselves, ensuring by simple inspection that no path in the user interface could lead to a condition that allowed unauthorized access.
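
To make the shape of that design concrete, here is a minimal Ruby sketch of the idea: the authorization predicate lives in the query itself, and the token is passed down as a black box. All of the names and the schema are illustrative assumptions, not the client's actual code (which was Java/Struts).

# The token is created at login and passed through intervening layers unopened.
AuthenticationToken = Struct.new(:user_id)

class DocumentRepository
  def initialize(database)
    @database = database  # assumed helper that runs parameterized SQL and returns rows
  end

  # Every way of listing documents joins against the grants table,
  # so no UI path can show a document the user may not see.
  def visible_documents(token)
    @database.rows(
      "SELECT d.* FROM documents d
         JOIN document_grants g ON g.document_id = d.id
        WHERE g.user_id = ?", token.user_id)
  end

  # Retrieval applies the same predicate, so guessing a URL gains nothing.
  def find_document(id, token)
    @database.rows(
      "SELECT d.* FROM documents d
         JOIN document_grants g ON g.document_id = d.id
        WHERE d.id = ? AND g.user_id = ?", id, token.user_id).first
  end
end

With the check in one place, right next to the asset, a later change like the government-mandated anonymous access becomes a change to these queries rather than a hunt through every Action class.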

You’re probably a good programmer, too. I’m sure you wouldn’t make a mistake. Linda Rising recently told me that 80% of people think they’re above average, so you must be good. As it happens, the programmers who built this system were pretty good, but they weren’t experienced in the technology. This J2EE system was the first Java code they’d ever written. These programmers generally succeeded by being very careful to check all the paths. Only very occasionally did they miss one and allow a bug to escape to User Integration Testing.

It's my contention that by implementing the primary functionality first, in this case the listing and retrieval of documents, we'll be more likely to naturally put the access control directly around the assets that matter. In other situations, there may be other considerations for the access control that would be different from the application I described. In any case, if our primary story is

When I do whatever my application does
Then I get the result of doing so

in all of the varieties of “do,” then the proper place for access control will be obvious for more than 80% of us when we get to the story

Given I am not an authorized user
When I do whatever my application does
Then I am denied the result of doing so

While it is certainly true that we can get to the desired design anyway, as both Jon’s blog and my story above illustrate, it’s easier, more direct, and more likely that we’ll do so if we start with our primary business functionality before the LOGIN story.

It’s not a law of the universe, but it’s a good heuristic: The Login story is not the place to begin.

by George Dinwiddie at 2011-06-11T23:05:14Z

June 10, 2011

Simon Baker

Energized Work book club

From my own experience, many things I have learned started with reading something in a book. So in November 2009 I floated the idea of starting a book club at Energized Work. My motivation came from my failure to explain concepts based on my understanding...

2011-06-10T14:54:45Z

June 09, 2011

James Shore

Let's Play TDD #113: OverlayLayout?

09 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-09T08:01:00Z

June 07, 2011

Sammy Larbi

Tokenized input autocomplete with rails3-jquery-autocomplete

You can split an input on a specific string to get the same field to autocomplete multiple times. One example where you might want to do this is in the case of tags: you have a field that should contain multiple tags, and you want to do an autocomplete after every comma + space (', '). It's not documented in the rails3-jquery-autocomplete README, but all you need to do is use the 'data-delimiter' attribute like this:

<%= f.label :category_tags %>
<%= f.autocomplete_field :category_tags, autocomplete_category_tag_title_library_assets_path, 'data-delimiter' => ', ' %>

Hopefully this helps you save a little time ...

by Sammy Larbi at 2011-06-07T11:12:09Z

James Shore

Let's Play TDD #112: It's in the (Grid)Bag

07 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-07T08:01:00Z

June 06, 2011

Willem van den Ende

Maximum Marketable Featureset

Have you ever thought about the Maximum Marketable Featureset for your product? You may be familiar with a Minimum Marketable Featureset, which is the minimal set of features where someone wants to buy your product.

The underlying assumption is that adding more features after the minimal set will make your product more valuable.

At some point adding features makes a product less valuable to users. Fewer features are easier to understand, both for users and for developers, which in turn can help improve the value of each feature.

So next time you build a product, think about it. When will more features diminish the value of your product to its users? Why not think about it on your current one? Are you there yet? How far still to go? Past the point already? Could I make this post more valuable by asking fewer questions or using fewer words?

by Willem at 2011-06-06T14:26:53Z

June 03, 2011

James Shore

Canonical (Ubuntu) Hiring Test Automaters

03 Jun 2011 James Shore/Blog

I just got an email from Allison Randal, Technical Architect of Ubuntu. They're looking for people to build automated tests for Ubuntu:

At Canonical, we're ramping up a team for automated testing across the entire OS. I'm wondering if you have contacts in Portland or elsewhere who might have recommendations, or be able to point a few qualified people our way?

Allison went on to say that they're specifically looking for people who can write automated tests, not just manual testing skills.

If you're interested, apply here. Although Allison mentioned the Portland area, I'm pretty sure any location is fine.

2011-06-03T08:00:00Z

June 02, 2011

James Shore

Let's Play TDD #111: Icon in JTextField: Go!

02 Jun 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-06-02T08:01:00Z

June 01, 2011

Mark Levison

Laptops and Crackberries in my Certified ScrumMaster Training

In my Certified ScrumMaster Training I ask students not to use their laptops and CrackBerries. Why?

Some people want to take notes using their laptops. Unfortunately, my experience shows that laptops do more harm than good. They put up a barrier between people at a table and reduce collaboration. They send a signal that this person isn't open. They even reduce interaction with the trainer. Finally, with tools like Outlook, even if you don't have web access, they're often a source of distraction. For these reasons I ask that you not use a laptop.

CrackBerries, iPhones, etc. don't put up the same barrier because the screen doesn't get in the way, but they present a different problem. If you're looking at your device, or worse, actively using it, you're telling the people around you that the distraction is more important than the class. You're saying it's more important to interact with the device than with your classmates.

Breaks are the perfect time to use your electronic devices, but the rest of the time honour us with your presence.

Come Join us for Certified ScrumMaster Training in Ottawa

by Mark Levison at 2011-06-01T16:53:02Z

May 31, 2011

Mark Levison

Relationships Made Easy

For several years I've been trying to find a Neuro-Linguistic Programming (NLP) book that provides a simple and clear explanation of what it is and how it works. With "Relationships Made Easy for the Business Professional", Dr David Fraser scores well on the first and not as well on the second.

The book's strength comes from David's practical business background, which he uses to ground his writing and examples. The book fails when it attempts to explain how NLP works.

David describes a 12-step process to help build relationships:

  1. Attention to others
  2. Attitude
  3. Self-control
  4. Wavelength
  5. Filters
  6. Connection
  7. Values
  8. Language
  9. Self-awareness
  10. Attention to yourself
  11. Balance
  12. Love

Most of the steps David describes make perfect sense and I’m trying to apply them. The book falls down when David attempts to explain why these ideas work:

Whenever you feel you have run out of mental flexibility, try some physical flexibility, such as a stretching exercise or moving around. Going for a walk when you have a problem to think about gets the left and right sides of your brain communicating. (location 1054)

While going for a walk is a great tactic when you're blocked, it has nothing to do with getting the left and right sides of the brain to communicate. They're going to do that no matter what. From David Rock's "Your Brain at Work":

Stellan Ohlsson, at the University of Illinois at Chicago, explains how when facing a new problem, people apply strategies that worked in prior experiences. This works well if a new problem is similar to an old problem. However, in many situations this is not the case, and the solution from the past gets in the way, stopping better solutions from arising. The incorrect state becomes the source of the impasse.

Handing the problem off to your unconscious by taking a walk or starting another activity is an effective approach. There are left/right brain distinctions, but they have nothing to do with "blockages". See Elkhonon Goldberg's "The New Executive Brain: Frontal Lobes in a Complex World", where he makes the claim that the left hemisphere deals with routine situations and the right with novel ones. The details of what happens, and the evidence to support the claim, run several chapters, so I won't summarize them here.

David Fraser says:

If a meeting becomes strained, try having everyone move around. Use flexibility in the physical dimension to unblock the mental processes. A "time-out" can actually speed up a meeting.

Again, this might be a good idea, but I've not seen any example of "blocked" mental processes in my reading, nor a study that explains why movement might be effective when people are interacting in a meeting. The best discussion I've seen of our interactions comes from Rock's "Your Brain at Work"; see Act III, Collaborate with Others.

Finally, Fraser says:

Develop physical flexibility to increase your mental flexibility. Learning T’ai Chi, even at a very simple level, is one effective way of doing this.

I've not found any evidence yet of a relationship between physical and mental flexibility. In fact, I'm not sure we really know how to characterize mental flexibility. The closest I can find is John Medina's Brain Rules: "Rule #1: Exercise boosts brain power." (brainrules.net). David Rock also has more on exercise and its effects on neurogenesis (growing new neurons). Finally, there is some interesting evidence around meditation and its benefits (a quiet mind), but nothing to support the claims Fraser makes.

In a nutshell, an interesting book with many good ideas. However, the flaws in the science make it difficult for me to read. I suspect I wasn't the intended audience for the book. If you don't mind these issues then you will find the book valuable.

by Mark Levison at 2011-05-31T16:18:53Z

Sammy Larbi

plupload rails plugin

Here is the beginning of a Rails 3 plugin for Plupload. Plupload lets you upload multiple files at a time and even allows drag and drop from the local file system into the browser (with Firefox and Safari). This plugin tries to make its integration with Rails 3 very simple. To install (from inside your project's directory):

rails plugin install git://github.com/codeodor/plupload-rails3.git

To use:

<%= plupload(model, method, options={:plupload_container=>'uploader'}) %>
<div id="uploader" name="uploader" style="width: 100%;"></div>

More info is available in the readme file. If it's missing a feature you'd like to see, please open ...

by Sammy Larbi at 2011-05-31T13:21:39Z

James Shore

Let's Play TDD #110: Create a Beautiful, Simple Result

31 May 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-05-31T08:01:00Z

May 26, 2011

James Shore

Certification Debate with Alistair Cockburn

26 May 2011 James Shore/In-the-News

On Tuesday, Alistair Cockburn and I debated the merits and limitations of certification in a webcast hosted by the PMI. We had an interesting and cordial discussion and the PMI has graciously put up their recording for anyone to hear.

Listen to the debate here.

2011-05-26T08:02:00Z

James Shore

Let's Play TDD #109: Zombie TDGotchi

26 May 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-05-26T08:01:00Z

May 25, 2011

George Dinwiddie

Avoiding Iteration Zero

Teams new to Agile often realize that they have a lot to do before they get their new development process at full speed. Looking at this big and unknown hill in front of them, many Agile teams choose to do an Iteration Zero (or Sprint Zero) to prepare before they start delivering regular increments of functionality. During this time, they may try to get their ducks in a row with

  • A list of features to be built
  • A release plan or timeline for those features
  • Setting up development infrastructure such as version control or continuous integration servers
  • Studying or practicing skills in new technologies they expect to use
  • … and other management, infrastructure, and technical endeavors.

They try to get all the preliminaries out of the way so they can hit the ground running full speed in Iteration One. In my experience, they’re still not ready to go full speed. These things are rarely as complete as expected after one iteration, and often aren’t quite in tune with the actual needs of the project.  The list of features will likely not be complete, but the attempt to approach completeness will dump in lots of ideas that have been given little thought. Any attempt to project into the future still has no data about how fast things can be accomplished. The infrastructure may or may not be the best for supporting the project, but it is likely that the project will now conform to the infrastructure rather than the other way around. The choice of technologies will be made speculatively rather than driven by the needs of the project. While we may do OK, we’ll have made a lot of decisions with the least amount of information we’ll have in the project lifecycle.

And we’ll have burned an iteration without producing any working software that tests our decisions.

My advice is to borrow an idea from Lean and look at the situation from the output point of view.  Ask yourself, “what would it take to start delivering?”

The initial backlog really only needs to be one item in order to start delivering.  If you've got too many unknowns, then just start with one item.  Get the business stakeholders, the programmers, the testers, and anyone else who needs to be in on the discussion (User Experience? Ops?) to talk about it.  (I call this a meeting of the Three Amigos.)  What is the one obvious thing that needs to be done? (Hint: it's not "login." Start with the main purpose of the system.) I can't imagine a situation where a project is started without any ideas at all.

Take that one thing, and slice it into thinner slices.  Decide on the examples that represent acceptance of those slices.  Some of the slices will have questions that can’t be answered.  Put those aside for the moment.  Choose the most central slice that travels through the entire concept from end to end, or as close to that as possible.  Estimate that as one team-iteration.  (Estimates don’t have to be “right.” They’re just estimates.)  Start building it.

Learn the necessary skills in the technology while accomplishing real work. Learn the parts that aid building the system, rather than developing the system according to some framework. When you don’t know how to accomplish something, or you think multiple approaches might work, do minimalistic spikes to give the information needed to make a decision.

Along the way, start slowly building your development infrastructure. Set up a local code repository. You can always migrate the code to an "official corporate" repository later.  Right now, there's not much code. Set up a simple build-and-test script so that everyone builds in the same fashion. You can always add other build targets later. If you've got time, you can set up a Continuous Integration server. Otherwise, just do it manually: check out and build into a clean workspace. Do what's needed to run the code so that you can show it working.
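
As one illustration of the kind of build-and-test script George means, here is a minimal Rakefile for a Ruby project using Test::Unit; the test layout is an assumption, and any language's equivalent (an Ant target, a Makefile, a shell script) serves the same purpose.

# Rakefile -- a single shared entry point so everyone builds and tests the same way.
require "rake/testtask"

Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.pattern = "test/**/*_test.rb"  # assumed location of the tests
end

# `rake` with no arguments runs the whole suite; a CI server can call the same target.
task :default => :test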

If you can’t accomplish this slice in one iteration, it’s probably not thin enough. Or, maybe you haven’t yet solved an essential technical problem. Or the goal isn’t yet clear enough. Figure out what impediment is most in your way, address that, and try again.

More likely, you'll get this slice done in less than an iteration length. If you get this slice done before the end of the iteration, then pull in another slice.  Estimate this as "the rest of the iteration."  Repeat as needed.  As long as you've gotten one slice done, you've got a potentially deliverable product increment.

Yes, there will still be development infrastructure to be developed. There’s no particular rush to get that done. Just keep improving it, so that it helps you get more done. Yes, there will still be technical skills to be developed. That should always be the case. Just keep experimenting and pushing your limits.

Yes, there will still be features to be added to the backlog, refined, prioritized, split into stories, and prioritized again. This should continue throughout the project. It’s part of the “steering” process. Yes, there will still be a need for projections to estimate when functionality can be released, or how much functionality can be released by a certain date. When you think you’ve got enough information about what needs to be done, then consider the initial release plan.  By then you’ll also have accumulated a little information about how fast things get done.

There will still be a lot of holes in your knowledge of what needs to be done and how fast things get done.  Don’t trust your initial release plan to be “right.”  It’s just a stick in the sand to help you judge how things are going.  Keep planning, and move the stick as needed. And as time passes, you’ll have a better and better indication of how fast the system gets developed. Even when you think the Release Plan is complete, it needs to be continually reviewed and adjusted. Since it’s never done until the release, there’s no particular hurry for a certain level of completeness.

This sort of beginning is very like the Hudson Bay Start that Johanna Rothman describes in her book, Manage It (pp. 52-53).

The Hudson Bay Start approach was originated by the Hudson Bay Company in the 1600-1700s in northeastern Canada. The Hudson Bay Company outfitted fur traders. To make sure the traders hadn’t forgotten anything they needed, they left Hudson Bay and camped just a few miles away. By making camp just a few miles away, the traders ensured they hadn’t forgotten any tools or supplies–before they abandoned civilization. With just a short start to their journey, they had a better idea about their ability to endure the winter.

There’s really no reason (other than “that’s not the way we do things around here”) that this can’t work for the start of any team/project. It’s a great way of learning the right stuff for the current situation while also making a bit of progress. I use this technique in my Agile in Six Months transition plan.

by George Dinwiddie at 2011-05-25T16:42:37Z

James Shore

Let's Play TDD #108: Wishing for First-Class Functions

25 May 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-05-25T08:01:00Z

May 22, 2011

George Dinwiddie

On Models

Brian Marick has written a tantalizing post, The Stance of Reaction. In it he says

At this point, Sr. Naveira has at least four reasonable choices. He can step forward, thereby “asking” Srta. Muzzopappa to move backwards. He can step backwards (typically reversing the sweep). He can turn toward her so that they’re facing, causing her to “collect” (bring her feet together). He can take no step at all, but turn his chest to the left, causing her to begin moving counterclockwise around him.

The important thing about Argentine tango (danced recreationally, though perhaps not in a performance like this) is that she does not know which choice he’ll make. She must be balanced in a stance—physical and mental—that allows her to react gracefully to any legitimate move.

I truly hope he’ll expand on this, and how he applies it to the business of software development. I have great admiration for Brian’s intellect and inventiveness. I suspect what he says will help me work on some half-baked ideas I have about effective TDD keeping the code in a state in which it’s prepared to go in any direction, and about Pair Programming being most effective when we work to increase the possibilities open to our partner (a la Improv acting).

So far, Brian seems to be describing the concept of Reaction by saying what it is not–that it is not a reduction to a model. His description of this dichotomy does not match my understanding of how we use models. Online conversation has not clarified my understanding of his description. I suspect that the difficulty stems from us looking at the situation using different models. The appropriate next step seems (to me) to clarify my own model of how models work and are useful to me.

I wholeheartedly agree with Brian’s assertion that we have “a readiness to reduce (abstract away, simplify, generalize) the world’s complexity into something simpler that you can work with and think about.” This is not just true of knowledge workers, but of people in general. The assault of the universe on our senses is far too much to observe without doing so. As Brian says, “By default, we mostly act unconsciously, with the unconscious mind forwarding only anomalies to the rational part of the mind.” In other words, when the world around us is appearing to conform to the models that are most ingrained within us, we can react according to those models without making a conscious choice.

Other models are less ingrained, and we apply them more consciously.  Brian’s examples of the “homo economicus” and the “V-model of the Systems Engineering Process” models are good illustrations of models we’ve built particularly to help us with situations that our subconscious doesn’t handle.

Is there a fundamental difference between our unconscious and conscious models? As far as I can tell, it's only in our awareness of them.  The study Brian uses to illustrate the power of the unconscious mind describes a detectable awareness of the difference between a cookie sheet and a chess board.  The unconscious model accounts for putting a cookie sheet into the oven, but does not account for putting a chess board into the oven.  Certainly both cookie sheets and chess boards are too modern to be innate to the species. Instead, the pattern of putting a cookie sheet into the oven is one to which we've become accustomed by familiarity. Or, at least, most of us have. Trying the same experiment with people unfamiliar with ovens, cookie sheets, and chess boards would assuredly give different results.

Reductionist models are the way that both the conscious and unconscious mind deals with the sensory overload.  But the models we use can lead us astray.  As Brian says,

So sticky is the “homo economicus” reduction that economists face the occupational hazard of treating it as the only model of human behavior, which can make them say awfully silly things. Similarly, elegant and simple software development models like the V-model are so elegant, so simple, so pleasingly linear that their failure to work with real human behavior and limitations is commonly seen as the fault of the people, not the model.

H.L. Mencken put it this way, “For every complex problem, there is a solution that is simple, neat, and wrong.”  We are more likely to jump to an easy answer when we only know one model that fits the situation, or when we fit a model that’s so deep in our unconscious that we don’t notice it’s there.

If we are to avoid the trap of being limited by an inappropriate model, then we need to know more than one model.  And for the unconscious models, we need to find ways to rethink the situation according to conscious models.

In fact, this latter use is the value I find in the Myers Briggs Type Indicator model that so irks Brian. Brian would like for people to discard MBTI because it’s not “true” and doesn’t predict peoples’ behavior well. I, on the other hand, don’t expect either of these things from MBTI. It only aspires to be a model of preference, not behavior. While it often gets misapplied for things such as predicting what career path a person should take, that’s not, as far as I know, an intended use. And I don’t believe it’s “true” in the sense of corresponding to entities in the human psyche. Instead, it’s “true” in the sense that it corresponds to human observations, much as putting a cookie sheet in the oven does. I’m not even convinced that the preferences indicated by the MBTI are constant. I wonder if my preferences change slowly over time, and if they sometimes change rapidly and suddenly in response to the situation.

Whatever the faults of the MBTI as a model for people, it helps me to rethink the application of my unconscious models. As an introvert, my unconscious model might label an extravert as pushy and self-absorbed. Re-evaluating the situation in the light of MBTI, however, may suggest to me that they’re merely thinking out loud. As an NT (iNtuitive-Thinker), it’s easy for me to jump to the conclusion that the solution in my head is both correct and obvious. Realizing my preferences helps me to realize that it is unlikely to be obvious to others, and that I should test its correctness with some data.

We would do best to always be wary of our implicit trust in our models, especially our deepest and most closely held ones. These are the models that allow us to act immediately and "instinctively." But it is the model we don't question that will most likely get us into trouble. From time to time, especially when things are not playing out to our liking, we can view the same situation in light of multiple models and see if they give us different insights.

by George Dinwiddie at 2011-05-22T19:46:12Z

May 19, 2011

James Shore

Let's Play TDD #107: The Hidden Listener

19 May 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-05-19T08:01:00Z

May 18, 2011

George Dinwiddie

The Carrying Cost of Code

Michael Feathers has just written a post on The Carrying-Cost of Code: Taking Lean Seriously.  He says,

No, to me, code is inventory.  It is stuff lying around and it has substantial cost of ownership. It might do us good to consider what we can do to minimize it.

I’m not sure I can see the analogy of code that’s in production to inventory.  Code that hasn’t shipped, yes.

But all code is a liability, I think.  When code is in production, then it’s offset by the asset that is the functionality.  Whether or not the net is positive is another question.

There’s no doubt to me that code, whether in production or not, has carrying costs that are larger than generally realized.  Perhaps it’s a depreciating capital expense?

Carrying costs are larger than we think. There’s competitive advantage for companies that recognize this.

It’s something that takes up space.  It takes maintenance.  It takes attention.  It does have a substantial cost of ownership–larger than we think.

The analogies may be failing me, but I think Michael’s sentiment is correct.

by George Dinwiddie at 2011-05-18T01:07:32Z

May 17, 2011

James Shore

Let's Play TDD #106: Back from the Dead

17 May 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

2011-05-17T08:01:00Z

May 15, 2011

James Shore

Wanted: Your Certification Experiences and Perspectives

15 May 2011 James Shore/Blog

On May 24th, I'll be participating in a PMI-sponsored debate with Alistair Cockburn about the merits and limits of certification. Alistair is someone I have great respect for--he's responsible for introducing me to the Agile community--and we fall on opposite sides of the certification debate. Alistair is heading up the ICAgile certification program and I... well, I've had strong words to say about certifications.

I want to enter this discussion with a clear understanding of the plusses and minuses of certification, so I'd love to hear your arguments--and particularly experiences--for and against certification. Let me know what you think!

PS: I haven't been able to find a link to the PMI event, but it's on May 24th and you can see it on the Agile Community of Practice calendar.

2011-05-15T08:00:00Z

May 12, 2011

Mark Levison

Agile Gurus or Thought Leaders?

In the past few months I've seen the following question several times: "Who are the Agile/Scrum gurus or thought leaders?" The urge to ask the question is good but misplaced. I assume it comes from people who're new to Agile and want to know where to get good ideas. Inevitably people reply with long lists of people.

There is just one problem: the whole concept of thought leaders is alien to Agile thinking. We promote the value of cross-functional teams and always assume that even the least experienced person has a contribution to make, even if it's asking a question.

I learn from people in spite of their names. I've learned things from Ron, Alistair and other well-known people, but I've also learned from lesser-known people. When you start paying attention to names, you narrow your thinking. Radical suggestion: tell us which non-guru you've learned something from in the last week (in my case Charles, Heather and Steve). Read widely and ask if the ideas fit with your understanding of Scrum and Agile. When you encounter a new idea, go back to the Agile Manifesto and its accompanying Principles. Ask yourself: will this new idea help you deliver high-quality software and delight your customer sooner?

Frankly, I focus most of my reading and thinking outside the Agile community now. My personal energy is focused on understanding people through the lens of Psychology, Cognitive Psychology and Neuroscience.

Let's stop gazing at our navels.

If you must have a guru look up Brian Marick (see: Artisanal Retro-Futurism crossed with Team-Scale Anarcho-Syndicalism) he will be happy to have a few followers :-)

by Mark Levison at 2011-05-12T23:33:17Z

Mark Levison

Agile Quick Links #19

One Reason Time Forecasts are so Inaccurate – Mark Graybill – talks about the cognitive issues behind our inability to make accurate time forecasts. Hint: your attempts to estimate in hours/days will never get significantly better.

How We Determine Product Success – John Ciancutti – explains how Netflix finds ways to fail cheaply and fast. Think what would happen if you could run many small experiments to test the effects of your changes. That's just what the folks at Netflix have done.

StrategicPlay: Make Systems Thinking Tangible the Playful Way – Olaf Lewitz – shows how to use Lego to help team members build a model of their system (i.e. organisation, product, team, …) and understand how they see the system.

How To Set Smart Daily Goals – Jocelyn K. Glei – has some good tips to setting daily goals and improving focus.

The Power of Personas in Exploratory Testing – Janet Gregory – explains how to use personas to put you in a different frame of mind than you normally use when doing exploratory testing.

by Mark Levison at 2011-05-12T03:06:22Z

May 11, 2011

George Dinwiddie

Simplicity and Perspective

Everything should be made as simple as possible, but no simpler. — Albert Einstein

Dave Rooney recently bemoaned on Twitter how complicated people make things, pointing in particular to a thread on the scrumdevelopment yahoogroup.  It's a thread that started with a question about a team wanting to adjust the Sprint Backlog in-sprint when something changed about their capacity to complete the work.  From there it spawned a long discussion about various ways to estimate the work and commit to it.

To me, most of these approaches to estimation are more complicated than is necessary.  Some go into detailed calculations that are far more complicated than what most teams do.  I could tell you a really simple technique, but I suspect most teams aren’t ready for extreme simplicity, yet.

That’s OK with me.  If they spend a bit more time and effort figuring out what they can do in the next sprint, that’s probably not a big deal.  And as they notice they’re not getting much value from the time they spend estimating, they’ll want to do something about it.  At that time, it will be important enough to be the next issue to tackle.

One of the proponents of an especially involved calculation said

The other thing I may have not mentioned previously is that in this company, this senior guy (an American) does not like it when people disagree with him and is known for having got people fired for not agreeing with him…This is the reason why everyone is careful when they are in planning poker, which, for this reason, does not work as intended as an open discussion opportunity…

Oh, dear! This estimation calculation is a technical solution to a people problem.  While it may get them through the planning meeting with a selection of stories to be implemented, it doesn’t deal with any of the myriad of problems caused by such a bully.

When one prima donna throws his weight around and intimidates others, you'll never have a Real Team. People can't rely on each other. They can't share their thoughts, ideas, and misgivings. Agile development is a team sport, and such behavior is the death of teamwork.

Let’s face it, if you’ve got fundamental problems like this, then the estimation strategy doesn’t amount to a hill of beans.

by George Dinwiddie at 2011-05-11T19:55:48Z

May 10, 2011

Jimmy Nilsson

Developer Chronicle: Too many chefs spoil the code

Let's start with a fairly common question:

"Ouch, we're so behind with the project, what on earth shall we do?"

A response that is just as common is:

"Add more resources! Increase head count as much, and as fast, as possible!"

Honestly, how many people have responded so spontaneously?

Over 35 years ago Frederick Brooks wrote the book "The Mythical Man-Month", where he shows how to calculate how much the delay increases when one more person is added to a project that is already late. How is it, then, that so many think it'll help?
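Part of Brooks's argument is that the coordination burden grows much faster than head count: the number of pairwise communication paths in a team of n people is n(n-1)/2. The tiny sketch below (mine, not from the article) just makes that growth visible.

```python
# Illustration of the intercommunication argument from The Mythical Man-Month:
# pairwise communication paths grow as n(n-1)/2, so coordination overhead
# grows much faster than head count. (Sketch added for illustration only.)

def communication_paths(team_size: int) -> int:
    """Number of distinct pairs of people who may need to coordinate."""
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} people -> {communication_paths(n):>3} communication paths")
# 3 people have 3 paths; 20 people have 190. Adding people multiplies the
# coordination work, which is one reason a late project gets later.
```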

What is even worse, apart from the delay being likely to increase, is that projects with too many people involved tend to produce a large code base. That is to be expected, almost as a law of nature. A code base can of course grow too large for many other reasons, but this is one important factor.

Each developer would naturally like to be productive and, just like the others, add X number of lines a day. So the growth of the code base size is directly related to the number of developers.

Even if I am allergic to projects that become overcrowded over time, there's something I dislike even more. That is projects that start out overpopulated on day one. On top of the problem of an overly large code base, there is the additional problem of X people waiting for something to do, which costs a lot and achieves nothing. Pretty soon everyone starts to create their own architecture. Similarly, requirements are created wildly on speculation. The project is simply in a very bad position from the outset.

Why does this happen? I think one reason could be that many people see software development as a keyboard-intensive task. Sure, it never hurts to be fast on the keyboard, but that matters mostly so the typing does not get in the way of the thinking and the mental models.

If you are twice as fast at the keyboard as your colleague is, are you twice as good as your colleague?

Perhaps, but if you believe the quote attributed to Bill Gates, that a super programmer is worth 10,000 times the salary of a mediocre one, then your doubled keyboard speed is a very small advantage. Other things are probably far more important.

When I think about it, we have a saying for this phenomenon; the adage is probably much older than Brooks's book and it goes: "too many chefs spoil the broth." So avoid throwing loads more head count at the problem, at all costs.

(First published in Swedish here.)

2011-05-10T20:00:00Z

May 08, 2011

George Dinwiddie

It’s Only A Model

We use models to help us simplify the situations we’re viewing, so we can reason about them more easily.  I’ve often found this to be enormously helpful.  It’s important, though, to remember that this is only a model.  We can use a model for understanding, and even for making predictions.

We cannot substitute the model for the thing that it is modeling, though.  The map is not the territory.  When we use a model in contexts where it doesn’t apply, it’s likely to lead us astray.  Similarly, when we mistake an illustration of the model for the model itself, we may make inferences that the model doesn’t support.

For example, a couple of my friends have recently tweeted complaints about the Satir Change Model in response to such misuses. I find Virginia Satir’s model extremely useful, and would like to dissociate it from these misuses.

Bas Vodde said,

“I come to dislike the Satir change curve, it seems to include the assumption that every change is good.”

In what approximates a discussion on Twitter, it seems that the root of his problem is that diagrams illustrating the Satir Change Model commonly show a new status quo with “higher performance” than the old status quo. Such an illustration makes sense if you’re talking about trying to introduce a change to an organization. Few people introduce change intended to decrease performance.

If the Foreign Element is, instead of a new business process, the loss of an arm, you might well expect a person’s performance to go through the same plummeting chaos before they find a Transforming Idea that allows them to reach a New Status Quo.  You might also expect that the performance level of the New Status Quo will be less than that of the Old Status Quo.  There are things you can do with two arms that you can’t do with one.  And it’s unlikely that someone would choose losing an arm as a strategy to increase performance.  Bas is right that not all change is good–even when it’s well-intentioned.

There’s also the question as to what is being measured as “performance.” This is usually a conceptual illustration rather than some particular measure or estimate of performance.

Brian Marick said,

“Dammit, some people change because it’s fun. Not everyone changing can be modeled by a dysfunctional family! #satir #rant #overabstraction”

This statement was apparently a reaction to another tweet which said,

“Change cannot occur until the pain of the status quo is greater than fear of change. In work and in life.”

I’m not sure what connected this to the Satir Change Model for Brian, other than it has the words “change” and “status quo.”  I suppose the statement might even be generally true, but I agree with Brian that it mis-characterizes many situations.  For the vast majority of changes that we freely choose, there may be no fear at all, and the “pain” might be better termed “boredom.”  This would still technically fit the statement above. Somehow it sounds very different when you say “Change cannot occur until the boredom of the status quo is greater than zero.”

In general, the Satir Change Model applies to change resulting from destabilizing external events.  That’s not the entire universe of change.  I agree with Brian that, if you’re talking about voluntary, joyful change, then the Satir Change Model is probably of little use.  I would look to models about learning rather than about change.

For one of the examples that Brian mentioned, that of changing to programming in Clojure, you could probably fit the model to the events. Even though this change is a self-induced Foreign Element, I suspect that Brian’s programming productivity plummeted for a time.  Then, as he began to understand Clojure and internalize how to use it, he got progressively, but inconsistently better, until he finally reached a level of competence in the new language.  (Note: I’m envisioning this based on my experiences at switching programming languages.)

But while you could apply the model, I’m not sure what benefit it will provide.  Remember, It’s Only A Model.


Thanks, Bas and Brian for the impetus to write this article. I hope that I haven’t misrepresented your views. I realize that Twitter is a difficult place for nuanced conversations.

by George Dinwiddie at 2011-05-08T00:06:58Z

May 07, 2011

George Dinwiddie

Joyful Change

Brian Marick challenged me for an expression of joyful change, especially related to software development, based on the teachings of Virginia Satir.  As discussed in my previous post, he’s come to associate the combination of “Virginia Satir” and “change” with pain and the following:

…blaming… …placating… …anger… …guilt… …stress… …resistance… …denying… …avoiding… …blocking… …deny… …avoid… …anxiousness… …vulnerability… …fear…

This post is, in part, to demonstrate to him that the work of Virginia Satir is not focused on the negative.  Mostly it’s to share, and rejoice in, the freedom we have to reach our goals.

The Five Freedoms

The freedom to see and hear what is here instead of what should be, was, or will be.

The freedom to say what one feels and thinks, instead of what one should.

The freedom to feel what one feels, instead of what one ought.

The freedom to ask for what one wants, instead of always waiting for permission.

The freedom to take risks in one’s own behalf, instead of choosing to be only “secure” and not rocking the boat.

This passage comes from Virginia Satir’s book, Making Contact.

It is the fifth freedom on which I want to focus, for this is the freedom of joyful change.  I can try new things, taking the risk that they might fail.  If they do, then I own that failure. But that risk of my failure is balanced by the likelihood of my success.  I weigh the balance and make my own choice.

I do not need the protection of others.  I do not need someone else to make the decision and force a change on me, or deny a change from me.  The choice is mine alone.

As long as failure is only a risk and not a sure thing, then there is also a chance of success.  That success is also mine.

Success often does not come with the first attempt.  Often we must fail a little.  We may try small experiments, so as not to bet the farm on a single roll of the dice.  We risk that which we can afford to lose.  If we lose, we learn from it and try with this new knowledge.  This is a strategy almost sure to produce success in time.

And what a sweet success that is.  Success not handed to us by someone else’s choice.  Success not borne of attempting the sure thing.  But success achieved by our own efforts and intellect.  Success that teaches us far more than any instant success.

And this success, and new knowledge, is all ours.  Isn’t that a joyful thing!

by George Dinwiddie at 2011-05-07T02:47:25Z

May 06, 2011

George Dinwiddie

Agile In 6 Months

How long does it take to move a team from where they are to becoming an Agile team? Of course, that depends on many things, including where they are and how badly they want to succeed at Agile. It’s reasonable to think they can make the transition in six short months.

If you’d like your team to become Agile, give me a call to find out how I can coach the team to do that for about the same cost as contracting a senior developer.  If your team has already made a transition, but you find that you’re not as effective with Agile as you’d like to be, I can coach using the same framework to help you reach that effectiveness.

by George Dinwiddie at 2011-05-06T20:33:37Z

George Dinwiddie

Splitting User Stories

I’ve written about User Stories before and made available a handout that includes a page on splitting stories; in addition to listing some splitting heuristics, it links to several lists of techniques for splitting stories.

What it doesn’t include is an even simpler way to split stories–the simplest way I’ve found yet. When doing Acceptance Test Driven Development (ATDD), we create examples that illustrate the functionality we want the software to have. Each of these examples is, potentially, a separate story. You can split a story by dividing these examples into two or more groups. It’s that simple.
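To make that concrete, here is a hypothetical sketch (the feature, names, and rules are invented, not from the handout): the same set of acceptance examples, written as executable tests, split into two smaller stories simply by grouping them.

```python
# Hypothetical illustration of the suggestion above: each acceptance example
# is potentially its own story, so splitting a story is just grouping examples.
# The discount feature, class names, and rules are invented for the example.

def discounted_total(subtotal: float, is_member: bool) -> float:
    rate = 0.10 if is_member and subtotal >= 100 else 0.0
    return round(subtotal * (1 - rate), 2)

class TestStoryOneMemberDiscount:               # first, smaller story
    def test_member_over_threshold_gets_discount(self):
        assert discounted_total(100.0, is_member=True) == 90.0

class TestStoryTwoNonMembersAndSmallOrders:     # examples deferred to a later story
    def test_non_member_pays_full_price(self):
        assert discounted_total(100.0, is_member=False) == 100.0

    def test_member_under_threshold_pays_full_price(self):
        assert discounted_total(50.0, is_member=True) == 50.0
```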

By the way, here are some other guides on splitting stories:

  • William Wake, “Twenty Ways to Split Stories”
  • J.B. Rainsberger, “Splitting stories: an example”
  • Lasse Koskela, “Ways to split user stories”
  • James Grenning “Story Weight Reduction Toolkit”

and the InfoQ article that triggered this post:

  • Dan Puckett, “How To Split User Stories“

by George Dinwiddie at 2011-05-06T19:58:41Z

May 04, 2011

Mark Levison

Does your Grocery Store Limit Work in Progress?

Shopping in our local Grocery Store (Farm Boy) on a recent Saturday made me realize what a good job they do of Limiting Work in Progress (WIP) and Self Organizing. Driving into the parking lot with my 4yr old, I was dreading the busyness of the store. When I got in, the place was packed, and trying to manoeuvre even a small cart with a 4yr old driving was quite the experience. I had expected the checkout experience to be easily 10 minutes long, an eternity even with the best behaved child.

When I entered the store there were only a few people on cash and the lines seemed to be building; by the time we were ready to check out half an hour later, all 9 cashes were open and we waited less than two minutes.

What happened? A couple of conversations with cashiers have helped me piece together the key points:

  • They all recognize that Farm Boy doesn’t make money until you’ve paid – a Customer with unpaid groceries is Work In Progress. After all, if you only have a few items and see a 10-minute lineup you might just leave, especially if the 1-8 items queue is also deep.
  • If there is a lineup, cashiers just start opening lanes until the bottleneck is cleared
  • Many of the staff can work the cash, so you’re rarely stuck waiting for another cashier
  • Staff don’t wait to be told to open cashes; they just do it
  • When demand ebbs, the cashiers close lanes and return to other work

So effectively they seem to have discovered the Theory of Constraints (TOC), and they Self Organize to eliminate the bottleneck. Their system is informal, but it shows that even without sophisticated measurements you can still observe and eliminate bottlenecks. Compare this to another large Canadian grocery chain where I often line up for 10+ minutes, just waiting to get to the front of the line. Guess which store gets more of my business?

In the software world, QA – especially when all the tests are run manually – is often the constraint we find. So we need to take steps to eliminate the bottleneck:

  • Automate your Regression Tests, so that you have minimal (if any) manual regression work to do
  • Train everyone on the team in the basics of QA.
  • When work builds up in QA, cease writing new code until the existing code has been tested and the tests automated.
  • Start writing your application using Acceptance Test Driven Development

Eventually QA stops being the bottleneck, at which point we re-examine the system to see where the bottleneck has moved. When that happens, take similar steps all over again to eliminate the next bottleneck.
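One lightweight way to make "cease writing new code until QA catches up" concrete is an explicit WIP limit on the testing column. The sketch below is only illustrative; the board model, column names, and limit value are assumptions, not something from the post.

```python
# Minimal sketch of a WIP limit on a QA column, using a deliberately simple
# board model. Column names and the limit value are illustrative assumptions.

class Board:
    def __init__(self, qa_wip_limit: int = 3):
        self.columns = {"in_dev": [], "in_qa": [], "done": []}
        self.qa_wip_limit = qa_wip_limit

    def qa_has_capacity(self) -> bool:
        return len(self.columns["in_qa"]) < self.qa_wip_limit

    def move_to_qa(self, story: str) -> None:
        # When QA is at its limit, developers swarm on testing (and on
        # automating those tests) instead of starting more new code.
        if not self.qa_has_capacity():
            raise RuntimeError("QA is at its WIP limit: help test before coding more")
        self.columns["in_qa"].append(story)

board = Board(qa_wip_limit=2)
board.move_to_qa("story-1")
board.move_to_qa("story-2")
print(board.qa_has_capacity())  # False -> the bottleneck is now visible
```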

What Bottlenecks have you observed in your grocery store? Your development process?

Another Theory of Constraints post: “Theory of Constraints in Software Development”

by Mark Levison at 2011-05-04T23:52:53Z

April 29, 2011

Mark Levison

Agile Quick Links #18

That’s the Way We (Used to) Do Things Around Here – Jeffrey Schwartz, Pablo Gaito, and Doug Lennick team up to write about understanding the mechanics of change through the lens of neuroscience. As many of you will know, this is a topic near to my heart.

A brilliant brainstorming technique – Edward Boches – explains how to improve brainstorming through silent listing. I’ve used this, and the related approach of working in pairs, on a number of occasions. Both are faster and seem to generate better results than traditional brainstorming.

People Know When First Impressions Are Accurate – apparently we know when we’re right.

The Whole Team Approach in Practice – Lisa Crispin visits Energized Work and talks about the great team and their focus on Continual Improvement and Quality.

Advantages of Limiting your WIP – Matt Wynne – from the archives, a story about the problems his team was having and how Limiting WIP helped smooth out delivery.

Basic Story Testing Styles – Charles Bradley examines a number of different ways of expressing Acceptance Criteria. He mentions five approaches – “Bullet Points; Test with…; Test that…; Given/When/Then; Specification By Example” – giving pros and cons for each. Which style(s) do you use and why?

A day in the life of an acceptance tester – Cheezy of LeanDog fame helps us see how he moved from acceptance criteria to code. A great place to start when you’re trying to wrap your head around ATDD.

by Mark Levison at 2011-04-29T19:05:40Z

April 26, 2011

Steve Freeman

Test-Driven Development and Embracing Failure

At the last London XpDay, some teams talked about their “post-XP” approach. In particular, they don’t do much Test-Driven Development because they find it’s not worth the effort. I visited one of them, Forward, and saw how they’d partitioned their system into composable actors, each of which was small enough to fit into a couple of screens of Ruby. They release new code to a single server in their farm, watching the traffic statistics that result. If it’s successful, they carefully propagate it out to the rest of the farm. If not, they pull it and try something else. In their world, the improvement in traffic statistics, the end benefit of the feature, is what they look for, not the implemented functionality.
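The release pattern described here is essentially a canary rollout driven by a business metric rather than by a test suite. A rough sketch of that shape follows; the function names, metric, and thresholds are placeholders, not Forward's actual tooling or numbers.

```python
# Sketch of "release to one server, watch the traffic, then propagate or pull".
# deploy_to(), rollback() and measure_conversion_rate() are placeholder hooks,
# and the soak time and uplift threshold are invented numbers.
import time

def canary_release(new_version, servers, deploy_to, rollback,
                   measure_conversion_rate, soak_seconds=600, min_uplift=0.02):
    canary, rest = servers[0], servers[1:]
    baseline = measure_conversion_rate(rest)     # how the farm performs today
    deploy_to(canary, new_version)
    time.sleep(soak_seconds)                     # let real traffic hit the canary
    observed = measure_conversion_rate([canary])

    # Success is an improvement in the business metric, not merely
    # "the feature works as implemented".
    if observed >= baseline * (1 + min_uplift):
        for server in rest:
            deploy_to(server, new_version)
        return "propagated"
    rollback(canary)                             # pull it and try something else
    return "pulled"
```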

I think this fits into Dave Snowden’s Cynefin framework, where he distinguishes between the ordered and unordered domains. In the ordered domain, causes lead to effects. This might be difficult to see and require an expert to interpret, but essentially we expect to see the same results when we repeat an action. In the complex, unordered domain, there is no such promise. For example, we know that flocking birds are driven by three simple rules but we can’t predict exactly where a flock will go next. Groups of people are even more complex, as conscious individuals can change the structure of a system whilst being part of it. We need different techniques for working with ordered and unordered systems, as anyone who’s tried to impose order on a gang of unruly programmers will know.

Loosely, we use rules and expert knowledge for ordered systems; the appropriate actions can be decided from outside the system. Much of the software we’re commissioned to build is about lowering the cost of expertise by encoding human decision-making. This works for, say, ticket processing, but is problematic for complex domains where the result of an action is literally unknowable. There, the best we can do to influence a system is to try probing it and be prepared to respond quickly to whatever happens. Joseph Pelrine uses the example of a house party—a good host knows when to introduce people, when to top up the drinks, and when to rescue someone from that awful bore from IT. A party where everyone is instructed to re-enact all the moves from last time is unlikely to be equally successful1. Online start-ups are another example of operating in a complex environment: the Internet. Nobody really knows what all those people will do, so the best option is to act, to ship something, and then respond as the behaviour becomes clearer.

Snowden distinguishes between “fail-safe” and “safe-fail” initiatives. We use fail-safe techniques for ordered systems because we know what’s supposed to happen and it’s more effective to get things right—we want a build system that just works. We use safe-fail techniques for unordered systems because the best we can do is to try different actions, none of which is large enough to damage the system, until we find something that takes us in the right direction—with a room full of excitable children we might try playing a video to see if it calms them down.

At the technical level, Test-Driven Development is largely fail-safe. It allows us, amongst other benefits, to develop code that just works (for multiple meanings of “work”). We take a little extra time around the writing of the code, which more than pays back within the larger development cycle. At higher levels, TDD can support safe-fail development because it lowers the cost of changing our mind later. This allows us to take an interim decision now about which small feature to implement next or which design to choose. We can afford to revisit it later when we’ve seen the result without crashing the whole project.

Continuous deployment environments such as at Forward2, on the other hand, emphasize “safe-fail”. The system is partitioned so that no individual change can damage it, and the feedback loop is tight enough that the team can detect and respond to changes very quickly. That said, even the niftiest lean start-up will have fail-safe elements too: a sustained network failure or a data breach could be the end of the company. Start-ups that fail to understand this end up teetering on the edge of disaster.

We’ve learned a lot over the last ten years about how to tune our development practices. Test-Driven Development is no more “over” than Object-Orientation is; it’s just that we understand better how to apply it. I think our early understanding was coloured by the fact that the original eXtreme Programming project, C3, was payroll, an ordered system; I don’t want my pay cheque worked out by trying some numbers and seeing who complains3. We learned to Embrace Change, that it’s a sign of a healthy development environment rather than a problem. As we’ve expanded into less predictable domains, we’re also learning to Embrace Failure.



1) this is a pretty good description of many “Best Practice” initiatives
2) Fred George has been documenting safe-fail in the organisation of his development group too; he calls it “Programmer Anarchy“
3) although I’ve seen shops that come close to this

by steve.freeman at 2011-04-26T20:40:39Z

James Shore

Rabu Schedule Visualizations: Taking the Edge Off Hard Facts

26 Apr 2011 James Shore/Blog/Rabu

Our customers want--need--to know what we're going to get done and when. As I described in my last Rabu essay, Agile teams have the ability to make those projections.

But they aren't always well-received. Most Agile teams I've met don't have the full trust of their key stakeholders, and schedule projections often aren't as rosy as those stakeholders would like. The stakeholders react by shooting the messenger.

We need to find a way to redirect stakeholders' frustration to a constructive discussion of the facts rather than an exercise in blaming the team. That's where Rabu visualizations come in. They attempt to present projection data in a way that focuses conversation on facts and options rather than blame.

Stakeholders' most common complaint is one we've all heard before: "Why will it take so long?" It's easy to interpret that question as an attack on the team's ability, especially when it's said in an angry tone of voice, but we can also create a more positive discussion by focusing on two underlying questions:

  1. What are you going to work on during that time?
  2. Is this estimate realistic?

What are you going to work on?

If you have an estimated story backlog, you can answer the question of what you're going to work on very easily: you're going to work on what's in your backlog.

You shouldn't just shove a list of stories at your stakeholders, however. That's a lot to absorb at once, and that risks making your stakeholders feel foolish, which will only increase tension. Instead, roll up the list of stories into three to seven major items. Perhaps they're minimum marketable features (MMFs); perhaps they're just groups of stories. Either way, the goal is to distill your backlog down to its essence.

For example, for the upcoming Rabu 0.2 release (going out to early adopters in early May), we have ten stories in our backlog. They're things like "Spike Raphael graphing library" and "Gracefully handle fall-back when there's insufficient historical data." In conversations with stakeholders, however, we roll that list up into three bullet points, like this:

  • Rabu 0.2: Add risk-adjusted burn-up chart
    • Visualization essay
    • Javascript charting research
    • RABU chart implementation

Given a simplified list of features, you can help focus the conversation further by providing a visualization of the work involved for each one. The diagram below shows two options we're considering for Rabu. Option #1 shows the relative size of each feature; Option #2 shows miniature timelines.

Two charts showing options for visualizing the time required by features. The first shows percentage bars representing the amount of time required; the second shows miniature timelines representing when each feature will be complete.

From the outside, programming work always looks easier than it really is. Providing detail about what you're working on and how long it takes will help stakeholders understand why there's so much work involved.
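For Option #1, the relative-size numbers fall straight out of the estimated backlog once stories are grouped. A hypothetical sketch (the per-story estimates are invented; the group names come from the Rabu 0.2 roll-up above):

```python
# Hypothetical sketch: roll an estimated backlog up into feature groups and
# compute the relative-size percentages behind a chart like Option #1.
# The per-story point estimates are invented; group names follow the example above.
backlog = {
    "Visualization essay":          [3, 2],
    "Javascript charting research": [5, 3, 2],
    "RABU chart implementation":    [8, 5, 3, 2],
}

totals = {feature: sum(points) for feature, points in backlog.items()}
grand_total = sum(totals.values())

for feature, points in totals.items():
    share = 100 * points / grand_total
    print(f"{feature:<30} {points:>3} pts  {share:5.1f}% of remaining work")
```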

Is this estimate realistic?

Even when your stakeholders understand what you're working on, they're likely to think that your projections are overblown. We need to help stakeholders trust our estimates, and a technique I've found valuable is to show a consistent history of realistic projections by using a risk-adjusted burn-up chart.

A chart showing changes over time for three pieces of information: the team's finished work, the amount of work yet to complete, and the projected completion dates.

On the left, the chart shows the cumulative work required to complete each feature. Time passes as you move from left to right and the total work completed grows. By March 22nd ("today" in this example), there's no work remaining for Feature A.

On the right, the visualization shows your historical projections as miniature timelines. They roughly correspond with the height of the "completed" line, so the highest projection is the most current. You can see that the projections are consistent and narrowing in on a mid-April delivery date.

This is a complicated chart, but I've had success with it. The "completed" line helps stakeholders see that you're making steady progress; the stacked "features" areas show when delays are the result of scope creep; and the historical projections show that your projections are reliable and becoming more precise over time.

Perhaps most importantly, this chart demonstrates to your stakeholders that you're not just pulling your schedule projections out of mid-air. Even if they don't understand the chart, they'll appreciate the sophistication of your projection techniques, especially as they see the chart evolve over time.
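One simple way to produce the projection ranges behind such a chart is to divide the remaining estimate by an optimistic and a pessimistic historical velocity. The sketch below shows that idea only; it is not Rabu's actual algorithm, and the velocities and dates are invented.

```python
# Rough sketch of turning remaining work plus velocity history into a projected
# date range, in the spirit of a risk-adjusted burn-up chart. This is not
# Rabu's actual algorithm; the velocities and dates are invented.
from datetime import date, timedelta

def projected_range(remaining_points, weekly_velocities, today):
    optimistic = max(weekly_velocities)    # best recent week
    pessimistic = min(weekly_velocities)   # worst recent week
    earliest = today + timedelta(weeks=remaining_points / optimistic)
    latest = today + timedelta(weeks=remaining_points / pessimistic)
    return earliest, latest

earliest, latest = projected_range(
    remaining_points=24,
    weekly_velocities=[6, 8, 5, 7],
    today=date(2011, 3, 22),
)
print(f"Projected delivery between {earliest} and {latest}")
# As work burns down and the velocity history grows, the range narrows --
# which is what the stacked mini-timelines on the chart make visible.
```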

No Silver Bullets

When stakeholders shoot the messenger, the root cause is lack of trust, and no visualization is going to magically solve that problem. The goal here is to redirect stakeholders' frustration into a discussion of data. In some cases, the rift between a team and their stakeholders may be too large to overcome. For most teams, though, frequent positive communication and a history of making and meeting commitments will do wonders.

I'd like to hear from you. What have you done to get your customers to love you? How do you share scheduling information with your customers? Share your experiences in the comments.

Sign Up

Team Rabu is focused on creating exemplary customer relationships. Our first product is tools and ideas for product scheduling. (Learn more at the Rabu website.) If you'd like to be one of the first to try it, provide your email address here. I promise not to sell your address or use it for other nefarious purposes.

Comments

2011-04-26T08:00:00Z

April 21, 2011

James Shore

June 8th to 10th in Oslo, Norway: NDC 2011 Presentations

21 Apr 2011 James Shore/Calendar

I'm presenting two sessions at the Norwegian Developer Conference in Oslo. This first one is "Evolutionary Design Illustrated," a detailed discussion of evolutionary design, with code samples and visualizations of how projects I've worked on have changed over time. It's on June 8th from 17:40 to 18:40.

My second session is "We Deliver Business Value," a focused session on the planning techniques required to deliver business value on a regular basis. Techniques I'll discuss include iteration planning, being "done done," velocity, slack, minimum marketable features, working on one thing at a time, and risk-adjusted burn-up charts. This session is on June 9th from 10:20 to 11:20.

Comments

2011-04-21T08:04:00Z

James Shore

May 5th to 8th in Seattle, Washington: Alt.Net Seattle 2011 Keynote

21 Apr 2011 James Shore/Calendar

I'm keynoting at the Alt.Net Seattle 2011 conference in Seattle Washington on May 7th. The conference as a whole runs from May 5th to May 8th. The working title for my session is "What We Can Learn from Richard Feynman."

Comments

2011-04-21T08:03:00Z

October 11th to 13th in Buenos Aires, Argentina: Ágiles 2011 Keynote

21 Apr 2011 James Shore/Calendar

I'm keynoting alongside Jeff Patton at the Ágiles 2011 conference (English version) in Buenos Aires from October 11th to 13th. I'll update this page with more details when the conference date is closer.

Comments

2011-04-21T08:02:00Z

Let's Play TDD #105: Negative TextField

21 Apr 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

Comments

2011-04-21T08:01:00Z

April 19, 2011

Wayne Allen

Internship with devZing

devZing is looking for an intern. If you're a hotshot with HTML/JavaScript, SEO, or AdWords, take a look.

by Wayne Allen at 2011-04-19T17:21:00Z

James Shore

Let's Play TDD #104: Cleaned Up and Ready to Move On

19 Apr 2011 James Shore/Blog/Lets-Play

The source code for this episode is available here. Visit the Let's Play archive for more episodes!

Many thanks to Danny Jones for figuring out the HD Youtube embed code.

Comments

2011-04-19T08:01:00Z

April 18, 2011

Jimmy Nilsson

Developer chronicles

I've been writing the developer chronicle for Computer Sweden (in Swedish) for a couple of months. I'm going to republish the articles as blog posts. The first one is called "Lasagna is not a good role model".

2011-04-18T16:05:00Z