Saturday, August 23, 2008

The Accidental Mind: Fascinating Book

Judging by the podcasts I am listening to, this is a very interesting book and topic about brain evolution:

Passing the Impasse: Inviting the Religious and Non-Religious to Participate Together

Note: the following is the beginning of something I want to create to help get religious and non-religious people working together, not against each other, to solve problems in the world. What I envision is a group similar to the Interfaith Youth Core of Chicago that engages people primarily in community service projects, with an optional component that allows people to discuss their differences through civil discussion without any pressure or expectations.


Throughout my life I have been fascinated by two subjects more than any other: science and religion.

I did not grow up religious. At a very young age I learned about science and critical thinking by watching Carl Sagan's Cosmos series on television. Sagan never, to my knowledge, discounted or belittled religion. He seemed to consider himself an agnostic on the ultimate question of whether a supernatural being exists. However, he never shied away from putting claims about phenomena in the physical world to the test of reason.

Looking around our world we see several major religious traditions, and countless smaller ones. Within each major tradition are sects with their own beliefs and practices. Each of these traditions seems to consider itself correct and the others wrong. Could it really be that there is one correct path and all the others are wrong? My own position is simple: I believe that no one has all the right answers. That certainly includes me. But when people practice a religion, they often form strong communities based on the ideals of love and shared values. They draw strength from the religion's rich stories, teachings, parables, and guidance for both the old and the young. Yet religions change too. In the Christian gospel accounts, Jesus continually questioned the Jewish religious leaders of his day, leading ultimately to a new religion founded upon his teachings.

Therefore, shouldn't religious people of today continue to question popular religious traditions and practices the same way Jesus did?

Now, what about science? Do the scientists have it all correct? Hardly. In fact, true science is based upon the idea that hypotheses must be tested by multiple independent people and in multiple ways before they become accepted as well established theories. Einstein's theories have been corroborated time and again by experimental evidence. Does that mean that Einstein's theories are complete? No, it does not. It means that they can predict observable behavior accurately under a given set of circumstances. But, there is more research that has to be done before his theories and quantum mechanics can be reconciled. Even then, we may never have a complete and final understanding of how nature operates.

Therefore, shouldn't scientists continually question ideas and theories the same way Einstein did?

We all know that there are many scientists who are religious, some who even hold to fairly traditional and literal interpretations of many religious texts. Francis Collins is a prime example of someone likely to be known to many. We also all know that there have been many books published recently from prominent atheists such as Richard Dawkins and Sam Harris that aim to show that science is superior to religion. So, who is correct?

Once again, neither "camp" is correct about everything. There are some things religious people are correct about, and some things that non-religious people are correct about. Life is almost always like this. One side is very seldom entirely correct and just as seldom entirely incorrect.

Personally, I am an agnostic atheist in the following sense: I do not believe that any human being has ever, or could ever, claim to know the absolute truth about the ultimate nature of reality. In this sense, I reject all claims that any particular religion is "correct" about God and therefore I reject all human conceptualizations of God. Mystics of all faiths have come to much the same conclusion. I am not just making this up. To speak about it without jargon, they essentially state that any idea or conceptualization we could hold in our finite, human brain, by definition, cannot be God. A thought is a thing. Mystics say God is not a thing.

However, I am agnostic to the proposition as to whether there is an existence higher than what we experience. I am not calling such an existence "supernatural" as is popularly portrayed. Instead, I offer a simple analogy to illustrate my point. Imagine you are a character in a 3D video game. You've been programmed with such sophistication that you can make decisions within the context of your own environment. The environment is governed by rules and laws. We might call them "game physics". How would you, as a game character, ever come to comprehend the fact that you were actually constructed by human beings in a totally different plane of existence?

Therefore, who is to say that our universe is not a construction of a force that we simply cannot, by definition, ever comprehend?

Likewise, who is to say that it is? I am not postulating that our own universe is like that. There are theologians, philosophers, and scientists who have argued both sides of that question with far greater sophistication and erudition than I have to offer. What I am saying is that both religious and non-religious people need to take a second look at what they believe before they step all over each other. Each side can learn from the other through civil discussion and participation.

The goal of this site is two-fold:

1. Allow religious people and non-religious people to form community groups aimed at helping people in their community, regardless of any creed, belief, or lack thereof.
2. Allow religious people and non-religious people to engage in civil discussion about ideas.

Passing the Impasse: Religion and Science Misunderstanding

Throughout my life I have been fascinated with two things: science and religion.

Most of my writings are about my own perceptions and thoughts regarding the interplay of science and religion. My ideas will likely resonate with both atheists and religionists in some areas and strike discord with both in others. The reason is that I agree with most atheists about the importance of critical thinking, and I agree with most religionists about the importance of community. I also disagree with most atheists about the meaning of religion, and I disagree with most religionists about the literalism of their beliefs.

I believe that religion and mythological traditions have been, for thousands of years, humankind's way of trying to explain existence and to provide a shared system of beliefs upon which to base communities. As modern science has explained more and more of the physical nature of the universe, many have come to believe that religion is outdated, useless, or unimportant.

However, I disagree with this. Instead, I believe that we can look past the literal interpretation of religious texts and seek instead to discern wisdom from their stories and parables. This does not mean we take what they say wholesale with no modification or criticism. It means that we must interpret them in the light of modern science and think about how they have and will continue to affect our culture.

I am a strong proponent of rational, critical thinking. Because of that, I think it is irrational and uncritical to suppose that the entire world can just stop paying attention to its religious traditions. Instead, we must leverage those traditions to increase the importance of critical thinking from the inside out. This does mean that traditional and literal interpretations must change. I know that many people do not want to change their views. However, I hope that my writings will help demonstrate the reasonableness of my view, and, at minimum, help my friends and family understand my position.

I do not practice any specific religious tradition, and working as a software engineer I probably spend more time than most people thinking about logical systems and the interconnectedness of parts. Because of this, I tend to view systems from multiple perspectives. I look at them from the inside, at the level of individual methods or properties. I look at them from the surface, at the level of classes, modules, or assemblies. I also look at them from a higher perspective, at the level of the complete application. I could expand on this and discuss interconnecting systems across the internet, but I'll leave it at that.

Similarly, I look at belief systems in the same light. I look at the individual practices and understandings of verses and how people interpret and apply these to their daily lives. I look at the way families and communities incorporate their shared traditions into their lives. And, I look at the way societies assemble themselves around these sub-units to form a larger whole.

Where I have the most to say, however, is about individual thought processes and interpretations. It is my hope, again, that my writings and recordings will help others to understand my perspective and see the reasonableness of my views.

Wednesday, August 6, 2008

Attention and Focus: Distracted, Loss of Critical Thinking

There is an interesting interview on Point of Inquiry by the author of a book called Distracted: The Erosion of Attention and the Coming Dark Age. Here is the link:

The book has only 3.5 stars on Amazon, and ironically one of the comments says the book is a prime example of what it condemns!

Anyway, the observations the author and host D.J. Grothe make are about the increasing number of "connections" that people have via the online, networked world, but the decreasing depth and fluency of those connections. They discuss how these days people are expected to know more about a larger number of things.

This reminds me of the concept of the long tail, described here:
The crux of the long tail is that people now buy small quantities of a much larger number of items, such that the total volume of sales across that larger number equals the total volume of sales of the most popular market leaders.

So, it seems as if people are stretching themselves to pay attention to more and more things across the long tail, feeling that a little knowledge across that horizontal scope complements the deeper knowledge they hold in a few vertical areas.
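The long-tail idea can be made concrete with a toy calculation. The numbers below are invented purely for illustration, not drawn from any real sales data:

```python
# A toy illustration of the long tail: a few "hit" items sell in large
# volumes, while many niche items each sell only a little -- yet the
# niche sales in aggregate can rival the hits.
hits = [1000, 800, 600]          # three top sellers
tail = [12] * 200                # 200 niche items, 12 sales each

head_volume = sum(hits)          # 2400
tail_volume = sum(tail)          # 2400

print(head_volume, tail_volume)  # the tail matches the head in total
```

The horizontal breadth of the tail carries as much total weight as the vertical height of the hits, which is exactly why it pulls on our attention.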

Scott Ambler has written about the "Generalizing specialist" in terms of Agile teams on his blog here:

Here is a brief excerpt:
A generalizing specialist is someone with a good grasp of how everything fits together. As a result they will typically have a greater understanding and appreciation of what their teammates are working on. They are willing to listen to and work with their teammates because they know that they’ll likely learn something new. Specialists, on the other hand, often don’t have the background to appreciate what other specialists are doing, often look down on that other work, and often aren’t as willing to cooperate. Specialists, by their very nature, can become a barrier to communication within your team.
So, are long tails and generalizing specialists intrinsically linked to the distraction topic or am I proving the author's point by just introducing tangents?

Well, I think they are related, and not necessarily bad. The decrease of attention and erosion of critical thinking is bad, no doubt, because it represents an inability to perform the top-to-bottom, in-depth analysis needed to make wise decisions.

However, the increased side-to-side attentiveness is good in that it shows that people are becoming more aware of the relationships and connections of parts into a whole.

Understanding that whole, however, requires a thorough understanding of the top-to-bottom view of, at minimum, a complete vertical slice of a process.

In software engineering, this means that a developer can grasp the overall architecture of a system by having a thorough and deep knowledge of one particular module, top-to-bottom, if and only if that module is representative of the larger system as a whole.

That is to say that the module under examination follows the same basic design structure and coding standards, design principles, and design patterns that are adhered to by the rest of the system. This helps achieve something akin to the concept of economies of scale.

On the other hand, if an engineer must contend with dozens of differing formats, approaches, spaghetti code bases, incompatible languages and toolsets, then a lot of understanding is lost or can never be gained simply due to something like the effects of diseconomies of scale.

For a team to achieve a high level of productivity and efficiency in software engineering, they have to apply design principles in accordance with shared purpose, communication, maintainability, and flexibility. Creativity is an important aspect, but it should be expressed in finding the most understandable and simple design for the problem domain, rather than the most terse, most elegant, or most unique design.

This may at first sound rigid and inflexible, but the introduction of basic, rudimentary constraints, such as the generic interface seen in REST, provides a context within which creativity can be expressed while also affording communication and connectivity to external components without cumbersome setup time or negotiation.
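As a rough sketch of what such a rudimentary constraint looks like, here is a minimal, hypothetical resource store that exposes only a few uniform operations. The class, resource names, and in-memory storage are all invented for illustration; real REST works over HTTP, but the point is the same: every resource is handled through the same small, generic interface:

```python
# A minimal sketch of REST's uniform-interface idea: every resource is
# manipulated through the same few generic operations, so clients need
# no per-resource negotiation. The in-memory store is purely illustrative.
class ResourceStore:
    def __init__(self):
        self._items = {}

    def get(self, key):
        return self._items.get(key)

    def put(self, key, value):       # create or replace at a known key
        self._items[key] = value

    def delete(self, key):
        self._items.pop(key, None)

store = ResourceStore()
store.put("/orders/1", {"status": "open"})
print(store.get("/orders/1"))        # {'status': 'open'}
store.delete("/orders/1")
print(store.get("/orders/1"))        # None
```

The constraint (only `get`, `put`, `delete`) is exactly what leaves room for creativity inside each resource while keeping the connections between components simple.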

So, to bring this back to the idea of distraction: it is important for people to use the tools they have to help them find the right path, and then focus intently on understanding and exploring that path to the fullest, top-to-bottom. At the same time, they should build in the side-to-side extension mechanisms that enable future modification and expansion without impact to the core features and functionality.

These are core principles expressed in the literature of design patterns and agile design guidelines, but they must be carefully studied and learned. They cannot be gleaned by glancing over a few web sites or thumbing through a few books. They have to be explored, experimented with, and ultimately employed in real systems in order for engineers to comprehend them fully.

Sunday, August 3, 2008

Certified Scrum Master Training from Innovel

I completed my Certified Scrum Master training Friday from Innovel. You can learn more about the training at

My instructor was Chris Doss. He was a great instructor. It was a very good class and I met a lot of great people in the class as well. I think I was one of the younger folks in the class, so I was able to learn a lot from everyone there just through "osmosis".

Of course, the CSM course is not something that makes one a well-practiced Scrum implementer. I think it's more of a way to get your feet wet. The progression for training in Scrum is diagrammed here:

The course covers a lot of ground at a high level about Agile, Lean, and Scrum, with a touch of XP. I've been conducting independent study on these subjects and documenting the links and videos I've read and watched on the ATL ALT.NET wiki at

Aside from watching those videos from Ken Schwaber and Jeff Sutherland, I read two Scrum books to prepare for the training:

Agile Software Development With Scrum:

Agile Project Management with Scrum:

I'm currently reading Scrum and XP From the Trenches from InfoQ Press by Henrik Kniberg:

Future Reading
I am now planning to read Head First Software Development from O'Reilly:

This book is not overtly about Scrum, but the authors thank Henrik Kniberg's InfoQ book, and it is heavily geared toward Scrum from what I can tell so far.

I love the Head First concept and am currently reading Head First Design Patterns to freshen up on my understanding of design patterns, which is not as good as it should be.

Monday, June 2, 2008

Scrum Link List

As I've been talking about the Scrum software development process framework and its associated terms and phrases (agile, velocity, productivity, reduced defect rate), I have collected some great links.

Getting up to Speed on Scrum

After reading the definition on Wikipedia, I recommend watching Jeff Sutherland's presentation at:

It presents real industry data and case studies that demonstrate how teams have transformed themselves by introducing agile practices using Scrum. It will definitely make you want to learn more.

Scrum Definition:
Concise definition and links:

Hirotaka Takeuchi coined the term Scrum in 1986 after studying how Japanese companies increased their productivity and competitiveness:

Hirotaka Takeuchi's book on Knowledge Management:

Presentations about Scrum:
"The Agile Enterprise: Real World Experience in Creating Agile Companies", by Jeff Sutherland:

"Scrum, et al", by Ken Schwaber, co-inventor of the Scrum method for software development, presented at Google:

"Scrum Tuning: Lessons Learned from Scrum implementation at Google", by Jeff Sutherland, co-inventor of the Scrum method for software development:

"The Roots of Scrum", by Jeff Sutherland. A presentation with lots of empirical data and examples. He discusses the roots of Scrum within Japanese businesses like Toyota, and about his own experience with medical companies using Scrum:

Audio Lessons:

These three introductions to Scrum by Ken Schwaber are excellent. They give you very clean and clear definitions of what the major aspects of Scrum are. Listen while you jog, listen while you drive, but just listen:

Part 1:
Part 2:
Part 3:

Agile Software Development with Scrum, by Ken Schwaber and Mike Beedle:

Scrum and XP from the Trenches, a 90 Page Experience Report, by Henrik Kniberg:

Previous Posts by Me:
I have written a few posts recently about Scrum and agile. You can find all those by clicking here.

Enjoy the links!

Monday, May 26, 2008

Scrum at Google: Ken Schwaber Talk

Ken Schwaber addresses Google about Google's Scrum implementation:

In this video he discusses the history of Scrum coming out of Japan, when Japanese firms were pressed to compete at higher levels. He also talks about how Google's competitor Yahoo uses Scrum.

Related Resources

Agile podcast and interviews about Scrum with Ken Schwaber

These three introductions to Scrum by Ken are excellent. They give you very clean and clear definitions of what the major aspects of Scrum are. Highly recommended.

Part 1:
Part 2:
Part 3:

Ken Schwaber's book:

Agile Project Management with Scrum, by Microsoft Press:

Scrum Masters 2: funny video

Review: Scrum and XP from the Trenches

I am reading the book Scrum and XP from the Trenches by Henrik Kniberg. It is about his personal experience successfully implementing a Scrum-based methodology in his development team of 40 people in Stockholm, Sweden.

You can read the book as a PDF on InfoQ's web site here:

Or, you can purchase the book at LuLu here:

Here are my comments on each section of the book:

Foreword by Jeff Sutherland

Jeff co-created Scrum. In his foreword he makes these points:

JSG paraphrase:

Direct excerpts:
  • Iterations must have fixed time boxes and be less than six weeks
  • Code at the end of the iteration must be tested by QA and be working properly.
  • A Scrum team must have a Product Owner and know who that person is.
  • The Product Owner must have a Product Backlog with estimates created by the team.
  • The team must have a Burndown Chart and know their velocity.
  • There must be no one outside a team interfering with the team during a Sprint.

Foreword by Mike Cohn

Mike is a founding member of the Scrum Alliance. He has written books, articles, and speaks regularly. Learn more at

Mike makes these points:

  • Scrum and XP are both practical, results-oriented approaches. They are about Getting Things Done
  • Prototype early and often rather than documenting requirements at an exhaustive level
  • Avoid excess modeling, prefer prototyping instead
  • Work on things that have potential to become part of the actual working solution
  • Refer to other resources for the theory behind Scrum. It is out there, but this book focuses on implementing Scrum successfully
  • He was skeptical going in, but was convinced shortly after starting
  • Scrum worked for the author's 40 person team
  • The team's quality was way below acceptable, but implementing Scrum solved their problems
  • He will use Scrum by default for new projects in the future unless there is a specific reason not to use it

1. Introduction

  • Scrum is not a magic bullet
  • You tailor it to your context
  • His team was fighting fires all the time
  • Quality was low
  • Overtime was up
  • Deadlines were missed
  • "Scrum" was just a buzzword to most people
  • Implemented across teams of 3 to 12 people
  • Learned about different ways of managing backlog (Excel, Jira, index cards, etc)
  • Experimented w/different sprint sizes (2 to 6 weeks)
  • Used other XP practices like pair programming
  • Used continuous integration
  • Used TDD
  • Took Ken's certification course
  • Most useful info came from real "War Stories" and case studies of people actually solving problems with Scrum

2. How we do product backlogs

  • Backlog is the heart and soul of Scrum
  • Backlog contains customer's desired features, prioritized by most critical
  • Call the backlog items User Stories or just Stories
  • ID, Name, Importance, Initial Estimate, How-To-Demo, Notes
  • 6 fields were most often used
  • Kept in an Excel spreadsheet on a shared drive with multiple editing allowed
  • Additional: Track, Components, Requestor, Bug Tracking ID if needed
  • Keep the product backlog at the Business Level not a technical level
  • Let the team figure out the How-To Technical Level
  • Ask "why" as many times as needed to get to the underlying intent if the Product Owner states a story in technical terms, and move technical language to the Notes field.
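The six most-used fields listed above map naturally onto a simple record. This sketch takes the field names from the list; the sample story itself is invented and the record format is just one way to represent a backlog row outside of Excel:

```python
from dataclasses import dataclass

# The six most-used product backlog fields from the book, as a plain
# record. The sample story below is invented for illustration.
@dataclass
class Story:
    id: int
    name: str
    importance: int        # set only by the Product Owner
    initial_estimate: int  # set only by the team
    how_to_demo: str
    notes: str = ""        # home for any technical language

story = Story(
    id=1,
    name="Deposit money",
    importance=150,
    initial_estimate=5,
    how_to_demo="Log in, open the deposit page, deposit some money, "
                "and verify the account balance.",
)
print(story.name, story.importance)
```

Keeping the record at the business level, with technical details pushed into `notes`, mirrors the "business level, not technical level" advice above.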

3. How we prepare for Sprint planning

  • Product Backlog MUST exist first in ship-shape form
  • One Product Backlog and One Product Owner per Product
  • All items should have a unique Importance level
  • Leave large gaps; an importance of 100 does not mean 5 times more important than 20. Gaps make it easy to slot in item C when it comes up in the middle; if you had used 20 and 21, you would be forced into values like 20.5.
  • Product Owner should understand the intent of each story
  • Others can add stories, but only Product Owner can assign importance
  • Only the team can add an estimate
  • Tried using Jira for keeping the backlog, but too many clicks for Product Owner
  • Have not yet tried VersionOne or other Scrum tools
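The "leave large gaps" advice above can be made concrete. The item names and numbers in this sketch are invented, but it shows why spaced importance values make mid-list insertion painless:

```python
# Why gaps in importance values help: inserting item C "between" A and B
# just means picking any unused number in the gap -- no renumbering of
# the rest of the backlog is needed.
backlog = {"A": 100, "B": 80}   # large gap left on purpose

# Item C arrives and belongs between A and B: pick a value in the gap.
backlog["C"] = 90

ordered = sorted(backlog, key=backlog.get, reverse=True)
print(ordered)                  # ['A', 'C', 'B']
```

Had the values been 20 and 21 instead, the only choices would be fractional values or renumbering every item below the insertion point.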

4. How we do Sprint planning

TODO page 25

5. How we communicate Sprints

6. How we do Sprint backlogs

7. How we arrange the team room

8. How we do daily Scrum

9. How we do Sprint demos

10. How we do Sprint retrospectives

11. Slack time between Sprints

12. How we do release planning and fixed price contracts

13. How we combine Scrum with XP

14. How we do testing

15. How we handle multiple Scrum teams

16. How we handle geographically distributed teams

17. Scrum master checklist

18. Parting words

Recommended reading

Saturday, May 24, 2008

Getting Real, Release It!, and The Perpetual Beta: Modern Web Apps

I've been thinking an awful lot lately about web application architecture and release strategies. This has a lot to do with a project I am working on professionally, but it ties into everything I've been thinking about regarding architecture in general.

Here are some links for reading more about these topics and insights from some of today's top developers, including the creators of the Ruby-on-Rails framework and developers of

Getting Real, by 37Signals

Learn about 37Signals by googling them. They are the creators of the Ruby-on-Rails framework that has pushed a lot of other people to higher levels of quality. Only a true Microsoft fanboy would not know that tons of what is inside of Microsoft's forthcoming ASP.NET MVC framework comes straight out of the Ruby-on-Rails framework.

Getting Real is a book by the 37Signals crew about how they do Agile development.

Read it here:

Here is a key excerpt from chapter 2, in the essay entitled "Fix Time and Budget, Flex Scope":

"Launching something great that's a little smaller in scope than planned is better than launching something mediocre and full of holes because you had to hit some magical time, budget, and scope window. Leave the magic to Houdini. You've got a real business to run and a real product to deliver.

Here are the benefits of fixing time and budget, and keeping scope flexible:

  • Prioritization

    You have to figure out what's really important. What's going to make it into this initial release? This forces a constraint on you which will push you to make tough decisions instead of hemming and hawing.
  • Reality

    Setting expectations is key. If you try to fix time, budget, and scope, you won't be able to deliver at a high level of quality. Sure, you can probably deliver something, but is "something" what you really want to deliver?
  • Flexibility

    The ability to change is key. Having everything fixed makes it tough to change. Injecting scope flexibility will introduce options based on your real experience building the product. Flexibility is your friend."
I really like these ideas with regard to web applications. These principles become extremely important when you are migrating a system from an older technology to a new technology, especially when the benefit you seek is primarily for the application infrastructure as opposed to for the end-user benefit. You have to ask yourself questions like:
  • How does this migration actually benefit the user?
  • Is it tested well enough to replace the existing system such that users do not notice the change?
  • Is there a way to minimize any possible damage the new infrastructure could cause to profitability should it not work as hoped?
As a consultant, you have to make clients aware of these questions. Your job is to inform them about possible problems and strategies for risk mitigation. Ultimately, they may choose to do something you agree with or something you don't, but you have to do your due diligence when you recommend a migration strategy.

The situation is more complex when you are implementing not just a back-end migration, but also adding brand new features that you want to introduce to actually improve the user experience.

Suppose you have this scenario:
  1. You have a client that operates a popular shopping web site
  2. The existing application is running ASP with COM objects written in Visual Basic 6.0
  3. The existing application works well and has been tested through real-world use for more than five years
  4. The existing application continues to increase in value, leading to higher and higher ROI each year
  5. Your client wants to migrate to Visual Basic.NET and ASP.NET 3.5
  6. Your client wants to add new features to the system that will increase the usability and utility for the system's users such as Ajax-enhanced search pages and RESTful URLs that offer strong Search-Engine-Optimization benefits
You have to carefully weigh all of these demands and criteria. Ask questions like:
  • How important is time-to-market?
  • How important is it that users do not have any interruptions in service?
  • What are the performance and scalability requirements?
Yes, Yes, and Yes
In my experience, most clients will want it as soon as possible and with as few interruptions as possible and with as good or better performance as the existing system. This is just a given.

But, you really cannot deliver all three of those concurrently. You have to make some trade-offs.

In this case, I believe the best migration strategy is what is called a vertical migration strategy. Read more about this in the Dr. Dobbs link below.
  1. Create the new foundational architecture in Visual Basic.NET to support the system
  2. Create Interop assemblies on top of the COM objects
  3. Create the new, value-added functionality first
  4. Bring up the system in beta side-by-side to the existing system so that the value-added features can be delivered to the users and so that the client realizes early return-on-investment (ROI) and gets early user feedback.
  5. Monitor the system's performance and refactor any of the bottleneck areas caused by interop by implementing them in pure .NET code.
  6. Add more features to the beta to slowly replace the existing application, getting user feedback and important performance and scalability information all the while to help direct your refactorings.
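Step 2 above, layering interop assemblies on top of the COM objects, is essentially the Adapter pattern: new code talks to a clean interface while an adapter delegates to the legacy component, which can later be swapped for a pure rewrite without touching callers. This is a language-neutral sketch in Python; all class and method names are invented stand-ins for a .NET interop wrapper over a COM object:

```python
# Sketch of the interop-wrapper idea: the new system depends only on
# PriceService, while the adapter delegates to the legacy component.
# Replacing the legacy internals later (step 5) requires no caller changes.
class LegacyPriceCalculator:       # stands in for the old COM object
    def CalcPrice(self, sku):      # legacy-style API
        return {"WIDGET": 9.99}.get(sku, 0.0)

class PriceService:                # the interface the new code uses
    def __init__(self, legacy):
        self._legacy = legacy

    def price_for(self, sku):
        # Today: delegate to the wrapped legacy object.
        # Tomorrow: replace this body with a native implementation.
        return self._legacy.CalcPrice(sku)

service = PriceService(LegacyPriceCalculator())
print(service.price_for("WIDGET"))   # 9.99
```

Because only the adapter knows about the legacy API, the refactoring in steps 5 and 6 stays local to one class.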
This is different from a horizontal migration strategy. In a horizontal strategy, you would migrate an entire layer of the system, such as the UI, the business logic, or the persistence layer. A horizontal strategy is typically more complex and time-consuming and requires more testing.

However, you can use a very similar risk mitigation strategy to what you do in a vertical migration. You can bring up the new system side-by-side with the existing one and allow users to alpha and beta test it while you measure the usability, performance, and scalability and refactor as needed before it replaces the existing system.

You can read much more about the various approaches to ASP.NET application migration in this Dr. Dobbs online article:

Figure 1: Vertical Migration Strategy

Figure 2: Horizontal Migration Strategy

Release It!: Design and Deploy Production-Ready Software

Another book that I have eyed on shelves of late is called Release It!: Design and Deploy Production-Ready Software by Michael Nygard. Here is an interview with the author on InfoQ about the book and his lessons learned:

Michael Nygard: First off, there's quite a bit of variation in what people mean by "feature complete". Even at best, it just means that all the specified functionality for a release has passed functional testing. For an agile team, it should also mean that all the acceptance tests pass. In some cases, though, all it means is that the developers finished their first draft of the code and threw it over the wall to the testers.

"Production ready" is orthogonal to "feature complete". Whether the acceptance tests pass or the testers give it a green check mark tells me nothing about how well the system as a whole is going to hold up under the stresses of real-world, every day use. Could be horrible, could be great.

For example, does it have a memory leak? Nobody actually runs a test server in the QA environment for a week or a month at a time, under realistic loads. We're lucky to get a week of testing, total, let alone a week just for longevity testing. So, passing QA doesn't tell me anything about memory leaks. It's very easy for memory leaks to get into production. Well, now that creates an operational problem, because the applications will have to be restarted regularly. Every memory leak I've seen is based on traffic, so the more traffic you get, the faster you leak memory. That means that you can't even predict when you'll have to restart the applications. It might be the middle of the busiest hour on your busiest day. Actually, it's pretty likely to happen during the busiest (i.e., the worst) times.

This is crucially important. Memory leaks come from third-party vendors just as often as they come from your own internal code. There is nothing like having to log in remotely to a web server on your own time because a third-party component is causing the server to hang. These are things people rarely think about up front, because they are typically problems revealed only by real system usage. I'll give a real-world example:

Suppose you have a Visual Basic 6.0 COM object that uses ADO internally. It may keep a RecordSet open to allow consuming code to rewind the cursor and start over from the beginning. Well, .NET uses non-deterministic finalization, so you have to take care to call System.Runtime.InteropServices.Marshal.ReleaseComObject to inform the runtime that it should destroy the COM object when you are finished with it. If you do not do this, you could end up with long-standing blocks against your database until the garbage collector frees the COM object.

I have run into this problem, and luckily I was able to refactor my wrapper class in a single place to alleviate it. In the web application, we never rewind the collection, so it was safe for us to free the object after the while loop in IEnumerable.GetEnumerator() completed.

As for this book, you can read the full table of contents and excerpts from the book in this PDF extracted from the book:

After looking at the TOC, I know this is a book I want to read.

Web 2.0 Applications and Agile Methodologies
This brings us into the territory of web 2.0 applications and the topic of the agile methodology.

The Wikipedia entry for Web 2.0 defines the following key characteristics for a Web 2.0 application:

"The sometimes complex and continually evolving technology infrastructure of Web 2.0 includes server-software, content-syndication, messaging-protocols, standards-oriented browsers with plugins and extensions, and various client-applications. The differing, yet complementary approaches of such elements provide Web 2.0 sites with information-storage, creation, and dissemination challenges and capabilities that go beyond what the public formerly expected in the environment of the so-called "Web 1.0".

Web 2.0 websites typically include some of the following features/techniques:

Suppose your task is to migrate a Web 1.0 application to a Web 2.0 application that increasingly exhibits these characteristics. How do you do that while minimizing risk and protecting the existing system's ROI?

Taking Cues from Yahoo and Google
First, from a process methodology standpoint, both Yahoo and Google have adopted an agile process. In particular, they have adopted a Scrum-based development methodology. You can watch the following videos to learn more about that:
Just to summarize, however, here is what Scrum looks like visually:

Verbally, the Wikipedia article describes it as:
Scrum is a process skeleton that includes a set of practices and predefined roles. The main roles in scrum are the ScrumMaster who maintains the processes and works similar to a project manager, the Product Owner who represents the stakeholders, and the Team which includes the developers.

During each sprint, a 15-30 day period (length decided by the team), the team creates an increment of potentially shippable (usable) software. The set of features that go into each sprint come from the product backlog, which is a prioritized set of high-level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting the Product Owner informs the team of the items in the product backlog that he wants completed. The team then determines how much of this they can commit to complete during the next sprint. During the sprint, no one is able to change the sprint backlog, which means that the requirements are frozen for the sprint.

There are several good software implementations for managing the Scrum process and its "sprints", while some teams prefer to use sticky notes and whiteboards. One of Scrum's biggest advantages is that it is very easy to learn and requires little effort to start using.

Why have both Google and Yahoo adopted Scrum? Well, just listen to what someone inside Yahoo had to say about this on the Scrum mailing list:

"What the Times doesn’t say is that Yahoo! is now 18 month into its adoption of Scrum, and has upwards of 500 people (and steadily growing) using Scrum in the US, Europe, and India. Scrum is being used successfully for projects ranging from new product development Yahoo! Podcasts, which won a webby 6 months after launch, was built start-to-finish in distributed Scrum between the US and India) to heavy-duty infrastructure work on Yahoo! Mail (which serves north of a hundred million users each month). Most (but not all) of the teams using Scrum at Yahoo! are doing it by the book, with active support from inside and outside coaches (both of which in my opinion are necessary for best results).

Pete Deemer Chief Product Officer, Yahoo! Bangalore / CSM"

Microsoft also uses Scrum-based methodologies for building systems. See this eWeek article here for details:

Microsoft ASP.NET and MVC: A new direction
We see further changes in Microsoft's approach to development in the way they are releasing upgrades to ASP.NET and the MVC framework. Take a look at the new home for the ASP.NET MVC framework, where Microsoft makes "early and often" releases to the developer community. They are still a few steps short of going fully open source, but I think they will get there sooner rather than later. At least, I hope they do if they want to survive in a competitive market.

The Perpetual Beta: A way forward that ensures quality

Finally, we arrive at the idea of a perpetual beta. This is something Tim O'Reilly discussed a few years back as part of the nature of a Web 2.0 system. Read more about his comments here:

and here:

His key point about the concept of a perpetual beta is this:
"Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr,, and the like may be expected to bear a "Beta" logo for years at a time."[1]
The Perpetual Beta Deployment Model in Practice

In practice, you could implement the Perpetual Beta model as follows:

Web Farm:
[ Production Server #1 - App v1.0 ] [ Production Server #2 - App v1.0]

Beta Server:
[ Beta Server #1 - App v1.x ]

To follow Google and Yahoo's lead here, you would deploy release candidate builds to the beta server and let users opt in to that version of the system; they could always navigate to the production URL instead of the beta.
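A hypothetical sketch of the routing decision (in Python for brevity; the server names are made up): users who have opted in to the beta go to the beta pool, and everyone else is balanced across production.

```python
import itertools

PRODUCTION_POOL = ["prod-1", "prod-2"]   # App v1.0
BETA_POOL = ["beta-1"]                   # App v1.x

# Simple round-robin over the production web farm.
_round_robin = itertools.cycle(PRODUCTION_POOL)

def route_request(user_opted_into_beta):
    """Send beta opt-ins to the beta server; balance the rest."""
    if user_opted_into_beta:
        return BETA_POOL[0]
    return next(_round_robin)
```

In practice this decision lives in your load balancer or reverse proxy, keyed off a cookie or a distinct beta hostname, but the logic is this simple.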

This provides you the following benefits:
  • Users become part of your testing process and help you develop and guide the evolution of the system
  • Your team becomes far less stressed and annoyed because they have a real measure of user satisfaction and system stability before changes actually go into the true production environment
  • You decrease your time-to-market by having a system that is continually improving
Cautions and Caveats to this process:
  • You have to take care in testing the beta build well in advance of pushing it to the beta server. Make sure there are no database concurrency issues or blocking processes that could cause the beta and the production system to conflict.
  • You should be exercising automated tests and using regression test tools well in advance of the beta deployment.
  • You will still not catch all problems before a release goes into the real production environment. This is just the way development is.
These are just some of my notes and thoughts on the way forward for web application development. I know I'm already behind on most of this stuff, but some of my readers are further behind :-)

Is there a silver lining to all of this change and fast pace?

I believe there is. The silver lining is HTTP and URI.

If you build your URIs to last, last they will. That is the fundamental change in mindset that is taking place for most of the successful players right now. They are realizing that they can construct their services using RESTful designs that allow both applications and users to repurpose content and data in ways that nobody thought possible before.

If you don't believe me, just head on over and read up on Google Base

URI stands for Uniform Resource Identifier. It's about time we started treating it like one. We've got to stop reinventing the wheel and start driving the car we already have.

Check out my previous posts on REST or just read Roy Fielding's dissertation to learn more.

Related Resources

Thursday, May 22, 2008

Querying Google Base using GLinq in C#

Google Base is Google's open data repository. Watch a video and learn more about the open API for Google Base here:

The GLinq project is a beta project that provides strongly-typed access to Google Base. Check it out at:

Once you download the beta, you can run the sample application, which consists of this code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // You need a Google Base key.
            Console.WriteLine("Enter your Google Base Key:");
            string googleKey = Console.ReadLine();
            GoogleItems.GoogleContext gc = new GoogleItems.GoogleContext(googleKey);

            // Query Google Base for Apple MP3 players priced between
            // $200 and $400, most expensive first.
            var r = from ipods in gc.products
                    where ipods.BaseQuery == "mp3 players" && ipods.Brand == "apple"
                    where ipods.Price > 200 && ipods.Price < 400
                    orderby ipods.Price descending
                    select ipods;

            foreach (GoogleItems.Product product in r.Take(100))
            {
                Console.WriteLine("{0} for ${1}", product.Title, product.Price.ToString("#.##"));
            }
        }
    }
}

This returns up to 100 items from Google Base; here is a sample of the output:

16GB Apple iPod Touch for $399.99
IPOD for $399.99
Black 80GB Video Apple Ipod for $399.99
Apple 160GB iPod classic – Black for $399.99
Apple iPod Photo 60GB - 15000 songs & 25000 photos in Your Pocket for $399.99
Apple 160GB iPod classic – Black for $399.99
16GB Apple iPod Touch for $399.99
Apple 60 GB iPod with Video Playback Black for $399.99
Apple iPod Touch 16GB WiFi Digital Music/Photo/Video Player for $399.99
BOSE SoundDock Portable Black Digital Music System for the iPod for $399.99
Apple iPod touch 16GB* MP3 Player (with software upgrades) - Black for $399.99
Apple 8GB iPod touch for $399.99
APPLE IPOD 8GB TOUCH for $399.99
BOSE SoundDock Portable Black Digital Music System for the iPod for $399.99
Apple 16GB iPod touch for $399.99

Sunday, May 11, 2008

Web Application Architecture in 2008 and Beyond

Application Architecture 2008: The More Things Change the More they Stay the Same
This post is about the future of web application architecture in 2008 and beyond. It details some "back to basics" trends in the adoption of REST-based services, happening right now, that I believe will let companies build solid long-term platforms for service integration and collaboration with external partners and end users alike.

Back to the Basics, Son
A good architecture should minimize the degree of difficulty for end-users, software components, and external vendors and third-party software to use a system. It should maximize the opportunity for the same people and software to derive value from and contribute value to the system.

The way the existing World Wide Web works, with the simple HTTP protocol and URI standards, provides a good model that companies like Yahoo, Amazon, Google, and Microsoft (as of late) are capitalizing on to build long-term scalability and integration with disparate systems. The architecture of the web is called Representational State Transfer (REST), a term coined by the principal author of the HTTP and URI specifications, Roy Thomas Fielding.

Prerequisite Reading
To fully appreciate this post, I recommend you read a little more about the history of REST and WWW architecture as well as how they compare and contrast to SOAP / RPC models for Web Services and Service Oriented Architecture.

To summarize what you will find in the background reading, here is what REST boils down to. Note that I am using HTTP as the example here because HTTP is an implementation of the REST-based architectural style.
  • HTTP focuses on finding named resources through the global addressing scheme afforded by the URI standard
  • HTTP allows applications to GET, PUT, POST, and DELETE to those URIs.
  • GET retrieves a representation of a resource located at a specific URI
  • PUT attempts to replace the resource at the given URI with a new copy
  • REST is not a "buzzword" or a "fad". It is a description of how the WWW actually operates and how it has scaled to the size and success that it has thus far.
Addressability and Uniform Interface are the Keys to Success

As explained in the Wikipedia entry, and alluded to above, addressability and the uniform interface are two of the most crucial features for a REST-based architecture.

I want to compare HTTP's implementation of a REST-based architecture with a similar example from SQL.

In HTTP, we have the standard verbs, or methods, GET, POST, PUT, and DELETE.

Similarly, in SQL, we have SELECT, INSERT, UPDATE, DELETE.
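To make the analogy concrete, here is a small sketch (my own illustration, in Python for brevity) that dispatches the four HTTP verbs onto the corresponding CRUD operations against a dictionary standing in for a database table keyed by id:

```python
# Map HTTP's uniform interface onto SQL-style CRUD operations.
table = {}

def handle(method, resource_id, representation=None):
    if method == "GET":        # SELECT: retrieve a representation
        return table.get(resource_id)
    if method == "PUT":        # UPDATE (or INSERT at a known id)
        table[resource_id] = representation
        return representation
    if method == "POST":       # INSERT: the server assigns the new id
        new_id = max(table, default=0) + 1
        table[new_id] = representation
        return new_id
    if method == "DELETE":     # DELETE: remove the resource
        return table.pop(resource_id, None)
    raise ValueError("unsupported method")

handle("PUT", 123, {"name": "Widget"})
```

The point of the analogy is that both systems get enormous leverage from a tiny, fixed set of verbs applied to an open-ended set of named things.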

Imagine a web site with the following URL:

Here, we have several parts: = Host address

/Items = Subdirectory off the root

/ViewItem.aspx?id=123 = This breaks down into several parts:

View = verb (action)
Item = noun (resource)
id = parameter name
123 = parameter value

The actual HTTP request for this would look something like:

GET /Items/ViewItem.aspx?id=123 HTTP/1.1

Now, consider an alternative URL:

Here the first two parts stay the same, but we roll all the others up into /123. This makes sense because we already have a verb: it's GET. And we already know we are looking in the "Items" subdirectory, so that identifies the type of noun we want.
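Migrating legacy URLs to the resource-oriented form can be automated with a simple rewrite rule. A minimal sketch (in Python; the URL shapes are the hypothetical ones from this example):

```python
import re

# Rewrite /Items/ViewItem.aspx?id=123 to /Items/123: the verb moves
# into the HTTP method (GET) and the id becomes part of the path.
LEGACY = re.compile(r"^/(?P<noun>\w+)/View\w+\.aspx\?id=(?P<id>\d+)$")

def to_restful(legacy_url):
    match = LEGACY.match(legacy_url)
    if not match:
        return legacy_url  # already RESTful or unrecognized
    return "/{0}/{1}".format(match.group("noun"), match.group("id"))
```

A rule like this can live in a URL-rewriting module in front of the old application, so old bookmarks keep working while new clients see clean resource URIs.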

Detailed Example of Exposing and Consuming Data

Let's take the example a little further with microformats.

Background Reading

These are some links for understanding Web (REST) architecture.

Real World Business Point of View

Quick technical review
Technical In-Depth analysis
Detailed understanding
These are just some miscellaneous notes.

The more things change the more they stay the same

During the early years of the web we had a few building blocks:
  • URI (
  • HTML (and friends like the blink and marquee tags)
  • User agents (Netscape, Mosaic, Lynx, MSIE)
  • CGI and Perl
Later, we had things like:
  • Cascading Style Sheets
  • XML
  • Java
  • Javascript
A little later than that we got:
  • PHP
  • .NET
  • SOAP based Web Services
  • XmlHttpRequest
Most recently, we've seen:
  • Python popularity growth
  • Ruby
  • Ruby on Rails with ActiveRecord
  • Asynchronous Javascript and XML (AJAX)

Things that will never happen

My grandfather Gene Gough worked at IBM for 35 years as a programmer and manager. He sent out a funny email the other day. I don't know whether the list is accurate, but here it is posted online:

Here are three good ones from the list:

"Louis Pasteur's theory of germs is ridiculous fiction." -- Pierre Pachet, Professor of Physiology at Toulouse, 1872

"I think there is a world market for maybe five computers." --Thomas Watson, chairman of IBM, 1943

"There is no reason anyone would want a computer in their home." -- Ken Olson, president, chairman, founder of Digital Equipment Corp. 1977.

I worked with the people at CDC's Epi-X program in the past, and I think they'd strongly disagree with not just the first one, but all three.

We are all familiar with the now famous "Technology Adoption Curve", which looks like a standard bell curve. You can read more about this here:

REST Notes

These are links and notes from various sites about REST architectural styles.

Q: Can you elaborate a little on why you think these would be tightly coupled? I mean everybody is talking about loose coupling and SOAP

A: I missed that one. When I was selling Systinet's products I talked about loose coupling, and when I present information for Burton Group on SOA I certainly stress "trying to do your best to loosely couple client to server", because obviously loose coupling is a very good idea. The issue, and I have said this before in print, is that you can strive for the largest amount of loose coupling possible in the SOAP WS-* world, and when you do the best job possible and obey all the best practices, you still end up with a tightly coupled system.

And you know the answer, but for the audience out there the fact of the matter is when you are creating a SOAP web service, a client, you have to know a great deal about that service, whether you learnt it via WSDL or some other mechanism, you have to know a great deal about it, and generally if the service changes in any not even terribly significant way, your client has to change with it or it will simply stop working. And you can go through heroic efforts to keep the existing clients alive and you can try and do your best at the design and development phase to loosely couple them, but the fact that the client has to know all the operation names and all the message formats before it can get any value out of a web service, is tight coupling to me.

Where of course, in contrast, all I need to know about a RESTful web service is its URI. Now I might not be able to derive all of the business value that the service can offer, but I can "GET" it and maybe something interesting will come from that, and it may be all the information I need. So a properly designed RESTful system is dramatically loosely coupled, whereas a properly designed SOAP WS-* based system is unfortunately tightly coupled, and all you can do is make your best effort to avoid more tight coupling than necessary.

Saturday, May 10, 2008

RESTful Web Services: Notes

This is continued from a prior post. These are my notes from the book RESTful Web Services that Mike and I have been reading. He is finished. He's already implementing a commercial application using RESTful design and the ASP.NET MVC framework.

=Chapter 8: REST and ROA Best Practices=
*Expose all interesting nouns as Resources
*Perform all access to Resources through HTTP's uniform interface: GET, PUT, POST, DELETE
*Serve Representations of the Resource, not direct access to it
*Put complexity into Representations not into access methods!

=The Generic ROA Procedure=
*Figure out data set
*Split data set into resources
*Name the resources with URIs
*Expose a subset of the uniform interface
*Design the representation(s) accepted from the client
*Design the representation(s) served to the client
*Integrate the resource into existing resources, using hypermedia links and forms
*Consider the typical course of events: what's supposed to happen? Standard control flows like the Atom Publishing Protocol can help
*Consider error conditions: what might go wrong? Again, standard control flows can help

*RESTful web services should be highly addressable, through thousands of (or infinitely many) addresses
*By contrast: RPC / SOAP based web services typically expose just one or a few addresses
*URI = Uniform Resource Identifier
*Never make a URI represent more than one resource
*Ideally, each variation of representation should have its own URI (think of a URI with en-us for English, ko-kr for Korean, etc)
*Set Content-Location header to specify the canonical location of a resource
*URIs travel better if they specify both a resource and a representation

=State and Statelessness=
*Resource state: stays on server, sent to client as a representation
*Application state: stays on client, until passed to server to use for Create, Modify, or Delete operations
* Service is "stateless" if server never maintains in memory or disk any application state
** Each request is considered in isolation in terms of resource state
** Client sends all application state to the server with each request, including credentials
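A sketch of what statelessness buys you (in Python; the request shape and names are illustrative): the handler gets everything it needs from the request itself, so any server instance can answer it identically.

```python
def stateless_handler(server_name, request):
    # All application state -- credentials, paging position, etc. --
    # arrives with the request; the server keeps none of it between calls.
    user = request["credentials"]["user"]
    page = request["page"]
    return "items for {0}, page {1}".format(user, page)

request = {"credentials": {"user": "alice"}, "page": 3}

# Two different servers produce the same answer from the same request,
# which is why a load balancer needs no server affinity.
a = stateless_handler("server-1", request)
b = stateless_handler("server-2", request)
```

This is the property that makes the "just add another box behind the load balancer" scaling story work.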

=Connectedness, aka Hypermedia as the Engine of Application State=
*Server guides client to paths for state transitions of resources using links and forms
*Links and forms are the "levers of state" transition
*XHTML and XML are good representational formats

=Uniform Interface=
*Resources expose one or more methods of HTTP's uniform interface
*GET: requests info about a resource: returns headers and a representation
*HEAD : same as GET, but only headers are returned
*PUT: assertion about resource state. Usually a representation is sent to the server and the server tries to adjust the resource state to match the representation
*DELETE: assertion that the resource should be removed.
*POST: attempt to create a new resource subordinate to an existing one. The root resource may be a parent resource or a "factory" resource. POST can also be used to append to the state of an existing resource.
*OPTIONS: attempt to discover which other methods are supported (rarely used)
*You can overload POST if you need another method, but consider first whether you can implement your need by simply designing another resource
*For transactions, consider making them resources as well

=Safety and Idempotence=
*GET or HEAD should be safe: resource state on server should not change as a result
**Server can log, increase view count, but client is not at fault
*PUT or DELETE should be idempotent: making more than one of the same request should have the same effect as one single request.
**Avoid PUT requests that are actually instructions, like "Increment x by 5"
**Instead, PUT specific final values
*POST requests for resource creation are neither safe nor idempotent
** Consider: Post Once Exactly
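The "PUT final values, not instructions" rule is easy to demonstrate. In this sketch (my own, in Python), replaying a set-style PUT is harmless, while replaying an increment-style instruction double-applies the change:

```python
resource = {"x": 0}

def put_final_value(value):
    """Idempotent: repeating the request has the same effect as one."""
    resource["x"] = value

def put_increment(amount):
    """NOT idempotent: a retried request changes the outcome."""
    resource["x"] += amount

put_final_value(5)
put_final_value(5)       # a retry changes nothing
after_set = resource["x"]

resource["x"] = 0
put_increment(5)
put_increment(5)         # a retry double-applies the change
after_increment = resource["x"]
```

Since network clients retry on timeouts as a matter of course, idempotent PUTs make those retries safe for free.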

=New Resources: Put Versus Post=
*PUT can only create new resources when it can calculate the actual URI
*POST can create new resources even when the server decides the new URI
**Ex: /{dbtable}/{itemid}, POST to /{dbtable} and the server returns the new URI

=Overloading POST=
*Can use POST to transform the resource into an RPC-style message processor (think: SOAP web services)
*Use of overloaded POST (for XML-RPC or SOAP) is strongly discouraged by the author.
**Using this breaks the Uniform Interface.
**No longer is the web a collection of well-defined URIs with a uniform interface, instead:
**It becomes a collection of known entry points into a universe of DIFFERING INTERFACES, few compatible with each other
*Legit overloaded POST:
**Work around lack of PUT and DELETE support
**Work around limitations on URI length
***POST http://name/resource?_method=GET and payload contains huge data set.
*Avoid methods in GET URIs, e.g. /blog/rebuild-index: this is not safe
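The _method workaround can be sketched as a tiny dispatcher (in Python; the handlers are hypothetical): clients that can only send GET and POST tunnel the real verb in an override field.

```python
def dispatch(method, params, handlers):
    """Honor a _method override so POST can tunnel other verbs."""
    # Pop the override so handlers only see real parameters.
    effective = params.pop("_method", method).upper()
    return handlers[effective](params)

handlers = {
    "GET": lambda p: ("got", p),
    "DELETE": lambda p: ("deleted", p),
}

# A client limited to POST tunnels a DELETE:
result = dispatch("POST", {"_method": "DELETE", "id": "42"}, handlers)
```

The crucial difference from RPC-style overloaded POST is that the tunneled verb is still one of the uniform interface's methods, so intermediaries and clients can reason about it.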

=This Stuff Matters=
*Principles are not arbitrary
*Advantages: simpler, more interoperable, easier to combine than RPC
*JSG: They have, in fact, revolutionized the world by being so simple and constrained to allow loosely coupled "links" from all over the planet to anywhere else.

=Why Addressability Matters=
*Every interesting noun, or concept, is immediately accessible through one operation on its URI: GET
*URIs provide:
** Unique structured name for each item (you own your own domain name, so your URIs are always unique)
** Allows bookmarking
** Allows URIs to pass to other apps as input
** Allows for mashups you never imagined
* URIs are like:
** Cell addresses in Excel
** File paths on disk
** JSG: Longitude and Latitude coordinates
** JSG: XPath queries against XML
** JSG: SQL SELECT statements against relational tables

=Why Statelessness Matters=
*The king of simplifying assumptions!
*Each request contains all application state needed for server to complete request
**No application state on server
**No application state implied by previous request
**Each request evaluated in isolation
*Makes it trivial to scale application up
**Add a load balancer and there is no need for server affinity
*Can scale up until resource (database) access becomes the bottleneck
*JSG: This is where COM Interop introduces database latency, by forcing connections to stay open longer than they need to because of marshalling data across process boundaries
*Increases reliability (requests that timeout can simply be requested again)
*Keeping session state can harm scalability and reliability, so use it wisely
** JSG: if using a cookie, the cookie can be used to reinstantiate the server-side state for the user no matter which server handles the request

=Why the Uniform Interface Matters=
*Provides a standard way of interaction
*Given, you know:
**GET retrieves it
**POST can attempt to append to it or place a subordinate resource beneath it
**DELETE can assert that it should be removed

=Why Connectedness Matters=
* Provides a standard way to navigate from link to link and state to state

=Resource Design=
*Need a resource for each "thing" in your service
**Apply to any data object or algorithm
*Three types of resources:
**Predefined one-off resources: static list, object, db table
**Large (maybe infinite) number of resources of individual items: db row
**Large (usually infinite) number of resources representing outputs of an algorithm: db query, search results
*For difficult situations, the solution is almost always to expose another resource.
**May be more abstract, but that is OK

=Relationships between Resources=
*Alice and Bob get married, do you:
**PUT update to Alice and to Bob, or:
**POST new resource to the "marriage" factory resource?
**Answer: you should create a third resource that links to both Alice and Bob
*JSG: this leaves you wondering how you navigate in the other direction, but that is no different from a relational database that uses a linking table.

=Asynchronous Operations=
*A single HTTP request itself is synchronous
*Not all requests finish quickly; many take a long time
*Use the 202 status code "Accepted"
*Example: ask the server to calculate a huge result; the server returns:

202 Accepted

*This URI identifies the job uniquely for the client to come back to later
*Client can GET the URI for status updates and DELETE it to cancel it or DELETE the results later
*This overcomes the asynchronous "limitation" by using a new resource URI
*Caveat: use POST when you will spawn a new resource asynchronously to avoid breaking idempotency if you were to use GET or PUT
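The 202 pattern can be sketched with an in-memory job store (in Python; the status codes follow the chapter, everything else is my own illustration):

```python
import uuid

jobs = {}

def post_big_calculation(payload):
    """Accept the work, return 202 plus a URI to poll for the result."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "pending", "input": payload}
    return 202, "/jobs/" + job_id

def get_job(job_uri):
    """Client polls the job resource for status updates."""
    job = jobs[job_uri.rsplit("/", 1)[1]]
    return 200, job["status"]

def delete_job(job_uri):
    """Client cancels the job or discards the finished result."""
    del jobs[job_uri.rsplit("/", 1)[1]]
    return 204, None
```

Note the asynchronous work itself became just another addressable resource, which is the book's recurring move.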

=Batch Operations=
*Factory resources can accept a collection of representations and create many resources in response
*Create a "job" in response with a URI for the client to check status, or:
*Use WebDAV extension for 207: multi-status; client needs to look in entity body for a list of codes

=Transactions=
*You can implement transactions as resources, just like batch operations
*Example: financial transaction

1. POST to a transaction factory to get a URI for your transaction (201 Created response)

POST /transactions/account-transfer

201 Created

Location: /transactions/account-transfer/11a5

2. PUT new balance for checking account to this URI

PUT /transactions/account-transfer/11a5/accounts/checking


3. Then PUT the new value for the savings account:

PUT /transactions/account-transfer/11a5/accounts/savings


4. Commit the transaction

PUT /transactions/account-transfer/11a5


*The server should make sure the representations make sense (no deleted money, no newly minted money, etc)
*RESTful transactions are more complex to implement, but they have advantages of being addressable, transparent, archived and linked
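Those steps can be sketched with an in-memory transaction resource (in Python; the URI shapes follow the account-transfer example, and the consistency check is my own simplification):

```python
transactions = {}
accounts = {"checking": 200, "savings": 100}

def post_transfer_factory():
    """Step 1: POST to the factory; the server mints the transaction URI."""
    txn_id = str(len(transactions) + 1)
    transactions[txn_id] = {"pending": {}, "committed": False}
    return 201, "/transactions/account-transfer/" + txn_id

def put_balance(txn_id, account, new_balance):
    """Steps 2-3: PUT the proposed final balance of each account."""
    transactions[txn_id]["pending"][account] = new_balance
    return 200

def put_commit(txn_id):
    """Step 4: PUT to the transaction itself to commit, after the server
    checks that no money was destroyed or newly minted."""
    txn = transactions[txn_id]
    if sum(txn["pending"].values()) != sum(accounts[a] for a in txn["pending"]):
        return 409  # Conflict: balances don't add up
    accounts.update(txn["pending"])
    txn["committed"] = True
    return 200
```

Because the transaction is itself a resource, a client (or an auditor) can GET it later, which is what the book means by transactions being addressable and transparent.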

=When in Doubt, Make it a Resource=
*Anything can be a resource
*Strive to maintain the Uniform Interface

=URI Design=
*URIs should be well-designed and meaningful
*URIs should be "hackable" to increase the "surface area"
*Make it so clients can bookmark just about anything to get right to it
**Don't make clients have to repeat dozens of manual steps to get back to a view of a resource
*Go for general to specific
** Example: /weblogs/myweblog/entries/100
*Use punctuation to separate multiple data inputs at the same level
*Use commas when order matters (e.g., longitude and latitude)
*Use semi-colons when order doesn't matter: /color-blends/red;blue
*Use query variables only for algorithm inputs
*URIs denote resource, not operations: almost never appropriate to put method names in them
**/object/do-operation is a bad style
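A small helper illustrating those rules (my own sketch, in Python): general-to-specific path segments, commas when order matters, semicolons when it doesn't. The example paths are the book's.

```python
def make_uri(*segments):
    """Build a general-to-specific, 'hackable' URI: trimming segments
    from the right always yields another plausible resource."""
    return "/" + "/".join(str(s) for s in segments)

def ordered_values(*values):
    """Comma-separate inputs whose order matters (e.g. long,lat)."""
    return ",".join(str(v) for v in values)

def unordered_values(*values):
    """Semicolon-separate inputs whose order doesn't matter; sorting
    them gives every equivalent combination one canonical URI."""
    return ";".join(sorted(str(v) for v in values))
```

So make_uri("weblogs", "myweblog", "entries", 100) yields /weblogs/myweblog/entries/100, and sorting the unordered case means /color-blends/blue;red is the one canonical URI for that blend no matter how the client wrote it.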

=Outgoing Representations=
Bookmark: Page 254

Note to Self: Music and Salsa

So I'm back to technical/work mode in general. Not a lot of time to listen to interesting podcasts or audiobooks right now. But, I am not letting that stop me from doing things I want and need to do for fun.

I am taking piano/keyboard lessons at

And, I'm continuing to take salsa lessons, usually at and Check out for a full list of salsa events in the area. I'll be taking a cruise to Alaska in August and it is a salsa cruise!

Thursday, April 24, 2008

Article Review: Managing an Agile Project by Jeff Palermo

Jeff Palermo has an excellent article about "Managing an Agile Project" in CoDe Magazine:

First, About Jeff

Jeff is the founder of the MVC Contrib project at and he blogs here: He is also working on a book about ASP.NET MVC with a co-author who is a Ruby on Rails consultant.

He's interviewed on Polymorphic Podcast here about MVC and MVC Contrib: (Scott Guthrie (The Gu) is interviewed in the beginning of this episode)

My favorite quote from this interview is "If you are writing new code without automated tests, you are writing instant legacy code".

My second favorite part is where he says that they use CruiseControl, NAnt, and NCover for MVC Contrib, and they do not accept any contributions that have less than 95% code coverage in their automated tests. NO EXCEPTIONS.

Jeff leads a .NET Boot Camp for .NET Teams every six weeks: The training is in Texas and covers the following topics:

This advanced agile curriculum will cover everything involved in developing software in .NET, from setting up a new project and defining the architecture to implementing functionality in a loosely-coupled and testable manner. We will immerse ourselves in domain-driven design, test-driven development, design patterns, object-relational mapping, inversion of control (IoC), pair programming, automated builds, and continuous integration (CI). Students will discover which practices cause projects to fail and which practices help projects succeed. The course will include a strong focus on solid principles and values that can be applied to any .NET project. With a solid understanding of Agile values and object-oriented programming, students will emerge from the training with a refocused view on software development and the tools to immediately bring value back to their companies. All developers will take back working code developed during the course using the techniques and practices taught.

Comments on Article
Palermo makes some very good points about Communication and Expectations. In my experience with iterative development, these were absolutely critical.

One project I worked on for four years in which Scrum techniques were applied was CDC's Epi-X system, a peer-review and emergency notification system for alerting about emergency health threats. We held weekly planning and strategy meetings in which the business side, the scientists and epidemiologists, related plans and goals for upcoming activities. On the development side, we had "stand up" meetings that were crucial for the team to understand the goals and problems for each day. After each milestone and release, we held reviews in which we cleaned up our environments, tightened our procedures, and made other improvements.

During a four-year period we made about 15 releases, each bounded, each small and highly testable. That is roughly one release every three months. By breaking down the deliverables into smaller milestones, we were able to deliver new value continuously while also maintaining the mission-critical operations of the system on a day-to-day basis. Communication and documentation between the users, the business staff, and the technical staff were what enabled our success as a team.

On With the Highlights from the Article

Aiming for Agile
  • Combine XP, Lean, and Scrum practices
The Fallacy of Fixed Scope
  • Agile project management understands that scope is a moving target.
  • The larger the scope, the more it will change.
  • The software manager should foster an environment where questioning assumptions is welcome.
  • The software team will apply a critical eye to every story on the table and evaluate whether that story will contribute to the end goal.
  • Every story must stand up to criticism in order to prove worthy of the team's time.
  • Working on the right thing is as important as making the thing worked on right.
  • In an agile project, you tend to employ more generalists than specialists. With a team of generalists, you want folks who are skilled in a variety of areas.
Key Performance Indicators
  • Defect-free stories delivered. Each story represents a unique behavior in the software from which the customer benefits. Stories delivered with defects aren't as valuable, so don't count those.
  • Customer satisfaction. Ultimately, you are creating software for a customer. Without the customer, you wouldn't be creating the software. Keeping them happy is a great metric, and it's simple to measure: just ask for their satisfaction level on a scale of one to ten.
  • Consistent velocity. Iteration over iteration, velocity should remain constant. If it continues to change, something is wrong. After about the sixth iteration, treat any significant change in velocity as a cause for alarm. If the team make-up is consistent, the velocity should remain constant. A slowing of velocity could be a sign of a codebase that is less than maintainable.
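The "consistent velocity" indicator above is easy to automate. The following is a minimal sketch (not from the article; the function name, the six-iteration baseline, and the 20% tolerance are my own assumptions) of flagging iterations whose velocity drifts significantly from the team's established baseline:

```python
# Illustrative sketch: flag velocity drift once a team has enough
# history (about six iterations, per the article) to set a baseline.

def velocity_alarms(velocities, baseline_iterations=6, tolerance=0.2):
    """Return indices of iterations whose velocity deviates from the
    baseline average by more than `tolerance` (0.2 = 20%)."""
    if len(velocities) <= baseline_iterations:
        return []  # not enough history yet to judge consistency
    baseline = sum(velocities[:baseline_iterations]) / baseline_iterations
    alarms = []
    for i, v in enumerate(velocities[baseline_iterations:],
                          start=baseline_iterations):
        if abs(v - baseline) / baseline > tolerance:
            alarms.append(i)
    return alarms

# A steady team: no alarms.
print(velocity_alarms([20, 21, 19, 20, 22, 20, 21, 20]))  # []
# Velocity slipping after iteration six: a possible maintainability smell.
print(velocity_alarms([20, 21, 19, 20, 22, 20, 15, 14]))  # [6, 7]
```

A falling trend here is exactly the "cause for alarm" the article describes: with a stable team make-up, the usual culprit is a codebase becoming harder to work in.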
Managing the Customer

Manage the customer's expectations. Use daily "stand up" meetings to communicate the following quickly:
  1. What I accomplished yesterday.
  2. What I hope to accomplish today.
  3. What barriers are impeding progress.
  • Armed with this daily knowledge, the customer will start to feel like part of the team instead of an outsider.
  • If the customer isn't engaged properly, there will be an us vs. them environment, and that is dysfunctional.
  • When your team is in competition with the customer, neither will win.
Bugs versus Defects
  • Defects are gaps between what your team committed to and what they delivered.
  • A bug is anything that bugs the user. I'm not sure we'll ever get away from bugs, but constant communication sure helps.
  • A defect is a gap in the story contract and acceptance criteria.
  • A bug is not a defect since it's an annoyance that has yet to be discussed.
  • When someone finds a defect, the team should drop everything and fix the defect right away.
  • A defect signifies that a story which the team has already labeled "done" is, in fact, not finished. Finish the story and move on.
  • Institute a zero-defect policy.
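The defect-versus-bug distinction above can be made mechanical: a defect violates the acceptance criteria of a story already marked "done," while anything else that bothers the user is a bug to be discussed and scheduled. A minimal sketch (the `Story` shape and the `classify` helper are hypothetical, not from the article):

```python
# Illustrative sketch of the article's distinction: a defect breaks the
# contract of a "done" story; a bug is an annoyance not yet discussed.

from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    acceptance_criteria: set = field(default_factory=set)
    done: bool = False

def classify(report, stories):
    """Return 'defect' if the report violates a done story's acceptance
    criteria, otherwise 'bug'."""
    for story in stories:
        if story.done and report in story.acceptance_criteria:
            return "defect"  # zero-defect policy: drop everything and fix it
    return "bug"  # an annoyance that has yet to be discussed

stories = [Story("login", {"user can reset password"}, done=True)]
print(classify("user can reset password", stories))  # defect
print(classify("reset email looks ugly", stories))   # bug
```

Under a zero-defect policy, anything `classify` labels a defect reopens its story immediately; bugs go into the normal conversation with the customer.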
After an iteration, have the team take about 30 minutes to document the following:
  1. Good (what you would like to continue).
  2. Bad (what you would like to stop).
  3. Ugly (a fun category if an issue was particularly messy).
All in all, you are in charge, so if your team starts a long discussion on how to move forward, you may have to make the call.