It is hard to talk about failure. However, when you work with many projects and many teams, the odds are good you will experience failures. Last year I had such a learning experience, and one year later it is far enough away that I feel I can talk publicly about it.

This project started with the best of intentions. The business case was developed and sponsored by two directors who saw an opportunity to improve a data-gathering process that was used to define and market products. The existing process involved manual entry of data into a 100-row by 200-column spreadsheet, multiple sources of data, many approvals, and finally re-keying of this data into a custom Java application to feed downstream systems. The goal was to replace all of this with a COTS application that would allow re-use of data; the analogy was a “catalog.” A vendor with some presence in a related area was bringing new enhancements that were going to enable this capability.

We began a Scrum project to implement this new COTS application to replace the manual entry, the spreadsheet, and the custom Java application. A knowledgeable Product Owner with deep experience in the current process was recruited, a team with experience in that part of the business was brought together, and a senior Agile coach who knew the organization and had worked with many teams and some of the team members came onboard (me). In addition, a big 3 consulting company with experience with the product was hired to provide team members and guidance. How could a project with all of these things going for it get off the rails? Let’s take it apart and see what we can learn:
Iteration 1: Got initial COTS application installed, knowledgeable PO from business recruited, and team put in place.
Iteration 2: Learned about development/configuration of the COTS tool, built out the product backlog, and investigated incremental replacement strategies for the current system. The Product Owner begins to doubt the tool’s ability to deliver the needed capability. The Product Owner does not see a reason to change a process that they constructed and that has been used as-is for three years. The big 3 consulting firm puts four people on the team, though two of them have little experience with the tool.
Iteration 3: The team delivers its first functionality in the COTS dev environment and pilots it with users. The COTS tool still lacks key required features. Feedback from business users is universally negative. The sponsors decide not to show the app to customers for the next six months, to avoid creating a bad impression. The Product Owner is perceived to lack vision and is soon replaced by a Director. An IT lead with little technical skill and a strong controlling personality is put on the team to “shake things up.”
Iteration 4: The vendor is onsite, promising features “soon” and working with the team to define how the system could work. Internal IT controls require a work order for every database change, causing the team’s pace to crawl, since UI changes in the COTS app generate database changes. The team works around the issue by adopting a completely local dev setup. The IT lead begins attempting to control the flow of information to the team, and complains that Scrum is “loosey-goosey” and that we need a schedule to drive towards.
Iteration 5/6: The Scrum Master and team members escalate the continuing conflict with the IT lead, and action is taken to reduce his involvement. The Directors decide to wrap up the contract with the big 3 consulting firm. The vendor works daily with the team, but key features of the COTS package are still not delivered. Business stakeholders who are not engaged in the project but have their “sources” begin influencing the project sponsors and questioning the spending. A process improvement initiative that has been on the back burner for a year starts to gain momentum on the business side.
Iteration 7: The vendor delivers a first version of the features, with numerous limitations. The Directors are now openly discussing shutting the project down.
Iteration 8/9/10: About half the staff roll off the project; a smaller technical team continues to try to make progress with the tool.
The project begins winding down.
Looking at this project, we can see that the Directors did many things to try to ensure success, including using Scrum, hiring a big 3 consulting firm, and putting lots of people on the team. However, two root causes are pretty clear:
The project business case was driven by an IT Director and an Ops Director working together. The business that owned the current process (remember, this is product definition data generated by the business) and that giant spreadsheet was not driving the project and was not asking for these improvements.
The choice of platform was driven by a belief that the infrastructure needed to move to a COTS application rather than a custom application. The choice of the specific application was based on promised features that had not yet been released. Further, the application was chosen prior to the start of the project, before anyone had tried to use the tool to build the desired capabilities. When the team actually used the application, it was found to be very limited, and the business users found it painful to use. The Directors rejected this data; instead of replacing the tool, they chose to wait for it to improve.
These two issues, in my estimation, doomed the project well before any of us knew we would fail. Further, when the warning signs started to crop up, there was a failure to recognize and act on this data.
If we look at the principles of Lean and Agile, we see a number of violations. Had the principles been followed, the project might have been successful, or terminated much earlier; either outcome would have been better for the business.
1. Know your customer, engage them and do what they need.
The business stakeholders who owned this process were not engaged. Had they been engaged, they could have influenced process changes that would have enabled a more effective automation solution. When the business was engaged through the Product Owner, they were unsure at first and then became convinced the tool could not do what they required. They also did not see a need to change the process they had in place.
2. Decide at the Last Responsible Moment
The Directors became emotionally invested in the vendor’s proposed solution long before the solution was available. This meant that other options, such as a custom application, were never evaluated. A set-based approach, in which multiple potential solutions are explored in parallel, would have provided a much higher level of learning and much better data on which to base the platform decision.
3. Working software is the primary measure of progress
When the team delivered the first capabilities in the tool in iteration 3, the customer feedback was hard for the stakeholders to hear: users did not like what they were trying to use. The response was to stop inviting users to try the software. This is clearly denial of the reality that the project was at risk of never meeting user expectations; a better response would have been to change the tool, or to cancel the project and spend the money elsewhere.
Interestingly, while this IT project ended up delivering nothing, a new project was started by the business to implement major process improvements based on Lean. We also used Scrum for that project, and it was a huge success. I will cover it in a later posting.
Interested to hear your thoughts: have you had similar experiences?
Post Update based on scrumdevelopment list discussion.
No one starts out a project or a mountain climb with the intention to fail. My hope with this blog and the ensuing discussion was to show how events unfolded in a way that led to the failure. I believe it was the process and a series of decisions that led to the failure. In the case of this project, the average team member had around eight years of experience, and key roles were played by experienced staff, some with MBAs from the top three schools. About half of the team had recently completed successful agile projects, one being changes to the very same Java application that the new system would replace. I would start another project with the same people in that same business area today.
A bit about the COTS application and why it was picked. The COTS application was used by other customers for custom forms-based workflows for operational (standardized) projects. The platform supported a high level of configuration for this type of work. One of the goals of the project was to provide a system that would give business users more control over their work, where they could use configuration to implement automation instead of requiring IT staff to write code. A configurable process tool would replace Excel and the manual processes that made up the front end. So the COTS application was one of a few from a reputable vendor, with some capabilities in production and a customer base. A key capability it did not have was (at the risk of oversimplifying again) the ability to replace that huge spreadsheet with an automated, reusable system, the “catalog” I mentioned. That part was slideware, though it was heavily sold by the vendor.
The organizational structure played a role as well. The silos of business (generate ideas), operations (run the engine), and IT (build new systems) did not work together. The “pain” and cost being addressed were experienced by operations, who were used to requesting systems from IT. The business silo did not understand the downstream cost of their manual processes. This project was trying to bring process improvement, data re-use, and automation to this critical part of the business. However, the business users, many of them analysts, were comfortable making updates to an Excel file and sending it on for someone else to worry about things like reconciliation against other data, constraints, and products. While data re-use was happening, it was invisible to anyone but the person copying and pasting. Bringing in an automation solution might have meant more work for those analysts, but major time and effort savings down the line. When we started to see the limitations of the system for business users, the time savings across the whole production line were one justification for continuing.