Archive for 'Project Management'

“Agile teams produce a continuous stream of value, at a sustainable pace, while adapting to the changing needs of the business.” – Elisabeth Hendrickson

In this definition, “business value” refers to shippable code. Agile teams release business value in the form of shippable code frequently, at least once a month.
“Sustainable pace” means that agile teams continuously add capabilities to the emerging system at more or less the same velocity, assuming no increase in team size.

Thus, Agile ensures that code is built sustainably and to a high quality. This article explains how the Agile Manifesto might apply to testing, and describes the roles and responsibilities of a tester on an agile project.

“If you don’t know how to work together, whatever your defined processes, I can all but guarantee that your processes are not being followed” – James Bach

Team building should take the place of processes. The whole development team should commit to delivering a high-quality product. Testing is a key activity that should involve testers, developers, and customer representatives; it should not be done by testers in isolation. Developers write automated unit tests before starting to code, an approach known as Test-Driven Development (TDD). In an agile environment the distinction between a tester and a developer is blurred. Testers are not solely responsible for quality, nor even its primary owners; quality is a shared responsibility of the whole team. Individuals on an agile team may specialise in a particular role but will take on different roles depending on the context. Testers who are out of their depth reading code, or uncomfortable influencing design decisions, may require some training to fit into the agile team. The estimate for test effort should be made as part of the overall implementation estimate for the project.
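As a hypothetical illustration of the test-first rhythm described above, the unit test is written before the production code exists; the function and rule below are invented purely for the example:

```python
import unittest

def apply_discount(price, percent):
    """Production code written after, and driven by, the tests below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # In TDD these tests come first: they fail until apply_discount is written.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    # exit=False so the script can continue after the test run
    unittest.main(exit=False, verbosity=0)
```

The point is the order of work, not the specific rule: the failing test defines the behavior, and the implementation exists only to make it pass.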

Rigid processes sometimes hinder the team’s progress towards quality goals and impose friction on interactions between people. Testers need to be on their guard against the impact of rigid processes on quality. For a long time the focus of testing has been short-sighted, with an emphasis on functional correctness rather than value to people; this problem is a symptom of the bigger problem of rigid processes. For example, at company XYZ, if a tester logs a defect that lacks the support of a requirement specification document, the issue is treated as a non-issue per the process. The tester must back the defect with the automated test case that failed and the spec document it violates; otherwise the developers do not fix the issue and the test managers do not treat the defect as a priority item. In these situations there needs to be a careful risk assessment of the issue, through shared conversations with the developer and strong support from management, if the issue indeed falls into a high-risk area.



Myth #1:

Manager:  Hey Team! We are agile now, therefore the requirements are not documented

Team: No problem, we can deal with it

Having no documentation is called agile…but wait: how do we produce elegant, high-quality code without requirements documentation? It is a myth that agile eliminates all documentation. While agile testing does not favor detailed specifications, the requirements should still be clearly documented at a high level.

Conversations and shared understanding take the place of heavy-weight documentation. For testers this could mean favoring manual tests over automated tests, which gives them the freedom to discover, diagnose, and exploit the product while testing. This requires exploratory testing skills: testers should be capable of looking at the big picture from multiple angles and asking good questions that developers and customers might not think to ask. Rigid test scripts and automated procedures do not always reveal critical issues. Exploratory testing can help to spot missing features or opportunities for product improvement. Testers can also collaborate with the developers and read the automated unit tests; instead of spending energy on specifications and requirements documents, they can get into the code itself.

The agile testing mind-set is all about collaborating with customers, looking at the big picture, and providing and obtaining feedback. Customer collaboration is at the heart of agility: testers collaborate closely with customers, and working closely with them is favored over becoming a customer proxy. Testers do not merely execute against a negotiated bill of work; they collaborate iteratively with the customer, state the user’s point of view in team meetings and bug reports, and may even fail a release to protect the customer if the quality is unacceptable.

Myth #2:

Manager: Hey Team, the management has just decided to add twenty-two new features to the product!

Team: Great! That’s good for the customer!

As an agile team we never resist changes and can accept any number of changes at any time…but wait: if twenty-two new features are added to the product, what happens to its stability?

While agile testing adapts to changing business needs, it does not encourage adding so many new features at once that the code becomes unstable and quality breaks. Agile focuses on delivering a small set of core features with high quality, one at a time, and adds the other capabilities in subsequent releases at a fast pace. Changes should be prioritised and, if required, deferred to the next release.

Change is at the heart of agile projects. The testing team might be expected to follow a plan when a change happens in the product, but the plan should support the change. After a change has been implemented, testers also need to test the areas untouched by the change, to ensure the ongoing stability of the product.

“Don’t leave before you leave”

 Sheryl Sandberg, Fortune’s Most Powerful Women.

Here is a TED talk worth watching for young, aspiring women in tech. Sheryl Sandberg breaks the spell of the inner limitations that pose barriers to young women who are geared to succeed in tech. She inspires young women to look at technology as a viable and wonderful option, and says it is time to change the ratio of women in leadership.

“Unless you are breaking stuff, you are not moving fast enough.”

– Mark Zuckerberg



Organizational Project Management Maturity Model (OPM3) is the project management maturity model proposed by the Project Management Institute (PMI). It sets the standard for excellence in project, program, and portfolio management best practices. OPM3 explains the capabilities necessary to achieve those best practices and to deliver projects successfully, consistently, and predictably. A Capability Assessment reports one’s existing capabilities (level of maturity) and provides specific, actionable, and manageable options for developing existing capabilities further.

OPM3 holds that organizational project management consists of three layers (project, program, and portfolio), in which maturity should be improved continuously through the stages of standardization, measurement, control, and improvement. OPM3 provides organizations with an assessment tool to evaluate their maturity in each of these three layers and to develop improvement plans aligned with best practices and organizational strategy. It gives organizations an objective basis on which to assess their maturity on a continuous scale of 0 to 100%, based on a standard developed and accepted globally by the project management community.


OPM3 Model

Credit: Rubrik Wissen


Some techniques to develop improvement plans:

1. Metrics reveal the health of a project, and the devil is in the details. Tracking metrics can help identify process gaps and develop improvement plans. Some examples of metrics that can be tracked in a project:

  • Effort variance: tasks planned versus actual, and hours planned versus actual
  • Defect backlog: new and open defects by severity, functional area, and release
  • Customer-weighted requirements prioritization matrix
  • Bug triage: prioritize bugs according to customer impact and the complexity of the fix
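As an illustrative sketch of the first bullet, effort variance can be computed as (actual − planned) / planned per task and for the project as a whole; the task names and hours below are made up:

```python
# Hypothetical tasks with planned vs. actual hours (invented numbers).
tasks = [
    {"name": "design",  "planned_hours": 40, "actual_hours": 52},
    {"name": "coding",  "planned_hours": 80, "actual_hours": 76},
    {"name": "testing", "planned_hours": 30, "actual_hours": 45},
]

def effort_variance(planned, actual):
    """Relative effort variance; positive values indicate overrun."""
    return (actual - planned) / planned

# Per-task variance reveals where the plan diverged from reality.
for t in tasks:
    v = effort_variance(t["planned_hours"], t["actual_hours"])
    print(f'{t["name"]}: {v:+.0%}')

# Aggregate variance summarizes the overall health of the plan.
total_planned = sum(t["planned_hours"] for t in tasks)
total_actual = sum(t["actual_hours"] for t in tasks)
print(f"project: {effort_variance(total_planned, total_actual):+.0%}")
```

Tracked sprint over sprint, a persistently positive variance is exactly the kind of process gap the metric is meant to surface.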

2. Enable communication between the Project team and Auditors where cross-pollination of ideas can take place.

3. Document best practices and lessons learned

Running an organization without access to relevant and pertinent performance metrics is like driving a car without any instruments. The more inspiring the final goal and the more challenging the deadline, the more key stakeholders are tempted to compromise on sound principles of planning, management, and control. Yet it is critical to measure project performance against cost and schedule objectives, process improvement activities against business maturity objectives, and system performance and desired outcomes against Return on Investment (ROI) objectives.


Credit: Flickr – Umer Ahmed Khan


Hard indicators are facts that can be measured directly, whereas soft indicators are less tangible conditions that must be measured indirectly. The time it takes to execute a task, or how much it costs, are typical hard indicators. Quality, expressed as customer satisfaction, is a typical soft indicator. Hard indicators are by far the more widely used; many see soft indicators as so inaccurate that they are rarely useful. Yet as Deming (1986) stated, the most important numbers are often unknown, and management by numbers alone is one of the deadly diseases that have ruined many enterprises. Planning for the unknown, through a detailed upfront scoping process, is one of the major critical success factors for a project.

The success of any software project largely depends on effective estimation of effort, time, and cost. Estimation helps in setting realistic targets for completing a project. The estimates that most need to be accurate are those of effort and schedule, since they give a reasonable idea of the project cost. If the team starts out fully aware of the likely reasons schedules fall apart, and takes action to minimize those risks, the schedule becomes a more useful and accurate tool in the development process.

Schedules are a kind of prediction.  Good schedules come only from a team that relentlessly pursues and achieves good judgment. There is no magic formula or science for creating perfect schedules. Schedules don’t have to be perfect. Schedules need to be good enough for the team to believe in, provide a basis for tracking and making adjustments, and have a probability of success that satisfies the client, the business or the overall project sponsor. Good work estimates have a high probability of being accurate.


Good estimates come from good designs

Good engineering estimates are possible only if you have two things: good information and good engineers. If the specs are crap, and a programmer is asked to conjure up a number based on an incomprehensible whiteboard scribbling, everyone should know exactly what they are getting: a fuzzy scribble of an estimate.

There are known techniques for making better estimates. The most well-known is PERT, which tries to minimize risk by taking a weighted average of optimistic, most likely, and pessimistic estimates. This is good for two reasons. First, it forces everyone to realize that estimates are predictions, and that there is a range of possible outcomes. Second, it gives project management a chance to throttle how aggressive or conservative the schedules are.
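A minimal sketch of the PERT three-point calculation follows; the task durations are invented for illustration:

```python
# PERT (beta-distribution) estimate: a weighted average of the optimistic (O),
# most likely (M), and pessimistic (P) estimates, E = (O + 4M + P) / 6.
# The standard deviation (P - O) / 6 expresses the spread of possible outcomes.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Invented example task: 4 days best case, 6 days likely, 14 days worst case.
expected, std_dev = pert_estimate(4, 6, 14)
print(f"expected: {expected:.1f} days, std dev: {std_dev:.2f} days")
```

The weighting toward the most likely value keeps one pessimistic outlier from dominating, while the standard deviation lets management choose how aggressive a commitment to make (e.g. quoting E + 2σ for a conservative schedule).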