Archive

Monthly Archives: September 2023

Modus Operandi in Software Development

Preamble

In my previous post, there was an implicit assumption (under option 2) that development teams attend to folks’ needs – at least, the needs of all the Folks That Matter™ – as a core aspect of their modus operandi. I’ve only ever seen one team do this, but in principle there’s no reason why every team could not work this way.

Definition

The term “modus operandi” describes the standard methods and practices that individuals, teams or organisations use to achieve specific tasks. In the context of software development, it refers to the patterns, tools, and methodologies employed to write, deliver, and deploy code.

Preferred Programming Languages

Developers often specialise in certain programming languages like Python, JavaScript, or C++. The choice of language can dictate other elements of the development process, such as the libraries and frameworks that can be used.

Development Methodologies

The development methodology sets the stage for how a project progresses. Common methodologies include Agile (and frameworks such as Scrum) and Waterfall. Each comes with its own set of rules and approaches to tasks such as planning, executing, and reviewing work.

Toolsets

Software development usually involves a suite of tools, ranging from integrated development environments (IDEs) to version control systems like Git. These tools streamline various aspects of the development workflow.

Approaches to Quality

The strategies for producing quality products also form part of a developer’s modus operandi. Some teams may favour prevention (e.g. ZeeDee), others testing-based methods such as Test-Driven Development (TDD), while still others might opt for inspections, or more exploratory forms of testing.
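The TDD approach mentioned above follows a test-first rhythm. Here is a minimal sketch of that rhythm in Python; the `slugify` function and its behaviour are invented purely for illustration:

```python
# A minimal sketch of the TDD rhythm: write a failing test first,
# then just enough code to make it pass, then refactor.
# The "slugify" function and its expected behaviour are hypothetical.

# Step 1 (red): the test is written before the implementation exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Modus Operandi") == "modus-operandi"

# Step 2 (green): the simplest implementation that passes the test.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Running the test now passes; a refactoring step could follow,
# with the test guarding against regression.
test_slugify_lowercases_and_hyphenates()
```

The point is the order of work, not the example itself: behaviour is specified by a test before any production code is written.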

Code Review and Collaboration

The way developers collaborate also speaks to their modus operandi. Some might prefer pair or ensemble programming, while others could lean more towards code reviews (synchronous or asynchronous).

Summary

In software development, a modus operandi isn’t just a fixed set of steps but rather a combination of various elements that make up a development approach. This includes the choice of programming languages, methods, tools, and collaboration styles. Understanding a team’s or individual’s modus operandi can be crucial for effective collaboration and synergy.

The Fallacy of Measuring Developer Productivity: McKinsey’s Misguided Metrics

At least the execrable, and totally misinformed, recent McKinsey article “Yes, you can measure software developer productivity” has us all talking about “developer productivity”. Not that that’s a useful topic for discussion, btw – see “The Systemic Nature of Productivity”, below. Even talking about “development productivity” – i.e. that of the whole development department – would have systems thinkers like Goldratt spinning in their graves.

The Systemic Nature of Productivity

Productivity doesn’t exist in a vacuum; it’s a manifestation of the system in which work occurs. This perspective aligns with W. Edwards Deming’s principle that 95% of the performance of an organisation is attributable to the system, and only 5% to the individual. McKinsey’s article, advocating for specific metrics to measure software developer productivity, overlooks this critical context, invalidating its recommendations from the outset.

Why McKinsey’s Metrics Miss the Mark

Quantitative Tunnel Vision

McKinsey’s emphasis on metrics ignores the complex web of factors that actually contribute to productivity. This narrow focus can lead to counterproductive behaviours.

The Dangers of Misalignment

Metrics should align with what truly matters in software development. By prioritising the wrong metrics, McKinsey’s approach risks incentivising behaviours that don’t necessarily add value to the project or align with organisational goals.

Predicated on Fallacies

McKinsey’s suggestions are riddled with fallacious assumptions, including:

  • Benchmarking – long discredited.
  • Contribution Analysis – focused on individuals. Music to the ears of traditional management but oh so wrong-headed.
  • Talent – See, for example, Deming’s 95/5 for the whole fallacious belief in “talent” as a concept.
  • Measuring productivity (measure it, and productivity will go down).

The Real Measure: Needs Attended To and Needs Met

The Essence of Software Development

The core purpose of business – and thus of software development – is to meet stakeholders’ needs. The most relevant metrics therefore centre on these questions: How many stakeholders’ needs have been identified? How many have been and are being attended to? How many have been successfully met? These metrics encapsulate the real value generated by a development team – as an integrated part of the business as a whole. (See also: The Needsscape.)
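The needs-based metrics proposed above can be sketched in a few lines of code. Everything here – the `Need` record, its fields, and the sample data – is a hypothetical illustration, not a prescribed implementation:

```python
# Hypothetical sketch of needs-based metrics: how many stakeholder
# needs are identified, attended-to, and met. The Need record and
# the sample data below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Need:
    stakeholder: str
    description: str
    attended_to: bool = False
    met: bool = False

needs = [
    Need("users", "fast search", attended_to=True, met=True),
    Need("ops", "simple deployment", attended_to=True, met=False),
    Need("finance", "usage reporting"),  # identified, not yet attended to
]

identified = len(needs)
attended = sum(n.attended_to for n in needs)
met = sum(n.met for n in needs)

print(f"identified={identified} attended-to={attended} met={met}")
# With this sample data: identified=3 attended-to=2 met=1
```

The value is in what gets counted – needs, per stakeholder, across the whole business – rather than in the trivial arithmetic.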

Beyond the Code

Evaluating how well needs are attended to and met requires a focused approach. It includes understanding stakeholders’ requirements, effective collaboration within and across teams and departments, and the delivery of functional, useful solutions. (Maybe not even software – see: #NoSoftware).

Deming’s 95/5 Principle: The Elephant in the Room

The System Sets the Stage

Ignoring the role of the system in productivity is like discussing climate change without mentioning the Sun. Deming’s 95/5 principle suggests that if you want to change productivity, you need to focus on improving the system, not measuring individuals, or even teams, within it.

The Limitations of Non-Systemic Metrics

Individual metrics are the 5% of the iceberg above the water; the system—the culture, processes, and tools that comprise the working environment—is the 95% below. To truly understand productivity, we need metrics that evaluate the system as a whole, not just the tip of the iceberg. And the impact of the work (needs met), not the inputs, outputs or even outcomes.

The Overlooked Contrast: Collaborative Knowledge Work vs Traditional Work

McKinsey’s article advocates for yet more Management Monstrosities, perpetuating the category error of treating CKW – collaborative knowledge work – as no different from traditional models of work.

The Nature of the Work

Traditional work often involves repetitive, clearly defined tasks that lend themselves to straightforward metrics and assessments. Think of manufacturing jobs, where the number of units produced per time period or per resources committed can be a direct measure of productivity. Collaborative knowledge work, prevalent in fields like software development, is fundamentally different. It involves complex problem-solving, creativity, and the generation of new ideas, often requiring deep collaboration among team members.

Metrics Fall Short

The metrics that work well for traditional jobs are ill-suited for collaborative knowledge work. In software development, such metrics can be misleading. The real value lies in innovation, problem-solving, and above all meeting stakeholders’ needs.

The Role of Team Dynamics

In traditional work settings, an individual often has a clear, isolated set of responsibilities. In contrast, collaborative knowledge work is highly interdependent. This complexity makes individual performance metrics not just inadequate but potentially damaging, as they can undermine the collaborative ethos needed for the team to succeed.

The Importance of Systemic Factors

The system in which work takes place plays a more significant role in collaborative knowledge work than in traditional roles. Factors like communication channels, decision-making processes, and company culture (shared assumptions and beliefs) can profoundly impact productivity. This aligns with Deming’s 95/5 principle, reinforcing the need for a systemic view of productivity.

Beyond Output: The Value of Intellectual Contributions

Collaborative knowledge work often results in intangible assets like intellectual property, improved ways of working, or enhanced team capabilities. These don’t lend themselves to simple metrics like ‘units produced’ but are critical for long-term success. Ignoring these factors, as traditional productivity metrics tend to do, gives an incomplete and potentially misleading picture of productivity.

A Paradigm Shift is Needed

The nature of collaborative knowledge work demands a different lens through which to evaluate productivity. A shift away from traditional metrics towards more needs-based measures is necessary to accurately capture productivity in modern work environments.

Quality and Productivity: Two Sides of the Same Coin

The Inextricable Link

Discussing productivity in isolation misses a crucial aspect of software development: quality. Quality doesn’t just co-exist with productivity; it fundamentally informs it. High-quality work means less rework, fewer bugs, and, ultimately, quicker and more effective delivery to market.

Misguided Metrics Undermine Quality

When metrics focus solely on outputs they can inadvertently undermine quality. For example, rushing to complete tasks can lead to poor design choices, technical debt, and an increase in bugs, which will require more time to fix later on. This creates a false sense of productivity while compromising quality.

Quality as a Measure of User Needs Met

If we accept that the ultimate metric for productivity is “needs met,” then quality becomes a key component of that equation. Meeting a user’s needs doesn’t just mean delivering a feature quickly; it means delivering a feature that works reliably, is easy to use, and solves the user’s problem effectively. In other words, quality is a precondition for truly meeting needs.

A Systemic Approach to Quality and Productivity

Returning to Deming’s 95/5 principle, both quality and productivity are largely influenced by the system in which developers work. A system that prioritises quality will naturally lead to higher productivity, as fewer resources are wasted on fixing errors or making revisions. By the same token, systemic issues that hinder quality will have a deleterious effect on productivity.

Summary: A Call for Better Metrics

Metrics aren’t the problem; it’s the choice of metrics that McKinsey advocates that demands reconsideration. By focusing on “needs attended to” and “needs met,” and by acknowledging the vital role of the system, organisations can develop a more accurate, meaningful understanding of holistic productivity, and the role of software development therein. Let’s avoid the honey trap of measuring what’s easy to measure, rather than what matters.

Afterword

As with so much of McKinsey’s tripe, the headline contains a grain of truth – “Yes, you can measure software developer productivity”. But the nitty-gritty of the article is just so much toxic misinformation. Many managers will seize on it anyway. Caveat emptor!

Cracking the Quality Code: Deming and Crosby

W. Edwards Deming is a name synonymous with quality management and process improvement, particularly in Japan where he helped revive post-war industries. Deming’s approach centres around Statistical Process Control (SPC) and the “Plan-Do-Study-Act” (PDSA) cycle, which emphasises iterative improvement.

His core philosophy manifests through “The 14 Points of Management,” guidelines designed to steer management’s decisions and actions towards achieving quality. Here are a few key points to consider:

  1. Create Constancy of Purpose: Long-Term Over Short-Term – Deming believed in focusing on long-term goals instead of short-term profits.
  2. Adopt the New Philosophy: Innovation Over Status Quo – Deming urged the adoption of new approaches to improve quality and productivity.
  3. Cease Dependence on Inspection: Build Quality In, Don’t Inspect It In – For Deming, quality should be built into the process, rather than inspected into the finished product.
  4. Stop Awarding Business Based on Price: Value Over Cost – Deming advised prioritising value and quality when choosing suppliers, rather than just looking at price.
  5. Improve the System: Continual Improvement Over Quick Fixes – Deming emphasised the need for continual improvement in products, services, and processes.
  6. Use Training for Skills: Education Over On-the-Job Training – For Deming, well-planned training programmes were preferable to quick, on-the-job training.
  7. Implement Leadership: Leadership Over Mere Supervision – Deming believed in guiding workers to better performance through leadership, rather than simply supervising them.
  8. Drive Out Fear: Openness Over Secrecy – Deming recommended creating a work environment where employees feel secure, leading to better quality work.
  9. Break Down Barriers: Teamwork Over Individual Performance – According to Deming, departments within an organisation must work together as a team for better quality.
  10. Eliminate Slogans: Reality Over Rhetoric – Deming criticised empty slogans that demand performance without providing methods. “By what method?”
  11. Eradicate Numerical Quotas: Quality Over Quantity – Deming advised against numerical production quotas that sacrifice quality for volume.
  12. Remove Barriers to Pride of Workmanship: Satisfaction Over Speed – Deming believed in allowing employees to take pride in their work, rather than rushing them through tasks.
  13. Institute Education and Retraining: Learning Over Layoffs – Deming advocated continual learning and personal development of staff.
  14. Take Action: System-Wide Changes Over Piecemeal Adjustments – Deming called for a comprehensive approach to organisational change, rather than making small, disconnected changes, a.k.a. “tinkering”.

Deciphering Philip B. Crosby’s Four Absolutes of Quality

Philip B. Crosby, another heavyweight in the quality management arena, had a simpler but equally impactful approach. He believed in the concept of “Quality is Free,” which suggests that investing in quality from the get-go actually reduces costs in the long run. Crosby laid out the “Four Absolutes of Quality” as follows:

  • Quality Means Conformance, Not Goodness: Quality isn’t about being excellent; it’s about meeting specifications or requirements.
  • Quality Comes from Prevention, Not Detection: The emphasis here is on stopping mistakes before they happen rather than fixing them afterward.
  • Quality Performance Standard is Zero Defects, Not ‘That’s Close Enough’: Crosby propagated the idea that anything less than perfect is unacceptable.
  • Quality is Measured by the Price of Nonconformance, Not Indices: Crosby quantified quality as the cost involved when failing to meet the standard.

The Face-off: Deming vs. Crosby

Here are some key distinctions between Deming’s and Crosby’s perspectives on quality:

Philosophy vs Pragmatism

  • Deming: Believed in a philosophical shift across the entire organisation, emphasising continual improvement as part of its DNA.
  • Crosby: Took a more pragmatic approach with clear, measurable standards, focusing on specific, achievable outcomes.

Definition of Quality

  • Deming: Didn’t provide a single definition for quality but suggested that it is a continuous quest for improvement.
  • Crosby: Defined quality strictly as conformance to requirements or specifications (not the written kind we think of today, but the actual needs of the customer).

Approach to Errors and Defects

  • Deming: Advocated for a constant, iterative process to reduce defects but didn’t explicitly demand a zero-defects approach.
  • Crosby: Pushed for a zero-defects standard, indicating that anything less was not acceptable.

Preventive vs Reactive Measures

  • Deming: While he supported preventive actions, his model allows for reactive measures as well, with the PDSA cycle helping to correct course.
  • Crosby: Strictly focused on prevention over detection, emphasising that errors should be eradicated before they occur.

Scope of Application

  • Deming: Offered a broad framework, the “14 Points of Management,” that touched on multiple aspects of an organisation.
  • Crosby: Provided a narrower focus through his “Four Absolutes of Quality,” zeroing in on specific key principles.

Cost Implications

  • Deming: Didn’t directly quantify the cost of poor quality, although he suggested that focusing on quality would naturally result in cost savings.
  • Crosby: Directly correlated the cost of poor quality with the price of nonconformance (PoNC), providing a numerical way to gauge it.
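Crosby’s Price of Nonconformance (PoNC) boils down to summing what the organisation spends because work did not conform first time. A hypothetical illustration – all cost categories and figures below are invented examples, not real data:

```python
# Hypothetical illustration of Crosby's Price of Nonconformance (PoNC):
# quality costed as everything spent because work did not conform
# to requirements first time. All categories and figures are invented.

failure_costs = {
    "rework": 42_000,               # fixing defects found after delivery
    "retesting": 8_500,             # re-running verification after fixes
    "support_escalations": 15_000,  # handling customer-reported faults
}

ponc = sum(failure_costs.values())
print(f"Price of Nonconformance: {ponc:,}")
# With these sample figures: 65,500
```

The appeal of PoNC for Crosby was precisely this tangibility: a single figure that management can track downward as prevention improves.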

Flexibility vs Rigidity

  • Deming: More flexible, as his PDSA cycle allows organisations to adapt and evolve over time.
  • Crosby: More rigid due to his zero-defects policy and strict conformance to requirements.

These distinctions boil down to different focal points in the journey towards quality: Deming offers a holistic, philosophical path, while Crosby provides a more tangible, practical, metric-driven route. Both have their merits, and the choice between them often depends on an organisation’s prevailing assumptions and beliefs.

Fertile Mindsets

Now let’s directly compare the foundational beliefs and assumptions needed to implement Deming’s versus Crosby’s approaches to quality.

Philosophy of Continuous Improvement vs Zero-Defect Mentality

  • Deming: Assumes that improvement is a never-ending journey, fuelled by the belief in the value of continuous, iterative processes.
  • Crosby: Operates on the belief that the goal is to reach a zero-defect standard, where each and every error is considered a failure.

Systems Thinking vs Focused Accountability

  • Deming: Necessitates a belief in systems thinking, where every part of the organisation is interconnected and contributes to quality.
  • Crosby: Focuses more on specific, measurable areas and assumes that quality can be attained by holding individuals or departments accountable for their specific roles.

Employee Involvement vs Strict Adherence

  • Deming: Assumes that employees are part of the solution and not the problem, advocating for an organisation-wide culture of quality.
  • Crosby: While not discounting the participation of employees, Crosby emphasised adherence to pre-defined standards, with less emphasis on employee involvement beyond meeting those standards.

Long-Term Vision vs Short-Term Metrics

  • Deming: Assumes that the organisation is committed to long-term goals and is willing to invest in long-term improvements without immediate ROI.
  • Crosby: Emphasised achieving short-term gains as building blocks toward overall quality. Achieving and maintaining a zero-defect status can be seen as a series of short-term objectives that eventually lead to long-term quality improvements.

Flexibility vs Rigidity in Execution

  • Deming: Beliefs must be flexible enough to adapt and change as new information and data become available through the PDSA cycle.
  • Crosby: Assumes a rigid, unyielding stance towards goals and metrics, with no room for deviation from the set standards.

Educational Investment vs Proactive Prevention

  • Deming: The organisation must believe in continuous learning and personal development as fundamental aspects of quality improvement.
  • Crosby: Emphasises proactive prevention of errors, believing that the best way to maintain quality is to prevent mistakes in the first place.

Quantifying Quality

  • Deming: Quality is a more abstract concept, assumed to benefit the organisation in the long run, though not directly quantified.
  • Crosby: Operates on the assumption that quality can be precisely measured through the ‘price of nonconformance,’ making it a more tangible asset or liability.

By understanding these foundational differences, organisations can better assess which approach to quality management aligns with their existing culture, beliefs, and goals.

Final Musings

Both Deming and Crosby offer frameworks that have stood the test of time and continue to be referenced in the world of quality management. Picking one over the other isn’t a straightforward choice; it’s more about aligning their strategies with your organisation’s unique assumptions and beliefs. After all, quality isn’t a one-size-fits-all garment but a tailored fit that evolves with the organisation and its memeplex.