Monday, June 27, 2022

Creativity Myths

The 2021 study "Creativity myths: Prevalence and correlates of misconceptions on creativity" tries to separate facts from fiction on what creativity is and what it is not.

Psychology Today summarizes the 15 Myths About Creativity covered in that study.

This study examined the prevalence of known creativity myths across six countries from diverse cultural backgrounds and explored why some people believe in them more than others. Results revealed persistent, widespread biases in the public conception of creativity, such as attributing creative achievements to spontaneity and chance rather than persistence and expertise.

The researchers looked through the existing scientific literature to identify 15 creativity falsehoods, which they divided into four categories:

Creative Definition Myths
  • Creativity cannot be measured
  • Creativity is essentially the same as art
  • Creative ideas are naturally a good thing
  • Most people would not be able to distinguish abstract art from abstract children's drawings

Creative Process Myths
  • Creative accomplishments are usually the result of a sudden inspiration
  • Creative thinking mostly happens in the right hemisphere of the brain
  • Creativity tends to be a solitary activity

Creative Person Myths
  • Creativity is a rare gift
  • People have a certain amount of creativity and cannot do much to change it
  • Children are more creative than adults
  • Mental health disorders usually accompany exceptional creativity

Creative Stimulation Myths
  • People get more creative ideas under the influence of alcohol or marijuana
  • Long-term schooling harms the creativity of children
  • Brainstorming in a group generates more ideas than if people were thinking by themselves
  • One is most creative when given total freedom in one's actions

“A ‘naivety’ conceptualization of creativity is problematic for two reasons,” say the authors.

First, relating creativity to childlike behavior and chance implies low appreciation for the hard work behind creative achievements. Second, it externalizes relevant factors in the development of creativity. Emphasizing the role of inspiration rather than active engagement may undermine creativity by suggesting we need to wait until creativity hits us with a ‘Eureka’-experience.

 


The authors contrast the myths with the following
Creativity Facts
  1. To be considered creative, something has to be both novel and useful or appropriate
  2. Teachers appreciate the idea of creativity but not necessarily creative pupils
  3. Whether or not something is viewed as creative depends on zeitgeist and social norms
  4. Creativity is an important part of mathematical thinking
  5. Creative ideas are typically based on remembered information that is combined in new ways
  6. The first idea someone has is often not the best one
  7. Alpha activity (10Hz) in the brain plays an important role in creative thought
  8. Creative people are usually more open to new experiences
  9. Creative people are usually more intelligent
  10. Achieving a creative breakthrough in a domain (e.g., publishing a successful novel) typically requires at least 10 years of deliberate practice and work
  11. Men and women generally do not differ in their creativity
  12. A man's creativity increases his attractiveness to potential partners
  13. When stuck on a problem, it is helpful to continue working on it after taking a break
  14. Positive moods help people get creative ideas
  15. Getting rewarded for creative performance at work increases one’s creativity



Source: Creativity myths: Prevalence and correlates of misconceptions on creativity - ScienceDirect
Appendix B (.xls) references the research papers backing their claims.


The article Top Ten Myths About Creativity (futurefocusedlearning.net) lists 10 Creativity Myths mostly in line with the study's findings:

  1. Creativity belongs to the geniuses
  2. Creativity is making something from nothing
  3. Creativity can’t be forced
  4. Mental illness causes creativity
  5. Drugs make you more creative
  6. To be creative you need to be free
  7. Creativity belongs to the arts
  8. Creativity is a solitary activity
  9. Extrinsic motivation is detrimental to creativity
  10. To explain creativity is to damage it

Friday, June 24, 2022

Creativity

In a recent interview, Dr. Deepak Chopra made the following statement:

"Creativity is a spiritual experience, not a mental experience."

which made me investigate the topic a little further.


I came across the 2014 book Modeling Creativity (arxiv.org) by Tom De Smedt, who summarizes key insights in chapter 4 as follows:

Creativity refers to the ability to think new ideas. 

Creative ideas are grounded in fast, unconscious processing such as intuition or imagination, which is highly error-prone but allows us to “think things without thinking about them”.

Some of these near-thoughts can emerge without warning as an interesting solution: a moment of insight. This usually happens while tackling everyday problems. This is called little-c creativity.

Big-C creativity, the eminent ideas that fill history books, develops gradually. It requires interaction with slow, conscious processing, and that takes effort and motivation, because consciousness is lazy and tends to wander off.

Flexibility to switch between styles of thought – from unconscious to conscious, from goal-oriented to open-ended, from combinatory to explorative and transformative – is key to creativity: an agile mind.


Another Chopra quote summarizes the above:

"To harness true creativity, you must silence the conditioned mind."

Thursday, June 23, 2022

How to Misuse and Abuse DORA DevOps Metrics

In the How To Measure Software Delivery Using DORA Metrics (YouTube) presentation, Dave Farley, author of "Continuous Delivery" and "Modern Software Engineering", describes how one can apply DORA measurements to drive software development to deliver on this state-of-the-art approach, and also explores a few of the common mistakes that can trip us up along the way.

I found the reference to Bryan Finster's October 2021 presentation How to Misuse DORA DevOps Metrics especially useful.

Bryan contrasts common pitfalls & fallacies with pragmatic and realistic advice.

He also points out that the 4 prominent DORA metrics constitute only the tip of the iceberg.

My earlier blog article on Software Productivity Metrics provides further details on these additional metrics.

Slide #29 in Bryan's deck puts these metrics into perspective ("To improve flow, we must improve CI.") and makes the case for a set of balanced metrics (#34).

Summary ("Closing Thoughts")

  • The 4 outcome metrics are only the tip of the iceberg.
  • Product development is a complex interaction of people, process, and products. There are no simple metrics.
  • Measures require guardrails to avoid perverse incentives.
  • Metrics are a critical part of the improvement toolbox, but…
    • We cannot measure our way to improvement.
    • We use them to monitor and inform the next improvement experiment.
  • Don’t measure people, invest in them. They are our most valuable asset.


[July 26, 2022 -- Update:

Abi Noda discusses Finster's recent article in The DevOps Enterprise Journal | Spring 2022 (itrevolution.com) edition on the same topic:

Common misuses of the DORA metrics
  • Focusing too much on speed.
    • “Measuring deployment frequency without using quality metrics as guardrails will result in poor outcomes.”
  • Setting goals around DORA metrics. 
    • “The goal isn’t better DORA metrics… OKRs should be focused on desirable business outcomes.”
    • Choose goals, then choose metrics that align with those goals. 
  • Mistaking measuring DORA metrics as a way to improve. 
    • “[DORA metrics] don’t fix things.
      If we simply get a dashboard and do not buy into using it to identify improvement items, then nothing will get better.” 
  • Using DORA metrics as vanity metrics. 
    • “[DORA dashboards] are often used as ‘vanity radiators’ instead of information we can use to help us improve.”
  • Not including other signals in addition to the four key DORA metrics.
    • “The four key metrics DORA used to correlate behaviors of high-performing organizations are a small subset of the metrics recommended in the book Accelerate. They also represent only one aspect of the health of a system…”
]


[January 25, 2023 -- Update:

In his LinkedIn article, Abi Noda summarizes common pitfalls of the DORA metrics, according to Nathen Harvey, who helps lead DORA at Google:

1. Comparing teams to each other based on the four key metrics. Different projects have different needs, so we can think more critically about whether a team's metrics should fall in the low, medium, or high performance category given that context.

2. Setting goals for improving the DORA metrics, and in turn creating the wrong incentives. Instead set goals to improve the capabilities or factors that drive the DORA metrics.

3. Spending more effort on pulling data into dashboards than on actually improving.

4. Not using the metrics to guide improvement at the team level. When the teams doing the work aren’t using the metrics to improve, this defeats the purpose of the metrics.

5. Using "industry" as an excuse for not improving. Even companies in well-regulated industries can focus on improvement.

6. Assuming you’re already world-class, so your organization doesn’t need to focus on improving. If software delivery is no longer the constraint, then what is? Identify what is preventing teams from making progress and focus on that.

7. Fixating on the four DORA metrics (which are outcomes) and forgetting about the capabilities. “We don’t get better at those outcomes by focusing on the outcomes. We have to focus on the capabilities that drive those outcomes.”

The big takeaways:
  • the DORA metrics are outcomes not goals,
  • context matters, and
  • a team must look to understand and improve the factors that drive the DORA outcomes.

P.S. I like the "You might also deliver wrong things 10x faster" statement in the "Fantastic Facts and How to Use Them" presentation referenced in one of the comments.
]

Tuesday, June 29, 2021

Software Productivity Metrics

"When a measure becomes a target, 
it ceases to be a good measure." 
Goodhart's Law 

Traditional research presentations such as Software Productivity Decoded by Thomas Zimmermann (co-editor of the book Rethinking Productivity in Software Engineering discussed in the previous blog entry) have focused on productivity measures for the development of software delivered on premises, e.g.

  • Modification requests and added lines of code per year
  • Tasks per month
  • Function points per month
  • Source lines of code per hour
  • Lines of code per person month of coding effort
  • Amount of work completed per reported hour of effort for each technology
  • Ratio of produced logical code lines and spent effort
  • Average number of logical source statements output per month over the product development cycle
  • Total equivalent lines of code per person-month
  • Resolution time defined as the time, in days, it took to resolve a particular modification request
  • Number of editing events to number of selection and navigation events needed to find where to edit code


The Accelerate: State of DevOps Report 2019 has identified four metrics - commonly referred to as DevOps Research and Assessment (DORA) Metrics - that capture the effectiveness of the development and delivery process summarized in terms of throughput and stability. Their research has consistently shown that speed and stability are outcomes that enable each other.

They measure the throughput of the software delivery process using lead time of code changes from check-in to release, along with deployment frequency. Stability is measured using time to restore—the time it takes from detecting a user-impacting incident to having it remediated—and change fail rate, a measure of the quality of the release process.

In addition to speed and stability, availability is important for operational performance. Availability is about ensuring a product or service is available to and can be accessed by your end users.

  • Deployment Frequency
    How often an organization successfully releases to production.
  • Lead Time for Changes
    The amount of time it takes a code commit to get into production.
  • Change Failure Rate
    The percentage of deployments causing a failure in production.
  • Time to Restore Service
    How long it takes an organization to recover from a failure in production.
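
The four definitions above are straightforward aggregates once deployment events are recorded. Here is a minimal sketch, not an official DORA implementation; the deployment records and field names are entirely hypothetical:

```python
from datetime import datetime

# Hypothetical deployment records pulled from a CI/CD system and incident tracker
deployments = [
    {"committed": datetime(2022, 6, 1, 9),  "deployed": datetime(2022, 6, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2022, 6, 2, 10), "deployed": datetime(2022, 6, 3, 11),
     "failed": True,  "restored": datetime(2022, 6, 3, 13)},
    {"committed": datetime(2022, 6, 6, 8),  "deployed": datetime(2022, 6, 6, 12),
     "failed": False, "restored": None},
]

period_days = 30

# Deployment Frequency: successful releases per day over the observation period
deploy_frequency = len(deployments) / period_days

# Lead Time for Changes: median hours from code commit to production
lead_times = sorted((d["deployed"] - d["committed"]).total_seconds() / 3600
                    for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Change Failure Rate: share of deployments causing a production failure
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to Restore Service: mean hours from failure detection to recovery
restores = [(d["restored"] - d["deployed"]).total_seconds() / 3600
            for d in deployments if d["failed"]]
mean_restore = sum(restores) / len(restores) if restores else 0.0
```

A real pipeline would pull these timestamps automatically; the point is that each metric reduces to a simple aggregate once the underlying events are captured.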

I found the webinar by Jez Humble, CTO of DORA / Google Cloud Developer Advocate, to provide a great overview of the content of the DevOps Report 2019:

  • Performance metrics
  • Improving performance
  • Improving productivity 
  • Culture 
  • [Update 06/23/2022: See my new blog entry How to Misuse and Abuse DORA DevOps Metrics to avoid common pitfalls when applying DORA metrics in practice.]

    Julian Colina endorses these process metrics and warns against flawed output metrics in his summary of the Top 5 Commonly Misused Metrics:

    1. Lines of Code
    2. Commit Frequency
    3. Pull Request Count
    4. Velocity or Story Points
    5. "Impact"

    Dan Lines even makes the point that Velocity is the Most Dangerous Metric for Dev Teams.

    Velocity is a measure of predictability, not productivity. Never use velocity to measure performance and never share velocity outside of individual teams.

    He proposes the following alternative measures:

    • If speed to value is your main goal, consider Cycle Time.
    • If predictability is your main goal, look at Iteration Churn.
    • If quality is your priority, Change Failure Rate and Mean Time to Restore are good.

    Be aware of the kind of culture you want to create by applying these measures as measuring the wrong things for the wrong reasons can backfire. 

    Patrick Anderson acknowledges that DORA Metrics help to deliver more quickly but notes their limited focus on product development & delivery, leaving out the product discovery phase. He advocates end-to-end Flow Metrics as part of Value Stream Management to deliver the right things more quickly, at the right quality and cost, and with the necessary team engagement.

    • Flow Time measures the whole system from ideation to production—starting from when work is accepted by the value stream and ending when the value is delivered to the customer. 
    • Flow Velocity—How much customer value is delivered over time.
    • Flow Efficiency—What are the delays and wait times slowing you down?
    • Flow Load—Are demand vs. capacity balanced to ensure future productivity?
    • Flow Distribution—What are the trade-offs between value creation and protection work?
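
    As a rough sketch of how these flow measures fall out of timestamped work items (the item data and field names here are hypothetical, not Anderson's definitions):

```python
from datetime import datetime

# Hypothetical work items in a value stream: accepted -> delivered,
# with the hours of active (non-waiting) work spent on each
items = [
    {"accepted": datetime(2022, 6, 1), "delivered": datetime(2022, 6, 8),  "active_hours": 20},
    {"accepted": datetime(2022, 6, 3), "delivered": datetime(2022, 6, 10), "active_hours": 14},
    {"accepted": datetime(2022, 6, 7), "delivered": None,                  "active_hours": 5},
]

done = [i for i in items if i["delivered"]]

# Flow Time: elapsed days from acceptance to delivery
flow_times = [(i["delivered"] - i["accepted"]).days for i in done]

# Flow Efficiency: active work time as a share of total elapsed time
efficiencies = [i["active_hours"] / ((i["delivered"] - i["accepted"]).days * 24)
                for i in done]

# Flow Velocity: items delivered in the period
flow_velocity = len(done)

# Flow Load: items accepted but not yet delivered (work in progress)
flow_load = len(items) - len(done)
```

    Flow Distribution would additionally require tagging each item by work type (feature, defect, risk, debt) and reporting the mix over time.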

    Check out the Linearb.io White Paper 17 Metrics for Modern Dev Leaders if you are looking for even more metrics clustered into three categories of KPIs (work quality, delivery pipeline, investment profile) across two dimensions (iterations, teams).

    Somewhat similarly, Gitential has broken up its Value Drivers & Objective Metrics to Improve Your Software Development into four buckets (speed, quality, efficiency, collaboration).

    The SPACE Framework (The SPACE of Developer Productivity) features five different dimensions of looking at productivity; hence the acronym SPACE:

    • Satisfaction is how fulfilled developers feel with their work, team, tools, or culture; Well-being is how healthy and happy they are, and how their work impacts it.
    • Performance is the outcome of a system or process.
    • Activity is a count of actions or outputs completed in the course of performing work.
    • Communication and collaboration capture how people and teams communicate and work together.
    • Efficiency and flow capture the ability to complete work or make progress on it with minimal interruptions or delays, whether individually or through a system.

    To measure developer productivity, teams and leaders (and even individuals) should capture several metrics across multiple dimensions of the framework—at least three are recommended.



    Another recommendation is that at least one of the metrics include perceptual measures such as survey data. […]. Many times, perceptual data may provide more accurate and complete information than what can be observed from instrumenting system behavior alone.
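
    The guidance above (metrics spanning at least three SPACE dimensions, at least one of them perceptual) can be expressed as a simple check. The metric names and tagging scheme here are made up for illustration:

```python
# Candidate metrics tagged with their SPACE dimension and measurement type
metrics = [
    {"name": "quarterly developer satisfaction survey", "dimension": "Satisfaction",  "perceptual": True},
    {"name": "change failure rate",                     "dimension": "Performance",   "perceptual": False},
    {"name": "pull requests merged",                    "dimension": "Activity",      "perceptual": False},
    {"name": "code review turnaround time",             "dimension": "Communication", "perceptual": False},
]

def balanced(selection):
    """Check a metric set against the SPACE guidance: span at least three
    dimensions and include at least one perceptual (survey-based) measure."""
    dimensions = {m["dimension"] for m in selection}
    has_perceptual = any(m["perceptual"] for m in selection)
    return len(dimensions) >= 3 and has_perceptual
```

    For example, dropping the survey metric would leave three dimensions but no perceptual signal, so the set would fail the check.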

    Including metrics from multiple dimensions and types of measurements often creates metrics in tension; this is by design, because a balanced view provides a truer picture of what is happening in your work and systems. 

    This leads to an important point about metrics and their effect on teams and organizations: 
    They signal what is important. 

    One way to see indirectly what is important in an organization is to see what is measured, because that often communicates what is valued and influences the way people behave and react. […]. As a corollary, adding to or removing metrics can nudge behavior, because that also communicates what is important.

    Listen to the Tech Lead Journal podcast #43 - The SPACE of Developer Productivity and New Future of Work - Dr. Jenna Butler (Microsoft) for an overview of the framework and other research initiatives related to the "New Future of Work". See link above for transcript, mentions and noteworthy links to related research articles.


    In this context you may want to take note of Steven A. Lowe's six heuristics for effective use of metrics:

    1. Metrics cannot tell you the story; only the team can do that.
    2. Comparing snowflakes is waste.
    3. You can measure almost anything, but you can't pay attention to everything.
    4. Business success metrics drive software improvements, not the other way round.
    5. Every feature adds value; either measure it or don't do it.
    6. Measure only what matters now.

    And if you want to read more about the topic, this blog provides further references.

    Monday, June 28, 2021

    Productivity in Software Engineering

    Here are my key takeaways from the 2019 book Rethinking Productivity in Software Engineering (edited by Caitlin Sadowski and Thomas Zimmermann). 


    This open access book collects the wisdom of the 2017 "Dagstuhl" seminar on productivity in software engineering, a meeting of community leaders, who came together with the goal of rethinking traditional definitions and measures of productivity.

    The results of their work include chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity.

    Key Findings

    • There is no single metric to measure software development productivity
      and attempting to find one is counterproductive
    • Many productivity factors (technical, social, cultural) need to be considered (> Context)
      • Relationship between developer job satisfaction and productivity
    • Different stakeholders may have varied goals and interpretations of any sort of productivity measurement (> Alignment)
    • Individual developers, teams, organizations and market have different perceptions of productivity (> Level)
      • Productivity goals may be in tension across these different groups
      • Developers do not like metrics focused on identifying the productivity of individual engineers
    • Productivity perceptions vary greatly according to the period of time that is considered (> Time)
    What to do instead: 
    • Design a set of metrics tailored for answering a specific goal
    • Invest in finding and growing managers who can observe productivity 


    BTW, the myth of the "10x programmer" also gets debunked.

    Most of the key points outlined above are covered in Chapter 2 No Single Metric Captures Productivity (Ciera Jaspan, Caitlin Sadowski; Google) which quotes Bill Gates:

    “Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.”


    A variety of alternative metrics are discussed in my next blog article.

    Friday, September 28, 2018

    Why Startups Fail

    The research firm CB Insights began analyzing the reasons for start-up failures in 2014 and has provided 12 updates since.

    Here's the list of the Top 20 issues they have identified over the years as of February 2018:


    Source: CB Insights, The Top 20 Reasons Startups Fail



    Sunday, October 9, 2016

    Google’s Nine Principles of Innovation


    In 2013, Google codified a new set of “Nine Principles of Innovation,” which updated the version first unveiled by former Google executive Marissa Mayer in 2008.
    1. Innovation comes from anywhere.
    2. Focus on the user.
    3. Think 10x, not 10% - Aim to be ten times better.
    4. Bet on technical insights.
    5. Ship and iterate.
    6. 20% time - Give employees 20 percent time.
    7. Default to open.
    8. Fail well.
    9. Have a mission that matters.

    These principles have been commented on by many others, including Robert Brands ("Learn from the Best", 2016) and Martin Zwilling ("9 Principles For Maximizing Innovation In Your Business", 2015):


    Innovation can come from anywhere in the organization.
    Entrepreneurs should look for ideas from anyone, inside the organization or outside, top-down or bottom-up, but the implementation responsibility is all yours. [...]

    Focus on customer needs rather than profits.
    When innovations are implemented that have clear value and acceptance by customers, business success will follow. [...]

    Target factor of ten improvements, not 10 percent.
    [...] to make something 10 times better than it is to make it 10 percent better. It’s called radical innovation versus incremental improvement. [...]

    Let new technical insights drive innovative products.
    For Google, this has led to self-driving cars, based on work with Google maps and artificial intelligence. [...]

    Ship and iterate, don’t expect instant perfection.
    [...] No technical analysis has the power of real-time user and market feedback. [...]

    Spend twenty percent of work time on innovation.
    Everyone in a company should be encouraged to spend fully one-fifth of their time pursuing ideas for positive change, even if it is outside the core job or core mission of the company. [...]

    Set your default to sharing rather than proprietary.
    Information sharing and open source facilitates collaboration on a huge scale, and can bring in as many innovations as are sent out. [...]

    Tolerate no negativity attached to failure.
    Stigmas and penalties for failing are among the largest gates to innovation. [...] Failing well [...] means failing fast and failing cheap, [...]

    Instill a mission and purpose that matters.
    People think harder if they really believe their innovations will impact millions of people in a positive way. Work can be more than a job when it stands for something people care about [...]
     

    There also is an interim 2011 version authored by Susan Wojcicki that outlines "The Eight Pillars of Innovation":

    • Have a mission that matters
    • Think big but start small
    • Strive for continual innovation, not instant perfection
    • Look for ideas everywhere
    • Share everything
    • Spark with imagination, fuel with data
    • Be a platform
    • Never fail to fail

    These built on Mayer's preceding nine principles of 2008:
    • Innovation, not instant perfection
    • Ideas come from everywhere
    • A license to pursue dreams
    • Don’t kill projects, morph them
    • Share as much information as you can
    • Users, Users, Users - It’s users, not money
    • Data is apolitical
    • Creativity loves constraints
    • You’re brilliant? We’re hiring