Sunday, October 7, 2012

CIOs Counting Beans

The CFO has worked very hard to earn the moniker "bean counter." He and his minions make their living counting bodies, dollars, hours, contracts, offices, … Technologists don't think of themselves as bean counters. They write software, gather requirements, fix bugs, support users, manage projects, test systems, … But something strange happens with IT leadership.

When managing portfolios of technology, IT leaders ignore what they manage and become bean counters. Instead of managing value and outcomes, they harangue staff over internal minutiae, scare up reams of detail, and bullshit their customers with derived metrics that bear no resemblance to reality. It's as if they are embarrassed by who they really are. As if they believe that to gain credibility they have to placate the CFO. But, this always backfires.

In mimicking the CFO, IT leaders give credibility to the CFO's ridiculous argument that technology management is just counting beans. Is it any wonder that many CIOs end up reporting to CFOs, after loading the gun and handing it over?

In acting like the CFO, IT leaders lose credibility with their staff. Staff know what's required for the job. And they expect their leaders to know, too. They know the work can't be reduced to a couple of simple metrics, e.g., how many tasks have you completed today? How many tasks do you have for tomorrow?

Moreover, bean-counting IT leaders undermine the work itself. In the name of metrics collection, they bake processes around yesterday's work. Inappropriately rigid processes freeze work in time and undermine the continuous improvement of self-directing teams.

Yes, IT leaders need to manage budget and time, but the primary driver is value. When managing at the portfolio level, IT leaders need to get their heads out of task tracking and out of detailed resource planning. Instead, they need to focus on the outcomes needed by their customers. Manage to value, not to minutiae.

Tuesday, October 2, 2012

50X and the Productivity of Technology Workers

... a 50-fold increase in productivity. What CFO, CIO, or CEO would balk at that? Who would hesitate more than a moment to say, "Yes, I'll take it!"

Fifty-fold is the difference between the median programmer and the top 1%, as measured by a company with the capability to do so. While not 50X, a 10X difference between the top and the bottom is still astounding, and it is widely accepted by anyone with professional software development experience. A 10X productivity difference was first noted in the late '60s, in the study by Sackman, Erikson, and Grant that Fred Brooks later popularized. A few summaries of the evidence have been documented by StackExchange, C2, and Construx.

One other useful factor to note: the pay-scale difference between the top and the bottom is 3X at best. And due to an inability to see the difference, many organizations pay the worst performers about the same as the best. Clearly, there's a spread here worth exploiting!
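
To make the spread concrete, here's a back-of-the-envelope sketch in Python. The salaries and output rates are assumed for illustration only - a 3X pay spread against a 10X productivity spread, per the figures above:

    # Illustrative arithmetic only: salaries and output rates are assumed.
    median_salary, top_salary = 100_000, 300_000   # assumed 3X pay spread
    median_output, top_output = 1.0, 10.0          # assumed 10X productivity spread

    print(median_salary / median_output)  # 100000.0 per unit of output (median)
    print(top_salary / top_output)        # 30000.0 per unit of output (top 1%)

Even at triple the salary, the top performer delivers at less than a third of the median performer's cost per unit of output.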

Yet, whether in cahoots with or under the thumb of the skills-blind CFO profession, IT shops mistakenly, dogmatically, and aggressively seek the cheapest rate, not the deepest skill. For example, the rate card submitted this month by a major system integrator to the feds had rates for local programmers competitive with Indian offshore firms. You have to ask, with new grads from the humanities making more money than that, just who is going to fill these positions?

This ass-backwards pattern of optimizing a part that is anti-correlated with the success of the whole is destructive to our credibility. Have we given up hope that it's possible to predictably deliver value? Is it any wonder that in-house IT shops are the laughing stock of the business and the rest of the tech industry?

Friday, September 28, 2012

Controlling PMO Cancer

The swarm of project coordinators and the meetings they demanded multiplied at a rate that would make E. coli blush. "Dependency, dependency, we must understand all dependencies!" they shrieked. Spawned by the CIO's need for centralized control - presumably as a means to minimize risk and gain visibility - the PMO was in fact undermining progress.

The drag their efforts imposed on people doing real work was frustrating. Worse, it sapped valuable mind share and time. And soaked up budget (hey CFO, are you listening?).

Their methods destroyed accountability, too. This in turn raised the importance of the PMO's existence as a mediator. Who else was going to resolve, through endless meetings, interviews, inspections, PowerPoint, and revised project plans, the missed handoffs resulting from their meddling?

In unleashing this cancer, management failed to realize that the organization - as a complex system - was unmanageable this way. At a tiny fraction of the cost of a bloated PMO, superior coordination can be achieved with an agent model, not a centralized control model. Here's how.

Each team must follow a simple set of rules:
  • Publish your milestones, 
  • Negotiate milestones with your dependencies,
  • Trust the milestones of your dependencies in your own plans, 
  • By all means, make your milestones.
Autonomy, predictability and transparency inoculate against the PMO cancer. And guess where the leaders should focus? Enabling - not mandating, not demanding - predictability and transparency.
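
Here's a minimal sketch of the agent model in Python. All names are hypothetical; rules 2 and 4 (negotiating and actually making your dates) are human commitments that code can't capture. The point is that coordination emerges from published, trusted milestones rather than from a central controller:

    # A hypothetical sketch: each team is an autonomous agent that publishes
    # milestones and plans against the published milestones of its dependencies.
    # Note there is no central PMO object anywhere.
    from datetime import date

    class Team:
        def __init__(self, name, dependencies=()):
            self.name = name
            self.dependencies = list(dependencies)  # other Team agents
            self.milestones = {}                    # deliverable -> committed date

        def publish(self, deliverable, when):
            """Rule 1: publish your milestones for anyone to read."""
            self.milestones[deliverable] = when

        def earliest_start(self, needed):
            """Rule 3: trust your dependencies' published milestones in your plan."""
            dates = [dep.milestones[d]
                     for dep in self.dependencies
                     for d in needed
                     if d in dep.milestones]
            return max(dates) if dates else date.today()

    # Usage: the platform team publishes; the app team plans against it.
    platform = Team("platform")
    platform.publish("auth API", date(2012, 11, 1))

    app = Team("app", dependencies=[platform])
    app.publish("customer portal", app.earliest_start(["auth API"]))
    print(app.milestones)  # the portal commitment respects the auth API date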

If your CIO is diving into the PMO rabbit hole, isn't it time for an intervention?

Wednesday, September 26, 2012

Fed Conflict of Interest Rules (OCI) Undermine Continuous Improvement

Federal procurement rules include a concept called Organizational Conflict of Interest (OCI) [FAR Subpart 9.5]. Though the rules are intended to keep the competitive playing field level, one particular element of OCI is retrograde for software development work.

This element mandates that the group that specifies a solution cannot implement that solution. Yes, that is correct. The team that invests in learning the problem domain, understanding the customer's problems, and framing a solution must specify the solution, yet cannot be involved in creating it.

This has negative implications for software construction. The team that specifies a solution has no accountability for the feasibility of that solution. Moreover, because they are never around when the solution is created, they never receive any feedback about what works and what doesn't. Their ability to recommend solutions is therefore questionable.

This runs counter to the idea of continuous improvement. This subverts the ideal of craftsmanship. It is the act of doing that teaches us what should be done. In the doing there is learning that cannot be accomplished without the experience of doing. And designers who don't do never experience the insights that lead to innovation. Andy Grove wrote eloquently about this as a national problem here.

It seems that in practice, the larger the problem, the stronger the enforcement of these OCI provisions. Witness the very lucrative Systems Engineering Technical Assistance (SETA) companies that study but never deliver. Could this be one of the reasons the federal government has such a horrible track record with large software projects?

Tuesday, September 25, 2012

Fractional People: Obsessive Focus on Resources Misses Value

The sight was surreal. Just over 100 cubicles housed the IT staff. Organized by the system they supported, several score workers leaned into their screens, working intently and quietly, oblivious to the body parts of dozens of others around them. The head of an analyst here, the upper half of a tester there, half a PM split bilaterally. Fractionals manned a third of the screens.

Good work doesn't get done this way. Yet, portfolio planning, budgeting, and project management often do. Project managers, overseers, and financial stakeholders spend countless hours tallying bodies. They believe their centralized model of headcount control is efficient, that they are helping the company by shifting people from one project to another, adding, removing, and defending the status quo.

But, this neurotically obsessive focus on "resources" minimizes effectiveness.
  • Treated as fractions, people doing real work lose purpose. A few hours on this, a few days on that, back to this again. No predictability, no consistency, and no opportunity to leverage experience to make a dramatic improvement.
  • Work gets organized and prioritized by the availability of the various fractions: 2/3 of a tester, 1/4 of a PM, 1.6 developers.
  • Controlling the fractions becomes an exalted position that begets more positions - it's time consuming, you know!
  • The numbers give "cost avoiders" the ready-made metrics they need for browbeating possibility into an expense line.
  • And sadly, all lose sight of the real goal, the result they are supposed to be working towards: creating value for constituents of the organization they inhabit.
They are lost in the weeds managing cost instead of managing the leading indicators of value. What they miss is that by focusing on value instead, the bean counting will sort itself out - and will do so with far less work, far less useless work, more satisfaction all around, and more value delivered.

Thursday, September 20, 2012

The Myth of "Oversight" for Reducing Risk in IT

Information technology (IT) departments have a love affair with "oversight." Adopted as the panacea for controlling risk, oversight in reality costs organizations dearly in time and money. This might be worth the cost, except that the common forms of IT oversight actually increase risk.

Oversight comes in different flavors. Sometimes masquerading as governance, more often it's implemented as Enterprise Architecture (EA), a Program/Project Management Office (PMO), or both. These efforts are started with the best of intentions, but the consequences are costly:
  • At 10-20% of total organization size, oversight is not cheap (see PMO Headcount Sizing). And it tends to grow. One CIO commented sagely, "PMOs are a cancer; left unchecked, they grow to choke the real work of the enterprise."
  • In addition, oversight adds a 20-50% hidden tax on the work overseen. Teams are forced to add staff and time to address the inappropriate, rigid methods, unproductive review meetings, redundant reporting, and excessive documentation imposed in the name of oversight. 
  • Moreover, oversight frequently raises non-issues to emergency status, causing teams to expend time and resources chasing phantoms just to stay in good stead.
  • Finally, excessive oversight checkpoints add wait states that cost real money. In one egregious example, a PM told me that 66% of his project budget was spent waiting for EA & PMO approvals.
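
To see how these figures compound, here's a back-of-the-envelope calculation in Python. The inputs are the estimates cited above, not measurements:

    # Rough arithmetic using the post's own figures; all inputs are estimates.
    work = 100.0                    # cost of the real work, in arbitrary units
    oversight_staff = 0.20 * work   # PMO/EA headcount at the high end (10-20%)
    hidden_tax = 0.50 * work        # teams' extra effort feeding oversight (20-50%)
    print(work + oversight_staff + hidden_tax)  # 170.0 before any wait states

    # In the egregious example above, 66% of budget went to waiting, so only
    # 34% bought real work - an effective multiplier of roughly 3X:
    print(round(work / (1 - 0.66)))  # 294
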
In short, oversight can easily double the cost and time to complete work. Maybe this cost would make sense if oversight minimized risk. But as implemented in IT, oversight actually increases risk:
  • Contrary to popular belief, an outside, non-practitioner cannot spot risk by imposing method or by periodic inspection. Even a savvy practitioner will shy away from the responsibility of pronouncing project health, or finding project fault, by casual inspection. She knows that risk is best exposed and mitigated by, and while, doing the work.
  • Delays imposed by oversight transfer the risk from the program to the mission. And in some cases, not having a mission solution is riskier than any risk generated in creating a solution. For example, one federal agency has spent so long reviewing a project that the mission capability is literally months away from functional failure.
  • Oversight stifles innovation by its rigid adherence to "standards" - mindlessly sticking to the "way we know." Failure to keep pace with the knowledge gained by industry experience builds technical debt, a form of adding risk.
  • Finally, oversight leads to information hiding. The dirty secret is that teams tell "oversight" what it wants to hear to make them go away. Oversight then reports inaccurate information to decision makers and dependents. It may take months or even years, but eventually the disconnect between perception and reality will become obvious. And by then, the damage is more extensive, harder to contain, and mitigation far more costly.
Oversight may make organizations feel good. But as commonly implemented it pads cost, increases delivery time, and increases risk.

Superior risk reduction and efficiency can be achieved not by costly oversight, but by other means. I'll leave describing those other means for later.

Thursday, April 26, 2012

The Cost of Predictability


Predictability is a highly valued property of organizations, in particular, of IT organizations that concurrently run more than a handful of projects. Indeed, managing expectations with stakeholders, coordinating dependencies, budgeting/planning and minimizing the stress and chaos from surprises are good career-enhancing incentives for valuing predictability.

But what does predictability mean in practice? A good working definition is: the perception of a pattern of regularly met commitments. There may be a psychological basis for what constitutes such a pattern - the oft-demonstrated ratio of 4 to 5 "rights" needed to offset every "wrong" - but many risk-averse IT organizations use a target figure of 95% of commitments met.

Of course, the implication is that to meet commitments 95% of the time, each commitment should have a 95% confidence (probability) of being met. Thus, averaging the results of all commitments over time will hit the portfolio target of 95% of commitments met.

You must have figured out by now that commitments - except those made by the very foolish - are based on estimates. Estimates are forecasts of an uncertain future, and the more novel the activity that needs an estimate, the less reliable the estimate; i.e., repetition improves accuracy. But, most IT activities - especially software creation - are relatively unique for the organization making the estimate, so the estimates must be padded (or buffered) to reach 95% confidence. Interestingly, most organizations use ROMs (Rough Order of Magnitude guesses) as an estimating technique, and this approach produces the most padding.
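
To make "padding" concrete, here's a small sketch assuming task durations are lognormally distributed - an assumed model for illustration, not a claim about any particular shop:

    # Sketch: how much padding does a 95%-confidence commitment carry over
    # a 50%-confidence (median) estimate, under an assumed lognormal model?
    import math
    from statistics import NormalDist

    median_days = 20.0   # the 50%-confidence estimate (assumed)
    sigma = 0.5          # spread of log-duration; larger = more novel work (assumed)

    z95 = NormalDist().inv_cdf(0.95)               # ~1.645
    p95_days = median_days * math.exp(z95 * sigma)
    print(round(p95_days, 1))  # ~45.5: the 95% commitment is ~2.3X the median

The more novel the work (the larger sigma), the bigger the multiplier - which is exactly why ROM-based, high-confidence commitments carry so much padding.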

Before we get to our conclusion, three other facts about estimates and commitments are relevant. First, all projects are like a perfect gas: they expand to consume the available resources and never deliver early. Second, information theory tells us that the maximum information yield comes when any given estimate has a 50% chance of being exceeded (a binary outcome carries the most information when both results are equally likely), and a 50% confidence estimate is always smaller than (or equal to) a 95% confidence estimate. And finally, any project that exceeds its estimate, whether estimated at the 50% or 95% confidence level, will experience pressure to conclude quickly.

Now, imagine two portfolios: portfolio Safe demands that all projects estimate to 95% confidence, and portfolio Efficient demands that all projects estimate to 50% confidence. Because projects expand to consume whatever they are given (fact one), the sum of resources consumed by projects in the Safe portfolio will be higher than the sum consumed by projects in the Efficient portfolio. As the names give away, a 50% target estimate is more efficient than a 95% target.
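
A small Monte Carlo sketch makes the comparison concrete, reusing the assumed lognormal model above plus fact one (projects consume their full budget and never deliver early):

    # Compare the Safe (95%) and Efficient (50%) portfolios. Assumptions:
    # lognormal actual durations; consumption = max(actual, budget) because
    # projects expand to fill their budgets and never finish under.
    import math, random
    random.seed(1)

    MEDIAN, SIGMA, N = 20.0, 0.5, 10_000
    P50 = MEDIAN
    P95 = MEDIAN * math.exp(1.645 * SIGMA)

    def avg_consumption(budget):
        return sum(max(random.lognormvariate(math.log(MEDIAN), SIGMA), budget)
                   for _ in range(N)) / N

    print(round(avg_consumption(P95), 1))  # Safe: ~46 days per project
    print(round(avg_consumption(P50), 1))  # Efficient: ~26 days per project

Under these assumed numbers, the Safe portfolio consumes roughly 80% more resources per project than the Efficient one - the premium paid for predictability.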

Phew, we're finally here: the cost of predictability is real, often high, and, based on personal observation, in some cases extraordinarily high. Seeking predictability with high confidence estimates may be okay if used as a device to manage dependencies in a large program, when the cost of a single project's blown commitment has a cascading effect on other projects. Or, it may be justified when used to manage expectations in an organization with nasty politics. But folks concerned with efficiency should adopt a different set of metrics than "on time, on budget," jettison the use of ROMs as an estimating technique, and work on achieving better estimates.