Sunday, October 7, 2012

CIOs Counting Beans

The CFO has worked very hard to earn the moniker "bean counter." He and his minions make their living counting bodies, dollars, hours, contracts, offices, … Technologists don't think of themselves as bean counters. They write software, gather requirements, fix bugs, support users, manage projects, test systems, … But something strange happens with IT leadership.

When managing portfolios of technology, IT leaders ignore what they manage and become bean counters. Instead of managing value and outcomes, they harangue staff over internal minutiae, scare up reams of detail, and bullshit their customers with derived metrics that bear no resemblance to reality. It's as if they are embarrassed by who they really are. As if they believe that to gain credibility they have to placate the CFO. But this always backfires.

In mimicking the CFO, IT leaders lend credibility to the CFO's ridiculous argument that technology management is just counting beans. Is it any wonder that many CIOs end up reporting to CFOs, after loading the gun and handing it over?

In acting like the CFO, IT leaders lose credibility with their staff. Staff know what's required for the job. And they expect their leaders to know, too. They know the work can't be reduced to a couple of simple metrics, e.g., how many tasks have you completed today? How many tasks do you have for tomorrow?

Moreover, bean-counting IT leaders undermine the work itself. In the name of metrics collection, they bake processes around yesterday's work. Inappropriately rigid processes freeze work in time and undermine the continuous improvement of self-directing teams.

Yes, IT leaders need to manage budget and time, but the primary driver is value. When managing at the portfolio level, IT leaders need to get their heads out of task tracking and out of detailed resource planning. Instead, they need to focus on the outcomes needed by their customers. Manage to value, not to minutiae.

Tuesday, October 2, 2012

50X and the Productivity of Technology Workers

... a 50-fold increase in productivity. What CFO, CIO, or CEO would balk at that? Who would hesitate more than a moment to say, "Yes, I'll take it!"

Fifty-fold is the difference between the median programmer and the top 1%, as measured by a company with the capability to do so. While not 50X, a 10X difference between the top and the bottom is still astounding, and it is widely accepted by anyone with professional software development experience. A 10X productivity difference was first noted by Fred P. Brooks in the 1960s. A few summaries of the evidence have been documented by StackExchange, C2, and Construx.

One other useful factor to note: the pay-scale difference between the top and the bottom is 3X at best. And due to an inability to see the difference, many organizations pay the worst performers about the same as the best. Clearly, there's a spread here worth exploiting!

Yet, whether in cahoots with or under the thumb of the skills-blind CFO profession, IT shops mistakenly, dogmatically, and aggressively seek the cheapest rate, not the deepest skill. For example, the rate card submitted this month by a major system integrator to the feds had rates for local programmers competitive with Indian offshore firms. You have to ask: with new grads from the humanities making more money than that, just who is going to fill these positions?

This ass-backwards pattern of optimizing a part that is anti-correlated with the success of the whole is destructive to our credibility. Have we given up hope that it's possible to predictably deliver value? Is it any wonder that in-house IT shops are the laughing stock of the business and the rest of the tech industry?

Friday, September 28, 2012

Controlling PMO Cancer

The swarm of project coordinators and the meetings they demanded multiplied at a rate that would make E. coli blush. "Dependency, dependency, we must understand all dependencies!" they shrieked. Spawned by the CIO's need for centralized control - presumably as a means to minimize risk and gain visibility - the PMO was in fact undermining progress.

The drag their efforts imposed on people doing real work was frustrating. Worse, it sapped valuable mind share and time. And soaked up budget (hey CFO, are you listening?).

Their methods destroyed accountability, too. This in turn raised the importance of the PMO's existence as a mediator. Who else was going to resolve, through endless meetings, interviews, inspections, PowerPoint, and revised project plans, the missed handoffs resulting from their meddling?

In unleashing this cancer, management failed to realize that the organization - as a complex system - was unmanageable this way. At a tiny fraction of the cost of a bloated PMO, superior coordination can be achieved by using an agent model, not a centralized control model. Here's how.

Each team must follow a simple set of rules:
  • Publish your milestones, 
  • Negotiate milestones with your dependencies,
  • Trust the milestones of your dependencies in your own plans, 
  • By all means, make your milestones.
Autonomy, predictability and transparency inoculate against the PMO cancer. And guess where the leaders should focus? Enabling - not mandating, not demanding - predictability and transparency.
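The four rules are simple enough to sketch in code. The sketch below is purely illustrative - every team, milestone, and function name is hypothetical - but it shows the shape of the agent model: a shared registry where each team publishes its milestone dates, and dependents plan directly against the published dates with no central coordinator in the loop.

```python
from datetime import date

# Hypothetical sketch of the agent model: a shared milestone registry.
registry = {}  # team -> {milestone: promised_date}

def publish(team, milestone, when):
    """Rule 1: publish your milestones where everyone can see them."""
    registry.setdefault(team, {})[milestone] = when

def planned_start(dependencies):
    """Rules 2 and 3: negotiate with your dependencies, then trust their
    published dates in your own plan - no central coordinator needed."""
    return max(registry[team][milestone] for team, milestone in dependencies)

publish("platform", "api-v2", date(2012, 11, 1))
publish("data", "schema-freeze", date(2012, 10, 15))

# The apps team plans its beta directly off the published dates.
start = planned_start([("platform", "api-v2"), ("data", "schema-freeze")])
print(start)
```

Rule 4 - actually making your milestones - is what keeps the registry trustworthy; everything else falls out of reading it.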

If your CIO is diving into the PMO rabbit hole, isn't it time for an intervention?

Wednesday, September 26, 2012

Fed Conflict of Interest Rules (OCI) Undermine Continuous Improvement

Federal procurement rules include a concept called Organizational Conflict of Interest (OCI) [FAR Subpart 9.5]. Though OCI is intended to keep the competitive playing field level, one particular element of it is retrograde for software development work.

This element mandates that the group that specifies a solution cannot implement that solution. Yes, that is correct. The team that invests in learning the problem domain, understanding the customer's problems, and framing a solution must specify the solution, yet cannot be involved in creating it.

This has negative implications in software construction. The team that specifies a solution has no accountability for the feasibility of that solution. Moreover, because they are never around when the solution is created, they never receive any feedback about what works and what doesn't. Their ability to recommend solutions is therefore questionable.

This runs counter to the idea of continuous improvement. This subverts the ideal of craftsmanship. It is the act of doing that teaches us what should be done. In the doing there is learning that cannot be accomplished without the experience of doing. And designers who don't do never experience the insights that lead to innovation. Andy Grove wrote eloquently about this as a national problem here.

It seems that in practice, the larger the problem, the stronger the enforcement of these OCI provisions. Witness the very lucrative Systems Engineering Technical Assistance (SETA) companies that study but never deliver. Could this be one of the reasons the federal government has such a horrible track record with large software projects?

Tuesday, September 25, 2012

Fractional People: Obsessive Focus on Resources Misses Value

The sight was surreal. Just over 100 cubicles housed the IT staff. Organized by the system they supported, several score workers leaned into their screens, working intently and quietly, oblivious to the body parts of dozens of others around them. The head of an analyst here, the upper half of a tester there, half a PM split bilaterally. Fractionals manned a third of the screens.

Good work doesn't get done this way. Yet, portfolio planning, budgeting, and project management often do. Project managers, overseers, and financial stakeholders spend countless hours tallying bodies. They believe their centralized model of headcount-control is efficient. They are helping the company by shifting people from one project to another, adding, removing, and defending the status quo.

But, this neurotically obsessive focus on "resources" minimizes effectiveness.
  • Treated as fractions, people doing real work lose purpose. A few hours on this, a few days on that, back to this again. No predictability, no consistency, and no opportunity to leverage experience to make a dramatic improvement.
  • Work gets organized and prioritized by the availability of the various fractions: 2/3 of a tester, 1/4 of a PM, 1.6 developers. 
  • Controlling the fractions becomes an exalted position that begets more positions - it's time consuming, you know!
  • The numbers give "cost avoiders" the ready-made metrics they need for browbeating possibility into an expense line.
  • And sadly, all lose sight of the real goal, the result they are supposed to be working towards: creating value for constituents of the organization they inhabit.
They are lost in the weeds managing cost instead of managing the leading indicators of value. What they miss is that by focusing instead on value, the bean counting will sort itself out. And will do so with far less work, far less useless work, and more satisfaction all around. And more value delivered.

Thursday, September 20, 2012

The Myth of "Oversight" for Reducing Risk in IT

Information technology departments (IT) have a love affair with "oversight." Adopted as the panacea for controlling risk, the sad reality is that oversight costs organizations dearly in time and money. This might be worth the cost, except the common forms of IT oversight actually increase risk.

Oversight comes in different flavors. Sometimes masquerading as governance, more often it's implemented as Enterprise Architecture (EA), a Program/Project Management Office (PMO), or both. These efforts are started with the best of intentions, but the consequences are costly:
  • At 10-20% of total organization size, oversight is not cheap (see PMO Headcount Sizing). And it tends to grow. One CIO commented sagely, "PMOs are a cancer; left unchecked they grow to choke the real work of the enterprise."
  • In addition, oversight adds a 20-50% hidden tax on the work overseen. Teams are forced to add staff and time to address the inappropriate, rigid methods, unproductive review meetings, redundant reporting, and excessive documentation imposed in the name of oversight. 
  • Moreover, oversight frequently raises non-issues to emergency status, causing teams to expend time and resources chasing phantoms just to stay in good stead.
  • Finally, excessive oversight checkpoints add wait states that cost real money. In one egregious example, a PM told me that 66% of his project budget was spent waiting for EA & PMO approvals.
In short, oversight can easily double the cost and time to complete work. Maybe this cost would make sense if oversight minimized risk. But as implemented in IT, oversight actually increases risk:
  • Contrary to popular belief, an outside, non-practitioner cannot spot risk by imposing method or by periodic inspection. Even a savvy practitioner will shy away from the responsibility of pronouncing project health, or finding project fault, by casual inspection. She knows that risk is best exposed and mitigated by, and while, doing the work.
  • Delays imposed by oversight transfer the risk from the program to the mission. And in some cases, not having a mission solution is riskier than any risk generated in creating a solution. For example, one federal agency has spent so long reviewing a project that the mission capability is literally months away from functional failure.
  • Oversight stifles innovation by its rigid adherence to "standards" - mindlessly sticking to the "way we know." Failure to keep pace with the knowledge gained by industry experience builds technical debt, a form of adding risk.
  • Finally, oversight leads to information hiding. The dirty secret is that teams tell "oversight" what it wants to hear to make them go away. Oversight then reports inaccurate information to decision makers and dependents. It may take months or even years, but eventually the disconnect between perception and reality will become obvious. And by then, the damage is more extensive, harder to contain, and mitigation far more costly.
Oversight may make organizations feel good. But as commonly implemented it pads cost, increases delivery time, and increases risk.

Superior risk reduction and efficiency can be achieved not by costly oversight, but by other means. I'll leave describing those other means for later.

Thursday, April 26, 2012

The Cost of Predictability

Predictability is a highly valued property of organizations, in particular, of IT organizations that concurrently run more than a handful of projects. Indeed, managing expectations with stakeholders, coordinating dependencies, budgeting/planning and minimizing the stress and chaos from surprises are good career-enhancing incentives for valuing predictability.

But what does predictability mean in practice? A good working definition is: the perception of a pattern of regularly met commitments. There may be a psychological basis for what constitutes a pattern in the oft-demonstrated ratio of 4 to 5 "rights" needed to offset every "wrong," but many risk-averse IT organizations use a target figure of 95% of commitments met.

Of course, the implication is that for meeting commitments 95% of the time, each commitment should have a 95% confidence (probability) of being met. Thus, averaging the results of all commitments over time will hit the portfolio target of 95% of commitments met. 

You must have figured out by now that, except by the very foolish, commitments are based on estimates. Estimates are forecasts of an uncertain future, and the more novel the activity being estimated, the less reliable the estimate, i.e., repetition improves accuracy. But most IT activities - especially software creation - are relatively unique for the organization making the estimate, so the estimates must be padded (or buffered) to ensure that they have a 95% confidence level. Interestingly, most organizations use ROMs (Rough Order of Magnitude guesses) as an estimating technique, and this approach produces the most padding.

Before we get to our conclusion, three other facts about estimates and commitments are relevant. First, all projects are like a perfect gas: they expand to consume the available resources and never deliver early. Second, information theory tells us that the maximum information yield comes when any given estimate has a 50% chance of being exceeded, and a 50% confidence estimate is always smaller than (or equal to) a 95% confidence estimate. And finally, any project that exceeds its estimate, whether estimated at the 50% or 95% confidence level, will experience pressure to conclude quickly.

Now, imagine two portfolios: portfolio Safe demands that all projects estimate to 95% confidence, and portfolio Efficient demands that all projects estimate to 50% confidence. The sum of resources consumed by projects in the Safe portfolio will be higher than the sum of resources consumed by projects in the Efficient portfolio. As the names gave away, a 50% target estimate is more efficient than 95% target.
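The gap between the two portfolios can be put on the back of an envelope. The sketch below is purely illustrative - the lognormal effort distribution, the 20-day median, the sigma, and the 20-project count are my assumptions, not the post's - but it shows how padding every estimate to 95% confidence inflates the portfolio:

```python
import math

# Illustrative assumption: project effort ~ lognormal(mu, sigma),
# so the median effort is exp(mu). Numbers here are invented.
MU, SIGMA = math.log(20), 0.6   # median 20 days; sigma is a guess

def estimate(confidence):
    # Lognormal quantile: exp(mu + z * sigma), where z is the
    # standard-normal quantile for the chosen confidence level.
    z = {0.50: 0.0, 0.95: 1.645}[confidence]
    return math.exp(MU + z * SIGMA)

n_projects = 20
safe = n_projects * estimate(0.95)       # every estimate padded to 95%
efficient = n_projects * estimate(0.50)  # every estimate at the median

print(f"Safe portfolio:      {safe:.0f} project-days")
print(f"Efficient portfolio: {efficient:.0f} project-days")
print(f"Padding overhead:    {safe / efficient - 1:.0%}")
```

With these assumed numbers, the Safe portfolio books roughly 2.7x the resources of the Efficient one. The exact multiple depends entirely on the assumed variability (sigma) - which is exactly why ROM-style guessing, with its huge implied sigma, produces the most padding.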

Phew, we're finally here: the cost of predictability is real, often high, and based on personal observation, in some cases extraordinarily high. Seeking predictability with high confidence estimates may be okay if used as a device to manage dependencies in a large program, when the cost of a single project's blown commitment has a cascading effect on other projects. Or, it may be justified when used to manage expectations in an organization with nasty politics.  But folks concerned with efficiency should adopt a different set of metrics than "on time, on budget," jettison the use of ROMs as an estimating technique, and work on achieving better estimates.

Saturday, February 11, 2012

Status Reporting Is Not an Activity - It Is a Byproduct of Doing the Work

Whether CIO, VP or Director, the senior-most accountable manager in a mid-size to large organization presides over a large portfolio of activities - scores of projects and initiatives, hundreds of systems and people. Knowing what's really going on inside the portfolio is a major responsibility that is hard to shirk as an IT leader.

The tendency in larger organizations is for each of the many distinct activities to locally optimize and run as they best see fit. Due to the sheer scale of disparate activities and the inability to roll it all up consistently, the view from the top looks more like chaos than order. This state of "not knowing" puts the IT leader between a rock and a hard place, and she must address it.

In IT, the time-honored prescriptions are to add extra reporting burden, force-fit heavy methodology, add "pmo/oversight/governance" groups, or all of the above. An organization that has applied this salve (caustic) is easy to spot by the "data calls," the Friday reporting scramble, scads of meetings, roaming hordes of people not actually accountable to deliver anything, burdensome process, nasty tools, and grand status meetings of the kind Tom DeMarco calls "ceremonies."

This is a classic example of doing exactly the wrong thing if you really want visibility to salient facts. These actions make it harder not easier to know what's really going on, because teams are forced to develop a split-personality to deal with these added burdens and yet still attempt to meet their delivery obligations. Internally, they try to preserve their local habits - whether bad or good - because this is what will allow them to deliver (they believe). But then, they add an outer shell whose sole purpose is to tell the inquisitors what they want to hear in order to keep them out. These pesky project managers sound evil, but they rationalize this (correctly, I might add) as protecting the team from management stupidity.

As a result, what's really going on in any given project or activity is encapsulated by a defense mechanism that didn't exist before and radiates a fiction. Over time, these defense mechanisms get really good at masking reality. Usually, this also has the consequence of increasing risk within the organization, because not all teams are really good at delivery. And this begets a death spiral - a counter-productive positive feedback loop - excessive burden hides risk, which leads to failures, which causes more burden, which causes yet more failures, ...

There is a better way. The real question to ask is:
How can I tweak existing practice, in a minimally disruptive way, and in a manner true to the craft of doing the real work, such that the work naturally disgorges raw data that, when rolled up, gives the needed insight?
There are many analogs outside of IT - in fact, IT makes many of them possible! For example, when buying groceries at the supermarket, scanning a food item captures data to make the payment and accounting possible, but it also captures data useful to other parts of the business. The scan also records the time, the location, and data about the buyer by encouraging use of a loyalty card. Later, some background processing crunches all this data into nuggets useful to purchasing, marketing, inventory, etc.

Notice what doesn't happen. The cashier doesn't fill out a form for every purchase; there isn't an army of trolls watching to make sure it's done correctly, and there isn't another army of data collectors mashing all these reports together. Isn't this easier on the team? No extra, error-prone data collection. Isn't this easier on the org as a whole? No parallel organization sticking probes into the half of the organization doing real work, or worse, creating cockamamie processes in the name of gathering information to the detriment of delivery.
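The supermarket pattern translates directly to IT work. A minimal sketch, assuming a hypothetical team convention of leading each commit message with a ticket ID: the status rollup is computed from data the work already emits, with no separate reporting step.

```python
import re
from collections import Counter

# Hypothetical raw data emitted as a byproduct of normal work:
# version-control commit messages that lead with a ticket ID.
commits = [
    "PAY-142 validate card expiry",
    "PAY-142 fix expiry edge case",
    "INV-77 nightly reconciliation job",
    "PAY-150 tokenize card numbers",
]

def rollup(messages):
    # Count activity per ticket. A fuller rollup would also fold in
    # timestamps, authors, and build results already captured by CI.
    tickets = Counter()
    for msg in messages:
        match = re.match(r"([A-Z]+-\d+)", msg)
        if match:
            tickets[match.group(1)] += 1
    return dict(tickets)

print(rollup(commits))  # activity per ticket, derived rather than reported
```

Nobody filled out a status form; the "report" is background processing over artifacts the team produced anyway.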

Of course, such an approach doesn't come for free; it does require making tradeoffs, and it does require patience. The working teams have to change the way they work, ideally in a way that is as efficient as the way they were working. Moreover, all teams have to change in the same way. And the prescription for how to do this work must be extremely bare - essence only - and allow for local variation and extension.

Why does this not happen more in IT? Probably because it requires deep knowledge of the activity being monitored, creativity in planning and implementing a solution, patience to see results (ironically, results actually come faster this way), and a desire by top management for real information not just plausible deniability.

Friday, January 13, 2012

I Want the Detailed Plan!

Those who’ve rolled around in the construction of software for a long time are comfortable viewing a project as a process to drive uncertainty from 100% to 0% over a period of time by building something. In contrast, those who have been victimized by custom software creation - stakeholders and patrons - tend to demand certainty too early as a risk mitigation strategy. 

These stakeholders who don’t like uncertainty want us to do something about it. In reality the only thing to do is to begin the real work of the project. But, all too often, the unfamiliar stakeholders insist on more planning, which exacerbates the problem.

Only too willing to please, many teams capitulate by creating a facade of certainty, e.g., detailed project plans, complex resource loading charts, myriad milestones, status meetings. It’s easy to understand why: most people want to please, and the rest want to stop being harangued. 

The problem is, at some point the facade is exposed. But, we have only ourselves to blame. This behavior seems to fit the modern explanation of an addiction: a behavior that in the short term mitigates pain, but in the long term only makes the situation worse.

It would behoove us to educate our stakeholders that the nature of a software project is to reduce uncertainty by delivering something often. Oh, and when the bad things happen - as they usually do - to live by the words of that apocryphal general “the only thing worse than bad news is bad news late.”

Sunday, January 1, 2012

Protocol Droid: How PMO Staff Enable Dysfunction

"My value is that I speak everyone's language," she explained. Her job is to attend meetings and relay messages between different groups involved in managing the large IT program. Sometimes she volunteers to "unstick" the latest obstacle to a group's progress, which usually involves several more pairwise meetings reminiscent of shuttle diplomacy. Upper level managers think she's needed, probably because she listens well, is obviously intelligent and articulate, and relays news of progress from afar.

The Star Wars movies featured a droid named C-3PO. Fluent "in over six million forms of communication," '3PO's job "is to assist etiquette, customs, and translation so that meetings of different cultures run smoothly." A protocol 'droid, C-3PO is the archetype for people like the person described above.

In reality, protocol droids are key enablers of organizational dysfunction. While the role may emerge initially as a sort of internal consultant to address gaps in changing communication pathways, unless checked it quickly gives rise to negative consequences such as intermediation, politics over results, and poor decision making.

  • While it may seem like different teams in a business-IT ecosystem are different galactic races, the linguistic and cultural differences are not so great as to require translation. In fact, intermediation prevents different teams from learning to work together effectively - as they must in order to optimize. The intermediary gains validation by maintaining isolated cultures, and only succeeds in adding inefficient and error prone communication hops.
  • The structure of an organization should change over time as an initiative proceeds. These changes result from attempting to break down obstacles to efficiency or even success. However, protocol 'droids have a vested interest in maintaining the status quo - and the need for their role in it. Before too long, discussions center around who should be attending what meetings, not what outcomes need to be achieved by those meetings. And of course, status is evaluated by who gets to meet with whom, not who accomplishes what. 
  • After months of message-shuttling, emissaries begin to believe that they actually know the subject matter they are shuttling. Once this happens, they start to apply their own filters and even make decisions for others based on their own incomplete understanding. This behavior is the most insidious and destructive of all, because the decisions are often ill-formed and alienate those who should have been involved in the decision making.
Savvy organizations are on the lookout for the protocol droid anti-pattern. Just as campus space planners let students first establish well-worn paths before deciding where to pave, protocol droids' pathways are a diagnostic for reorganization. Use this information wisely or risk becoming a politics-laden, low-performing organization.