A physical example of technical debt addressed with 3D printing

Last week a friend at the woodshop I use asked me if I could solve a problem he’d been having. I thought I’d share a bit of context on the issue, what I did to address it, and the similarities to the software concept of technical debt.

Back before 2008, there was a large, thriving automotive and RV customization industry in Northern Indiana. When the recession hit, many of these businesses closed as the market for RVs dried up. Fortunately, the area has started to recover, but the earlier collapse left many RV owners with parts they couldn’t replace when something went wrong. Once a part broke, their investment was less functional than before, and there was nothing they could do about it. I’ve heard this complaint from numerous folks, so this situation is not an isolated incident. Even name-brand companies used specialized parts from these ‘mom-and-pop’ tooling shops.

My friend had an awning (I think the brand was Carefree – how ironic) with a broken support mechanism. He searched for a replacement for quite a while and had resigned himself to living with an awning that no longer functioned as designed. Knowing my background as a problem solver, he asked if there was something I could do and gave me the broken fragments of the awning slide (the ones he could find). He also described the functionality of the parts he couldn’t find.

Having worked with 3D printing for about a decade, I told him I’d give it a shot. I started by creating a prototype (in PLA) that he could try; once we agreed on the design, I’d create a couple of real ones (in ABS).

It was a fairly simple design, so I modeled it in Microsoft 3D Builder. It took a couple of attempts, but he now has an awning that moves as effectively as it ever did, as well as a spare part.

The modeled part

The problem reminded me of the software portfolio management issue of technical debt. Organizations spend a great deal of time and money creating successful software that works great until it doesn’t. The business risk is just sitting out there, waiting to bite us. If there is a problem, organizations may not know how to fix it, since the folks who wrote the software or performed the customization are no longer around. Sometimes the software used to create the solution is no longer supported or available. These issues (even if known) can be difficult to address, since the software is working and adding business value on a daily basis. Any change may be viewed as riskier than just relying on ‘hope and prayer’.

When the worst occurs, people with the right tools and expertise may address the issue quickly – if you know where to find them. If not, you may just need to ‘trade it in’ and replace the functionality while the business limps along, assuming it can.

Organizations need to assess their software on a regular basis and understand the value generation, cost, risk relationship of their software investments. This should be part of any strategic planning or business continuity effort.

Here is a picture of the final product mounted on his awning. Hopefully it will give him as many more years of service as the original.

The mounted final result

Simplicity, the next big thing?

Recently, Dynatrace conducted a survey of CIOs on their top challenges. Of the top six, almost all deal with concerns about complexity. There is no doubt that numerous technologies are being injected into almost every industry from a range of vendors. Integration of this multivendor cacophony is rife with security risks and misunderstanding – whether in your network or your IoT vendor environment.

Humans have a limited capacity to handle complexity before they throw up their hands and just let whatever happens wash over them. That fact is one of the reasons AI is being viewed as the savior for the future. Back in 2008, I wrote a blog post for HP that mentioned:

“the advent of AI could allow us to push aside a lot of the tasks that we sometimes don’t have the patience for, tasks that are too rigorous or too arduous.”

IT organizations need to shift their focus back to making the business environment understandable, not just injecting more automation or data collection. Businesses need to take latency out of decision making and increase the level of understanding and confidence. A whole new kind of macro-level (enterprise) human interface design is required. Unfortunately, this market is likely a bit too nebulous to be targeted effectively today, other than through vague terms like analytics…  But based on the survey results, large-scale understanding (and then demand) appears to be dawning on leadership.

The ROI for efforts to simplify and encourage action should be higher than that of just adding another tool to the already sprawling portfolio in most organizations. We’ll see where the money goes, though, since that ROI is likely to be difficult to prove when compared with the other shiny objects available.

Security certificate maintenance – there must be a better way

Over the last few years, I’ve seen numerous instances where well-maintained systems, run by organizations with good operational records, have fallen over because of security certificate expiration.

Just last week, Google Mail went down for a significant time when its security key chain broke (note Google’s use of SHA-1 internally – but that’s a whole other issue). Gmail is a solution that is core to an increasing percentage of the population, schools, and businesses. Most people likely believe that Google’s operations are well run and world class – yet Google stumbled in the same way I’ve seen many others stumble before.

Organizations need a reliable and rigorous approach to tracking their certificate chains, one that proactively warns them before certificates expire, since it can take hours to repair a chain once it breaks. There are many critical tasks that come with certificate management, and ignoring or mishandling any one of them can set the stage for web application exploits or system downtime.
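As a minimal sketch of that kind of proactive warning (assuming only Python’s standard `ssl` and `socket` libraries and a hand-maintained list of hostnames to watch), a scheduled job could fetch each server’s certificate and flag anything nearing expiry:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after):
    """Parse a certificate's 'notAfter' field, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expires.replace(tzinfo=timezone.utc)

def days_until_expiry(hostname, port=443):
    """Connect to the host, fetch its TLS certificate, and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)).days

def check_certificates(hostnames, warn_days=30):
    """Print a warning for any certificate expiring within warn_days."""
    for host in hostnames:
        remaining = days_until_expiry(host)
        status = "WARNING" if remaining < warn_days else "OK"
        print(f"{status}: {host} certificate expires in {remaining} days")
```

A real deployment would feed this from a certificate inventory and raise alerts rather than print, but even a small script like this beats discovering an expiry from an outage.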

These certificates (which contain the keys) are the cornerstone of the organization’s cryptography-based defense. As an organization’s market-facing application portfolio expands, the number of certificates expands with it, and the key chains can grow longer, with more convoluted interrelationships (especially if they are not planned and are just allowed to evolve). Additionally, the suite of certificate products from vendors can be confusing. There are different levels of validation offered, along with numerous hash types, key lengths, and warranties (which actually protect the end users, not the certificate owner). It can be difficult to know what type of certificate is required for a particular application.

CSS-Security put out this high-level video about certificates and why they’re blooming in organizations (there is an ad at the end of the video about their product to help with certificate management).

Most companies still manage their certificates via a spreadsheet or some other manual process. That may be fine when you’re just getting started but it can quickly spiral out of control and addressing the problem may involve costs that are just not understood.

There are products and approaches for enterprise certificate management. Automation tools can search a network and collect information on all discovered certificates. They can assign certificates to systems and owners and manage automated renewal. These products can also check that a certificate was deployed correctly, to avoid continued use of an old certificate. Automated tools are only part of the answer, though, and will still require some manual intervention.

When purchasing one of these certificate management tools, ensure that the software can manage certificates from all CAs, since some will only manage certificates issued from a particular CA.

The ‘Who Moved My Cheese?’ of Legacy Systems

Having recently gone through a personal disruption related to employment, I dusted off my copy of Who Moved My Cheese? After re-reading the book, I thought about how it applies to the life of the CIO and application portfolio management. We are all too often comfortable with the world we understand and the 80% (or more) of the budget it consumes – failing to Sniff out opportunities.

Recently, Mary K. Pratt wrote a post, CIOs make the tough call on legacy systems, that delved into the issue of managing the layer upon layer of project success that builds up and calcifies an organization’s ability to respond. I found it a worthwhile read.

Even in this day of IaaS and SaaS, the basics of optimizing an organization’s application portfolio remain relatively unchanged. It comes down to where the organization is headed and an assessment of costs versus value generation.

Organizations need to ask some fundamental questions like:

  1. What needs to be done and why?
  2. How is it going to be accomplished?
  3. What is the expected outcome?
  4. When will it be needed or done?
  5. How will we measure outcomes, so we can validate that the task is complete and effective?
  6. What resources will be required? ($$, people…)

Essentially an assessment of leading and lagging indicators and how the portfolio can support them.

A simple view of the assessment is summed up in this quadrant chart:

Apps Portfolio Assessment

I am sure there are other, more complex and wonderful interpretations of this, but to me this view is the simplest. Keep what adds value and has a low cost to operate. Refactor those programs (where possible) that have a high cost to maintain but also add high value. Validate the need for anything that delivers low value – you may be surprised how many of these you can turn off. Finally, replace those that still have business support but carry a high cost.
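The quadrant logic above can be sketched as a simple classification rule (a hypothetical helper; the ‘cost’ and ‘value’ ratings are assumed to be the coarse high/low judgments made during the assessment):

```python
def classify(app):
    """Map an application's cost/value assessment to a quadrant action."""
    high_cost = app["cost"] == "high"
    high_value = app["value"] == "high"
    if high_value:
        # High value: keep if cheap to run, refactor if expensive to maintain
        return "refactor" if high_cost else "keep"
    # Low value: question the need if cheap, replace if expensive but still supported
    return "replace" if high_cost else "validate"

# Illustrative portfolio entries (names and ratings are made up)
portfolio = {
    "order entry": {"cost": "low", "value": "high"},
    "legacy billing": {"cost": "high", "value": "high"},
    "old reporting tool": {"cost": "high", "value": "low"},
}
for name, ratings in portfolio.items():
    print(f"{name}: {classify(ratings)}")
```

In practice the ratings would come from the cost and value-generation questions above, but the decision rule itself stays this simple.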

In this age of automation, the concept of cost needs to be holistic, not just the IT maintenance costs… For a parody of Who Moved My Cheese? touching on automation, look to this Abstruse Goose illustration.

It is not hard to start, but the portfolio is constantly changing, so the work may never be done.