Is the Coronavirus a wake-up call for business continuity?

The increasing spread of the Coronavirus is a great opportunity for companies to revise their Business Continuity Plans (BCPs). In my experience, BCPs too often focus on just the IT aspect of business disruption and not on the situations most likely to disrupt a large multinational organization (trade, workforce disruption…).

Living in a retirement community provides a unique insight into how groups of people react to disruptive events like this, and to the recommendations provided by the CDC. Though there is a very slim likelihood of major disruption for a small, isolated group like where I live, some are already talking about hoarding long shelf-life foodstuffs. It is just part of human nature, but corporations don't think that way.

Getting senior management to understand the impact of large numbers of staff being quarantined, working from home or out sick can help them rethink the different approach a real BCP requires. Back when I worked for Electronic Data Systems, around the turn of this century, I was part of a group that went through a similar exercise for a number of organizations in reaction to the SARS virus.

This current concern brings back memories of those heady days.

Well, it’s 2020

Happy New Year!!

I was talking to some folks the other day who said “Gosh, it’s been 20 years since Y2K”. Some of us used to think that 2020 was impossibly far off. I used to do predictions of technology and adoption for EDS and HP. Each year (for about a decade), I’d give about 10 things to look for in the coming years and at the end of the year I’d grade my predictions.

Now that I am retired, even the predictions are receding into the rear view mirror and in some ways they appear naive. In other ways, they’ve held up well.

When I worked in HP labs (almost a decade ago), I remember writing a piece on the impact of the technology trends on services. One of the foundation elements was about the conflict within our expectations.

“We live in a world of conflict:

  • Simple, yet able to handle complexity
  • Standard, yet customizable
  • Secure, yet collaborative
  • Low cost, yet high quality
  • Sustainable, yet powerful
  • Mobile, yet functionally rich”

Some of those conflicts have been resolved to the point where they are barely background noise, while others remain as challenging as ever. A good example of one that has faded into the background is gamification, which is now ubiquitous.

The abundance of capability (and possibility) that I tried to represent with an illustration (also almost a decade old) still seems to hold true. Possibilities for new value remain all around us.

Hopefully this year will allow you to expand your horizons and pursue the goals you’ve been setting.

Cyber Observer – a powerful security dashboard that reminded me of an earlier time

The other day I received a note on LinkedIn from an individual I worked with back at EDS. He mentioned that the company he currently works for is focused on security. Since security needs to be at the top of the list of concerns at all levels of organizations today, I thought I’d take a deeper look.

The software is called Cyber Observer (they have a fairly effective marketing overview video on their site). Though this solution is focused on enterprise security monitoring, it reminded me of the data center monitoring programs that came out in the late 80s and 90s, which provided status dashboards and information focused on reducing time to action for system events. CA Unicenter was one popular example.

Back in the late 80s I had system administration leadership over the largest VAX data center that GM had. We had hundreds of VAXen, PDPs and HP 1000s of all sizes scattered over nine or ten plants. Keeping them all running required some significant insight into what was going on at a moment’s notice.

Fortunately, today folks can use the cloud for many of the types of systems we had to monitor, and the hardware monitoring is outsourced to the cloud providers. Plant-floor systems, though, are still an area that needs to be monitored.

One of the issues we had keeping hundreds of machines running was that the flood of minor issues being logged and reported can easily lead to ‘alert fatigue’. Those responsible can lose the big picture (Chicken Little syndrome). Back then, we put a DECTalk in our admin area; when something really serious happened, it yelled at us until it was fixed. We thought that was pretty advanced for its time.
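
The same gating idea is trivial to sketch today. Here’s a minimal Python illustration (the events and the escalate() stand-in for our DECTalk are hypothetical): everything still gets logged, but only events at or above a chosen severity get loud.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ops")

def escalate(message: str) -> None:
    # Stand-in for the DECTalk: page, speak, or otherwise nag until fixed
    print(f"*** ATTENTION: {message} ***")

events = [
    ("INFO", "nightly backup completed"),
    ("WARNING", "disk 80% full on plant node 3"),
    ("CRITICAL", "line controller unreachable"),
]

for severity, message in events:
    level = getattr(logging, severity)  # INFO=20, WARNING=30, CRITICAL=50
    log.log(level, message)             # everything is still recorded...
    if level >= logging.ERROR:          # ...but only the serious stuff gets loud
        escalate(message)
```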

I asked how Cyber Observer handled this information-overload concern, since the software is primarily targeted at leaders and executives – and we all know the attention span of most managers for technical issues. I also asked about a proactive (honeypot-based) vs. a reactive approach for the software. Now that both soft honeypots (HoneyD, among others) and hard honeypots (Canary) are relatively easy to access, they should be part of any large organization’s approach to security.

He explained that the alert and dashboarding system was very tunable at both the organizational and individual level.

Although it takes more of a dashboard approach to sharing information, details are available to show ‘why’ a concern reached the level it did.

An example he gave me was a new domain administrator being added in Active Directory. The score next to the account-management domain would go down and show red. When the user drills down, the alert would state that a new domain admin was added. The score in the system would be reduced, and eventually the system baseline would adjust to the change, although the score would remain lower. The administrative user would have to manually change the threshold or remove the new domain admin (if it is rogue or unapproved); only then would the score go back to its previous number (if no other events took place). Some threshold tolerances come preset out of the box based on expected values (for example, if the NAC is in protect mode rather than alert mode, or if Active Directory password complexity is turned on, those scores are preset). Others are organizationally dependent, and the user needs to set the proper thresholds, as with the number of domain admins.
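
Cyber Observer’s internals aren’t public, but the behavior he described – a score that drops on a threshold breach, a baseline that adapts while the score stays depressed until a human intervenes – can be sketched in a few lines of Python (all names and numbers below are mine, not theirs):

```python
from dataclasses import dataclass

@dataclass
class ControlScore:
    """One monitored control, e.g. 'number of AD domain admins'."""
    name: str
    threshold: float            # acceptable value, preset or org-defined
    baseline: float             # what the system has learned to expect
    penalty_per_unit: float = 10.0

    def score(self, observed: float) -> float:
        """100 when within threshold; drops as the observed value exceeds it."""
        excess = max(0.0, observed - self.threshold)
        return max(0.0, 100.0 - excess * self.penalty_per_unit)

    def absorb(self, observed: float, rate: float = 0.1) -> None:
        """The baseline drifts toward the observed value over time,
        but the score stays low until the threshold is raised or the
        offending change is rolled back."""
        self.baseline += rate * (observed - self.baseline)

admins = ControlScore("AD domain admins", threshold=4, baseline=4)
print(admins.score(5))   # a new (possibly rogue) admin -> 90.0, shows red
admins.absorb(5)         # the baseline adapts; the score does not
admins.threshold = 5     # manual approval of the change...
print(admins.score(5))   # ...restores the score to 100.0
```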

He also mentioned that if the system were connected to a honeypot, its information could be monitored and the level of concern adjusted based on shifts in that ‘background radiation’.

I don’t know much about this market and who the competitors are, but the software looked like a powerful tool that can be added to take latency out of the organizational response to this critical area. As machine learning techniques improve, the capabilities in this space should increase, recognizing anomalies more effectively over time. I was also not able to dig into the IoT capabilities, which are a whole other level of information flow and concern.

The organization has a blog covering their efforts, but I would have expected more content, since there hasn’t been a post this year.

Things are not always what they seem – a discussion about analytics

Have you ever been in a discussion about a topic, thinking you’re talking about one area, only to find out later it was about something else altogether?

We’ve probably all had that conversation with a child, where they say something like “That’s a really nice ice cream cone you have there.” Which sounds like a compliment on your dairy delight selection but in reality is a subtle way of saying “Can I have a bite?”

I was in a discussion with an organization about a need they had. They asked me a series of questions and I provided quick, stream-of-consciousness responses… The further I got into the interaction, the less I understood about what was going on. This is a summary of the exchange:

1) How do you keep up to speed on new data science technology? I read and write blogs on technical topics, as well as read trade publications. I also do some recreational programming to keep up on trends and topics. On occasion I have audited classes on both edX and Coursera (examples include gamification, Python, cloud management/deployment, R…).

2) Describe what success looks like in the context of data science projects? Success in analytics efforts means defining and understanding business goals, developing insight into them, and addressing them using the available data and business strategies. Sometimes this may only involve the development of better strategies and plans, but in other cases the creation of contextual understanding and actionable insight allows for continuous improvement of existing or newly developed processes.

3) Describe how you measure the value of a successful data science application. I measure value by the business impact, through the change in behavior or business results. It is not about increased insight but about actions taken.

4) Describe successful methods or techniques you have used to explain the value of data science, machine learning and advanced analytics to business people. I have demonstrated the impact of a gamification effort by using business-process metrics gathered beforehand and showing their direct relationship with post-implementation performance. Granted, correlation does not prove causation, but with multiple base cases, and by validating performance improvement across a range of trials and process improvements, a strong business case can be developed using a recursive process built on the definition of mechanics, measurement, behavior expectations and rewards (see the sketch below).

I’ve used a similar approach in the IoT space, where I’ve worked on and off with machine data collection and data analysis since entering the workforce in the 1980s.
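
Here’s a sketch of that kind of base-case comparison in Python (the trial numbers are invented; real ones would come from the process metrics described above):

```python
from statistics import mean

def uplift(pre: list[float], post: list[float]) -> float:
    """Percent improvement of post-implementation metrics over the baseline."""
    return (mean(post) - mean(pre)) / mean(pre) * 100

# Hypothetical process metrics (e.g., tickets closed per week) from several
# independent trials -- multiple base cases strengthen the case that the
# gamification effort, not chance, drove the change.
trials = {
    "billing": ([42, 45, 44], [51, 53, 50]),
    "support": ([30, 28, 31], [36, 35, 37]),
    "intake":  ([55, 57, 54], [60, 63, 61]),
}
for name, (pre, post) in trials.items():
    print(f"{name}: {uplift(pre, post):+.1f}%")
```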

5) Describe the importance of model governance (model risk management) in the context of data science, advanced analytics, etc. in financial services. Without a solid governance model, you don’t have the controls and cannot develop the foundational level of understanding. The model should provide rigor sufficient to move from supposition to knowledge. The organization needs to be careful not to make the process too rigid, though, since you need to take advantage of what you learn along the way and make adjustments, taking latency out of the decision-making/improvement process. Like most efforts today, a flexible/agile approach should be applied.

6) Describe who you interacted with (team, function, person) in your current role, on average, and roughly what percent of your time you spent with each type of function/people/team. In various roles I spent time with CEOs/COOs and senior technical decision makers in Fortune 500 companies (70-80% of my time when I was the chief technologist of Americas application development for HP). Most recently, with Raytheon IT, I spent about 50% of my time with senior technical architects and 50% with IT organization directors.

7) Describe how data science will evolve during the next 3 to 5 years. What will improve? What will change? Every organization should have a plan in place to leverage improved machine learning and analytics algorithms, based on the abundance of data, networking and intellectual property available. Cloud computing techniques will also provide an abundance of computing capability that can be brought to bear on the enterprise environment. For most organizations, small sprint efforts need to be applied to understanding both the possibilities and the implications. Enterprise efforts will still take place, but they will likely not have the short-term impact that smaller, agile efforts will deliver. I wrote a blog post about this topic earlier this month. Both the scope and style of projects will likely need to change. It may also involve using more contract labor to get the depth of experience needed in the short term. The understanding and analysis of metadata (blockchains, related processes, machines…) will also play an ever-increasing role, since it will supplement the depth and breadth of contextual understanding.

8) Describe how you think about choosing the technical design of data science solutions (what algorithms, techniques, etc.).

I view the approach to be similar to any other architectural technical design. You need to understand:

  • the vision (what is to be accomplished)
  • the current data and systems in place (current situation analysis)
  • the skills of the personnel involved (resource assessment)
  • the measurement approach to be used (so that you have both leading and lagging indicators of performance)

then you can develop a plan and implement your effort, validating and adjusting as you move along.

How do you measure the value/impact of your choice?

You need a measurement approach that is both tactical (progress against leading indicators) and strategic (validation by lagging indicators of accomplishment). Leading indicators look ahead to make sure you are on the right road, while lagging indicators look behind to validate where you’ve been.
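
A toy Python example of the distinction (the project numbers are invented for illustration):

```python
def leading_indicator(completed: int, planned: int) -> float:
    """Tactical: progress against plan -- are we on the right road?"""
    return completed / planned

def lagging_indicator(realized: float, target: float) -> float:
    """Strategic: realized value against target -- did we get there?"""
    return realized / target

# Hypothetical mid-project check: 6 of 10 planned model features delivered,
# and last quarter's deployment realized $80k of a $100k value target.
print(f"leading: {leading_indicator(6, 10):.0%}")
print(f"lagging: {lagging_indicator(80_000, 100_000):.0%}")
```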

9) Describe your experience explaining complex data to business users. What do you focus on?

The most important aspect of explaining complex data is to describe it in terms the audience will understand. No one cares how hard it was to do the analysis, they just want to know the business impact, value and how it can be applied.

Data visualization needs to take this into account and explain the data to the correct audience – not everyone consumes data using the same techniques. Some people will only respond to spreadsheets, while others would like to have nice graphics… Still others want business simulations and augmented reality techniques to be used whenever possible. If I were to have 3 rules related to explaining technical topics, they would be:

  1. Answer the question asked
  2. Display it in a way the audience will understand (use their terminology)
  3. Use the right data

At the end of that exchange I wasn’t sure if I’d just provided some free consulting, gone through a job interview or simply chewed the fat with another technologist. Thoughts???

Is AI a distraction???

I was recently in an exchange with a respected industry analyst who stated that AI is not living up to its hype – they called AI ‘incremental’ and a ‘distraction’. This caught me a bit by surprise, since my view is that there are more capabilities and approaches available for AI practitioners than ever before. It may be the approach of business and tech decision makers that is at fault.

It got me thinking about the differences between ‘small’ AI efforts and enterprise AI efforts. Small AI efforts are those innovative, quick efforts that can prove a point and deliver value and understanding in the near term. Big AI (and automation) efforts are those associated with ERP and other enterprise systems that take years to implement. The latter are likely the kinds of efforts the analyst was involved with.

Many of the newer approaches enable the use of the abundance of capabilities available to mine value out of the existing data that lies fallow in most organizations. These technologies can be tried out and applied in short, well-defined sprints with clear success criteria. If, along the way, the answers are not quite what was expected, adjustments can be made, assumptions changed, and value can still be generated. The key is going into these projects with expectations, while remaining flexible enough to change based on what is known rather than just supposition.
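
As a toy illustration of such a sprint gate, here’s a sketch assuming scikit-learn and a stand-in open dataset; the point is that the success criterion is agreed up front, and a miss triggers adjustment rather than abandonment:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

SUCCESS_CRITERION = 0.90  # agreed with the business before the sprint starts

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
score = accuracy_score(y_test, model.predict(X_test))

if score >= SUCCESS_CRITERION:
    print(f"{score:.2%} -- criterion met, expand the pilot")
else:
    print(f"{score:.2%} -- adjust assumptions and iterate")
```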

These approaches can be implemented across the range of business processes (e.g., budgeting, billing, support) as well as information sources (IoT, existing ERP or CRM). They can automate the mundane and free up high-value personnel to focus on generating even greater value and better service. Many times, these focused issues are unique to an organization or industry and provide immediate return. This is generally not the focus of enterprise IT solutions.

This may be the reason some senior IT leaders are disillusioned with the progress of AI in their enterprise. The smaller, high-value projects’ contributions are rounding error at their scope. They are looking for the big hit, but a big enterprise solution is by its very nature a compromise, unlikely to really move the ball in any definitive way – everyone deploying the same enterprise solution will have access to the same tools…

My advice to those leaders disenchanted with the return from AI is to shift their focus. Get a small team out there experimenting with ‘the possible’. Give them clear problems (and expectations) but allow them the flexibility to bring in some new tools and approaches. Make them show progress, but be flexible enough to shift expectations when their results point in a different direction, based on facts and results. There is the possibility of fundamentally different levels of cost and value generation.

The keys are:

1) Think about the large problems but act on those that can be validated and addressed quickly – invest in the small wins

2) Have expectations that can be quantified, and focus on value – these projects are not a ‘science fair’ or a strategic campaign, just a part of the business

3) Be flexible and adjust as insight is developed – just because you want the answer to be ‘yes’ doesn’t mean it will be, but any answer is valuable when compared to a guess

Sure, this approach may be ‘incremental’ (to start) but it should make up for that with momentum and results. If the approach is based on expectations, value generation and is done right, it should never be a ‘distraction’.

Lessons for IT services

Last night, I went to a meeting of our local ham radio group and had a side discussion with another individual who also worked in the IT services space for decades. I was with Electronic Data Systems (EDS) for the majority of my career. He was with Perot Systems. We were comparing notes about what caused the demise of these organizations and came up with two main issues:

  • People are the service company – When HP purchased EDS, or when Dell purchased Perot Systems, both tried to apply their deep product understanding to the services segment. For a product company, people are overhead, and the efficient generation of SKUs is king! For a service company, access to people is what you’re actually selling. In both cases, the HR organizations wanted to lower costs, so they initiated early retirement offers that caused the flight of many senior, knowledgeable service personnel. Suddenly, they had customers screaming, with no access to the depth of expertise they had relied upon. With nothing to sell, customer retention spirals down and costs go up – the exact opposite of what the leadership intended.
  • Value the difference – EDS in the late 90s was still organized by industry, leveraging support organizations for technologies. This was different from most IT service organizations, which were organized around services (like data centers or telecom) or individual customers. Customers were actually buying a relationship based on industry expertise – but that made it difficult to compare between vendors.
    In 1998-99, EDS turned itself inside out, organizing around application development and maintenance, infrastructure, consulting…, with a leveraged, industry-oriented sales organization. Customers were initially happy, since they could see how much a network connection or support for a computer and OS cost.
    EDS also began to sell off many of its industry-specific IP elements (e.g., financial systems, bank machines…). Though this action harvested cash, it began a spiral into ever more competitive commodity services, fueled by the early cloud computing techniques EDS itself instigated. Profitability and customer retention began a steady decline.

In both cases, the organizations were brought down by differentiators taken for granted. Once gone, they were difficult to reproduce.

Service organizations need to really understand what makes their relationships sticky and view that difference as a strength, not as something too complex for the finance or HR organizations to understand. Unfortunately, hindsight is 20/20, and these lessons may seem obvious now. Let’s hope that DXC and NTT Data (who now own the remnants of EDS and Perot Systems) keep their eyes on the ball.

Simplicity, the next big thing?

Recently, Dynatrace conducted a survey of CIOs on their top challenges. Of the top six, almost all deal with concerns about complexity. There is no doubt that numerous technologies are being injected into almost every industry from a range of vendors. Integration of this multivendor cacophony is rife with security risks and misunderstanding – whether in your network or your IoT vendor environment.

Humans have a limited capacity to handle complexity before they throw up their hands and just let whatever happens wash over them. That fact is one of the reasons AI is being viewed as the savior for the future. Back in 2008, I wrote a blog post for HP that mentioned:

“the advent of AI could allow us to push aside a lot of the tasks that we sometimes don’t have the patience for, tasks that are too rigorous or too arduous.”

IT organizations need to shift their focus back to making the business environment understandable, not just injecting more automation or data collection. Businesses need to take latency out of decision making and increase the level of understanding and confidence. A whole new kind of macro-level (enterprise) human interface design is required. Unfortunately, this market is likely a bit too nebulous to be targeted effectively today, other than through vague terms like analytics… But based on the survey results, large-scale understanding (and then demand) appears to be dawning on leadership.

The ROI of efforts to simplify and encourage action should be higher than that of just adding a new tool to the sprawling portfolio most organizations maintain. We’ll see where the monies go, though, since that ROI is likely to be difficult to prove when compared to the other shiny objects available.

Six thoughts on mobility trends for 2018

Let’s face it, some aspects of mobility are getting long in the tooth. The demand for more capabilities is insatiable. Here are a few areas where I think 2018 will see some exciting capabilities develop. Many of these are not new, but their interactions and intersection should provide some interesting results and thoughts to include during your planning.

1. Further blurring and integration of IoT and mobile

We’re likely to see more situations where mobile devices recognize the IoT devices around them to enhance contextual understanding for the user. We’ve seen some use of NFC and Bluetooth to share information, but approaches that embrace the environment and act upon the information available are still in their infancy. This year should provide some significant use cases and maturity.
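
As one small, concrete example of the plumbing involved: scanning for nearby Bluetooth LE devices from Python is only a few lines today. This sketch assumes the third-party bleak library (pip install bleak); mapping discovered beacons to meaning is where the real contextual work would live.

```python
import asyncio

from bleak import BleakScanner  # third-party BLE library

async def sense_context() -> None:
    # Scan for nearby BLE advertisers for a few seconds
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        # A real app would map known beacons to locations or assets,
        # enriching what the user is doing with where they're doing it
        print(device.address, device.name or "<unnamed>")

asyncio.run(sense_context())
```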

2. Cloud Integration

By now most businesses have done much more than stick their toe in the cloud Everything-as-a-Service (XaaS) pool. As the number of potential devices in the mobility and IoT space expands, the flexibility and time-to-action that cloud solutions facilitate need to be understood and put into practice. It is also time to take all the data coming in from these devices and transform that flow into true contextual understanding and action, which likewise requires a dynamic computing environment.

3. Augmented reality

With augmented reality predicted to expand to a market somewhere between $120 and $221 billion in revenue by 2021, we’re likely to see quite a bit of innovation in this space. The sheer width of that range demonstrates the lack of real understanding. 2018 should be a year where AR gets real.

4. Security

All discussions of mobility need to include security. Heck, the first month of 2018 should have nailed the importance of security into the minds of anyone in the IT space. There were more patches (and patches of patches) on a greater range of systems than many would have believed possible just a short time ago. Recently, every mobile store (Apple, Android…) was found to contain nefarious software that had to be excised. Mobile developers need to be ever more vigilant, not just about the code they write but about the libraries they use.

5. Predictive Analytics

Context is king, and the use of analytics to increase understanding of the situation and possible responses is going to continue to expand. As capabilities advance, only our imagination will limit where and when mobile devices become useful. Unfortunately, the same can be said about the security issues based on predictive analytics.

6. Changing business models

Peer-to-peer solutions continue to be the rage, but with the capabilities listed above, whole new approaches to value generation are possible. There will always be early adopters willing to play with these, and with the deeper understanding possible today, new approaches to crossing the chasm will be demonstrated.

It should be an interesting year…

Groundhog Day, IoT and Security Risks

Lately I’ve been hearing a great deal of discussion about IoT and its application in business. I get a Groundhog Day feeling, since in some sectors this is nothing new.

Back in the late 70s and early 80s, I spent all my time on data collection from factory equipment and developing analytics programs for the data collected. The semiconductor manufacturing space had most of its tooling and inventory information collected and tracked. Since that manufacturing segment is all about yield management, analysis was a business imperative. Back then, though, you had to write your own analytics and graphics programs.

The biggest difference today, though, is the security concerns. The ease of data movement and connectivity has allowed the industry’s lust for convenience to open our devices and networks to a much wider aperture of possible intruders. Though there are many risks in IoT, here are a few to keep in mind.

1) Complexity vs. Simplicity and application portfolio expansion

Businesses have had industrial control systems for decades. Now that smart thermostats, water meters and doorbells are becoming commonplace, managing this range of devices in the home has required user interfaces designed for the public, not for experts. Those same techniques are being applied back into businesses and can start a battle of complexity vs. simplicity.

The investment in the IoT space by the public dwarfs the investment by most industries. These newer, more automated and ergonomic tools still need to tackle an environment that is just as complex for the business as it’s always been – in fact, if anything, more devices are brought into the business environment every day.

Understanding the complexity of vulnerabilities is a huge and ever-growing challenge. Projects relying on IoT devices must be defined with security in mind and yet interface effectively with the business. These devices will pull new software into the business and expand the application portfolio. Understand the capabilities and vulnerabilities of these additions.

2) Vulnerability management

Keeping these IoT devices up to date is a never-ending problem. One of the issues of a rapidly changing market segment like this is that devices will have a short lifespan. Businesses need to understand that they will still need to have their computing capabilities maintained. Will the vendor stand behind their product? How critical to the business is the device? As an example of the difficulties, look at the patch level of the printers in most businesses.
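
A minimal sketch of what that tracking might look like – a fleet inventory checked against a manifest of the latest vendor firmware. All device and version data here is hypothetical:

```python
# Latest known firmware per model, maintained from vendor advisories
latest_firmware = {"thermostat-x1": "2.4.1", "printer-m402": "5.10.0"}

fleet = [
    {"id": "hvac-07",   "model": "thermostat-x1",  "firmware": "2.4.1"},
    {"id": "prn-lobby", "model": "printer-m402",   "firmware": "4.8.2"},
    {"id": "prn-eng",   "model": "printer-legacy", "firmware": "1.0.3"},
]

for device in fleet:
    latest = latest_firmware.get(device["model"])
    if latest is None:
        # Short device lifespans: the vendor may no longer publish updates
        print(f"{device['id']}: no vendor firmware available -- "
              "assess criticality and plan replacement")
    elif device["firmware"] != latest:
        print(f"{device['id']}: running {device['firmware']}, latest is {latest}")
```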

3) Business continuity

Cyber-attacks were unknown when I started working in IoT. Today, denial-of-service attacks and infections make the news continuously. It is not about ‘if’ but ‘when’ and ‘what you’re going to do about it’. These devices are not as redundant as IT organizations are used to. When they can’t share the data they collect or control the machines as they should, what will the business do? IoT can add a whole other dimension to business continuity planning that will need to be thought through.

4) Information leakage

Many IoT devices call home (back to the businesses that made them). Are those transfers encrypted? What data do they carry? One possible unintended consequence is that information can be derived (or leaked) from these devices. Just as your electric meter’s information can be used to derive whether you’re home, a business’s IoT devices can share information about production volume and the types of work being performed. The business will need to develop a deeper comprehension of these analysis and data-sharing risks, regardless of the business or industry, and adjust accordingly.
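
To make the leakage concrete, here’s a toy Python sketch of how much an outsider could infer from nothing but periodic power readings (the numbers are invented):

```python
# One shift's hourly power readings from a single production line
hourly_kwh = [0.3, 0.3, 0.4, 0.3, 2.1, 2.4, 2.2, 0.4]

IDLE = 0.5  # kWh below which the line is presumed idle
active_hours = [h for h, kwh in enumerate(hourly_kwh) if kwh > IDLE]
print(f"production likely running during hours {active_hours}")
# An outsider with this feed learns shift patterns and volume trends
# without ever touching the production systems themselves.
```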

The Internet of Things has the potential to bring a deeper understanding of the business. Accordingly, security at both the device and network levels needs to develop just as strongly. The same analytics enabling devices to perform their tasks can also be used nefariously – or to make the environment stronger.

When it is time to leave…

Sometimes you’ll initiate leaving a company; other times it may just happen out of the blue. In any case, there are a few things to think through before leaving a company… while you still have access to corporate email and phone systems.

    1. Have a personal plan; if you don’t have one, get one (make a budget…), though that will likely need to wait until you have time to think about it. What do you want to accomplish in the next 30, 60 or 90 days? Don’t get lost.
    2. Create a list of efforts you are working on and who your backup is – give this to your manager.
    3. Leave an out of office message for those who need to access your efforts/customers.
    4. Make sure your leader and those who will need to know (HR) have a valid address and phone number.
    5. Archive (non-company owned) materials for yourself, so you can reference them later.
    6. Make sure your manager knows about any materials that are in shared resources (e.g., OneDrive for business) that may be accessed by others but could go away when your accounts are removed.
    7. Does your company have any gamification efforts in health or other areas where there are monetary rewards? Make sure you cash them in.
    8. Save all the information on benefits for ex-employees that you can find (e.g., COBRA). A larger company will give you a number of documents when you leave, but there are alumni groups and online resources as well. They can be a tremendous resource.
    9. Send a note to those who have been important to your work with the company to let them know you’ll be gone.
    10. Try to get a copy of anything you sign.

Help make the transition go well for everyone. Meet with your supervisor and offer to do anything possible to help fill the void created by your departure. It can be a rough time for them too.

Don’t burn any bridges; you never know which ones you may need to cross again in the future.

After you’ve left:

  1. Check on unemployment benefits. Depending on the state, even forced early retirement can have unemployment benefits associated with it. Health insurance will likely be an important area of focus as well.
  2. Be positive – this is just another stop along a journey. Spend your extra time getting fit, educated or doing something that will improve your personal outlook. Prepare yourself for the emotional roller coaster to follow.
  3. Update LinkedIn and your résumé. Some people scoff at LinkedIn, but this is what it is for.

I am sure I’ve missed something but those were what came to mind this morning. Drop a comment with ideas to help things go well.

#unemployment #work #COBRA