I was talking to some folks the other day who said “Gosh, it’s been 20 years since Y2K”. Some of us used to think that 2020 was impossibly far off. I used to do predictions of technology and adoption for EDS and HP. Each year (for about a decade), I’d give about 10 things to look for in the coming years and at the end of the year I’d grade my predictions.
Now that I am retired, even the predictions are receding into the rear view mirror and in some ways they appear naive. In other ways, they’ve held up well.
When I worked in HP labs (almost a decade ago), I remember writing a piece on the impact of the technology trends on services. One of the foundation elements was about the conflict within our expectations.
“We live in a world of conflict:
Simple, yet able to handle complexity
Standard, yet customizable
Secure, yet collaborative
Low cost, yet high quality
Sustainable, yet powerful
Mobile, yet functionally rich”
Some of those conflicts have been resolved to the point where they are barely background noise, while others remain as challenging as ever. A good example of the former is gamification, which is now ubiquitous.
The abundance of capability (and possibility) that I tried to represent with the following illustration (that is also almost a decade old) still seems to hold true. Possibilities for new value remain around us everywhere.
Hopefully this year will allow you to expand your horizons and address the goals you’ve been setting.
The other day I received a note in LinkedIn from an individual I worked with back in EDS. He mentioned a company he is currently working for that is focused on security. Since security needs to be at the top of the list of concerns at all levels of organizations today, I thought I’d take a deeper look.
The software is called Cyber Observer (they have a fairly effective marketing overview movie on their site). Though this solution is focused on enterprise security monitoring, it reminded me of the data center monitoring programs that came out in the late 80s and 90s that provided status dashboards and information focused on reducing time to action for system events. CA Unicenter was one that was popular.
Back in the late 80s I had system administration leadership over the largest VAX data center that GM had. We had hundreds of VAXen, PDPs and HP 1000s of all sizes scattered over nine or ten plants. Keeping them all running required some significant insight into what was going on at a moment’s notice.
Fortunately, today folks can use the cloud for many of the types of systems we had to monitor, and the hardware monitoring is outsourced to the cloud providers. Plant floor systems are still an area that needs to be monitored.
One of the issues we had keeping hundreds of machines running was that the flood of minor issues being logged and reported can easily lead to ‘alert fatigue’. Those responsible can lose the big picture (Chicken Little syndrome). Back then, we put a DECTalk in our admin area; when something really serious happened, it yelled at us until it was fixed. We thought that was pretty advanced for its time.
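The idea behind that DECTalk setup can be sketched in a few lines: log everything, but only escalate events above a critical severity to the loud channel. This is an illustrative sketch, not any particular product's logic; the severity scale and event names are invented.

```python
# Severity-based alert routing: log every event, but only "yell"
# (escalate) when an event crosses the critical threshold.
# The 1-3 severity scale and the sample events are assumptions.

CRITICAL = 3  # 1 = info, 2 = warning, 3 = critical

def route_event(severity, message, log, speak):
    """Record every event; escalate only critical ones to the loud channel."""
    log.append((severity, message))
    if severity >= CRITICAL:
        speak(message)  # e.g., text-to-speech, a pager, a wall display

log, spoken = [], []
route_event(1, "disk 80% full", log, spoken.append)
route_event(3, "primary VAX unreachable", log, spoken.append)

print(len(log))  # both events end up in the log
print(spoken)    # only the critical one is escalated
```

The point of the threshold is exactly the alert-fatigue fix: humans only hear about the events that genuinely need them.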
I asked how Cyber Observer handled this information-overload concern, since the software is primarily targeted at leaders/executives — and we all know the attention span most managers have for technical issues. I also asked about a proactive (use of honeypots) vs. a reactive approach for the software. Now that both soft (HoneyD, among others) and hard honeypots (Canary) are relatively easy to access, they should be part of any large organization’s approach to security.
He explained that the alert and dashboarding system was very tunable at both the organizational and individual level.
Although it has more of a dashboard approach to sharing the information, details are available to show ‘why’ the concern reached the appropriate level.
An example he gave me was a new domain administrator being added in Active Directory. The score next to the account-management domain would go down and show red. When the user drills down, the alert would state that a new domain admin was added. The score in the system would be reduced, and eventually the system baseline would adjust to the change, although the score would remain lower. The administrative user would have to manually raise the threshold or remove the new domain admin (if it is rogue or unapproved); only then would the score go back to its previous number (assuming no other events took place). Some threshold tolerances come preset out of the box based on expected values (for example, whether the NAC is in protect mode rather than alert mode, or whether Active Directory password complexity is turned on). Others are organizationally dependent, and the user needs to set the proper thresholds, as with the number of domain admins.
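The domain-admin example above boils down to a score that drops while an observed value exceeds its threshold and recovers only when an administrator either accepts the change (raises the threshold) or reverts it. Here is a minimal sketch of that mechanic — this is my illustration, not Cyber Observer's actual scoring logic, and the class name, penalty size and thresholds are all assumptions.

```python
# Toy model of a monitored domain score: the score drops while the
# observed count exceeds the expected threshold, and recovers once the
# threshold is raised (change accepted) or the count returns to normal.

class DomainScore:
    def __init__(self, threshold, penalty=20):
        self.threshold = threshold  # e.g., expected number of domain admins
        self.penalty = penalty      # points deducted while out of tolerance
        self.score = 100

    def evaluate(self, observed):
        """Recompute the score from the current observation."""
        self.score = 100 - self.penalty if observed > self.threshold else 100
        return self.score

acct_mgmt = DomainScore(threshold=3)
print(acct_mgmt.evaluate(3))  # 100 – within the expected baseline
print(acct_mgmt.evaluate(4))  # 80  – a new domain admin appeared, score drops
acct_mgmt.threshold = 4       # the admin manually accepts the change
print(acct_mgmt.evaluate(4))  # 100 – score recovers
```

Removing the rogue admin instead (observed back to 3) would recover the score the same way, which matches the two remediation paths described above.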
He also mentioned that if the system were connected to a honeypot, its information could be monitored as well, adjusting the level of concern based on shifts in the ‘background radiation’.
I don’t know much about this market and who the competitors are, but the software looked like a powerful tool that can be added to take latency out of the organizational response to this critical area. As machine learning techniques improve, the capabilities in this space should increase, recognizing anomalies more effectively over time. I was also not able to dig into the IoT capabilities, which are a whole other level of information flow and concern.
The organization has a blog covering their efforts, but I would have expected more content, since there hasn’t been a post this year.
For me, the simple definition of gamification is “Metrics-based behavior modification” or using game mechanics to influence real-world behavior. Some view this as a way to improve the worker experience for business functions, while others view it less positively as “exploitware”. We see it all around us in healthcare, retail and new areas all the time.
You may wonder “Can sitting on a help desk and answering calls from consumers be turned into a competitive game that improves the experience for everyone?” — it turns out it can. Many activities can be tracked, rewarded and tuned to the needs of the day. People respond when you give “points” for things like “resetting passwords”, “resolving install problems”… as long as the points mean something to the individuals doing the tasks.
Human behavior can be manipulated by pulling on just a few of the right strings. This is one way companies can tap into the streams of data and the inherent human intellect in their business to drive value. Businesses just need to listen, invest in understanding what drives those they want to influence, and define systems to meet that unspoken need. One key, though, is to not make it so blatant that those involved feel manipulated.
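The help-desk example above — points for resetting passwords, resolving install problems, and so on — can be sketched as a tiny scoring loop. The task names, point values and agent names are all invented for illustration; a real system would also tune these values "to the needs of the day".

```python
# "Metrics-based behavior modification" in miniature: award points per
# completed task type and tally a leaderboard. Point values are the
# tuning knob that steers what agents prioritize.

from collections import Counter

POINTS = {"password_reset": 5, "install_problem": 10}  # illustrative values

def score(events):
    """events: list of (agent, task) pairs -> total points per agent."""
    board = Counter()
    for agent, task in events:
        board[agent] += POINTS.get(task, 0)  # unknown tasks earn nothing
    return board

board = score([
    ("ana", "password_reset"),
    ("ana", "install_problem"),
    ("raj", "password_reset"),
])
print(board.most_common())  # ana leads with 15 points
```

Raising the value of `install_problem` relative to `password_reset` is exactly the "tuned to the needs of the day" lever: the metric quietly redirects effort.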
When I saw this report of the Chinese state news agency Xinhua introducing AI anchors, I had to comment on it. Chinese viewers saw a simulation of a regular Xinhua news anchor named Qiu Hao. There were examples of the simulation speaking Chinese as well as English. I can see how this would be a very efficient way to produce a news program in a number of languages, but it does edge into the uncanny valley. The simulation is not perfect, but then self-driving cars were viewed as unlikely at the turn of this century.
I do worry a bit about what effect this will have on the current “opinion as news” trend that seems to be taking over many news outlets. I don’t know about you, but I am seeing more and more opinion and fewer facts presented to enable me to make up my own mind about events.
I’ve not read through the whole thing, but the intro starts out with:
America’s prosperity and security depend on how we respond to the opportunities and challenges in cyberspace. Critical infrastructure, national defense, and the daily lives of Americans rely on computer-driven and interconnected information technologies. As all facets of American life have become more dependent on a secure cyberspace, new vulnerabilities have been revealed and new threats continue to emerge.
Looks like a document worth understanding.
It defines four pillars for a national approach to cyber-security:
Protect the American People, the Homeland, and the American Way of Life
Promote American Prosperity
Preserve Peace through Strength
Advance American Influence
It will be interesting to see how the impacts of actions along these lines will be measured and felt — something technologists should watch.
The other day I was asked a question: If you were to tell a group of students what key takeaways you would have to share, what would they be?
I thought for a moment and replied:
1) Listen – You’ll never learn unless you listen to what’s being said and what’s going on around you. The answer is not always ‘yes’, and that’s one of the reasons iterative development is so prevalent. The more you listen, internalize and appreciate, the greater the opportunity to understand even more.
2) Continue to sharpen the sword – Today, the world is ever changing. Everyone needs to keep learning and improving. There are always new areas to explore and skills to develop. Besides, it keeps life interesting too.
3) Leaders must have followers – If you want to be a leader, you need to cultivate your network. One great way to have support is to first support others. The concept of the servant leader can be critical. Closely related to being a leader is the need to always have an opinion. It may not always be right, but you will never be able to validate your perspective unless you actually state it – and then listen to others’ perspectives. It is better to hop on and help steer than to stand in the way of progress.
That was a quick, stream of consciousness perspective. I’d be interested in your view of lessons learned about self-development you’d share with others.
I was recently in an exchange with a respected industry analyst who stated that AI is not living up to its hype – they called AI ‘incremental’ and a ‘distraction’. This caught me a bit by surprise, since my view is that there are more capabilities and approaches available for AI practitioners than ever before. It may be the business and tech decision makers’ approach that is at fault.
It got me thinking about the differences between ‘small’ AI efforts and Enterprise AI efforts. Small AI efforts are those innovative, quick efforts that can prove a point and deliver value and understanding in the near term. Big AI (and automation) efforts are those associated with ERP and other enterprise systems that take years to implement. These are likely the kinds of efforts the analyst was involved with.
Many of the newer approaches enable the use of the abundance of capabilities available to mine the value out of the existing data that lies fallow in most organizations. These technologies can be tried out and applied in short sprints whose success criteria can be well-defined. If, along the way, the answers are not quite what was expected, adjustments can be made, assumptions changed, and value can still be generated. The key is going into these projects with expectations while remaining flexible enough to change based on what is known rather than just supposition.
These approaches can be implemented across the range of business processes (e.g., budgeting, billing, support) as well as information sources (IoT, existing ERP or CRM). They can automate the mundane and free up high-value personnel to focus on generating even greater value and better service. Many times, these focused issues can be unique to an organization or industry and provide immediate return. This is generally not the focus of Enterprise IT solutions.
This may be the reason some senior IT leaders are disillusioned with the progress of AI in their enterprise. The smaller, high-value projects’ contributions are rounding error to their scope. They are looking for the big hit, which by its very nature will be a compromise and unlikely to really move the ball in any definitive way – everyone deploying the same enterprise solution will have access to the same tools…
My advice to those leaders disenchanted with the return from AI is to shift their focus. Get a small team out there experimenting with ‘the possible’. Give them clear problems (and expectations) but allow them the flexibility to bring in some new tools and approaches. Make them show progress, but be flexible enough that if their results point in a different direction, expectations shift based on facts and results. There is the possibility of fundamentally different levels of costs and value generation.
The keys are:
1) Think about the large problems, but act on those that can be validated and addressed quickly – invest in the small wins
2) Have expectations that can be quantified and focus on value – projects are not a ‘science fair’ or a strategic campaign, just a part of the business
3) Be flexible and adjust as insight is developed – just because you want the answer to be ‘yes’ doesn’t mean it will be, but any answer is valuable when compared to a guess
Sure, this approach may be ‘incremental’ (to start), but it should make up for that with momentum and results. If the approach is grounded in expectations and value generation, and is done right, it should never be a ‘distraction’.