Things are not always what they seem – a discussion about analytics

Have you ever been in a discussion about a topic, thinking you’re talking about one area, only to find out later it was about something else altogether?

We’ve probably all had that conversation with a child, where they say something like “That’s a really nice ice cream cone you have there,” which sounds like a compliment on your dairy delight selection but is really a subtle way of saying “Can I have a bite?”

I was in a discussion with an organization about a need they had. They asked me a series of questions and I provided a quick stream-of-consciousness response. The further I got into the interaction, the less I understood about what was going on. This is a summary of the interaction:

1) How do you keep up to speed with new data science technology? I read and write blogs on technical topics as well as read trade publications. I also do some recreational programming to keep up on trends and topics. On occasion I have audited classes on both edX and Coursera (examples include gamification, Python, cloud management/deployment, R…).

2) Describe what success looks like in the context of data science projects. Success in analytics efforts is the definition and understanding of business goals, the development of insight into them, and the addressing of those goals using available data and business strategies. Sometimes this may only involve the development of better strategies and plans, but in other cases the creation of contextual understanding and actionable insight allows for continuous improvement of existing or newly developed processes.

3) Describe how you measure the value of a successful data science application. I measure the value based on the business impact, through the change in behavior or business results. It is not about increased insight but about actions taken.

4) Describe successful methods or techniques you have used to explain the value of data science, machine learning, and advanced analytics to business people. I have demonstrated the impact of a gamification effort by comparing previously captured business process metrics directly with post-implementation performance. Granted, correlation does not prove causation, but with multiple base cases and the ability to validate performance improvement across a range of trials and process improvements, a strong business case can be developed using a recursive process based on the definition of mechanics, measurement, behavior expectations, and rewards.
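
To make that concrete, here is a minimal sketch of how such a pre/post comparison might be quantified; the metric, the trial data, and the numbers are all hypothetical:

```python
# Hypothetical sketch: comparing baseline process metrics against
# post-implementation results across several independent trials.
# Metric values and names are illustrative, not from a real engagement.
from statistics import mean

# Cycle time (hours) measured before and after the gamification effort,
# one pair per trial/process.
baseline = [42.0, 39.5, 47.2, 44.1, 40.8]
post =     [35.1, 33.9, 38.4, 36.7, 34.2]

improvements = [(b - p) / b for b, p in zip(baseline, post)]
print(f"Mean improvement: {mean(improvements):.1%}")
print(f"Trials improved: {sum(i > 0 for i in improvements)}/{len(improvements)}")
# Consistent improvement across many base cases strengthens the business
# case, even though correlation alone does not prove causation.
```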

I’ve used a similar approach in the IoT space, where I’ve worked on and off with machine data collection and data analysis since entering the workforce in the 1980s.

5) Describe the importance of model governance (model risk management) in the context of data science, advanced analytics, etc. in financial services. Without a solid governance model, you don’t have the controls and cannot develop the foundational level of understanding. The model should provide rigor sufficient to move from supposition to knowledge. The organization needs to be careful not to have too rigid a process, though, since you need to take advantage of any information learned along the way and make adjustments, taking latency out of the decision-making/improvement process. Like most efforts today, a flexible, agile approach should be applied.
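
As an illustration of that “rigor without rigidity,” a lightweight governance record might look something like the sketch below; the class, fields, and threshold are my own invented example, not a reference to any specific model-risk framework:

```python
# Hypothetical sketch of a lightweight model-governance record: enough
# rigor to move from supposition to knowledge, without a rigid process.
# All names, fields, and thresholds here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str
    training_data: str        # provenance of the data used to build the model
    validation_metric: float  # e.g., AUC from an independent validation set
    approved_by: str = ""     # risk sign-off recorded before production use
    review_notes: list = field(default_factory=list)  # periodic re-validation

    def approve(self, reviewer: str, threshold: float = 0.75) -> bool:
        # The gate provides the rigor; the threshold can be revisited as the
        # organization learns, taking latency out of decision making.
        if self.validation_metric >= threshold:
            self.approved_by = reviewer
            return True
        return False

record = ModelRecord("credit_risk", "1.2", "analytics", "loans_2017Q4", 0.81)
print(record.approve("model-risk-office"))  # -> True
```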

6) Describe who (team, function, person) you interacted with in your current role, on average, and roughly what percent of your time you spent with each type of function/people/team. In various roles I spent time with CEOs/COOs and senior technical decision makers in Fortune 500 companies (when I was the chief technologist of Americas application development with HP, 70-80% of my time). Most recently, with Raytheon IT, I spent about 50% of my time with senior technical architects and 50% with IT organization directors.

7) Describe how data science will evolve during the next 3 to 5 years. What will improve? What will change? Every organization should have in place a plan to leverage improved machine learning and analytics algorithms, based on the abundance of data, networking, and intellectual property available. Cloud computing techniques will also provide an abundance of computing capability that can be brought to bear on the enterprise environment. For most organizations, small sprint efforts need to be applied to understanding both the possibilities and the implications. Enterprise efforts will still take place, but they will likely not have the short-term impact that smaller, agile efforts will deliver. I wrote a blog post about this topic earlier this month. Both the scope and style of projects will likely need to change. It may also involve the use of more contract labor to get the depth of experience needed in the short term to address the needs of the organization. The understanding and analysis of metadata (blockchains, related processes, machines…) will also play an ever-increasing role, since it will supplement the depth and breadth of contextual understanding.

8) Describe how you think about choosing the technical design of data science solutions (what algorithms, techniques, etc.).

I view the approach to be similar to any other architectural technical design. You need to understand:

  • the vision (what is to be accomplished)
  • the current data and systems in place (current situation analysis)
  • the skills of the personnel involved (resource assessment)
  • the measurement approach to be used (so that you have both leading and lagging indicators of performance)

then you can develop a plan and implement your effort, validating and adjusting as you move along.

How do you measure the value/impact of your choice?

You need a measurement approach that is both tactical (progress against leading indicators) and strategic (validation by lagging indicators of accomplishment). Leading indicators look ahead to make sure you are on the right road, while lagging indicators look behind to validate where you’ve been.
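
A toy sketch of what tracking both kinds of indicator might look like; the function names and figures are invented for illustration:

```python
# Illustrative sketch: a tactical leading indicator (progress during the
# project) alongside a strategic lagging indicator (results observed after
# delivery). The function names and figures are invented for illustration.

def leading_indicator(completed_tasks: int, planned_tasks: int) -> float:
    """Looks ahead: are we on the right road?"""
    return completed_tasks / planned_tasks

def lagging_indicator(baseline_cost: float, current_cost: float) -> float:
    """Looks behind: did the effort deliver the expected improvement?"""
    return (baseline_cost - current_cost) / baseline_cost

print(f"Sprint progress (leading): {leading_indicator(18, 24):.0%}")
print(f"Cost reduction (lagging):  {lagging_indicator(100_000, 88_000):.0%}")
```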

9) Describe your experience explaining complex data to business users. What do you focus on?

The most important aspect of explaining complex data is to describe it in terms the audience will understand. No one cares how hard the analysis was; they just want to know the business impact and value, and how it can be applied.

Data visualization needs to take this into account and explain the data to the correct audience – not everyone consumes data using the same techniques. Some people will only respond to spreadsheets, while others would like nice graphics… Still others want business simulations and augmented reality techniques used whenever possible. If I were to have 3 rules related to explaining technical topics (a small illustration follows the list), they would be:

  1. Answer the question asked
  2. Display it in a way the audience will understand (use their terminology)
  3. Use the right data
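
As a small illustration of rule 2, the sketch below presents the same (made-up) result two ways, one for the spreadsheet audience and one for the chart audience; the data and labels are hypothetical:

```python
# Illustrative sketch: presenting the same (made-up) result two ways,
# because not everyone consumes data using the same techniques.
import matplotlib.pyplot as plt

savings_by_quarter = {"Q1": 12_000, "Q2": 18_500, "Q3": 24_300, "Q4": 31_200}

# For the spreadsheet audience: a plain table.
for quarter, savings in savings_by_quarter.items():
    print(f"{quarter}\t${savings:,}")

# For the visual audience: a simple bar chart of the same data.
plt.bar(savings_by_quarter.keys(), savings_by_quarter.values())
plt.title("Process savings by quarter (illustrative data)")
plt.ylabel("Savings ($)")
plt.show()
```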

At the end of that exchange I wasn’t sure if I’d just provided some free consulting, gone through a job interview, or simply been chewing the fat with another technologist. Thoughts???

Is AI a distraction???

I was recently in an exchange with a respected industry analyst who stated that AI is not living up to its hype – they called AI ‘incremental’ and a ‘distraction’. This caught me a bit by surprise, since my view is that there are more capabilities and approaches available to AI practitioners than ever before. It may be the business and tech decision makers’ approach that is at fault.

It got me thinking about the differences between ‘small’ AI efforts and enterprise AI efforts. Small AI efforts are those innovative, quick ones that can prove a point and deliver value and understanding in the near term. Big AI (and automation) efforts are those associated with ERP and other enterprise systems that take years to implement. These are likely the kinds of efforts the analyst was involved with.

Many of the newer approaches enable organizations to use the abundance of capabilities available to mine value out of the existing data that lies fallow in most organizations. These technologies can be tried out and applied in short sprints whose success criteria can be well defined. If, along the way, the answers are not quite what was expected, adjustments can be made, assumptions changed, and value can still be generated. The key is going into these projects with expectations while staying flexible enough to change based on what is known rather than just supposition.

These approaches can be implemented across the range of business processes (e.g., budgeting, billing, support) as well as information sources (IoT, existing ERP or CRM). They can automate the mundane and free up high-value personnel to focus on generating even greater value and better service. Many times these focused issues are unique to an organization or industry and provide immediate return. This is generally not the focus of enterprise IT solutions.

This may be the reason some senior IT leaders are disillusioned with the progress of AI in their enterprise. The smaller, high-value projects’ contributions are rounding error at their scope. They are looking for the big hit, which by its very nature will be a compromise and unlikely to really move the ball in any definitive way – everyone deploying the same enterprise solution will have access to the same tools…

My advice to those leaders disenchanted with the return from AI is to shift their focus. Get a small team out there experimenting with ‘the possible’. Give them clear problems (and expectations) but allow them the flexibility to bring in some new tools and approaches. Make them show progress, but be flexible enough to shift expectations when their results point in a different direction – based on facts and results. There is the possibility of fundamentally different levels of cost and value generation.

The keys are:

1) Think about the large problems but act on those that can be validated and addressed quickly – invest in the small wins

2) Have expectations that can be quantified and focus on value – projects are not a ‘science fair’ or a strategic campaign, just a part of the business

3) Be flexible and adjust as insight is developed – just because you want the answer to be ‘yes’ doesn’t mean it will be, but any answer is valuable when compared to a guess

Sure, this approach may be ‘incremental’ (to start) but it should make up for that with momentum and results. If the approach is based on expectations, value generation and is done right, it should never be a ‘distraction’.

The definition of done

I was looking through some material the other day and a statement came up that I found intriguing: “What is your team’s definition of done?”

Since I believe in the concept of “good enough” rather than spending the time and effort to chase perfection (which we all know is probably not attainable), this idea of a formal team definition of when to “put a fork in it” and call it done made me think: how often do we have a formal structure and definition of being done? Is it flexible?

I’ve talked with people who were doing in-house agile development and their definition of done always seemed to be when they ran out of time or money. Rarely was it when they ran out of requirements. Is that OK?

For programmers who do have a defined set of requirements and are coding for money, they’re done when the requirements are all met, the code compiles, everything is tested, the code is installed in production, and the customer has signed off. That is getting pretty close to perfection.
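
One way to make such a definition explicit is to write it down as a checkable list; this sketch is only an illustration, and the criteria shown are examples rather than a prescribed standard:

```python
# Sketch: making a team's definition of done explicit and checkable.
# The criteria below are examples; each team should define its own.
DEFINITION_OF_DONE = {
    "requirements met": True,
    "code compiles": True,
    "tests pass": True,
    "deployed to production": True,
    "customer signed off": False,
}

def is_done(criteria: dict) -> bool:
    unmet = [name for name, met in criteria.items() if not met]
    if unmet:
        print("Not done; still open:", ", ".join(unmet))
        return False
    return True

is_done(DEFINITION_OF_DONE)  # -> Not done; still open: customer signed off
```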

Can each of the roles in a project have their own definition of done? If all those individual definitions are met, is the project done? I can think of a number of situations where the architects completed all their work products but the results were never used effectively. The flag was raised but no one saluted, so done was declared too early.

It just seems like something we may all want to take a moment to think about – when I think I’m done, are the other stakeholders satisfied?

Voice recognition project completed at UTD

Every semester I try to work with some students at UTD by facilitating a ‘capstone’ project. It’s another dimension of my support for STEM education. Yesterday, they gave their presentation to their professor and class.

This semester the project was creating an Android-based speech recognition solution to facilitate a Voice-based Inspection and Evaluation Framework. We shied away from using Google’s speech recognition, since we wanted off-line capability as well as enhanced security/privacy. Addressing this expectation was one of the first issues the team had to conquer.

They were able to identify and implement an open source library providing the speech recognition (PocketSphinx). They also used Android.Speech.tts for text-to-speech interaction with the user.

The team created a visual programming environment to graphically define a flowchart and export that to an XML file that the mobile device was able to use to facilitate the inspection process. The mobile application could have a number of these stored for later use.

The end product was able to handle a range of speech recognition needs (a sketch of how such dispatch might work follows the list):

  • Yes/no
  • Answer from a list of valid responses (e.g., States)
  • Answer with a number (range checked)
  • Free form sound capture
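
To give a feel for how such a flowchart-driven inspection might work, here is a rough sketch in Python; the XML schema is invented for illustration (the students’ actual format was their own), and the speech recognizer is stubbed out with keyboard input:

```python
# Illustrative sketch of driving an inspection from an exported flowchart.
# The XML schema here is invented for illustration; the student project's
# actual format was their own. Speech recognition is stubbed out.
import xml.etree.ElementTree as ET

FLOWCHART = """
<inspection>
  <step id="1" type="yesno"    prompt="Is the guard rail intact?"/>
  <step id="2" type="list"     prompt="Which state?" options="TX,OK,NM"/>
  <step id="3" type="number"   prompt="Tire pressure?" min="20" max="60"/>
  <step id="4" type="freeform" prompt="Describe any other issues."/>
</inspection>
"""

def recognize(prompt: str) -> str:
    # Stand-in for the on-device recognizer (PocketSphinx on Android).
    return input(f"{prompt} ")

def run_step(step: ET.Element) -> str:
    kind, prompt = step.get("type"), step.get("prompt")
    answer = recognize(prompt)
    if kind == "yesno" and answer.lower() not in ("yes", "no"):
        raise ValueError("expected yes/no")
    if kind == "list" and answer not in step.get("options").split(","):
        raise ValueError("answer not in valid responses")
    if kind == "number" and not (
        float(step.get("min")) <= float(answer) <= float(step.get("max"))
    ):
        raise ValueError("number out of range")
    return answer

for step in ET.fromstring(FLOWCHART):
    print(step.get("id"), "->", run_step(step))
```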

Overall, I was very impressed with what these students were able to accomplish during the semester and with the quality of the software life cycle work products they produced. Naturally, since we didn’t know exactly what they were going to be able to accomplish, they used a modified agile approach – they still had to produce the work products required for the class on a predefined timetable. We incorporated the concept of designing specific sprints around producing those work products, as well as the typical need to define, document, and validate requirements.

I started the project while working at HP and Dave Gibson and Cliff Wilke helped facilitate it to the end (they are still with HP).

Thoughts from a discussion about architecture

Yesterday, I had a long discussion with Stephen Heffner, the creator of XTRAN (and president of XTRAN, LLC). XTRAN is a meta-transformation tool that excels at automating software work – sorry, that is the best description I could come up with for a tool that can create solutions to analyze and translate between software languages, data structures, and even project work products. When you first read about its capabilities it sounds like magic, but there are numerous working examples available so you can see its usefulness for yourself.

He and I were talking about the issues and merits of enterprise architecture (EA). He wrote a piece titled “What is Enterprise Architecture?”, where he describes his views on the EA function. Stephen identifies three major impediments to effective EA:

  • Conflating EA with IT
  • Aligning EA with just transformation
  • Thinking the EA is responsible for strategy

We definitely agreed that the prevailing perspective in most businesses – that the EA function is embedded within IT – does not align well with the strategic needs of the business. The role is much broader than IT and needs to embrace the broader business issues that IT should support.

I had a bit of a problem with aligning EA with just transformation, but that may be a difference in context. One of the real killers of EA for me is a focus on work products and not outcomes. The EA should always focus on greater flexibility for the business, providing rigor without increasing rigidity. Rigidity is aligned with death – hence the term rigor mortis. To me, the EA function always has a transformational element.

The final point was that EA supports strategy, and the business needs to have a strategy. The EA is not the CEO, and the CEO is probably not an EA. The EA does need to understand the current state of the business environment, though. I was talking with an analyst one day who told me that an EA needs to focus on the vision and shouldn’t worry about a current situational assessment. My response was, “If you don’t know where you are, you’ll not be able to define a journey to where you need to be.” Stephen agreed with that perspective.

My view is that there are 4 main elements of an effective architecture:

  • People – Architecture lives at the intersection of business and technology. People live at the focus of that intersection, not technology. Architectural efforts should focus on the effect on the people involved. What needs to happen? How will it be measured? These factors can be used to make course corrections along the way, once you realize an architecture is never finished. If it doesn’t deliver as expected, change it. Make the whole activity transparent, so that people can buy in instead of throwing stones. My view is that if I am talking with someone about architecture and they don’t see its value, it is my fault.
  • Continuous change – When you begin to think of the business as dynamic and not static, the relationship with the real world becomes clear. In nature, those species that are flexible and adjust to meet the needs of the environment can thrive – those that can’t adjust die off.
    Architectures need to have standards, but they also need to recognize where compromises can be made. Take shadow IT, for example: it is better to understand and facilitate its effective use (through architecture) than to try to stand in the way and get run over.
    In a similar way, the link between agile projects and the overall architecture needs to be recursive, building upon the understanding that develops. The architecture does not stand alone.
    Architecture development can also have short sprints of understanding, documenting and standardizing the technical innovations that take place, while minimizing technical debt.
  • Focus on business-goal-based deliverables – Over the years, I’ve seen too many architectural efforts end up as shelf-ware. In the case of architecture, just-in-time is probably the most effective and accurate approach, since the technology and business are changing continuously. Most organizations would just laugh at a 5-year technology strategy today; after all, few of the technical trends that far out are predictable. So I don’t mean you shouldn’t frame out a high-level one – just ‘don’t believe your own press’.
    If the architecture work products can be automated or at least integrated with the tooling used in the enterprise, it will be more accurate and useful. This was actually a concept that Stephen and I discussed in depth. The concept of machine and human readable work products should be part of any agile architecture approach.
    From a goal-based perspective, the architecture needs to understand at a fundamental level what is scarce for the organization and what is abundant and then maximize the value generated from what is scarce – or at least unique to the organization.
  • Good enough – Don’t let the perfect architecture stand in the way of one that is ‘good enough’ for today. All too often I’ve seen architecture analysis go down two or three levels of detail, and then people say, “If 2 is good, let’s go to 5 levels of depth.” Unfortunately, with each level of detail the cost to develop and maintain goes up by an order of magnitude – know when to stop. I’ve never seen a single instance where these highly detailed architecture definitions were maintained for more than 2 or 3 years, since they can cost as much to maintain as they took to create. Few organizations have the intestinal fortitude to keep that up for long.
    The goal should be functional use, not a focus on perfection. Architecting the simplest solution that works today is generally best. If you architect a solution for something that will be needed 5 years out, either the underlying business need or the technical capabilities will change before it is actually used.

None of this is really revolutionary. Good architects have been taking an approach like this for years. It is just easy to interpret some of the architecture process materials from an ivory tower (or IT only) perspective.

Contemplating a more agile architecture

Last year, I did a presentation on the need for a more agile approach to architecture, where the whole approach needs to become more business-centric and less about the underlying technology. Concepts like:

  • Time to action
  • Value vs. expense
  • Transparency
  • Visibility
  • Experimentation and continuous change

are at the core of this discussion, along with the need to inform so that the business feels enabled to take action. This perspective reinforces the changes needed for architecture in a world of continuous automation change.

In that presentation, I talked about what needed to change but not necessarily how organizations should go about making the change. Like any good architectural approach, there needs to be some level of current situation analysis. What’s the goal? What do we currently have? How well does it support that goal?

But there also needs to be some real questioning of the status quo. Why does the process work that way? What role do those involved play? What new tools and services are available?

I posted on the diginomica blog the other day that there is a shift underway: all products are turning into platforms for deeper relationships. This can only happen if you question where the business generates value. There is more to enterprise architecture (e.g., TOGAF) than most have traditionally thought.

Just like with agile development approaches, there will always be a bit of waterfall in an architecture approach, but at the core needs to be a close relationship with the business – it’s the business’s architecture, after all. Part of the governance and focus needs to be on increasing flexibility: keeping the rigor without the rigidity.