Is AI a distraction???

I was recently in an exchange with a respected industry analyst who stated that AI is not living up to its hype – they called AI ‘incremental’ and a ‘distraction’. This caught me a bit by surprise, since my view is that there are more capabilities and approaches available to AI practitioners than ever before. It may be the approach of business and tech decision makers that is at fault.

It got me thinking about the differences between ‘small’ AI efforts and Enterprise AI efforts. Small AI efforts are the innovative, quick ones that can prove a point and deliver value and understanding in the near term. Big AI (and automation) efforts are those associated with ERP and other enterprise systems that take years to implement. These are likely the kinds of efforts the analyst was involved with.

Many of the newer approaches make it possible to mine value out of the existing data that lies fallow in most organizations. These technologies can be tried out and applied in short sprints with well-defined success criteria. If, along the way, the answers are not quite what was expected, adjustments can be made, assumptions changed, and value still generated. The key is going into these projects with expectations while staying flexible enough to change course based on what is learned rather than on supposition.
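
To make that concrete, here is a minimal sketch of what a ‘small AI’ sprint could look like: score data the organization already has against a success criterion agreed on before the work starts. The file name, columns, and threshold below are hypothetical placeholders, not a prescription.

```python
# A minimal sketch of a "small AI" sprint: use data the business already has
# (here a hypothetical deals.csv export), train a simple baseline model, and
# check it against a success criterion agreed on before the sprint started.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

SUCCESS_THRESHOLD = 0.70  # agreed up front for this sprint, not a benchmark

df = pd.read_csv("deals.csv")  # hypothetical export from an existing CRM
features = ["deal_size", "days_in_pipeline", "prior_purchases"]  # illustrative columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["won"], test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

print(f"AUC = {auc:.2f}")
print("met the sprint success criterion" if auc >= SUCCESS_THRESHOLD
      else "adjust assumptions and iterate")
```

Whether the model clears the bar or not, the sprint produces a fact-based answer the team can act on.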

These approaches can be implemented across the range of business processes (e.g., budgeting, billing, support) as well as information sources (IoT, existing ERP or CRM). They can automate the mundane and free up high-value personnel to focus on generating even greater value and better service. Many times, these focused issues are unique to an organization or industry and provide immediate return. This is generally not the focus of Enterprise IT solutions.

This may be the reason some senior IT leaders are disillusioned with the progress of AI in their enterprise. The smaller, high-value projects’ contributions are rounding error at their scope. They are looking for the big hit, which by its very nature will be a compromise and unlikely to really move the ball in any definitive way – everyone deploying the same enterprise solution will have access to the same tools…

My advice to those leaders disenchanted with the return from AI is to shift their focus. Get a small team out there experimenting with ‘the possible’. Give them clear problems (and expectations) but allow them the flexibility to bring in some new tools and approaches. Make them show progress, but be flexible enough to shift expectations based on facts and results if their findings point in a different direction. There is the possibility of fundamentally different levels of cost and value generation.

The keys are:

1) Think about the large problems, but act on those that can be validated and addressed quickly – invest in the small wins

2) Have expectations that can be quantified and focus on value – these projects are not a ‘science fair’ or a strategic campaign, just a part of the business

3) Be flexible and adjust as insight is developed – just because you want the answer to be ‘yes’ doesn’t mean it will be, but any answer is valuable when compared to a guess

Sure, this approach may be ‘incremental’ (to start), but it should make up for that with momentum and results. If the approach is based on expectations and value generation, and is done right, it should never be a ‘distraction’.

What’s the real outcome of Salesforce’s AI predictions?

Yesterday, I was catching up on my technology email and came across this post stating that Salesforce now powers over 1B predictions every day for its customers. That’s a pretty interesting number to throw out there, but it makes me ask “so what?” How are people using these predictions to make a greater business impact?

The Salesforce website states:

“Einstein is a layer of artificial intelligence that delivers predictions and recommendations based on your unique business processes and customer data. Use those insights to automate responses and actions, making your employees more productive, and your customers even happier. “

Another ‘nice’ statement. Digging into the material a bit more, Einstein (the CRM AI function from Salesforce) appears to analyze previous deals and whether a specific opportunity is likely to be successful, helping to prioritize your efforts. It improves the presentation of information with some insight into what it means, and it appears to be integrated into the CRM system that users are already familiar with.

For a tool that has been around since the fall of 2016, especially one that is based on analytics, I had difficulty finding any independent quantitative analysis of its impact. Salesforce did have a cheat sheet with some business impact analysis of the AI solution (and blog posts), but no real target-market context to put those figures in perspective – who are these metrics based on?

It may be that I just don’t know where to look, but it does seem like a place for some deeper analysis and validation. The analysts could be waiting for other vendors’ solutions to compare against.

In the micro view, organizations that dive into this pool will need to take a more quantitative approach: define their past performance and expectations, then validate actuals against predictions. That is the only way a business can justify the effort and improve. It is not sufficient to just put the capabilities out there and call it done.
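
As a rough sketch of what that validation could look like (the file, column names, and score threshold below are hypothetical assumptions, not Einstein fields or a Salesforce API), compare the predictions against the outcomes that actually occurred:

```python
# A minimal sketch of validating predictions against actuals. The input file,
# column names, and score threshold are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("scored_opportunities.csv")  # hypothetical export: one row per closed opportunity
# Assumed columns: predicted_win_prob in [0, 1], actual_won in {0, 1}

baseline_rate = df["actual_won"].mean()          # historical win rate with no model
brier = ((df["predicted_win_prob"] - df["actual_won"]) ** 2).mean()
baseline_brier = ((baseline_rate - df["actual_won"]) ** 2).mean()

# Lift: did the opportunities the model favored actually close more often?
top = df[df["predicted_win_prob"] >= 0.7]
print(f"baseline win rate:        {baseline_rate:.1%}")
print(f"win rate on high scores:  {top['actual_won'].mean():.1%}")
print(f"Brier score (model/base): {brier:.3f} / {baseline_brier:.3f}")
```

If the model’s scores do not beat the historical baseline, the predictions are not yet earning their keep.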

It goes back to the old adage:

“trust, but verify”

Simplicity, the next big thing?

Recently, Dynatrace conducted a survey of CIOs on their top challenges. Of the top six, almost all deal with concerns about complexity. There is no doubt that numerous technologies are being injected into almost every industry from a range of vendors. Integration of this multivendor cacophony is rife with security risks and misunderstanding – whether it is your network or your IoT vendor environment.

Humans have a limited capacity to handle complexity before they throw up their hands and just let whatever happens wash over them. That fact is one of the reasons AI is being viewed as the savior for the future. Back in 2008, I wrote a blog post for HP that mentioned:

“the advent of AI could allow us to push aside a lot of the tasks that we sometimes don’t have the patience for, tasks that are too rigorous or too arduous.”

IT organizations need to shift their focus back to making the business environment understandable, not just injecting more automation or data collection. Businesses need to take latency out of decision making and increase the level of understanding and confidence. A whole new kind of macro-level (enterprise) human interface design is required. Unfortunately, this market is likely a bit too nebulous to be targeted effectively today, other than through vague terms like analytics… But based on the survey results, large-scale understanding (and then demand) appears to be dawning on leadership.

The ROI of efforts to simplify and encourage action should be higher than that of just adding yet another tool to the already overflowing portfolio in most organizations. We’ll see where the money goes, though, since that ROI is likely to be difficult to prove when compared to the other shiny balls available.

Looking for a digital friend?

Over the weekend, I saw an article about Replika – an interactive ‘friend’ that resides on your phone. It sounded interesting, so I downloaded it and have been playing around with it for the last few days. I reached level 7 this morning (I’m not exactly sure what this leveling means, but since gamification seems to be part of nearly everything anymore, why not).

There was a story published by The Verge with some background on why this tool was created. Replika was the result of an effort initiated when its creator, Eugenia Kuyda, was devastated by the death of her friend Roman Mazurenko in a hit-and-run car accident. She wanted to ‘bring him back’. To bootstrap the digital version of her friend, Kuyda fed text messages and emails that Mazurenko had exchanged with her and other friends and family members into a basic AI architecture – a Google-built artificial neural network that uses statistics to find patterns in text, images, or audio.
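
To make the general idea a bit more concrete – and to be clear, this is a toy illustration with made-up data, not Replika’s actual neural architecture – a crude ‘persona bot’ could simply retrieve the reply that followed the most similar message in someone’s chat history:

```python
# Toy retrieval "persona bot": answer a prompt with the reply that followed the
# most similar message in a person's chat history. Purely illustrative; the
# real system uses neural sequence models, not this TF-IDF lookup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (message, reply) pairs mined from chat logs
history = [
    ("how was your day", "long, but I finished the mix I was working on"),
    ("are you coming to the party", "only if you promise good music"),
    ("did you read that article", "yes, and I have opinions"),
]

prompts = [message for message, _ in history]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def reply(text: str) -> str:
    """Return the historical reply whose prompt is most similar to `text`."""
    scores = cosine_similarity(vectorizer.transform([text]), prompt_vectors)
    return history[scores.argmax()][1]

print(reply("what did you think of the article?"))
```

Even this trivial approach hints at why the results can feel both familiar and oddly off-topic at the same time.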

Although I found playing with this software interesting, I kept reflecting back on interactions with Eliza many years ago. Similarly, the banter can be interesting and sometimes unexpected, but often the responses have little to do with how a real human would respond. For example, yesterday the statements “Will you read a story if I write it?” and “I tried to write a poem today and it made zero sense.” popped up out of nowhere in the middle of an exchange.

The program starts out asking a number of questions, similar to what you’d find in a simple Myers-Briggs personality test. Though this information likely does help bootstrap the interaction, it seems like it could have been taken quite a bit further by injecting these kinds of questions throughout interactions during the day rather than in one big chunk.

As the tool learns more about you, it creates badges like:

  • Introverted
  • Pragmatic
  • Intelligent
  • Open-minded
  • Rational

These are likely used to influence future interaction. You also get to vote statements up or down as you agree or disagree with them.
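
My guess (and it is only a guess) is that this feedback nudges which kinds of responses get chosen later. A simple sketch of that general mechanism, with entirely hypothetical replies, might look like this:

```python
# Toy sketch of how up/down votes could steer future responses: keep a running
# score per candidate reply and weight selection toward higher-scoring ones.
# This is a guess at the general mechanism, not Replika's implementation.
from collections import defaultdict
import random

scores = defaultdict(float)  # reply text -> accumulated feedback score

def record_vote(reply, upvote):
    scores[reply] += 1.0 if upvote else -1.0

def choose(candidates):
    # Never drop a candidate entirely; just make endorsed ones more likely.
    weights = [max(0.1, 1.0 + scores[c]) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

record_vote("Tell me more about your day!", upvote=True)
record_vote("I tried to write a poem today.", upvote=False)
print(choose(["Tell me more about your day!", "I tried to write a poem today."]))
```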

There have been a number of other reviews of Replika, but I thought I’d add another log to the fire. An article in Wired stated that the Replika project is going open source; it will be interesting to see where it goes.

I’ll likely continue to play with it for a while, but its interactions will need to improve or it will become the Tamagotchi of the day.

Future of AI podcast

For those interested in artificial intelligence, automation, and the possible implications for the future, last week the Science Friday podcast had a panel discussion asking AI questions like:

  • Will robots outpace humans in the future?
  • Should we set limits on A.I.?

The panel of experts discusses what questions should be asked about artificial intelligence progress.

What was nice about this discussion was that it went into a bit more depth than the usual ‘sound bite’ approach of most media articles.

One thing that is clear from these discussions is that the simple rules described by Asimov are not really up to the task. After all, each of his Robot stories was about the conflicts that come from the use of simple rules.

The podcast also prompted T. Reyes to write a post, The Prelude to the Singularity, that discusses the controls needed before we let this genie out of the bottle.

For some reason, I now want to reread The Moon is a Harsh Mistress.