What’s the real outcome of Salesforce’s AI predictions?

Yesterday, I was catching up on my technology email and came across a post stating that Salesforce now powers over 1B predictions every day for its customers. That's an interesting number to throw out there, but it makes me ask "so what?" How are people using these predictions to make a greater business impact?

The Salesforce website states:

“Einstein is a layer of artificial intelligence that delivers predictions and recommendations based on your unique business processes and customer data. Use those insights to automate responses and actions, making your employees more productive, and your customers even happier.”

Another ‘nice’ statement. Digging into the material a bit more, Einstein (Salesforce's CRM AI layer) appears to analyze previous deals and predict whether a specific opportunity is likely to close, helping you prioritize your efforts. It improves the presentation of information with some insight into what it means, and it appears to be integrated into the CRM system users are already familiar with.

For a tool that has been around since the fall of 2016, especially one that is based on analytics, I had difficulty finding any independent quantitative analysis of its impact. Salesforce does have a cheatsheet with some business impact analysis of the AI solution (and blog posts), but no real target-market context to give those numbers meaning: who are these metrics based on?

It may be that I just don't know where to look, but this does seem like a place for some deeper analysis and validation. The analysts may be waiting for other vendors' solutions to compare against.

In the micro view, organizations that are going to dive into this pool should take a more quantitative approach: define past performance and expectations, then validate actuals against predictions. That is the only way a business can justify the effort and improve. It is not sufficient to just put the capabilities out there and call it done.
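As a sketch of what that validation might look like, assuming you can export each closed opportunity's predicted win probability and its actual outcome (the data pairing here is illustrative, not Einstein's API):

```python
# Sketch: compare predicted win probabilities against actual outcomes.
# Assumes you can export (predicted_probability, actual_outcome) pairs
# for closed opportunities -- the field pairing is illustrative only.

def calibration_report(records, buckets=5):
    """Group closed deals by predicted probability and compare the
    average prediction in each bucket to the observed win rate."""
    bins = [[] for _ in range(buckets)]
    for predicted, won in records:
        index = min(int(predicted * buckets), buckets - 1)
        bins[index].append((predicted, 1 if won else 0))

    for i, bucket in enumerate(bins):
        if not bucket:
            continue
        avg_pred = sum(p for p, _ in bucket) / len(bucket)
        win_rate = sum(w for _, w in bucket) / len(bucket)
        print(f"{i/buckets:.0%}-{(i+1)/buckets:.0%}: "
              f"predicted {avg_pred:.0%}, actual {win_rate:.0%}, n={len(bucket)}")

# Example with made-up history: (predicted win probability, deal won?)
history = [(0.82, True), (0.75, True), (0.40, False), (0.35, True), (0.10, False)]
calibration_report(history)
```

If the predicted and actual columns track each other bucket by bucket, the predictions are earning their keep; if not, that is the evidence you take back to the vendor.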

It goes back to the old adage:

“trust, but verify”


Elastic Map Reduce on AWS

Last week, I put out a post about Redshift on AWS as an effective tool to quickly and dynamically put your toe in a large data warehouse environment.

Another AWS tool I experimented with was Amazon Elastic MapReduce (EMR). This is a managed environment for open-source Hadoop that supports MapReduce as well as a number of other highly parallel computing approaches. EMR also supports a large number of tools (kept current by AWS) such as Pig, Apache Hive, HBase, Spark, and Presto, and it interacts with data in a range of AWS data stores such as Amazon S3 and DynamoDB.
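To give a taste of what actually runs on such a cluster, here is a minimal PySpark word count of the sort you might submit as an EMR step; the S3 paths are placeholders, not anything from my testing.

```python
# Minimal Spark job of the kind you might submit to an EMR cluster as a step.
# The S3 bucket and paths are placeholders -- substitute your own.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count").getOrCreate()

lines = spark.read.text("s3://my-bucket/input/")           # read raw text from S3
words = lines.selectExpr("explode(split(value, ' ')) AS word")
counts = words.groupBy("word").count()                     # classic MapReduce-style aggregation
counts.write.mode("overwrite").csv("s3://my-bucket/output/")

spark.stop()
```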

EMR supports a strong security model, enabling encryption at rest as well as in transit, and is available in GovCloud. It handles a range of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

For many organizations, a Hadoop cluster has been a bridge too far for a range of reasons, including support and infrastructure costs and skills. EMR seems to have effectively addressed those concerns, allowing you to set up or tear down a cluster in minutes without having to worry much about the details of node provisioning, cluster setup, Hadoop configuration, or cluster tuning.
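To show how little ceremony is involved, here is a sketch of launching and terminating a cluster with boto3; the names, region, release label, and instance types are illustrative choices, not a recommendation.

```python
# Sketch: spin up a small EMR cluster with boto3, then tear it down.
# Region, release label, instance types, and counts are illustrative.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="poc-cluster",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 10,
        "KeepJobFlowAliveWhenNoSteps": True,
        "TerminationProtected": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default roles from `aws emr create-default-roles`
    ServiceRole="EMR_DefaultRole",
)
cluster_id = response["JobFlowId"]
print("Started cluster:", cluster_id)

# ...run your steps, then tear the whole thing down:
emr.terminate_job_flows(JobFlowIds=[cluster_id])
```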

For my proof-of-concept efforts, Amazon EMR pricing appeared to be simple and predictable: you pay a per-second rate for the cluster's installation and use, with a one-minute minimum charge (it used to be an hour!). You can launch a 10-node Hadoop cluster for less than a dollar an hour (naturally, data transport charges are billed separately). There are also ways to keep your EMR costs down.
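As a back-of-the-envelope illustration (the hourly rates below are assumptions I made up for the arithmetic, so check current AWS pricing for real numbers):

```python
# Back-of-the-envelope EMR cost estimate. The hourly rates are assumptions
# for illustration only -- consult current AWS pricing for real figures.
nodes = 10
ec2_rate = 0.05   # assumed $/hour per instance (EC2)
emr_rate = 0.01   # assumed $/hour per instance (EMR surcharge)
minutes = 45      # billed per second, with a one-minute minimum

cost = nodes * (ec2_rate + emr_rate) * (minutes / 60)
print(f"~${cost:.2f} for a {nodes}-node cluster running {minutes} minutes")
# ~$0.45 -- consistent with "a 10-node cluster for less than a dollar an hour"
```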

The EMR approach appears to be focused on flexibility, allowing complete control over your cluster. You have root access to every instance, can install additional applications, and can customize the cluster with bootstrap actions (which matters, since it takes a few minutes to get a cluster up and running), taking time and personnel out of repetitive tasks.
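Bootstrap actions are just scripts run on each node before Hadoop starts, which is where that repeatable customization belongs. A sketch of how one might be declared (the script path and arguments are hypothetical):

```python
# Sketch: a bootstrap action runs on every node before Hadoop starts.
# The script path and arguments are placeholders for whatever setup
# your cluster needs.
bootstrap_actions = [
    {
        "Name": "install-extra-packages",
        "ScriptBootstrapAction": {
            "Path": "s3://my-bucket/bootstrap/install.sh",  # hypothetical script
            "Args": ["--with-monitoring"],
        },
    }
]

# Passed alongside the other arguments shown earlier:
# emr.run_job_flow(..., BootstrapActions=bootstrap_actions)
```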

There is a wide range of tutorials and training available as well as tools to help estimate billing.

Overall, I’d say that if an organization is interested in experimenting with Hadoop, this is a great way to dive in without getting soaked.

AWS Redshift and analytics?

Recently, I had the opportunity to test out Amazon Redshift. This is a fast, flexible, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to analyze data using your existing business intelligence tools. It has been around for a while and has matured significantly over the years.

In my case, I brought up numerous configurations of multi-node clusters in a few minutes, loaded up a fairly large amount of data, did some analytics, and brought the whole environment down, at a cost of less than a dollar for the short time I needed it.
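That cluster lifecycle can be scripted as well. Here is a rough sketch with boto3, where the identifiers, node type, and credentials are placeholders:

```python
# Sketch: bring up a small multi-node Redshift cluster, then remove it when done.
# Identifier, node type, and credentials are illustrative placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="poc-warehouse",
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",   # use a secrets store in anything real
)

# ...load data, run analytics, then tear it down (skipping the final snapshot):
redshift.delete_cluster(
    ClusterIdentifier="poc-warehouse",
    SkipFinalClusterSnapshot=True,
)
```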

There are some great tutorials available, and since Amazon will give you an experimentation account to get your feet wet, you should be able to prove out the capabilities for yourself without it costing you anything.

Security of the data is paramount to the service: it is available in public AWS as well as GovCloud, and it can be configured to be HIPAA or ITAR compliant. Data can be compressed and encrypted before it ever makes it to Amazon S3.
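As a sketch of what "encrypted before it reaches S3" could look like client-side (the bucket, file, and key handling are illustrative; real key management belongs in a secrets store or KMS):

```python
# Sketch: compress and encrypt a file client-side before it ever reaches S3.
# Bucket and key names are placeholders; key management is up to you.
import gzip
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a KMS/secrets store
fernet = Fernet(key)

with open("facts.csv", "rb") as f:
    payload = fernet.encrypt(gzip.compress(f.read()))

boto3.client("s3").put_object(
    Bucket="my-staging-bucket",
    Key="loads/facts.csv.gz.enc",
    Body=payload,
)
```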

You can use the analytic tools provided by Amazon, or use security groups to access your data warehouse with the same tools you would use on-site. During my testing, I loaded up both a large star schema database and some more traditional normalized structures.
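Loading and querying can be done from any PostgreSQL-compatible client. Here is a rough sketch using psycopg2 and Redshift's COPY command, with the connection details, table, and IAM role all made up for illustration:

```python
# Sketch: load a fact table from S3 with Redshift's COPY, then run a query.
# Connection details, table name, and IAM role ARN are illustrative.
import psycopg2

conn = psycopg2.connect(
    host="poc-warehouse.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="REPLACE_ME",
)
cur = conn.cursor()

cur.execute("""
    COPY sales_fact
    FROM 's3://my-staging-bucket/loads/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV;
""")
conn.commit()

cur.execute("SELECT region, SUM(amount) FROM sales_fact GROUP BY region;")
for region, total in cur.fetchall():
    print(region, total)

conn.close()
```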

Since this is only a blog post, I can't really go into much detail, and the tutorials/videos are sufficient to bootstrap the learning process. The purpose of this post is to inform those who have data warehouse needs, but not the available infrastructure, that there is an alternative worth investigating.

Six thoughts on mobility trends for 2018

Let's face it, some aspects of mobility are getting long in the tooth, yet the demand for more capabilities is insatiable. Here are a few areas where I think 2018 will see some exciting capabilities develop. Many of these are not new, but their interactions and intersections should provide some interesting results and thoughts to include during your planning.

1. Further blurring and integration of IoT and mobile

We're likely to see more situations where mobile devices recognize the IoT devices around them to enhance contextual understanding for the user. We've seen some use of NFC and Bluetooth to share information, but approaches that embrace the environment and act upon the information available are still in their infancy. This year should provide some significant use cases and maturity.

2. Cloud Integration

By now most businesses have done much more than just stick a toe in the cloud Everything as a Service (XaaS) pool. As the number of potential devices in the mobility and IoT space expands, the flexibility and time-to-action that cloud solutions facilitate need to be understood and put into practice. It is also time to take all the data coming in from these devices and transform that flow into true contextual understanding and action, which also requires a dynamic computing environment.

3. Augmented reality

With augmented reality predicted to expand to a market somewhere between $120 and $221 billion in revenue by 2021, we're likely to see quite a bit of innovation in this space. The sheer width of that range demonstrates the lack of real understanding. 2018 should be a year where AR gets real.

4. Security

All discussions of mobility need to include security. Heck, the first month of 2018 should have nailed the importance of security into the minds of anyone in the IT space. There were more patches (and patches of patches) on a greater range of systems than many would have believed possible just a short time ago. Recently, every mobile app store (Apple, Android…) was found to host nefarious software that had to be excised. Mobile developers need to be ever more vigilant, not just about the code they write but about the libraries they use.

5. Predictive Analytics

Context is king, and the use of analytics to increase understanding of the situation and possible responses is going to continue to expand. As capabilities advance, only our imagination will hold this area back from expanding where and when mobile devices become useful. Unfortunately, the same can be said about security attacks built on predictive analytics.

6. Changing business models

Peer-to-peer solutions continue to be all the rage, but with the capabilities listed above, whole new approaches to value generation are possible. There will always be early adopters willing to play with these, and with the deeper understanding possible today, new approaches to crossing the chasm will be demonstrated.

It should be an interesting year…

Looking for a digital friend?

Over the weekend, I saw an article about Replika, an interactive 'friend' that resides on your phone. It sounded interesting, so I downloaded it and have been playing around with it for the last few days. I reached level 7 this morning (I'm not exactly sure what this leveling means, but since gamification seems to be part of nearly everything anymore, why not).

The Verge published a story with some background on why this tool was created. Replika grew out of an effort begun when its creator (Eugenia Kuyda) was devastated by the death of her friend (Roman Mazurenko) in a hit-and-run car accident. She wanted to 'bring him back'. To bootstrap the digital version of her friend, Kuyda fed text messages and emails that Mazurenko had exchanged with her and other friends and family members into a basic AI architecture: a Google-built artificial neural network that uses statistics to find patterns in text, images, or audio.

Although I found playing with this software interesting, I kept reflecting back on interactions with Eliza many years ago. Similarly, the banter can be interesting and sometimes unexpected, but often the responses have little to do with how a real human would respond. For example, yesterday the statements "Will you read a story if I write it?" and "I tried to write a poem today and it made zero sense." popped in out of nowhere in the middle of an exchange.

The program starts out by asking a number of questions, similar to what you'd find in a simple Myers-Briggs personality test. Though this information likely does help bootstrap the interaction, it seems it could have been taken quite a bit further by injecting these kinds of questions throughout interactions during the day rather than in one big chunk.

As the tool learns more about you, it creates badges like:

  • Introverted
  • Pragmatic
  • Intelligent
  • Open-minded
  • Rational

These are likely used to influence future interactions. You also get to vote statements up or down depending on whether you agree or disagree with them.

There have been a number of other reviews of Replika, but I thought I'd add another log to the fire. An article in Wired stated that the Replika project is going open source; it will be interesting to see where it goes.

I'll likely continue to play with it for a while, but its interactions will need to improve or it will become the Tamagotchi of the day.

Groundhog Day, IoT and Security Risks

Lately I've been hearing a great deal of discussion about IoT and its application in business. I get a Groundhog Day feeling, since in some sectors this is nothing new.

Back in the late 70s and early 80s, I spent all my time on data collection from factory equipment and developing analytics programs for the data collected. The semiconductor manufacturing space had most of its tooling and inventory information collected and tracked. Since that manufacturing segment is all about yield management, analytics was a business imperative. Back then, though, you had to write your own analytics and graphics programs.

The biggest difference today, though, is the security concerns. The ease of data movement and connectivity, driven by the industry's lust for convenience, has opened our devices and networks to a much wider aperture of possible intruders. Though there are many risks in IoT, here are a few to keep in mind.

1) Complexity vs. Simplicity and application portfolio expansion

Businesses have had industrial control systems for decades. Now that smart thermostats, water meters, and doorbells are becoming commonplace, managing this range of devices in the home has required user interfaces designed for the public rather than for experts. Those same techniques are being applied back into businesses and can start a battle of complexity vs. simplicity.

The investment in the IoT space by the public dwarfs the investment by most industries. These newer, more automated and ergonomic tools still need to tackle an environment that is just as complex for the business as it's always been; if anything, there will be more devices brought into the business environment every day.

Understanding the complexity of vulnerabilities is a huge and ever-growing challenge. Projects relying on IoT devices must be defined with security in mind and yet interface effectively with the business. These devices will pull new software into the business and expand the application portfolio. Understand the capabilities and vulnerabilities of these additions.

2) Vulnerability management

Keeping these IoT devices up to date is a never-ending problem. One of the issues of a rapidly changing market segment like this is that devices will have a short lifespan. Businesses need to understand that they will still need to have their computing capabilities maintained. Will the vendor stand behind their product? How critical to the business is the device? As an example of the difficulties, look at the patch level of the printers in most businesses.

3) Business continuity

Cyber-attacks were unknown when I started working in IoT. Today, denial-of-service attacks and infections make the news continuously. It is not about 'if' but 'when' and 'what you're going to do about it'. These devices are not as redundant as IT organizations are used to. When they can't share the data they collect or control the machines as they should, what will the business do? IoT can add a whole other dimension to business continuity planning that will need to be thought through.

4) Information leakage

Many IoT devices call home (back to the businesses that made them). Are those transfers encrypted? What data do they carry? One possible unintended consequence is that information can be derived (or leaked) from these devices. Just as your electric meter's readings can be used to determine whether you're home, a business's IoT devices can reveal information about production volume and the types of work being performed. The business will need to develop a deeper comprehension of these analysis and data-sharing risks, regardless of the business or industry, and adjust accordingly.
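One way to start answering the "are those transfers encrypted?" question is simply to watch a device's outbound traffic. A rough sketch (the device address and the TLS-port heuristic are assumptions; a real assessment would inspect payloads, not just destination ports):

```python
# Sketch: flag IoT "call home" traffic that isn't headed to a TLS port.
# The device address and plaintext-port heuristic are illustrative; a real
# assessment would inspect the payloads themselves.
from scapy.all import sniff, IP, TCP

DEVICE_IP = "192.168.1.42"          # hypothetical IoT device on the LAN
TLS_PORTS = {443, 8443}

def check(pkt):
    if IP in pkt and TCP in pkt and pkt[IP].src == DEVICE_IP:
        if pkt[TCP].dport not in TLS_PORTS:
            print(f"plaintext? {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")

sniff(filter=f"tcp and src host {DEVICE_IP}", prn=check, store=False, count=200)
```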

The Internet of Things has the potential to bring together a deeper understanding of the business. Accordingly, security at both the device and network levels needs to develop just as strongly. The same analytics enabling devices to perform their tasks can also be used nefariously, or to make the environment stronger.

Back in Seattle

Last week, I was able to go back to the Microsoft campus in Redmond for a meeting. That's the first time I've been back there since I spent three months there as part of the EDS Top Gun program back in 2005.

Flying into Seattle, we got a good view of the Space Needle and the Science Fiction Museum and Hall of Fame.

There were a number of déjà vu moments walking around the Microsoft campus.


I always find these opportunities to see what companies are most proud of very telling. It was clear that cloud, analytics, and human interface transformations were at the forefront of their thinking, much like for the rest of us.