Experimenting with Bing Chat

Since I have been playing with ChatGPT, and Bing Chat came out, I thought I’d try it. I already had a baseline dealing with writing Python for dice rolling, so I thought I’d start with the same kind of query for Bing Chat.

I asked:

Using the Visual Studio Python IDE Python Development Tools to deploy an application for the Android platform?

The answer I got back was:

Hello, this is Bing. I searched the web for your query and found some results that might be helpful.

According to the search results, Visual Studio is a powerful IDE for the Python language on Windows, and it provides open-source support for Python through the Python Development and Data Science workloads [1][2]. You can use Visual Studio to edit, debug, and test Python code, and then publish an app [3].

However, developing for Android using Python comes with some additional challenges, as Python is not “native” on those platforms, so you have to include Python with your app, and there are no “official” Pythons for either platform [4]. You might want to use some tools or frameworks that can help you create Android apps with Python, such as Briefcase [5], Kivy [5], or BeeWare [5].

That was nice, and clear about the Android side of things, but it still left me wondering about the Visual Studio installation. I had to ask a follow-up question:

What would I need to do on Windows Visual Studio to install kivy to develop for the android platform?

And the reply was:

I searched the web for your query and found some results that might be helpful.

According to the search results, to install Kivy on Windows Visual Studio, you need a compiler. On Windows, the Visual Studio Build Tools are required, and they are available for free [1]. You can either download and install the complete Visual Studio IDE, which contains the build tools [1], or download and install only the build tools [1]. Then, you can use pip to install Kivy [2].

To develop for Android using Kivy, you need to create a Python file and a Kivy file for your app. You can use Kivy’s widgets and layouts to design your interface, and use Python’s logic to implement your functionality [3]. You can also use Kivy’s APIs to access Android features, such as vibration, sensors, texting, etc. [4] To test your app, you can use the Kivy Launcher app, which allows you to run your Kivy app without a compilation step [4]. To compile and publish your app, you need to use the Buildozer tool, which can create a fully signed AAB (Android App Bundle) for your app [4].

That is a different answer than the one from ChatGPT. It was more succinct but not quite as detailed.

So I thought I’d give it the first question I gave ChatGPT related to GUI development that really shocked me:

write a graphic user interface in python that rolls two dice using Kivy

The answer was a bit underwhelming:

I searched the web for your query and found some results that might be helpful.

According to the search results, to create a graphic user interface in python using Kivy, you need to import the Kivy library and create an App class that inherits from the App base class [1]. The App class has a build() method that returns a widget that will be displayed on the screen [1]. You can use various widgets and layouts from the Kivy library to design your interface, such as Label, Button, BoxLayout, GridLayout, etc. [2] You can also use the Canvas instruction to draw shapes and lines on the screen [3].

To create a graphic user interface in python that rolls two dice using Kivy, you need to import the random module and create a function that generates two random numbers between 1 and 6, and returns them as a tuple [4]. You also need to create a Label widget that displays the result of the dice roll, and a Button widget that triggers the dice roll function when clicked [5]. You can use a BoxLayout or a GridLayout to arrange the widgets on the screen [2].

Here is an example of a Python file and a Kivy file for a graphic user interface that rolls two dice using Kivy:

# dice.py
import kivy
kivy.require('2.1.0')
from kivy.app import App
from kivy.uix.label import Label
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import StringProperty
import random

class DiceApp(App):
    # A property to store the result of the dice roll
    result = StringProperty('Roll the dice!')

    def build(self):
        # Create a box layout to hold the widgets
        layout = BoxLayout(orientation='vertical')
        # Create a label widget to display the result
        self.label = Label(text=self.result, font_size=50)
        # Create a button widget to trigger the dice roll
        self.button = Button(text='Roll', font_size=50, on_press=self.roll_dice)
        # Add the widgets to the layout
        layout.add_widget(self.label)
        layout.add_widget(self.button)
        # Return the layout as the root widget
        return layout

    def roll_dice(self, instance):
        # Generate two random numbers between 1 and 6
        dice1 = random.randint(1, 6)
        dice2 = random.randint(1, 6)
        # Update the result property with the dice values
        self.result = f'{dice1} + {dice2} = {dice1 + dice2}'

# Run the app
if __name__ == '__main__':
    DiceApp().run()

# dice.kv
<DiceApp>:
    # Define the appearance of the widgets
    label:
        color: 0, 0, 0, 1 # Black text
        size_hint: 1, 0.8 # Take 80% of the vertical space
    button:
        background_color: 0, 0, 1, 1 # Blue background
        color: 1, 1, 1, 1 # White text
        size_hint: 1, 0.2 # Take 20% of the vertical space

It did give me some code, and it included references that were totally missing from ChatGPT, but I was not as impressed.
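One thing worth noting about the quoted code: the Label is created with text=self.result, but an App-level StringProperty is not automatically bound to that Label, so the display would not update after a roll unless the label text is set directly or bound explicitly. The dice logic itself is easy to check on its own; here is a minimal sketch of it pulled out as a plain function (the function name and its rng parameter are my own additions, so it can be exercised without Kivy):

```python
import random

def roll_dice(rng=random):
    """Roll two six-sided dice; return both values and a display string."""
    dice1 = rng.randint(1, 6)
    dice2 = rng.randint(1, 6)
    return dice1, dice2, f'{dice1} + {dice2} = {dice1 + dice2}'
```

In the Kivy version, the likely fix would be to set self.label.text inside roll_dice, or bind the label's text to the property.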

I will need to go back and forth between the two and see what happens.

It did make me think about writing a paper with Bing Chat (to get the references) and then having ChatGPT make it better. So, I asked my question that started me down the path of looking at ChatGPT:

Write 500 words about the origin of computer aided design.

Even though it said it couldn’t do it, it produced a surprisingly good response:

I searched the web for your query and found some results that might be helpful.

According to the search results, computer aided design (CAD) is the use of computers to facilitate or assist design, especially in engineering and architecture [1]. The term CAD emerged in the 1950s and is commonly credited to Douglas Ross, a computer scientist who was working at MIT when the acronym was coined [2][3]. The origin of CAD can be traced back to the Second World War, when there was a huge wave of development in computing [1].

One of the earliest examples of CAD was the Program for Numerical Tooling Operations (PRONTO), the very first Computer Numerical Control (CNC) programming system, developed by Patrick Hanratty in 1957 [1]. PRONTO allowed the creation of numerical control programs for machining parts using a computer [1]. Another pioneer of CAD was Ivan Sutherland, who developed Sketchpad in 1963, the first interactive computer graphics program that allowed users to draw and manipulate geometric shapes on a screen using a light pen [4].

The development of CAD was influenced by the advancement of computer hardware and software, such as the invention of the transistor, the integrated circuit, the mouse, the graphical user interface, and object-oriented programming [2][5]. The first commercial CAD systems were introduced in the late 1960s and early 1970s, such as ADAM (Automated Drafting And Machining), DAC-1 (Design Augmented by Computer), and CADAM (Computer Augmented Drafting And Manufacturing) [2]. These systems were mainly used for 2D drafting and design, and were expensive and limited to large corporations and institutions [2].

The 1980s and 1990s saw the emergence of 3D modeling and rendering, parametric and feature-based modeling, solid and surface modeling, and computer-aided engineering (CAE) and manufacturing (CAM) integration [2][4]. Some of the notable CAD systems of this era were AutoCAD, CATIA, Pro/ENGINEER, SolidWorks, and Unigraphics [2]. These systems were more affordable and accessible, and enabled more complex and realistic design and simulation [2].

The 21st century has witnessed the development of cloud-based, web-based, and mobile CAD systems, as well as the application of CAD in various fields, such as animation, gaming, biomedical, and fashion [2][4]. Some of the current trends and challenges in CAD are artificial intelligence, virtual and augmented reality, additive manufacturing, and collaborative and distributed design [2][4]. CAD has become an indispensable tool for designers and engineers, and has revolutionized the design process and the quality of the products [2][4].

And it came with references that were totally missing from the ChatGPT response.

So I went to ChatGPT and asked it to improve the essay. ChatGPT produced the following:

Computer-aided design (CAD) is a technology that has revolutionized the way that engineers and designers create and refine products. Simply put, CAD is the use of computers to facilitate or assist in the design process, particularly in the fields of engineering and architecture. From its humble beginnings in the 1950s to its current use in a wide range of industries, CAD has come a long way in terms of both capabilities and accessibility.

The term “CAD” is commonly credited to Douglas Ross, a computer scientist who was working at MIT when the acronym was coined in the 1950s. However, the origins of CAD can be traced back even further, to the Second World War, when there was a huge wave of development in computing. One of the earliest examples of CAD was the Program for Numerical Tooling Operations (PRONTO), which was the very first Computer Numerical Control (CNC) programming system. PRONTO allowed the creation of numerical control programs for machining parts using a computer. Another pioneer of CAD was Ivan Sutherland, who developed Sketchpad in 1963. This was the first interactive computer graphics program that allowed users to draw and manipulate geometric shapes on a screen using a light pen.

The development of CAD was influenced by the advancement of computer hardware and software, such as the invention of the transistor, the integrated circuit, the mouse, the graphical user interface, and the object-oriented programming. The first commercial CAD systems were introduced in the late 1960s and early 1970s, such as ADAM (Automated Drafting And Machining), DAC-1 (Design Augmented by Computer), and CADAM (Computer Augmented Drafting And Manufacturing). These systems were mainly used for 2D drafting and design, and were expensive and limited to large corporations and institutions.

The 1980s and 1990s saw the emergence of 3D modeling and rendering, parametric and feature-based modeling, solid and surface modeling, and computer-aided engineering (CAE) and manufacturing (CAM) integration. Some of the notable CAD systems of this era were AutoCAD, CATIA, Pro/ENGINEER, SolidWorks, and Unigraphics. These systems were more affordable and accessible, and enabled more complex and realistic design and simulation.

The 21st century has witnessed the development of cloud-based, web-based, and mobile CAD systems, as well as the application of CAD in various fields, such as animation, gaming, biomedical, and fashion. Some of the current trends and challenges in CAD are artificial intelligence, virtual and augmented reality, additive manufacturing, and collaborative and distributed design. CAD has become an indispensable tool for designers and engineers and has revolutionized the design process and the quality of the products.

A definite style improvement, though now all the references have disappeared.

A mixed bag of results from my perspective.

Continuing to understand ChatGPT and its programming capabilities.

I saw a post yesterday in the media stating that ChatGPT is not particularly innovative. This statement was made by Meta’s chief AI scientist. He said, “It’s nothing revolutionary, although that’s the way it’s perceived in the public. It’s just that, you know, it’s well put together, it’s nicely done.”

Guess what, a usable tool is innovative. Look at the Lisa (predecessor to the Macintosh). Did it have anything all that innovative compared to what Xerox was doing already? Not really, but it was well integrated and thought out. By the time it became the Macintosh, it was also accessible and easy to use.

The Meta AI chief scientist may have access to “what’s available at other labs,” but I don’t, so I am continuing my path to discovery with what I have. Taking up where I left off in the previous post about ChatGPT’s ability to aid software development:

I got my Python environment set up to use the Python PyQt5 library. Now I have the following screen displayed and working:

Not too bad for someone who is barely functional in Python…

Now I need to determine how to move the software program over to Android, so I asked ChatGPT:

Using the Visual Studio Python IDE Python Development Tools to deploy an application for the Android platform?

The answer I got back was:

Deploying an application for the Android platform using the Visual Studio Python IDE and Python Development Tools (PTVS) is not a straightforward process and requires some setup and additional tools. Here's an overview of the steps you can take:

  • Install the Android SDK on your computer:
    • Download the Android SDK from the official website (https://developer.android.com/sdk)
    • Install the SDK and set the ANDROID_HOME environment variable to the path of the SDK
  • Install the Java Development Kit (JDK) on your computer:
    • Download the JDK from the official website (https://www.oracle.com/java/technologies/javase-downloads.html)
    • Install the JDK and set the JAVA_HOME environment variable to the path of the JDK
  • Install Python for Android (python-for-android) on your computer:
    • Run the command pip install python-for-android
  • Create a new project in Visual Studio using the Python Development Tools:
    • Open Visual Studio and create a new project
    • Select the "Python Application" template
    • Write your Python code for the application
  • Use python-for-android to create an APK file:
    • Open a command prompt or terminal window
    • Navigate to the directory where your Python code is located
    • Run the command python-for-android apk --private . --package org.test.myapp --name "My App" --version 0.1 --bootstrap=sdl2
    • This will create an APK file in the bin directory of the project
  • Install the APK file on your Android device:
    • Connect your Android device to your computer
    • Copy the APK file to your device
    • Open the file manager on your device and navigate to the location where you copied the APK file
    • Tap on the APK file to start the installation process

Note that this process may require some tweaking and adjustments to your script and settings to make it run well on mobile devices; also, it's not as simple as using Kivy, which is a more complete framework for building mobile applications.

Kivy is an open-source Python application development framework for building and distributing cross-platform Python GUI apps. Testing these instructions out will have to wait for another day.

Finally got around to playing with ChatGPT

I finally got access to ChatGPT after hearing about it for a while. I have worked with folks in the AI space since the 1990s (back when I worked at Artificial Intelligence Services at GM), so when ChatGPT was released for testing in December, I was a bit skeptical. I needed to give it a test.

I am working on a presentation for our woodshop on an introduction to CAD, so I thought getting a short discussion from the Chatbot about the origins of CAD would provide an interesting result, since I have just done some research on this space.

I asked the ChatGPT the following:

Write 500 words about the origin of computer aided design.

The results started coming back instantaneously. They were:

Computer-aided design (CAD) is a method of using computers to assist in the creation, modification, analysis, or optimization of a design. The origins of CAD can be traced back to the early 1960s, when the first computer-aided design systems were developed.

One of the earliest forms of CAD was sketching programs, which allowed engineers and architects to create basic 2D drawings using a computer. These early systems were limited in their capabilities and were primarily used for basic drafting and design tasks.

In the 1970s, the development of more powerful computers and the creation of computer graphics technology allowed for the development of 3D CAD systems. These systems allowed for the creation of more complex and accurate designs, and they quickly became popular among engineers and architects.

The 1980s saw the introduction of parametric design, which allowed for the creation of designs that could be easily modified and updated. This was a major breakthrough in CAD technology and it greatly increased the efficiency and flexibility of the design process.

In the 1990s, the widespread adoption of personal computers and the development of more powerful and affordable computer hardware allowed for the widespread use of CAD among small businesses and individuals. Additionally, the development of the Internet and the creation of online design communities made it possible for designers to collaborate and share their work with others from around the world.

Today, CAD systems are used in a wide variety of industries, including architecture, engineering, construction, manufacturing, and product design. They are used to create everything from buildings and bridges to cars and electronics.

CAD systems have become an essential tool for many design professionals and are used to create accurate and detailed designs, as well as to analyze and optimize the performance of a design. With the help of computer aided design, it becomes easier for the designer to communicate their ideas to other members of the team.

Overall, computer-aided design has come a long way since its origins in the 1960s. Today, it is an essential tool for many industries and is used to create a wide variety of products and structures. With the advancement of technology, the future of CAD is looking bright and promising.

Not too bad, though I wish it had referenced specific examples of the systems in the 60s, 70s, 80s, and 90s that were the milestones. So, I asked it:

write 500 words about the milestones of CAD solutions

I am hoping that I can get specific examples this way. It didn’t really come back with specific system names… but it did provide more rationale about the topic:

Computer-Aided Design (CAD) solutions have come a long way since their inception in the 1960s. Over the past several decades, CAD technology has undergone significant advancements, making it an essential tool for many industries today. Here are some of the key milestones in the evolution of CAD solutions:

The 1960s: The Origins of CAD
The first computer-aided design systems were developed in the 1960s. These early systems were limited in their capabilities, and were primarily used for basic drafting and design tasks. They were mainly used in the aerospace, automotive, and shipbuilding industries.

The 1970s: 3D CAD Systems
The development of more powerful computers and the creation of computer graphics technology in the 1970s allowed for the development of 3D CAD systems. These systems allowed for the creation of more complex and accurate designs, and they quickly became popular among engineers and architects.

The 1980s: Parametric Design
The introduction of parametric design in the 1980s was a major breakthrough in CAD technology. This feature allowed for the creation of designs that could be easily modified and updated. This greatly increased the efficiency and flexibility of the design process.

The 1990s: Widespread Adoption
With the advent of personal computers and the development of more powerful and affordable computer hardware, CAD systems became widely adopted by small businesses and individuals in the 1990s. The development of the internet and online design communities further facilitated collaboration and sharing of designs.

The 2000s: Building Information Modeling (BIM)
The early 2000s saw the advent of Building Information Modeling (BIM) which is a process for creating and managing building data during its lifecycle. BIM enables professionals to share, collaborate and use data in a common format, which leads to more efficient and effective delivery of projects.

The 2010s: Cloud-based and Mobile CAD
The 2010s saw the rise of cloud-based and mobile CAD solutions. Cloud-based CAD systems allow users to access and collaborate on designs from anywhere, while mobile CAD solutions allow users to access and edit designs on the go.

The 2020s: Artificial Intelligence and Machine Learning
With the development of Artificial Intelligence and Machine Learning in the recent years, CAD solutions have become even more powerful. AI-based CAD systems can analyze designs, identify patterns, and optimize performance, while machine learning-based CAD systems can learn from past designs and improve their design recommendations.

In conclusion, CAD solutions have undergone significant advancements throughout the years, evolving from basic 2D drafting tools to advanced 3D design software that incorporates artificial intelligence and machine learning capabilities. These developments have made CAD an essential tool for many industries, including architecture, engineering, construction, manufacturing, and product design. The future of CAD looks bright, with continued advancements in technology expected to lead to even more advanced and powerful solutions.

This answer had a few additional grammatical anomalies, but not too bad. I thought I’d try one more time to get specific system names, so this time I asked:

write 500 words containing the specific system names that defined the milestones of CAD solutions since 1960

The answer this time was closer to what I was looking for:

Computer-Aided Design (CAD) solutions have come a long way since their inception in the 1960s. Over the past several decades, CAD technology has undergone significant advancements, making it an essential tool for many industries today. Here are some of the key milestones in the evolution of CAD solutions and the specific system names that defined them:

The 1960s: The Origins of CAD
The first computer-aided design systems were developed in the 1960s, some of these early systems include Sketchpad by Ivan Sutherland, which was developed at MIT in 1963. Sketchpad was a 2D drawing program that could be used to create simple sketches and drawings. Another early system was Sketchpad III which was developed by Ivan Sutherland in the late 1960s. It was one of the first systems to use a light pen for input, allowing users to draw and create designs on the screen.

The 1970s: 3D CAD Systems
The development of more powerful computers and the creation of computer graphics technology in the 1970s allowed for the development of 3D CAD systems. One of the first 3D CAD systems was AutoCAD, developed by Autodesk in 1982. AutoCAD is a computer-aided design software that enables the creation of 2D and 3D designs. Another early 3D CAD system was Pro/Engineer, developed by PTC in the late 1970s. Pro/Engineer was one of the first 3D CAD systems that could be used to create complex designs and models.

The 1980s: Parametric Design
The introduction of parametric design in the 1980s was a major breakthrough in CAD technology. One of the first parametric design systems was Unigraphics, developed by EDS in 1983. Unigraphics was a 3D CAD system that allowed for the creation of designs that could be easily modified and updated. Another parametric design system that was developed in the 1980s was SolidWorks, which was developed by SolidWorks Corp. in 1995. SolidWorks is a 3D solid modeling computer-aided design software.

The 1990s: Widespread Adoption
With the advent of personal computers and the development of more powerful and affordable computer hardware, CAD systems became widely adopted by small businesses and individuals in the 1990s. One of the most popular systems during this time was AutoCAD, which was widely adopted by architects, engineers, and construction professionals. Another popular system was SolidEdge which was developed by Intergraph Corp in 1996, it was a 3D CAD system that provided advanced modeling capabilities.

The 2000s: Building Information Modeling (BIM)
The early 2000s saw the advent of Building Information Modeling (BIM) which is a process for creating and managing building data during its lifecycle. One of the most popular BIM systems that was developed during this time was Revit, developed by Autodesk in 2002. Revit is a BIM software that enables architects, engineers, and construction professionals to collaborate and share building data in a common format.

The 2010s: Cloud-based and Mobile CAD
The 2010s saw the rise of cloud-based and mobile CAD solutions. One of the most popular cloud-based CAD systems that was developed during this time was AutoCAD 360, developed by Autodesk in 2011. AutoCAD 360 is a cloud-based CAD system that allows users to access and collaborate on designs from anywhere. Another popular mobile CAD system that was developed during this time was AutoCAD mobile app, developed by Autodesk in 2010. AutoCAD mobile app is a mobile CAD system that allows users to access and edit designs on the go.

The 2020s:

This time it ran out of words before getting to all the details and stopped. I am not sure those were the best examples of groundbreaking systems; they were commercially impactful examples, once it got into the 70s and beyond. What I learned is that you can shape the interaction with follow-up questions to get more of what you want.

Overall, I was impressed with what it was able to produce.

Next, I’ll tackle a graphic user interface for a text-only implementation I wrote a while back of the dice version of the Settlers of Catan game, as an experiment in learning Python. Based on what I’ve seen, it should really facilitate that process, and since I’ve never written a GUI in Python, it should be an interesting test of how the tool can aid a developer.

I am sure the code will not be great, but good enough is good enough.

My workaround for Windows 11 Start Menu

Since I know this will only be a temporary need, I created folders of shortcuts based on the role being performed. For example:

I created a folder called Start Menu where I dragged shortcuts for the programs I normally use in that mode:

My start menu folder

I then right-clicked on each of the folders and selected Pin to Start

I went to the Start Menu and nothing happened!?!? You will need to log out and log back in before the updated Start Menu is displayed.

Start Menu with program groupings

It is definitely a kludge, but better than nothing — I suppose.

What is up with the Windows 11 Start Menu

I upgraded my computer to Windows 11 the other day — almost by accident, since I logged into my Admin account on the machine and it asked me about Windows 11. I clearly clicked the wrong button, since it started me down the road to the upgrade and there was no turning back.

I needed to clean up my machine anyway since I had not rebuilt it in over a year. I had everything backed up so I just rolled with it.

The upgrade was relatively painless except for one BIG ISSUE. The Windows 11 Start Menu has been lobotomized. Sure, Windows 11 allows for some minimal customization options, but not to the extent I was using in Windows 10. There is a much more intensive customization option for those familiar with JSON, but I really don’t want to go down that path yet.

It may be fine if the only thing you do is run Office or a few other things, but I have at least 5 major operating modes:

  • CAD/CAM
    • Laser
    • 3D printing
    • CNC
    • CAD design
  • Office
    • MS office
    • Blogging
    • Web maintenance
  • Ham radio
  • Recreational programming
  • Gaming and entertainment

Each of these modes has dozens of programs of its own.

Now the Start Menu is a cesspool with no real way to organize it. I thought maybe I could do something with multiple desktops (now supposedly enhanced in Windows 11), but NO, this ‘feature’ doesn’t really allow for different desktop icons or folder groupings either, at least as far as I can tell.

I am going to continue to pound away on this, hopefully without having to pay for a UI enhancement that was built in before. It looks like icon grouping will be added back at some future date.

Just a word of warning to really look over the feature list about Windows 11 before making the leap.

An article on the negative side of gamification

Back when I worked in HP Labs, one of the areas I focused some attention on was gamification. A recent article in Fast Company looked into the dark side of gamifying work. Another interesting article was in the January 2012 issue of Wired, titled The Curse of Cow Clicker.

For me, the simple definition of gamification is “Metrics-based behavior modification” or using game mechanics to influence real-world behavior. Some view this as a way to improve the worker experience for business functions, while others view it less positively as “exploitware”. We see it all around us in healthcare, retail and new areas all the time.

You may wonder “Can sitting on a help desk and answering calls from consumers be turned into a competitive game that improves the experience for everyone?” — it turns out it can. Many activities can be tracked, rewarded and tuned to the needs of the day. People respond when you give “points” for things like “resetting passwords”, “resolving install problems”… as long as the points mean something to the individuals doing the tasks.
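To make the mechanics concrete, here is a toy sketch of that kind of points system; the task names and point values are invented purely for illustration:

```python
# Toy points ledger for a help desk: each completed task earns points.
# Task names and point values are hypothetical examples.
POINT_VALUES = {
    'reset_password': 5,
    'resolve_install_problem': 20,
}

def score(completed_tasks):
    """Total the points earned for a list of completed task names."""
    return sum(POINT_VALUES.get(task, 0) for task in completed_tasks)

# Two password resets and one install fix -> 5 + 5 + 20 = 30 points.
```

The real work in a system like this is not the arithmetic; it is choosing point values that the people doing the tasks actually care about.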

Human behavior can be manipulated by pulling on just a few of the right strings. This is one way companies can tap into the streams of data and the inherent human intellect in their business to drive value. Businesses just need to listen, invest in understanding what drives those they want to influence, and define systems to meet that unspoken need. One key, though, is to not make it so blatant that those involved feel manipulated.

Is automation forcing divergent paths of quality vs. cost?

I saw an interesting post: When Robot Writers Take Over, Will Freelancers Be Obsolete? The article was focused on freelance writing, but it did make me wonder about the whole concept of freelancing in general.

The relatively fixed and easy-to-automate positions in many fields are ripe for automation. Those that require creativity or unique insight should be safe for a long time to come. In fact, automation could make the freelancer’s life less mundane and more interesting. It reminded me of a situation earlier in my career…

Back in the early 90s, I worked in the AI space for Electronic Data Systems (EDS). We focused primarily on solving problems for GM and the US government. Somewhere around here I have a coffee cup with the motto of the group: “Make it Work, Make it Real”. Unfortunately, the folks working in the group felt it really meant that if we could make it work, it wasn’t really AI — since someone would always say that it was just regular old programming, no matter what innovative technique or esoteric language we used.

One of the projects I led was called Knowledge-Based Tool Design. We were trying to automate tooling design for clamping and welding car parts using CAD techniques, a project far ahead of its time. Programmatically determining the right type of clamp and the correct way to swing it into place was spatially too difficult for the time. We just didn’t have the compute power or the algorithms to determine orientation and approach. A good human tool designer could see the solution intuitively.

We did figure out that people are not good at pulling together the bill-of-materials to ensure that the clamp and all the hydraulic and mounting components… were defined. We shifted our attention to defining that type of detail using computers — reducing the errors and rework later in the process.

Similarly, in other industries, there is so much annoying, resource-intensive, low-hanging fruit to be picked that the return on investment for tackling truly intuitive problems just isn’t there. That can all change, though, as better algorithms and computing capabilities develop.

There are a couple of ways this could go:

  • The intuitive functions will likely become more of a freelance function, since companies will not need (or be willing to pay) for those expert roles all the time and the work will be interesting.
  • The focus shifts to less high-quality designs that can be automated.

In any case, employment as we know it will be changing.

What’s the real outcome of Salesforce’s AI predictions?

Yesterday, I was catching up on my technology email and came across this post stating that Salesforce now powers over 1B predictions every day for its customers. That’s a pretty interesting number to throw out there, but it makes me ask “so what?” How are people using these predictions to make greater business impact?

The Salesforce website states:

“Einstein is a layer of artificial intelligence that delivers predictions and recommendations based on your unique business processes and customer data. Use those insights to automate responses and actions, making your employees more productive, and your customers even happier. “

Another ‘nice’ statement. Digging into the material a bit more, Einstein (the CRM AI function from Salesforce) appears to provide analysis of previous deals and whether a specific opportunity is likely to be successful, helping to prioritize your efforts. It improves the presentation of information with some insight into what it means. It appears to be integrated into the CRM system that the users are already familiar with.

For a tool that has been around since the fall of 2016, especially one that is based on analytics… I had difficulty finding any independent quantitative analysis of the impact. Salesforce did have a cheatsheet with some business impact analysis of the AI solution (and blog posts), but no real target-market impact to provide greater context – whom are these metrics based on?

It may be that I just don’t know where to look, but it does seem like a place for some deeper analysis and validation. The analysts could be waiting for other vendors’ solutions to compare against.

In the micro view, organizations that dive into this pool will need to take a more quantitative approach: defining their past performance and expectations, then validating actuals against predictions. That is the only way a business can justify the effort and improve. It is not sufficient to just put the capabilities out there and call it done.
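That validation loop doesn't need to be elaborate to be useful. A minimal sketch in Python (the records and field names here are hypothetical, not from any Salesforce/Einstein API) of comparing predicted outcomes against actuals:

```python
# Minimal sketch of validating win/loss predictions against actual outcomes.
# The records and field names are hypothetical, for illustration only.

def prediction_accuracy(records: list[dict]) -> float:
    """Fraction of opportunities where the predicted outcome matched the actual one."""
    if not records:
        return 0.0
    hits = sum(1 for r in records if r["predicted_win"] == r["actual_win"])
    return hits / len(records)

history = [
    {"predicted_win": True,  "actual_win": True},
    {"predicted_win": True,  "actual_win": False},
    {"predicted_win": False, "actual_win": False},
    {"predicted_win": True,  "actual_win": True},
]
print(f"{prediction_accuracy(history):.0%}")  # → 75%
```

Even a crude scorecard like this gives a business a baseline to verify against — which is the whole point: without it, a billion predictions a day is just a number.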

It goes back to the old adage:

“trust, but verify”

Simplicity, the next big thing?

Recently, Dynatrace conducted a survey of CIOs on their top challenges. Of the top six, almost all deal with concerns about complexity. There is no doubt there are numerous technologies being injected into almost every industry from a range of vendors. Integration of this multivendor cacophony is rife with security risks and misunderstanding – whether in your network or your IoT vendor environment.

Humans have a limited capacity to handle complexity before they throw up their hands and just let whatever happens wash over them. That fact is one of the reasons AI is being viewed as the savior for the future. Back in 2008, I wrote a blog post for HP that mentioned:

“the advent of AI could allow us to push aside a lot of the tasks that we sometimes don’t have the patience for, tasks that are too rigorous or too arduous.”

IT organizations need to shift their focus back to making the business environment understandable, not just injecting more automation or data collection. Businesses need to take latency out of decision making and increase the level of understanding and confidence. A whole new kind of macro-level (enterprise) human interface design is required. Unfortunately, this market is likely a bit too nebulous to be targeted effectively today other than through vague terms like analytics…  But based on the survey results, large-scale understanding (and then demand) appears to be dawning on leadership.

The ROI for efforts to simplify and encourage action should be higher than just adding a new tool to the sprawling portfolio already in place in most organizations. We’ll see where the monies go, though, since that ROI is likely to be difficult to prove when compared to the other shiny balls available.

Six thoughts on mobility trends for 2018

Let’s face it, some aspects of mobility are getting long in the tooth. The demand for more capabilities is insatiable. Here are a few areas where I think 2018 will see some exciting capabilities develop. Many of these are not new, but their interactions and intersection should provide some interesting results and thoughts to include during your planning.

1. Further blurring and integration of IoT and mobile

We’re likely to see more situations where mobile recognizes the IoT devices around it to enhance contextual understanding for the user. We’ve seen some use of NFC and Bluetooth to share information, but approaches that embrace the environment and act upon the information available are still in their infancy. This year should provide some significant use cases and maturity.

2. Cloud Integration

By now most businesses have done much more than just stick their toe in the cloud Everything as a Service (XaaS) pool. As the number of potential devices in the mobility and IoT space expand, the flexibility and time to action that cloud solutions facilitate needs to be understood and put into practice. It is also time to take all the data coming in from these and transform that flow into true contextual understanding and action, also requiring a dynamic computing environment.

3. Augmented reality

With augmented reality predicted to expand to a market somewhere between $120 billion and $221 billion in revenue by 2021, we’re likely to see quite a bit of innovation in this space. The wide range of those estimates demonstrates the lack of a real understanding of the market. 2018 should be a year where AR gets real.

4. Security

All discussions of mobility need to include security. Heck, the first month of 2018 should have nailed the importance of security into the minds of anyone in the IT space. There were more patches (and patches of patches) on a greater range of systems than many would have believed possible just a short time ago. Recently, every mobile app store (Apple, Android…) was found to host nefarious software that had to be excised. Mobile developers need to be ever more vigilant, not just about the code they write but the libraries they use.

5. Predictive Analytics

Context is king, and the use of analytics to increase understanding of the situation and possible responses is going to continue to expand. As capabilities advance, only our imagination will limit where and when mobile devices become useful. Unfortunately, the same can be said about security attacks built on predictive analytics.

6. Changing business models

Peer-to-peer solutions continue to be all the rage, but with the capabilities listed above, whole new approaches to value generation become possible. There will always be early adopters willing to play with these, and with the deeper understanding possible today, new approaches to crossing the chasm will be demonstrated.

It should be an interesting year…