The definition of done

I was looking through some material the other day and a statement came up that I found intriguing: “What is your team’s definition of done?”

Since I believe in “good enough” rather than spending the time and effort to chase perfection (which we all know is probably not attainable), this concept of a formal team definition of when to “put a fork in it” and call it done made me think: how often do we have a formal structure and definition of being done? Is it flexible?

I’ve talked with people who were doing in-house agile development and their definition of done always seemed to be when they ran out of time or money. Rarely was it when they ran out of requirements. Is that OK?

For programmers who do have a defined set of requirements and are coding for money, they’re done when the requirements are all met, the code compiles, everything is tested, the code is installed in production, and the customer has signed off. That is getting pretty close to perfection.

Can each of the roles in a project have its own definition of done? If all those individual definitions are met, is the project done? I can think of a number of situations where the architects completed all their work products but the results were never used effectively. The flag was raised but no one saluted, so done was declared too early.

It just seems like something we may all want to take a moment and think about: when I think I’m done, are the other stakeholders satisfied?
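The roll-up logic described above – a project is only done when every role’s own definition of done is satisfied – can be sketched in a few lines of code. This is a hypothetical illustration; the role names and criteria are my own, not from any particular methodology:

```python
# Hypothetical sketch: a project is "done" only when every role's
# own definition of done is fully satisfied.

definitions_of_done = {
    "developer": ["requirements met", "code compiles", "tests pass"],
    "operations": ["installed in production"],
    "customer": ["signed off"],
}

# The set of criteria actually completed so far.
completed = {
    "requirements met", "code compiles", "tests pass",
    "installed in production", "signed off",
}

def role_done(role):
    """A single role is done when all of its criteria are completed."""
    return all(c in completed for c in definitions_of_done[role])

def project_done():
    """The project is done only if every role's definition is met."""
    return all(role_done(r) for r in definitions_of_done)

print(project_done())  # True only while every criterion is completed
```

The point of the sketch is the conjunction: dropping any single criterion (say, the customer sign-off) flips the project back to not done, no matter how finished the other roles feel.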

Another step toward gesture based computing

You may have seen some of the analysis taking place around Google Soli, which is being viewed with a great deal of excitement (even though it will not be out until next year).

There has been significant work in this space over the years with Leap Motion (focused on hand-based gestures) and Microsoft Kinect (addressing whole-body or room-scale sensing), with numerous examples of specialized application interfaces.

The first time I recall writing about gesture-based interfaces was back in 2007, although the Wii came out in 2006 (hard to believe that was almost a decade ago). The excitement about Soli did surprise me, since Leap Motion technology is available today (version 2.2.6 was released this week) and can perform many of the same kinds of gesture sensing (although it doesn’t have the same range as Soli).

In any case, I think we’ll see a whole new level of experimentation in how computers and humans can interface in a more intuitive fashion – and that’s a great thing.

$9 Linux computer in the wings

The device is called C.H.I.P. and it is part of a Kickstarter campaign to create a Linux-powered computer “built for work, play, and everything in between!” They have already raised $653K of their $50K goal.

At the heart of the C.H.I.P. is a 1GHz Allwinner A13 compatible SoC, with a built-in Mali400 GPU that is compatible with OpenGLES and OpenVG. Backing that up is 512MB of DDR3 RAM and 4GB of flash storage.

For output, it features a single USB port, a micro USB port that supports OTG, composite video output (with VGA and HDMI available via adapters for $10 and $15, respectively), a headphone output, and a microphone input. C.H.I.P. also has built-in 802.11b/g/n Wi-Fi and Bluetooth 4.0 to connect to the Internet or wireless devices.

Their site shows that C.H.I.P. will be powerful enough to run LibreOffice, the Chromium browser, games…

There are also options for an external battery and even a case that provides a 4.3-inch touchscreen and a QWERTY keyboard for a pocket-computer experience.

It will be interesting to see how this low cost platform is used.

Not your father’s SAP

This week’s SapphireNow was eye opening for me. My interactions with SAP were primarily from implementing BW in its early days (1999 V1.2B) and being the CT for the EDS side of relationships with large outsourcing arrangements that used SAP R3.

It was clear just walking around the SAP area that things have changed significantly. There were no SAP GUI screens visible; everything had a clean, modern look. The UI customization demos were both easier to perform and actually within reach of end users, granted they were not doing anything too complex.

Integration options seemed to be more intuitive and actually possible for a range of other systems, supporting bi-directional information flow.

Even the executive dashboard (sorry for the reflection in the picture, but I took it myself) seemed to be something an executive could actually use with relatively minor training. I’ve always been fascinated by executive dashboards! The person I talked with said it is even relatively easy to extend the display using HTML5 techniques.

SAP executive dashboard

I am sure there is still quite a bit of work ahead for SAP to get all the functionality (especially industry-specific functionality) migrated and running at maximum efficiency on S4 HANA, but what was shown was impressive. Likely the first thing any organization contemplating the move needs to do is triage its customizations and extensions. The underlying data structures for S4 HANA are much less redundant, since the in-memory model removes the need for redundancy to achieve performance. The functionality also seems more versatile, so hopefully many of the customizations that organizations ‘just had to have’ can be eliminated.

I’ve always said the first rule of buying 3rd party packages is: “don’t do anything that prevents you from taking the next release”. With the new approach by SAP, those running S4 HANA in the cloud will be getting the next release on a continuous basis. Those with an on-premises approach will be getting it every nine months (or so). So the option of putting off releases is becoming less viable.

I’ll get a post on Diginomica next week with more of an enterprise architect’s perspective.

In-shoring opportunities with automation

I had a long discussion with a serial entrepreneur last week who is looking to define a service offering on the help desk/virtualized meeting/education front. He seems to have a good handle on the business model and the differentiation between what he provides and the other services in the marketplace.

During the discussion, it did remind me a bit about the CNN post about the effect of Silicon Valley’s virtualization and automation efforts on jobs. What was most intriguing about the discussion was the ability to move the skills in demand to underserved parts of the country.

We both grew up in small mid-west towns and feel that the workforce-virtualization techniques he is developing could open up possibilities in areas of the country that are currently underemployed. With the possibilities of human-centered automation, these approaches will be increasingly important. I do question whether today’s HR organizations are ready for this level of innovation.

7 Questions to Help Look Strategically at IoT

There are still many people who view the Internet of Things as focused on ‘the things’ and not the data they provide. Granted, there are definitely some issues with the things themselves, but there are also concerns for the enterprise, like the need to monitor the flow of information coming from these things, especially as we begin to automate the enterprise response to events.

A holistic perspective is needed and these are the top issues I believe an organization needs to think through when digging into their IoT strategy:

  1. What business value do the devices provide – independent of the data they collect?
    Having said that it is not really about the devices, it remains true that the devices should deliver value in themselves – the data may be just a side effect of that role. Understanding those functions will increase the reliability and usefulness of the data over the long haul. No one wants to build an approach around consuming a data stream only to have it dry up.
  2. What access will the devices have to the enterprise?
    Is it bi-directional? If it is, the security risk of the devices is significantly higher than for those that just provide raw data. If a positive feedback loop exists, it needs to be reinforced and secured. If the data flow seems too narrow to justify this level of security, the need for bi-directional information flow should be scrutinized – if the interaction is that valuable, it really needs to be protected. Think about automotive data-bus attacks, as an example.
  3. If attacked, how can the devices be updated?
    Do the devices support dynamic software updates and additions? If so, how can those be delivered, and by whom? Users of devices may download applications that contain malware, since it can be disguised as a game, security patch, utility, or other useful application. It is difficult for most to tell the difference between a legitimate application and one containing malware. For example, an application could be repackaged with malware, and a consumer could inadvertently download it onto a device that is part of your IoT environment. Not all IoT devices are limited SCADA solutions; they may be smartphones, TVs… pretty much anything in our environment in the future.
  4. How will the data provided be monitored?
    Wireless data can be easily intercepted. When a wireless transmission is not encrypted, data can be easily intercepted by eavesdroppers, who may gain unauthorized access to sensitive information or derived behaviors. The same may be true of even a wired connection. Understanding the frequency of updates and shifts in data provided is usually an essential part of IoT’s value, and it should be part of the security approach as well.
  5. Can any personal or enterprise contextual information leak from the device connection?
    I blogged a while back about the issue of passive oversharing. As we enable more devices to provide information, we need to understand how that data flow can inadvertently build a contextual understanding about the business or the personnel and their behavior for other than the intended use.
  6. Is the device’s role in collecting information well-known and understood?
    No one likes the thought of ‘big brother’ looking over their shoulder. People can easily feel offended or manipulated if a device enters their work environment and provides data they feel is ‘about them’ without their knowing it is taking place. A solid communications plan that keeps up with changes in how the data is used will be a good investment.
  7. Who are all the entities that consume this data?
    As IoT data is used to provide a deeper contextual understanding of the environment, that understanding may be shared with suppliers, partners, and customers. These data flows need to be understood and tracked, like any consumer relationship; otherwise they may easily turn into a string of dominoes, producing unexpected shifts in results as they change. Awareness of enterprise context management will grow in importance over the coming years – note that was not content management but context management.
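The seven questions above lend themselves to being captured as structured metadata per device class, so the answers can be reviewed systematically rather than rediscovered per project. The sketch below is purely illustrative – the field names and flag rules are my own, not from any particular IoT platform:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: recording the seven strategic questions as
# per-device metadata in an IoT inventory, then deriving review flags.

@dataclass
class IoTDeviceProfile:
    name: str
    business_value: str            # Q1: value independent of the data
    bidirectional: bool            # Q2: enterprise access direction
    update_mechanism: str          # Q3: how patches are delivered
    transport_encrypted: bool      # Q4: is the data flow encrypted?
    context_leak_risk: str         # Q5: passive-oversharing exposure
    collection_disclosed: bool     # Q6: is monitoring communicated?
    data_consumers: list = field(default_factory=list)  # Q7

def review_flags(profile):
    """Return the review items the questions above would raise."""
    flags = []
    if profile.bidirectional:
        flags.append("bi-directional access: needs hardened security")
    if not profile.transport_encrypted:
        flags.append("unencrypted transport: interception risk")
    if not profile.collection_disclosed:
        flags.append("undisclosed collection: communications plan needed")
    return flags

# Illustrative device entry.
sensor = IoTDeviceProfile(
    name="warehouse temperature sensor",
    business_value="HVAC control",
    bidirectional=False,
    update_mechanism="signed firmware over Wi-Fi",
    transport_encrypted=False,
    context_leak_risk="occupancy patterns",
    collection_disclosed=True,
)
print(review_flags(sensor))  # flags the unencrypted transport
```

The value of a registry like this is that the security and communications questions get asked once per device class, up front, instead of after an event-driven automation is already consuming the stream.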

All these issues are common to IT systems, but with an IoT deployment, the normal IT organization may only be able to influence how they are addressed.