Cyber Observer – a powerful security dashboard that reminded me of an earlier time

The other day I received a note on LinkedIn from an individual I worked with back at EDS. He mentioned that the company he currently works for is focused on security. Since security needs to be at the top of the list of concerns at all levels of organizations today, I thought I'd take a deeper look.

The software is called Cyber Observer (they have a fairly effective marketing overview video on their site). Though this solution is focused on enterprise security monitoring, it reminded me of the data center monitoring programs that came out in the late 80s and 90s, which provided status dashboards and information focused on reducing time to action for system events. CA Unicenter was one popular example.

Back in the late 80s I had system administration leadership over the largest VAX data center that GM had. We had hundreds of VAXen, PDPs and HP 1000s of all sizes scattered over nine or ten plants. Keeping them all running required significant insight into what was going on at a moment's notice.

Fortunately, today folks can use the cloud for many of the types of systems we had to monitor, and the hardware monitoring is outsourced to the cloud providers. Plant floor systems, though, are still an area that needs to be monitored.

One of the issues we had keeping hundreds of machines running was that the flood of minor issues being logged and reported can easily lead to 'alert fatigue'. Those responsible can lose the big picture (chicken little syndrome). Back then, we put a DECtalk in our admin area; when something really serious happened, it yelled at us until it was fixed. We thought that was pretty advanced for its time.

I asked how Cyber Observer handled this information-overload concern, since the software is primarily targeted at leaders and executives, and we all know the attention span of most managers for technical issues. I also asked about a proactive (use of honeypots) vs. a reactive approach for the software. Now that both soft honeypots (HoneyD, among others) and hard honeypots (Canary) are relatively easy to access, they should be part of any large organization's approach to security.

He explained that the alert and dashboarding system was very tunable at both the organizational and individual level.

Although it has more of a dashboard approach to sharing the information, details are available to show ‘why’ the concern reached the appropriate level.

An example he gave me was a new domain administrator being added in Active Directory. The score next to the account management domain would go down and show red. When the user drills down, the alert would state that a new domain admin was added. The system's score would be reduced, and eventually the baseline would adjust to the change, although the score would remain lower. The administrative user would have to either manually change the threshold or remove the new domain admin (if it is rogue or unapproved); only then would the score go back to its previous number (assuming no other events took place).

Some threshold tolerances come preset out of the box based on expected values (for example, whether the NAC is in protect mode rather than alert mode, or whether Active Directory password complexity is turned on). Other thresholds are organizationally dependent, and the user needs to set them appropriately, as with the number of domain admins.
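
As a purely hypothetical illustration (not Cyber Observer's actual implementation), the threshold logic he described might look something like this, where a control's score drops once an observed value exceeds its organizationally set threshold:

```python
def control_score(observed: int, threshold: int, penalty: int = 20) -> int:
    """Return a 0-100 score; each unit over the threshold costs `penalty` points."""
    over = max(0, observed - threshold)
    return max(0, 100 - penalty * over)

# The organization expects at most 3 domain admins; a fourth is added.
assert control_score(observed=3, threshold=3) == 100  # green
assert control_score(observed=4, threshold=3) == 80   # score drops, dashboard shows red
```

Until an administrator raises the threshold or removes the extra admin, the score stays depressed, which matches the behavior he described.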

He also mentioned that if the system were connected to a honeypot, it could use that honeypot's information to adjust the level of concern based on shifts in the 'background radiation' of probe traffic.

I don't know much about this market and who the competitors are, but the software looked like a powerful tool that can be added to take latency out of the organizational response to this critical area. As machine learning techniques improve, the capabilities in this space should increase, recognizing anomalies more effectively over time. I was also not able to dig into the IoT capabilities, which are a whole other level of information flow and concern.

The organization has a blog covering their efforts, but I would have expected more content, since there hasn't been a post this year.

National Cyber Strategy of the United States of America

In case you've not heard about it, the White House released the PDF – National Cyber Strategy of the United States of America.

I've not read through the whole thing yet, but the intro starts out with:

America’s prosperity and security depend on how we respond to the opportunities and challenges in cyberspace. Critical infrastructure, national defense, and the daily lives of Americans rely on computer-driven and interconnected information technologies. As all facets of American life have become more dependent on a secure cyberspace, new vulnerabilities have been revealed and new threats continue to emerge.

Looks like a document worth understanding.

It defines four pillars for a national approach to cyber-security:

  1. Protect the American People, the Homeland, and the American Way of Life
  2. Promote American Prosperity
  3. Preserve Peace through Strength
  4. Advance American Influence

It will be interesting to see how the impacts of actions along these lines will be measured and felt; it is something technologists should watch.

Symantec Security Report

About a month ago, I wrote a post about a new Cisco security report that totally missed the concept of cyber mining and its impact on home and server devices.

I just had a chance to look at Symantec's annual security report, and it went overboard the other way, quoting statistics like an 8,500% increase in coinmining. That is the law of small numbers providing headlines, since coinmining was in its infancy a year ago.

Other than that little bit of histrionics, the report more effectively covered the concerns I've seen over the last year, with significantly greater software supply chain attacks and mobile malware incidents (their number is up by 54%).

I thought the report well worth reviewing.

Was something missing from the Cisco Annual Cybersecurity Report?

According to Cisco's 2018 Annual Cybersecurity Report:

  • “Burst attacks” or short DDoS attacks affect 42% of the organizations studied
  • Insider threats are still a huge issue
  • More Operational Technology and IoT attacks are coming
  • Hosting in the cloud has a side benefit of greater security
  • Nearly half of security risks come from having multivendor environments
  • New domains tied to SPAM campaigns

Many of these findings seem like common sense, or in some ways in Cisco's interest, at first glance, but this 60+ page report goes into much greater detail than these one-liners. It breaks down the analysis by region and time, and concludes this about the difficulties of cyber defense:

“One reason defenders struggle to rise above the chaos of war with attackers, and truly see and understand what’s happening in the threat landscape, is the sheer volume of potentially malicious traffic they face. Our research shows that the volume of total events seen by Cisco cloud-based endpoint security products increased fourfold from January 2016 through October 2017”

The breadth and volume of attacks can overwhelm any organization and it is not a case of ‘if’ but ‘when’.

One thing I didn't see mentioned at all was cryptojacking, the unapproved leveraging of processing cycles for mining cryptocurrency. This form of cybersecurity risk affects large entities as well as individuals through the websites they access. Generally, this is less destructive than the previous cyber attack methods and may even be seen as an alternative to advertisements on sites, but it seemed odd to me that this rapidly advancing trend wasn't mentioned.

The report is still worth looking over.

NIST standards draft for IoT Security

The draft version of NIST's "Interagency Report on Status of International Cybersecurity Standardization for the Internet of Things (IoT)" was released this week and is targeted at helping policymakers, managers and standards organizations develop and standardize IoT components, systems and services.

The abstract of this 187-page document states: "On April 25, 2017, the IICS WG established an Internet of Things (IoT) Task Group to determine the current state of international cybersecurity standards development for IoT. This Report is intended for use by the IICS WG member agencies to assist them in their standards planning and to help to coordinate U.S. government participation in international cybersecurity standardization for IoT. Other organizations may also find this useful in their planning."

The main portion of the document is in the first 55 pages, with a much larger set of annex sections covering definitions, a maturity model, standards mappings and more, which will likely be of great interest to those strategizing on IoT.

The document is a great starting point for organizations wanting an independent injection of IoT security perspectives, concerns and approaches. My concern, though, is the static nature of a document like this. Clearly, this information technology area is undergoing constant change, and the document will likely seem quaint to some very quickly, yet be referenced by others for a long time to come. A wiki version might make this a more useful, living document.

Comments on the draft are due by April 18. Reviewers are encouraged to use the comment template, and NIST will post comments online as they are received.

Hosts file for your protection

With the recent rash of security concerns (across all platforms), I was looking into what can be done to route at least some of the nefarious traffic to the bit bucket. So I thought I'd write a brief post about the effort.

Most people are aware that DNS servers change more user-friendly internet addresses like yourbusiness.com into an IP address that computers can work with more effectively (e.g., 192.x.x.x). We can use this process to provide a bit more safety.

There are two simple ways you can try to subvert addresses pointing to bad locations. One is to use a domain name server that knows about bad services and provides a safe place to route the traffic.

IBM recently announced its Quad9 (9.9.9.9) DNS server. The Global Cyber Alliance (GCA) has partnered with IBM and Packet Clearing House to launch this free public DNS service. It is intended to block traffic to domains associated with botnets, phishing attacks, and other malicious hosts, and it continues to be updated as new poorly behaving addresses are discovered.

The other technique is to place entries in the hosts file on your machines. The hosts file actually gets the first shot at interpreting addresses. Several organizations maintain downloadable hosts files containing known ad servers, banner sites, sites that set tracking cookies, contain web bugs, or infect you with hijackers.
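
For illustration, blocking entries in a hosts file simply point unwanted hostnames at a non-routable address (the hostnames below are made-up examples, not real trackers):

```
# Send known ad/tracker hostnames to the bit bucket.
# 0.0.0.0 fails fast; 127.0.0.1 would attempt a local connection first.
0.0.0.0  ads.example-tracker.com
0.0.0.0  beacon.example-analytics.net
```

On Windows the file lives at C:\Windows\System32\drivers\etc\hosts; on Linux and macOS it is /etc/hosts.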

Lifehacker had an article about modifying your local hosts file that is still valid and may be worth looking at if you're thinking about adding this level of protection.

This all came to mind over the last few weeks, since Steve Gibson’s Security Now! podcast mentioned some new user tracking software that can be easily thwarted with a few hosts file entries.

 

Could Blockchain be at the center of IoT security?

Blockchain can be used for many things. The technology has the potential to reduce costs, improve product offerings and increase speed for banks, according to a recent report from the Euro Banking Association (EBA). If you'd like a nice overview of blockchains and bitcoin, there's one on Khan Academy.

Blockchains can be used to keep track of transfers and to ensure that the data collected has gone through a verification process. One of their key properties is that the blockchain is a globally distributed database that anyone can add to, but whose history no one can modify.

This feature could be very valuable for IoT applications where data is coming in that you would like to both verify and keep for predictive analytics. IBM has been looking at this for a while, since one security concern has been that nefarious data sources could either modify the incoming data or change the data history; blockchain techniques could make that almost impossible.

One of the issues when you have an abundance of data coming into the enterprise is that the length of the chain can expand to the point where maintaining it costs more than the data is worth, so the processing of the chain would probably need to happen outside the IoT sensors/devices themselves. The devices would still need their own private/public keys, though, if validation goes all the way to the edge.

A simple way to think of the blockchain for data transactions…

[Diagram: a chain of blocks, each linked to its predecessor by that block's hash]

Where each block likely contains:

  • A timestamp
  • The hash of the previous block as a reference (except the Genesis Block)
  • A pointer to the data transactions hash
  • The block’s own hash
  • The Merkle Root – a hash of all the hashes in the block

That is quite a bit of security machinery, but where it is needed, it should be sufficient…
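
As a rough sketch (hypothetical code, not any particular product's implementation), a block with the fields listed above can be built in a few lines of Python:

```python
import hashlib
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(tx_hashes: list[str]) -> str:
    """Pairwise-hash the transaction hashes up to a single root."""
    level = list(tx_hashes) or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash: str, tx_hashes: list[str]) -> dict:
    block = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,                 # reference to the previous block
        "merkle_root": merkle_root(tx_hashes),  # hash of all the transaction hashes
    }
    block["hash"] = sha256(repr(sorted(block.items())).encode())
    return block

# A two-block chain: the genesis block has no real predecessor.
genesis = make_block("0" * 64, [sha256(b"sensor reading 1")])
block2 = make_block(genesis["hash"], [sha256(b"sensor reading 2")])
assert block2["prev_hash"] == genesis["hash"]
```

Because each block's hash covers the previous block's hash, changing any historical data (say, an old sensor reading) changes that block's hash and breaks every link after it, which is what makes tampering with the history detectable.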

In a security breach, the perspective of who's responsible is shifting…

The implications of boards holding Chief Executive Officers accountable for breaches will be something to watch. A recent survey of 200 public companies shows that corporate boards are now concerned about cybersecurity and willing to hold top executives accountable.

Since the board (and the CEO that they put in place) is ultimately responsible for the results of the company, making the CEO responsible shouldn't be a surprise. A security breach is a business risk, not just a "technical issue," so it should be treated in a similar fashion. There are roles like the CISO, CIO, and CRO that may support the CEO in steering the ship, but if the organization runs aground, the highest levels of corporate leadership need to be held accountable, just as they are rewarded for improved corporate performance. Neither scenario is accomplished by the CEO alone.

A data breach can impact customer confidence, stock price, and the company’s reputation for a long time and those are not “technical issues.” Unfortunately, it is not a matter of “if” but “when” a security incident will occur so a formal effort must be expended to anticipate, detect, develop contingency plans to limit, and correct the situation when it occurs, as quickly and effectively as possible, reducing the impact on the customers as well as the organization itself.

That is likely one reason why there is an abundance of openings in the security space in job postings today.

Measuring the value and impact of cloud probably hasn’t changed that much over the years but…

I was in a discussion today with a number of technologists when someone asked, "How should we measure the effectiveness of cloud?" One individual brought up a recent post they'd done titled 8 Simple Metrics to Track Your Cloud Success. It was good but a bit too IT-centric for me.

That made me look up a post I wrote on cloud adoption back in 2009. I was pleased that it held up so well, since the cloud area has changed significantly over the years. What do you think? At that time I was really interested in the concept of leading and lagging indicators, and the idea that you need both perspectives as part of your metrics strategy to really know how progress is being made.

Looking at this metrics issue made me think “What has changed?” and “How should we think about (and measure) cloud capabilities differently?”

One area I didn't think about back then was security. Cloud has enabled some significant innovation on both the positive and the negative sides of security. We were fairly naive about security issues back then; most organizations apply much greater mind-share to security and privacy issues today – I hope!

Our discussion did make me wonder what will replace cloud in our future, or whether we will just rename some foundational element of it – timesharing, anyone?

One thing I hope everyone agrees on, though: it is not IT that declares success or defines the value; it remains the business.

Security certificate maintenance – there must be a better way

Over the last few years, I've seen numerous instances where well-maintained systems run by organizations with good operational records have fallen over because of security certificate expiration.

Just last week, Google Mail went down for a significant time when their security key chain broke (note Google's use of SHA-1 internally, but that's a whole other issue). Gmail is a solution that is core to an increasing percentage of the population, schools and businesses. Most people likely believe that Google's operations are well run and world class, yet they stumbled in the same way I've seen many others stumble before.

Organizations need a reliable and rigorous approach to tracking their certificate chains, one that proactively warns them before certificates expire, since it can take hours to repair a chain once it breaks. There are many critical tasks that come with certificate management, and ignoring or mishandling any one of them can set the stage for web application exploits or system downtime.
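
A minimal sketch of such a proactive check is shown below in Python, using the standard library's ssl module; the hostname in the comment is just an example, and the 30-day warning window is an assumption an organization would tune:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Parse a certificate's notAfter field (e.g. 'Jun  1 12:00:00 2030 GMT')
    and return the number of days until it expires (negative if already expired)."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def check_host(hostname: str, port: int = 443, warn_days: int = 30) -> None:
    """Fetch the certificate a server presents and warn if it expires soon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    days = days_until_expiry(cert["notAfter"])
    status = "WARN" if days < warn_days else "ok"
    print(f"{status}: {hostname} certificate expires in {days:.0f} days")

# Example: check_host("www.example.com")
```

Run against an inventory of hosts on a schedule, something this simple already gives the early warning that would have prevented the outages described above, though it only sees the leaf certificate each server presents, not every certificate in the organization.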

These certificates (which contain the keys) are the cornerstone of the organization's cryptography-based defense. As the market-facing application portfolio of an organization expands, the number of certificates will also expand, and the key chains can get longer with more convoluted interrelationships as well (especially if not planned and just allowed to evolve). Additionally, the suite of certificate products from vendors can be confusing. There are different levels of validation offered, along with numerous hash types, key lengths and warranties (which actually protect the end users, not the certificate owner). It can be difficult to know what type of certificate is required for a particular application.

CSS-Security put out this high-level video about certificates and why they’re blooming in organizations (there is an ad at the end of the video about their product to help with certificate management).

Most companies still manage their certificates via a spreadsheet or some other manual process. That may be fine when you’re just getting started but it can quickly spiral out of control and addressing the problem may involve costs that are just not understood.

There are products and approaches for enterprise certificate management. Automation tools can search a network and collect information on all discovered certificates. They can assign certificates to systems and owners and manage automated renewal. These products can also check that a certificate was deployed correctly, to avoid using an old certificate. Automated tools are only part of the answer, though, and will still require some manual intervention.

When purchasing one of these certificate management tools, ensure that the software can manage certificates from all CAs, since some will only manage certificates issued from a particular CA.