Comments, administrivia, and the future of the “infosec professional”

Back when the spam was spiraling out of control, I configured my blog to close comments after 90 days. I’ve removed that limitation now, for two reasons: the spam is under control, and I wanted to reply to a comment on my post about IPsec/IPv6 direct connect.

On 13 August, jcorey asked how to deal with those who firmly believe that the only answer to any security problem is to inspect everything at the edge. This is an important question, and I wanted to give Joe an answer. (You might have to scroll down when you click the previous link; it seems that linking to individual comments is broken.)

Today, 15 October, I wrote a little thesis as an answer to his question. I’m calling it out in a separate post because I want to make sure those of you with aggregators that don’t update when posts receive new comments still have a chance to reply with your thoughts. I’ll also repost it here:

jcorey-- You've nailed the biggest obstacle to deploying something like direct connect. Many security professionals have been taught that there simply is not, and never will be, a process or technology that allows you to trust anything that originates from outside your corpnet. These professionals cling to this belief, and it's this belief that has allowed the whole “detection” market to bloom.

Let me be clear: this total lack of trustworthiness is no longer absolute. Of course there will be times when unknown machines are used by known and unknown people to access your information. But for one particular subset -- known humans, with known portable computers -- can't we do something better than treat them as toxic invaders?

Indeed we can. And that's what I'm proposing with direct connect. The technology -- managed, of course, with the right processes -- exists so that you can extend the trust to known computers even though you don't trust the network they're connected to. This is because you have mechanisms that:

1. Allow you to configure the machine according to your requirements (domain join, group policy)

2. Dictate computer and user authentication requirements (IPsec policies, smart cards)

3. Limit what the users of these machines can do (UAC, non-admin, Forefront Client Security, Windows Firewall, even software restriction policies)

4. Validate the health of machines initiating incoming connections and remediate if necessary (NAP, System Center Configuration Manager)

5. Limit the threat of attacks against stolen computers (domain logon, smart cards, BitLocker with TPM)
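The five mechanisms above combine into a single admission decision. Here is a minimal, purely illustrative sketch of that logic -- the class, the health-requirement names, and the `admit` function are all hypothetical stand-ins for what IPsec policy, smart cards, and NAP-style health validation actually enforce, not a real API:

```python
# Illustrative sketch of the direct-connect admission decision described
# in items 1-5. All names here are hypothetical, not a real interface.

from dataclasses import dataclass


@dataclass
class Client:
    machine_authenticated: bool  # e.g. IPsec machine credentials (item 2)
    user_authenticated: bool     # e.g. smart card / domain logon (items 2, 5)
    health_claims: dict          # e.g. reported by a NAP-style agent (item 4)


# Health requirements the organization dictates (item 4).
REQUIRED_HEALTH = {
    "firewall_enabled": True,
    "antimalware_current": True,
    "patches_current": True,
}


def admit(client: Client) -> str:
    """Return 'allow', 'remediate', or 'deny' for an incoming connection."""
    if not (client.machine_authenticated and client.user_authenticated):
        return "deny"  # unknown machine or unknown human: no trust extended
    unmet = [name for name, required in REQUIRED_HEALTH.items()
             if client.health_claims.get(name) != required]
    if unmet:
        return "remediate"  # known client, but out of policy: fix it first
    return "allow"  # known, configured, and healthy


if __name__ == "__main__":
    healthy = Client(True, True, dict(REQUIRED_HEALTH))
    stale = Client(True, True, {**REQUIRED_HEALTH, "patches_current": False})
    stranger = Client(False, True, dict(REQUIRED_HEALTH))
    print(admit(healthy), admit(stale), admit(stranger))
```

The point of the sketch is the ordering: identity first, then health, with remediation rather than outright rejection for a known-but-unhealthy client. That's the difference between extending trust to known computers and treating everything outside the edge as toxic.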

With the robust authentication, validation, configuration, and control mechanisms available to you, I simply don't see that there's any need to fall back to “detection” now. Detection technologies were -- and remain -- necessary for the times when we have no clue about the health of client computers and no way to gauge the intent of the users. But it truly reflects a head-in-the-sand mentality to assume that this describes everything that's possible today.

You know, someone once asked me what it takes to be a security professional. I answered that there are two primary elements: become a networking/packet wonk, and be willing to change your opinions when the right evidence comes along. Indeed, I suspect that many security folk have forgotten the need to keep their wonkiness updated, which in turn makes them resist new ideas regardless of the strength of the evidence. I'm not very proud of what I just wrote, because I loathe generalities, but I'm not sure what else to think here. Sigh.

Joe’s question is important and strikes at the foundation of what it means to be a security professional today. I’m eager to continue this conversation, because it’s reflective of what I sense to be a radical shift in our jobs—we are, or should be, no longer the wolf-crying propeller-head who sits in the basement and twiddles with the firewall. Instead, our job should be defined as one who’s charged with protecting the organization’s information from attack, while maximizing its utility to authorized users, according to the principle of least privilege. Your thoughts?

Comments

  • Anonymous
    January 01, 2003
    MikeS-- true, direct connect works best when the clients are Windows. However, we can still support heterogeneous environments with third-party support for NAP and group policy provided by partner-created add-ons. And for those instances like you mentioned -- you simply can't make configuration decisions about computers you don't own -- there's always Terminal Server. In the next security newsletter, I'll have an article that covers this briefly. Mikko-- again, I'm not discounting Terminal Server; indeed, it's a critical part of the complete design. In the detailed documentation I intend to start writing later, I will include that.

  • Anonymous
    January 01, 2003
    Marta-- I'm sorry, but there's not a whole lot I can do to help you here. How do you know your passwords have been stolen? What evidence can you describe that supports this? What harassment are you receiving? Perhaps the best thing to do is simply close those accounts down. Log into them, change their passwords, log out, and never check them again. Eventually (I think after 90 days) they will deactivate themselves.

  • Anonymous
    January 01, 2003
    gbromage-- No, the validation isn't against threats, but rather it's validating that the computer is configured the way you want before a connection is made. NAP gives you some of this, SCCM is more thorough (mostly through inventorying). No configuration can completely protect you against zero-day exploits and rootkits. Most of these have to run as local admin to spread beyond themselves; that's why it's important that people run as standard user and that UAC remain enabled.

  • Anonymous
    January 01, 2003
    Thanks much ... And yeah, I know, the blog spam is getting bad again. I can't believe that there's any kind of economic gain from it, I just don't get it.

  • Anonymous
    October 16, 2008
    Totally agree on the last paragraph. Add to it: based on sound strategies, policies, and procedures supported by the business (i.e., C-level), we are no longer part of a break/fix department living in a place with no sign of daylight, but an integral business unit. CB


  • Anonymous
    October 19, 2008
    I gave up worrying about the device and network several years ago. Today I worry about only two things.

  1. Who are you, and what should you be allowed to access?
  2. How do I manage the bandwidth?

    Devices don't access data; the most they can damage is point 2. Bandwidth impacts are irritating, but not critical long term. Data loss is the problem, and it's device- and network-agnostic. Of course, there is the problem of sensitive data accessed on an untrusted device, and that device using those credentials or storing that data. I haven't seen ANY good answers for this space yet. Paul


  • Anonymous
    October 22, 2008
    Mr. Steve, I have to apologize for using this way of communication with you, but after hours and hours of searching the Web for technical support in order to get help and solutions to my problem, I bumped into your blog, which I found very interesting. I am facing a security problem, where somebody has stolen passwords for old Hotmail accounts of mine and is using them to harass me and harm me in many ways. I can't seem to find answers anywhere and don't know how to stop it. Would you be so kind as to help me with this problem if you could? Thank you very much. Best regards.

  • Anonymous
    November 09, 2008
    Steve, I did enjoy your presentation at TechEd EMEA in Barcelona on this. I still have a concern over the portion "Validate the health of machines initiating incoming connections and remediate if necessary." Validating the health would mean validating against known threats, surely? It's the unknown ones that concern me more. I would think that there is a risk that people might make a basic assumption that a "trusted" client is automatically trustworthy. A zero-day exploit would not be picked up by a health certificate, and once the client is compromised, that negates the BitLocker and client-side firewall mitigations. Further, if a client was root-kitted, it would not be detected by server-side validation, because how would the server tell, if the client (at the kernel level) is unaware that it has been compromised? This could lead to an administrator assuming a trusted client is safe (for a given definition of "safe") and exposing more information than they should. It might be better to still consider these clients as manageable, but inherently untrusted. I do realise that this is more of an implementation/assumption risk than an inherent design flaw, but it does still need to be considered. P.S. - there's comment spam above.....

  • Anonymous
    November 20, 2008
    What the mind doesn't know, the eyes can't see. Corollary: If you don't know what attack surface you're exposed to, how can you adequately defend yourself? Further: If your people don't know what they're protecting your organization against, they can't adequately protect you. In other words: ... Know what I mean? :-)

  • Anonymous
    November 21, 2008
    You are doing a great job. Thank you. P.S. Are you checking your comments? There are several ad comments posted in the last two days.
