OLPC experience advice for your project


Regular readers of this blog know that I’m a huge fan of the One Laptop Per Child (OLPC) project and the XO laptop. A previous OLPC-related post may be found here. As a result I follow the OLPC News blog, which recently featured this great article by 16-year-old Derek Chan on his experience with a small-scale OLPC implementation in Kenya.

My name is Derek Chan, I’m 16 years old, and I was part of Mark Battley’s team of high school students from Upper Canada College that initiated a small scale OLPC implementation at the Ntugi Day Secondary School.
Part of our goal was to provide Ntugi with power for their initial complement of 8 XOs and 2 Cradlepoint PHS300s at a school that had no access to the country’s power grid.

In addition to being a very well written piece about an extremely fascinating project, Derek enumerates some lessons learned that are directly applicable to any infrastructure and integration project, especially security infrastructure projects like, say, a Network Access Control (NAC) or Enterprise Single Sign-On (SSO) project. Just replace the word “school” with “enterprise” or “business”.

Ultimately, we were successful, but not without missteps and failures along the way. We did lots of things right, but we made a few newbie errors. Here’s what we learned!

  1. Learn as much as you can about your destination school’s physical resources.
  2. Don’t assume that tests in the lab will duplicate conditions in the field.
  3. Read all the relevant blogs, forums and bulletin boards before implementing.
  4. Don’t underestimate the sophistication of local technology and expertise at your destination.

 

Let’s think about each of these in turn, much as Derek did in his post.

Learn as much as you can about your destination’s physical resources.
Who hasn’t heard the horror stories from the installation team that just tried to add “one more appliance” to the customer’s data center, only to find out that the power or cooling or rack space just wasn’t there? Always verify ahead of implementation that the destination has all of the physical resources required by your hardware, all of the compute resources required by your software, and all of the network resources, including IP address space, required to connect it all together. An actual visit to the site by your Systems Engineers is a really great idea. Never assume that the destination is a “typical” configuration or that the customer knows the difference.
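As a back-of-the-napkin illustration, that site survey boils down to comparing what the gear needs against what the destination actually has. Everything here – the resource names and the numbers – is invented for illustration, not pulled from any real appliance datasheet:

```python
# Hypothetical pre-flight checklist: compare appliance requirements
# against the site survey. All resource names and figures are made up.

REQUIRED = {"rack_units": 2, "watts": 450, "cooling_btu_hr": 1600, "free_ips": 3}

def site_shortfalls(survey: dict) -> dict:
    """Return each resource where the site falls short, with the deficit."""
    return {
        resource: needed - survey.get(resource, 0)
        for resource, needed in REQUIRED.items()
        if survey.get(resource, 0) < needed
    }

# A survey that looks fine on power and cooling but is short elsewhere:
survey = {"rack_units": 1, "watts": 600, "cooling_btu_hr": 1600, "free_ips": 0}
print(site_shortfalls(survey))  # {'rack_units': 1, 'free_ips': 3}
```

If that dictionary comes back non-empty, the truck doesn’t roll.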

Don’t assume that tests in the lab will duplicate conditions in the field.
Boy Howdy! This assumption ranks right up there with “no customer would ever do that” as a surefire path to failure. The point is that the lab, by definition, is an artificial environment. Sure, our QA engineers do the best job they can to simulate a real-world environment, but the key word here is simulate. It’s pretty hard to simulate things like network latencies or ATM noise in the lab. Remember, your lab techs are good, not god. What a difference that “o” makes.

Read all the relevant blogs, forums and bulletin boards before implementing.
Not that this has ever happened to me, mind you, but I’ve heard of engineers who actually believe the promo literature and design the system around it, assuming that all the details are handled. I mean, how much difference can there be between Server 2K3 and Server 2K3 R2? Yeah. Just do the homework. That’s called “due diligence” in business speak.

Don’t underestimate the sophistication of local technology and expertise at your destination.
As engineers we always like to think we’re way smarter than the mere mortals we tolerate in our presence. But never fool yourself into believing that you can understand the ins and outs of a customer’s infrastructure as well as they do. You may think they are yokels, but they are yokels with way more relevant experience than you. And they are the ones who control your payday. Just suck it up and let them make it easier (or possible) for the project to succeed.

So there you have it. Excellent advice from a 16-year-old who has already learned some important lessons. Well done Derek.

Security For All First Birthday: Revisiting Forrester and NAP

By a fairly large margin the most popular and contentious post in the first year of Security For All [if you discount one entitled Prophecy for 2009, which got tons of hits, I suspect by mistake, due to the clever title] was the September 24, 2008 post entitled I so want to be a Forrester analyst, wherein this report on the state of Network Access Control (NAC) by Forrester pegged the old BS-O-Meter.

In Forrester’s 73-criteria evaluation of network access control (NAC) vendors, we found that Microsoft, Cisco Systems, Bradford Networks, and Juniper Networks lead the pack because of their strong enforcement and policy. Microsoft’s NAP technology is a relative newcomer, but has become the de facto standard and pushes NAC into its near-ubiquitous Windows Server customer base.

I responded with the following assertions.

Until all enterprises make the switch to Windows Server 2008, there is no real NAP install base.

As of now there is one, count ‘em, one SHA/SHV set provided to the “near-ubiquitous Windows Server customer base“. And guess who provides it (hint – they build a well known OS). So if your endpoint policies require only the Microsoft Security Center stuff and all of your endpoints are Windows XP SP3 or Vista Business+ and your servers are Windows Server 2008 you are golden! Both of you.

There was feedback. Todd from Napera responded thusly.

Thanks for the mention of Napera Joe. I wanted to clarify a couple of points from your posting specific to Napera rather than the Forrester analysis per se.
A Napera deployment does not require Windows Server 2008. As stated clearly in the blog post you linked to – our solution is self contained – we licensed the NAP protocols directly from Microsoft and we speak directly to the NAP agent. This removes the requirement for customers to upgrade to Server 2008 to deploy NAP. In fact, we don’t require changes to any server infrastructure (DHCP, AD etc) to deploy NAP. Just last week a brand new user told me they were checking health on PC’s within ten minutes of deploying Napera.
Also, NAP does not require Vista Business – just Vista.

There are several SHA/SHV’s shipping today beyond the Microsoft WSHA in XP/Vista you mention. Microsoft Forefront Client Security, McAfee, Symantec, Blue Ridge and Avenda are some that come to mind.
Apple has yet to commit to releasing a TNC based agent for Mac. Our Napera health agent for Mac OS X has similar functionality to the Windows NAP agent, but isn’t based on NAP or TNC protocols per se. The Napera agent could easily be made TNC compatible if that option presents itself in the future, and provides a great solution in the interim.

There were several exchanges of ideas and the following conclusion was reached with respect to Napera’s product and Microsoft’s NAP.

The Napera solution doesn’t require NPS since that’s a component of Windows Server 2008. It is a third party NAP Network Policy Server (or TNC Policy Decision Point) that uses the MS enforcement mechanisms.

Additional information was provided by Joe Davies, Senior Program Manager of the NAP Team at Microsoft.

Just wanted you to know that there are seven additional SHA/SHVs that are available from third-party vendors and two additional SHA/SHVs that are available from Microsoft for System Center Configuration Manager and Forefront Client Security.

So what has changed in the state of NAC and NAP in the year following the infamous Forrester report? Well, for one thing, no one (at least no one sane) proclaimed 2009 as the Year of NAC. Which was a good thing. But were we to give credence to the Forrester report, we might expect that NAP or NAP-based solutions would be dominating the NAC market by now. Well, guess what didn’t happen. That’s not to say that NAP development has ceased. In fact there are now eight additional SHA/SHVs available from third-party vendors – including an offering from Korea’s UNETsystem that reportedly brings NAP to Linux and Mac OS X – and three additional SHA/SHVs available from Microsoft. As far as I can tell, the market penetration and predicted dominance failed to occur primarily because enterprises stayed away from Vista in droves. Partly because of the crippled economy but mostly because, well, Vista sucks. And actually useful NAC systems – yes, this includes NAP – are not trivial to design, deploy and maintain. Furthermore, the adoption of Windows Server 2008 has been somewhat less successful than some had predicted. All of which conspires to make the analysis of the Forrester report even more amusing now than it was 12 months ago.

The really significant change in the NAC landscape during the last year is actually systemic to the information security business – the move to security as a service and managed security services. Yep – information security is moving into the cloud. Since NAC is definitely one of the trickier services to move into said cloud, we’re only now beginning to see it happen. StillSecure acquired ProtectPoint and now offers managed security services based on several StillSecure products. It’s a safe bet that their Safe Access NAC product has got to be near the top of Alan’s “cloud it” list. Napera announced a beta program in July for a new online service, codenamed Cobalt, that “will give you an advanced look at your network and the state of every computer connected to a compatible switch.”

Oh yeah, and Microsoft announced a free consumer security offering codenamed Morro that directly competes with three of the eight third-party vendors who have those NAP SHA/SHVs. Wonder how that’s working out.

And I still so want to be a Forrester analyst.

7 Lessons SMBs can learn from big IT redux


David Strom has an interesting article in Network World about 7 Lessons That SMBs Can Learn from Big IT. It’s basically sound and definitely worth checking out. But there are some important gotchas and caveats that didn’t make the cut. So I thought I’d just stuff in a few extra ideas and warnings into the list.

1. Standardize on Desktops and Cell Phones to Reduce Support Differences

This is really a great idea, and you will definitely save money, pain and suffering by standardizing your hardware and software. This would work really swell in an ideal world where you started from zero – with no existing “legacy” equipment or software and were able to bring everything in completely new. Problem is, not only do you have legacy hardware and software, you also can’t afford to refresh every desktop or cell phone simultaneously. So what you are forced to do is review your standards continuously and develop a “refresh path plan” that takes into consideration that different departments (or users) have completely different refresh schedules. For example, you need to refresh engineering every year, but accounting can probably refresh every 3 years. This also leads to some gnarly incompatibilities with different versions of software. A notorious example of this is brought to you by Microsoft who chose a new, improved and decidedly not backward compatible format for Word documents in Office 2007. Finally there is the problem of what “standard” means to hardware vendors. Just for grins compare actual hardware – with the same SKU – that ships in early and later versions of a Dell model number. Just keep in mind that if you choose to save money by standardizing on consumer hardware you run the risk of incompatibilities even with the same model number.

2. Perform Off-Site Backups

Off-site backups are definitely a must-have. But they must also be automatic. No transferring data by hand from one place to another. Recall those data breaches by way of lost backup tapes? David suggests some online solutions and even cites a nifty side effect of this method.

Earlier this summer, Damian Zikakis, a Michigan-based headhunter, had his laptop stolen when someone broke into his offices. He replaced it a few days later; and because he had used Mozy, he thought that he was covered in terms of being able to bring back his files from the Internet backup.

When Zikakis had a moment to examine the layout of his new machine, he “found several incriminating files. The individuals who had my computer did not realize that the Mozy client was installed and running in the background. They had also used PhotoBooth to take pictures of themselves and had downloaded a cell phone bill that had their name on it,” he says.

Another possibility is to utilize your web hosting provider or colocation service to provide backup and archive space. In any case, it has to be offsite, easy and automatic, otherwise it just won’t work.
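For the “automatic” part, the whole job – archive the data, ship it offsite – has to run with no human in the loop. Here’s a minimal Python sketch of that shape; the “offsite” target is just another directory here, standing in for whatever online backup client or colo host you actually use:

```python
# Minimal sketch of a hands-off backup job: archive a directory and hand
# the archive to an "offsite" destination. Here offsite is just another
# path; in real life it would be your backup service or colo box.
import shutil
import tarfile
import tempfile
from pathlib import Path

def backup(source: Path, offsite: Path) -> Path:
    """Tar up `source` and copy the archive offsite. No human in the loop."""
    with tempfile.TemporaryDirectory() as scratch:
        archive = Path(scratch) / f"{source.name}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(source, arcname=source.name)
        offsite.mkdir(parents=True, exist_ok=True)
        return Path(shutil.copy2(archive, offsite))

# Wire something like this to cron or Task Scheduler so it runs nightly
# without anyone remembering to do it.
```

The point of the sketch is the shape, not the tooling: one callable step, scheduled, with nothing for a person to forget.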

3. Use Hardware to Secure Your Internet Connection

An article like this really shouldn’t have to include a point this obvious. But sadly it does. Not only that, it cannot be stressed strongly or often enough that you have to understand and configure your security hardware. You can’t just plug it in and be safe. Furthermore, the appropriate selection of a security solution is critical. No, they are not all created equal. And no, they don’t all do the same things. The hardware that David mentions by way of example is a Unified Threat Management (UTM) system, which generally puts quite a bit of security functionality into a single box. UTMs basically secure your internet access, and if you intend to become larger than an SMB you need to be aware that they don’t scale up that well. Also, if your problem is access control rather than internet security, a Network Access Control (NAC) system might be more appropriate. Or you might need both. Or something lighter weight like StillSecure’s Cobia network platform. Or something completely different. The point is that while everyone agrees that you need something, just which something is not a trivial question. There is no one-size-fits-all security solution. Here’s where judicious use of your consulting budget makes a lot of sense. And no, I’m not a consultant. I just play one on the internet.

4. Use a VPN

If you don’t like eavesdroppers and you do anything remotely, you need a Virtual Private Network (VPN). Period. They are cheap, easy to set up, and will probably even come with that UTM solution you are considering in #3. If you choose to do this in-house instead of using managed VPN services like the ones mentioned by David, make sure you have the internal expertise to handle it. Do not hire a contractor to set up your VPN. Either outsource it all or none of it.

5. Run Personal Firewalls, Especially on Windows PCs

Actually what this title should probably be is “Run a desktop security suite on Windows PCs and make sure that all endpoints are compliant with your policies before you let them on your network.” While that is certainly longer-winded than David’s succinct title, it more accurately captures what he is saying. The point is that you should have a desktop security policy that specifies what software your network endpoints must be running, and have a way to determine whether your network endpoints are compliant with that policy. The best way to accomplish that is with a Network Access Control (NAC) solution, like the Napera appliance mentioned in the article. There is, as usual, more to the story. Once you determine that an endpoint is non-compliant you can’t quarantine it forever. You have to provide a remediation mechanism, preferably automated, so that it can get back to work as soon as possible. It’s been my experience that sales guys get really cranky if you quarantine them for a long time. And just try that with your CEO. Bet it only happens once. And if you are going to have a NAC solution in place, what about “guest” users – you know, contractors, visiting product reps, partners? They all need varying levels of access to your network as well, while you still need to be protected. The point is that this isn’t as easy as slapping in an appliance and declaring your endpoint compliance problems solved. If a sales guy tells you different, hang up the phone. Now.
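To make the quarantine/remediation/guest point concrete, here’s a toy admission flow. The policy items, the set of auto-fixable ones, and the guest VLAN handling are all invented for illustration – no real NAC product works exactly like this:

```python
# Toy NAC admission flow: check posture, auto-remediate what we can,
# quarantine the rest, and shunt guests onto a limited-access VLAN.
# Policy items and the "fixable" set are made-up examples.

POLICY = {"av_installed", "av_signatures_current", "firewall_on"}
AUTO_FIXABLE = {"av_signatures_current", "firewall_on"}  # things we can push

def admit(posture: set, guest: bool = False) -> str:
    if guest:
        return "guest_vlan"              # limited access, off the crown jewels
    missing = POLICY - posture
    posture |= missing & AUTO_FIXABLE    # automated remediation, not a ticket
    return "admitted" if POLICY <= posture else "quarantined"

print(admit({"av_installed", "firewall_on"}))  # 'admitted' after a signature push
print(admit(set()))                            # 'quarantined' -- no AV at all
print(admit(set(), guest=True))                # 'guest_vlan'
```

Note that the sales guy’s laptop gets fixed and waved through in seconds; only the machine with nothing on it sits in quarantine, which is the behavior that keeps people working.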

6. Rely on VoIP PBX for Your Phone System

This is definitely one of the biggest money and time savers available. The services associated with a good VoIP PBX system are killer, and my experience with these systems has been excellent. The only caveat here is that you should definitely get VoIP as a managed service unless you have some really serious talent in-house. If you think you can just whip up an Asterisk server on one of your Linux boxes and be good to go, think again. VoIP is very cool, and it isn’t that hard if you really know what you are doing. Which I don’t, and you probably don’t either.

7. Have a Solid Test Plan for Adding New Technology

This is probably the most important point. Treat your technology test plan like an actual project. It’s not good enough to simply say, “Joe will look into it”. That’s not a plan. And it assumes that Joe will do it during his slack time (an IT guy with slack time – whoa!). What will actually happen is that Joe will call one or two vendors and talk to the nice sales folks and ultimately pick the one with the best swag or the hottest looking booth babes. Just pony up and do it right. It will save you beaucoup time, money, pain and suffering. And you stand a chance of actually developing some of that in-house expertise.

Wherefore art thou TCG IF-MAP?

This all started, as many things do, with an article by Hoff wherein this idea was posed.

I’m really interested in how many vendors outside of the NAC space are including IF-MAP in their roadmaps. While IF-MAP has potential in conventional non-virtualized infrastructure, I see a tremendous need for it in our move to Infrastructure 2.0 with virtualization and Cloud Computing.

Integrating, for example, IF-MAP with VM-Introspection capabilities (in VMsafe, XenAccess, etc.) would be fantastic as you could tie the control planes of the hypervisors, management infrastructure, and provisioning/governance engines with that of security and compliance in near-time.

So, of course, a response was crafted by NACMeister Alan Shimel who thoughtfully sets Hoff straight on the state of TCG/TNC adoption by NAC vendors, explaining it this way.

I think very few vendors are actually supporting and have implemented it. In fact it is not just non-NAC vendors, it is NAC vendors as well. Other than Juniper, I am not aware of another NAC vendor who actually supports MAP yet. Not because we don’t want to, it is just not important enough. Customers have not demanded it. So no one has the cycles to spend on it.

And predictably Steve Hanna fired back with this explanation of the TNC adoption curve.

I have found that standards adoption follows the classic innovation adoption lifecycle. Innovators are the vendors and customers that have the vision and foresight to see where things must go. They are the first to create and adopt new technology. Next come Early Adopters, Early Majority, Late Majority, and Laggards. It takes at least a year for each stage: six months to turn prototypes into products and six months for the next generation of adopters to catch on. That’s the timescale we’ve seen for the other TNC standards. So I expect to see Innovator vendors shipping products that implement IF-MAP in the next few months and Innovator customers deploying those products in the months after that.  Then will come Early Adopters and so on.

So clearly I couldn’t let this topic slide by untarnished by my view from the NAC trenches.

In principle I don’t disagree with anybody here. I mean as a card-carrying member of the group who would enjoy the most benefit from adoption of TCG/TNC – NAC software developers – what’s not to like? A standard. Everybody being able to interoperate. Not having to design one-off protocols to allow your own products to interoperate. Reverend Hanna, you are preaching to the choir – say Amen!

But Alan’s point about customers not demanding it is the nasty thing floating in the TCG/TNC NAC adoption koolaid punch bowl. However I think the reason for this lack of demand is more problematic than “it simply hasn’t hit the customer’s radar”. Given that TNC’s raison d’être is to allow different vendors’ products to interoperate such that a customer could integrate new stuff into an existing environment or do a “best of breed” grab bag for a complete NAC solution; and given that implementing a real, working NAC solution is, shall we say, non-trivial; I don’t see customers clamoring for this feature any time soon. I mean, it is challenging enough to get a single vendor’s NAC solution working in your environment even with copious amounts of support from that vendor. It makes me queasy to even think about trying to make multiple competing NAC vendors’ stuff play nice. Much less actually work. Cage match maybe, NAC solution not so much.

So I’m thinking that Hoff’s ideas may actually help TCG/TNC adoption get traction quicker than the NAC purveyors it was intended to corral into a single herd. Because at the end of the day, I don’t really think customers give a rodent’s patoot whether or not your NAC product implements an industry standard. They just want NAC with the least amount of pain and suffering. If and when we can make TCG/TNC the agent of that blessed relief, then it will be.

So say we all.

I so want to be a Forrester analyst

Now that would be a totally sweet gig. No experience necessary, no research required. Just collect the swag from vendors. Totally sweet deal – sign me up.

Now hang on there, that’s harsh – even for you! Yeah, well what conclusion am I supposed to come to with this report on the state of Network Access Control (NAC)? Actually I should start at the beginning with how I came across this amazing piece of … information.

So I’m browsing the blogosphere, just minding my own business, looking for NAC news. I should mention that in real life I make my living developing a NAC system. So when I come across this article, it totally pegged the old BS-O-Meter. I mean nailed it.

Microsoft NAP Leading the NAC Pack

It didn’t surprise us when Forrester Research put Microsoft NAP as the frontrunner in the Network Access Control market. “Microsoft’s NAP technology is a relative newcomer but has become the de facto standard…,” said Rob Whiteley in his report. While Cisco and others might be able to claim more direct revenue from NAC products as of now, I believe Microsoft has the technology and framework that positions it for success.
As Tim Greene pointed out in his NAC newsletter, “the result is interesting because it’s not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations.”
Tim hit the nail on the head, as NAP works in the real world, not just in a complex architectural diagram that only exists in a 30-page white paper. I think NAP’s success is twofold: One, NAP is built into the operating system on the client and server, making it easier for customers to use and deploy; and, two, NAP is one of those rare examples of Microsoft truly achieving interoperability and playing nice with others.

So at this point, I’m thinking well sure, these Napera guys are NAC vendors who are trying to ride the NAP wave so I’ll cut them some slack. I mean you do have to dial down the sensitivity on the old BS-O-Meter when dealing with marketing copy. But they reference an article by Tim Greene in his NAC newsletter. So I go there thinking surely they must have taken Tim totally out of context for their own vulgar marketing purposes. But much to my astonishment, (after navigating past NetworkWorld’s lame cover ad – which shows up as a nice blank page for those of us who block doubleclick – get a clue guys!) those Napera flaks were pretty much quoting Tim verbatim.

Microsoft comes out on top of the NAC heap in an evaluation of 10 vendors that was published recently by Forrester Research.

The result is interesting because it’s not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations.

Which led me to the original report by Forrester. By now my poor BS-O-Meter is toasted.

In Forrester’s 73-criteria evaluation of network access control (NAC) vendors, we found that Microsoft, Cisco Systems, Bradford Networks, and Juniper Networks lead the pack because of their strong enforcement and policy. Microsoft’s NAP technology is a relative newcomer, but has become the de facto standard and pushes NAC into its near-ubiquitous Windows Server customer base.

So at this point I can no longer remain silent – you guys broke my BS-O-Meter! And it was industrial strength! So NAP “would meet the challenges of a set of real-world deployment situations“? What color is the sky in your real-world?

Here’s the deal, guys. Until all enterprises make the switch to Windows Server 2008, there is no real NAP install base. Also, NAP is critically dependent on these nifty little client and server plugin combos – System Health Agents (SHA) and System Health Validators (SHV) – that fill the roles of TNC Integrity Measurement Collectors (IMC) and Integrity Measurement Verifiers (IMV) respectively. It’s not a bad idea, since the SHAs are managed by a single client-side meta-agent, and the SHVs are plugins on the server side (the Network Policy Server (NPS), to be exact). But the real strength of this idea is that everyone who has some endpoint component they want to monitor for policy purposes (like say an AV package) just builds an SHA and corresponding SHV to be part of the happy NAP family. As of now there is one, count ’em, one SHA/SHV set provided to the “near-ubiquitous Windows Server customer base”. And guess who provides it (hint – they build a well known OS). So if your endpoint policies require only the Microsoft Security Center stuff and all of your endpoints are Windows XP SP3 or Vista Business+ and your servers are Windows Server 2008, you are golden! Both of you. Maybe I’m wrong and Napera has partnered with a whole bunch of competing endpoint security vendors to get all the system health gizmos that they have been developing in secret. Hey – they do make this claim:

Napera then builds on the NAP platform to provide a single solution that combines health enforcement for both Windows and Macintosh computers with identity enforcement and guest access.

Whoa – a Mac SHA? I had no idea that OS/X had the basic plumbing to support such a beast! Oh wait – I get it – it’s a TNC IMC. So what does the SHV for that bad boy look like? You see, I’ve written an SHV (no, I’m not going to tell you how it works) and I’m pretty sure the Napera guys are blowing marketing smoke. If not, I’d love a demo of an actual working system (not a “30-page white paper”). Preferably in my real-world.
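For readers who haven’t waded into the NAP docs, the SHA/SHV pairing can be caricatured in a few lines of Python. The vendor IDs, field names, and method names here are all made up; the real plugin interfaces and wire format are Microsoft’s, not this:

```python
# Toy model of the NAP plugin pairing: each vendor ships a client-side
# SHA and a matching server-side SHV, keyed by a shared vendor ID.

class AvSha:                     # client side: reports health, doesn't decide
    vendor_id = 0x0001
    def statement_of_health(self):
        return {"engine": "9.1", "sigs_age_days": 2}

class AvShv:                     # server side: validates the SHA's claim
    vendor_id = 0x0001
    def validate(self, soh):
        return soh["sigs_age_days"] <= 7

def evaluate(agent_sohs, shvs):
    """NPS-style dispatch: route each SoH to the SHV with the same vendor ID."""
    by_id = {shv.vendor_id: shv for shv in shvs}
    return all(by_id[vid].validate(soh) for vid, soh in agent_sohs.items())

agent = {AvSha.vendor_id: AvSha().statement_of_health()}
print(evaluate(agent, [AvShv()]))  # True
```

The elegance is the pairing: anyone with an endpoint product writes both halves and joins the family. The catch, as noted above, is how few pairs actually exist.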

So this brings me back to my original point. I want to be a Forrester analyst. I mean, if I can make conclusions “not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations“. Dude! Sign me up. Don’t get me wrong – in all likelihood NAP will eventually become a “de facto standard” (well duh, it’s a Microsoft framework) and that’s not a bad thing. It’s just not there yet. In the meantime I need a new BS-O-Meter.

NAC: answering the right questions

Let me start this off by setting a baseline. I know a lot about Network Access Control (NAC). A real lot. I work on (as in design, develop and support) what is arguably the industry-leading and undeniably the best NAC solution out there. I’ll let you guess which one, since I’m not a shill for my employer. Don’t get paid for it, don’t do it, don’t care. Just say no to marketing. In any case, I know a lot about NAC.

So I sign up for a videocast entitled “NAC: Answering the hard questions” which has this intriguing abstract (emphasis mine):

A recent survey showed that of the companies that already have NAC deployed, 36% said their networks became infected with malware anyway. Clearly, there are still plenty of questions about NAC that need to be addressed. In this video, Joel Snyder, one of the top NAC experts in the industry, will help viewers answer the most pressing questions surrounding this technology, including:

  • How do you handle lying endpoints?
  • How does NAC extend to branch offices?
  • How much does NAC’s effectiveness rely on the security of your network infrastructure?
  • And more

I’ve tried to find the source of this study, because those afflicted 36% really need to check out my earlier posting “Security Ideas for your mom part 1”, wherein I enumerate the most important ideas (in my humble opinion) that your mom needs to know about secure computing. Let me quote myself from idea #2:

“don’t use something you don’t understand.”

You see Network Access Control does not directly prevent your network from being infected by malware. What it does, when configured correctly, is verify the security posture of your network endpoints before allowing them access to your network. In other words, a good NAC system will check to see that a PC requesting access to your network has whatever Anti-Virus programs you require installed and that the engine and signatures are up to date, but it will not check to see if the endpoint is already infected with a virus or if the AV package itself is worthwhile. Furthermore, NAC systems have the facility to “white list” certain endpoints since it’s usually a career limiting move (CLM) to quarantine the CEO’s PC. But if your CEO likes to surf for porn on said PC, it might be a CLM, but it’s still not a bad idea for security. So the general statement you can make about NAC is that it will only validate and enforce compliance to your security policy. It will do nothing to make sure your policy doesn’t suck or that you haven’t swiss-cheesed it to allow unlimited access to clueless VIPs. So let me say this once and for all – NAC is not magic. It is not a silver bullet. It will only enforce your network access policies, regardless of how lame they are, and only then if you configure the system correctly.
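In code, the distinction is stark: a compliance check looks at posture, not infection. This toy check (every field invented for illustration) happily admits an infected but policy-compliant machine, and waves the whitelisted CEO straight through:

```python
# NAC validates compliance with your policy, not whether the box is
# actually clean. All endpoint fields here are illustrative.

def nac_admits(endpoint: dict, whitelist: set = frozenset()) -> bool:
    if endpoint["name"] in whitelist:      # the career-limiting-move exemption
        return True
    return endpoint["av_installed"] and endpoint["sigs_current"]

infected = {"name": "sales-17", "av_installed": True,
            "sigs_current": True, "actually_infected": True}
print(nac_admits(infected))  # True -- policy-compliant, malware and all
```

Notice that `actually_infected` is never consulted. That, in one line, is how 36% of NAC shops got malware anyway.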

So I watched the videocast. I’d actually recommend it. Dr. Joel Snyder is a very sharp guy even if he relies a bit heavily on vendor marketing. Since I couldn’t find a place to comment on the site that hosted the videocast (Bitpipe), I decided to comment here. Okay, I was planning on commenting here anyway.

How do you handle lying endpoints? Well, if you are one of the NAC products that Dr. Joel is familiar with, apparently rather badly. He references the Trusted Computing Group (TCG) Trusted Network Connect (TNC) architecture to point out that ultimately system health telemetry originates from sensors on the endpoint itself (Integrity Measurement Collectors (IMC) in TNC lingo). Yep, that’s a problem all right – with the TNC reference architecture. He correctly concludes that some other mechanism (e.g. the TCG Trusted Platform Module (TPM)) must be utilized to assure the integrity of the client-based sensors. Okay, how about this idea instead: let’s start by assuming that all endpoints are lying (or are capable of lying) and, instead of relying on the endpoint to give us a statement of health, have our Policy Decision Point check for itself. There are NAC products (at least one) that do this today. And it works really well. And it can even be done without any kind of agent software installed on the endpoint. Is it magic? No – just really clever design (if I say so myself). Now there are clearly some advantages to the TNC take on this, the most obvious being that the vendor of the endpoint security software you want to check for compliance is in the best position to know the health of their stuff, so they can build their own IMCs. Problem is, when you have Vendor A’s AV and Vendor B’s firewall and Vendor C’s HIDS running on Vendor M’s platform, you are trusting that these vendors will play nicely with each other. Even when they have competing products. You bet.
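The difference between the two trust models can be sketched in a few lines. To be clear, `probe` here is a purely hypothetical stand-in for whatever server-side inspection a product might do (remote queries, network-level checks); it is not how any actual product, mine included, works:

```python
# Two trust models for endpoint health. A compromised endpoint can
# self-report anything; an independent server-side probe cannot be
# talked into agreeing with it.

def trusting_pdp(claimed_health: dict) -> bool:
    """Believes whatever statement of health the endpoint sends."""
    return claimed_health.get("av_running", False)

def skeptical_pdp(probe) -> bool:
    """Measures the endpoint itself instead of trusting its claim."""
    return probe("av_running")

lying_claim = {"av_running": True}      # malware says what you want to hear

def actual_probe(key):                  # what's really on the box
    return {"av_running": False}[key]

print(trusting_pdp(lying_claim))    # True  -- fooled
print(skeptical_pdp(actual_probe))  # False -- caught
```

The sketch overstates nothing: the whole argument is about where the measurement happens, not how clever the measurement is.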

How much does NAC’s effectiveness rely on the security of your network infrastructure? Dr. Joel answers this one with an emphatic “a lot”, thereby earning him the Security For All GOTO award for his outstanding Grasp Of The Obvious. Of course NAC’s effectiveness relies on the security of your network infrastructure – in fact, it is predicated on it. If your network infrastructure is not secure, NAC will certainly not make it so. In fact I would go so far as to say that slapping NAC into an insecure environment is no more than security theater – users see it and think they are more secure, while nothing (good) really happens security-wise. To be fair, Dr. Joel is mostly warning NAC implementers to be aware that in all likelihood you will have NAC enforcement at the edge of your network and that it does, in fact, become another attack surface. Of course, it was probably already an attack surface before NAC was added to the picture. The point is that if you are using old leaky routers and switches, or a bad network security architecture, you should probably take care of that stuff before you even think about adding NAC into the mix.

Marketeers have done an outstanding job of overhyping NAC. The fact that Dr. Joel even has to make himself a candidate for the GOTO award (and that I bothered to award it to him) is a testament to how successful NAC vendors have been at getting folks to breathe their exhaust. And it does everyone a disservice. NAC is not magic. There is no silver bullet. Period.