Follow the link for the full article, but here’s my take…
The inbuilt, integrated AV is one of the core benefits of Windows Intune, and it's really the only way for a customer larger than 10 seats but smaller than a System Center Configuration Manager deployment to get the Microsoft AV endpoint technologies. It's part of what I call the perfect storm for Intune applicability to a customer – expired or expiring AV, a relatively unmanaged environment, and preparing for an SOE or desktop OS upgrade. As one or more of these is removed, the value proposition for Intune is reduced, making it a harder sell for the partner, or a harder justification for the customer.
For a partner looking at Intune as a scale-out support option, possibly as an MSP, the integration of Intune Endpoint Protection into the Intune administration console is a great convenience, and I would ask them to find genuinely good reasons to choose a 3rd party offering over what Intune provides. I'm not saying that Intune Endpoint Protection will necessarily check all the boxes for all customers, but it's worth checking whether it does before committing to alternatives.
Why the Windows Experience Index? Well, it’s simple, it’s included in Windows 7, and I didn’t want to run an extensive test of the disk performance under different RAID options, not yet anyway. That would be something outside of scope for what I’m really doing with these devices.
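If you want to generate the same scores on your own hardware, the assessment behind the WEI can be run from an elevated command prompt, and the stored sub-scores read back via WMI. A minimal sketch, using the Win32_WinSAT class available on Windows 7 and Windows Server 2008 R2:

```shell
:: Run the full Windows System Assessment (takes a few minutes)
winsat formal

:: Read back the stored sub-scores via WMI
powershell -Command "Get-WmiObject -Class Win32_WinSAT | Format-List CPUScore,MemoryScore,DiskScore,GraphicsScore,D3DScore,WinSPRLevel"
```

WinSPRLevel is the base score shown in the UI – the lowest of the sub-scores.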
First up, the HP MicroServer N36L, which has a dual core Athlon Neo at 1.3GHz. As you can see, the CPU score is quite low by modern standards, but when this is being used as a basic server or NAS device it shouldn't really matter; the disk subsystem and network throughput are going to make a bigger difference. The N40L is also available with a 1.5GHz CPU, but I highly doubt it would do much to close the gap in CPU and RAM performance. Today it's easy enough to saturate Gigabit Ethernet with a single HDD, so unless you drop in additional NICs you aren't going to hit any real throughput issues with this server. Note that this server, like the Acer AC100, has 8GB of RAM installed.
For server purposes the graphics scores are largely irrelevant, and the HDD score is capped due to testing against a single 7200RPM HDD.
Now on to the Acer – as expected, the WEI score for the CPU blows the HP out of the water. I was surprised, though, that what is in many ways an almost entry level Xeon was able to score so well. Again, as a NAS the CPU speed isn't going to have that much of an impact, but for running our test VMs this is going to make a big difference, as is the ability to take it up to 16GB of RAM instead of being maxed out at 8GB like the HP. As this server also has the ability to take an additional PCI-E card, you can add a multiport NIC if you really want to push the throughput across the wire.
The Acer is going to be my primary server for the next few months, so it will certainly get put through its paces. My MicroServer has been doing 24/7 duty in various roles for a while now, so the reliability of the unit is a known quantity; now it's time to see how the Acer copes, but with bigger workloads than the HP was ever really capable of.
The Acer unit arrived yesterday, and the first thing I noticed was that it ships in a much smaller box than the HP MicroServer, though I wasn't surprised by how much smaller the Acer itself is in comparison. This is instantly a big plus for me at the moment, with some extended travel coming up – something small and light, with a degree of flexibility, is what I need.
So far I have 8GB installed, and have been in contact with my favourite Kingston employee to get the scoop on supported memory to take it to 16GB, which is going to be a much better option longer term for some of my virtualisation and testing projects I need to perform around Windows Intune and Windows 7 deployments.
The faster CPU is really noticeable, though it's a bit of an unfair comparison: a low power dual core Athlon against a quad core Xeon with Hyper-Threading, and that's before the CPU speeds are even taken into account. Like most techs, I like to see more cores in Task Manager, and this certainly delivers, but the overall responsiveness while under load is also much, much better. I will do a Windows 7 install within the next few days so that I can provide a sample WEI comparison between the two microservers. Remember, though, one is a quarter the price of the other, and while HP and Acer may be targeting them at similar audiences – SMBs with a need for Small Business Server Essentials 2011, Windows Server Foundation 2008 R2 or Windows Server Standard 2008 R2 – the way they go about the task is very different.
I'm not completely sold on the Acer concept of keeping the power button behind the locked front panel, with the keyhole on the side of the unit, but that's a minor quibble. Dropping in new drives is just as simple as on the HP, but in this case you are limited to 4 internal HDDs plus 1 external eSATA drive, versus the HP's ability to take up to 6 internal drives if you forgo the optical drive and route the eSATA cable into one of the free internal drive bays.
Setup was simple; the only catch I had was needing to change the order of boot devices to get Windows to install off the flash drive. I've encountered this on my Acer Iconia W500 tablet as well, so it was easy enough to change, but it did have me scratching my head for a few minutes. I've kept the drives in AHCI mode rather than taking advantage of either the LSI or Intel onboard RAID capabilities; I'll test those out at some point in the future.
With an extended overseas journey approaching, and some spare time which will be dedicated to some software and scenario testing, I’ve decided to purchase one of the Acer AC100 micro servers for this purpose.
The Acer box is a very different beast to the HP MicroServer, with Acer going down the path of a high performance Xeon versus the ultra low voltage AMD CPU in the HP. For general NAS, storage or other light CPU overhead work, this difference won't really be seen, but due to the amount of work I'll be doing building out VMs and running various test loads, the AMD CPU in the HP is going to be a little anemic. Two cores versus four cores with Hyper-Threading, and the ability to go to 16GB of RAM instead of just 8GB, are the big winners on this front. I will be sticking with 8GB to start with, due to the lack of supported 8GB ECC unbuffered RAM on the Acer compatibility list and the lack of support from the major RAM manufacturers, but this will change. Having an Intel NIC on board is also a nice sweetener.
That's not to say the HP doesn't have its own charm – the ease with which you can turn this 4 HDD device into a 6 HDD device means that it's potentially a better option for the storage junkie. It also has two PCI-E slots instead of the single slot in the Acer (which will be occupied by an Intel i350-T2 NIC due to its support for advanced virtualisation and iSCSI capabilities), and is built like a tank. The modular design of the HP generally impresses; hopefully the Acer comes close. The HP unit is also a quarter of the price of the Acer, which is definitely going to be a deciding factor for most.
There are some things that both servers lack – neither supports hardware RAID 5. While Acer promotes support for RAID 5, the fine print reveals that it is via an Intel software solution. Both can support RAID 5 via OS configuration, but hardware offloading would definitely be appreciated. The extra horsepower in the Acer should reduce the potential performance impact of parity calculations, but a better RAID implementation wouldn't hurt.
I’ll give a further update when the Acer unit arrives, and give some feedback on setup and build quality versus the HP, but to me they are very different beasts, even though they appear similar at first.
This is just a short update, primarily confirming that the NIC traffic in Hyper-V VMs is being accurately reported again. I’m not quite sure what has triggered this, but I’ve got a few VM snapshots I can revert to when I want to dig further into the issue.
The other piece of the testing I've just confirmed is working as expected – that is, traffic is being cached comprehensively – is the single NIC proxy/caching-only scenario for TMG that I wanted initially. The TMG wizards make it easy enough to reconfigure the server after removing the additional virtual NIC, and the client was updated with the changed IP address of the proxy server via IE and netsh, as shown in previous posts. With these changes in place, I now have a working TMG configuration for all the machines on my network, not just machines in a virtual network, and I'll certainly save myself some Windows Intune and Windows Update traffic on my ISP connection each month.
After spending way too much time doing updates and rebuilds (or reverting to snapshots…) I’ve been noticing some interesting differences between the way Windows Intune delivers the updates versus the way Windows Updates does, but that will be a topic for a future post.
I've mentioned before that I'm a big fan of MDT, and of using whatever tools possible to help with the automation and customisation of OS images, so I was pleased to get this information today. SCCM 2012 and Windows 8 support are the two things that should get most people excited – and by most people, I mean the subset of people who like technology that helps deploy operating systems.
Reliable and Flexible OS Deployment – now with support for System Center Configuration Manager 2012 RC2
The Solution Accelerators team is pleased to announce Microsoft Deployment Toolkit (MDT) 2012 RC1 is available for download on Connect now.
New features and enhancements make large-scale desktop and server deployments smoother than ever!
Support for Configuration Manager 2012 RC2: This update provides support for Configuration Manager 2012 RC2 releases. MDT 2012 fully leverages the capabilities provided by Configuration Manager 2012 for OS deployment. The latest version of MDT offers new User-Driven Installation components and extensibility for Configuration Manager 2007 and 2012. Users now also have the ability to migrate MDT 2012 task sequences from Configuration Manager 2007 to Configuration Manager 2012.
Customize deployment questions: For System Center Configuration Manager customers, MDT 2012 provides an improved, extensible wizard and designer for customizing deployment questions.
Ease Lite Touch installation: The Microsoft Diagnostics and Recovery Toolkit (DaRT) is now integrated with Lite Touch Installation, providing remote control and diagnostics. New monitoring capabilities are available to check on the status of currently running deployments. LTI now has an improved deployment wizard user experience. Enhanced partitioning support ensures that deployments work regardless of the current structure.
Secure Deployments: MDT 2012 offers integration with the Microsoft Security Compliance Manager (SCM) tool to ensure a secure Windows deployment from the start.
Reliability and flexibility: Existing MDT users will find more reliability and flexibility with the many small enhancements and bug fixes and a smooth and simple upgrade process.
Support for Windows 8: The RC1 release of MDT 2012 provides support for deploying Windows 8 Consumer Preview in a lab environment.
For System Center Configuration Manager customers:
For Lite Touch Installation:
For all customers:
Tell us what you think! Test drive our release and send us your constructive feedback through the Connect site. We value your input; this is the perfect opportunity to be heard.
Tell your peers and customers about Solution Accelerators! Please forward this to anyone who wants to learn more about OS deployment with MDT, and Microsoft Solution Accelerators.
Already using the Microsoft Deployment Toolkit? We’d like to hear about your experiences.
Microsoft Solution Accelerators
Today's post is a recap of some of the testing and scenarios I've been through. All of this is unscientific and not laboratory controlled, and can be interpreted many ways; here is my take on it all. I'm not saying I understand exactly what I am seeing here, and I am open to suggestions.
I originally started all of my testing running Windows 7 VMs against a single NIC, proxy/caching-only solution, but I noticed that there was a lot of traffic going outside of the proxy when I was using the Install approach for updating. If I was purely going through Windows Update, the results were as expected: exceptional caching of the Microsoft Update and Windows Update traffic, and incredibly high speed downloads of the updates. The issue was that I hadn't configured the network to force activity outside of the logged-on user to go via the proxy, and I could monitor this easily via Resource Monitor. Here's a screenshot of what it should look like – note that all the connections are going through 8080, which I have forced via the following steps.
In order to start isolating the network traffic further, I set the TMG VM as the gateway with a private network in Hyper-V, which in my case is 10.10.10.1. If the traffic didn’t go via TMG, it didn’t go anywhere. The IE proxy was set to match TMG, which in my case is 10.10.10.1:8080. All fairly simple and standard for many network configurations. However, because I hadn’t gone through the process of setting up the whole test environment to match a working environment with domain users, domain joined PCs etc, I had to follow another step, which was to run netsh to configure the machine based proxy settings. This was required in order to avoid “The software cannot be installed, 0x80cf402c.” installation error…
Running the netsh command here is quite easy, but first I want to make sure there are no other proxy settings already in place.
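Checking for an existing machine-level (WinHTTP) proxy is a one-liner; on a box that has never been configured it reports "Direct access (no proxy server)":

```shell
:: Show the current machine-wide WinHTTP proxy configuration
netsh winhttp show proxy
```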
Defining a machine based proxy is easy to do.
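As a sketch, with my TMG VM listening on 10.10.10.1:8080 (substitute your own proxy address), the machine-level proxy is set like this, and can be cleared again with a reset:

```shell
:: Point WinHTTP (service and system traffic) at the proxy, bypassing local addresses
netsh winhttp set proxy proxy-server="10.10.10.1:8080" bypass-list="<local>"

:: To undo this and return to direct access:
:: netsh winhttp reset proxy
```

This is the setting that services such as the Intune client use, independently of the per-user IE proxy.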
Kicking off the Intune install successfully now, and allowing it to update the latest signatures for Intune Endpoint Protection, my client VM NIC shows roughly 140MB of traffic, which matches the incremental traffic on the internally facing NIC on my TMG machine. So far so good. Only about 70MB is showing as moving through my ISP, which also includes traffic from some additional machines on the network, and the external NIC on the TMG machine is only showing 3MB of traffic.
Apparently I received 42 updates, which are now installing, and they registered as less than 5MB of network traffic. Whether this is due to the accuracy of traffic reporting within the VM or some other reason I do not know yet, but I would love to hear if you've seen the same or received an explanation. The external NIC on the TMG machine is showing less than 1MB of traffic, and the Hyper-V internal NIC is showing around 10MB. Again, at least from the internal NIC perspective, there is a bunch of traffic that just isn't being reported. It's not all bad though: this update, plus my other network traffic, has generated less than 70MB of traffic, which means there is definitely caching taking place – it's the reporting that's the issue. My TMG cache hit ratio has moved up from 67% to 80% over the course of the afternoon's testing, so it's reporting at least some of this activity.
The takeaways from all of this…
1. Simulating a real environment is going to give you better results when it comes to reproducing them outside of your sandbox.
2. Route all traffic via the caching option you go for. There are huge benefits to be had here, both from a bandwidth savings perspective for those of you who pay per MB, and also from the perspective of speed of additional clients downloading the updates. The previous post to this had an image of a download coming down the wire at 111MB/s, which is close to the maximum download speed over Gigabit ethernet. This is what you want.
3. Part of this exercise was reacquainting myself with the Microsoft proxy/firewall family, which I had successfully avoided for many years. While it has changed quite a bit, for the simple tasks I have it performing it hasn't presented much of a roadblock or learning curve.
4. While I chose to start with a non-SP1 ISO of Windows 7 Ultimate as the base for my VM testing, you are going to save some time using the latest media with SP1 integrated, or manually adding SP1 during your build process. I just wanted the worst possible starting state for the machines, ensuring that my TMG cache got very well used.
5. The numbers don't lie. The caching works. Big thanks to the Intune team for posting the script on their blog site.
A picture tells a thousand words…
After a false start due to some hardware issues, I’m pleased to report that Project TWIAD is live.
The current setup of the server is as follows, with an explanation behind these decisions:
HP Microserver N36L – I had one
8GB RAM – RAM is cheap
250GB Boot Drive – Included drive
6TB Windows RAID 5 Array (4x2TB drives) – I had 5 of the same type of drive, so I have a spare. It’s been years since I’ve used software RAID, and I didn’t want to stay at RAID 0 or 1 as supported in hardware
BIOS patch to enable full speed on the 5th and 6th SATA ports – probably not what you want to do in a production environment, but this is a playpen.
Windows Server 2008 R2 Enterprise – I had the licence, and it supports 4 VM instances so I have some room to grow
Hyper-V role installed
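For reference, building a software RAID 5 volume like the one above is a short scripted diskpart session. This is a sketch only – the disk numbers are assumptions for my layout (disks 1 to 4 being the four 2TB data drives), and converting disks to dynamic should be planned carefully, so verify with `list disk` first:

```shell
:: Contents of raid5.txt (disk numbers are assumptions - verify with "list disk" first)
::   select disk 1
::   convert dynamic
::   select disk 2
::   convert dynamic
::   select disk 3
::   convert dynamic
::   select disk 4
::   convert dynamic
::   create volume raid disk=1,2,3,4
::   format fs=ntfs quick label="Data"
::   assign letter=D

:: Run the scripted session from an elevated prompt
diskpart /s raid5.txt
```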
Much of the above can be changed to suit your preferences, and you can argue about some of the decisions made above, but for now, that is the environment this series of posts will be based upon.
While building the host instance of Windows Server 2008 R2, I chose to disable Windows Updates so that I could test this OS install against Threat Management Gateway (TMG) once installed. The additional Windows Server 2008 R2 virtual instance install went smoothly as expected, and my first experience with TMG was pleasant. It’s been many years since I had done anything with the ISA family of servers (I did my Proxy 1.0, Proxy 2.0 and ISA 2000 MCP certs if that helps date my experiences…).
As soon as I had this VM up and running, I downloaded the Internet Explorer Administration Kit to build a proxy setting package for Internet Explorer. Yes, I know that doing this manually would have been easy, but as I scale out the test environment this small piece of automation will save some time.
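For just a couple of test clients, the same per-user IE proxy setting can also be pushed with a quick registry script rather than a full IEAK package – a sketch, assuming the TMG proxy sits at 10.10.10.1:8080:

```shell
:: Enable the per-user Internet Explorer (WinINet) proxy and point it at TMG
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "10.10.10.1:8080" /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyOverride /t REG_SZ /d "<local>" /f
```

The IEAK route scales better, but this is handy when rebuilding throwaway VMs.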
There were things that didn't go quite as planned, but it wasn't a disaster. Building out a 4 x 2TB drive array takes several days, and I was building VMs on the array while that was happening, which had a big impact on performance during this period. I also encountered issues with the first drives I tried to use in the array not being the appropriate sector format for Hyper-V, which meant I had to shuffle quite a bit of data, and then the corresponding drives, between servers. This took time, way too much time.
Before installing the Windows Intune specific BITS caching rules, I updated the TMG VM via Windows Update, accepting all of the updates on offer. As you could imagine, this took quite a while, mainly due to the degraded disk performance while the RAID array was being built out. The mapping of bytes in and bytes out on the assigned NIC to the TMG VM was close to a 1:1 mapping, as would be expected. It got much more interesting though as most of the same updates were applied to the host instance of Windows Server 2008 R2. Very little internet traffic, and NIC traffic higher than my internet connection can supply.
Technically I have a 100Mb/s download speed at home, and it's rare for any single download to come close to saturating it, but with TMG in place I was getting bursts greater than what 100Mb/s could provide, which meant my Gigabit infrastructure was doing the delivering. The download speed wasn't the goal here, however – just watching the cache efficiency, via the NIC properties and via the TMG cache perfmon counters. So far, so good.
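Those perfmon counters can also be sampled from the command line with typeperf. The exact TMG counter set name varies by version, so treat the path below as an assumption and confirm it first by dumping the counters that are actually installed:

```shell
:: List the cache-related counter paths installed on the TMG box
typeperf -q | findstr /i cache

:: Then sample a counter every 5 seconds (path is an assumption - use one from the query above)
:: typeperf "\Forefront TMG Cache\URLs in Cache" -si 5
```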
The next piece was running the sample TMG/ISA script provided by Paul Bourgeau at the Windows Intune team blog, which adds support for BITS caching of the Windows Intune traffic, which will get tested and reported upon shortly. As I finish this up a Windows Intune powered Windows 7 64 bit non-SP1 client VM is being built from scratch. I’ve deliberately avoided any process that will help automate this or reduce the download requirements, such as installing from SP1 media or applying SP1 prior to hitting Windows Update, so that we are seeing the worst possible way of doing things. The first round of Windows Updates on offer to this VM is 296.2MB, and it’s currently sitting at 45% complete.
Once all the updates are complete, I’ll be back, with some images to help liven things up and show the real impact of what’s happening.
One of the upcoming series of articles that will be posted to this site is what I have called Project TWIAD. The goal of this is to focus on the types of time improvements and download savings that can be made if there is some type of caching solution in between the client PCs and the Internet connection.
This is something that has been documented on the Official Windows Intune Team blog, but I will be going in to details on the hardware configuration used, most likely the very versatile HP MicroServer, and running through some of the different choices of Windows Server that may be suitable, giving the pros and cons of each. Of course you can implement similar solutions on other devices and operating systems, but my heritage is Windows Server so that’s what I’ll be sticking with.
If there are any specific scenarios you would like tested (such as small packages and large application deliveries by the software distribution component, initial setup speed differences etc), leave a comment, and I will do my best to incorporate that into the testing that is presented.
This is not going to be highly detailed, peer-reviewed content, but rather observations from my specific testing environment – though that doesn't mean you can't criticise me or my methodologies once published.