
Generator Sizing and Compatibility for Uninterruptible Power Supplies

Most critical power protection solutions, incorporating uninterruptible power supplies (UPS), today are interfaced with an alternative source of back-up power (standby power) which could be a fuel cell or flywheel but more usually it is a diesel generator. Generator sizing and UPS compatibility are fundamental to power continuity and must be taken into account at the outset of any power protection plan.

Power Rating

A generator must be sized correctly so that, when called upon, it can power the UPS (taking into account any allowance for the harmonics that the UPS’s rectifier will generate) and the load/s that the UPS is supplying. Generators are typically rated in two ways:

Prime Power Rating (PPR) – whereby the generator supplies power as an alternative to the mains power supply, but on an unlimited basis.

Standby Power Rating (SPR) – whereby the generator supplies power as an alternative to the mains power supply but for a short duration, typically one hour out of every twelve.

A generator rated under SPR can be as much as 10 percent larger than one sized using PPR. This provides an overload capability for a short duration, perhaps to meet sudden load demand changes, for example.

For an uninterruptible power supply installation, PPR is the more suitable method of rating. It is extremely important, for achieving greater resilience (fault tolerance), that a generator and its UPS are suitably matched. Not only must a generator be able to accept the load of the uninterruptible power supply but the UPS rectifier and static bypass supplies must be able to operate with, and synchronise to, the output of the generator.

Generator set manufacturers have four recognised categories of load acceptance: one = 100%, two = 80%, three = 60% and four = 24%. Categories two, three and four are used in practice for PPR-rated generators. Load acceptance is closely related to the turbocharging system and the Brake Mean Effective Pressure (BMEP) of the engine, which is a function of engine speed, the number of cylinders and the swept volume of each cylinder.
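The BMEP relationship mentioned above can be sketched numerically. This is a minimal illustration of the standard four-stroke formula (work delivered per cycle divided by total swept volume); the function name and the example engine figures are assumptions for illustration, not data from any particular generator set.

```python
def bmep_pascals(power_watts, swept_volume_m3, rpm, revs_per_cycle=2):
    """Brake Mean Effective Pressure: the work delivered per engine
    cycle divided by the total swept volume. A four-stroke engine
    completes one cycle every two revolutions (revs_per_cycle=2)."""
    revs_per_second = rpm / 60.0
    cycles_per_second = revs_per_second / revs_per_cycle
    work_per_cycle_joules = power_watts / cycles_per_second
    return work_per_cycle_joules / swept_volume_m3

# Illustrative example: a 200 kW engine, 12 litres total swept
# volume, running at 1500 rpm
bmep = bmep_pascals(200e3, 12e-3, 1500)
print(f"BMEP = {bmep / 1e5:.1f} bar")  # ~13.3 bar
```

For a given power demand, a smaller or slower engine must achieve a higher BMEP, which is why load acceptance is tied to these parameters.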


For load acceptance to occur, a UPS must be able to synchronise to the voltage waveform supplied by the generator. Uninterruptible power supplies tend to have fairly wide input voltage windows and generator output is usually well within this. Its frequency, however, can vary, which can be problematic. This is overcome by widening the UPS operating parameters to accept a broader range. This may not always be sufficient, particularly for poorly maintained or undersized generators. Their output frequencies could drift and make it impossible for the UPS to synchronise.
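The synchronisation check described above amounts to testing whether the generator's output frequency sits inside the UPS's configured window. The sketch below assumes an illustrative ±2% window on a 50 Hz nominal supply; real UPS units expose this as a configurable parameter that is typically widened for generator operation.

```python
def can_synchronise(gen_freq_hz, nominal_hz=50.0, window_pct=2.0):
    """Return True if the generator output frequency falls inside the
    UPS synchronisation window. The 2% default is an assumed,
    illustrative setting, not a specific product's value."""
    tolerance_hz = nominal_hz * window_pct / 100.0
    return abs(gen_freq_hz - nominal_hz) <= tolerance_hz

print(can_synchronise(50.4))   # small deviation: inside the window
print(can_synchronise(48.2))   # drifted too far: UPS cannot sync
```

A poorly maintained or undersized generator whose frequency drifts below the window is exactly the failure mode described above.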

A generator can never be matched on a 1:1 ratio with an uninterruptible power supply. A UPS will at times be drawing additional current to charge its battery set. Generator sizing may also have to take into account the powering of essential loads, air-conditioning, for example, and emergency lighting. As already mentioned, a UPS rectifier can generate harmonics, and this also needs to be taken into consideration when sizing the generator.

Ambient Temperature

The ambient temperature around a generator is important. The engine room temperature typically rises by around 10 degrees centigrade when a generator is in operation, and if the outside temperature is already high, conditions inside can become hotter still. High ambient temperatures can degrade generator performance and cause damage to turbochargers and exhaust systems. In such instances, it is normal to de-rate the generator and increase the overall size of the installation.

Recommended practice is to oversize a generator by a factor of 1.25 to 2 times the rating of the uninterruptible power supply, and to increase this to 3 times or more when additional essential loads are to be powered.
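The sizing rules above can be pulled together into a rough calculation. This is an illustrative sketch only, using the oversize factors and the derating idea from this article; the function name and example figures are assumptions, and a real installation would be sized with the generator and UPS manufacturers' own data.

```python
def generator_kva(ups_kva, oversize_factor=1.5, essential_loads_kva=0.0,
                  ambient_derate_pct=0.0):
    """Rough generator sizing sketch. The oversize factor covers
    battery-recharge current and rectifier harmonics; essential
    loads (air-conditioning, emergency lighting) are added on top;
    a high-ambient derating then inflates the final figure."""
    sized = ups_kva * oversize_factor + essential_loads_kva
    return sized / (1.0 - ambient_derate_pct / 100.0)

# Illustrative: a 100 kVA UPS with a 1.5x factor, 30 kVA of
# essential loads, and a 10% derate for a hot plant room
print(round(generator_kva(100, 1.5, 30, 10), 1))  # 200.0 kVA
```

Note how quickly the generator grows past the UPS rating once essential loads and derating are included, which is why a 1:1 match is never achievable.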

Be Sure to Plan Carefully When Virtualizing Your Infrastructure

There is a lot of excitement around Microsoft virtualization technologies these days and rightfully so. One of the ‘hottest’ areas right now appears to be making virtual machines highly available using Windows Server 2008 R2 Failover Clusters so end users can take maximum advantage of Live Migration and Cluster Shared Volumes (CSV). This configuration not only saves a lot of money but also provides business continuity in the event of an unforeseen failure in the environment.

While I could spend time extolling the virtues of our virtualization technologies, I am really here to discuss what can happen if one were to get too ‘overzealous’ and not use common sense and a sound plan for implementing the solution correctly. As with many of the blogs you read here on the Rustyice blog, they have been written because of experiences we have had with our customers. This one is no different.

So, what happens when a customer decides they love Microsoft virtualization and high availability technologies so much that they want to virtualize their entire infrastructure? And suppose they want to be sure it’s highly available, so they create a multi-node Failover Cluster to host the virtual machines. When the customer completes the project, they are very proud of what they have done, because now they can retire their old hardware and save tons of money on power and cooling costs in their datacenter. Everyone is happy and celebrations abound. And then it happens: someone decides that they need to shut down the cluster(s), for whatever reason, it does not matter, and, after a while, when they decide it is OK to bring the cluster(s) back online… they cannot. Oh, and one more thing: the clusters are running on Windows Server 2008 R2 CORE. Trust me, this is a true story and has already happened more than once, hence the impetus behind this blog.

If the predicament is not immediately obvious, and it should be for cluster veterans, I will tell you that the cluster service will fail to start because it cannot contact a Domain Controller somewhere in Active Directory. This is because all of the Domain Controllers and DNS servers (critical infrastructure servers) have been virtualized and are, in fact, virtual machines currently supported by the very cluster that is trying to start up. Clearly, this is a case of having all one’s eggs in one basket – not good.

How did we fix this? It was not a quick fix. In a nutshell, the Support Engineer had the customer determine which storage LUN was hosting the VM files for one of the virtualized Domain Controller/DNS servers. Then the LUN was mapped to a standalone server so the VHD file could be copied off to another standalone Hyper-V server, where a new VM could be created and placed in service. Once this was accomplished, the cluster could be started.

How can this type of scenario be avoided?

1. Develop a solid, well thought out migration plan. Ensure the planning team includes people who understand how all the technologies function in a virtualized environment.

2. Have at least one physical Domain Controller/DNS server available in the environment.

3. If #2 is not an option, distribute the virtualized infrastructure servers across multiple Hyper-V clusters and hope they will not all be offline at the same time.

4. Plan to have one or more Hyper-V servers running in a WORKGROUP configuration. Hyper-V servers do not have to be joined to an Active Directory domain. Then distribute some of the virtualized infrastructure servers across these servers.
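The planning check behind these recommendations can be automated in any inventory tooling. This is a hypothetical sketch (the function, inventory format, and host labels are all invented for illustration) that flags the chicken-and-egg condition described above: every Domain Controller living as a VM on the failover clusters themselves.

```python
def all_dcs_clustered(vms):
    """vms: list of dicts like {"name": ..., "role": ..., "host": ...},
    where host is 'cluster', 'standalone' or 'physical'. Returns True
    when every domain controller is hosted on a failover cluster --
    the condition that prevents the cluster service from starting."""
    dcs = [vm for vm in vms if vm["role"] == "domain-controller"]
    return bool(dcs) and all(vm["host"] == "cluster" for vm in dcs)

# Hypothetical inventory for illustration
inventory = [
    {"name": "DC01", "role": "domain-controller", "host": "cluster"},
    {"name": "DC02", "role": "domain-controller", "host": "cluster"},
    {"name": "FS01", "role": "file-server",       "host": "cluster"},
]
if all_dcs_clustered(inventory):
    print("WARNING: no Domain Controller reachable if the cluster is down")
```

Moving even one DC to a physical box or a workgroup Hyper-V host makes the check pass, which is exactly what recommendations #2 and #4 achieve.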

Cloud Computing Used to Hack Wireless Passwords

German security researcher Thomas Roth has found an innovative use for cloud computing: cracking wireless networks that rely on pre-shared key passphrases, such as those found in homes and smaller businesses.

Roth has created a program that runs on Amazon’s Elastic Compute Cloud (EC2) system. It uses the massive computing power of EC2 to run through 400,000 possible passwords per second, a staggering rate, hitherto unheard of outside supercomputing circles–and very likely made possible because EC2 now allows graphics processing units (GPUs) to be used for computational tasks. Among other things, GPUs are particularly well suited to password-cracking tasks.

In other words, this isn’t a clever or elegant hack, and it doesn’t rely on a flaw in wireless networking technology. Roth’s software merely generates millions of passphrases, encrypts them, and sees if they allow access to the network.

However, employing the theoretically infinite resources of cloud computing to brute force a password is the clever part.

Purchasing the computers to run such a crack would cost tens of thousands of dollars, but Roth claims that a typical wireless password can be guessed by EC2 and his software in about six minutes. He proved this by hacking networks in the area where he lives. The type of EC2 instance used in the attack costs 17 pence per minute, so around £1 is all it could take to lay open a wireless network.
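The arithmetic behind that £1 figure is straightforward, using the rate and price quoted above. The function name is an assumption for illustration.

```python
GUESSES_PER_SECOND = 400_000   # rate reported for Roth's EC2 run
PENCE_PER_MINUTE = 17          # quoted EC2 instance price

def crack_cost_pence(total_guesses):
    """Cost, in pence, to work through a given number of password
    guesses at the reported rate and instance price."""
    minutes = total_guesses / GUESSES_PER_SECOND / 60
    return minutes * PENCE_PER_MINUTE

# A six-minute run, as in the article: 144 million guesses
guesses = GUESSES_PER_SECOND * 6 * 60
print(f"{crack_cost_pence(guesses):.0f}p")  # 102p, roughly £1
```

Six minutes at 17p per minute comes to 102p, which matches the article's "about £1" claim.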

Roth intends to make his software publicly available, and will soon present his research to the Black Hat conference in Washington, D.C.

Using EC2 for such ends would be against Amazon’s terms of use, of course, but Reuters quotes Amazon spokesman Drew Herdener as saying that if Roth’s tool is used merely for testing purposes, everything’s above board.

Roth’s intention is to show that wireless computing that relies on the pre-shared key (WPA-PSK) system for protection is fundamentally insecure. The WPA-PSK system is typically used by home users and smaller businesses, which lack the resources to invest in the more secure but complicated 802.1X authentication server system.

WPA-PSK relies on administrators setting a passphrase of between 8 and 63 characters (or 64 hexadecimal digits). Anybody with the passphrase can gain access to the network. The passphrase can include most ASCII characters, including spaces.

WPA-PSK is believed to be secure because the computing power needed to run through all the possibilities of passphrases is huge. Roth’s conclusion is that cloud computing means that kind of computing power exists right now, at least for weak passwords, and is not even prohibitively expensive.

In other words, if your network relies on WPA-PSK, it’s time to check that passphrase. It’s claimed that 20 characters are enough to create an uncrackable passphrase, and the more characters you can include in the passphrase, the stronger it will be. It should be noted that Roth very probably cracked open networks with short passwords.

Include a good variety of symbols, letters and numbers in the passphrase, and change it regularly–monthly, if not weekly. Don’t use words you might find in a dictionary, or any words that are constructed cunningly by replacing letters with numbers (that is, passwords like “n1c3”); hackers are way ahead of you on such “substitution” tricks.

Passphrases constructed like this are effectively impossible for computers to guess by brute force, even by cloud computing systems running Roth’s software, due to the amount of time it would take.

Because WPA-PSK is also calculated using the service set identifier (SSID, or base station name) of the wireless router, it also makes sense to personalize this and ensure it isn’t using the default setting (usually the manufacturer’s name). This will protect you against so-called “rainbow” attacks, which rely on a look-up table of common SSIDs.
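The reason a personalized SSID defeats rainbow tables is visible in the key derivation itself: WPA-PSK derives its 256-bit pairwise master key with PBKDF2-HMAC-SHA1, using the SSID as the salt (per IEEE 802.11i). Python's standard library can reproduce this; the example passphrase and SSIDs below are made up for illustration.

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA pairwise master key: PBKDF2-HMAC-SHA1
    with the SSID as salt and 4096 iterations, as specified in
    IEEE 802.11i."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# The same passphrase yields a completely different key under a
# different SSID, so a table precomputed for "linksys" is useless
# against a personalized network name
print(wpa_psk("correct horse battery", "linksys") ==
      wpa_psk("correct horse battery", "MyHomeNet42"))  # False
```

The 4096 PBKDF2 iterations also slow each guess down, which is part of why even 400,000 guesses per second requires GPU-class hardware.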