https://www.henrik.org/

Blog

Tuesday, July 14, 2015

How to get the most out of your BizSpark Azure credits

BizSpark is arguably one of the best deals on the internet for startups. For me, the key benefit it brings is the 5 x $150 per month of free Azure credits. That said, they are a little bit tricky to claim.

The first thing you need to do is claim all your BizSpark accounts and then, from each of those accounts, claim your Azure credits. This blog post describes the process, so start by doing that.

After doing this you have 5 separate Azure accounts, each with $150 per month of usage. However, what we want is one Azure account where we can see services from all of these subscriptions at once, and that requires a couple more hoops to jump through. In the end you will have one account where you can see and create services from all 5 subscriptions without having to log in and out of the Azure management portal to switch between them.

  1. The first step is to pick the one account you want to use to administrate all the other accounts.
  2. This is a bit counterintuitive, but you need to start by adding every other account as a co-administrator to the account from the first step. Yes, you read that correctly: all the other accounts need to be added as administrators to the main admin account (don't worry, this is temporary).
  3. The following steps need to be done for each of the accounts except for the main account from step 1.
    1. Log into the management console using one of the four auxiliary accounts and go to settings.
    2. Make sure you are on the subscription tab.
    3. Select the subscription that belongs to the account you are currently logged into. It will be the one that has the account administrator set to the account you are logged in as. If you have done this correctly you should see two different subscriptions: one for the account you are logged in as and one for the account from step 1.
    4. Click the Edit Directory button at the bottom.
    5. Make sure you select the directory of the main account from step 1. It shouldn't be hard, because it will be the only one in the list and already pre-selected. If you have already set up any co-administrators on the account you will be warned that they will all be removed.
    6. Add the account from step 1 as a co-administrator to this account, as described in the article linked at the top of the post.
    7. This last step is optional, but all the subscriptions will be called BizSpark and be hard to keep apart, so you might want to rename them.
      1. To do this, go to the Azure account portal at https://account.windowsazure.com/Subscriptions. This page tends to be very slow, so be patient when following links.
      2. Click on the subscription name. Your screen might look different depending on how many subscriptions you have.
      3. Click on Edit Subscription Details.
      4. Enter the new name in the dialog presented. You can also optionally change the administrator to the account from step 1; this will remove the owning account as an administrator from the subscription altogether (although it is still responsible for billing).
  4. You can now remove all the other accounts as administrators of the main account (added in step 2) if you want.

If you follow all these steps, when you log into the account from step 1 you should be able to see all of your subscriptions at the same time in the Azure management portal.

Keep in mind this does not mean that you have $750 to spend however you want. Each subscription still has a separate $150 limit, and you have to puzzle together your services as you create them to keep any of the 5 limits from running out. But at least this way you have a much better overview, in one place, of which services you have provisioned.

Thursday, July 9, 2015

Algorithm for distributed load balancing of batch processing

Just for reference, this algorithm doesn't work in practice. The problem is that nodes under heavy load tend to be too slow to answer in time to hold on to their leases, causing partitions to jump between hosts. I have moved on to another algorithm that I might write up at some point if I get time. Just a fair warning to anybody who was thinking of implementing this.

I recently played around a little bit with the Azure EventHub managed service, which promises high-throughput event processing at relatively low cost. At first it seems relatively easy to use in a distributed manner using the class EventProcessorHost, and that is what all the online examples provided by Microsoft use too.

My experience is that the EventProcessorHost is basically useless. Not only does it not contain any provision that I have found for a retry policy to make its API calls fault tolerant, it is also designed to checkpoint its progress only at relatively long intervals, meaning that you have to design your application to work properly even if events are reprocessed (which is what will happen after a catastrophic failure). Worse than that, though, once you fire up more than one processing node it simply falls all over itself, constantly, causing almost no processing to happen.

So if you want to use the EventHub managed service in any serious way, you need to code directly against the EventHubClient interface, which means that you have to figure out your own way of distributing its partitions over the available nodes.

This leads me to an interesting problem: how do you evenly balance a workload over a number of nodes (in the nomenclature below, the work is split into one or more partitions), any of which can at any time have a catastrophic failure and stop processing, without a central orchestrator?

Furthermore, I want the behavior that if the load is already evenly distributed between the nodes, the pieces of the load should be sticky, meaning that the partitions of work currently allocated to a node should stay allocated to that node.

The algorithm I have come up with requires a Redis cache to handle the orchestration, using only two hash keys and two pub/sub subscriptions. Any key value store that provides publish and subscribe functionality should do, though.

The algorithm has five time spans that are important.

  • Normal lease time. I'm using 60 seconds for this. It is the normal time a partition will be leased without generally being challenged.
  • Maximum lease time. Must be significantly longer than the normal lease time.
  • Maximum shutdown time. The maximum time a processor has to shut down after it has lost a lease on a partition.
  • Minimum lease grab time. Must be less than the normal lease time.
  • Current leases held delay. Should be relatively short; a second should be plenty (I generally operate in the 100 to 500 millisecond range). This is multiplied by the number of partitions a node is currently processing. It can't be too low though, or you will run into scheduler-based jitter with partitions jumping between nodes.
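
For concreteness, here is how these time spans might look as configuration constants. Apart from the 60 second normal lease time and the 100 to 500 millisecond delay range mentioned above, the values (and the Python names) are illustrative assumptions, not prescriptions.

    # Illustrative timing constants; only NORMAL_LEASE_TIME and the rough range
    # of LEASES_HELD_DELAY come from the text above, the rest are assumed values.
    NORMAL_LEASE_TIME = 60.0        # normal time a partition is leased unchallenged
    MAXIMUM_LEASE_TIME = 180.0      # assumed; must be significantly longer than normal
    MAXIMUM_SHUTDOWN_TIME = 30.0    # assumed; max time to shut down after losing a lease
    MINIMUM_LEASE_GRAB_TIME = 30.0  # assumed; must be less than the normal lease time
    LEASES_HELD_DELAY = 0.25        # multiplied by the number of partitions a node holds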

Each node should also listen to two Redis subscriptions (basically notifications broadcast to all subscribers). Each notification sent out carries the affected partition as its payload.

  • Grab lease subscription. Used to signal that the lease of a partition is being challenged.
  • Allocated lease subscription. Used to signal that the lease of a partition has ended when somebody is waiting to start processing it.
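
As a rough illustration of the wiring, here is a minimal redis-py sketch of listening to these two notifications. The channel names and handler functions are my own, and the payload is just the partition id as described above.

    import redis

    r = redis.Redis(decode_responses=True)
    pubsub = r.pubsub()

    def on_grab_lease(message):
        partition = message["data"]   # the partition whose lease is being challenged
        ...                           # schedule the delayed grab attempt (step 2 below)

    def on_allocated_lease(message):
        partition = message["data"]   # the partition whose lease has just ended
        ...                           # wake up anybody waiting to start this partition

    # Channel names are illustrative, not prescribed by the algorithm.
    pubsub.subscribe(**{"grab-lease": on_grab_lease,
                        "allocated-lease": on_allocated_lease})
    listener = pubsub.run_in_thread(sleep_time=0.1)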

There are also two hash keys in use to keep track of things. Each one uses the partition as the hash field, and the field contains the name of the node currently owning that partition.

  • Lease allocation. Contains which node is currently actually processing each partition.
  • Lease grab. Used to race and indicate which node won a challenge to take over processing of a partition.
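
In Redis terms, both hashes map a partition id to a node name, and the lease grab hash is the one nodes race on using a conditional set. A small sketch, with key names of my own choosing:

    # Key names are illustrative.
    LEASE_ALLOCATION = "lease-allocation"  # partition -> node currently processing it
    LEASE_GRAB = "lease-grab"              # partition -> node that won the latest challenge

    def try_win_challenge(r, partition, node_name):
        # HSETNX only sets the field if it is not already set, so exactly one
        # node can win the race for a given partition.
        return bool(r.hsetnx(LEASE_GRAB, partition, node_name))

    def current_owner(r, partition):
        return r.hget(LEASE_ALLOCATION, partition)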

This is the general algorithm.

  1. Once per normal lease time, each node will send out a grab lease notification for each partition that either:
    • it does not yet own and which does not currently have any value set in the lease grab hash, or
    • has not had a lease grab signaled for more than the maximum lease time (this is required for the case when a node dies somewhere after step 3 but before step 6 has completed). If this happens, also clear the lease allocation and lease grab hash fields for the partition before raising the notification, since it is an indication that a node has gone offline without cleaning up.
  2. Upon receipt of this notification the timer for this publication is reset (so generally only one publication per partition will be sent during the normal lease time, but it can happen twice if two nodes send them out at the same time). Also, when this is received, each node will wait based on the following formula.
    • If the node is already processing the partition, it will wait the number of partitions it currently has active times the current leases held delay, minus half of that delay (so basically (locally active partitions - 0.5) * current leases held delay).
    • If the node is not currently processing the partition being grabbed, it should wait the number of locally active partitions plus a half, times the current leases held delay (in other words (locally active partitions + 0.5) * current leases held delay).
  3. Once the delay is done, try to set the lease grab hash field for the partition, using a conditional write so that it is only set if it is not already set.
    • Generally the node with the lowest delay from step 2 should win this race, which also means that the partitions should distribute themselves evenly among the active nodes: the more active partitions an individual node has, the longer it waits in step 2 and the less likely it is to win the race to own the partition lease.
    • If a node is currently processing a partition but did not win this race, it should immediately signal its processing of that partition to gracefully shut down, and once it has shut down it should remove the lease allocation hash field for the partition. Once this is done it should also publish the allocated lease notification. After that is completed, this node should skip the rest of the steps.
  4. Check, by reading the lease allocation hash value, whether a node other than the winner in step 3 is currently busy processing the partition. If this is the case, wait either for the allocated lease notification signaling that the other node has finished shutting down (the second bullet of step 3), or for at most the maximum shutdown time, and then start the partition anyway.
  5. Mark the lease allocation hash with the new current node that is now processing this partition.
  6. Also, after the minimum lease grab time, remove the winning indication from the lease grab hash field for the partition so that it can be challenged again from step 1.
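
To make steps 2 and 3 concrete, here is a rough sketch of the delay-then-race logic, reusing the illustrative names from the sketches above. The shut_down_partition helper is hypothetical, and this is a sketch rather than production code.

    import time

    # LEASE_GRAB, LEASE_ALLOCATION and LEASES_HELD_DELAY are the illustrative
    # names defined in the earlier sketches.

    def handle_grab_lease(r, partition, node_name, owned_partitions):
        """React to a grab lease notification for one partition (steps 2 and 3)."""
        locally_active = len(owned_partitions)
        if partition in owned_partitions:
            # The current owner waits a full delay less than a non-owner with the
            # same number of partitions, which keeps partitions sticky when the
            # load is already balanced.
            delay = (locally_active - 0.5) * LEASES_HELD_DELAY
        else:
            delay = (locally_active + 0.5) * LEASES_HELD_DELAY
        time.sleep(max(delay, 0.0))

        if r.hsetnx(LEASE_GRAB, partition, node_name):
            return True   # won the race; continue with steps 4 to 6
        if partition in owned_partitions:
            # Lost the race for a partition we own: shut it down gracefully,
            # clear the allocation and tell the winner we are done.
            shut_down_partition(partition)          # hypothetical helper
            r.hdel(LEASE_ALLOCATION, partition)
            r.publish("allocated-lease", partition)
        return False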

When I run this algorithm in my tests it works exactly as I want. Once a new node comes online, the workload is redistributed evenly among the new and old nodes within the normal lease time. Another important test is that if you only have one partition, that partition does not skip among the nodes but squarely lands on one node and stays there. And finally, if I kill a node without giving it any chance to do any cleanup, the load is distributed out to the remaining nodes after roughly the maximum lease time.

This algorithm does not in any way handle the case where the load on the different partitions is not uniform. In that case you could relatively easily tweak the formula in step 2 above and replace the locally active partition count with whatever measurement of load or performed work you wish. It will be tricky to keep the algorithm sticky with those changes, though.
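
If I were to attempt that tweak, the delay formula from step 2 might simply swap the partition count for a load measurement, along these lines. The load_for callback is hypothetical and this variant is untested.

    def grab_delay(owned_partitions, partition, load_for):
        # load_for(p) is a hypothetical callback returning some measure of the
        # work partition p is currently generating on this node.
        local_load = sum(load_for(p) for p in owned_partitions)
        if partition in owned_partitions:
            return max((local_load - 0.5) * LEASES_HELD_DELAY, 0.0)
        return (local_load + 0.5) * LEASES_HELD_DELAY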