https://www.henrik.org/

Blog

Showing posts with label #Computers. Show all posts

Sunday, April 25, 2021

My quest for fiber provided by AT&T

When I moved to a new house about two years ago, I was disappointed to learn that there were no options for fiber-based internet in the area, so I would have to take a step down to cable-based internet. Fortunately, I was pleasantly surprised in September of 2020 when AT&T Fiber added support for my area.

First try in September 2020

I ordered it as soon as I discovered it, even though I was a bit hesitant about having people in my house as COVID cases were on the rise (I have a person in my household who is in a risk group). The person on the phone with AT&T assured me that all AT&T personnel involved with the install would be wearing masks though, so I proceeded regardless.

The day of the appointment I was excited and had cleared my schedule. The first person to show up was not the technician, but a salesperson who wanted to make sure I hadn't had any trouble creating my AT&T account (Which I had already set up days before as per the instructions in the AT&T communication). This person also assured me that they get in trouble with AT&T if they do not wear a mask, which felt reassuring to me.

About an hour later, still within the assigned service window, the installation technician showed up. We tried to figure out where the AT&T connection at my house was and eventually found it. Unfortunately, there was no fiber pulled to my house; it needed to be pulled around 100 feet from a neighbor's access point. He tried snaking the existing conduit but failed. He needed to call in a specialist who had better snaking equipment and who, if that failed, might have to do some digging to fix the conduit.

The second technician made an appointment and showed up around a week later with a helper. They spent a good hour trying to get through the conduit. They also failed, and I was told that they would now have to bring in a 3rd party company that would try again and might potentially have to dig a new conduit.

The third technician just showed up with no appointment. He also had no mask and did not have one to put on when I asked about it. I told him to come back when he had a mask and at an appointed time. After this interaction I called AT&T to complain and was told that the mask mandate only really applied to AT&T employees, and since this person was a 3rd party contractor there was nothing they could do. At that point I told the representative that I had only agreed to this after getting the express promise that everybody involved would wear a mask, and since that was not true, I now wanted to cancel the order. The AT&T rep told me that they did not have the authority to cancel my order and instead had to transfer me to a loyalty specialist. I told them that they could cancel the order or not, but that I would not open the door or let them on my property, and hung up.

About a week later another man showed up from AT&T, also not wearing a mask. I told him through the closed door that no, I had not ordered any AT&T Fiber, and that he should go away. A week after that an additional person showed up from AT&T, this time with a mask. I explained to him what was going on and he apologized and said that he knew how to make this issue go away for me. And in fact, it turned out that he did, because that was the last I heard from AT&T for the time being.

In total this first try involved 7 visits from AT&T with a total of 8 people visiting my house.

Second try in March 2021

Skip forward to March 2021. As I was now vaccinated, I decided it was time to make another try. I also happened on an ad for AT&T Fiber with a good introductory offer, so I decided to try again with an order placed online on March 8th. I got an initial appointment for the morning of March 18th. About an hour after the appointed time, with no visit, I called AT&T customer support. I was told not to worry; the technician was just running late and still on the way. After another 3 hours I called again and was then told that the technician had gone to the wrong house, and since nobody at that house had ordered internet, he had left. At no point during this had AT&T proactively reached out to me to let me know what was going on.

Slightly miffed, I rescheduled the appointment for a week later. That day came around and no technician showed up either. Around 2 hours after the appointment window ended, I called support again. The first person told me not to worry, the technician was just running late. I told them about my experience last time and was transferred to a second operator. This operator said the same thing. At this point I told the operator that this was no problem; however, if the technician did in fact not show up, then they did not have to try again. At this point the agent transferred me again to a "Loyalty Specialist".

This third person that I spoke to did in fact do some digging and figured out that when the technician had gone to the wrong house and left, he had in fact cancelled the entire installation. And my rescheduling it with the support agent did not actually reopen that ticket, so there was no technician coming. He then proceeded to say that I shouldn't worry, he knew how to restart the process properly. At that point I said "Thank you, but no thank you. You gave it the old college try but couldn't even get a technician to my house in 2 tries, so I am done".

Third try using Sonic Internet

I had discovered that Sonic Internet also resold AT&T Fiber at my location and figured that at least in that case I would deal with a support department that was prompt and knowledgeable even though I would still have to deal with AT&T for the actual installation. The same day that I cancelled the second try with AT&T I ordered Fiber from Sonic instead.

The first appointment was scheduled on March 31st. Just as with the original visit back in September, the technician found that the conduit was broken and needed to be fixed. This technician managed to get the specialist team to do the second visit the same day though. They showed up as a 2-person crew and told me in refreshing detail what needed to happen next. First an underground survey needed to be performed, after which a digging crew would be dispatched to fix the conduit. I should expect the survey to happen within a few days and the digging crew to show up in a week or two.

On April 6th I had a second appointment scheduled, at which time an AT&T technician showed up to install my internet on the assumption that the fiber had by this time already been installed. Of course, the underground survey had not even happened yet, so he had to leave without anything being done.

On April 8th, 2 big trucks with a team of 5 people showed up. They started by taking an hour lunch and after that got down to the work of digging my conduit. When I pointed out that the underground survey had not yet been done, they got a bit flummoxed and told me that unfortunately they could not do any digging until it had been completed. But the foreman told me that he had put in a rush order to make sure the survey would get done as soon as possible.

On April 12th I got an email from Sonic telling me that they had been instructed by AT&T to check that my internet was working correctly. At this point of course, there had still not been any actual work done by AT&T, so I sent an email to Sonic support letting them know this.

On April 15th I got a notification from Sonic telling me that the installation of my internet had been scheduled for April 19th. Since this sounded strange, I contacted Sonic support to tell them that I was not currently waiting for an AT&T technician, but for an underground survey. What I was told is that the last people who were here had marked the installation as complete (Which is why I got a notification earlier in the week making sure my internet was working correctly) and because of that they now had to start over from the beginning. That means a person must first come out and assess that a dig needs to happen (So starting all the way from scratch again). The Sonic rep told me that they had gotten into a discussion with AT&T that got so heated that the AT&T rep hung up.

On April 16th I got a visit from a cheerful AT&T customer service rep asking me how I was enjoying my new AT&T Fiber internet. She got an earful of what I thought of AT&T at that moment.

April 19th comes around and I get a visit from another AT&T customer service rep to help me set up my AT&T account. I explain the situation to him, and he promises to get on the phone with his manager to see if there is anything he can do to help. While he does that, the AT&T installation technician shows up. The technician asks if I am speaking for Yvonne. I tell him that I have no idea what he is talking about, and he tells me that his work order says he is there to install internet for an Yvonne from Florida through the third-party provider Earthlink (Not Sonic). There is literally nothing in the work order that is correct except for my address. I do manage to get the technician on the phone with Sonic support and both escalate the issue to their managers. In the end there is nothing AT&T can do to have the technician do the work, even though he is here. He has to come back at a later date when the order has been corrected. At this point the AT&T service rep steps back in and says that he will take me under his wing and sort this out for me. I pointedly ask him if that means I would become an AT&T customer instead of a Sonic one. He says yes, and I politely refuse.

After the AT&T people leave, I spend some more time with Sonic support. They promise to get back to me when this is sorted out. While this is happening, on April 22nd, another AT&T technician shows up to do a fiber install for Yvonne of Florida. Later that same day I hear back from Sonic support, and they tell me that they have sorted out the issue with AT&T and that I now have an appointment for April 27th (Next Tuesday) to get this process started.

To sum up

So far AT&T have made 15 visits to my house with a total of 21 people. That does not include the visit they made to the wrong house in the second attempt, or the 2 appointments they scheduled for that try. There has been no progress whatsoever toward actually installing fiber, and the person who is coming on the 27th is, from AT&T's perspective, the first person they are dispatching for this install.

To be continued...

Tuesday, May 3, 2016

Comparing Macbook Pro to Windows 10 based laptop for software development

My post from a few years ago about Why I hate Mac and OSX is by far the most read post I have ever published on this blog (Somebody cross-posted it to an OSX advocacy forum and the flame war was on). It has been a few years, both OS X and Windows have moved on since 2009, and hardware has improved tremendously. I have also started a job which more or less requires me to use a Mac laptop, so I have recently spent a lot of time working with a Mac again, and I figured I would revisit the topic of what I prefer to work with.

The two laptops I will be comparing are a Dell Precision 7510 running Windows 10 and a current 2015 Macbook Pro running OSX El Capitan.

Before I start the comparison, I'll describe what I use a computer for and how. I'm a software developer and have been for decades. I prefer to use my keyboard as much as possible; if there is a keyboard shortcut, I will probably pick it up pretty quickly. I tend to want to automate everything I do if I can. I have great eyesight, and pretty much the most important aspect of a laptop for me is that it has a crisp, high resolution screen (Preferably non-glossy), which to me translates to more lines of code on the screen at the same time. So with that in mind, let's get started.

Screen

This one is fortunately easy. For some bizarre reason OSX no longer allows you to run in native resolution without installing an add-on. Even with that add-on installed, the resolution is a paltry 2880 by 1800 compared to 3840 by 2160. That means that on my DELL I can fit almost twice as much text on the screen. Also, Macs are only available with a glossy screen, which is another strike against them. I don't care at all about color reproduction or anything like that, even though I hear the Mac is great at it (And so supposedly is the DELL).

Windows used to have pretty bad handling of multiple screens before Windows 10, especially with unusually high resolutions. This has gotten a lot better with Windows 10. That said, OSX has great handling of multiple screens, especially when you keep plugging in and out of a bunch of screens; things just seem to end up on the screen they are supposed to be on when you do. Windows is much less reliable in this sense. Still, the better handling of multiple screens comes nowhere near making up for the disaster that is the OSX handling of native resolutions, or the low resolution of the retina display.

Winner: Windows

Portability

The PC is as a friend of mine referred to it "a tank". It is amazing how small and light the Macbook Pro is compared to everything that they crammed into it.

Winner: OSX

Battery Life

I can go almost a full day on my Mac, my PC I can go a couple of hours. No contest here, the Macbook Pro has amazing battery life.

Winner: OSX

Input Devices

Let me start off by saying that the track pad on the Mac is fantastic. Definitely the best I have ever used on any computer, in any category. That said, it doesn't show you where the buttons are (I hate that), and the 3D touch feature is completely awful on a computer (I don't really like it on a phone either, but there it has its place). I started this review by saying that I use the keyboard a lot, and when it comes to productivity there is absolutely no substitute for a track point, that weird little stick in the middle of the keyboard that IBM invented. The reason it is superior is that when I need to use it I never have to move my fingers away from their typing position on the keyboard, so I don't lose my flow of typing if I have to do something quickly with the mouse.

When it comes to keyboards, both the Macbook Pro and the DELL Precision laptops have great ones. However, for some weird reason Macbooks still don't have Page Up and Page Down keys. And not only are there no dedicated keys for this, there isn't even a default keyboard shortcut that does it (Scroll up and scroll down, which are available, are not the same thing), so to get it at all you need to do some pretty tricky XML file editing. You also don't have dedicated keys for Home and End on a Macbook Pro. Given how much unused space there is around the keyboard on an open 15" Macbook Pro, I find this inexcusable.

Winner: Windows

Support

With my Windows machine (And this is true for pretty much any tier 1 Windows laptop supplier) I call a number or open a chat, and 1 to 2 days later a guy shows up with the spare parts required to fix it. With Apple I take it to the store and then they usually have to ship it somewhere; it takes a week or two... If you are lucky. For me that would mean I couldn't work for those two weeks if I didn't have a large company with its own support department to provide me with a replacement where Apple falls short.

Winner: Windows

Extensibility

I can open up my PC and do almost all service myself. Dell even publishes the handbook for doing it on their support site. Replacing the CPU would be very tricky because I think it is soldered to the motherboard, but everything else I can replace and upgrade myself. I also have 64GB of memory and two hard drives, and if I want to upgrade a component in a year or two it won't be a problem. The Macbook Pro has Thunderbolt 2, which is great (Although the PC has a Thunderbolt 3 port), but that is pretty much it in regards to self-service upgrades.

Also, my PC beats the Mac on pretty much any spec: HD speed, size, CPU, GPU, memory.

Winner: Windows

Price

Everybody talks about the Apple tax. I don't find that to be very true. A good laptop (And don't get me wrong, both of these are great laptops) costs a lot of money. My PC cost quite a bit more than the Macbook Pro did. Granted, it has better specs, but I don't think there is really any difference in price when you go high end with a laptop purchase.

Winner: Tie

Productivity

For me productivity is synonymous with simplicity and predictability. Specifically, I move around a lot of different applications and I need to be able to get to them quickly, preferably through a keyboard shortcut, and I want to do it the same way every time. With that in mind, OSX is an unmitigated disaster in this area. First of all, you have to keep track of whether the window you want to get to is in the same application or another one. And if it is in another application, you first have to swap to that application and then use a different keyboard shortcut to find the specific window within it.

I do like that you can create multiple desktops and assign specific applications to specific desktops (Predictable!). However, when you go full-screen with a window it moves to another desktop, and that desktop has no predictability at all in where it is placed relative to the other ones; it is strictly the order in which they were created. Going on, I still don't understand how OSX doesn't have a Maximize window button that takes the window and just makes it fill the screen. There are some third party tools that help you a bit with this madness (Like being able to maximize windows without going full-screen, for instance). Regrettably, in my opinion this is an area where OSX is moving backwards; the original Exposé was actually pretty good compared to the current mess. Also, I don't like having the menu bar at the top of the screen, because it is usually further away from where my mouse currently is, which means it takes longer to get there.

Meanwhile, Windows 10 took a huge leap in this area with the snapping of windows to the side, optionally letting you select another window to show next to it. And you can easily switch to any window quickly using one keyboard shortcut, same as always.

A side note that doesn't affect me much, but does kind of need to be stated: unsurprisingly, Microsoft Office 2016 is just so much better on Windows than on OSX.

Winner: Windows

Development Environment

When it comes to development environments, everything Java is available for both platforms, so as far as I am concerned this comes down to comparing Visual Studio to XCode. Obviously the choice also depends on whether you are developing in Swift or C#, but since Visual Studio has recently moved more and more into the multi-platform arena, this is more of a real choice every day.

XCode has improved in leaps and bounds since the original versions I worked with (I started working with it around version 3). However, there is simply no contest here. Visual Studio is the best development environment that I know of, both when it comes to native features and to the 3rd party extension ecosystem that supports it. The only one that might possibly come close, as far as I am concerned, is IntelliJ.

Winner: Windows

Command Line Interface and Scripting

This is also a very easy call. OSX is Unix based and has a real shell, Perl, and SSH installed with the OS. Sure, Powershell is OK, but I just don't like it. I would argue that the terminal emulation in PuTTY seems a little bit better than Terminal's, but on the other hand it doesn't have tabs and it also isn't installed by default.

Winner: OSX

Software Availability

This is a tricky category because there is obviously a lot more software available for Windows than for OSX. However, I find OSX has a lot of really good software that isn't available on Windows in similar quality. So I'm going to call this another tie.

Winner: Tie

Reliability

You would think that this is an easy win for the Mac, and for normal non power users I would say that is absolutely true. It is harder for a non technical user to mess up an OSX system than a Windows system, no question about it. I, however, tend to tinker with stuff that normal people wouldn't, and I have managed to mess up my Mac several times to the point where it will not boot and I have had to completely reinstall the OS. That said, I think I have done the same thing more times on Windows than on OSX. I am also a little bit worried about Apple's general stance on solving security issues in a timely manner, something that Microsoft is actually really good at. So even though this is not as much of a slam dunk as you would think, I still have to give it to OSX.

Another thing I would like to add here is that pretty much every PC that I have bought has had some part of the hardware that did not quite live up to expectations. On my previous laptop, a DELL Precision m4800, it was the keyboard (In 2 years I replaced it 6 times); on this one I am still working with support on fixing some flakiness with the trackpoint. I have never had similar issues with any Apple computer (Although I did have an iPad 4 where the screen just shattered when I placed it on a table for no reason).

Winner: OSX

Conclusion

If you travel a lot and need to work on battery a lot I think you might want to give the Macbook a go. It's pretty neat.

That said, the clear winner for me when it comes to productivity, usability, and just raw performance for software development is going to be a Windows machine. The beauty of Windows is that since there are so many machines to choose from, you can usually find one that fits you exactly (There are obviously PCs that are very similar to the Macbook Pro; for instance the bezel-less Dell XPS 15 looks pretty sweet if you are looking for a PC equivalent of a Macbook Pro).

Winner: Windows

Thursday, July 9, 2015

Algorithm for distributed load balancing of batch processing

Just for reference, this algorithm doesn't work in practice. The problem is that nodes under heavy load tend to be too slow to answer to hold on to their leases, causing partitions to jump between hosts. I have moved on to another algorithm that I might write up at some point if I get time. Just a fair warning to anybody who was thinking of implementing this.

I recently played around a little bit with the Azure EventHub managed service, which promises high throughput event processing at relatively low cost. At first it seems relatively easy to use in a distributed manner using the class EventProcessorHost, and that is what all the online examples provided by Microsoft use too.

My experience is that the EventProcessorHost is basically useless. Not only does it not contain any provision that I have found for providing a retry policy to make its API calls fault tolerant, it is also designed to checkpoint its progress at relatively few intervals, meaning that you have to design your application to work properly even if events are reprocessed (Which is what will happen after a catastrophic failure). Worse than that, though, once you fire up more than one processing node it simply falls all over itself, constantly, causing almost no processing to happen.

So if you want to use the EventHub managed service in any serious way, you need to code directly against the EventHubClient interface, which means that you have to figure out your own way of distributing its partitions over the available nodes.

This leads me to an interesting problem: how do you evenly balance a load of work over a certain number of nodes (In the nomenclature below the work is split into one or more partitions), any of which can at any time have a catastrophic failure and stop processing, without a central orchestrator?

Furthermore, I want the behavior that if the load is completely evenly distributed between the nodes, the pieces of the load should be sticky, meaning that the partitions of work currently allocated to a node should stay allocated to that node.

The algorithm I have come up with requires a Redis cache to handle the orchestration, using only two hash keys and two subscriptions. But any key value store that provides publish and subscribe functionality should do.

The algorithm has 5 time spans that are important.

  • Normal lease time. I'm using 60 seconds for this. It is the normal time a partition will be leased without generally being challenged.
  • Maximum lease time. Must be significantly longer than the normal lease time.
  • Maximum shutdown time. The maximum time a processor has to shut down after it has lost a lease on a partition.
  • Minimum lease grab time. Must be less than the normal lease time.
  • Current leases held delay. Should be relatively short; a second should be plenty (I generally operate in the 100 to 500 millisecond range). This is multiplied by the number of currently processing partitions. It can't be too low though, or you will run into scheduler based jitter causing partitions to jump between nodes.

Each node should also listen to two Redis subscriptions (Basically notifications to all subscribers). Each notification sent out names the partition being affected.

  • Grab lease subscription. Used to signal that the lease of a partition is being challenged.
  • Allocated lease subscription. Used to signal that the lease of a partition has ended when somebody is waiting to start processing it.

There are also two hash keys in use to keep track of things. Each one contains the hash field of the partition and will contain the name of the host currently owning it.

  • Lease allocation. Contains which node is currently actually processing each partition.
  • Lease grab. Used to race and indicate which node won a challenge to take over processing of a partition.

This is the general algorithm.

  1. Once per normal lease time, each node will send out a grab lease subscription notification for each partition that either:
    • It does not yet own and which does not currently have any value set in the lease grab hash key, or
    • Has not had a lease grab signaled for more than the maximum lease time (This is required for the case when a node dies after step 3 but before step 6 has completed). In this case, also clear the lease allocation and lease grab hash fields for the partition before raising the notification, since this indicates a node has gone offline without cleaning up.
  2. Upon receipt of this notification, the timer for this publication is reset (So generally only one publication per partition will be sent during the normal lease time, but it can happen twice if two nodes send them out at the same time). Each node then waits based on this formula:
    • If the node is currently already processing the partition, it waits the number of partitions it has active times the current leases held delay, minus half of that delay (So (Locally active partitions - 0.5) * current leases held delay).
    • If the node is not currently processing the partition being grabbed, it waits the number of partitions it has active plus a half, times the current leases held delay (So (Locally active partitions + 0.5) * current leases held delay).
  3. Once the delay is done, try to set the lease grab hash field for the partition with a conditional transaction requiring that it is not already set.
    • Generally the node with the lowest delay from step 2 will win this race, which also means that the active partitions should distribute evenly among the active nodes, since the more active partitions an individual node has, the longer it waits in step 2 and the less likely it is to win the race to own the partition lease.
    • If a node is currently processing a partition but did not win the race, it should immediately signal its partition to gracefully shut down, and once that is done it should remove the lease allocation hash field for the partition and publish the allocated lease subscription notification. After that is completed, this node skips the rest of the steps.
  4. Check by reading the lease allocation hash value whether a node other than the winner in step 3 is currently busy processing the partition. If so, wait either for the allocated lease subscription notification from step 3 signaling that the other node has finished, or, if that does not happen, for at most the maximum shutdown time, and then start the partition anyway.
  5. Mark the lease allocation hash with the new node that is now processing this partition.
  6. After the minimum lease grab time, remove the winning indication in the lease grab hash field for the partition so that it can be challenged again from step 1.
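
The step-2 backoff is what drives both the even distribution and the stickiness: a node that already holds the lease waits half a delay unit less than its partition count, while a challenger waits half a unit more. Here is a minimal sketch of just that part in Python; the names and the in-memory stand-in for the Redis conditional-set race are illustrative, not code from my actual implementation.

```python
# Sketch of the step-2 delay formula. The node that computes the
# shortest delay reaches the conditional set (Redis SETNX) first and
# wins the lease race.

LEASES_HELD_DELAY = 0.2  # seconds; the "current leases held delay"

def grab_delay(active_partitions, holds_partition):
    """Backoff before a node tries to grab a challenged lease."""
    if holds_partition:
        # Current holder: (locally active partitions - 0.5) * delay
        return (active_partitions - 0.5) * LEASES_HELD_DELAY
    # Challenger: (locally active partitions + 0.5) * delay
    return (active_partitions + 0.5) * LEASES_HELD_DELAY

def race_winner(nodes, partition):
    """nodes maps node name -> set of partitions it is processing.
    Stand-in for the SETNX race: the shortest delay wins."""
    return min(nodes, key=lambda n: grab_delay(len(nodes[n]), partition in nodes[n]))

# Even load: the current holder waits less, so the partition is sticky.
print(race_winner({"a": {"p0"}, "b": {"p1"}}, "p0"))  # a

# Uneven load: an idle node waits less and takes the partition over.
print(race_winner({"a": {"p0", "p1", "p2"}, "b": set()}, "p0"))  # b
```

With even load the holder always reaches the conditional set first and keeps its lease, while a node with spare capacity undercuts an overloaded holder, which is exactly the sticky-but-balanced behavior described above.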

When I run this algorithm in my tests it works exactly as I want. Once a new node comes online, within the normal lease time the workload has been distributed evenly among the new and old nodes. Another important test is that if you only have one partition, the partition does not skip among the nodes, but squarely lands on one node and stays there. And finally, if I kill a node without giving it any chance to do any cleanup, after roughly the maximum lease time the load is distributed out to the remaining nodes.

This algorithm does not in any way handle the case when the load on the different partitions is not uniform. In that case you could relatively easily tweak the formula in step 2 above and replace the locally active partition count with whatever measurement of load or performed work you wish. It will be tricky to keep the algorithm sticky with those changes though.

Thursday, June 25, 2015

Designing for failure

One of the first things you hear when you learn how to design for the cloud is that you should always design for failure. This generally means that any given piece of your cloud infrastructure can stop working at any time, so you need to account for this in your architecture and gain reliability by building in redundancy, so that any given part of your application's infrastructure can fail without affecting the actual functionality of the website.

Here is where it gets tricky though. Before I actually started running things in a cloud environment, I assumed this meant that every once in a while a certain part of your infrastructure (For instance a VM) would go away and be replaced by another computer within a short time. That is not what designing for failure means. To be sure, this happens too, but if that were the only problem you encountered, you could even design your application to deal with failures manually as they happen. In my experience, even in a relatively small cloud environment you should expect random intermittent failures at least once every few hours, and you really have to design every single piece of your code to handle failures automatically and work around them.

Every non local service you use, even the ones designed for ultra high reliability like Amazon S3 and Azure Blob Storage, can be assumed to fail a couple of times a day if you make a lot of calls to them. The same goes for any database access or any other API.

So what are you supposed to do about it? The key is that whenever you call a remote service, you need to verify that the call succeeded, and if it didn't, keep retrying. Most failures that I have encountered are transient and tend to pass within a minute at most. Design your application to be loosely coupled, and whenever a piece of the infrastructure experiences a hiccup, just keep retrying for a while and usually the issue will go away.
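
In language-agnostic terms, the pattern is just a retry loop with backoff around every remote call. Here is a minimal sketch in Python; the function names and the way transient errors are classified are placeholders of my own, not any particular library's API.

```python
import logging
import random
import time

log = logging.getLogger("retry")

def call_with_retries(action, is_transient, max_attempts=5, base_delay=0.5):
    """Run action(), retrying transient failures with exponential
    backoff plus jitter. Emits one summary log line per call rather
    than one line per failed attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = action()
        except Exception as exc:
            if attempt == max_attempts or not is_transient(exc):
                log.error("giving up after %d attempt(s): %s", attempt, exc)
                raise
            # Transient failure: back off and try again.
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(1.0, 2.0))
        else:
            if attempt > 1:
                log.info("call needed %d attempts", attempt)
            return result

# Example: a call that fails twice with a transient error, then succeeds.
attempts = {"n": 0}

def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("transient hiccup")
    return "payload"

print(call_with_retries(flaky_fetch, lambda e: isinstance(e, TimeoutError),
                        base_delay=0.01))  # payload
```

The jitter matters in practice: if many nodes retry on the same schedule after a shared outage, they all hammer the recovering service at the same instant.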

Microsoft has some code to help you do this called the Transient Fault Handling Application Block. If you are using Entity Framework everything is done for you and you just have to specify a retry execution strategy by creating a class like this.

    public class YourDbConfiguration : DbConfiguration
    {
      public YourDbConfiguration()
      {
        SetExecutionStrategy("System.Data.SqlClient",
                             () => new SqlAzureExecutionStrategy());
      }
    }

Then all you have to do is add an attribute to your Entity context class specifying which configuration to use, like so.

    [DbConfigurationType(typeof(YourDbConfiguration))]
    public class YourContext : DbContext
    {
    }

It also comes with more generic code for retrying execution. However, I am not really happy with the interface of the retry policy functionality. Specifically, I could not figure out a way to create a generic log function that lets me log the failures so I can see what is actually requiring retries. I also don't want a gigantic log file just because for a while every SQL call takes 20 retries with each one being logged. I'd rather get one log message per call indicating how many retries were required before it succeeded (or not).

So to that effect I created this little library. It is compatible with the transient block mentioned earlier in that you can reuse its retry strategies and transient exception detection. It does improve on logging as mentioned before. Here is some sample usage.

      RetryExecutor executor = new RetryExecutor();
      executor.ExecuteAction(() =>
        { ... Do something ... });
      var val = executor.ExecuteAction(() =>
        { ... Do something ...; return val; });
      await executor.ExecuteAsync(async () =>
        { ... Do something async ... });
      var val = await executor.ExecuteAsync(async () =>
        { ... Do something async ...; return val; });

By default only ApplicationExceptions are passed through without retries. The default retry strategy tries 10 times, waiting a number of seconds equal to the number of previous tries before the next attempt (which means it will signal a failure after around 55 seconds). The logging will just write to standard output.
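To illustrate the one-summary-log-line-per-call idea, here is a stripped-down sketch of such a retry executor. All names are made up for the example and backoff is omitted for brevity; this is not the actual API of the library:

```csharp
using System;

public static class LoggingRetry
{
    // Runs an action with retries and emits a single summary log line per
    // call, rather than one log line per failed attempt.
    public static T Execute<T>(Func<T> action, Action<string> log, int maxTries = 10)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                T result = action();
                if (attempt > 1)
                    log($"Succeeded after {attempt - 1} retries");
                return result;
            }
            catch (Exception ex) when (attempt >= maxTries)
            {
                // Out of tries: log one summary line and give up.
                log($"Failed after {attempt - 1} retries: {ex.Message}");
                throw;
            }
            catch (Exception)
            {
                // Treated as transient; retry without logging yet.
            }
        }
    }
}
```

The point of the design is that a burst of transient failures produces one informative line per call instead of flooding the log.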

Thursday, May 7, 2015

C# Task scheduling and concurrency

It is very hard to figure out how the new async Task API for handling threading and concurrency works in .NET 4.5. I have dug around a lot to try to find documentation on this topic and have mostly failed, so I decided to simply figure it out by writing some test applications that checked how it actually behaves. It is important to note that this is how threading works in a console .NET 4.5 application on Windows 8.1. I would not be surprised if the specific numbers of the thread model were different in a server setting, a different OS version or even a different .NET version. So without further ado, here are my findings.

First of all, if you simply start a lot of Tasks that all run for a long time you quickly notice that by default the .NET runtime will allocate a minimum of 8 threads to run tasks. Then it gets interesting, because for every second that the task queue keeps being full another thread is added. This keeps going all the way up to a maximum of 1023 threads. After 1023 threads have been allocated no more threads will be allocated for any reason, so any remaining tasks will wait to start until a previous task has completed. If a thread executes no tasks at all for 20 seconds it is removed from the thread pool.

There are also odd things happening with the order in which tasks are scheduled. For instance, the code below will run very slowly, because no tasks from the second loop will be scheduled until the thread pool has expanded enough to run all tasks from the first loop concurrently (so for almost 100 seconds no processing will happen).

      var firstTasks = new List<Task>();
      var secondTasks = new List<Task>();

      for (int i = 0; i < 100; i++)
      {
        int thread = i;
        firstTasks.Add(Task.Run(() =>
        {
          Thread.Sleep(100);
          // Do something else
          secondTasks[thread].Wait();
        }));
      }

      for (int i = 0; i < 100; i++)
      {
        secondTasks.Add(Task.Run(() =>
        {
          // Do something in the background.
        }));
      }

In fact, if you increase the upper bound of i from 100 to 1024 this example will never finish, since all 1023 available threads will be taken up by initial tasks waiting for second tasks that will never be scheduled for execution because of thread exhaustion.

This might seem like a contrived example, but it is actually not that uncommon to end up with a similar scenario if you use non-async code inside a task in a complicated multithreaded application. If you instead write the code as below, it will complete almost immediately and not have any issues regardless of how many iterations of the loop you make. When you call Wait() on a task that was started from the waiting thread and hasn't begun executing yet, the runtime can inline it and execute it immediately on that thread (as long as it hasn't already been scheduled to run on another thread).

      for (int i = 0; i < 100; i++)
      {
        Task.Run(() =>
        {
          Thread.Sleep(100);
          Task secondTask = Task.Run(() =>
          {
            // Do something in the background.
          });
          // Do something else
          secondTask.Wait();
        });
      }

One last thing you have to be very careful about when it comes to tasks, especially when using the async syntax, is that once you await something there is absolutely no guarantee that execution continues on the same thread. So for instance this code is just waiting to create a deadlock that will be really hard to track down.

      object lockObj = new object();

      Monitor.Enter(lockObj);
      // Execution may resume on a different thread after the await,
      // so this Exit can run on a thread that never took the lock.
      await MethodAsync();
      Monitor.Exit(lockObj);

There really is no safe way to do this kind of locking with Monitor, but if you absolutely need to lock a resource while doing async coding you can use semaphores, which do not require being released from the same thread. This generally doesn't lead to good code though, and usually, if you think about where your synchronization code is, you can avoid holding locks across awaits, although it might take a little bit of extra work.
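For instance, SemaphoreSlim can serve as an async-friendly lock, since unlike Monitor it has no thread affinity and can be released from whichever thread execution resumes on. A minimal sketch (the Resource wrapper and method names are just illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class Resource
{
    // A SemaphoreSlim with a count of 1 acts as a mutual exclusion lock
    // that is safe to hold across an await.
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public async Task UseAsync(Func<Task> methodAsync)
    {
        await _gate.WaitAsync();
        try
        {
            // Execution may continue on a different thread here, which is
            // fine because SemaphoreSlim does not care which thread
            // releases it.
            await methodAsync();
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

The finally block ensures the semaphore is released even if the protected work throws, mirroring what a lock statement would do for synchronous code.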

Thursday, October 9, 2014

Finally someone explained why Sweden has so much better IT infrastructure than the US

Ran into this article about why Sweden has so much better internet connectivity than the US. I've been complaining about this for years: even now, 10 years later, I am still paying more for less bandwidth than I had before I moved from Sweden (and this is unfortunately the norm, not a fluke). The reason is quite simply that governments here don't dare to tread on the toes of large entrenched economic interests.

Tuesday, August 28, 2012

Is your VPN secure? The answer might surprise you!

During DEF CON 20 a new attack against the MS-CHAPv2 protocol was announced that basically reduces the complexity of cracking an MS-CHAP login down to a single DES 56-bit brute force attack. The presenters also combined this with a new service on the site CloudCracker which will handily brute force the DES key for you in less than 24 hours.

The input required is a network capture of the MS-CHAPv2 handshake. For now there are a few manual steps, but they shouldn't be beyond anybody with a basic understanding of networks and command line tools. The payoff is huge though: once you have the cracked token you can both listen in on any subsequent traffic from the authenticated user and authenticate as the user yourself.

MS-CHAP authentication is currently used in almost all PPTP VPN networks (it is usually the default authentication). It is also often used in enterprise WiFi authentication, but there the handshake is already encrypted using TLS, so the attack is usually not possible in that case.

Microsoft has put out a security advisory (although they are by no means the only affected vendor) advising everybody to switch to EAP authentication for PPTP. However, the change is not an easy one, since it needs to be configured on both the client and the server side of the VPN tunnel.

Wednesday, August 22, 2012

How to protect your digital life

I've already written in another article about how to digitize your life and the benefits that can bring. When you do this you need to start thinking about how to make sure it stays secure, as was highlighted in spectacular fashion by Mat Honan, who almost lost everything he had in digital form, including all the photos of his 1 year old daughter. So I figured I would write up some things you can do to help protect yourself online.

Securing your central email account

Almost every service you use will allow you to reset your password by sending an email to an account you gave them when you signed up for the service. This obviously means that it is critical that you protect this account as much as you can. To this end, make sure that this account supports two factor authentication and make sure you enable it. It is a little bit of extra hassle to set up, but the extra security it buys you is absolutely worth it. Currently, as far as I know, GMail and Facebook support this (your phone being the second factor in both cases). Unfortunately Yahoo, Outlook and Hotmail do not.

Furthermore, don't use your work or internet provider email as your central account. It will be a pain in the ass if you ever need to switch internet providers or move to another job, because all of a sudden you need to go in and reconfigure all your accounts to another email address. Also keep in mind that your employer has the right to read and use your company email address, so using that for anything you want to keep for yourself is just a bad idea.

As a note, if you use Google Authenticator for your Google account you only have one chance to set up a device for it (or you have to start over from the beginning), so if you want to have it on more than one device make sure you set them all up while you still have the chance.

Handling your passwords

Creating secure passwords is getting harder and harder. Here are some tips about what to do now.

  • Make sure they are at least 8 characters long, preferably longer.
  • Use lower case, upper case, digits and special characters in your passwords.
  • Don't use passwords that are words or combinations of words.
  • Don't use the same password for all your sites. At the very least use unique passwords for sites that are important (for instance your central email account, accounts that deal with real money, or sites where you have saved your credit card information).
  • One method to make a password more secure is to use password haystacks.

What I am trying to say here is that you can't realistically remember all your passwords everywhere, and I can totally recommend LastPass to help you out. For a detailed evaluation of its security model check out this Security Now episode.

In case you don't have 2 hours to watch the video here it is in short.

  • It is completely Trust No One, meaning LastPass can never retrieve your passwords even if they wanted to.
  • It supports two factor authentication (Using Google Authenticator from above).
  • It supports every platform that I use. The iOS support kind of sucks though. On Android you will want to use either the Dolphin or Firefox browser.
  • It contains a password generator so you don't have to think up good passwords yourself.

Make sure you have a backup of everything

This can't be repeated enough. Even though data is rarely lost from online services, it does happen, and worse, an attacker might wipe an account once they are done with it just to cover their tracks (as happened in the Mat Honan case mentioned above).

Backupify is a great service that allows you to back up a lot of online services. For your computers I can recommend CrashPlan, which is very cheap and easy to set up but still has tons of features for the advanced user. If you make a backup onto an external drive, make sure that drive is not stored in your house, since a fire or a burglar might get both the original and the copy if they are stored in the same place.

Don't enable remote wipe of your laptop

One of the main reasons the Mat Honan hack turned so disastrous was that he had enabled remote wiping on his laptop, so when a hacker compromised his iCloud account they could also wipe his laptop. Remote wipe is a feature that makes a lot of sense on a cell phone, since most of us have lost at least a few of those by this point in our lives. Laptops are lost a lot more rarely, and unless you have critically secret stuff on yours, I don't think the risk of someone being able to remote wipe it simply by getting into one of your cloud accounts is worth the benefit. If it is for you, though, make sure you have that backup.

Keep your password recovery questions secret

A lot of services allow you to set up security questions that allow you to reset your password. Make sure that the answers to these questions are not available online.

For example, your first school is probably not a good idea if you grew up in a small town like me, since there aren't that many schools to choose from. Other bad examples are your mother's maiden name or your first car (are you sure you didn't post a picture of it somewhere?).

Also, don't post your exact address online. Knowing your address is a good starting point for someone who wants to hack your accounts. It might help with both security questions and social engineering. Just don't do it.

Online banking

The state of security for online banking in the US is just atrocious compared to Sweden, but there is at least one thing you can do to make it a little bit safer.

Many banks allow you to select the username and password you use to log in. Make sure that both of these are secret and not related to any publicly known information about you. At Wells Fargo and Bank of America you can reset the password by knowing the ATM card number, PIN and online username. This means that if your username is, for instance, your name (which is also printed on the card) and someone skims your ATM card, they can also hack into your online account and potentially do a lot more damage.

Sunday, May 13, 2012

Awesome Chrome plugin to visualize who is snooping on you

Collusion is a really cool plugin available for Chrome that visualizes all the sites that are linked in when you access a site, so that you can easily see who is tracking you as you wander around the internet. In the latest update it also gives you the option to block known tracking sites.

You can get it in the Chrome Web Store or if you want to read more about it you can also read about it on Gizmodo.

Thursday, May 10, 2012

Going to start being a bit more active online

I'm doing an experiment with my online presence. I'm going to try to be a bit more active on my blog (the last post was in 2009, which is ridiculous). I don't think it will be that big of a change for me, given that I was already usually emailing links I found interesting to people. I will just post them here instead from now on, until I get tired of it.

Also, as part of this I've spent a little while setting things up so that whenever I post something on my blog it is also posted to Facebook, Twitter and Google+. I've also added some connections to share my media center's automatically generated top-list with the world (although I'm sure nobody cares).

All of this is handled using some clever magic on IFTTT.com, a really cool site for automating stuff that happens to you online. IFTTT is short for If This Then That, and basically it allows you to define triggers and then an action that should be performed when triggered. As an example, I have a rule that when I post here it should also duplicate that post on Facebook and Twitter.

Tuesday, September 22, 2009

Another week another Apple problem

Just have to share, as a follow up to my latest post on not liking Macs: yesterday my Mac broke down again. This time it unfortunately seems like it wasn't something easy that they could fix in the store.

Last time I was actually quite impressed by the couple-of-hours turnaround the Apple Store had to fix it. This was about what I expected from a major computer manufacturer and definitely in line with what you get from, for instance, Lenovo or Dell.

This time around I was not that lucky and I was told they needed to ship the computer off to a separate repair facility and that it would take 1.5 to 2 weeks. This brings the level of service down to around what I got when I bought a laptop from Discount Laptops. They were great value, but as always (as long as you don't buy Apple) you get what you pay for.

Friday, August 28, 2009

Why I hate Mac & OSX

I already wrote one article on why I don't like Mac computers or the OSX operating system. However, that one was written before I had really used it much. I have now been forced to use a Mac quite a lot, because the iPhone SDK is only available for Mac, and I have been playing around with that a lot lately.

The computer I have been using is a MacBook Air, which is supposedly pretty much a top of the line model when it comes to laptops. When I got it, a lot of excited Mac users told me to just give it a few weeks and I would come to love it. It's now been about 5 months and I can say with certainty that it is the worst computer I've owned in the last couple of years. So here goes explaining why.

Hardware

First of all, the most obvious flaw: the track pad with just one button. It baffles me how anybody could be so stupid as to design an operating system such as OSX, which so obviously assumes two mouse buttons, and then build the computers with just one. By that I mean that there are tons of functionality in every application you can think of that is only accessible through the CTRL+Click context menu. On desktops you at least have the option of enabling two mouse buttons, but on laptops you have no such luck.

The next thing that annoys me immensely is that Apple has removed a bunch of very useful keys from the keyboard. The ones most obvious to me are PageUp, PageDown, Home and End, all keys which you as a coder use a lot. Granted, I have the really small laptop, but even the 17" MacBook Pro doesn't have those keys.

That brings me to the other problem with the keyboard. Mac computers have no less than 5 modifier keys. Granted, Windows machines have basically the same set of modifier keys. The difference is that on Windows two of them are used very sparingly. On Mac you use all 5 modifier keys in a dizzying array of combinations. They have also tied up all the function keys with rarely used OS functionality, so in X-Code, for instance, instead of using something easy like F8 for step you are forced to resort to Command+Shift+O.

Let's move on to the screen, which compared to what I am used to really sucks. The resolution is simply atrocious. True, I opted for the smallest laptop and I expect a smaller resolution, but the problem applies to all the models in their laptop line. If you go with something other than Mac you can get a 1920x1200 resolution on a 15.4" screen, but with Mac you need to go all the way to the ridiculously sized 17" model to get a comparable resolution.

Finally, let's talk about build quality. You would think that since Apple computers are so much more expensive than their PC counterparts you would get good build quality. Unfortunately that doesn't seem to be the case. My Air broke within about one month, which is a record even for me. Granted, fixing it was easy and fast, but that is usually the case with any name brand as long as you go for the on-site warranty.

Software

I usually don't have a big problem moving between different computer platforms. I've been on Gnome, KDE and most kinds of Windows, and even though I might have grumbled, within a few weeks I'm usually OK with it. Not so with OSX; it is simply too badly designed for me to get used to.

First of all, the whole application/window separation, where each application can have zero or more windows. You use one keyboard combination to switch between applications and another to switch between windows within an application. You also have a menu bar that isn't attached to the window it belongs to. This setup might be acceptable for a non power user who only uses a web browser or one application doing a specific task. For a software developer it sucks. When you develop something in X-Code you will always have one editor window, one debugger console and one document window within that application. Then you have one application for the interface designer and finally one window for the iPhone simulator. So whenever you are going to another window you have to figure out which application you need first and then switch to the correct window. In Windows or Linux you have one list with all your windows, and navigation is just much faster and more convenient.

Secondly, also in regards to window handling, is the fact that you can't maximize windows. All window systems I've seen (including ones that predate the original Mac OS) have a maximize button, and I can't imagine who decided that you didn't need it. There is an "optimize size" button, but for some reason the optimal size for a web browser, for instance, is not the whole screen.

You'd think with its much touted BSD roots that OSX would play nice with X Windows applications from Linux and the like. For some reason this is not the case. GIMP and Blender, for instance, work much worse on OSX than they do on Windows. In Blender you get weird painting artifacts, and in GIMP whenever you click in another window you have to click twice for the click to be recognized.

Sort of incidental, but I just installed Snow Leopard yesterday and its progress meter is, if possible, even more inaccurate than the Microsoft one when installing Windows. Over the course of an hour it went from 42 minutes remaining to 39 minutes remaining. After that I went out to a pub and didn't get back until 4 hours later, and then it was actually finished.

Since what I do on the Mac is software development, it is inevitable to compare X-Code with Visual Studio (and also Eclipse, but they are pretty similar so I will stick with Visual Studio).

  • In Visual Studio you have everything you need in one easy to access window, and it changes the layout automatically to include the things you might need in different stages of the development cycle. In X-Code everything is spread out in a gazillion different windows, and if you want to look at different things when coding and debugging it's up to you to move stuff around.
  • X-Code doesn't remember watches between debug sessions. Debugging anything complex is ridiculously complicated. The only way that works kind of OK is the text based GDB prompt.
  • As far as I have managed to figure out, you can't inspect the contents of an array at all without looking at the individual items.
  • It is very mouse centric in its user interface, which is something you strive to avoid when working with code. I still haven't figured out which keyboard combination switches the file you look at in the editor without using the mouse.
  • X-Code is incredibly buggy (even compared to Visual Studio). I have to restart it, and even the entire computer, all the time when it stops working (especially the debugger).

Company

Just have to write a little bit about Apple's support. I've actually really tried to figure out how to use the Mac in a reasonable way, so I have been visiting the ineptly named "Genius Bar" in the Apple Store a bunch of times and have also had some contact with support in regards to their developer program. Granted, when my computer was actually broken they did fix it pretty fast and hassle free, but apart from that their support sucks!

  • I asked them how you maximize a window, and a guy seriously told me that yes, you can do that, no problem, and then showed me how to use the mouse to resize the window to cover the entire screen.
  • When I asked them if there was a laptop with PageUp/PageDown keys, they asked me why in God's name I needed those (which is funny, because their desktop keyboards still have the keys).
  • It took the Apple developer program almost 3 months to approve my membership. The problem was that instead of using the phone number I provided in my application to call me back, they looked me up at PRV (the official records agency in Sweden), found another phone number and used that to try to contact me. The problem is that that phone number has not been connected for over 5 years now. And with that they stopped and didn't even try the number I actually provided in the application. Even better, though I contacted them several times through email, all I got back was "in progress of reviewing your application" and not a peep about them not being able to get hold of me. Not until a friend of mine provided me with a customer support phone number (which isn't listed on their web site) and a 30 minute yelling match with the customer service representative did they agree to try a working number.
  • It's now been over 3 weeks for Apple to approve my first application submission, which incidentally is a lot longer than it took to write the app.
  • Apple is the only company I know of in the US that requires a signature for packages when you order something from their website. You can get around it by signing a waiver, but that requires me to have a printer. You'd think Apple would assume people have jobs and can't sign for packages in the middle of the day. I don't know why they insist on this; I've had much more expensive stuff delivered without signatures from other companies.

Thursday, January 22, 2009

First brush with Mac OS X

The Apple juggernaut seems unstoppable these days, and wherever I turn I seem to run into people asking me why I'm not running a Mac. A few days ago I had the opportunity to work for a little bit with an iMac by myself. I had a very simple task at hand: I needed to print a couple of flight itineraries that I had in a web mail account. Here are the impressions I came away with.

First of all there is that whole bullshit mantra that "Mac always works". First Firefox locked up on me within 5 minutes, and then Safari locked up a little while after that. My problem is that I don't know how to kill an app on OS X, so after the last of them locked up I was effectively done. I get way better mileage browsing on Windows than this, regardless of whether I use Firefox, IE or Chrome.

I really hate the whole menu-at-the-top-of-the-screen thing they have going on the Mac. I'm sure in some ways you probably get used to it, but I also dislike it on a philosophical level. The problem is that as your screen starts to fill up with documents, it is very disconcerting to have the menu sometimes end up far from your actual document. I also dislike the fact that the menu has a mixture of global and application items, and I don't like the buttons on the left hand side of the window without icons in them. I'm sure you learn this, but which of red, green and yellow means maximize the window? Perhaps it's me, but it just isn't obvious in my mind.

Also very annoying is that the Mac doesn't remember your last used print settings. Every time I tried to print something the page size was reset to a CD sleeve (which I'm sure is something that Johansson had set up somewhere). On any other modern OS the print dialog simply remembers the last settings you entered.

Then there is the issue with the mouse. I can see that one mouse button can be easier to use for a novice. But the way they have apparently done it on an iMac is that it looks like there is only one button, but you can still press it as a right or left mouse button (and have different actions occur). That is just plain backwards; how can that possibly be construed as simpler?

Also, during the entire time I was using the computer there was this weird quiet chirping sound coming from it. I assumed that it had to do with IM or something like that. However, when I asked him about it he didn't know either, but he guessed it was a problem with having too many USB devices connected to the computer. User friendly indeed!

Finally, my number one gripe with this whole mess: the keyboard layout! What kind of idiot decided that the Swedish layout shouldn't match what is actually printed on the keys (it was a Swedish keyboard)? I could never figure out how to get a '@' character on the Swedish layout. It was printed as being on the equivalent of AltGr+'2', which is the same as a "normal" keyboard. However, that didn't work. In the end I had to switch to an English layout for this one character. I later learned from Johansson that it is located on something like AltGr+'ä'.

Before this I had basically always thought Macs had kind of cool hardware (of course except for the glide point, which would just have to go), and the slick UI on top of a unix kernel appealed to me too, so I figured I would probably have liked running one for work if I could. Now I know not to believe the hype, and when I don't have to run Windows anymore I'll be switching back to Linux.

Thursday, November 3, 2005

Googling yourself

I just tried googling (searching on Google for) myself. I'm pleased to announce that my homepage scores around place 30 if you search for "Henrik". But even cooler, searching for my full name "Henrik Johnson" will yield stuff that is mine for the first 23 entries. Nice to know one has managed to make at least some impression on the web. What do you score with your names?