Kelly Leahy (technical)

Thursday, April 21, 2016

No Git-Gui, I don't ever want you to compact my database. Kthxbye.

So... for years now I've been hacking git-gui to turn off the GC check, since it really annoyed me, and until today I didn't know how to turn it off "properly".

Today, I learned you are just supposed to set a config option to turn it off (go figure - I don't know if this is new or if I just never looked hard enough).

Anyway, here's the info in case you want it.

Just set the option gui.gcwarning to false.

Do it like this:
git config --global gui.gcwarning false
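
If you want to double-check that it took effect, you can read the value back:
git config --global --get gui.gcwarning
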
If you're interested in how I was doing it before (you really shouldn't be, but in case you're curious): under your git install folder there's a file somewhere (probably called git-gui.tcl) that makes a call to a function called hint_gc, which is the function that shows the annoying message.  I just commented out that line in previous builds.  Now I see that it checks a config variable, but I guess I either didn't notice or it didn't used to check.

Thursday, December 17, 2015

ATL OLE DB Provider Wizard Issues

There are some issues with the code generated by the ATL OLE DB Provider Wizard that cost me some time when playing around with it today.

I thought I'd post my solutions in case it helps others.

  1. The generated code does not use the COM_INTERFACE_ENTRY2 macro properly. Instead of the line of code on which you'll receive a compile error, you should have: COM_INTERFACE_ENTRY2(ICommand, ICommandText) -- notice that the parameters are reversed. Apparently, the ATL macro changed the order of its arguments at some point in the past (or the sample code is just wrong). In any case, the base class goes on the left and the derived class on the right (not the way it was in the sample).
  2. The generated code includes PROVIDER_COLUMN_ENTRY_STR(...) for the column map in the C...WindowsFile class that is auto-generated in the ...RS.h file. If your project is compiled for UNICODE (as mine was), you need to change this to PROVIDER_COLUMN_ENTRY_WSTR. The symptom is that you see only a single character of each file name (rather than the full file name) when using the provider.
Anyway, hope this helps anyone who runs into these issues.

Wednesday, August 24, 2011

Azure Fast-Deploy

We have been building a web system on Windows Azure for quite a while now at my company, and we are very familiar with the 20 minute development cycle of redeploying an Azure Role. Recently, the Windows Azure team released a streamlined process for doing web deployments to Azure (for development purposes only, not production deployments). Andy Cross talks about it here: http://blog.bareweb.eu/2011/04/using-web-deploy-in-azure-sdk-1-4-1/

The Azure team blog talks about it here.

We tried to use Web Deploy, but we don't do things from Visual Studio. We instead do everything from the command line for our automation (since we do a TON of automation). As such, we tried for a while (to no avail) to get Web Deploy to work from the command line ("a while" for us is measured in minutes, not days - we don't have much patience for crappy tooling on our team).

So... we built our own.

It works like this: we have a very simple console app, built in C#, that looks at a particular blob storage account and tries to download a new blob from there if one exists. In our case, we've named the container "w-deploy-newbits" and the blob "WebRole.zip". This zip file is built on our build machine by running msbuild /t:Publish on our ccproj to produce a deployable version of our website (_PublishedWebsites\WebRole in our build folder). We zip that folder and upload it to the appropriate cloud location from the desktop as part of our build script (powershell with psake), and when the "NewBits" console app sees the new zip, it redeploys.
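
The core of such a watcher is just a polling loop. Here's a minimal sketch of the idea - not our actual code - assuming the SDK 1.x Microsoft.WindowsAzure.StorageClient API; the connection string, local path, and deploy step are placeholders:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class NewBitsWatcher
    {
        public static void Main()
        {
            var account = CloudStorageAccount.Parse("<storage connection string>");
            var blob = account.CreateCloudBlobClient()
                .GetContainerReference("w-deploy-newbits")
                .GetBlockBlobReference("WebRole.zip");

            string lastSeenETag = null;
            while (true)
            {
                // Refresh the blob's metadata only (no content download);
                // a changed ETag means someone uploaded new bits.
                blob.FetchAttributes();
                if (blob.Properties.ETag != lastSeenETag)
                {
                    lastSeenETag = blob.Properties.ETag;
                    blob.DownloadToFile(@"C:\deploy\WebRole.zip");
                    // ...unzip and swap the new bits into the site here...
                }
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }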

We have been using this technique for profiling our app (for performance) so that we can quickly make changes, redeploy, reprofile, and repeat the process. It has worked VERY well for this. We have no intention of using this for production releases or anything of that nature, and would never recommend such a thing.

I'm going to try to make the code available on GitHub, but it will probably take me some time to do so, as I need to remove anything sensitive to our own application setup and I'm really busy right now. However, if you want more details about how we did this, please feel free to contact me via comments or my email address.

Sunday, February 20, 2011

Setting up branches to always rebase in git

So, today at Harmony Hackathon, several people asked me to post the way to set up your branches in git to always rebase. I know this is a controversial thing in git; if you disagree with me on why you might want to rebase, I don't care. I like it, and all I'm doing in this post is telling people how to do it if they want to. Don't do it if you're afraid of rebase, don't want rebase, or are afraid your friends will shun you for using rebase by default. Don't tell me if you don't like rebase, because I don't care ;)

Ok... I'm going to tell you how to set up rebase to ALWAYS rebase on ALL repos. If you don't want this for all repos, you can do it on a per-repo basis by doing all the config without "--system". Also, you can use "--global" instead of "--system" if you want to ensure that other people who use your machine aren't affected by the setting (nobody else uses my machines, so I don't care and do all my config at the --system level).

So, to set up all "master" branches to do rebase by default - do the following:
git config --system branch.master.rebase true

To set up all "new" remote tracking branches to be created with rebase by default, do:
git config --system branch.autosetuprebase always

If you already have remote tracking branches, then I recommend you make sure you're current, delete the local branches, and recreate them (1.7.x does this automatically if you use the branch name to check out the remote branch, so make sure you're using Git 1.7.x or later for that hawtness (as darkxanthos would say)).
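
For example, if you have an existing local tracking branch called topic (substitute your own branch name), the delete-and-recreate dance looks something like this:

git checkout master
git branch -D topic
git checkout topic

That last checkout recreates topic as a tracking branch for origin/topic (with your new rebase config applied), per the 1.7.x behavior mentioned above.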

Resharper: Please indent my shit by 1000 miles, please! kthxbye

So... Today, while at the Harmony Hackathon, I took a few minutes of yak shaving time to figure out the location of the setting that had all of us at the hackathon cursing resharper from time to time. If you want to see an example of why we were cursing, see a screenshot below from visual studio for one of the methods we were working on.

[screenshot: a method indented nearly off the right edge of the Visual Studio editor]

Resharper has a setting that tells it to indent the crap out of array and object initializers, and the default is "please resharper, indent my shit off the screen so far to the right that I run out of scrollbar space if I have more than two nested lambdas or objects". Today we took to rectifying this problem in our configuration.

The setting is under formatting style / other / other / indent array, object and collection initializers. You can see it in the screenshot below. Uncheck it and stop bitching about resharper moving your shit 1000 miles to the right ;)

[screenshot: the ReSharper formatting options dialog showing the setting]

Thursday, March 18, 2010

Not your mama’s sort algorithm

So, not all that long ago, we were profiling our application, and I noticed that a particular part of our system was consuming a huge amount of time sorting arrays of 300,000 to 1M items.  I was a little surprised to see that this was taking an inordinate amount of time, but it shouldn't have been so surprising to me, since the keys by which we were sorting these arrays are somewhat complex and comparisons are therefore nontrivial.

The arrays we were sorting were lists of keys (and their associated data), and we were sorting them in a lexicographic ordering based on a sort order defined elsewhere (by the user).  The keys are themselves a list of id-value pairs where the ‘id’ is a short integer and the value is a string.  So, as an example, consider the problem of sorting the following list in lexicographic order by the key ids 3, 4, 0 (in that order):

  • (0 = “UL1”, 3 = “M”, 4 = “S”)
  • (3 = “F”, 0 = “UL4”, 4 = “S”)
  • (4 = “S”, 3 = “M”, 0 = “UL2”)
  • (4 = “N”, 3 = “F”)

When looking at the data, a few things stand out.  Firstly, the keys aren’t all in the same “internal order”.  This is because the keys come from various different places in the system and are created at various different times, and are not sorted because the cost of doing so proved to be a performance issue.  Secondly, not all of the keys have all of the key ids (0, 3, and 4).  This is a feature of our system where not all keys require all ids.  Keys cannot overlap, but a missing ‘id’ is a wildcard, and allows searches to match that key with any value for that id.

When doing a comparison between these keys, we were using an IComparer<IKeyInstance> implementation that looks like the following code snippet.  The IKeyInstance interface is the interface for the objects that contain one of the rows of the above list of data.  GetValue returns the value associated with the id passed as its argument (or null if the value doesn’t exist in the key).

    using System.Collections.Generic;
    using System.Linq;

    public class KeyInstanceSortedComparer : IComparer<IKeyInstance>
    {
        private readonly short[] _KeyIdSortOrder;

        public KeyInstanceSortedComparer(IEnumerable<short> keyIdSortOrder)
        {
            _KeyIdSortOrder = keyIdSortOrder.ToArray();
        }

        public int Compare(IKeyInstance keyInstanceA, IKeyInstance keyInstanceB)
        {
            // Compare column by column in the configured order; the first
            // difference decides.
            foreach (var keyId in _KeyIdSortOrder)
            {
                var keyValueA = keyInstanceA.GetValue(keyId);
                var keyValueB = keyInstanceB.GetValue(keyId);

                var compareResult = string.Compare(
                    keyValueA.ToValue(),
                    keyValueB.ToValue());
                if (compareResult != 0)
                    return compareResult;
            }
            return 0;
        }
    }

The _KeyIdSortOrder field is the sort order for the particular sort operation we are performing when using this comparer.  The ToValue method is an extension method that returns the value’s string if it’s non-null, or null otherwise.  It’s pretty easy to see from this snippet that this code implements a lexicographic sort on the IKeyInstance implementers.


Now, given the fact that the IKeyInstance implementers did not sort their id-value pairs internally, GetValue has to do a linear search of the key instance in order to find the appropriate value.  Additionally, the comparer must compare values for as many key ids as it encounters until it either reaches the end of the list of ids to compare, or finds an id for which the two keys have different values.  Keys never contain more than about 25 ids, and typically more like 2 – 8 id-value pairs per key.  Also, much of the memory consumed in the runtime footprint of our application was at one point held by these IKeyInstance implementers or their associated id-value pairs, so every effort must be made to keep their size down.


One of the optimizations we made to reduce the size of these key instances was to manufacture the strings used as “values” for the id-value pairs from a factory that coalesced duplicates so that we only ever had one instance of a given value in memory at any point in time.  Another optimization was to do the same with id-value pairs themselves.  These two combined optimizations had a profound impact on the memory footprint of our application, due to the amount of duplication present in the data, and the limited amount of uniqueness in the key’s values and id-value pairs.
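
As an illustration, here's a minimal sketch of the value-coalescing idea (the class and member names are invented for this post, not our actual code):

    using System.Collections.Generic;

    // Hands out one canonical string instance per distinct value, so keys
    // that share a value also share the same object in memory.
    public class KeyValueFactory
    {
        private readonly Dictionary<string, string> _Values =
            new Dictionary<string, string>();

        public string GetCanonicalValue(string value)
        {
            string canonical;
            if (!_Values.TryGetValue(value, out canonical))
            {
                canonical = value;
                _Values.Add(value, canonical);
            }
            return canonical;
        }
    }

The same trick applies one level up to coalesce the id-value pairs themselves.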


Anyway, to get back to the sorting, I spent some time looking at the sort performance and realized that no matter what I did, I couldn’t really improve the performance of the sort much, so long as we were continuing to do all these comparisons.  The problem is, the comparisons are simply expensive due to the nature of GetValue in IKeyInstance and the fact that we weren’t sorting the entries.  We could have used Dictionary in the key instances, but that would have taken up too much memory for our taste.  We could have used a sorted list of id/value pairs, but when there are only typically 2-8 columns in the key, binary search would likely only be slightly faster than a linear search, and may have even been slower.



So….   I started looking for ways to parallelize the algorithm.  One of the other developers on my team – Robert Ream – was looking into parallel implementations of quicksort and mergesort to try to figure out how we could improve the performance of our sorting, since we have a few processors at our disposal on our typical users’ machines.  I also asked him to look into ways of making the algorithm cancellable and restartable so that we could ‘stop’ the algorithm in progress, serialize its state to disk, and move on to another item in the UI without losing the work that had already been done.  While this was a very interesting problem to solve, it turned out that trying to get rid of the comparisons would bear much more tasty fruit.


Thinking outside the box


In the process of looking for ways to reduce the number of comparisons, I started talking to Robert about which of the algorithms we knew about required fewer comparisons (mergesort / quicksort / etc.).  After some discussion, we really didn’t end up anywhere useful, so Robert continued working on his parallel quicksort implementation.  In the meantime, the weekend came, and I decided to spend a couple hours researching what else could be done.  I pulled out Knuth’s TAOCP Vol 3 and started reading.  In it, I came across his description of radix sort.  His description didn’t lead me to any great revelations (since he basically says that it doesn’t typically perform that much better than the other algorithms), but I was intrigued by the fact that it didn’t use comparisons.


I then continued searching for information, reading Cormen and a couple of other algorithm books I have around the house, and then did some searching online.  I came across several publications by Stefan Nilsson on the subject of radix sort and read some of them.  After reading a few of these, I started to think about our problem a bit more and to consider whether it could be framed in a structure that allowed us to apply radix sort.  The complete lack of comparisons was the motivating factor for me, since I knew that comparisons were more than 95% of the time spent in our sorting code.


After a few hours of thought on the subject on Friday, I decided Saturday afternoon to investigate a bit in code.  If you don’t have any background on radix sort, the basic idea is this: instead of using comparisons, you must be able to assign numerical values to the ‘values’ of your key elements being sorted, and the assignment must be in the same order as your sort order.  For instance, if you want to sort a bunch of strings, the values are the ASCII values of the characters (or at least their ASCII collation order numbers).  Radix sort takes advantage of this fixed space to let you do array assignments and simple linked list operations instead of comparisons.  Radix sort performs best if you have a large ‘alphabet’ of values to work with and your keys are either non-overlapping or don’t have a very large number of duplicates.


There are several variations of radix sort algorithms - the two classifications are MSD (most significant digit) and LSD (least significant digit) approaches.  I used the MSD approach since it was easier to understand how it applied to my problem, and I originally thought I had varying-length keys, so it seemed more appropriate (the LSD approach seems to work better with fixed-length keys).


In order to apply radix sort to my problem, I needed to be able to assign a value to each key-value in my key.  I already had a factory that maintained the invariant that there was only one instance of each key-value in memory at any given point in time (and, to do so, already tracked these objects), so I decided to give the factory the ability to assign a ‘sort order’ value to each of the key-values.  An integer was big enough, and I could put some intelligence in the caching factory to have it only perform the sort when requested AND necessary (due to a change having occurred to the list of key-values).
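
Here's a hedged standalone sketch of that assignment (invented names; in our code this lives inside the caching factory and is recomputed only when the value cache changes):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class SortOrderAssigner
    {
        // Maps each distinct value to 1..N in string-comparison order;
        // callers treat 0 as "id missing from the key".
        public static Dictionary<string, int> Assign(IEnumerable<string> values)
        {
            var ordered = values
                .Distinct()
                .OrderBy(v => v, StringComparer.CurrentCulture)
                .ToArray();
            var sortOrders = new Dictionary<string, int>(ordered.Length);
            for (var i = 0; i < ordered.Length; i++)
                sortOrders[ordered[i]] = i + 1;
            return sortOrders;
        }
    }

StringComparer.CurrentCulture is used here to match the ordering of the default string.Compare call in the original comparer.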


Given these sort order values, I could define an “alphabet” of 0..N where N is the number of values in existence and the highest sort order value, 1 is the lowest, and 0 represents a missing id in the key.  Then, I could logically think of my keys as looking like ordered M-tuples of integers (where M is the number of IDs in my sort order specification, and the maximum number of id-value pairs to be found in any key).  The integers in these M-tuples were the sort order values from the key-value list, and the tuples were logically sorted in the appropriate order for doing the lexicographic sort.


Then, the keys above could (fictitiously) look something like this (in the sort order 3, 4, 0):

  • (2, 4, 5)   //(0 = “UL1”, 3 = “M”, 4 = “S”)
  • (1, 4, 7)   //(3 = “F”, 0 = “UL4”, 4 = “S”)
  • (2, 4, 6)   //(4 = “S”, 3 = “M”, 0 = “UL2”)
  • (1, 3, 0)   //(4 = “N”, 3 = “F”)

with the sort order values being:

(1 = F, 2 = M, 3 = N, 4 = S, 5 = UL1, 6 = UL2, 7 = UL4)


and the resulting sorted order is, of course:

  • (1, 3, 0)
  • (1, 4, 7)
  • (2, 4, 5)
  • (2, 4, 6)

Implementation of Radix Sort


So, armed with this approach of assigning ordinal positions to the values in my keys, and the logical definition of a key as given above, I was in a position to apply radix sort (which requires a desired lexicographic ordering and known ‘cardinal’ values for the letters of the alphabet you are sorting).  My implementation breaks into three pieces – an ‘inner’ algorithm (in my case bucketsort), a sorter, and a way to track the unfinished portions of the array being sorted.


The basic idea of my implementation is that you start with the full array as a single ‘unsorted’ range.  The algorithm loops as long as there is work to do, by removing a single unsorted range from a queue, processing that unsorted range, and then moving on to the next step.  Once there is no more work to do, the array is sorted.


The algorithm requires an ‘inner’ sort algorithm, since radix sort processes one ‘digit’ at a time from the source alphabet.  In my case, radix sort processes one key-value column at a time, from left to right.  When processing a column, it uses a working array with one linked list for each possible value of the digit.  For instance, we would have a working array (of linked lists) of length 8 (since there are 7 distinct key values, plus one ‘missing’ key value).  The sort walks linearly through the list, putting each item into the ‘bucket’ that corresponds to its value for the first column.  Then, after it has assigned all items to their appropriate buckets, it puts those items back into the array in bucket order (and, within each bucket, in linked list order).  At this point, the array is sorted by the first column, and the work remaining to be processed is the list of buckets of length 2 or more.  The process is applied iteratively to each column, ignoring the values of previous columns, but for each column it is applied only to the subset currently being processed (i.e. after splitting buckets 1 and 2, the ‘remaining work’ will be the ranges (0, 2) and (2, 2), where the tuples are (start, length)).


The work of splitting items into their buckets is mostly done by my ‘RadixSortBucketer’ class, given here:



    using System.Collections.Generic;
    using System.Linq;

    public class RadixSortBucketer
    {
        // One (lazily created) linked list of items per possible digit value.
        private readonly LinkedList<int>[] _Buckets;

        public RadixSortBucketer(int bucketCount)
        {
            _Buckets = new LinkedList<int>[bucketCount];
        }

        public void AddItem(int bucket, int item)
        {
            var list = _Buckets[bucket];
            if (list == null)
                _Buckets[bucket] = list = new LinkedList<int>();
            list.AddLast(item);
        }

        public void SaveItemsTo(
            LinkedList<RadixSortUnfinishedRange> unsortedRanges,
            int[] workArray,
            int startOffset)
        {
            var targetIndex = startOffset;
            foreach (var list in _Buckets.Where(b => b != null))
            {
                // Buckets with 2+ items still need to be split by later columns.
                if (list.Count > 1)
                    unsortedRanges.AddLast(
                        new RadixSortUnfinishedRange(
                            targetIndex,
                            list.Count));
                foreach (var item in list)
                    workArray[targetIndex++] = item;
            }
        }

        public void Clear()
        {
            for (var i = 0; i < _Buckets.Length; i++)
                _Buckets[i] = null;
        }
    }

The array of LinkedList<int> holds the buckets; the LinkedList<int> in each bucket is the (lazily created, possibly null) list of items that fall into that bucket.  The unsorted range object looks like:



    public class RadixSortUnfinishedRange
    {
        public readonly int Start;
        public readonly int Count;

        public RadixSortUnfinishedRange(int start, int count)
        {
            Start = start;
            Count = count;
        }
    }


and the overall algorithm looks like:



    using System.Collections.Generic;

    public class RadixSorter
    {
        // Returns the digit (bucket index) for the given item at the given
        // level (column).
        public delegate int ValueFunctionDelegate(int level, int item);

        private readonly RadixSortBucketer _Bucketer;
        private readonly int[] _WorkArray;
        private readonly ValueFunctionDelegate _ValueFunction;
        private readonly int _LevelCount;

        public RadixSorter(int[] arrayToSort, int bucketCount,
            int levelCount, ValueFunctionDelegate valueFunction)
        {
            _WorkArray = arrayToSort;
            _Bucketer = new RadixSortBucketer(bucketCount);
            _LevelCount = levelCount;
            _ValueFunction = valueFunction;
        }

        public void Sort()
        {
            var currentLevel = 0;
            var currentUnsortedRanges = new LinkedList<RadixSortUnfinishedRange>();
            currentUnsortedRanges.AddLast(
                new RadixSortUnfinishedRange(0, _WorkArray.Length));
            while (currentUnsortedRanges.Count > 0)
            {
                var nextUnsortedRanges = new LinkedList<RadixSortUnfinishedRange>();
                foreach (var range in currentUnsortedRanges)
                {
                    _Bucketer.Clear();

                    // Distribute the items in this range into buckets by the
                    // current column's value...
                    for (int i = range.Start, j = 0; j < range.Count; i++, j++)
                        _Bucketer.AddItem(
                            _ValueFunction(currentLevel, _WorkArray[i]),
                            _WorkArray[i]);

                    // ...then write them back in bucket order, queuing any
                    // bucket of 2+ items for the next level.
                    _Bucketer.SaveItemsTo(
                        nextUnsortedRanges,
                        _WorkArray,
                        range.Start);
                }

                currentUnsortedRanges = nextUnsortedRanges;
                currentLevel++;
                if (currentLevel >= _LevelCount)
                    break;
            }
        }
    }


The only ‘special’ part of this is that I have defined a delegate to allow the sorter to be parameterized.  The delegate is a value function that returns the integral value corresponding to a particular item and ‘level’ (the level is essentially the ‘current column’).  In my particular problem, the value function uses the ‘level’ to determine which ‘value’ to ask for, and returns 0 if there is no value, or the sort order of the value if it is not null.  The item is the index into the original array to use when getting values for the algorithm.
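
To show how the pieces fit together, here's a hedged usage sketch that sorts the fictitious tuples from earlier using the classes above (the tuple table and the example class are invented for illustration):

    using System;
    using System.Linq;

    public static class RadixSortExample
    {
        public static void Main()
        {
            // The fictitious keys from earlier, already mapped to their
            // sort-order values in the (3, 4, 0) column order; 0 = missing id.
            var tuples = new[]
            {
                new[] { 2, 4, 5 },   // (0 = "UL1", 3 = "M", 4 = "S")
                new[] { 1, 4, 7 },   // (3 = "F", 0 = "UL4", 4 = "S")
                new[] { 2, 4, 6 },   // (4 = "S", 3 = "M", 0 = "UL2")
                new[] { 1, 3, 0 },   // (4 = "N", 3 = "F")
            };

            // The array being sorted holds indices into the tuple table.
            var items = Enumerable.Range(0, tuples.Length).ToArray();

            // 8 buckets: sort-order values 1..7, plus 0 for "missing".
            var sorter = new RadixSorter(
                items, 8, 3,
                (level, item) => tuples[item][level]);
            sorter.Sort();

            foreach (var item in items)
                Console.WriteLine("({0}, {1}, {2})",
                    tuples[item][0], tuples[item][1], tuples[item][2]);
            // Prints: (1, 3, 0), (1, 4, 7), (2, 4, 5), (2, 4, 6)
        }
    }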


Conclusion?


So, what was the end result of my experimentation, you ask?  Well, the sort that used to take 17 seconds or so now takes under 1 second – much less if you exclude the cost of assigning the sort orders to the key values (which doesn’t happen for each sort; it only happens on the first sort after a change to the key-value cache, which is pretty infrequent).  Could my implementation be more efficient – absolutely.  However, I’m a strong believer in “write it the way that makes the most sense to you, and then only make it ‘more efficient’ if profiling shows it needs to be” – this is basically the philosophy of avoiding ‘premature optimization’.  Of course, I don’t condone ‘premature pessimisation’ either, so you shouldn’t choose the wrong algorithm or data structure for the job, but you also shouldn’t micro-optimize every bit of code as you’re writing it.


I hope you find this description useful – I tried to give details about how & why I did what I did, instead of just ‘what’ I did.  Hopefully that makes this an interesting read and provides some insights that just posting the code wouldn’t provide.

Tuesday, March 16, 2010

Weak Events

A while back I was having a discussion with my friend Tim Erickson (@imerickson) about problems surrounding GC and events in WPF.  Specifically, we were talking about problems with ICommand (WPF) implementations.  I mentioned to him that we have and use our own weak eventing pattern on my team.  I promised to write a blog post about it, but I’ve been slow doing so.  This is my attempt to make good on my promise.  Hopefully it’s not too awful.

The problem that weak events are meant to solve is one in which you have an event source (the thing raising the events) with a potentially longer lifetime than some of the corresponding event sinks (the things receiving the events).  In that case, normal .NET events (delegates) will cause the event source to keep the event sinks alive, unless some explicit event removal code is executed on the sinks to tell them to remove their event handlers from the source.  In many cases this is relatively easy to do, but in many others it is very inconvenient, and the programmer would rather not have to “know” when the removal is necessary.
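
To make the lifetime problem concrete, here's a minimal sketch (invented names, not our code) of a sink that can never be collected while the source lives:

    using System;

    public class Source
    {
        // An ordinary .NET event: the underlying delegate list holds strong
        // references to every subscriber's target object.
        public event EventHandler Changed;

        public void RaiseChanged()
        {
            var handler = Changed;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }

    public class Sink
    {
        public Sink(Source source)
        {
            // 'this' is now reachable from source.Changed, so this Sink
            // stays alive as long as the Source does (or until a -=).
            source.Changed += HandleChanged;
        }

        private void HandleChanged(object sender, EventArgs e) { }
    }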

As an example, in our application, we have an editor grid that has bindings that act as an event source, while the grid acts as an event sink.  BTW, when I say “bindings” here, think of them as a ‘data source’ for the grid, not ‘bindings’ in the WPF sense.  These bindings are not really a problem, as the bindings are created and “attached” to the grid, and when they are “detached” from the grid, the grid removes any event handlers it has attached to the bindings.  In this case, we don’t need weak events.  However, underlying the grid bindings is a view model object (that is sort of an amalgamation of a view model and a data model object, but that’s an argument for another day) that has events to notify its subscribers if any of the contents of the VM have changed from anywhere in the UI.

In our UI, we can have multiple views on the same exact model object (think multiple views in MS Excel or MS Word) that each need to be notified if any one of the views causes a change in the model.  In this case, our model is an event source, and our bindings are an event sink.  Here’s the problem – our model lives “forever” (at least it lives as long as our application is ‘running’), yet the grid bindings get recreated when the user changes their view, moves to another item, opens a new view, or any number of other changes to the state of the UI.  In effect, we create MANY of these grid bindings objects throughout a single run of our application, and we wish to make no effort to track when they are no longer “in use”, since doing so is rather difficult and would litter our code everywhere with code to handle this tracking.

However, if we ignore the tracking, and just let the grid bindings “get released” by the grid, etc., the events that were hooked by the bindings from the model in order to get notifications of changes in the model will never get unhooked.  This will lead to the model “keeping alive” a huge number of these bindings objects and never releasing them, effectively creating a “memory leak” until the model is shut down.  Therefore, we had a choice – either hook the events and try to force appropriate shutdown in all the places it was needed, or find another solution that didn’t involve the bindings hooking events directly on the model.

Since we have total control over both the control and the underlying model, we had a lot of flexibility in our choice of solutions to this dilemma.  I investigated the .NET (WPF) model for weak events, as well as several other methods people had used on the net.  I even found a description from the venerable Jeffrey Richter in his CLR via C# book (and found it to have errors, as a matter of fact).  The WPF model is very general and is the way to go if you are building a control and don’t have control over the model that will be hooked to your control (or want to play nice in the WPF data binding way), but it leaves much to be desired in terms of the programming model.

The WPF weak event model is centered around the System.Windows.IWeakEventListener interface and the System.Windows.WeakEventManager base class.  For a discussion of some of the details, see http://blogs.msdn.com/nathannesbit/archive/2009/05/28/weakeventmanager-and-iweakeventlistener.aspx

For the most part, I agree with Nathan’s assessment of this class.  I do, however, believe that if you are writing controls to be consumed by applications out of your control, you effectively MUST follow this model (not mine or anyone else’s), since this is the model that is used throughout WPF.  While you could implement your own variation of this model, you still need to apply the general concepts that it does in any solution you come up with (i.e. the ‘endpoint’ providing the ‘weakness’ in the events is the event sink, not the event source).

For our application, we have adopted the ‘opposite’ pattern – that is, the events are sourced from the event source as weak, rather than having an intermediary (the weak event manager) that turns ‘strong’ events from the model into ‘weak’ events (listeners) on the event sink.  We like this model because it hides the complexity of dealing with the ‘weakness’ of the events behind the facade of the event handler attachment points and ‘invocations’ in the model.  Our model basically works as follows: the event source provides a custom event implementation, and when sinks subscribe to this event, they are added to a weak event “multicast delegate-like” object that is private to the event source.  The subscribers don’t know that they are hooking to a weak event.  On the other hand, the model raises events using this “special weak event object”, and the attached event sinks receive these messages if they are still alive.

There is one small wrinkle to our model.  Event handlers attached to weak events in this manner MUST be kept around by something else, or else they will get collected and will cease to receive events.  This isn’t a problem for us, since it’s exactly what we’d like to see happen, but there can be problems if one isn’t careful, as it will seem like events stop getting ‘caught’ by their intended targets, and you won’t know why.

So, now on to the details.  First off, we have a ‘WeakDelegate’ class that is responsible for being a weak version of the Delegate class built into the .NET framework.  It looks like the following:

    using System;
    using System.Reflection;

    /// <summary>
    /// A "weak reference" version of a delegate.  This class is used
    /// by the MulticastWeakEvent type to support the "weak events"
    /// pattern in the system.
    /// </summary>
    /// <typeparam name="T">The type of delegate to return.</typeparam>
    public class WeakDelegate<T>
        where T : class
    {
        private readonly WeakReference _Target;
        private readonly MethodInfo _Method;

        /// <summary>
        /// Constructs a weak delegate from the provided delegate.
        /// </summary>
        /// <param name="source"></param>
        public WeakDelegate(Delegate source)
        {
            if (source == null)
                throw new ArgumentNullException("source");
            if (source.Target == null)
                throw new ArgumentException(
                    "Source points to a null target.  "
                    + "This is not usable as a weak reference.",
                    "source");

            if (source.GetType() != typeof(T))
                throw new ArgumentException(
                    "Source is not the proper delegate type.  "
                    + "The delegate type must match the type passed "
                    + "as the type parameter 'T'",
                    "source");
            _Target = new WeakReference(source.Target);
            _Method = source.Method;
        }

        /// <summary>
        /// Returns the contents of the WeakDelegate as a proper
        /// Delegate object, or NULL if the delegate's target has
        /// been collected already.
        /// </summary>
        public T Delegate
        {
            get
            {
                // Copy the target into a local so it can't be collected
                // between the null check and the CreateDelegate call.
                var obj = _Target.Target;
                if (obj == null)
                    return null;
                return (T)(object)System.Delegate.CreateDelegate(
                    typeof(T), obj, _Method);
            }
        }

        /// <summary>
        /// This property identifies if the delegate target has
        /// already been collected (false if so).  Note: there is a
        /// possible race condition with this property, so be sure to
        /// check the return value of the Delegate property for null
        /// even if IsAlive returned true.  This is just a quick way to
        /// identify a delegate as "definitely dead"; it cannot be
        /// trusted not to change between calling it and calling the
        /// Delegate property!
        /// </summary>
        public bool IsAlive
        {
            get { return _Target.IsAlive; }
        }

        /// <summary>
        /// Checks whether the contents of this weak delegate are the
        /// same as the delegate passed as a parameter.  If it contains
        /// the same target/method combination, then the function
        /// returns true.  If the contained "delegate" has already been
        /// collected, or doesn't match the passed delegate, it returns
        /// false.
        /// </summary>
        /// <param name="other">The delegate to test against.</param>
        /// <returns>True if the delegate is the same as the
        /// target/method contained by this weak delegate, False if
        /// the weak delegate has already been collected or if the
        /// target/method differs.</returns>
        public bool IsSameAs(Delegate other)
        {
            return ReferenceEquals(other.Target, _Target.Target)
                   && other.Method.Equals(_Method);
        }
    }



The weak delegate has support for “wrapping” a normal .NET delegate (it, of course, copies the contents of the delegate and throws away the underlying delegate so that it can hold a weak reference to the target of the original delegate).  It also has support for asking about the validity of the weak reference, getting the delegate back (or null if it’s not alive anymore), and comparing this delegate to others (used in the ‘remove’ handler of events).


To implement weak events, we use these weak delegates in a way similar to how C# uses the MulticastDelegate class (the class used for ‘event’ implementation).  For this, we have a MulticastWeakEvent generic class that lets us get as close as possible to declaring an ‘event’ in C# – the language won’t let us get quite all the way there.  Here is our MulticastWeakEvent implementation:



    using System;
    using System.Collections.Generic;

    /// <summary>
    /// MulticastWeakEvent is the "weak event" pattern support object.  This class
    /// allows classes that wish to provide weak event registration to do so.  One
    /// of these delegate classes must be created for each event, and the class
    /// must be given the event type (typically EventHandler&lt;TEventArgs&gt;) and
    /// the type of the event arguments.
    /// </summary>
    /// <typeparam name="TEventType">The type of event to support.  This should
    /// normally be the .NET 2.0 EventHandler&lt;TEventArgs&gt; type.</typeparam>
    /// <typeparam name="TEventArgs">The type of the event arguments object to use
    /// in calls to this event.</typeparam>
    public class MulticastWeakEvent<TEventType, TEventArgs>
        where TEventType : class
        where TEventArgs : EventArgs
    {
        private readonly List<WeakDelegate<TEventType>> _List;
        private readonly Action<TEventType, object, TEventArgs> _InvokeAction;

        private static Delegate _AsDelegate(TEventType handler)
        {
            return (Delegate)(object)handler;
        }

        /// <summary>
        /// Constructs a new weak event delegate.  The action should simply call
        /// the delegate with the appropriate arguments.  For instance, the
        /// action should simply be a delegate that looks like
        /// <code>(d,s,e) =&gt; d(s,e)</code>.
        /// </summary>
        /// <param name="invokeAction">A delegate that invokes its first argument
        /// (the event delegate) passing the second and third arguments as the
        /// arguments to the invocation.</param>
        public MulticastWeakEvent(Action<TEventType, object, TEventArgs> invokeAction)
        {
            if (!typeof(Delegate).IsAssignableFrom(typeof(TEventType)))
                throw new InvalidOperationException(
                    "TEventType must be a delegate type in MulticastWeakEvent");

            _List = new List<WeakDelegate<TEventType>>();
            _InvokeAction = invokeAction;
        }

        /// <summary>
        /// Invokes the event for all listeners, one at a time.
        /// </summary>
        /// <param name="sender">The sender to pass to the invocation.</param>
        /// <param name="e">The event args for the invocation.</param>
        public void Invoke(object sender, TEventArgs e)
        {
            lock (_List)
            {
                // Prune entries whose targets have been collected, then
                // invoke whatever is still alive.
                _List.RemoveAll(eh => !eh.IsAlive);
                _List.ForEach(
                    eh =>
                    {
                        var target = eh.Delegate;
                        if (target != null)
                            _InvokeAction(target, sender, e);
                    });
            }
        }

        /// <summary>
        /// Adds a handler to the weak event invocation list.
        /// </summary>
        /// <param name="handler">The handler delegate.</param>
        public void AddHandler(TEventType handler)
        {
            lock (_List)
            {
                _List.RemoveAll(eh => !eh.IsAlive);
                _List.Add(new WeakDelegate<TEventType>(_AsDelegate(handler)));
            }
        }

        /// <summary>
        /// Removes a handler from the weak event invocation list (and compacts
        /// the list).
        /// </summary>
        /// <param name="handler">The handler delegate.</param>
        public void RemoveHandler(TEventType handler)
        {
            lock (_List)
                _List.RemoveAll(
                    eh => (!eh.IsAlive) || eh.IsSameAs(_AsDelegate(handler)));
        }
    }

    public class MulticastWeakEvent : MulticastWeakEvent<EventHandler, EventArgs>
    {
        public MulticastWeakEvent() : base((d, s, e) => d(s, e))
        {
        }
    }



Now, given this class, we can show some examples of how to expose weak events.  Consider a class that has a “NameChanged” event of type EventHandler<NameChangeEventArgs>.  We can declare a subclass of MulticastWeakEvent (not strictly necessary, but it helps with readability) as:



    public class NameChangeEventArgs : EventArgs
    {
        public readonly string OldName;
        public readonly string NewName;

        public NameChangeEventArgs(string oldName, string newName)
        {
            OldName = oldName;
            NewName = newName;
        }
    }

    public class NameChangedEvent :
        MulticastWeakEvent<EventHandler<NameChangeEventArgs>, NameChangeEventArgs>
    {
        public NameChangedEvent()
            : base((d, s, e) => d(s, e))
        {
        }
    }

The main reason for declaring a special type for one of these events is that it helps avoid the silly (d,s,e) => d(s, e) crap that we’re forced to do so that we can avoid reflection costs in our code.  I couldn’t find a way around this without using reflection, so if someone has any ideas I’d be happy to hear about them (but don’t be surprised if I’m skeptical, since I fought this problem for a very long time).


Now, to create a model object that raises events of the type NameChanged (transparently), we can do the following:



    public interface IPerson
    {
        string Name { get; }
        event EventHandler<NameChangeEventArgs> NameChanged;
    }

    public class Person : IPerson
    {
        private string _Name;
        private readonly NameChangedEvent _NameChanged = new NameChangedEvent();

        public Person(string nameAtBirth)
        {
            _Name = nameAtBirth;
        }

        public string Name
        {
            get
            {
                return _Name;
            }
            set
            {
                if (_Name == value)
                    return;
                var oldName = _Name;
                _Name = value;
                _NameChanged.Invoke(
                    this,
                    new NameChangeEventArgs(oldName, _Name));
            }
        }

        public event EventHandler<NameChangeEventArgs> NameChanged
        {
            add { _NameChanged.AddHandler(value); }
            remove { _NameChanged.RemoveHandler(value); }
        }
    }

I think this programming model is pretty clean, and it works well for what we’re doing.  The only caveat, as I said earlier, is that you absolutely must understand how GC works when working with these, or else you’ll have some very weird bugs related to event sinks disappearing in the night ;)  Allow me to give an example of the “event sink disappearing” problem in case you haven’t figured out what I’m referring to yet.  Consider the following code:




    using System.Collections.Generic;

    public class PersonWatcher
    {
        private readonly List<IPerson> _PeopleWithNameChanges = new List<IPerson>();

        public IEnumerable<IPerson> GetPeopleWithNameChanges()
        {
            return _PeopleWithNameChanges;
        }

        private class Listener
        {
            private readonly List<IPerson> _TargetList;

            public Listener(List<IPerson> targetList)
            {
                _TargetList = targetList;
            }

            public void HandleNameChanged(object sender, NameChangeEventArgs args)
            {
                _TargetList.Add((IPerson)sender);
            }
        }

        public void ListenToPerson(Person personToListenTo)
        {
            var listener = new Listener(_PeopleWithNameChanges);
            personToListenTo.NameChanged += listener.HandleNameChanged;
        }
    }

The code above (while only an example), or at least some variation of it, could very well arise in ‘real life’.  One might think they are decoupling the ‘listening’ aspect from the tracking aspect of the problem.  The problem with this is that since the Person sources ‘weak’ events, there are no ‘live’ objects holding strong references to the ‘listener’ instances created in ListenToPerson.  This means that as soon as the GC decides to do so, the listener will get collected, and you won’t receive the events you expected to receive.


One way around this is for something ‘live’ to hold on to the listeners until their work is done.  For instance, if the “PersonWatcher” class held on to a list of all its listeners, this would keep them alive.  Of course, this has the other problem of requiring you to release them at some point in order to allow them to get GCed, but that may not be such a big deal, considering you might know better when to do that release than you know when to ‘detach’ the event listeners.  In this particular example, I suspect it’s a wash.
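
For concreteness, here's a hedged sketch of that keep-alive fix applied to the example above (TrackedPersonWatcher is an invented name, and this is one option rather than the definitive solution):

    using System.Collections.Generic;

    public class TrackedPersonWatcher
    {
        private readonly List<IPerson> _PeopleWithNameChanges = new List<IPerson>();

        // Strong references that keep the weakly-subscribed listeners
        // alive until we explicitly release them.
        private readonly List<Listener> _LiveListeners = new List<Listener>();

        private class Listener
        {
            private readonly List<IPerson> _TargetList;

            public Listener(List<IPerson> targetList)
            {
                _TargetList = targetList;
            }

            public void HandleNameChanged(object sender, NameChangeEventArgs args)
            {
                _TargetList.Add((IPerson)sender);
            }
        }

        public void ListenToPerson(Person personToListenTo)
        {
            var listener = new Listener(_PeopleWithNameChanges);
            _LiveListeners.Add(listener);   // something 'live' now owns the listener
            personToListenTo.NameChanged += listener.HandleNameChanged;
        }

        // Call this when the watcher is done observing, so the listeners
        // (and anything they reference) can finally be collected.
        public void ReleaseListeners()
        {
            _LiveListeners.Clear();
        }
    }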


Anyway, I hope you enjoyed this brief tour of our island of code surrounding weak references.  It has served us well thus far, even with the caveats I keep mentioning.  Until next time – happy coding!