Sunday, November 2, 2008

Getting the message count from an MSMQ Queue

While at KaizenConf today, I was attending a session held by Chris Patterson (twitter: @PhatBoyG) & Dru Sellers (twitter: @drusellers) on ESBs and MassTransit, their ESB implementation.

During the session, Dru complained that there wasn't a good way to get the message count from an MSMQ queue.  Of course, I was required to take that as a challenge, since that's the kind of guy I am :).  I found that as of the 3.0 API for MSMQ (apparently, that's the XP / Server 2003 vintage), there are a few ways to get the message count for a queue.

The available methods (that I found) for asking for the count of messages in a Queue were:

  • Call MQMgmtGetInfo API to query the queue for the PROPID_MGMT_QUEUE_MESSAGE_COUNT property.
  • Load up an MSMQManagement COM object, call its Init method to associate it with a queue, and then ask for its MessageCount property value.

It appears that the first method (which is an API call) is actually just a proxy for the second, so I'm not going to talk about it.  Calling the COM object from .NET is much easier than calling the API anyway, since it's not exactly a 'pretty' API for P/Invoke purposes.

Since I'm not really interested in investing a lot of time in this blog post, I'm just going to paste the code here and let you do with it as you please...  Here goes...

var path = @".\Private$\foo";
MessageQueue mq = MessageQueue.Exists(path)
    ? new MessageQueue(path)
    : MessageQueue.Create(path);
// try to insert a few items into the queue...

var msmqMgmt = new MSMQManagement();
object machine = null;    // mq.MachineName;
object queuename = mq.Path;
object formatname = null; // mq.FormatName;
msmqMgmt.Init(ref machine, ref queuename, ref formatname);
int messageCount = msmqMgmt.MessageCount;

MessageBox.Show(string.Format("Queue has {0} items", messageCount));

Of course, this code requires a reference to the COM type library - namely the "Microsoft Message Queue 3.0 Object Library" on the COM list in VS2008 when you have MSMQ installed on your dev box.

I had some weird problems trying to test this code on my machine, hence the commented machinename and formatname.  I think the problem was probably related more to the configuration of my machine than it was the code.  I suspect, however, that there may be some complexities that require you to specify machine name, queue name, and format name differently depending on whether you are working with a local queue or a remote one.

I found that for a local queue, the easy way to reference it was the code snippet above (don't specify the machine, don't specify the format name, supply the "path").  For a remote queue, I suspect that it will be easier to pass the machine name and the format name, and omit (pass null for) the path.  Note that the API documentation states you should NOT pass both the format name and the path name, or it will throw an exception.
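For the remote case, here's a hedged sketch of what I suspect the call looks like (untested, given the weird problems on my machine; the machine name and format name below are made-up examples):

```csharp
// Untested sketch for a remote queue: pass the machine name and the format name,
// and omit the path (the API forbids supplying both path and format name).
var msmqMgmt = new MSMQManagement();
object machine = "SomeRemoteServer";                            // hypothetical machine name
object queuename = null;                                        // no path for remote queues...
object formatname = @"DIRECT=OS:SomeRemoteServer\Private$\foo"; // ...use the format name instead
msmqMgmt.Init(ref machine, ref queuename, ref formatname);
int messageCount = msmqMgmt.MessageCount;
```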

As always, if you have questions regarding this code, please don't hesitate to contact me via the comments or my email.

Saturday, November 1, 2008

.NET wish list...

  • Generics that can support 'text-like' replacement at runtime, where you can basically say that "I know that type T actually has a method called Foo with the signature bool Foo(int) but may not implement some particular interface (since you may not control the implementation of T)". I'd like to be able to call Foo from my generic class even if I can't change T to implement an interface that supports Foo. I'm thinking something like:
    public static void DoSomething(T target)
    where T: class having bool Foo(int)
  • A way of marking objects that MUST be used in a 'using' expression (i.e. they only make sense there) and having the C# compiler enforce it. For instance, an attribute would work for me (similar to how 'FlagsAttribute' indicates special semantics on enums). My reasoning for this is that I'd like to use IDisposable for some C++-style RAII-like stuff, but there's no way to guarantee that the objects are used correctly.
  • A way of injecting simple code / hooking "before" and "after" property notifications on 'automatic properties' in C#. For instance, if I do an automatic property, I'd love to be able to say "any time this changes, call this method", or something like that. It could be useful for INotifyPropertyChanged, but also for other things as well.
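To make that last wish concrete, here's the boilerplate I'd like to stop writing by hand - a property with "before" and "after" change hooks (my own example; the names are made up):

```csharp
using System;

public class Model
{
    private int _Value;

    // The hand-written version of what I'd like automatic properties to emit for me.
    public int Value
    {
        get { return _Value; }
        set
        {
            if (_Value == value)
                return;
            OnValueChanging(_Value, value); // "before" hook
            _Value = value;
            OnValueChanged();               // "after" hook
        }
    }

    public event EventHandler ValueChanged;

    private void OnValueChanging(int oldValue, int newValue)
    {
        // e.g. validation or logging could go here
    }

    private void OnValueChanged()
    {
        EventHandler handler = ValueChanged;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```

With compiler support, all of that would collapse back to an automatic property plus a "call this on change" annotation.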

Saturday, August 9, 2008

Building a WPF Grid Control (Part 2 of ?)

So, last time (Building a WPF Grid Control (Part 1 of ?)) we described some of the data binding structures of our WPF grid control. I'm not going to provide implementations of those interfaces yet, instead concentrating on some of the interesting bits of the control's implementation. For the most part, building the control comes down to several bits of functionality - scrolling, rendering, mouse support, keyboard support, and data binding. We've already seen the data binding part, so next I want to concentrate on scrolling.

Building scrolling into the UI is not terribly difficult, and for the most part, the implementation can be done without actually dealing with the data binding interfaces at all. However, there are a few details that we need from the interfaces in order to have a go at scrolling - namely, we need to know how big the fixed and scrolling regions need to be. Therefore, we need a dummy implementation of IDimensionMetrics that gives us some sizes to play with. Since we currently won't have fixed rows or fixed columns, and all the grid cares about for laying out the regions is the TotalSpace member of this interface, we should be able to get away with a really dumb implementation.

So, let's implement IDimensionMetrics as:

public class ReallyDumbMetrics : IDimensionMetrics
{
    private double _TotalSpace;

    public ReallyDumbMetrics(double totalSpace)
    {
        _TotalSpace = totalSpace;
    }

    #region IDimensionMetrics Members

    public double GetSpace(int index)
    {
        throw new NotImplementedException();
    }

    public void SetSpace(int index, double space)
    {
        throw new NotImplementedException();
    }

    public double GetStart(int index)
    {
        throw new NotImplementedException();
    }

    public double TotalSpace
    {
        get { return _TotalSpace; }
    }

    public int Count
    {
        get { throw new NotImplementedException(); }
    }

    public event EventHandler SpaceChanged;

    #endregion
}


I just realized that I forgot to describe the IGridBindings interface in my last post. This interface is very simple and is just a container for the various pieces of the bindings (selection info, dimensions, region render info) that allows a single point of binding for the grid control. The interface is defined as:

public interface IGridBindings
{
    IRegionRenderInfo TopLeftRenderInfo { get; }
    IRegionRenderInfo HScrollRenderInfo { get; }
    IRegionRenderInfo VScrollRenderInfo { get; }
    IRegionRenderInfo HVScrollRenderInfo { get; }

    IDimensionMetrics FixedRowMetrics { get; }
    IDimensionMetrics FixedColMetrics { get; }
    IDimensionMetrics ScrollingRowMetrics { get; }
    IDimensionMetrics ScrollingColMetrics { get; }

    ISelectionInfo SelectionInfo { get; }

    void Reorder(int[] oldPositions);
}

Since we'll be starting on our grid control now, it would be nice to have a IGridBindings implementation that will allow us to start working with the binding interfaces of the grid. So, let's go with a dummy implementation of GridBindings that uses our ReallyDumbMetrics implementation above. Here goes:

public class ReallyDumbGridBindings : IGridBindings
{
    ReallyDumbMetrics _FixedRowMetrics;
    ReallyDumbMetrics _FixedColMetrics;
    ReallyDumbMetrics _ScrollingRowMetrics;
    ReallyDumbMetrics _ScrollingColMetrics;

    public ReallyDumbGridBindings(double fixedRowSize, double fixedColSize, double scrollRowSize, double scrollColSize)
    {
        _FixedRowMetrics = new ReallyDumbMetrics(fixedRowSize);
        _FixedColMetrics = new ReallyDumbMetrics(fixedColSize);
        _ScrollingRowMetrics = new ReallyDumbMetrics(scrollRowSize);
        _ScrollingColMetrics = new ReallyDumbMetrics(scrollColSize);
    }

    #region IGridBindings Members

    public IRegionRenderInfo TopLeftRenderInfo
    {
        get { throw new NotImplementedException(); }
    }

    public IRegionRenderInfo HScrollRenderInfo
    {
        get { throw new NotImplementedException(); }
    }

    public IRegionRenderInfo VScrollRenderInfo
    {
        get { throw new NotImplementedException(); }
    }

    public IRegionRenderInfo HVScrollRenderInfo
    {
        get { throw new NotImplementedException(); }
    }

    public IDimensionMetrics FixedRowMetrics
    {
        get { return _FixedRowMetrics; }
    }

    public IDimensionMetrics FixedColMetrics
    {
        get { return _FixedColMetrics; }
    }

    public IDimensionMetrics ScrollingRowMetrics
    {
        get { return _ScrollingRowMetrics; }
    }

    public IDimensionMetrics ScrollingColMetrics
    {
        get { return _ScrollingColMetrics; }
    }

    public ISelectionInfo SelectionInfo
    {
        get { throw new NotImplementedException(); }
    }

    public void Reorder(int[] oldPositions)
    {
        throw new NotImplementedException();
    }

    #endregion
}


Ok... I think we're ready to write some Grid code...

Region Implementation

Let's start with a simple implementation of a GridRegion base class that will provide the functionality for the four cell-containing regions in the grid UI. This class will derive from FrameworkElement just as our Grid does, and will participate in the layout system as usual (measure and arrange passes). For now, to avoid getting bogged down in the details of rendering rows and columns, we'll make it just draw big red ellipses in the regions.

Let's start with the GridRegion. We can begin with the following class:

internal class GridRegion : FrameworkElement
{
    public GridRegion()
    {
    }
}

Now, for any FrameworkElement, we need to support Measure and Arrange layout passes. We also need to support the dimension metrics in order to obtain the sizes for our control. The dimension metrics additions to GridRegion will be:

private IDimensionMetrics _RowMetrics;
private IDimensionMetrics _ColMetrics;

private void _ReplaceMetrics(ref IDimensionMetrics target, IDimensionMetrics source)
{
    if (target != null)
        target.SpaceChanged -= Metrics_SpaceChanged;
    target = source;
    if (target != null)
        target.SpaceChanged += Metrics_SpaceChanged;
}

public void SetBindings(IDimensionMetrics rowMetrics, IDimensionMetrics colMetrics)
{
    _ReplaceMetrics(ref _RowMetrics, rowMetrics);
    _ReplaceMetrics(ref _ColMetrics, colMetrics);
}

void Metrics_SpaceChanged(object sender, EventArgs e)
{
}

The basic idea here is the SetBindings method, used by the EditorGrid (that owns the GridRegion objects) to initialize the RowMetrics and ColMetrics properties. Each time the Bindings property of the EditorGrid is changed (it's the property that holds the IGridBindings interface reference) the grid will call SetBindings on each of the four regions.

We also need to support the measure and arrange passes for our control. Our container (the grid) will decide the layout of our control; all we need to do is request as much space as it is willing to give us, by returning availableSize from the MeasureOverride method as follows.

protected override Size MeasureOverride(Size availableSize)
{
    return availableSize;
}

This same functionality could possibly be achieved in another way, but this was the easiest way that I found. I suspect that setting the alignment properties to 'stretch' might have worked, but it didn't seem to (or at least I don't remember it working when I thought I tried it).

Grid Implementation

Ok, we can now start implementing the grid control itself. Let's start with this class, similar to the GridRegion we just completed:

public class EditorGrid : FrameworkElement
{
    GridRegion _TopLeftNonScroll;
    GridRegion _HScrollRegion;
    GridRegion _VScrollRegion;
    GridRegion _HVScrollRegion;

    public EditorGrid()
    {
        _TopLeftNonScroll = new GridRegion();
        _HScrollRegion = new GridRegion();
        _VScrollRegion = new GridRegion();
        _HVScrollRegion = new GridRegion();
    }

    private IGridBindings _Bindings;
    public IGridBindings Bindings
    {
        get { return _Bindings; }
        set
        {
            if (_Bindings != value)
            {
                if (_Bindings != null)
                {
                    _Bindings.FixedColMetrics.SpaceChanged -= Metrics_SpaceChanged;
                    _Bindings.FixedRowMetrics.SpaceChanged -= Metrics_SpaceChanged;
                    _Bindings.ScrollingColMetrics.SpaceChanged -= Metrics_SpaceChanged;
                    _Bindings.ScrollingRowMetrics.SpaceChanged -= Metrics_SpaceChanged;
                }
                _Bindings = value;
                if (_Bindings != null)
                {
                    _Bindings.FixedColMetrics.SpaceChanged += Metrics_SpaceChanged;
                    _Bindings.FixedRowMetrics.SpaceChanged += Metrics_SpaceChanged;
                    _Bindings.ScrollingColMetrics.SpaceChanged += Metrics_SpaceChanged;
                    _Bindings.ScrollingRowMetrics.SpaceChanged += Metrics_SpaceChanged;

                    _TopLeftNonScroll.SetBindings(_Bindings.FixedRowMetrics, _Bindings.FixedColMetrics);
                    _HScrollRegion.SetBindings(_Bindings.FixedRowMetrics, _Bindings.ScrollingColMetrics);
                    _VScrollRegion.SetBindings(_Bindings.ScrollingRowMetrics, _Bindings.FixedColMetrics);
                    _HVScrollRegion.SetBindings(_Bindings.ScrollingRowMetrics, _Bindings.ScrollingColMetrics);
                }
            }
        }
    }

    void Metrics_SpaceChanged(object sender, EventArgs e)
    {
    }
}

As you can see, the bindings on the regions are set to different metrics depending on their locations in the grid. The TopLeftNonScroll region uses 'Fixed' metrics for both rows and columns, the HScrollRegion uses 'Scrolling' for columns and 'Fixed' for rows, and so on.

We now need to discuss the layout of the children of the editor grid. We also need to add scrollbars and the other non-cellular regions. For the moment, we'll ignore the other regions and the scrollbars. Let's just get the cellular regions in place first. In order to do all the layout stuff, we'll create a helper class that will make things easier for us. I'll call this class LayoutMetrics and define it as follows:

internal class LayoutMetrics
{
    public Rect vscroll_rect;
    public Rect hscroll_rect;
    public Rect hscrollR_rect;
    public Rect vscrollR_rect;
    public Rect hvscrollR_rect;
    public Rect topleft_rect;
    public Rect topright_rect;
    public Rect botleft_rect;
    public Rect botright_rect;

    public LayoutMetrics(Size size, double vscroll_width, double hscroll_height, double fixedRowHeight, double fixedColWidth)
    {
        vscroll_rect =
            new Rect(size.Width - vscroll_width,
                fixedRowHeight,
                vscroll_width,
                Math.Max(0, size.Height - hscroll_height - fixedRowHeight));

        hscroll_rect =
            new Rect(fixedColWidth,
                size.Height - hscroll_height,
                Math.Max(0, size.Width - vscroll_width - fixedColWidth),
                hscroll_height);

        hscrollR_rect = hscroll_rect;
        hscrollR_rect.Y = 0;
        hscrollR_rect.Height = fixedRowHeight + 1;

        vscrollR_rect = vscroll_rect;
        vscrollR_rect.X = 0;
        vscrollR_rect.Width = fixedColWidth + 1;

        hvscrollR_rect =
            new Rect(hscroll_rect.X, vscroll_rect.Y,
                hscroll_rect.Width, vscroll_rect.Height);

        topleft_rect =
            new Rect(new Size(fixedColWidth + 1, fixedRowHeight + 1));
        topright_rect =
            new Rect(vscroll_rect.X, 0, vscroll_rect.Width, hscrollR_rect.Height - 1);
        botleft_rect =
            new Rect(0, hscroll_rect.Y, vscrollR_rect.Width - 1, hscroll_rect.Height);
        botright_rect =
            new Rect(vscroll_rect.X, hscroll_rect.Y, vscroll_rect.Width, hscroll_rect.Height);
    }
}

The main idea of this class is to break the space occupied by the Grid into the component rectangles. For now, we'll supply some dummy values for the sizes of the horizontal and vertical scrollbars. Now, given this class, we can implement our 'measure' and 'arrange' methods for the WPF layout system. We do so (on our EditorGrid class) as follows:

protected override Size MeasureOverride(Size availableSize)
{
    // for now, fake the sizes of the scroll bars just to reserve some space.
    LayoutMetrics m = new LayoutMetrics(availableSize,
        14,  // vscroll_width
        14,  // hscroll_height
        20,  // fixedRowHeight (dummy value for now)
        20); // fixedColWidth (dummy value for now)

    _TopLeftNonScroll.Measure(m.topleft_rect.Size);
    _HScrollRegion.Measure(m.hscrollR_rect.Size);
    _VScrollRegion.Measure(m.vscrollR_rect.Size);
    _HVScrollRegion.Measure(m.hvscrollR_rect.Size);

    return availableSize;
}

protected override Size ArrangeOverride(Size finalSize)
{
    // for now, fake the sizes of the scroll bars just to reserve some space.
    LayoutMetrics m = new LayoutMetrics(finalSize,
        14,  // vscroll_width
        14,  // hscroll_height
        20,  // fixedRowHeight (dummy value for now)
        20); // fixedColWidth (dummy value for now)

    _TopLeftNonScroll.Arrange(m.topleft_rect);
    _HScrollRegion.Arrange(m.hscrollR_rect);
    _VScrollRegion.Arrange(m.vscrollR_rect);
    _HVScrollRegion.Arrange(m.hvscrollR_rect);

    return finalSize;
}

Now, we need to add visual tree support to our grid control. In order to do this, we need a few features - first, we need to add the regions to the visual tree by calling AddVisualChild on our grid visual. Second, we need to override the 'render list' method & property GetVisualChild and VisualChildrenCount respectively. The first (calling AddVisualChild) we do by adding the following lines to the constructor (after the creation of the regions):

AddVisualChild(_TopLeftNonScroll);
AddVisualChild(_HScrollRegion);
AddVisualChild(_VScrollRegion);
AddVisualChild(_HVScrollRegion);
Now that we have those lines in the constructor, we need to implement the 'rendering' functionality. The easiest way to do this is either with a VisualCollection, or since in our case we have a fixed list, just an array of Visuals. The GetVisualChild method must return visuals in the order in which they should be rendered, and we want our regions to render in the following order: HVScrollRegion, HScrollRegion, VScrollRegion, TopLeftNonScroll. We will add a field to EditorGrid class that is an array of visuals (Visual[]) called _Visuals, and initialize it in the constructor (after the four lines above) as follows:

_Visuals = new Visual[]
{
    _HVScrollRegion,
    _HScrollRegion,
    _VScrollRegion,
    _TopLeftNonScroll,
};
Additionally, we need to implement the GetVisualChild method and VisualChildrenCount property as follows:

protected override Visual GetVisualChild(int index)
{
    return _Visuals[index];
}

protected override int VisualChildrenCount
{
    get { return _Visuals.Length; }
}

We now need to implement rendering in our GridRegion and then we'll have something we can start messing with. Here's the implementation of the OnRender method for the GridRegion control.

protected override void OnRender(DrawingContext drawingContext)
{
    if (_RowMetrics == null || _ColMetrics == null)
        return;

    double xmid = _ColMetrics.TotalSpace / 2;
    double ymid = _RowMetrics.TotalSpace / 2;

    drawingContext.PushClip(new RectangleGeometry(new Rect(RenderSize)));
    drawingContext.DrawEllipse(Brushes.Red, null, new Point(xmid, ymid), xmid, ymid);
}

Next time, we'll work on getting some scrolling features working.

Friday, August 1, 2008

Building a WPF Grid Control (Part 1 of ?)

So, some of you may know that I'm building a Grid control for our MG-ALFA application. We started by looking at the WPF grids out there, and found that none of them really fit our needs. There were several issues with the controls on the market. Much of what we wanted was very 'simple', as far as look & feel (like a traditional grid), yet all the WPF grids on the market seemed to be focused on 'pretty'. Also, we needed to be able to customize several very specific features - for instance, we wanted dragging of columns for reordering, fixed rows & columns, and the ability to easily transpose the grid.

None of these features were easy to come by in existing controls, and in order to get any of them, we would have to heavily customize the controls out of the box. The customization would be 'on top' of the control, so there wasn't a good way to tie it to our data model, and transpose was the killer feature. In order to get transpose, we would have had to write some really nasty code and do some really unpleasant things with databinding. If we didn't do those nasty things, we'd have to use 'unbound' mode on the controls, which would lead to really unpleasant code to keep the grid in sync with the data changes.

Finally, after a ton of investigation, we decided we'd be better off just writing our own control and building a truly custom data model for the grid, rather than trying to force fit an existing control to our problem. We were very skeptical about the amount of time it would take to build a grid that had the features we needed, but I was pretty sure it would be less than a few weeks, and it turns out I was mostly right about that. I'm going to try to describe the design of the control, from the ground up, in a series of blog posts, but I hope you'll ask questions if you want more details, as I undoubtedly won't cover everything.

So, first, let's talk about the basic design of the grid, and the features we required. The grid is built on WPF mostly using visual layer programming to do the rendering. It supports fixed rows, fixed columns, and has row and column headings. Visually it looks very much like MS Excel. The UX is also intended to be very much like Excel, except with some modifications that are specific to our domain needs (and no support for formulas).

What to derive from?

The first decision in any custom control development for WPF is to decide which of the multitude of classes you should derive from. The class hierarchy is:

Visual -> UIElement -> FrameworkElement -> Control
Visual is very rudimentary, and basically provides only the ability to manage and participate in the visual tree. It is possible to build components at the 'Visual' level, but relatively difficult, and they must be built from other components that are at a higher level (at least UIElement). This is because the only thing a Visual can effectively do is contain other Visuals, and perform hit testing. There are a few Visual-derived classes that can be used by control designers (DrawingVisual, ContainerVisual), but these aren't really useful as base classes (maybe ContainerVisual, but certainly not DrawingVisual).

UIElement is effectively the lowest level class that a control designer might want to derive from. It provides basic layout, event handling, focus support, and rendering features. To provide code for rendering a UIElement, you must override OnRender. You also will want to override MeasureCore and ArrangeCore to participate in the layout process. You may also want to override HitTestCore to provide sophisticated hit testing for your control, especially if you have a non-rectangular area (our control will be rectangular and covered by other child controls, so we don't really need to mess with HitTestCore).

FrameworkElement is really the 'entry point' into WPF framework-level programming. Much of the core functionality for rendering is introduced in the UIElement class, but FrameworkElement builds on these features and provides some core implementation that makes it easier for you to implement the layout methods (i.e. it handles things like HorizontalAlignment, VerticalAlignment, Width, Height, etc., so you don't have to write the tedious code to make these work in your MeasureCore and ArrangeCore implementations). It also provides the core functionality needed for data binding. We'll derive our grid control from FrameworkElement, because it's the lowest class we can derive from without making a ton of extra work for ourselves, and it's the highest class we can effectively derive from without introducing features we don't want.

Control introduces style and templating support. Since we specifically don't want the XAML user to be able to customize the control template and styles for our grid (we want very specific control over how things are rendered), and don't need the flexibility that styles and templates provide, we don't want Control. However, it should be noted that if you want to build the 'best' control, from a flexibility standpoint, you probably do want to provide these features and use the 'recommended' approach of deriving from something at the Control or higher levels of the inheritance tree.

Visual Layout

Our grid control looks like the screenshot below (currently). It is still a work in progress, and that's why those ugly orange sections are there, and why the fixed rows look kinda funny (no gridlines, green background, etc.).


Obviously the data I've been working with is dummy data, generated by my data source for my benefit during development.

My first step in building the control was to design the layout of the control, in terms of separate 'regions' of the grid, based on their scrolling nature, and based on the relative positions of the scrolling regions. In the picture below, I've labeled the 9 independent regions of the grid control.


The regions, in left-to-right (top-to-bottom) order are:

  • TopLeftNonScroll - the fixed row/fixed column intersection, including the "select all" box.
  • HScrollRegion - the horizontal-only scrolling region (fixed rows, scrolling columns).
  • TopRightNonScroll - the area above the vertical scroll bar, that doesn't scroll and will eventually house buttons or some other visual cue / support.
  • VScrollRegion - the vertical-only scrolling region (fixed columns, scrolling rows).
  • HVScrollRegion - the 'data' section of the grid. This area scrolls both directions, and is made up of the scrolling rows / scrolling columns intersection.
  • VScrollBar - the vertical scroll bar (a ScrollBar control with its Orientation set to Orientation.Vertical).
  • BottomLeftNonScroll - the non scrolling area to the left of the horizontal scroll bar, will eventually be another place for buttons, etc.
  • HScrollBar - the horizontal scroll bar (a ScrollBar control with its Orientation set to Orientation.Horizontal).
  • BottomRightNonScroll - this will likely just be a gray 'dead-zone' so the scroll bars don't look stupid.

For the most part, except for scrolling, the HScrollRegion, TopLeftNonScroll, VScrollRegion, and HVScrollRegion have the same UI / UX, so I've combined much of the functionality into a single base class called "GridRegion". It has some parameterized options (like which directions it can scroll), but for the most part the code is all shared and used from this class. The derived classes are generally pretty small, and are just responsible for 'customizing' the GridRegion functionality.

The other regions are currently implemented just as canvases (except the ScrollBars, of course).

Data Binding model

My data binding model has several parts. The entire model is really a 'view' from the standpoint of MVP-like design patterns (at least in my assessment it is). The real data model is specific to the application. The Data Binding model supported by the grid has bindings for the selection and active / anchor cells, the row & column sizes, and the contents of the cells (including render flags and other special items).

In order to simplify the design and implementation, I've separated the binding into several objects based on the way the regions break up the grid. The major breakdown is between dimension metrics, render info, and selection support.

Dimension Metrics (sizes of rows / columns)

For the dimensions of the rows and columns, I've defined an interface called IDimensionMetrics that allows management of and provides information about the sizes of rows or columns. It is defined as follows.

public interface IDimensionMetrics
{
    double GetSpace(int index);
    void SetSpace(int index, double space);
    double GetStart(int index);
    double TotalSpace { get; }
    int Count { get; }

    event EventHandler SpaceChanged;
}

Each object that implements IDimensionMetrics only represents a single group of rows or columns. There are 4 implementations of IDimensionMetrics for a single set of grid bindings. The columns are broken into "fixed" and "scrolling", as are the rows (for a total of 4 separate groups).

In order to provide some additional features for IDimensionMetrics without requiring that all implementers implement these features, I've used some extension methods to implement common algorithms based on IDimensionMetrics. The extension class is as follows.

internal static class DimensionExtensions
{
    public static double GetEnd(this IDimensionMetrics metrics, int index)
    {
        return metrics.GetStart(index) + metrics.GetSpace(index);
    }

    public static int HitTestNoSizing(this IDimensionMetrics metrics, double v)
    {
        for (int i = 0; i < metrics.Count; i++)
        {
            double ofs = v - metrics.GetStart(i);
            if (ofs >= 0 && ofs <= metrics.GetSpace(i))
                return i;
        }
        return -1;
    }

    public static int HitTestWithSizing(this IDimensionMetrics metrics, double v, out bool overSizingGrip)
    {
        for (int i = 0; i < metrics.Count; i++)
        {
            double ofs = v - metrics.GetStart(i);
            double space = metrics.GetSpace(i);
            if (ofs.InNeighborhood(space, 3))
            {
                overSizingGrip = true;
                return i;
            }
            else if (ofs.InBetween(0, space, DoubleExtensions.EndpointInclusionMode.LeftInclusive))
            {
                overSizingGrip = false;
                return i;
            }
        }
        overSizingGrip = false;
        return -1;
    }

    private delegate Rect CVR_GetRect();
    private delegate void CVR_UpdateRect(double space);

    public static void ComputeVisibleRange(this IDimensionMetrics metrics, Rect visibleRect, Direction direction, out int first, out int second)
    {
        if (direction != Direction.Horizontal && direction != Direction.Vertical)
            throw new ArgumentException("direction must be horizontal (columns) or vertical (rows)", "direction");

        int min = metrics.Count;
        int max = -1;

        var initializeRect = direction == Direction.Vertical ?
            (CVR_GetRect)(() => new Rect(visibleRect.Left, 0, visibleRect.Width, 0))
            : (CVR_GetRect)(() => new Rect(0, visibleRect.Top, 0, visibleRect.Height));

        Rect rngRect = initializeRect();

        var updateRectSize = direction == Direction.Vertical ?
            (CVR_UpdateRect)((double space) => rngRect.Height = space)
            : (CVR_UpdateRect)((double space) => rngRect.Width = space);

        var updateRectPos = direction == Direction.Vertical ?
            (CVR_UpdateRect)((double space) => rngRect.Y += space)
            : (CVR_UpdateRect)((double space) => rngRect.X += space);

        for (int i = 0; i < metrics.Count; i++)
        {
            double space = metrics.GetSpace(i);
            updateRectSize(space);

            if (rngRect.IntersectsWith(visibleRect))
            {
                min = Math.Min(i, min);
                max = Math.Max(i, max);
            }

            updateRectPos(space);
        }

        first = min;
        second = max;
    }
}

In IDimensionMetrics, GetStart gives the starting position of a row or column, and GetSpace gives the space that it occupies. The extension method GetEnd returns the result of GetStart + GetSpace for a given column. TotalSpace is the sum of all GetSpace values for all rows/columns in the IDimensionMetrics. It could have also been computed as an extension method, but I decided it would be better as a property so that the IDimensionMetrics implementer could precompute it.

The HitTestNoSizing and HitTestWithSizing extension methods help determine which column or row a given point is over (for mouse hit testing). The former ignores sizing grips, while the latter will indicate the proper position for a sizing grip (currently hardcoded to a neighborhood of 3 device-independent pixels on each side of the sizing line).
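To make the hit-testing contract concrete, here's a self-contained throwaway sketch (my own example, not grid code - UniformMetrics and the trimmed-down interface are mine) that exercises GetEnd and HitTestNoSizing with uniform 20-unit rows:

```csharp
using System;

// A trimmed-down stand-in for IDimensionMetrics, just enough for the demo.
public interface IDimensionMetrics
{
    double GetSpace(int index);
    double GetStart(int index);
    int Count { get; }
}

// Five rows, each 20 units tall (example only).
public class UniformMetrics : IDimensionMetrics
{
    public double GetSpace(int index) { return 20; }
    public double GetStart(int index) { return index * 20; }
    public int Count { get { return 5; } }
}

public static class DimensionExtensions
{
    public static double GetEnd(this IDimensionMetrics metrics, int index)
    {
        return metrics.GetStart(index) + metrics.GetSpace(index);
    }

    // Same algorithm as HitTestNoSizing above: find the row/column containing v.
    public static int HitTestNoSizing(this IDimensionMetrics metrics, double v)
    {
        for (int i = 0; i < metrics.Count; i++)
        {
            double ofs = v - metrics.GetStart(i);
            if (ofs >= 0 && ofs <= metrics.GetSpace(i))
                return i;
        }
        return -1;
    }
}

public static class Demo
{
    public static void Main()
    {
        var m = new UniformMetrics();
        Console.WriteLine(m.HitTestNoSizing(45));  // inside row 2 (spans 40..60): prints 2
        Console.WriteLine(m.GetEnd(2));            // 40 + 20: prints 60
        Console.WriteLine(m.HitTestNoSizing(150)); // past the last row: prints -1
    }
}
```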

Selection Info

First of all, my grid only supports selection of rows / columns / cells in the scrolling area of the grid. For that reason, I have a single SelectionInfo binding for all regions, and the columns / rows used by the SelectionInfo members (FirstCol, FirstRow, LastCol, LastRow, etc.) are relative to the top left corner of the data region (HVScrollRegion). If I needed support for selecting within the fixed rows and columns, then there would be some additional complexity in my code but it could be supported. I think of the fixed rows and fixed columns as essentially being 'extended' headers, so it doesn't make sense to select them, or have an active cell in these regions (just as it doesn't make sense to be able to make the active cell be the 'C' header in the C column of Excel!).

As with my other stuff, selection info has an interface that exposes the functionality required by the system - ISelectionInfo, defined as follows.

public enum SelectionType
{
    // ... members not shown ...
}

public interface ISelectionInfo
{
    SelectionType SelectionType { get; }
    int FirstRow { get; }
    int FirstCol { get; }
    int LastRow { get; }
    int LastCol { get; }

    int AnchorRow { get; }
    int AnchorCol { get; }

    int ActiveRow { get; }
    int ActiveCol { get; }

    void SelectColumn(int column, bool extend);
    void SelectRow(int row, bool extend);
    void SelectCell(int row, int column, bool extend);
    void SelectAll();
    void Clear();

    int MaxRow { get; }
    int MaxCol { get; }

    event EventHandler SelectionChanged;
}

I'm still not sure whether this interface will remain the same forever; I might change it to act more like Excel (i.e. removing the distinction of SelectionType and just making the different selection rendering be handled by comparing FirstRow/FirstCol with 0, and LastRow/LastCol with MaxRow/MaxCol). The interface is pretty self-explanatory, except for AnchorXXX and ActiveXXX. ActiveXXX is used to track where the keyboard has you on a keyboard-based selection extension (i.e. you hold shift and move around with the keyboard). AnchorXXX is used to track the starting cell for the selection. When moving away from a selection (without the shift key held), AnchorXXX is the position from which you start. This is counterintuitive to me, but it's how Excel works, so I've replicated it.

Notice that ISelectionInfo is just the 'keeper' of the selection, and provides some methods for modifying the selection, but it doesn't have anything to do with the keyboard or the mouse. The support for modifying the selection via keyboard or mouse is isolated in the KeyboardManager and MouseManager classes, discussed in a later post from this series.

Render Info

Within each of the regions, we need to be able to obtain and change the cell text, get the render flags (i.e. is it selected, is it a special cell, etc.), get the text alignment, and some other special info for the grid region. The interfaces involved are IRegionRenderInfo and IButtonInfo. The applicable definitions are as follows.

public enum CellRenderFlags
{
    None = 0,
    TopLeft = 1,
    ColHeader = 2,
    RowHeader = 4,
    FixedCol = 8,
    FixedRow = 16,
    Active = 32,
    Selected = 64,
    Hover = 128,
    Anchor = 256,
}

static class CellRenderFlagsExtensions
{
    public static bool Contains(this CellRenderFlags target, CellRenderFlags flag)
    {
        return (target & flag) == flag;
    }
}

public interface IRegionRenderInfo
{
    CellRenderFlags GetCellFlags(int row, int col);
    string GetCellText(int row, int col);
    void SetCellText(int row, int col, string text);
    TextAlignment GetTextAlignment(int row, int col);

    Orientation ButtonOrientation { get; }
    IButtonInfo GetButtonInfo(int index);
    int ButtonCount { get; }

    event EventHandler ResetRenderInfo;
}

public interface IButtonInfo
{
    ... <removed for brevity> ...
}

We will ignore IButtonInfo and the associated bits of the IRegionRenderInfo interface for now and focus on the rest of the IRegionRenderInfo members. The cell render flags are various pieces of info that allow the CellRenderer class (discussed in a later post) to do its work. The CellRenderer also needs to know the text alignment for the cell and the cell text itself. The SetCellText method is provided for the in-place editor that allows modification of the cell text.
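As a usage sketch, a renderer might combine these flags at a call site like the following. The brush choices and variable names here are just for illustration, not my actual CellRenderer code.

```csharp
// Hypothetical renderer snippet - shows how the Contains extension reads in use.
CellRenderFlags flags = renderInfo.GetCellFlags(row, col);
Brush background = SystemBrushes.Window;
if (flags.Contains(CellRenderFlags.ColHeader) || flags.Contains(CellRenderFlags.RowHeader))
    background = SystemBrushes.Control;    // headers get the 'button' look
else if (flags.Contains(CellRenderFlags.Selected) && !flags.Contains(CellRenderFlags.Active))
    background = SystemBrushes.Highlight;  // selected, but not the active cell
```

The extension method keeps the flag tests readable compared to writing out `(flags & CellRenderFlags.Selected) == CellRenderFlags.Selected` everywhere.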

This concludes our discussion of the data binding interfaces used by the grid.

Thursday, June 12, 2008

MSDN Articles Online (.chm)

Glenn Block (@gblock on Twitter) just posted a tweet that mentioned the existence of these.  I had no idea they were there, and I'm sure others didn't either - so here you go:


Sunday, June 8, 2008

Extension Methods are way cool!

OK.  So everyone's heard of LINQ by now.  Most everyone has even heard of some of the cool features of C# 3.0 (lambdas), but in my mind, the coolest - extension methods - largely goes unnoticed.  Extension methods are the plumbing on which LINQ and some of the other cool features in the C# 3.0 libraries are implemented.  They are, in my opinion, the best feature C# has introduced since Generics, and are possibly one of the best features added to traditional languages EVER!

Consider the following - you have a class that someone else wrote.  On their class, they've provided a public interface for doing all of the things you need, but there are several additional things that you've implemented (as a separate set of utility functions) that it would be nice to add to the class's public interface.  Unfortunately, the class is marked 'sealed', or it is the base of a large hierarchy of classes that you simply can't add your functionality to (since you can't make classes in a vendor's library derive from your 'new' version of their base class).

Extension methods to the rescue - all you need to do is declare a static class in your library (which you probably already have called 'StringUtils' or something like that :)), and provide some static methods on it that use the new 'this' keyword on their first argument.  Magically, the compiler will then 'add' this method to all items that have a type that is compatible with the type you have in the 'this-marked' argument.

For example:

public static class StringUtils
{
    public static string RemoveAll(this string s, params string[] args)
    {
        string ret = s;
        foreach (string sremove in args)
            ret = ret.Replace(sremove, string.Empty);
        return ret;
    }
}
By the way, of course I know this is the most horrible way to implement this function - it's just an example, so don't tell me how crappy my code is or that I should be using StringBuilder or yada yada yada...!

The point of this is that after declaring such a function, all objects of type 'string' syntactically receive a member called 'RemoveAll' that has a single 'params' argument.  This is VERY cool.
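For instance, with the extension method above in scope (and its namespace imported), the call site reads as if RemoveAll were declared on string itself:

```csharp
// The compiler rewrites this call into StringUtils.RemoveAll("mississippi", "s", "p").
string cleaned = "mississippi".RemoveAll("s", "p");
// "mississippi" -> strip 's' -> "miiippi" -> strip 'p' -> "miiii"
```

The method is still just a static call on a static class; the 'this' keyword only changes the syntax you get at the call site.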

The coolest thing about this - you can also do it for interfaces, enums, and various other types that you can't possibly provide "code" for in a more traditional way.

What else?

Much of the code that I write on a day-to-day basis works with tree-based data structures.  Some of these structures can get very complicated and much of my unit test code needs to do asserts over a large part of a tree (after performing some complex operation).  As a for instance, consider an expression parser.  Such a parser would presumably build an AST for the expression it's given and return that AST for further processing.  ASTs for all but the most simple expressions can get very tedious to 'check' for validity when writing a parser.

I've recently begun using extension methods on my base node class to help with my unit testing.  I put the unit test 'asserts' directly into the extension methods, and these extension methods are in NO WAY suitable to exist in the library being tested (why on earth would I want all this extra junk in my library just to support unit tests?).  As a matter of fact, my libraries even target .NET 3.0 (C# 2.0) rather than .NET 3.5 / C# 3.0.  However, that doesn't stop me from using extension methods in my unit testing code (which doesn't get deployed to my clients, so I don't require them to have 3.5; I just have to have it on my dev machine and build machine).

Here's a simple example of how some of my unit testing code looks:

public void FormulaTests2()
{
    PrimaryLexer l = new PrimaryLexer();
    StringReaderAdapter sra = new StringReaderAdapter("a / (b + c)", 0);
    InforceScriptLexerFilter lf = new InforceScriptLexerFilter(sra, l);
    InforceScriptSemanticParser sp = new InforceScriptSemanticParser(lf);

    RootFormula rf = sp.Parse();

    // look for 'a' and '/'
    // look for 'b' and '+'
    // look for 'c'
}

In order to make all this possible, I defined a few extension methods:

internal static class TreeAssertions
{
    public static T Is<T>(this ExpressionBase node) where T : ExpressionBase
    {
        Assert.IsInstanceOfType(typeof(T), node, "wrong node type");
        return (T)node;
    }

    public static IdReference NameIs(this IdReference node, string name)
    {
        Assert.AreEqual(name, node.Id.Name, "names don't match");
        return node;
    }

    public static BinaryOp OperatorIs(this BinaryOp node, InforceScriptTokenId op)
    {
        Assert.AreEqual(op, node.OperatorTokenId);
        return node;
    }
}
As you can see, the 'Is' test checks the type of a node, and then returns the node, so I can continue checking other things for the same node (assuming it 'passed' the check).  The same is true for the 'NameIs' and 'OperatorIs' checks.  This sort of programming is generally referred to (I think) as 'Literate Programming' - a technique for which the venerable D. Knuth is given the credit.  However, in order to do this sort of thing in the past, I'd have needed to put all these methods on my base class for the tree nodes, something that would have absolutely been the 'wrong' thing to do (since this test code should not be part of the library-proper).  (By the way, I think this style is now being referred to as 'fluent interfaces' in programming circles).
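Putting it together, the elided asserts in FormulaTests2 above read as chained checks. Here's a hedged sketch of what that might look like for "a / (b + c)" - the AST property names (Root, Left, Right) and the token-id values (Divide, Plus) are assumptions about my node classes, not code from the library:

```csharp
// Hypothetical chained assertions over the AST for "a / (b + c)".
BinaryOp divide = rf.Root.Is<BinaryOp>()
    .OperatorIs(InforceScriptTokenId.Divide);
divide.Left.Is<IdReference>().NameIs("a");

BinaryOp plus = divide.Right.Is<BinaryOp>()
    .OperatorIs(InforceScriptTokenId.Plus);
plus.Left.Is<IdReference>().NameIs("b");
plus.Right.Is<IdReference>().NameIs("c");
```

Each check returns the (downcast) node on success, so the next check picks up exactly where the last one left off - that's what makes the tree walk read fluently.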

I can't wait to see what else I can find to use these methods for.  I've already found it to be an amazing benefit to my productivity and the readability of my tests.

Thursday, June 5, 2008

CI / TeamCity is Seriously COOL!

OK... I'll be the first to admit that I'm just getting into Agile processes and I'm still a bit skeptical.  At first, I thought 'CI (Continuous Integration) builds - are they really worthwhile?'.  Now, I've got a TeamCity site & build agent up and going, and I'm totally SOLD!

Here are the benefits as I see them for our situation:

  1. We know almost immediately when someone broke the build (they know too!)
  2. We have better check-in quality now that people are tired of getting those 'compilation failed' emails.
  3. We always have a source to go to for a 'current' build - no need to get the sources and build on your own machine, or go ask a 'build master' to get you a build.
  4. We have other 'automation' points that we can hook into when we're ready to move on to bigger & better methods.

As an example of #4, I hope to soon have our NUnit tests running as part of an automated build.  I also think we could have automated installer builds if we wanted to.  And, best of all, by virtue of TeamCity's ability to 'watch' our source control server for updates, and its ability to run any arbitrary command line, NAnt, or MSBuild (or many more) task in response to those updates, the sky is the limit!

I can't wait to get more 'good stuff' implemented on TeamCity.

Wednesday, June 4, 2008

The Day of Bugs

Ok, I can honestly say that today was one of the weirder days that I've had in a long while.  I don't know about others, but I can say with confidence that I've never personally identified a bug in Visual Studio in my career.  I've seen plenty of them mentioned by other folks, I've seen 'features' that I'd be inclined to call a bug (but could be interpreted either way), but I've never really found a bug myself.

Today, I found two.  I guess it's a case of 'when it rains, it pours'.  One of them was known long before I 'found' it, but obviously not known to me.  The other, I'm pretty confident, is still unknown to 'everyone'.

Bug #1 - Dynamic Version vs. BAML.

Ok...  So we've done a fair bit of playing with WPF on my project, and we've done some custom control development (user controls) in WPF for use in our application.  WPF is very convenient for being able to prototype and design the UX/UI of something without being bogged down by all the crap you have to do to customize WinForms (our users seem to never like the 'way it is').  I can say that I feel pretty comfortable that my skills with WPF, while not the best on the block, are probably up there along with most of the folks currently doing WPF development.  I've done a ton of data binding work, and feel pretty confident that I know most of the tricks there - especially thanks to the wonderful work of Bea Costa!

So, I was very puzzled when one of our developers started having issues with running our application's forms that use one of my user controls.  The user control was pretty simple - it was a list of names that had alternating highlights for the rows (one row was white, one was gray, etc.).  It did a few other things, but mostly that was the gist of it.  This is one of the simpler controls we have.  Anyway, the weird thing was that the 'bug' that we kept seeing only appeared when running the debug build of our application, and it only appeared when our UI was being used from the APL application.  It never appeared in release builds, and it never appeared when running debug mode in our UI test harness.

So, naturally, I looked first at the APL runtime, thinking it was a bad install on this dev's machine.  We then took his build and his APL workspace and ran it on my machine.  To my surprise, it crashed on my machine too.  Then, we tried running one of my builds on his machine - it worked (also to my surprise!).  So then, I concluded it was a problem with his machine.

Two days later, after he had gotten some other work done and managed to uninstall all of .NET 2.0 through 3.5, VS2005 and VS2008, and then reinstall all of them (carefully in order), he tried it again.  BOOM!  It still didn't work.  I brought him my old laptop, and I had IT set it up for him to be able to use it instead of his desktop, thinking we'd be rebuilding his desktop from scratch.  All the while, still being puzzled by the fact that the behavior ran around to different machines and environments and was so skittish.

Later that day, he came over to my desk and told me the problem started appearing on his release builds too.  I thought, "oh great - a viral bug!".  He then said that the problem also started appearing on my builds.  At this point, I thought - "ok, there's gotta be something else going on here".

The bad part about this bug was this - whenever you ran the application, it would look like it wanted to pop up an error dialog, in fact it would show the thread exception dialog (System.Windows.Forms.ThreadExceptionDialog) briefly (actually several of them on top of each other), but then the application would disappear before you could do anything.  Apparently, looking back, the problem was on one of WPF's "special" threads and APL apparently doesn't react very nicely to the .NET AppDomain having threads other than the main UI thread throw exceptions.

Finally recognizing that I might be able to do something about this, I went into the code and added an exception handler with a plain-old message box in it (e.ToString()).  Looking at the exception text, I saw that it said something about a XAML parse error and that my ValueConverter couldn't be loaded (I had a ValueConverter as a static resource in my XAML for getting the backcolor brush for the highlighting).  This error message pointed me to Rob Relyea's blog post (along with several MSDN forums posts).  The only thing was that his post didn't apply completely to my issue - but the workarounds did.  It turned out that I was using AssemblyVersion(1.0.*) in my files (which I really like for our 'in dev' work), but it was causing problems.  It seems the reason the bug was so 'fleeting' was a timing issue with the fourth component of the version number (the revision), since it's generated from a timestamp.

Apparently, my computer is too fast (most of the time), so I didn't see this bug on my builds, just on my colleagues'!  As I said, this bug has been known about for a long time, and while I'm not thrilled with the workaround, it's there and working, so I'll live with it.
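For reference, the change is just one attribute in AssemblyInfo.cs - the wildcard form makes the compiler generate the build and revision numbers on every compile, and pinning the version is the workaround (the exact pinned value is up to you):

```csharp
// What I had: build/revision auto-generated per compile (revision comes from a timestamp).
[assembly: AssemblyVersion("1.0.*")]

// The workaround: pin the version so the references baked into the BAML stay stable.
[assembly: AssemblyVersion("1.0.0.0")]
```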

BTW, I spent nearly 4 days chasing this bug, off and on (my colleague did most of the legwork).  It wasn't much fun, being that we couldn't get an error message for the first 3 of those.

Bug #2 - VS2008 STL vs. NUnit, C++/CLI, and ME!

So, I spent the last 4 days chasing what I thought must have been a bug in our code, only to figure out that I'm pretty sure it's a bug in MS's STL implementation (for Debug builds).  However, this bug is really hard to find (though at least it's VERY consistent - it happens every time and is easily reproducible).

First, some background on our design.  Our application consists of two major pieces, a UI, and a backend calculation engine.  The UI is written in .NET 3.0/3.5 (C#), and the backend is written in C++ (native).  However, these systems need to be able to share a common file format.  To meet this need, we developed a generic file library in native C++ code that can be used to read/write files for our system.  The files are basically like MS's structured storage, but with the features and interfaces that we desired for our system (along with a design oriented towards meeting our required performance characteristics).

All of our file structures are built upon this file library, along with several other native libraries that support it and some of the other 'shared' features.  We also have a generic 'key' library (this is a domain concept for us - you can think of it as a property bag with some special 'matching' features).

In order to support using these libraries from both C++ and C# code, we decided we'd write the implementations in native C++ using traditional object-oriented design principles (many of which were lifted from my C#/Java experiences), and then write a thin C++/CLI (the managed C++ language) wrapper around this library using the IJW (it just works) interop supported by C++/CLI.  Then, our C# clients would call into the C++/CLI managed library (not even knowing that it's implemented in C++) just as they would call into managed C# code, but be able to use the underlying data structures and implementation of the native libraries.  I think this design is pretty elegant, and we solved quite a few interesting issues when developing it.  We've been using it now for some time and it's working quite well.

So...  now, the bug.  Just recently we needed to add a new feature to the 'key' library.  This library is very simple and the feature was also quite simple.  I added it to the native code, added it in the managed C++/CLI library, and added the C# unit test code to exercise it.  By convention, we only run our unit tests in Release mode, unless we're debugging them, since we only want to take the time to test in one build environment and it makes most sense to test the bits that are going out the door...

Anyway, so I tested in release mode, the tests all passed, and I checked in.  I was happy and I went home for the day.  The next day, I happened to be compiling in Debug mode and I decided to run what I was working on.  I had forgotten that my startup project in VS2008 was set to the unit tests for the 'key' library, so the unit tests ran instead of what I was intending to test.  To my surprise, the unit tests for the 'key' library 'exploded' (they didn't fail, they caused an AccessViolationException that was caught by VS2008 and popped up in the debugger!).  The AV was showing up on the destructor call for one of our unmanaged C++ objects (native library).

To make matters worse, the bug only showed up when the finalizer was called for the class, and even weirder, only when the objects were not deterministically destroyed (i.e. not disposed via IDisposable and a 'using' block).  Since my unit test code wasn't using 'using' anywhere, I saw the 'explosion'.  But I saw it only when the finalizer was called (much later than the offending code, obviously).  I invested some time writing code to track down which object was the culprit, and after figuring out which (using 'value numbering', a technique I use a lot in single-threaded debugging of applications with lots of object instances and no unique identifiers in them), I followed the code.  It was not at all obvious why there was a problem.  In fact, I looked at it for several days and couldn't figure out what the problem was.

I then posted here, and called my Architecture Evangelist at MS and talked with him about it, and still couldn't figure out what it was (the spoiler is the last post in the thread).  I finally had to resort to the 'commenting' technique to track down the bug.  I first commented out all the unit tests and started adding them back in one by one.  Once I found which unit test failed, I commented out the entire test and started adding code back in block by block.  Once I found the offending block, I looked down into the C++ code and STILL couldn't find anything wrong with it.

At that point, I decided I needed to think outside the box.  I looked at what was different between the various method calls that worked and didn't work, and decided that the throwing of the exception might be a problem.  First, I removed all the code in the offending method.  Now, my test failed, but it didn't explode.  So, I put the code back, and looked at the exception more thoroughly - I decided to move it to the head of the function.  That also caused my test to fail, but it didn't explode.  So then I concentrated on the code before the exception (in the original function).  I kept saying to myself - there's NOTHING wrong with this code!  (you can see the code in the MSDN post).  Finally, I thought - "it has to be this code, so let's assume it's broken and figure out when and why".  I then restructured the code (as described in the MSDN post) and determined that it was absolutely something in the 'begin()' or 'end()' STL vector calls.  There's no way I messed that up - that's their code.  Voilà - bug #2.

For this one, I'm going to have to figure out how to submit it to MS.  I'm sure nobody's seen this one (at least as far as I can tell from searching).

Whew!  Looking forward (hopefully) to not having very many more of those days!

Saturday, May 31, 2008

Redistributing C++ runtime components

Ok.  I've had to look for this several times over the years.  This time, I finally decided to just post it on my blog so I can find it again later :)

Here it is

Hopefully it helps you sometime too...

Thursday, May 29, 2008

StL gave Oakwood 1/2 Million Dollars???

What the hell?

Article in recent STL Business Journal:

"St. Louis development fund invests in Oakwood Systems. Oakwood Systems Group Inc. received a $500,000 investment for expansion from the St. L Business Development Fund, which invests in St. L based companies that are unable to access sufficient senior debt to finance organic growth or acquisition. Creve Coeur MO based technology consulting firm Oakwood Systems is one of the fastest growing private companies in the St. Louis area, according to Business Journal research. The company reported $7.5 million in fiscal 2006 revenue, a 92.3 percent increase from fiscal 2004 revenue of $3.9 million. The firm has 85 employees and an office in Nashville, TN."

If this isn't the funniest thing I've heard in a long time - it's gotta be up there. Of course this will only be funny to those of you who have heard my stories about my time at Oakwood and the goings on there.

Monday, May 26, 2008

Patching from a single .exe

We've been doing some work lately to try to figure out a good way to ship our software patches as a single .exe. We aren't using MSI for our patches because we need to be able to do some things that MSI can't do (i.e. install older patches over newer versions, verify the current system version before installing, always install files regardless of whether they have been modified by the user, etc.).

So, I was thinking about different approaches. As I saw it, there were:
  1. Package everything in a zip, require users to download the zip and extract it into a temp folder, and run a .exe inside the zip that does the patch. Then require the user to delete the folder on their own.
  2. Build an .EXE that has resources in it for the files that were to be included in the patch - have the .exe extract these resources to disk in the proper locations and perform the other logic necessary.
  3. Build a generic packaging process, package stub, and 'package runner' that can solve the problem as well as other similar problems should we ever want to package anything else as a single .EXE (besides patches).

I didn't like (1) for the UX - I think it's pretty nasty UX-wise. I didn't like (2) because there are several problems - (a) the person who would be building the patches doesn't really work in VS, and I don't want them compiling the code. I also don't want to write code to 'package' the resources - it's a pain. (b) I already know one other case where we will need to package stuff into a single .exe for deployment that isn't the same as our patch process, so I'd have to build a similar project again - I hate duplicate code...

I decided to go down the path of (3). I knew from my Win32 days that the .EXE loader will be perfectly fine with an .EXE with 'junk' concatenated to the end of it. I've done this several times in the past - the PE format doesn't have any problem with 'extra' bits at the end of the file. So, I decided to go with the idea of a stub .EXE with a 'package' tacked on the end.

The next question was - what should my package support? I decided that a package would be a 'manifest' (an XML file that lists the package contents), along with a stream of bytes for the package data. For my needs, it was sufficient if every package contained (1) a list of files, (2) an entry assembly, (3) an argument to pass to the entry assembly when running it, and (4) a list of assemblies upon which the entry assembly depends. The entry assembly is loaded and executed by the package stub (.EXE) when the .EXE runs.

So, the file looks like:

Stub .EXE
Package Data Bits
Package Manifest
Package Trailer

Where the data bits are GZip-compressed streams of bytes corresponding to the items listed in the manifest, and the Package Trailer is a fixed-length 'header' (trailer, actually) that identifies this file as a valid package (i.e. has an identifying 'magic number') and contains offsets of the various items within the file (i.e. the location of the first byte of data bits, and the length of the manifest).
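To illustrate, here's a hedged sketch of reading such a trailer from the end of the file. The field layout, lengths, and magic value here are made up for the example - the post doesn't spell out the actual format:

```csharp
// Illustrative only - the real trailer layout and magic number are assumptions.
const long TrailerLength = 20;        // 8 + 8 + 4 bytes, fixed, at the very end
const uint PackageMagic = 0x504B4721; // hypothetical magic number

using (FileStream fs = File.OpenRead(exePath))
using (var reader = new BinaryReader(fs))
{
    fs.Seek(-TrailerLength, SeekOrigin.End);
    long dataOffset     = reader.ReadInt64();  // first byte of the GZip'd data bits
    long manifestLength = reader.ReadInt64();  // length of the XML manifest
    uint magic          = reader.ReadUInt32();
    if (magic != PackageMagic)
        throw new InvalidDataException("Not a package - just a plain stub .EXE.");

    // The manifest sits between the data bits and the trailer.
    fs.Seek(-TrailerLength - manifestLength, SeekOrigin.End);
    byte[] manifestXml = reader.ReadBytes((int)manifestLength);
}
```

Reading from the end is what makes the scheme work: the stub .EXE never needs to know its own compiled size, and the PE loader happily ignores everything past the end of the image.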

Since my support code for this functionality is in a class library (I called it 'Packaging'), the class library is embedded in the stub .exe as a resource. The stub performs the following steps when it is run:

  1. Load the resource "Packaging" and call Assembly.Load on the byte array.
  2. Set up a 'resolver' to resolve this assembly for the other assemblies that might be loaded by this package.
  3. Unpackage the dependencies of the entry assembly and load them as well.
  4. Unpackage the entry assembly.
  5. Look for a 'Package' class in the global namespace of the entry assembly, and cast it to an IPackage (defined in the Packaging assembly).
  6. Call the 'Execute' method on the IPackage from (5), passing an IPackageHost interface that provides functionality for reading the contents of the package and interacting with the stub.

(2) is an event handler attached to the AppDomain.CurrentDomain.AssemblyResolve event.
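Steps (1) and (2) together are the standard embedded-assembly trick; a minimal sketch looks like the following (the resource name "Packaging" matches the description above, but the exact code here is illustrative, not the actual stub):

```csharp
// Minimal sketch of the stub's load-and-resolve steps - details are illustrative.
Assembly packaging;
using (Stream s = Assembly.GetExecutingAssembly()
                          .GetManifestResourceStream("Packaging"))
{
    byte[] raw = new byte[s.Length];
    s.Read(raw, 0, raw.Length);
    packaging = Assembly.Load(raw);                  // step (1)
}

AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>  // step (2)
{
    // Hand the already-loaded Packaging assembly to anyone who asks for it;
    // returning null lets normal probing continue for everything else.
    return new AssemblyName(args.Name).Name == "Packaging" ? packaging : null;
};
```

The resolver matters because the unpackaged entry assembly references Packaging by name, and without it the runtime would go probing the disk for a DLL that only exists as an embedded resource.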

The IPackageHost is an interface that allows the IPackage.Execute implementation to gain access to the entries (and bits) in the manifest. This host interface is provided by the stub, but can also be implemented by other code in order to do testing, extract packages, run them separately, etc. In fact, my Packaging library provides the implementation of IPackageHost via a PackageHost class that is used by the stub .EXE to provide the necessary support to the entry assembly's IPackage implementation.

If you want more details on how this all works, give me a call, or shoot me an email - I'll be happy to provide more details.

Saturday, May 24, 2008

Another great ALT.NET event!

Yet another great ALT.NET event today. Thanks Glenn for the conversation in the car on the way there (and congrats / good luck on your new job on MEF), thanks Justin for the space, and thanks everyone else for the great conversations.

Hopefully people are interested in coming to my place in July or early Aug for one.

I'll try to be more diligent about blogging some of my recent dev work. I think it's pretty interesting and I haven't found similar content elsewhere.

Monday, April 21, 2008

Coolest weekend ever.

Ok... the weather sucked - it was snowing here - but otherwise this weekend was awesome. I got to meet a lot of great people, and some real top minds in the field. Who did I meet? Among others - Brad Abrams (.NETfx team, coauthor of framework guidelines book), Jeremy Miller (the StructureMap guy), Glenn Block (P&P guy), Scott Hanselman (goes without introduction), Martin Fowler (the 'Refactoring' book guy), Charlie Poole (the NUnit guy) ... the list goes on (no offense please if I met you and didn't include you in this list).

I had a wonderful conversation with Brad for about 20-30 minutes over lunch. It's amazing to have someone like Brad come sit down next to you, introduce himself, and then share a one-on-one conversation with you. It'd be like Fischer Black coming and sitting down next to an actuary, introducing himself, and asking about your work. It's very hard not to feel intimidated in such a situation, but not only was Brad not intimidating, he's a genuinely nice guy to talk to. Hopefully I'll have the luck of meeting him again and having more conversations. hehehe... maybe he'll ask me if I want a job someday :) I can dream, right?

All in all, this is the best work-related event I've ever been to, and it's probably the best weekend of my working life thus far. I definitely consider the 30 minutes I spent with Brad to be the best 30 minutes of my career to this point.

If you ever hear of an 'open spaces' event, I highly recommend you jump on the opportunity to attend. Also, if you ever want to go to ALT.NET events, hopefully I'll see you there.

Monday, February 25, 2008

Interop, Interop, and more Interop

Well, we're sure doing our fair share of WinForms and WPF interop! We've really jumped on the WPF bandwagon, but we really like the functionality of our application and aren't willing to give up major parts of it in order to go 'all' WPF.

For instance, our Syntax Editor control is currently only available as a WinForms control. Also, my management really likes the Infragistics grid functionality that we're using for our selection grid in our formula browser tab. In addition, we really like our tabbed MDI style of windowing and our tool windows (pinnable tool windows like in VS2005).

Unfortunately, there are several ways in which WinForms doesn't play nice with WPF. Basically, the problem is that when WinForms draws on a window, it 'owns' that space on the screen. However, when WPF draws on a window, it is using DirectX compositing. This makes lots of things in WPF very nice, but in order to allow WinForms controls to 'live' on a WPF window, they must have total control over the space where they 'live'. This means that the WPF controls can not render 'on top of' the WinForms control. This is very well documented behavior and basically the only way the WPF team could figure out to make this work (as far as I can tell).

What does it mean for us? A whole lot of interop. Basically, we want to use WPF everywhere we can and would love to use it for our whole application. Currently we can't, so we use it where we can and hope for the best later. However, tool windows have that neat 'auto-hide' feature. We really liked the WPF 'SandDock' tab & dock controls from DivElements, but since our WinForms grid is in the main client area of our form layout, we couldn't use a WPF docking control: when you dragged the dockable windows around, or when they tried to 'flyout' from their 'auto-hide' position, they would do so UNDER the WinForms control! That sucks, but it's the only way it can reasonably work.

What's that mean? Our docking controls can only be WinForms controls until we get rid of all the WinForms controls that a flyout would want to fly over (and replace them with WPF equivalents). And since our docking and tab controls (they're the same control for us, since we've switched to Actipro's UIStudio) have to be hosted in WinForms, and you can't have a WinForms host inside a WPF host inside a WinForms host (WPF's interop facilities allow WinForms inside WPF or WPF inside WinForms, but not 'double' nesting), we MUST use WinForms for our main application. This sucks, since it means we lose a bunch of the nice features of WPF, like commands, command bindings, routed events, etc.

This also means that EVERY WPF control in our app must be hosted inside a WPF element host (interop) control on the WinForms form.
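Concretely, every one of those WPF controls ends up wrapped like this. ElementHost is the real WinForms-side interop control for hosting WPF content; the control and panel names in the snippet are placeholders for our own types:

```csharp
// Hosting a WPF control on a WinForms form via ElementHost.
// 'MyWpfUserControl' is a placeholder for one of our WPF user controls.
var host = new System.Windows.Forms.Integration.ElementHost
{
    Dock  = System.Windows.Forms.DockStyle.Fill,
    Child = new MyWpfUserControl()   // any WPF UIElement
};
winFormsPanel.Controls.Add(host);    // one of these per WPF island in the app
```

Multiply that by every WPF island in the app and you can see where the "hell of a lot of interop" below comes from.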

Yuck. That's a hell of a lot of interop.

Wednesday, February 20, 2008

Portrait Mode

So, have you guys heard of "Portrait mode" for monitors? If not, you should really check these out - here's an example of what it is:

It turns out that portrait mode is actually a feature of your video card, not your monitor. What makes it a "feature" of the monitor is simply that a monitor that supports portrait mode must have a stand that allows you to rotate the monitor.

So... it turns out that the Dell 20" Widescreen LCD panel that I've had on my desk in the Milliman office for 6 months now is actually a 'portrait-capable' monitor. Of course I had to hire a new employee in order to figure that out :)

Of course, after setting up my monitor at the office to do portrait mode, I decided I needed to check to see if my Samsung SyncMaster 225BW supported it (my home monitor). After doing a web search, it seemed that it didn't, so I did some investigating of my own. This monitor is absolutely awesome, and this is the very first design 'flaw' I've seen with it so far. However, it turns out to be a weirdly long-sighted and simultaneously short-sighted design!

The stand on the 225BW is removable, and has a square mount. Inside the back of the monitor, the metal substructure has four mounting studs. Woohoo! I say, it seems like I can rotate this thing after all. Unfortunately, there are two mounting ears on the stand that fit into some holes in the plastic case in order to ensure that the monitor doesn't pull away from the stand when tilting it (it only mounts with two screws - the top ones).

So, long story short, I pulled out some cutting tools and cut myself a couple of holes in the plastic case (it's backed by the metal substructure, which I verified before cutting so that I wouldn't damage any components). Voila! - I now have a portrait mode monitor! I can fit 351 code lines in visual studio on one screen given my current toolbar settings (and the error list/bottom windows unpinned) - WAY COOL! I can taste the productivity already!

Monday, February 11, 2008

Collections library for .NET

Here's a neat collections library somebody just mentioned in one of the newsgroups I frequent. I haven't had a chance to check it out much yet, but it seems very promising.

Saturday, February 9, 2008

Interesting Blog Post - SKU driven development

Jeff has a very good post here. Mostly, I like the bulleted list in the post, I think it's one of the best lists of 'must read topics' I've ever seen in one place.

Friday, February 8, 2008

BNF in code?

Ok... this is pretty cool.

I was very surprised this works. I wonder how expressive you can really be with this, but it's an interesting idea.
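I don't have the linked code in front of me anymore, but the general trick is operator overloading: overload operators like `|` and `+` on a 'rule' type so that plain C# expressions read almost like grammar productions. A toy sketch of the idea (the `Rule` type here is entirely made up for illustration):

```csharp
using System;

// Hypothetical 'Rule' type: '+' builds a sequence, '|' an alternation.
class Rule
{
    public string Text;
    public Rule(string text) { Text = text; }

    public static Rule operator +(Rule a, Rule b)
        => new Rule("(" + a.Text + " " + b.Text + ")");

    public static Rule operator |(Rule a, Rule b)
        => new Rule("(" + a.Text + " | " + b.Text + ")");
}

class Demo
{
    static void Main()
    {
        var sign = new Rule("sign");
        var digit = new Rule("digit");

        // number ::= sign digit | digit  -- reads almost like BNF,
        // and C# precedence ('+' binds tighter than '|') even matches.
        Rule number = sign + digit | digit;

        Console.WriteLine(number.Text); // prints: ((sign digit) | digit)
    }
}
```

How far you can push this before the grammar stops looking like BNF is exactly the "how expressive can you really be" question.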

Wednesday, February 6, 2008

NETfx source code...

Get it while it lasts!

Chris blogs about a tool that will let you get the whole distro in one fell swoop!

Thursday, January 31, 2008

Job Opening

We have a job opening for a C# developer on our MG-ALFA development team. If you're interested, contact me soon.

Executing code in another app domain.

Recently, we found out that the Infragistics controls we're using have some bugs in them that cause them to never be garbage collected once you've used them in your code. There doesn't seem to be any resolution short of asking them to fix the bugs, as the bugs are related to their code adding event handlers for receiving notification of changes in the office 2007 themes container (their internal class).

Anyway, we figured out that there isn't any workaround for this short of modifying and recompiling their code or waiting for them to fix their bugs. We don't have time for either, and it represents a HUGE memory leak in our application (each time our form is opened and closed, the UI sucks up 30-50MB of additional memory!), so we had to find another way.

So, our solution, you ask? We first built an out-of-process approach: executing our UI via IDispatch, implemented in an out-of-process .EXE COM server. This was a fair amount of work and was a really cool project in and of itself. If you're interested in it, let me know and I'll tell you how it works. Unfortunately, due to changes in Windows' rules for window activation and some plumbing issues, it turns out that the out-of-process approach won't activate the window when it starts up, and short of doing some really nasty hack, there isn't an easy fix for that.

Once we figured out that the out-of-process solution wasn't really the best answer to our problems, we decided to investigate further. Once we determined that the leak was on our .NET side of the fence (our .NET UI is called by a Dyalog APL-based user interface via their .NET interop support), we dug into the .NET side a bit more. That's when we found that the leak was caused by the Infragistics controls.

I did a proof of concept that created a separate AppDomain, called our .NET UI, and then shut down the app domain. This seems to fix our memory leak problem, as all memory associated with the app domain is freed (at least it seems to be). It turned out to be much harder to get this working than I had expected. In theory it's quite easy, but we have several deployment issues that make it very difficult: we can't put our DLLs in the GAC, and our application DLLs are not (and can't be) in the same folder as the 'application base' (because the base application that creates the main app domain is Dyalog.exe or DyalogRT.exe, and we don't want those in our application folder).

The process goes something like this:
1) create an app domain
2) load our assembly into the domain
3) create a serializable object with our arguments for the call to the assembly
4) call to the assembly (serializing the arguments) (blocking call)
5) the assembly returns from the call, returning the results in a serializable object
6) the calling code deserializes the object and continues as it normally would.

The problems in this are: (2) finding the assembly; (3) making the serialized object's assembly available to the target assembly; (4) getting a reference to a MarshalByRefObject to make the call against; (5) loading the result object's assembly in the source app domain.

The main issues are loading the assemblies on the caller's side, rather than the target side, amazingly! It seems like this should be the easy part, but for some reason the remoting infrastructure isn't smart enough to realize that the assemblies are ALREADY LOADED!

Anyway, the basics are like this:

  1. create an object that derives from MarshalByRefObject. Put this in an assembly that you don't mind having loaded in both app domains.
  2. define a method on this object that does the work you want done.
  3. make sure that the objects you pass to and from the method are marked Serializable (and can be serialized and deserialized - you can test this by using the BinaryFormatter to write and read the file to/from a stream).
  4. create a method somewhere in your main app domain that does the following:
    1. packages the parameters into the serializable objects
    2. creates an app domain using AppDomain.CreateDomain. We also pass a different appbase that is based on our assemblies' locations.
    3. sets up an event handler on the current app domain's AssemblyResolve event (this is only necessary if you can't get the app domain to properly resolve your assembly and instead your object comes back only as MarshalByRefObject, instead of what you expected).
    4. call CreateInstanceAndUnwrap on the target app domain, asking it to create an instance of your MarshalByRefObject derived object.
    5. cast the object from CreateInstanceAndUnwrap to your target object type.
    6. Call your method.
  5. Put code in the method handler (if you need it) for your AssemblyResolve event that iterates through the loaded assemblies and returns the one that was requested, if it matches by name. ResolveEventArgs.Name should be matched against Assembly.FullName. My code looks like:
    static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
    {
        Debug.WriteLine("Attempting to resolve assembly: " + args.Name);
        // search the loaded assemblies for this one.
        foreach (Assembly assy in AppDomain.CurrentDomain.GetAssemblies())
        {
            if (assy.FullName == args.Name)
                return assy;
        }
        return Assembly.GetExecutingAssembly().FullName == args.Name
            ? Assembly.GetExecutingAssembly()
            : null;
    }
  6. Write code to unload the AppDomain (call AppDomain.Unload(...)) (I do this in a 'finally' that follows a 'try' started immediately after the app domain is created)

This seems to do the trick. Let me know if you have any problems.
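Putting the steps above together, the skeleton looks something like this. This is a sketch, not our production code — `WorkerArgs`, `WorkerResult`, and `Worker` are stand-ins for your own types, and it assumes the .NET Framework (AppDomain.CreateDomain doesn't exist on .NET Core and later):

```csharp
using System;

// These types live in an assembly that gets loaded in BOTH app domains.
[Serializable] public class WorkerArgs   { public string Input; }
[Serializable] public class WorkerResult { public string Output; }

public class Worker : MarshalByRefObject
{
    // Runs inside the child app domain.
    public WorkerResult Run(WorkerArgs args)
        => new WorkerResult { Output = args.Input.ToUpper() };
}

public static class Isolated
{
    public static WorkerResult Execute(WorkerArgs args)
    {
        // Point the child domain's appbase at our assemblies' folder
        // (step 4.2 above); adjust the path for your deployment layout.
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
        };
        AppDomain domain = AppDomain.CreateDomain("worker", null, setup);
        try
        {
            // Create the MarshalByRefObject in the child domain and get a
            // proxy back; args and result cross the boundary by serialization.
            var worker = (Worker)domain.CreateInstanceAndUnwrap(
                typeof(Worker).Assembly.FullName,
                typeof(Worker).FullName);
            return worker.Run(args);
        }
        finally
        {
            // Unloading the domain frees everything it loaded -
            // this is what actually plugs the leak for us.
            AppDomain.Unload(domain);
        }
    }
}
```

If the unwrapped object comes back as a plain MarshalByRefObject instead of your type, that's when you need the AssemblyResolve handler from step 5.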

Here's where I got some of my information:

By the way, if you've never come across it, Suzanne's blog is a GREAT source of information.