Graph Colouring With Simple Backtracking, Part Two

Before I begin, a quick note: congratulations and best wishes to David Johnston, currently the president of my alma mater, the University of Waterloo. The Queen has appointed him to be the next Governor General of Canada come this October. For those of you unfamiliar with the Canadian political structure: Queen Elizabeth is the sovereign ruler of Canada; the Governor General acts as her direct representative in Canada and therefore has the (mostly ceremonial, but some real) powers of a head of state.

Moving on!

OK, we've got our graph class. Our basic algorithm for assigning colours will be to associate with each node in the graph a set of "possible colours", and then gradually reduce that set for each node until either we've proved that there is no possible colouring, or we've found one that assigns exactly one colour to each node. Therefore we will need a type that represents a set of possible colours.

First off, what type should represent a colour? There are already Color enums in the BCL, but is that really what we want? We're probably going to want to ask questions like "is there a colouration of this graph that uses only three colours?", rather than "is there a colouration of this graph that uses only green, blue and orange?"  The actual colours are kind of irrelevant. Let's say that a colour is simply an integer from zero to max-1. We can always make a mapping from integers to real colours if we want to.

Note that this design decision immediately introduces a point of possible confusion; last time we said that nodes in a graph were also represented as integers. If I were particularly concerned about that, I might devise two structs that each wrap an integer, say, GraphNode and GraphColour. Then I could let the compiler tell me if I were to accidentally assign something that is logically a colour to a variable that is logically a node.

This is a tough call. Basically we are trading off the time and effort of implementing and maintaining the unnecessary wrapper structs against the possibility of time and effort lost to accidental bugs. Does the cost justify the benefit? My feeling is that in this case, no. I'm not going to implement the wrapper structs this time.
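For illustration only, here is roughly what such a wrapper might look like. GraphColour is the name discussed above, but this particular shape is my invention, not code from the series:

```csharp
// Hypothetical sketch of the wrapper-struct approach; GraphColour is named
// above, but these members are an invented example, not code from the series.
internal struct GraphColour
{
  private readonly int value;
  public GraphColour(int value) { this.value = value; }
  public int Value { get { return this.value; } }
}
```

With GraphNode defined the same way, accidentally passing a colour where a node is expected becomes a compile-time error rather than a latent bug.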

OK, so a colour is a plain integer. I want to represent a set of plain integers. A HashSet<int> seems like a reasonable off-the-shelf choice, but it has a few problems. First off, it is mutable; part of the point of this exercise is to program in an immutable style. Second, it's heavyweight; in order to maintain a set of integers it is allocating arrays and hash buckets and all kinds of stuff.

If we're willing to constrain the problem we can do a lot better than a HashSet<int> to represent a set of integers. Suppose for example that we say that we're never going to try to colour a graph with more than 32 colours, which seems reasonable. Given that constraint, we can represent a set of integers in a single 32 bit integer, by simply using the integer as a bit set. Integers are already immutable, so it seems natural to make an immutable set out of an integer.
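To make the representation concrete, here is the encoding in isolation (my own illustration, not code from the post): the set {0, 2, 5} is simply the integer with bits 0, 2 and 5 set.

```csharp
// Bit n of the integer is set if and only if n is in the set.
int bits = (1 << 0) | (1 << 2) | (1 << 5);   // binary 100101, decimal 37
bool containsTwo = (bits & (1 << 2)) != 0;   // true: 2 is in the set
bool containsThree = (bits & (1 << 3)) != 0; // false: 3 is not
```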

Now, of course if that's our approach then we don't even need a type that wraps an integer. We could just use an integer. But, unlike in our example of the graph nodes and graph colours, we actually have operations we want to perform on this type, like "enumerate all your members". This is much more like a real abstract data type, so it's justifiable to make a real type out of it rather than simply using an integer directly.

// A super-cheap immutable set of integers from 0 to 31;
// just a convenient wrapper around bit operations on an int.
internal struct BitSet : IEnumerable<int>
{
  public static BitSet Empty { get { return default(BitSet); } }
  private readonly int bits;
  private BitSet(int bits) { this.bits = bits; }
  public bool Contains(int item)
  {
    Debug.Assert(0 <= item && item <= 31);
    return (bits & (1 << item)) != 0;
  }
  public BitSet Add(int item)
  {
    Debug.Assert(0 <= item && item <= 31);
    return new BitSet(this.bits | (1 << item));
  }
  public BitSet Remove(int item)
  {
    Debug.Assert(0 <= item && item <= 31);
    return new BitSet(this.bits & ~(1 << item));
  }
  IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); }
  public IEnumerator<int> GetEnumerator()
  {
    for (int item = 0; item < 32; ++item)
      if (this.Contains(item))
        yield return item;
  }
}

Let's again go through this line by line.

As with the graph class, I've made the type internal. Since I know that I'm the only one using this, I can be a bit more cavalier about not doing bounds checking with nice exceptions and whatnot. If this were a public type in the BCL then I'd have to do much more "retail" bounds checking to ensure that the items actually are integers from 0 to 31; instead, I'll just do that checking with debug mode assertions.

An immutable set of integers can logically be treated as a value, and this type is very small in terms of the data it contains, so I've made this a value type.

Unlike the graph class, here I am emphasizing the implementation detail of the set by naming the type BitSet instead of, say, SmallIntegerSet. Am I inconsistent? Yes, I contain multitudes. It feels like the right thing to do here is to emphasize that this is fundamentally a mechanism, rather than the carrier of semantics. I'm not sure why that is.

I could implement a more full-on interface like ICollection<int>, but that includes operations that I don't want, like the ability to mutate the collection. Rather than use an off-the-shelf interface in a confusing, off-label manner, I'll just implement the standard interface that I can implement correctly, IEnumerable<int>, and then implement my own Add and Remove methods that do the right thing.

I'm using the standard immutable object pattern of providing a static way to get at the empty collection. This is unnecessary; default(BitSet) gives you the right thing. But it reads much more nicely to say BitSet.Empty; it is clear from the code that you're getting an empty collection.

I've made the private data a signed integer rather than an unsigned integer. It's marginally safer to represent a bit vector as a uint because you run less risk of having a problem with sign bit issues. However, you also then have to do casts all over the show, which is ugly. I say that the math here is simple enough that we can verify that it is correct without worrying too much about sign issues.

The constructor which sets the internal state is private; this deprives the caller of the power of setting many bits at once. But if the caller has an int that has many bits set at once, then why are they using the BitSet in the first place? Apparently they're already willing to do bit twiddling in their own code to set those bits. Again the YAGNI principle says to not provide unneeded functionality.

The Contains, Add and Remove methods, as mentioned before, do not verify the validity of their arguments because the type is only used internally. Debug assertions tell the caller when an invariant expected by the type has been violated.

You might wonder why the GetEnumerator method does a loop with a yield return, rather than being implemented as

  public IEnumerator<int> GetEnumerator()
  {
    return
      from item in Enumerable.Range(0, 32)
      where this.Contains(item)
      select item;
  }

Do you see why I chose to use a loop for this one, despite my stated desire to eschew loops in favour of queries?

Notice that I chose not to implement special versions of sequence operators that are extension methods, even though I could. For example, there's no "Count" property on this thing. The naive implementation of Count() will call GetEnumerator, which as we've seen enumerates all 32 possibilities. But the count of the bit set is its Hamming weight, and there are many well-known algorithms for efficiently calculating the Hamming weight of a bit vector. We could for example use the algorithm commonly attributed to Kernighan (though it has been discovered independently many times throughout the history of computer programming). This algorithm repeatedly clears the lowest set bit until none remain, and tells you how many times it had to do that:

  public int Count()
  {
    int remaining = this.bits;
    int count = 0;
    while (remaining != 0)
    {
      remaining &= (remaining - 1);
      count++;
    }
    return count;
  }

The cost of this is proportional to the number of set bits, rather than always going through all 32. But who cares? 32 is a small number. Again, let's not add this complexity unless profile-driven empirical data suggests that naively counting the bitset entails an unacceptably high cost.
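A quick trace of the trick on a bare integer, independent of the type (my own illustration): each iteration of remaining &= remaining - 1 clears exactly the lowest set bit.

```csharp
// Counting the bits of 37 (binary 100101):
// 100101 & 100100 = 100100  (bit 0 cleared)
// 100100 & 100011 = 100000  (bit 2 cleared)
// 100000 & 011111 = 000000  (bit 5 cleared)
int remaining = 37;
int count = 0;
while (remaining != 0)
{
  remaining &= remaining - 1;
  count++;
}
// count is now 3: the loop ran once per set bit.
```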

Notice that in contrast I did choose to implement Contains() myself rather than using the extension method. Had I used the extension method then obviously I would not have been able to use Contains() inside GetEnumerator; we would have gotten a stack overflow. I would have had to put the bit twiddling logic in GetEnumerator, and since I was going to have to write the bit twiddling logic anyway, it seemed reasonable to put it in its own method for general use.

Once more we've built a trivial little piece of code, but to do so, had to make many design decisions along the way. The fundamental question that has driven the entire design is "is it reasonable that we'll never have more than 32 colours?" We could trivially expand that to 64 by using a long instead of an int, but going farther than that would be a lot trickier.

Next time, let's actually solve a slightly harder problem; let's make an algorithm that actually solves colouring problems.

Comments

  • Anonymous
    July 15, 2010
    Perhaps it's a YAGNI thing but it seems to me that for this particular type, Union(BitSet) and Intersection(BitSet) methods would be so simple to add and so much more efficient than the naive implementations using the enumerators that they'd be worth putting in up-front? Not having seen the coloring algorithm you are going to write, I don't know for sure that they will be used, but it seems entirely possible to me that they would be useful in such an algorithm. Good idea. But as it turns out, I don't need them. - Eric
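For the record, on this representation the operations the commenter asks about reduce to single bitwise operators on the underlying int. A sketch of the idea on bare integers (Union and Intersection are not part of Eric's type):

```csharp
// Union and intersection on the underlying bit representation.
int a = (1 << 1) | (1 << 3); // the set { 1, 3 }
int b = (1 << 3) | (1 << 4); // the set { 3, 4 }
int union = a | b;           // the set { 1, 3, 4 }
int intersection = a & b;    // the set { 3 }
```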

  • Anonymous
    July 15, 2010
    In regard to readonly - I'm not sure I'd consider it a lie. By virtue of being a "value" type, a struct represents a value, not a storage-location-for-a-value. When you say "s = default(S)" you're not assigning a new value to s.x in the same "instance" of S, you're replacing the entirety of s with a new "instance" of S (that happens to live in the same storage location). Your Evil method deliberately confuses the issue by its use of a ref parameter of the same type as itself, such that "this" and "s" refer to the same thing but "s" is writable. What that says to me is "don't have nonstatic methods on a struct that take a ref parameter of the same type as the struct", rather than "don't put readonly on structs". As long as you avoid evil methods, "readonly" isn't a lie at all.
    But that's just one way that you can make readonly lie; it looks contrived because it is contrived to show the problem in as little space as possible. But we can show this in many other ways:
    struct S
    {
      private readonly int x;
      public S(int x) { this.x = x; }
      public void Bogus(Action action)
      {
        Console.WriteLine(this.x);
        action();
        Console.WriteLine(this.x); // different!
      }
    }
    struct T { public S s; ... }
    T t = new T(new S(123), ... );
    t.s.Bogus(() => { t = default(T); });
    Now we have a method that no longer takes a ref to S, and the action only affects S indirectly. There are lots of ways that we can show readonly to be a lie, and any of them are sufficient to make it unreliable. Saying "as long as you avoid situations in which the mutation of a readonly field is apparent, mutations are not apparent" is a tautology; my point is that the mere fact that there are any such situations means that the feature is misleading. - Eric

  • Anonymous
    July 15, 2010
    What, no Enumeratour?

  • Anonymous
    July 15, 2010
    Eric, I love that you even mentioned using int-wrapper structs to guarantee compiler static type checking for integers which represent different domains. I've done exactly this thing for my company's model's identifiers using a T4 template to generate the struct code for each identifier type, e.g. we have StudentID, StaffID, CourseID, ClassID, etc. There are certain groups of similarly named models in our domain which could have their members be easily confused, e.g. Class and Course.

  • Anonymous
    July 15, 2010
    Eric, this is shaping up as an excellent series (no surprise there)... two questions/comments:

  1. Readonly value types... Would it not make sense to prohibit using a readonly as a ref (or out) parameter at the compiler/CLR level?
  2. Debug.Assert... I 100% agree with the logic that this is "robust enough for internal use", but why not use Code Contracts?
  • Anonymous
    July 15, 2010
    on the readonly aspect: For the exact reasons you describe I don't assume that they are immutable in that way. I do however think it makes programmer intent clear and so if someone comes along and decides the change things and messes up the immutability in some less subtle manner the compiler tells them about it. Anyone who writes the sort of code you show to expose the issue should know what they are doing (I don't mean that in the general sense, I mean anyone working for me who does it!) so the readonly is a benefit of both intent/sanity checking. I was surprised you didn't add it initially, though appreciate the subtleties involved.

  • Anonymous
    July 15, 2010
    A perhaps minor nitpick about personal preference: C# lets you ignore the return value of methods and has no const or similar annotation. That makes it particularly hard to tell an immutable from a mutable structure. I've seen code quite similar to this where people accidentally use such methods (IIRC, on the Rect structure from WPF) in a fashion that assumes the data structure is mutable when it actually isn't. It's not always easy to avoid that confusion, but in particular, I find that avoiding names that are commonly used for mutable methods helps. For example, Add/Remove could be called With/Without or similar. Another less-pretty alternative is to use static methods, e.g.:
    bitset = bitset.With(x).Without(y);
    // or
    bitset = BitSet.Remove(BitSet.Add(bitset, x), y);
    Immutability annotations or annotations for pure functions that could be used by the compiler to enforce that you may not ignore return values would be nice...

  • Anonymous
    July 16, 2010
    "now we have a method that no longer takes a ref to S, and the action only affects S indirectly. There are lots of ways that we can show readonly to be a lie, and any of them are sufficient to make it unreliable. Saying "as long as you avoid situations in which the mutation of a readonly field is apparent, mutations are not apparent" is a tautology; my point is that the mere fact that there are any such situations means that the feature is misleading. - Eric" Yes, I actually thought of that right after posting, but I felt like posting three comments in a row was a little excessive! :) Here's my question, though: Is it possible to mutate a readonly field of a value type without the value type doing something that exposes the ability to do it? In other words - in the case of your BitSet type readonly isn't a lie, because it doesn't expose any callbacks nor call anything else that knows about BitSet that could overwrite it. Can you write a method that mutates BitSet.bits without changing the code of BitSet itself? If so I retract all my comments :) I've always felt that the proper construction of value types is very difficult; that in order to write a value type that isn't confusing for consumers of the type you have to follow a very restrictive set of best practices, none of which are really laid out anywhere that I've found yet. Since I don't feel like I know the best practices well enough I usually avoid creating my own value types entirely. But I've always considered immutability to be vital; mutable value types confuse the heck out of me! So the idea of leaving off readonly makes me nervous. By virtue of your comments on this post I'd say I can add "don't expose callbacks within instance methods", "don't take ref parameters in instance methods" and "don't call any methods in other types that are aware of this type" to the list of best practices. 
But your comments also reinforce the "writing good value types and understanding their behavior is hard" attitude that I already had...

  • Anonymous
    July 16, 2010
    By the way, I feel dumb for needing to ask, but is the reason you implemented GetEnumerator() as a loop rather than a query because it returns an Enumerator rather than an Enumerable? Because if so, you could implement it as:
    var set = this;
    return
      (from item in Enumerable.Range(0, 32)
       where set.Contains(item)
       select item).GetEnumerator();
    I'm not sure what tradeoffs that introduces, but I think it also solves my issue of mutating the set, and probably reduces the amount of generated code...

  • Anonymous
    July 16, 2010
    "Do you see why I chose to use a loop for this one, despite my stated desire to eschew loops in favour of queries?" I think it is faster as you save the cost of enumerating the range and calling into the where-iterator. You still have one iterator though.

  • Anonymous
    July 18, 2010
    "It's not always easy to avoid that confusion, but in particular, I find that avoiding names that are commonly used for mutable methods helps.  For example, Add/Remove could be called With/Without or similar.  Another less-pretty alternative is to use static methods." How about operator + and -? bitset += x; bitset -= y; bitset = bitset + x - y; bitset1 += bitset2; // union - or could be |= Or does that go against C#'s operator-overloading philosophy? Seems to me it's not too different from adding a char to a string. other thought... it'd be nice if there were a way to make new BitSet { 1, 2, 3 }; work while still being immutable - that's always struck me as a deficiency in the way collection initialization is handled - there should be some way to use the syntax in a way that passes the arguments to a constructor (or maybe a factory method) instead of an .Add method
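For what it's worth, the operators the commenter proposes would be easy to bolt on. A hypothetical sketch against an abbreviated, redeclared copy of the struct, just so the example is self-contained; none of this is part of Eric's type:

```csharp
// Abbreviated stand-in for the BitSet above, redeclared here only to make
// the sketch self-contained; the operators are the commenter's suggestion.
internal struct BitSet
{
  private readonly int bits;
  private BitSet(int bits) { this.bits = bits; }
  public bool Contains(int item) { return (bits & (1 << item)) != 0; }
  public static BitSet operator +(BitSet set, int item)
  {
    return new BitSet(set.bits | (1 << item));
  }
  public static BitSet operator -(BitSet set, int item)
  {
    return new BitSet(set.bits & ~(1 << item));
  }
}
```

With these in place, bitset += x; and bitset -= y; read naturally and still preserve immutability, because compound assignment on a value type replaces the entire value rather than mutating it in place.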

  • Anonymous
    July 19, 2010
    Random832: The "new BitSet { 1, 2, 3 }" syntax could easily translate into "new BitSet().Add(1).Add(2).Add(3)". There's no need to use a constructor because if you wanted that you could have just typed "new BitSet(1, 2, 3)".

  • Anonymous
    July 19, 2010
    @Tanveer I think you miss the point. A person is making decisions when they're typing the code into the console. A professional (should) know the impact of their decisions. A novice makes decisions without understanding consequences. Also Eric, like anyone else, has preferences, and the compiler defaults are just that - compiler defaults. Some people might make a case for 'sealed by default' should be a default, but this isn't that forum (today). The real issue is a lot of the time developers don't realize what decision they're making, and what their code is communicating. I code review junior devs all the time, and most of the time, out of curiosity, when I ask them to justify why they did something in particular, they simply shrug their shoulders. Also, bear in mind that Eric has already planned or written the entire series, so it's obvious that some decisions he's documented at this stage may seem irrelevant and over the top. When you can see the story as a whole I'm sure they will make more sense. If it were me, I'd have written the tests first, but as a blog post, that's perhaps not quite as fun to read.