Debunking another myth about value types

Here's another myth about value types that I sometimes hear:

"Obviously, using the new operator on a reference type allocates memory on the heap. But a value type is called a value type because it stores its own value, not a reference to its value. Therefore, using the new operator on a value type allocates no additional memory. Rather, the memory already allocated for the value is used."

That seems plausible, right? Suppose you have an assignment to, say, a field s of type S:

s = new S(123, 456);

If S is a reference type then this allocates new memory out of the long-term garbage collected pool, a.k.a. "the heap", and makes s refer to that storage. But if S is a value type then there is no need to allocate new storage because we already have the storage. The variable s already exists and we're going to call the constructor on it, right?

Wrong. That is not what the C# spec says and not what we do. (Commenter Wesner Moise points out that yes, that is sometimes what we do. More on that in a minute.)

It is instructive to ask "what if the myth were true?" Suppose it were the case that the statement above meant "determine the memory location to which the constructed type is being assigned, and pass a reference to that memory location as the 'this' reference in the constructor". Consider the following struct, used in a single-threaded program (for the remainder of this article I am considering only single-threaded scenarios; the guarantees in multi-threaded scenarios are much weaker.)

using System;
struct S
{
    private int x;
    private int y;
    public int X { get { return x; } }
    public int Y { get { return y; } }
    public S(int x, int y, Action callback)
    {
        if (x > y)
            throw new Exception();
        callback();
        this.x = x;
        callback();
        this.y = y;
        callback();
    }
}

We have an immutable struct which throws an exception if x > y. Therefore it should be impossible to ever get an instance of S where x > y, right? That's the point of this invariant. But watch:

static class P
{
    static void Main()
    {
        S s = default(S);
        Action callback = () => { Console.WriteLine("{0}, {1}", s.X, s.Y); };
        s = new S(1, 2, callback);
        s = new S(3, 4, callback);
    }
}

Again, remember that we are supposing the myth I stated above to be the truth. What happens?

* First we make a storage location for s. (Because s is an outer variable used in a lambda, this storage is on the heap. But the location of the storage for s is irrelevant to today's myth, so let's not consider it further.)
* We assign a default S to s; this does not call any constructor. Rather it simply assigns zero to both x and y.
* We make the action.
* We (mythically) obtain a reference to s and use it for the 'this' to the constructor call. The constructor calls the callback three times.
* The first time, s is still (0, 0).
* The second time, x has been mutated, so s is (1, 0), violating our precondition that X is not observed to be greater than Y.
* The third time s is (1, 2).
* Now we do it again, and again, the callback observes (1, 2), (3, 2) and (3, 4), violating the condition that X must not be observed to be greater than Y.

This is horrid. We have a perfectly sensible precondition that looks like it should never be violated because we have an immutable value type that checks its state in the constructor. And yet, in our mythical world, it is violated.

Here's another way to demonstrate that this is mythical. Add another constructor to S:

public S(int x, int y, bool panic)
{
    if (x > y)
        throw new Exception();
    this.x = x;
    if (panic)
        throw new Exception();
    this.y = y;
}

We have

static class P
{
    static void Main()
    {
        S s = default(S);
        try
        {
            s = new S(1, 2, false);
            s = new S(3, 4, true);
        }
        catch (Exception)
        {
            Console.WriteLine("{0}, {1}", s.X, s.Y);
        }
    }
}

Again, remember that we are supposing the myth I stated above to be the truth. What happens? If the storage of s is mutated by the first constructor and then partially mutated by the second constructor, then again, the catch block observes the object in an inconsistent state. Assuming the myth to be true. Which it is not. The mythical part is right here:

"Therefore, using the new operator on a value type allocates no additional memory. Rather, the memory already allocated for the value is used."

That's not true, and as we've just seen, if it were true then it would be possible to write some really bad code. The fact is that both statements are false. The C# specification is clear on this point:

"If T is a struct type, an instance of T is created by allocating a temporary local variable"

That is, the statement

s = new S(123, 456);

actually means:

* Determine the location referred to by s.
* Allocate a temporary variable t of type S, initialized to its default value.
* Run the constructor, passing a reference to t for "this".
* Make a by-value copy of t to s.

This is as it should be. The operations happen in a predictable order: first the "new" runs, and then the "assignment" runs. In the mythical explanation, there is no assignment; it vanishes. And now the variable s is never observed to be in an inconsistent state. The only code that can observe x being greater than y is code in the constructor. Construction followed by assignment becomes "atomic"(*).
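The temporary is easy to see if you write it out by hand. This sketch uses a simplified two-field struct (S2 is an illustrative stand-in, and t is an illustrative name; the real temporary is compiler-generated and unnamed):

```csharp
using System;

struct S2 // simplified stand-in for S, for illustration only
{
    public int X, Y;
    public S2(int x, int y) { X = x; Y = y; }
}

static class TemporaryDemo
{
    static S2 s;

    static void Main()
    {
        // What "s = new S2(123, 456);" means, step by step:
        S2 t = default(S2);   // allocate a temporary, default-initialized
        t = new S2(123, 456); // run the constructor, mutating the temporary
        s = t;                // make a by-value copy of t into s
        Console.WriteLine("{0}, {1}", s.X, s.Y); // 123, 456
    }
}
```

At no point during construction is s itself mutated; only the temporary is, which is why the callback in the first example never observes a half-constructed s.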

In the real world, if you run the first version of the code above, you see that s does not mutate until the constructor is done. You get (0, 0) three times and then (1, 2) three times. Similarly, in the second version s is still observed to be (1, 2) in the catch block; only the temporary was mutated when the exception happened.

Now, what about Wesner's point? Yes, in fact if it is a stack-allocated local variable (and not a field in a closure) that is declared at the same level of "try" nesting as the constructor call then we do not go through this rigamarole of making a new temporary, initializing the temporary, and copying it to the local. In that specific (and common) case we can optimize away the creation of the temporary and the copy because it is impossible for a C# program to observe the difference! But conceptually you should think of the creation as a creation-then-copy rather than a creation-in-place; that it sometimes can be in-place is an implementation detail that you should not rely upon.

----------------------------

(*) Again, I am referring to single-threaded scenarios here. If the variable s can be observed on different threads then it can be observed to be in an inconsistent state because copying any struct larger than an int is not guaranteed to be a threadsafe atomic operation.

Comments

  • Anonymous
    October 11, 2010
    But what are you supposed to do if your desired invariant is one that "0, 0" cannot meet? You are supposed to either not use a value type or abandon that invariant, or put more stuff in the struct that enables you to deal with the situation. For example, suppose you have a value type that represents a handle. You might put some logic in the non-default constructor that verifies with the operating system that the arguments passed in to the constructor allow the internal state to be set to a valid handle. Code which receives a copy of the struct from an untrusted caller cannot assume that a non-default constructor has been called; a non-trusted caller is always allowed to create an instance of a struct without calling any methods on the struct that ensure that its internal state is valid. Therefore when writing code that uses a struct you are required to ensure that the code properly handles the case where the struct is in its default "all zero" state. It is perfectly acceptable to have a flag in the struct which indicates whether the non-default constructor was called and the invariants were checked. For example, the way that Nullable<T> handles this is that it has a flag that indicates whether the nullable value was initialized with a valid value or not. The default state of the flag is "false". If you say "new Nullable<int>()" you get the nullable int with the "has value" flag set to false, rather than the value you get with "new Nullable<int>(0)", which sets the flag to true. - Eric
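The Nullable<T>-style flag described above can be sketched like this (Handle is a hypothetical type, not from the post, and the argument check is a stand-in for a real OS verification):

```csharp
using System;

struct Handle
{
    private readonly int value;
    private readonly bool isValid;

    public Handle(int value)
    {
        // Stand-in for verifying the handle with the operating system.
        if (value <= 0)
            throw new ArgumentException("not a valid handle");
        this.value = value;
        this.isValid = true;
    }

    // default(Handle) leaves isValid false, analogous to
    // new Nullable<int>() having HasValue == false.
    public bool IsValid { get { return isValid; } }
}
```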
  • Anonymous
    October 11, 2010
    Somewhere in the back of my mind, I had this inkling that new struct could be used (inside an unsafe block) to create a pointer to a struct instance (like malloc), like the myth discussed above. I wonder what's the C# equivalent of C++'s new, that would return a pointer to newly allocated "heap" memory for a struct instance. (Should my descent into insanity ever prompt me to want to do such a thing.) Sure, that's easy. To get a pointer to a heap allocated instance of struct S, simply create a one-element array "new S[] { new S(whatever) }", and then use the "fixed" statement to obtain a pointer to the first element of the array. And there you go; you've got a pointer to a pinned, heap-allocated struct. - Eric
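That recipe looks like this in code (Point is an illustrative struct, and the program must be compiled with /unsafe):

```csharp
using System;

struct Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

static class FixedDemo
{
    unsafe static void Main()
    {
        // The one-element array puts the Point storage on the heap...
        Point[] box = new Point[] { new Point(1, 2) };
        // ...and 'fixed' pins the array and yields a pointer to that storage.
        fixed (Point* p = &box[0])
        {
            Console.WriteLine("{0}, {1}", p->X, p->Y); // 1, 2
        }
    }
}
```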

  • Anonymous
    October 11, 2010
    Is there a reason why you cannot do this instead?

    * Determine the location referred to by s.
    * Set the memory behind s to all zeros (i.e., clear all fields)
    * Run the constructor, passing a reference to s for "this". - Igor

    How does that technique solve the problem? You still end up with situations in which violations of a given invariant might be observable from outside the constructor. And don't forget about exceptions. Suppose a constructor throws an exception halfway through construction, and you catch the exception. The memory could be half initialized and half still zero if we did it your way. - Eric

  • Anonymous
    October 11, 2010
    Nevermind, ignore my comment above. I am still surprised that the C# compiler does this, though. I would have thought that calling into unknown code from within a constructor is the anti-pattern that breaks the example code. It is a bad pattern and you should probably not do it. The code shown is deliberately contrived so as to more clearly demonstrate the issue. The rules of the language are designed so that you don't run into this sort of problem in realistic, subtle, non-contrived situations, not so that you can do obviously dangerous stuff like invoke arbitrary functions from inside ctors. - Eric

  • Anonymous
    October 11, 2010
    Eric, without your callback, the JIT could have optimized this assignment out. Perhaps, giving the benefit of the doubt, that's what people meant, even though as a general statement about correctness it is incorrect.

  • Anonymous
    October 11, 2010
    @Dmitiry My view is that people rarely quote things like this as a simplification, but instead as a misunderstanding.

  • Anonymous
    October 11, 2010
    So, would you mind explaining what (in your mind) structs ARE for? I think the reason why these myths persist is because people want rules of thumb for when structs should be used instead of classes, and one possible rule of thumb is, "When you want to avoid dynamic allocation of memory." But if that's not valid, then when IS a good time to use a struct instead of a class? "Dynamic allocation of memory" is a strange thing to want to avoid. Almost everything not done at compile time allocates memory! You add x + y, and the result has to go somewhere; if it cannot be kept in registers then memory has to be allocated for the result. (And of course registers are logically simply a limited pool of preallocated memory from which you can reserve a small amount of space temporarily.) Sometimes the memory is allocated on the stack, sometimes it is allocated on the heap, but believe me, it is allocated. Whether the result is a value type or a reference type is irrelevant; values require memory, references require memory, and the thing being referenced requires memory. Values take up memory, and computation produces values. The trick is to work out what exactly it is that you're trying to avoid when doing a performance optimization. Why would you want to avoid dynamic allocation? Allocation is very cheap! You know what's not cheap? Garbage collection of a long-lived working set with many small allocations forming a complex reference topology. Don't put the cart before the horse; if what you want to avoid is expensive collections then don't say that you're using structs to avoid dynamic allocation, because you're not. There are lots of strategies for making garbage collections cheaper; some, but not all of them, involve aggressively using structs. To address your actual question: you use value types when you need to build a model of something that is logically an immutable value, and, as an implementation detail, that consumes a small fixed amount of storage. 
    An integer between one and a hundred is a value. True and false are values. Physical quantities, like the approximate electric field vector at a point in space, are values. The price of a lump of cheese is a value. A date is a value. An "employee" type is not a good candidate for a value type; the storage for an employee is likely to be large, mutable, and the semantics are usually referential; two different variables can refer to the same employee. - Eric

  • Anonymous
    October 11, 2010
    M.E., Use Value Types (structs) when you want Value Type semantics i.e. 'Copy Value' during assignments. I have never felt the need to avoid 'dynamic allocation of memory' and reached 'structs' as the tool to address the requirement.

  • Anonymous
    October 11, 2010
    The comment has been removed

  • Anonymous
    October 11, 2010
    The comment has been removed

  • Anonymous
    October 11, 2010
    I am surprised that more people haven't mentioned that a key difference between reference types and value types is that value types can be blittable. Barring strings/StringBuilders (which are designed quite carefully to allow it if desired), this is only possible with structs. Another facet of them, with similar technical reasons to the blittability, is that they have no size overhead beyond their payload (and, optionally, packing), being multiples of 8 bits.
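For instance, a struct of primitive fields with sequential layout is blittable: its managed and native representations are identical, so Marshal.SizeOf reports exactly the payload size (Vector3 is an illustrative type):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Vector3
{
    public float X, Y, Z; // 3 x 4 bytes, no object header
}

static class BlitDemo
{
    static void Main()
    {
        // Exactly the payload: 12 bytes.
        Console.WriteLine(Marshal.SizeOf(typeof(Vector3)));
    }
}
```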

  • Anonymous
    October 11, 2010
    Hi, Eric. You wrote: "you use value types when you need to build a model of something that is logically an immutable value, and, as an implementation detail, that consumes a small fixed amount of storage". But, for example, the types System.Drawing.Rectangle and System.Windows.Rect are both structures, yet at the same time they have methods which mutate their content (for example the Inflate method). Do you know why these two types from WinForms and WPF were made mutable structures? And can you point out some cases where it is actually OK to make mutable value types?

  • Anonymous
    October 11, 2010
    While I do agree about creating a temp and copying it to its final destination is the way to go, I do not agree with the demonstration, and especially not with the example. Clearly, your example code is severely flawed. If your invariant is that x>y, then you cannot call the callback between the assignment of this.x and this.y. If you do so, you are calling an external method while having some object whose invariant is not maintained. A truly invariant-checking compiler would insert a check of the invariant both before and after each call of the callback, and your program would throw an exception. What your code does is provide a pre-condition on the arguments of the constructor, not a real invariant.

  • Anonymous
    October 11, 2010
    The comment has been removed

  • Anonymous
    October 11, 2010
    "as an implementation detail, that consumes a small fixed amount of storage" I find this the most pressing argument for using structs, since to my mind this 'detail' makes structs play nice with interop and (graphics) hardware. I realize this is probably a niche case, but when choosing between classes and struct I find I first consider if I need a fixed memory layout. My typical 2nd criterium is whether using structs would make life easier on the GC, which can be an important issue on the compact framework, but your treatise has me seriously doubting what I think I know about deterministic finalization.

  • Anonymous
    October 11, 2010
    M.E. wrote: "If I could write "public sealed readonly class CostOfALumpOfCheese" and then declare a non-nullable variable "CostOfALumpOfCheese! cheeseCost" (where '!' is the opposite of '?')" Yes, that is it. A programmer should not have to deal with such implementation details as value or reference types. It should be up to the compiler / JITer to optimize sealed readonly "classes" in non-nullable variables which are relatively small in size and optimize them as it sees fit by making value types out of them. But a programmer should not need the distinction. We much more need a distinction between mutable and immutable (as you specified with readonly) types than we need a distinction between value types and reference types. Stephan Leclercq wrote: "Clearly, your example code is severely flawed. If your invariant is that x>y, then you cannot call the callback between the assignment of this.x and this.y." Why? The called function has no way of getting at the intermediate values (I think). Just as Eric wrote, you get 0,0 three times (since the values will be copied after the constructor is done and no external code can access the intermediate copy). 0,0 satisfies the invariant (remember, x>y results in an exception, meaning the invariant is that x is less than or equal to y, which 0,0 satisfies). If you use structs, you have to accept the fact that someone can make a default struct which should satisfy your invariants, or you made a wrong design choice.

  • Anonymous
    October 11, 2010
    You don't need to construct a new instance of S over the top of an old one to get that invariant to fail for one of those callbacks. Just pass (-2, -1) and the invariant will fail for the second callback. One good reason to carefully manage dynamic memory allocation is when you're using the Compact Framework. For example take a look at: blogs.msdn.com/.../713396.aspx

  • Anonymous
    October 11, 2010
    Surely the x and y referred to in the conditional test are the parameters to the constructor and NOT the private member fields of the structure?  One would have to use 'this.x' and 'this.y' to refer to the member fields.  Thus, I don't see a case here where x is > y and any exception should be thrown.  What am I missing?

  • Anonymous
    October 11, 2010
    [pedants' corner...] "because copying any struct larger than an int is not guaranteed to be a threadsafe atomic operation" For the CLI, an IntPtr is atomic, not an int. For C#, an int (and other 32-bit values) is guaranteed atomic. So for a 16-bit CLR, 32-bit values are atomic, whereas for a 64-bit CLR, any 64-bit value is atomic. ...according to the specs, at any rate...

  • Anonymous
    October 11, 2010
    marc: Hopefully you will read through the posts again and see that value types have significant semantic differences from reference types, such that you can't turn a class into a struct without breaking things. The whole point of a value type is that it doesn't have any memory overhead (so an array of a million ints takes 4MB instead of 12MB), meaning that it doesn't include storage for the monitor (to enable the "lock" statement) or type information (to enable things like co-/contravariance). What the runtime could do is optimize reference types to allocate them on the stack instead of the heap when it knows that there's no danger that the reference will escape the current method. However, heap allocation is no more expensive than stack allocation in the CLR (as opposed to C++ where allocating on the heap can be expensive), so the optimization only reduces the load on the garbage collector. Presumably since this is a non-trivial optimization to detect (you'd have to prove that no method of the object stores a this-reference anywhere) and may not make things much faster, it's not done at the moment.

  • Anonymous
    October 12, 2010
    public static void RunSnippet()
    {
        ValueTypeObject x = new ValueTypeObject(1, 2);
    }

    .method public hidebysig static void RunSnippet() cil managed
    {
        .maxstack 3
        .locals init (
            [0] valuetype MyClass/ValueTypeObject x)
        L_0000: nop
        L_0001: ldloca.s x
        L_0003: ldc.i4.1
        L_0004: ldc.i4.2
        L_0005: call instance void MyClass/ValueTypeObject::.ctor(int32, int32)
        L_000a: nop
        L_000b: ret
    }

    C++ allows the compilers to construct directly on the storage of the local variable being initialized. In addition, I see no evidence in the example output from Reflector of any additional temporary variable being created to store the initial constructed value.

  • Anonymous
    October 12, 2010
    The temporary is stored on the stack, but this becomes a CLR issue, not a C# language issue, as to whether to enable the optimization of directly initializing a previously unused variable. The example in the blog post is not ideal because the local variable is hoisted into a compiler-generated display class. You make an excellent point, Wesner, one which I should have called out in my original article. As an optimization we can often initialize "in place" and do so, but only when the consequences of that choice are unobservable. I'll update the text; thanks for the note! - Eric

  • Anonymous
    October 12, 2010
    Gabe: I know about the semantic differences, and the more I read about them, the less I think we should bother a programmer with them. So I am not proposing to change C# to be value type / reference type agnostic; it was mostly a comment on the language design as a whole, that such a difference should not be made at all. It is too late to do this in C#. The compiler / the CLR could detect if an instance requires a monitor and/or type information and provide the storage space if needed. This would basically mean performing the boxing only once if an instance needs to be a reference type but is simple enough to be a value type. I still believe that having the assignment and equality operators mean different things for value / reference types is a source of many (way many) bugs.

  • Anonymous
    October 13, 2010
    marc: I'm not sure what you're proposing. Are you suggesting a system like C++ where types are neither value nor reference, but the value/reference aspect is determined at the point of use? Surely you don't want that because it just shifts the problem from the time a type is created to every time it's used! Are you instead suggesting that all types should be what are currently considered to be reference types, and make the compiler and runtime responsible for optimizing them where possible to be merely values? If so, the optimization would be extremely rare. A publicly available array of ints, for example, would have to always be an array of references to ints because you never know if some code in another assembly might want to get a reference to one of those ints. Many OO systems don't have value types, and I'm not sure that many of them even attempt this optimization.

  • Anonymous
    October 13, 2010
    "Are you instead suggesting that all types should be what are currently considered to be reference types, and make the compiler and runtime responsible for optimizing them where possible to be merely values?" I know you weren't talking to me, but I believe I have an answer that makes sense and possibly (?) has some merit. I'm not sure if this is what marc was proposing or not. But it seems to me that the distinction that's valuable to the programmer is "immutable or not" rather than "value or not". An immutable sealed reference type like string might as well be a value type; a mutable value type is - well, in my opinion - confusing enough that, personally, I'd have no problem making them simply forbidden outside of unsafe code. So if there were a way to declare, say, "readonly sealed class X" and have the "readonly" modifier enforce that the class must have value semantics - that is, only have readonly fields and all fields must be of readonly types themselves (and perhaps no finalizer) - then for those specific types (and with some other caveats) it perhaps makes sense to elide the distinction between value and reference type and make it a purely runtime implementation detail. In practice, there are other complications with an approach like that; for example, the question of nullability (an immutable reference type can be null; an immutable value type cannot). If we grant that both are semantically "values", shouldn't the nullability question be separate from the storage mechanism? For that matter, why should default(string) be null rather than ""? My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default.
The CLR would also need a low-level immutable array type in order to support making "string" one of these language-level readonly types. All in all, I think it might be a very worthwhile thing to do if someone were redesigning C# from scratch, but I don't think it can be done in a way that's both sane and backward-compatible, because at minimum you'd have to turn every instance of "string" into "string?"...

  • Anonymous
    October 13, 2010
    "The whole point of a value type is that it doesn't have any memory overhead (so an array of a million ints takes 4MB instead of 12MB), meaning that it doesn't include storage for the monitor (to enable the "lock" statement) or type information (to enable things like co-/contravariance)." Now, how am I supposed to know that by declaring something as a value type, I say nothing at all about how that value is allocated in memory, but I DO say something about what extra information the system stores with the object? Eric claims that the first is none of my business, but if so, why is the second my business? "Are you suggesting a system like C++ where types are neither value nor reference, but the value/reference aspect is determined at the point of use?" This does seem like the right model to me. "My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default." This is an interesting suggestion, because one often does run into situations (as in the article) where you want a struct to follow some invariant, but the default values would violate that invariant. Having a readonly keyword for classes would be oh so nice...

  • Anonymous
    October 13, 2010
    >My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default. That's a performance nightmare; granted you might not care about that in some circumstances, but it's still a concern I would expect the CLR team to be worried about. Right now newing up an array with 1,000,000 elements is a fairly straightforward task: grab the necessary amount of memory and make sure it's zeroed (and newly allocated memory from the OS will be zeroed already. If not, writing zero to a large contiguous block of memory is super fast). If the struct has non-zero default values (2 for the first field, 12 for the second, 0x35dffe2 for the third) the runtime's new[] code has to loop through writing the default values into each location. This is particularly painful since it has to be done even if the various elements of the array are only going to be overwritten with some non-default values shortly afterwards! The same applies to fields in classes, you can't just zero out the memory when constructing an object (something that, as above, might already have been done for you in advance), the runtime has to go through and fill in a bunch of default values - which you'll probably go and overwrite in your constructor anyway.

  • Anonymous
    October 14, 2010
    Eek! That's a very good point. Ouch. I hate when reality messes with my perfectly good theories! ;-)

  • Anonymous
    October 17, 2010
    ficedula: Isn't there a command to blit a certain byte array repeatedly 1,000,000 (or any N) times?

  • Anonymous
    October 17, 2010
    configurator: You could optimise the fill, yes. It's still going to be more complex than filling with zeroes; rather than writing zero to every location, you have to fetch the default values based on the type being allocated; and rather than writing data in blocks of 16+ bytes at a time, you may end up having to write it in smaller chunks if your struct is an inconvenient size. That aside, since it is possible to "pre clear" your unused memory to zeroes, you lose out fairly significantly there. As I mentioned, memory straight from the OS allocator will be zero-filled already so you currently don't have to do anything before using it. Memory that's not come straight from the OS and was already allocated, you could arrange for that to be cleared to zero as part of garbage collection (when you're touching that memory already and it's fresh in the CPU cache, so the update is practically 'free'.) Compared to that, doing any unnecessary work filling in default values is a loss.

  • Anonymous
    October 18, 2010
    The comment has been removed

  • Anonymous
    October 18, 2010
    The comment has been removed

  • Anonymous
    October 18, 2010
    The comment has been removed

  • Anonymous
    October 19, 2010
    configurator: The idea is that when you garbage collect, the objects that get collected will be touched as part of the collection process. (At least, based on my understanding of the way the .NET GC works). After marking all the objects that are 'live' the GC then compacts the live objects - as part of this process it'll be walking over all the objects [in the generation being collected] which will generally bring that memory into the CPU cache. Zeroing it at that point is potentially cheap since you're touching it anyway. It's not that the GC only runs when the memory is already in the cache - but that the process of GC will bring it into the cache anyway [so take advantage of that to clear it out]. (I have no idea whether the .NET GC actually does preclear the memory at this point; but it seems like a logical option that they might have investigated. The costs might have turned out not to be worthwhile under real world conditions.) Writing data in blocks of 16-bytes: right, but your pattern is now 80 bytes long. Older x86 machines don't have 80 bytes of registers available for general copying of data! Newer x86 and x86-64 machines do in the form of SSE registers, but then you ideally need to be writing data to 16-byte aligned destinations, and you're taking up more registers. Writing zeros just means clearing one SSE register and then blasting those 16 bytes out again and again. (If I had to write a runtime for a language that supported default values for value types I'd certainly look at doing all these sort of things. I'd probably prefer to be writing a runtime for a system that just allowed me to zero everything out though...!)

  • Anonymous
    October 19, 2010
    @ficedula: I haven't touched assembly in a while, but I remember there being a call that did something like memset for large areas of memory, with any given piece of data. I could be wrong though.

  • Anonymous
    October 19, 2010
    @configurator: There's REP STOSD - but that only sets 4-bytes at a time, so you can set a 4-byte pattern, but no larger. x86-64 has REP STOSQ which writes 8-bytes at a time, but again, you can only set an 8-byte pattern. Great for setting a region of memory to zero (or all 0xffffffff, or 0x80808080, or whatever), but no use for setting any larger pattern. In order to set larger patterns, you need to hold the pattern in registers and write your own loop to copy each register into memory in turn. Your pattern still has to fit into available registers. (You can also use REP MOVSD to copy a region of memory from one place to another, but (a) that's slower, because it's memory-to-memory rather than register-to-memory, and (b) To copy an 80-byte pattern 1000 times over into an 80000 byte region, you'd need to set up and call REP MOVSD 1000 times ... and have your 80-byte pattern set up in memory first as the copy source.) (On modern PCs, it turns out that while the x86 string instructions (REP MOVS/STOS) are very compact - one instruction to blast as much data as you want from place to place - they're actually not as fast as using the SSE registers which can move 16-bytes at a time and can also be given cache hints.)

  • Anonymous
    October 19, 2010
    configurator & ficedula: Going on about how to initialize arrays with constant bit patterns is pointless, because it's not something you would likely want. If you didn't have to have the all-0 default constructor for value types you would want to have your own constructor called for each instance (like in C++), not have some arbitrary bit pattern copied to each element.

  • Anonymous
    October 19, 2010
    The comment has been removed

  • Anonymous
    October 19, 2010
    @Gabe: The use-case would be that you could set defaults for all the fields which maintained sensible invariants that weren't necessarily based on all zero values ... even if for the struct to be used 'for real' you then initialised the fields in a constructor to more 'useful' values. I'd agree that this isn't beneficial enough to justify all the effort needed to implement the feature though. @Just a guy: You could well be right; I'm just speculating on a possible optimisation. I'd expect the CLR team to have thought about it and decided to implement or not based on how it effects real-world workloads ... possibly it's just not worth doing, period.

  • Anonymous
    November 04, 2010
    I just wonder if this is way overboard...I mean at a top level isn't that enough for typical development?