Why are unused using directives not a warning?

As I’ve discussed before, we try to reserve warnings for only those situations where we can say with almost certainty that the code is broken, misleading or useless. One reason for trying to ensure that warnings are not “false positives” is that we don’t ever want to encourage someone to take working, correct code and break it so as to remove the warning. Another is that since many people compile with “warnings are errors” turned on, we do not want to introduce a whole lot of unnecessary build breakages should the warning turn out to be unwarranted.

An unused using directive is almost certainly not “broken”, and it’s probably not misleading. It certainly is useless though. Why not produce a warning for this, so that you can remove the useless directive from your program?

Suppose we make an unused using directive into a warning. Suppose you do have “warnings are errors” turned on. And suppose you create a new console project in Visual Studio. We generate this code for you:

using System;
using System.Text;
using System.Linq;

class Program
{
    static void Main(string[] arguments)
    {
    }
}

… which doesn’t compile. Which is better, generating code that doesn't compile, or omitting the using directives and making you type "using System;" the first time you try to use Console.WriteLine?

Either way is a really bad user experience. So, no such feature.

Comments

  • Anonymous
    January 24, 2010
    Although somewhat unrelated, this reminds me of a VS feature I wish existed. It would be nice if VS had an option to generate a using statement when tab-completing a namespace via Intellisense. At least I don't think this functionality exists...

  • Anonymous
    January 25, 2010
    I'd like a way to automatically remove all unused usings throughout a project - but it may already exist and I just haven't found it yet.

    The thing I'd definitely like using-wise, that VS definitely doesn't already do, is find potential "using"s that would provide an extension method that you are trying to use - the same way it provides potential "using"s that would match a class name if you type one in. This applies both to extension methods I've written myself, and to the extension methods in System.Linq that provide the query pattern for IEnumerable and company if you try to start typing a from iEnumerable select ... statement.

    I realize that doing that would be a fair amount of work for the intellisense engine - it's a lot easier to do some work on every unmatched classname than to do it on every unmatched method, I guess. But extension methods are rare - you could build a name->method mapping up front of every extension method defined in every assembly that the compiler is aware of, same as the way it must currently build a dictionary of name->class for every assembly to enable the 'add using' feature that already works. The dictionary of extension methods would be much smaller than the one that already exists for classes...

  • Anonymous
    January 25, 2010
    Yes, other IDEs handle this kind of thing fairly elegantly. I'm thinking specifically of editing Java in Eclipse, where the code completion knows about types in packages you haven't yet imported and adds them for you when copy and pasting, that sort of thing. I think some people write programs that NEVER call Console.WriteLine!

  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 25, 2010
    And it would be nice if it were easier to configure which warnings were errors in VS too...

  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 25, 2010
    So the problem here is really that “warnings are errors” doesn't make sense if the warning is about "useless" code.  Could that feature be updated to understand which warnings are just about "useless" code and not treat those warnings as errors?

  • Anonymous
    January 25, 2010
    I've never worried about any of this since I started using ReSharper. It has:

  • Generate a using statement when tab-completing a namespace via Intellisense

  • Find potential "using"s that would provide an extension method that you are trying to use

  • Knows about types in [assemblies] you haven't yet imported and adds them for you when copy and pasting (OK, not quite seamless: requires some Alt+Enter keypresses)

  • Anonymous
    January 25, 2010
    Perhaps another type of compiler output is needed, something like Recommendations?

  • Anonymous
    January 25, 2010
    @stuart & Pavel:  There's also the VS 2008 Power Commands that adds Organize&Remove using among other useful extensions: http://code.msdn.microsoft.com/PowerCommands

  • Anonymous
    January 25, 2010
    I think this is another example of "good design is the art of making good compromises", and this is the right compromise to make. For me, the difference between unused using directives and other types of compiler warnings is that (AFAIK) unused using directives have no impact on the generated IL, whereas other warnings serve to tell you that the compiler may ignore some of your code because it isn't reachable, unused, etc...

    @Eric, were alias directives such as "using IntFunc = Func<int>;" also included for this? I could see this being treated differently than the "you didn't actually use types from the System namespace" case. For me, the aliasing case sits somewhere between this case and the unused variable case.

  • Anonymous
    January 25, 2010
    It sounds like the problem in your example is that the sample code includes system.linq and system.text for some reason, even though I'm not using either. If I have set 'warnings are errors', then I WANT warnings to hurt.

  • Anonymous
    January 25, 2010
    One example where you would need them, that tools seem unable to comprehend, is when using conditional compilation:

    #if ALLOW_FILEIO
    using System.IO;
    #else
    using MyApp.FakeIO;
    #endif

  • Anonymous
    January 25, 2010
    @Danyel is spot on in my view. <rant> Why do the Visual Studio templates feel the need to include so many using directives? In my experience it's extremely easy to add a using directive automatically - I can't remember the last time I wrote one manually... probably back in VS2003. This is just one example of Visual Studio being annoying, in my opinion. Other examples:

  • Defaulting to creating "Form1.cs" - who actually wants a type called Form1? Ask me for a better name, or use a better default.
  • Defaulting to copying a file as "OriginalName - Copy.cs" - again, I'm bound to want to give it a different name, so ask me at the point of copying... and then rename the class within, if it matched the original name.
  • Defaulting to creating variables such as "textBox1" via the designer
  • Defaulting to creating event handlers such as "button1_Click"
  • Defaulting to adding references to various assemblies I rarely use (System.Data and System.Xml). I suppose this is somewhat justified by earlier versions of Visual Studio taking an age to bring up the "Add reference" dialog, which is much improved in VS2010.

    All of these make it really easy to create an app which is ugly (in terms of code) at the expense of making it easy to create an app with meaningful names. Why not guide developers towards using meaningful names to start with, instead of it being a manual extra step? </rant>

    I can see that the decision not to produce warnings for useless code makes some sense in the light of Visual Studio encouraging you to have useless code... and with so much code already written, it's probably too late to fix it now. It does make me sad though... oh well, at least Resharper lets me remove unused using directives fairly easily. Sorry to be such a curmudgeon on this score. I promise to revert to my normal happy self soon.
  • Anonymous
    January 25, 2010
    I agree with Todd Wilder above.   It would be nice to have another type of compiler message that pointed out if a "using" or anything else were not actually necessary.   Maybe we can call the type of message "random musings of the compiler..."!

  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 25, 2010
    @Jon: >  Defaulting to creating event handlers such as "button1_Click" I understand the frustration with all your other points, but what's the problem in this case? Are you unhappy about "button1" (but then you've already listed the naming of new controls as a separate point)? Or is it the "ControlName_EventName" convention - but if the latter, what's wrong with it?

  • Anonymous
    January 25, 2010
    An issue to be aware of with treating unused usings as a warning/error is that sometimes the using isn't needed in the generated code but it is needed to avoid other warnings.  The classic example is doc comments.  If you reference something in a <see> tag then you'll get a compiler warning if the item can't be found.  The using statements in the code determine whether that item is found or not.  The code itself will compile without the using statement.  It is tedious and wasteful to have to use full type names in comments so that really isn't an option.
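    The doc-comment situation described above can be illustrated with a minimal sketch (the StringBuilder reference is an arbitrary example; the warning only appears when XML documentation output is enabled):

    ```csharp
    using System.Text; // used only by the doc comment below, not by any executable code

    static class Docs
    {
        /// <summary>
        /// Removing the using directive above makes the <see cref="StringBuilder"/>
        /// reference unresolvable, producing warning CS1574 when compiling with /doc,
        /// even though the method body itself compiles fine without it.
        /// </summary>
        public static void Append() { }
    }
    ```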

  • Anonymous
    January 25, 2010
    @Pavel: It's the "ControlName_EventName" convention, which is made worse by violating normal naming conventions (as the control name is usually in camel case). My problem with that convention is it's non-descriptive - it describes the caller, rather than what it will do. If I have a save button and I click it, I want to hook that up to a SaveDocument method, not a button1_Click method - then:

  1. When I'm browsing the member list I can tell exactly what that method's going to do
  2. When I'm looking through the event subscriptions I know what will happen when I click on the save button
  3. If I want to hook the same method up to multiple events, I can do so with no extra work and within the same convention, because I'm describing what the action should do, rather than what's causing it.

    Where else do we name methods based on the circumstances under which they're called, rather than what they're meant to do?
  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 25, 2010
    @Tim: There's already #pragma warning disable/restore for this purpose.
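    A minimal sketch of how that mechanism scopes a suppression (CS0168, "variable declared but never used", chosen as an arbitrary example):

    ```csharp
    class Program
    {
        static void Main()
        {
    #pragma warning disable 0168 // suppress "variable declared but never used"
            int unused;
    #pragma warning restore 0168
        }
    }
    ```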

  • Anonymous
    January 25, 2010
    Maybe you could introduce another level of compiler messages called "polite reminders" for cases like that.

  • Anonymous
    January 25, 2010
    @Jon: I think that it's still perfectly valid, because, well, you really shouldn't have "SaveDocument" method in your form. You'd probably have one in your model, and the event handler in the form would be just glue to call that method in the model. However, the event handler naming in that case is correct - it's precisely just "code that gets called when Save button is clicked". I mean, a method called "SaveDocument" with a signature of "(object sender, EventArgs e)"? Am I alone in thinking that it would be a rather weird sight?

    > Where else do we name methods based on the circumstances under which they're called, rather than what they're meant to do?

    Nowhere, but event handlers are special in that the caller does not know anything about the method he's calling in their case. He's just told to call "that method" (via a delegate), and that's it.

  • Anonymous
    January 25, 2010
    > I mean, a method called "SaveDocument" with a signature of "(object sender, EventArgs e)"? Am I alone in thinking that it would be a rather weird sight? Indeed, and that's why it should be called "OnSaveDocument" to fit in with naming conventions typically associated with event handling. The "OnSaveDocument" method would then call the "SaveDocument" method of the model.

  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 25, 2010
    > Indeed, and that's why it should be called "OnSaveDocument" to fit in with naming conventions typically associated with event handling. "OnSaveDocument" (as noted by Mark above) is reserved for protected virtual methods raising the event, and shouldn't be used by handlers (so long as we're sticking to the standard coding style, for better or worse). Also, even "OnSaveDocument", ignoring the convention, would sound to me like a handler for event "SaveDocument" on the model (because such an event shouldn't be seen in the view/form).

  • Anonymous
    January 25, 2010
    > I've worked with plenty of people who can't remember whether "On" is the canonical prefix for methods that raise events or for handlers It's both, actually (which doesn't exactly help the confusion, but still at least there's a reason for "On" there) - in derived class, you're supposed to override "On..." if you want to handle the given event on your own object. The fact that "On..." in some base class up the chain also raises the actual event for outside clients is orthogonal to that. Now, personally, I never really understood the need for this two-stage pattern in the first place. It's not like derived classes can't attach event handlers to parent's events... I suspect it was put in place for the sake of optimization (as overriding a vtable slot is effectively "free", while delegates have to be allocated, and dispatch is a little bit slower, too), but I doubt it actually buys us much if anything today. The overhead from a typical L2O query alone is probably an order of magnitude worse.
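    For readers following along, a minimal sketch of the two-stage "On..." pattern being debated (all names are hypothetical):

    ```csharp
    using System;

    class Document
    {
        public event EventHandler DocumentSaved;

        // Stage 1: derived classes override this to run code before/after
        // the event, or to suppress it entirely...
        protected virtual void OnDocumentSaved(EventArgs e)
        {
            // Stage 2: ...and this raises the event for outside subscribers.
            EventHandler handler = DocumentSaved;
            if (handler != null)
                handler(this, e);
        }

        public void Save()
        {
            // ... do the actual saving work, then notify listeners ...
            OnDocumentSaved(EventArgs.Empty);
        }
    }
    ```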

  • Anonymous
    January 25, 2010
    The comment has been removed

  • Anonymous
    January 26, 2010
    If I have "XML Documentation File" turned on and "Warnings as Errors" turned on as well, the generated code won't compile. Sometimes people with "Warnings as Errors" turned on are willing to do a little work to get template code compiling.

  • Anonymous
    January 26, 2010
    "Now, personally, I never really understood the need for this two-stage pattern in the first place. It's not like derived classes can't attach event handlers to parent's events..."

    It seems to me that a big reason for the "two-stage pattern" is so that derived classes can take control over the event-raising process, even inhibiting the raising of events altogether if need be, and at the very least injecting their own behavior at a specific point relative to the event raising, rather than being at the mercy of the order of subscription (being first is relatively easy, as long as the constructor is correctly written, but being last is a lot harder, and being both first and last is even harder still!)

    That said, I do find the "On…" nomenclature wrong. The word "On" implies that the method is called when the event happens, i.e. in response to the event. I agree that "Raise…" is a much better convention, and it's what I use in my own code.

    As far as Jon's other rants, it's a mixed bag for me. I agree that "Form1" is a silly default name; at the very least, use the name of the solution and project. Maybe even provide an optional field in the new-project wizard that, if filled in, allows me to override that default.

    Also, the default references are really annoying. I almost never need half of the ones automatically inserted for me into a project. I almost wonder if the main reason so many are included is that for so long, it's been so painful to add a new reference, because of how long it takes the Add Reference… dialog to come up the first time. In any case, a much more user-friendly way of managing references, and not cluttering up my project settings with extraneous ones, would be very welcome.

    But I also agree with Aaron that requiring me to provide names for EVERY new Designer-added element would really slow me down. Yes, I do eventually go back and rename things as needed, but I generally do that only for those elements that I actually need to access. There are lots of structural things that go into a form that don't need any real name, or even a class field for that matter. (In fact, what would really improve my workflow would be if some clever UI could be designed where a class field is not generated for real until I actually try to use it :) )

    As for the naming of event handlers, I'm ambivalent. On the one hand, I recognize Jon's frustration with both the capitalization convention used, as well as the apparent uselessness of the name. I share his general preference for descriptive, self-documenting code. On the other hand, it's unusual when an event handler – at least, one for a GUI-related event – I write is actually where the main work is done. Quite often, I've got some helper method that the event handler actually calls off to. This is especially true for any non-trivial event handling. So, the event handler itself describes what event it's designed to respond to, while the helper method's name is descriptive of the actual action being taken in response to the event.

    The one place where the default event handler name becomes really inappropriate is when a single event handler is designed to deal with events from multiple controls. For example, check-state changes for a radio group. In that case, the event handler name needs generalization even if it's going to be describing merely the handling of the event. But these situations come up infrequently enough, and are so much harder to generalize a solution for, that I'm not bothered that the IDE isn't helping me out with those. (Though, frankly… for that one specific example, it sure would have been nice if the Forms API had provided for something that represents the entire radio button group, rather than forcing me to always wrap the group in my own abstraction. I mean, after all, the Forms code itself does need to do the abstraction anyway to properly handle the checking/unchecking of the radio buttons that share a group; why not expose that to the client code?)

    And finally, I cannot submit this comment without including my own pet peeve regarding compiler errors and warnings. That is, false-positive uninitialized variable errors. Consider:

    {
        int num;
        bool fValid;
        try
        {
            num = Convert.ToInt32("50");
            fValid = true;
        }
        catch (InvalidCastException)
        {
            fValid = false;
        }
        if (fValid)
        {
            Console.WriteLine("Value of number is " + num); // CS0165 here
        }
    }

    The compiler doesn't like that I haven't initialized "num" in the case where "fValid" is false. Now, I readily grant that the difficulty of the analysis I want the compiler to do as compared to the difficulty of the analysis the compiler actually does is much greater. It's not that I don't understand the challenge. But at the same time, as a programmer who can look at the code and know that I never use "num" before it's initialized, I get frustrated.

    This wouldn't be so bad, except that the fix is to initialize the variable to some random default at some point (either in the declaration or in the other code path). And as I alluded to in the previous topic regarding initialization of value types, default initialization can actually be just as bad a bug as not initializing at all, because default initialization can result in silent errors, where the code doesn't fail immediately, but goes on to produce results that are wrong in subtle, hard-to-detect ways.

    I'd sure love to see the tools get better about this stuff, so that I never have to initialize something I don't really use, and so that I'm always warned when I use something that's not initialized explicitly (i.e. not just because some non-spec default initialization happened to occur). The bugs related to getting initialization wrong in these ways that are currently not detected can be hard to track down, due to their subtlety.

  • Anonymous
    January 26, 2010
    …and lest people misunderstand, the code example in my previous comment is not a real-world example.  It's a dramatically simplified version of a real-world example, for the sole purpose of illustrating a point.  Obviously, if you're just trying to convert a string to an int where a failure to convert might happen, there are much better ways to do that.

  • Anonymous
    January 26, 2010
    The comment has been removed

  • Anonymous
    January 26, 2010
    So... what's the reasoning that this type of thing can't go in the 'Messages' window? That window almost never gets used, and I can't seem to find any way to actually make use of it. I'd love to leave notes/non-error warnings there - or, even better, the TODO list or the throws-NotImplementedException list - instead of having to manage a separate tab for those cases.

  • Anonymous
    January 26, 2010
    @Pavel: > Now, personally, I never really understood the need for this two-stage pattern in the first place. It's not like derived classes can't attach event handlers to parent's events... I suspect it was put in place for the sake of optimization.

    I do not view this as an optimization pattern - I view it as the template method pattern. Having a method that is always called before and/or after event handlers are raised can be very useful. Event handlers are not promised to be raised in any particular order, so there's no way to make yourself the first or last subscriber. Besides, I always find it awkward to see a class that subscribes to events that it (or its base class) itself raises.

  • Anonymous
    January 26, 2010
    > I do not view this as an optimization pattern - I view it as the template method pattern. Having a method that is always called before and/or after event handlers are raised can be very useful. Event handlers are not promised to be raised in any particular order, so there's no way to make yourself the first or last subscriber.

    Actually, yes, they are raised in a particular order. Well, it's up to a specific class to declare its own contract (as, ultimately, it can always define add/remove as well as raise itself), but the default implementation of events always raises handlers in the order in which they were added. This is trivially observable when using ref arguments, or when passing in mutable objects of reference types. In fact, the framework itself uses that pattern heavily in the form of the e.Cancel property of various EventArgs subclasses - that one requires a definite ordering of event handlers to be of any use. So far, even with custom events, I haven't seen any implementation that didn't guarantee order. And if one doesn't guarantee it for clients, then why should it guarantee it for derived classes?

    Or, to put it another way - if you want "before" and "after" functionality, then why should it only be available to derived classes, and not to every client of your class? And you can trivially do the latter by providing Before.../After... events, as e.g. WPF does for many of its events.

    > Besides, I always find it awkward to see a class that subscribes to events that it (or its base class) itself raises.

    Why? I understand why it could feel awkward to subscribe to your own event (though then again, it can also be clear and concise in some cases - it all depends on the exact case), but what's wrong with subclasses subscribing to the events of the superclass? Events are just part of the API, just as members and properties are; it's no more wrong for a derived class to subscribe to an event on its base class than it is for a derived class to call a method or property on its base class. If anything, this leads to more uniform code, as any class accesses the event functionality in the same way, regardless of whether the event is the one inherited from the superclass, or provided by some random object. I can understand it as a matter of personal subjective preference, but that alone isn't a good rationalization for it.

  • Anonymous
    January 26, 2010
    > It seems to me that a big reason for the "two-stage pattern" is so that derived classes can take control over the event-raising process, even inhibiting the raising of events altogether if need be, and at the very least injecting their own behavior at a specific point relative to the event raising, rather than being at the mercy of the order of subscription (being first is relatively easy, as long as the constructor is correctly written, but being last is a lot harder, and being both first and last is even harder still!)

    It's an interesting point. I find it ironic, though, that the CLS spec actually defines three standard special methods for events - given event X, there's "add_X" and "remove_X", as you'd expect, but there's also "raise_X". The CLR itself also special-cases all three with keywords ".addon", ".removeon", and ".fire" - same as properties have ".get" and ".set" (there's also ".custom" for both events and properties). However, neither VB nor C# seems to actually be able to either generate a ".fire" event method, or use one. If they were, it would be more logical for a class that wants its children to be able to intercept raising of the event to declare said event virtual, and let them override ".fire".

    At the same time, I don't think that providing derived classes with the ability to intercept raising of all events (or even any events) is a reasonable default. We don't usually allow overriding most methods, so why allow mucking with all events? And, in practice, that's how the pattern is normally used - for all events, regardless of whether it is meaningful (and secure!) to allow subclasses a backdoor or not. IMO, the only reasonable default is to treat subclasses as "just another client", with no special privileges. C# does that with non-virtual-by-default already, which the "On..." pattern breaks.

  • Anonymous
    January 26, 2010
    > At the same time, I don't think that providing derived classes with the ability to intercept raising of all events (or even any events) is a reasonable default. We don't usually allow overriding most methods, why allow mucking with all events?

    "We" don't. The main place this is the default is in the Forms namespace, and in related classes (e.g. BackgroundWorker). And sure, you might argue that's poor design, but that's the design choice of those authoring that particular class library.

    There's nothing in C# that makes it the default, and in fact I generally do not follow the "protected virtual" pattern, specifically because in general, it makes sense to me that event implementation details are just as private as any other implementation detail I might include in a class. I'm not specifically trying to preclude inheritance; I just feel that even when inheriting a class, only some subset of the implementation details need to be modifiable by the derived classes, and those cases will be determined as the design of the class plays out.

    That said, I think it's safe to assume that at least some thought along those lines was done by those who came up with the Forms API, and the fact that they have followed this "overridable by default" design indicates that in that particular domain, they found that more often it was useful and important to grant that level of access to derived classes.

    Note, though, that this is not a universal design approach. In Java, non-private methods are overridable by default, and I've seen very few class implementations where this has been disabled explicitly by making methods "final". In general, implementation details meant to be restricted to the given class are simply made "private", and the remainder are (one hopes) implemented in a way that allows effective sub-classing with overrides. I think that in practice, much less thought is put into the question of designing for inheritance than should be. At least in C#, supporting polymorphic inheritance has to be done explicitly, increasing the chances that some real thought will have been put into how best to implement that and provide for features sub-classes will need.

    Based on that, I take as granted that the model seen in the Forms namespace does in fact have at least some good arguments in favor of it, as a divergence from what would be the nominal default. And in fact, I have found that for most events I override in a Control subclass, it is in fact useful to be able to follow the same technique for ensuring order of operations, and to have a model where the actions taken by the class itself are in fact distinct from client code outside the class that may have subscribed to the event. My code is actually much more uniform this way, than if I had to create and subscribe a delegate to base class events any time I needed to enhance base class functionality.

    (p.s. I am a bit embarrassed at how far we've gone from the original point of this blog post. But, obviously not so much that I'm resisting making further off-topic comments. :) )

  • Anonymous
    January 26, 2010
    @Pavel, to clarify, your assertion is correct that "yes, [events] are raised in a particular order"; however, what I think @pete.d was intending to convey is that subscribers have no guarantees on order, nor should they come to expect a certain order of events being fired (via delegates). The class could make a guarantee, but I've never seen one (ever) - it's just an implementation detail. As you say, sometimes it's better to self-subscribe, other times it's better to override.

  • Anonymous
    January 26, 2010
    The bigger problem for my company is missing using () statements around database connections and other IDisposable objects that really need them. We have found countless places in our code where developers forgot to add using() {} statements around SqlConnections and are effectively leaking them.
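    A minimal sketch of the pattern this comment is asking for (the connection string and query are placeholders for illustration):

    ```csharp
    using System.Data.SqlClient;

    class Repository
    {
        public static int CountOrders(string connectionString)
        {
            // The using statements guarantee Dispose() runs even if an
            // exception is thrown, returning the connection to the pool
            // instead of leaking it.
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }
    ```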

  • Anonymous
    January 26, 2010
    The comment has been removed

  • Anonymous
    January 26, 2010
    The OnFoo pattern has a couple of useful purposes that I can think of.

    Let's say you're implementing a base class like UIElement which could have a dozen events. Since a complicated UI could have thousands upon thousands of UIElement-derived objects, most of which won't have even one event wired up, let alone several of them, you could implement some sort of dictionary-based backing store for events. That way you can define as many events as you want for a UIElement and not have to worry about paying the price for every single one of the thousands of UIElements in an app. Of course derived controls like Button and TextBox are going to need to hook these events. Should the creation of every single TextBox invoke the creation of an event dictionary and several delegates to add to it? Clearly it's much more efficient to have controls like Button and TextBox override some functions to get their desired behaviors.

    The second purpose was one I ran across recently. There's a class called ObservableCollection that raises an event every time its collection changes, and I wanted a way to make a batch of changes at a time (like sorting) and only raise a single event at the end. If I could not have overridden the OnCollectionChanged function to not raise the event while making the batched changes, I would have had to reimplement the whole class myself.
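    A minimal sketch of that second technique (the class and method names here are hypothetical; only ObservableCollection and OnCollectionChanged come from the framework):

    ```csharp
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Collections.Specialized;

    class BulkObservableCollection<T> : ObservableCollection<T>
    {
        private bool suppressNotification;

        public void AddRange(IEnumerable<T> items)
        {
            suppressNotification = true;
            try
            {
                foreach (T item in items)
                    Add(item); // each Add calls OnCollectionChanged, which we swallow below
            }
            finally
            {
                suppressNotification = false;
            }
            // Raise a single Reset notification for the whole batch.
            OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
        }

        protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
        {
            if (!suppressNotification)
                base.OnCollectionChanged(e);
        }
    }
    ```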

  • Anonymous
    January 26, 2010
    I'd just like to publicly apologise to Eric for derailing the whole comment conversation here. I won't make it worse by replying to the various points raised since then. So, warnings and unused using directives, anyone?

  • Anonymous
    January 27, 2010
    I think that the compiler should only raise warnings for things that could reasonably be bugs in your program. For example, if you declare a variable and never use it, it could be an indication of a typo or forgetting to write some code that uses it. However, an extra using directive is hardly an indication that you were intending to use some namespace and just forgot. It simply is a way of expanding the global namespace should you need it. Warning for unused usings would be like warning for pragmas that have no effect or #defined symbols not used in an #if -- it may be useful information to know, but it's not an indication of a likely bug.
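    For contrast, a minimal sketch of the kind of warning the compiler does raise, because it plausibly indicates a bug:

    ```csharp
    class Program
    {
        static void Main()
        {
            // CS0219: the variable is assigned but its value is never used -
            // plausibly a typo or forgotten code, unlike an unused using
            // directive, which merely expands the set of available names.
            int total = 0;
            System.Console.WriteLine("done");
        }
    }
    ```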

  • Anonymous
    January 27, 2010
    Wouldn't it be nice if using directives could also be added just before methods and blocks? It is the fact that they get added before a class that often makes them forgotten and therefore fall into disuse. It would make it simpler to specify the rarely used directives closer to where they were actually used, and to have those that had a wider scope further away, i.e. above the class declaration. Just my 2c.

  • Anonymous
    January 27, 2010
    The comment has been removed

  • Anonymous
    January 27, 2010
    The comment has been removed

  • Anonymous
    January 27, 2010
    @TaylorMichaelL, Xml comments are parsed by the compiler. If a namespace is used in the xml comments, it is not unused, and should not generate the warning.

  • Anonymous
    January 27, 2010
    It may be worth pointing out that an unused using directive may mean a bug in your code. It may be that you're using an extension method from namespace A instead of the one you intended to use from namespace B. Admittedly this is partly due to one of my very few peeves about how C# has evolved (the way that extension methods are discovered) but it really could change behaviour.
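    A minimal sketch of the hazard described above (all names hypothetical): the set of using directives silently selects which extension method is called.

    ```csharp
    using System;
    using A; // change this to "using B;" and the behavior of Main changes

    namespace A
    {
        public static class Formatting
        {
            public static string Pretty(this int value) { return "A:" + value; }
        }
    }

    namespace B
    {
        public static class Formatting
        {
            public static string Pretty(this int value) { return "B:" + value; }
        }
    }

    class Program
    {
        static void Main()
        {
            // Resolves to A.Formatting.Pretty because of "using A;" above;
            // an "unused-looking" directive swap would silently change this.
            Console.WriteLine(7.Pretty()); // prints "A:7"
        }
    }
    ```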

  • Anonymous
    January 27, 2010
    The comment has been removed

  • Anonymous
    January 27, 2010
    The comment has been removed

  • Anonymous
    January 27, 2010
    @Tom: "Even Visual Studio 2008 Standard without SP1 has the option to right-click, Organize Usings, Remove Unused Usings.  How any self-respecting coder could miss this is beyond me." - which is why I specifically said "throughout a project" (although what I really meant was throughout a solution). With a large codebase the idea of opening up hundreds of .cs files and right clicking each one is ridiculous. It's nice to have it in the code editor but it'd be much nicer to have it globally.

  • Anonymous
    January 27, 2010
    @Tom, "Bear in mind that VS only creates usings for existing references.  If you removed unwanted references when you first create your project, you won't get all those extraneous directives in every code module you add." This is a complete fabrication. If I remove the extraneous System.Data and System.Xml references when I initially create a project, then not only will the extraneous using statements still be there when I add another class, but Visual Studio will actually add the references that I already removed! Which is why I edited the default templates to get rid of that nonsense.

  • Anonymous
    January 28, 2010
    It'd be annoying if I temporarily commented out some code and then started getting warnings about the now-unused Using statements.

  • Anonymous
    January 28, 2010
    @Andy: You'll get exactly the same thing if you remove all uses of a local variable, but not the declaration. In the case of using directives it's extremely easy to get rid of that warning though - or you could just ignore it, if the commenting out really is temporary.

  • Anonymous
    January 28, 2010
    Seems the most obvious (and easiest?) solution is to enable severity levels for code analysis rules (this would definitely be one) and make "Unused Using Statements" report as Information. The Error List already supports these levels, so why not use them? As an extension, this would allow filtering which rules are reported as errors and which remain warnings when "warnings as errors" is set. Just my two cents.

  • Anonymous
    June 07, 2010
    Unused usings are not always really unused. Parts of the code that are excluded from compilation by #if directives may need those "unused" usings when they actually are compiled. ReSharper can clean up unused usings (and other unused code) automatically throughout a solution for you, btw.