Trade-offs

It seems like a lot of the time we (as software developers) are forced to make trade-offs: choosing between two algorithms that solve the same problem but with different characteristics.  Sometimes it's an easy choice: option #1 is O(1) and option #2 is O(n^3).  Sometimes we're forced to compare apples and oranges: option #1 runs in O(1) time but uses O(n^2) memory, while option #2 runs in O(n) time and uses O(n) memory.
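To make that kind of time-versus-memory trade-off concrete, here is a toy C# sketch of my own (the names and data are invented, not taken from any real system): option #1 spends O(n) extra memory on an index so each lookup is O(1), while option #2 keeps no extra state and pays O(n) per lookup.

```csharp
using System;
using System.Collections.Generic;

static class MembershipDemo
{
    // Option #1: an O(n)-memory index is built once, then each lookup is O(1).
    static bool ContainsWithIndex(HashSet<int> index, int item)
    {
        return index.Contains(item);
    }

    // Option #2: no extra memory, but every lookup walks the array in O(n).
    static bool ContainsByScan(int[] values, int item)
    {
        foreach (int v in values)
        {
            if (v == item)
                return true;
        }
        return false;
    }

    static void Main()
    {
        int[] data = { 3, 1, 4, 1, 5, 9, 2, 6 };
        HashSet<int> index = new HashSet<int>(data);     // pay the memory cost up front

        Console.WriteLine(ContainsWithIndex(index, 5));  // True
        Console.WriteLine(ContainsByScan(data, 7));      // False
    }
}
```

Neither option is "right" in isolation; which one wins depends on how many lookups you expect and how much memory you can afford, which is exactly the apples-and-oranges comparison above.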

The thing I've been pondering lately is whether we should always be striving to make the best trade-offs, or whether we should be redefining the problems so that it's always an easy choice.

The current problem I'm struggling with in the compiler space is accuracy versus efficiency, specifically in the case of exceptional flow control.  Everybody would love a compiler that modelled the control flow of exceptions smartly enough to be able to do really advanced optimizations (dead-store elimination, common sub-expression elimination, constant propagation, SSA, etc.) across try/catch/finally blocks (and of course get it right).  However, properly modelling such flow often results in some very complex flow graphs that tend to kill the throughput of most optimizers, especially if the algorithm happens to be O(n) or greater where n is the number of edges.
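As a rough illustration (this is my own toy example, not the compiler's actual IR or algorithm), consider how exception edges change something as simple as dead-store elimination.  Looking only at the fall-through path, the first two stores to progress are never read before being overwritten; once each potentially-throwing instruction gets an edge to the handler, they become live.

```csharp
using System;

static class ExceptionFlowDemo
{
    static int Example(int[] data, int i)
    {
        int progress = 0;           // looks dead on the fall-through path
        try
        {
            int x = data[i];        // may throw IndexOutOfRangeException
            progress = 1;           // also looks dead: apparently overwritten below
            int y = data[i + 1];    // may throw as well
            progress = 2;           // the store that appears to make the others dead
            return x + y + progress;
        }
        catch (IndexOutOfRangeException)
        {
            // Along the exception edges the handler can still observe 0 or 1,
            // so the stores that looked dead above are live after all.
            Console.WriteLine("Failed after step " + progress);
            return -1;
        }
    }

    static void Main()
    {
        Console.WriteLine(Example(new int[] { 10, 20 }, 0));   // normal path: prints 32
        Console.WriteLine(Example(new int[] { 10, 20 }, 5));   // exceptional path: prints -1
    }
}
```

Once every potentially-throwing instruction needs its own edge to the enclosing handlers, the edge count grows quickly, which is exactly the throughput problem described above for algorithms that scale with the number of edges.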

So basically we're looking at several ideas that are 'accurate enough' to do all of the desired optimizations while still not bloating the flow graph with a large number of edges that would slow down those optimizations.  Of course, even in that statement we need to clearly define what 'accurate enough' is and how fast is fast.  Those terms change greatly between different compilers (JIT, pre-JIT, stand-alone compiler, etc.).

--Grant

Comments

  • Anonymous
    August 05, 2004
    My 2 cents: I realize that you were just giving an example in the first paragraph, but I have been fighting this issue for weeks:

    We are preparing to release a managed version of our desktop application. Our productivity has increased dramatically, but so have the memory usage and sluggishness of our application compared to the previous version, which was straight C++.

    Our largest customers, the top 10 US insurance companies, couldn't care less about our development productivity; they just see that our program runs slower and uses more memory. There are new features, but it is still a hard sell...

    If I had a choice right now, I would say make .NET use less memory. v2.0 seems to have increased the initial memory requirements by about 15 MB at start-up...
  • Anonymous
    August 05, 2004
    Believe it or not, performance is very important around here. One of the tests we run before checking in actually measures the working set of several core scenarios and aborts the checkin if new pages get swapped in that previously weren't.
  • Anonymous
    August 05, 2004
    I know you guys take perf very seriously, and I have noticed the improvements in 2.0, speed in particular.

    But if I compile with v1.1, reboot, and run, then compile with v2.0, reboot, and run, the initial memory is about 15 MB larger.

    I am one of the two senior developers who really pushed for .NET, and hence I get all the complaints too... ;-)

    Overall, I am extremely happy with our move to .NET. I just wish I could handle our own internal political issues about it better... ;-)

  • Anonymous
    August 05, 2004
    The comment has been removed
  • Anonymous
    August 05, 2004
    Frank's comment resonates well. I think that much of our particular problem is that we tasked developers with converting from C++ to C#, and they did just that: they took C++ code and tweaked the syntax until it looked like C#. Few of our devs have really taken the time to think: what is the best way to do this now?

    So if you can step back and look at the problem differently, you may come up with a better solution; a quick sketch of that contrast follows below.
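
As a purely hypothetical illustration of the contrast that commenter describes (the code below is mine, not theirs), here is the same string-joining task written first as a line-for-line C++ port and then rethought as idiomatic C#:

```csharp
using System;
using System.Collections.Generic;

static class PortingDemo
{
    // A "syntax-tweaked" port: manual indexing and repeated string
    // concatenation carried straight over from a C++ original.
    static string JoinNamesPorted(List<string> names)
    {
        string result = "";
        for (int i = 0; i < names.Count; i++)
        {
            result = result + names[i];
            if (i != names.Count - 1)
                result = result + ", ";
        }
        return result;
    }

    // The same operation expressed with the framework's own facilities.
    static string JoinNamesIdiomatic(List<string> names)
    {
        return string.Join(", ", names.ToArray());
    }

    static void Main()
    {
        List<string> names = new List<string> { "alpha", "beta", "gamma" };
        Console.WriteLine(JoinNamesPorted(names));      // alpha, beta, gamma
        Console.WriteLine(JoinNamesIdiomatic(names));   // alpha, beta, gamma
    }
}
```

Both versions produce the same output; the point is only that a literal translation keeps the old language's habits, while stepping back and asking "what is the best way to do this now?" often yields something shorter and cheaper.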