Dave Yardy wrote a post questioning SQL CLR Types.  I posted the following response:

I don't know why you would want to store a serialized UDT in the database.  Having to drop the type if it changes is really, really scary.
However, I had some outstanding results using the following pattern:
Perform a calculation that spans thousands or tens of thousands of records, with a single scalar result.
Approach (it's been almost two years since I wrote this code, so I don't remember the details):
  1. Create a custom type that has fields for each aspect of the records that need to be analyzed during the calculation
  2. Create a custom aggregate of that custom type that returns the scalar result
  3. Create a shared method on the custom type that accepts parameters to match the fields and returns a new instance of the type (a factory basically)
  4. Create a stored procedure (or direct SQL) that selects, from whichever tables store them, the columns that map to the custom type's fields (they may or may not come from the same table)
  5. Use the fields from that select to pass the values into the shared method that returns an instance of the custom type
  6. Use the custom aggregate, aggregating the custom types
  7. Return the result
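A real implementation of this pattern would be a C# (or VB.NET) assembly with `SqlUserDefinedType` and `SqlUserDefinedAggregate` attributes, but the shape of it can be sketched in Python. Everything here is hypothetical and illustrative (the `OrderFacts` type, the `RevenueAggregate`, the sample rows); the point is the contract: a custom type with a static factory, and an aggregate that mirrors SQL CLR's Init/Accumulate/Merge/Terminate lifecycle.

```python
from dataclasses import dataclass

# Step 1: a custom type with one field per aspect of the record
# the calculation needs. (Hypothetical example domain.)
@dataclass(frozen=True)
class OrderFacts:
    quantity: int
    unit_price: float

    # Step 3: a shared (static) factory that builds an instance
    # from the selected column values.
    @classmethod
    def create(cls, quantity, unit_price):
        return cls(quantity, unit_price)

# Step 2: a custom aggregate over the custom type, mirroring the
# SQL CLR Init/Accumulate/Merge/Terminate contract.
class RevenueAggregate:
    def __init__(self):                     # Init
        self.total = 0.0

    def accumulate(self, facts: OrderFacts):
        self.total += facts.quantity * facts.unit_price

    def merge(self, other: "RevenueAggregate"):
        # SQL Server may split work and merge partial aggregates.
        self.total += other.total

    def terminate(self) -> float:           # Step 7: one scalar result
        return self.total

# Steps 4-6: simulate the SELECT feeding each row through the
# factory into the aggregate.
rows = [(3, 9.99), (1, 24.50), (10, 1.25)]
agg = RevenueAggregate()
for qty, price in rows:
    agg.accumulate(OrderFacts.create(qty, price))
result = agg.terminate()
```

In actual T-SQL this whole loop collapses into something like `SELECT dbo.MyAggregate(dbo.MyType::Create(col1, col2)) FROM ...`, which is why the pattern runs so close to the data.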
This approach worked wonderfully for about a half-dozen processes on a project a couple of years ago.  Once I had the pattern down, I found it was very, very powerful, and insanely fast.  IIRC, you can even tell the aggregate to abort once no further records could impact the result.  I remember specifically sorting the resultset so that the aggregate received the most decisive rows first, letting it abort very early most of the time.
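The engine still feeds every row to the aggregate, so the "abort" is really a flag: once the answer is decided, each remaining Accumulate call becomes a near-free no-op, and sorting the input so the decisive rows arrive first makes the flag trip as early as possible. A hypothetical sketch (a real SQL CLR aggregate couldn't take a constructor parameter like `limit`; this is purely conceptual):

```python
class AnyOverLimit:
    # Hypothetical aggregate answering "does any row exceed the limit?"
    def __init__(self, limit=100):
        self.limit = limit
        self.found = False   # the "abort" flag

    def accumulate(self, value):
        if self.found:
            return           # no further row can change the result
        if value > self.limit:
            self.found = True

    def terminate(self) -> bool:
        return self.found

# Feeding rows sorted descending means a positive answer is found
# on the very first row; every later call short-circuits.
agg = AnyOverLimit(limit=100)
for v in sorted([12, 180, 40, 95], reverse=True):
    agg.accumulate(v)
```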
Unfortunately, I don't have any metrics, but the concept behind this pattern has stuck with me, knowing that someday I'll need to pull it off my tool belt and apply it again.