So why would someone who's writing a servlet worry about the time taken for a cast? I can't believe it's due to a fear of software bloat, simply because the relative impact is so minimal: a person truly concerned about bloat would think of replacing HTTP with a raw TCP connection. Or maybe UDP, to avoid all the overhead of ack/nack.
However, what if the questioner has a fundamental misunderstanding of how long it takes a computer to do anything? To be more specific, a belief that all operations take the same amount of time, so removing a cast is just as good as removing a database lookup.
Experienced programmers will scoff at that idea. But think about it: on a human scale, all computer operations do seem to take the same amount of time. Google can come up with search suggestions in the time it takes you to type a single character; that involves a round-trip web request and some sort of database call.
Computer science classes reinforce this idea, with their emphasis on “Big O” analysis. The implication of such analysis is that an O(n²) algorithm is worse than an O(n log n) algorithm, regardless of what the code actually does — or the context in which the code is executed.
In the real world, of course, context is everything. If I need to do lookups on thousands of items, a hashed lookup (O(1)) seems far more appropriate than a linear lookup (O(N)). But what if I only have a few items? Say, 10 or 20? Or if I only occasionally do a lookup? In this case, the O(1) versus O(N) analysis just doesn't matter. And a linear structure may be easier to construct and use than a hashed structure.
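To make that concrete, here is a minimal sketch (the names and data are my own illustration, not from the article) of both approaches for a handful of entries. The linear version needs nothing but an array and a loop, while the hashed version drags in a whole map just to answer the same question:

```java
import java.util.HashMap;
import java.util.Map;

public class SmallLookup {
    // Linear O(N) lookup over parallel arrays: trivial to build,
    // no hashing machinery, and perfectly adequate for 10-20 items.
    static String linearLookup(String[] keys, String[] values, String key) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i].equals(key)) {
                return values[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String[] keys   = {"GET", "POST", "PUT", "DELETE"};
        String[] values = {"read", "create", "update", "remove"};

        // Hashed O(1) lookup: more setup for the same answer.
        Map<String, String> byName = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            byName.put(keys[i], values[i]);
        }

        System.out.println(linearLookup(keys, values, "PUT")); // update
        System.out.println(byName.get("PUT"));                 // update
    }
}
```

At this size, any speed difference between the two is unmeasurable without a proper benchmark harness; the only real difference is how much code you had to write.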
This brings me back to Mel. Mel's genius was not so much in understanding the instruction set as in understanding the overall context: knowing how long each instruction took, how quickly the memory drum spun, and, most important, when this knowledge made a difference. Perhaps that understanding only comes with experience, but I find that even experienced programmers fall into the micro-optimization trap, so there must be yet another factor at work.