Jeff, I'm curious: did you always have automated tests?
In my experience, developers being empowered to write tests is a relatively new development. Previously (and still by far the most common arrangement), testing was done by QA engineers, people whose sole job is to test. In that setup, a failure is not dismissed, because the discipline of QA has matured enough that a reported failure is known to mean a real issue (helped by factors such as real people exercising the actual product, instead of treating it like a white box the way automated tests do).
On the other hand, when developers are writing the tests themselves, we definitely have different categories and different 'alarm' levels. None of us ship products that are 100% bug free, and the more you know about the bugs you are shipping, the better off you are. So in this case, isn't it nice to have tests that fail to show that something is a known bug? Then, when someone comes around to fixing it, there is already a test in place to verify the fix. This suggests to me a 'knownbug' group, for example, which a triage group could review regularly to see whether it's feasible to start fixing those bugs for the next release, and so on.
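To make that concrete, here is a minimal sketch of what a 'knownbug' test might look like, assuming pytest as the test framework (the bug ID, function, and test names here are hypothetical, not from any real project):

```python
import pytest


def parse_price(text):
    # Hypothetical production code with a known bug: it ignores thousands
    # separators, so "1,200" parses as 1 instead of 1200.
    return float(text.split(",")[0])


@pytest.mark.xfail(reason="KNOWNBUG-42: thousands separators not handled", strict=True)
def test_parse_price_with_thousands_separator():
    # This test documents the known bug and is ready to verify the eventual fix.
    # It is reported as 'xfail' instead of breaking the build; with strict=True,
    # it fails loudly (XPASS) once the bug is fixed, prompting removal of the marker.
    assert parse_price("1,200") == 1200
```

One could also tag such tests with a custom marker like `@pytest.mark.knownbug` (registered in pytest.ini), so the triage group can run just that group with `pytest -m knownbug` when reviewing candidates for the next release.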
Traditionally, people would comment out such tests, or even not write them until the fix is ready to be implemented. Having a test that is known to break is far more valuable and informative than either of those approaches.
I don't think it's ever OK to have a random test failure and shrug it off. It's important to always know why a test is failing. What we're saying is that you don't have to drop everything to fix it: the failing test has already given you additional information, so you are better off than if you didn't have it at all.