The problem with tests failing in this way is that we often report them as failures, and this can massively skew our results. If we have a suite of 50 tests targeted at the Accounts functionality of our system, but the accounts tab has been removed so we are unable to navigate to it, should this be reported as 1 failure or 50?
By marking one test as failing, due to a check that the accounts tab is available, and the rest as breaking, we solve this problem. Suddenly we have 1 test failure and 49 grouped breakages, which is far more indicative of the actual state of the system.
So a breaking test is a test that fails before it gets to what it is actually checking/testing/asserting. I highly recommend breaking down the failures in your automation reports to distinguish breaking tests from genuine failures.