I dithered over setters/getters versus direct field access way back when C++ was new. I finally decided that despite the nuisance value, setters and getters were better. Not all debuggers have a good, easy-to-use way of detecting changed fields. Also, a setter allows the flexibility to add validation, logging, breakpoints, etc. And occasionally I end up changing the underlying data type, so the abstraction of set/get methods minimizes the impact on other parts of the program. I do a lot of datatype changes, which is why I have an especial antipathy for Hungarian Notation.
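To make that flexibility concrete, here's a minimal sketch of the sort of thing a raw public field can't do; the Widget class and its count property are invented for illustration:

public class Widget {
    private int count;

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        // Validation a raw public field could never enforce.
        if (count < 0) {
            throw new IllegalArgumentException("count must be non-negative: " + count);
        }
        // A single choke point for logging, or for a debugger breakpoint
        // when you need to catch whoever is changing the field.
        System.out.println("count: " + this.count + " -> " + count);
        this.count = count;
    }
}

The point isn't that every setter needs all of this, it's that the hooks are there the day you want them, without touching the callers.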
As far as overhead goes, I expect that the optimizer can detect simple set/get operations and inline them to direct access where feasible, so the generated code can end up identical either way, with the edge in flexibility going to the set/get methods.
Why test-before-set? This mostly makes sense in an ORM or remoting environment. Setting a value can flip a "dirty" flag in the underlying metacode when that metacode is naive and doesn't itself check whether the "change" actually changed anything. Testing first can reduce the overhead of remote transmissions and result in tighter SQL generation in an ORM system.
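Here's a rough sketch of the idiom; Customer, its name property, and the dirty flag are invented names for illustration, not any particular ORM's API:

import java.util.Objects;

public class Customer {
    private String name;
    private boolean dirty;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        // Test before set: assigning the same value again shouldn't mark
        // the object dirty, so no spurious UPDATE or remote sync results.
        if (Objects.equals(this.name, name)) {
            return;
        }
        this.name = name;
        this.dirty = true;
    }

    // The persistence/remoting layer would consult this to decide what to send.
    public boolean isDirty() {
        return dirty;
    }
}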
The secret of how to be miserable is to constantly expect things are going to happen the way that they are "supposed" to happen.
You can have faith, which carries the understanding that you may be disappointed. Then there's being a willfully blind idiot, which virtually guarantees it.