I went to https://pragprog.com/book/atevol/software-design-x-rays trying to find more excerpts from your book. I thought the one about the Proximity Principle was spot on. I think all programmers have some sense of this principle, but it's really more of an internal heuristic than a principle until someone articulates it and gives it a name.
Thinking back, that's how it was for me before I read Martin Fowler's "Refactoring" book. When I read that first chapter with the long refactoring example, I found myself saying "Yeah, I do that. Yeah, I do that, too. Yup, done that. Yup, do that all the time." What really blew my mind about Fowler's example was that I did many of those refactorings he wrote about on an ad hoc, piecemeal basis whereas he had a very purposeful, systematic, and disciplined approach that made the combination of all those changes much more impactful than I could ever hope my changes would be. They had a synergy that I could never get because my work was based on "intuition," for lack of a better word, rather than principles of design. I just made my changes because I liked how my code looked after I made them. After I read Fowler's book, I had a specific purpose for making changes based on guiding principles.
Can you point out other things you write about in your book that might be like this, things that good programmers already know intuitively, and which could perhaps be made even more powerful if the context and reasons for doing them were grounded in well-articulated principles?
Here's why I think articulating the Proximity Principle in terms of code is important: it gives tech leads, and people like me who coach other developers, a good way to reason about the changes we make based on the principle. Without a clearly articulated principle, your position when defending changes becomes tenuous, supported only by statements like "well, I just like it better" or "it just makes the code easier to read." Those statements imply preference or style, which can be dismissed as subjective and easily rejected with "Well, I like this way better, so let's just agree to disagree," at which point you're at an impasse or, worse, you have to cave.
Recently, I've had people say things like "Oh, Selenium is the worst" or "Selenium is evil." Paraphrasing something you wrote earlier in the book about legacy systems: "Let's build a tool that's going to make people's lives a living hell," said absolutely no one ever, right? I haven't dug into those statements, partly because I know that would probably lead to a long and contentious "conversation," but also because, aside from the Test Pyramid, I don't have many principles to argue for the good side of Selenium. I happen to like the tool and what it lets me do, and I've had no problems so far that I can attribute directly to Selenium being evil. You also mention Selenium in your book.
So, what principles might you use to argue in favor of Selenium and what do you think the context is (Andy Hunt's Rule #1: Always consider context) that people leave out or assume when they say things like "Selenium is bad, it's evil"?
One example is the Splinter pattern from chapter 4, "Pay Off Your Technical Debt". The intent of the Splinter pattern is to provide a structured way to break up hotspots into manageable pieces that can be divided among several developers, rather than having a group of developers work on one large piece of code. Breaking up large modules that are becoming unmanageable isn't exactly a revolutionary idea; it's something experienced developers do all the time. What sets the Splinter pattern apart is that it approaches the refactoring steps from a social perspective. Hence, the initial goal of the pattern isn't to solve the technical problems, but rather to transform the existing code into a new context where those problems can be solved with minimal risk of coordination overhead and code conflicts between multiple developers and teams.
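To make the idea concrete, here is a minimal sketch of the kind of transformation the pattern aims at: a hotspot reduced to a thin facade that delegates to extracted "splinters", each of which a separate developer can own without everyone editing one big file. The module names, functions, and order data below are hypothetical illustrations, not examples from the book:

```python
# Hypothetical splinters: in practice each would live in its own file,
# owned by a different developer or team.

def calculate_price(order):
    # pricing splinter
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def format_invoice(order, total):
    # invoicing splinter
    return f"Invoice for {order['customer']}: {total:.2f}"

# The former hotspot, reduced to a facade that preserves the public API
# while the real behavior moves out into the splinters.
def process_order(order):
    total = calculate_price(order)
    return format_invoice(order, total)

order = {"customer": "ACME", "items": [{"qty": 2, "unit_price": 9.5}]}
print(process_order(order))  # Invoice for ACME: 19.00
```

The point of the facade is social rather than technical: callers keep working unchanged while the splinters can be refactored in parallel with far fewer merge conflicts.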
Another principle that I think is followed implicitly is to organize code according to its age. Following this principle lets us evolve our systems toward increased development stability. The main reason we don't discuss the age dimension of code is that age is invisible in the code itself. However, our version-control data knows, and it can guide the package-level refactorings that I cover in chapter 5, "The Principles of Code Age".
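A minimal sketch of what a code-age analysis could look like, using hypothetical file paths and dates; in a real analysis the last-modified timestamps would come from version control (for example, the last commit date that `git log` reports for each file):

```python
from datetime import date

# Hypothetical last-commit dates per file; a real analysis would pull
# these from version-control data rather than hard-coding them.
last_modified = {
    "core/billing.py":  date(2016, 3, 1),
    "core/invoice.py":  date(2016, 4, 12),
    "web/dashboard.py": date(2018, 1, 20),
    "web/api.py":       date(2018, 2, 2),
}

def age_in_days(modified, today=date(2018, 3, 1)):
    return (today - modified).days

def age_bands(files, threshold_days=365):
    """Split files into stable (untouched for over a year) and active."""
    stable, active = [], []
    for path, modified in files.items():
        band = stable if age_in_days(modified) > threshold_days else active
        band.append(path)
    return stable, active

stable, active = age_bands(last_modified)
print("stable:", stable)  # candidates for extraction into stable packages
print("active:", active)
```

Grouping files into bands like this suggests package-level moves: old, stable code can be isolated from the actively changing parts of the system.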
One of the most important principles in the book is that "There Is No Such Thing as Just Test Code". In my day job I analyze lots of codebases from different organizations, and some of the worst hotspots and design issues tend to be in automated tests. The notion that tests are "just test code" is a dangerous fallacy, because from a maintenance perspective our test code is at least as important as our application code.
Finally, there are also several principles around change coupling. My personal favorite is to combine change coupling measures with copy-paste detectors. Most systems contain lots of duplicated code. An important question in that context is whether we should extract shared abstractions to eliminate the duplication or if the code is fine as is. This is a hard problem: just because two pieces of code look similar, that doesn't mean they should share a common abstraction. Experienced developers with deep domain knowledge often know when to draw this distinction. And the change coupling measures can help us further by pointing to true violations of the Don't Repeat Yourself (DRY) principle.
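A change coupling measure of this kind can be sketched in a few lines. The commit history below is hypothetical, and the specific metric (shared commits divided by the revisions of the more frequently changed file in the pair) is one simple way to express the idea, not the exact formula from the book:

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit history: each entry is the set of files touched
# in one commit. Real data would come from version control.
commits = [
    {"order.py", "order_test.py"},
    {"order.py", "order_test.py", "invoice.py"},
    {"invoice.py"},
    {"order.py", "order_test.py"},
    {"report.py"},
]

pair_counts = Counter()   # how often each pair changes in the same commit
revisions = Counter()     # how often each file changes overall
for files in commits:
    revisions.update(files)
    pair_counts.update(combinations(sorted(files), 2))

def coupling(a, b):
    """Degree of change coupling between two files, in the range 0..1."""
    shared = pair_counts[tuple(sorted((a, b)))]
    return shared / max(revisions[a], revisions[b])

print(coupling("order.py", "order_test.py"))  # 1.0: they always change together
print(coupling("order.py", "invoice.py"))
```

Combined with a copy-paste detector, a high coupling score for two similar-looking pieces of code is a signal that the duplication is a true DRY violation, whereas duplicated code that never changes together may be fine as is.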
Author of Software Design X-Rays: Fix Technical Debt with Behavioral Code Analysis (2018).