
Junilu Lacar

Member since Feb 26, 2001
Columbus OH

Recent posts by Junilu Lacar

Having test cases that directly test private methods in production code is a smell. Private methods usually contain implementation-specific details that may change over time but shouldn't affect the observable behavior of a class. Having tests that depend on implementation-specific details makes your test class more brittle because any test method that depends on an implementation detail is more likely to break if the implementation changes.

It's fine to test code that goes into private methods as you're developing it. If I really want to make sure a particularly complicated piece of implementation code works correctly, I first test the code as part of the test class itself. Then, when I'm confident it works the way it should, I copy it over to the main class.

Here's an example. Let's say I wanted to use a particular search algorithm in the main class but I want to make sure I implemented it correctly. To do this, I would first create a private method in the test class, not the main class.
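A sketch of what that might look like, using the findMax name from later in this post (the method body itself is my own illustration):

```java
// FooTest.java -- the algorithm under development lives in the TEST
// class until I'm confident it works.
public class FooTest {

    // Implementation in progress. In the actual workflow this would be
    // a private helper of the test class; it's package-private here
    // only so the standalone sketch is easy to exercise.
    static int findMax(int[] values) {
        int max = values[0];
        for (int v : values) {
            if (v > max) {
                max = v;
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // Tests that exercise the in-progress implementation directly
        if (findMax(new int[] {3, 9, 2, 7}) != 9) {
            throw new AssertionError("failed on mixed values");
        }
        if (findMax(new int[] {-5, -1, -3}) != -1) {
            throw new AssertionError("failed on all-negative values");
        }
        System.out.println("findMax tests pass");
    }
}
```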

When this test passes, I copy it to the main class.

Then I continue to test drive any behavior that might use this particular algorithm.

When I implement the bar() method as a public method in Foo, I may decide to have it call the private findMax() method that's already there. Whether or not I choose to do that should not be evident in the bar_does_something() test case. This keeps the test implementation-agnostic and more resilient to changes. I can still be confident that if I do choose to use findMax(), any test failure would not be due to something being wrong in findMax().
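To illustrate (bar()'s real behavior isn't shown in this post, so the body here is invented), the test pins down only what bar() returns, never how:

```java
public class Foo {
    // Private implementation detail, copied over from FooTest once proven
    private int findMax(int[] values) {
        int max = values[0];
        for (int v : values) {
            if (v > max) {
                max = v;
            }
        }
        return max;
    }

    // Public behavior; it happens to delegate to findMax(), but the
    // test below would still pass if this were implemented another way
    public int bar(int[] scores) {
        return findMax(scores) * 2;  // invented behavior for illustration
    }

    public static void main(String[] args) {
        // bar_does_something: asserts only on bar()'s observable result
        if (new Foo().bar(new int[] {1, 4, 3}) != 8) {
            throw new AssertionError("bar_does_something failed");
        }
        System.out.println("bar test passes");
    }
}
```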

When I'm done test-driving the code for all public methods in Foo that use the private findMax() method, I'll either delete it from the FooTest class or check it into source control if I think it might be useful later.
6 hours ago

The project structure you're using is very unusual. Most real-world Java projects these days are organized the way Ron showed earlier, and tools like Maven and Gradle expect projects to be organized that way.

I suspect the problems you were having with the structure Ron suggested were due to an IDE configuration issue. I noticed that your package declarations included the source directory names -- that's not right. In the IDE, you should configure your project to have test/java and main/java as your content roots.

Then in both your application and test classes, your package declarations should just be:
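For reference, the conventional Maven/Gradle layout looks like this, with com.example standing in as a placeholder package:

```
project-root/
├── pom.xml  (or build.gradle)
└── src/
    ├── main/
    │   └── java/
    │       └── com/example/App.java        <- starts with: package com.example;
    └── test/
        └── java/
            └── com/example/AppTest.java    <- starts with: package com.example;
```

Note that the package declaration names only com.example, never the main/java or test/java part; those directories are content roots, not packages.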

7 hours ago
Why wouldn't you want to copy? The java.util.Arrays class is all you need.

Note that there are no true multidimensional arrays in Java but rather you can have nested arrays, that is, arrays whose elements are themselves arrays. So when you declare int[][] grid = new int[4][5]; you're really declaring an array with 4 elements, each of which is an array of 5 int values.
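For instance, copying a nested array safely means copying each row as well as the outer array; a small sketch using java.util.Arrays:

```java
import java.util.Arrays;

public class NestedArrays {
    public static void main(String[] args) {
        // An "array of 4 arrays of 5 ints," not a true 2D block of memory
        int[][] grid = new int[4][5];
        grid[0][0] = 42;

        System.out.println(grid.length);     // 4 (outer array)
        System.out.println(grid[0].length);  // 5 (first inner array)

        // Copy the outer array AND each row; copying only the outer
        // array would leave both copies sharing the same row objects
        int[][] copy = new int[grid.length][];
        for (int i = 0; i < grid.length; i++) {
            copy[i] = Arrays.copyOf(grid[i], grid[i].length);
        }

        copy[0][0] = 7;                      // does not touch grid
        System.out.println(grid[0][0]);      // still 42
        System.out.println(Arrays.deepToString(copy));
    }
}
```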
20 hours ago

As far as actual testing techniques go, I'd say things like using fakes, mocks, stubs, and spies are techniques. FIRST and CORRECT unit tests might also be considered "techniques," although I'm more inclined to see them as guidelines, as is AAA (Arrange-Act-Assert).
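As a quick illustration of AAA with a hand-rolled stub (all of the names here are invented for the example):

```java
// Collaborator interface the class under test depends on
interface RateService {
    double currentRate();
}

// Class under test: combines a base amount with an external rate
class PriceCalculator {
    private final RateService rates;

    PriceCalculator(RateService rates) {
        this.rates = rates;
    }

    double price(double base) {
        return base * rates.currentRate();
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        // Arrange: stub the collaborator with a canned answer
        RateService stubbedRates = () -> 1.5;
        PriceCalculator calculator = new PriceCalculator(stubbedRates);

        // Act
        double result = calculator.price(100.0);

        // Assert
        if (result != 150.0) {
            throw new AssertionError("expected 150.0 but got " + result);
        }
        System.out.println("AAA stub example passes");
    }
}
```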

Fuzzing is also a testing technique.
1 day ago
Sam, no problem, it wasn't meant to be a reproach or anything like that. I just couldn't find the right word to translate what I wanted to say in my native tongue (a Filipino dialect), which is something like being snide, except in a more joking than sardonic tone. The word we use in my dialect is pilosopo, which is based on "philosophical" except it's more smart-alecky. I don't know how to convey it without making it sound bad, which it isn't supposed to be. Anyway, no harm, no foul.

As for the directories, the more commonly used ones are bin and classes
1 day ago
Aside from type inference of local variables, there don't seem to be many language changes in Java 10. If you just want to learn Java, you probably don't need a book about the latest and greatest version of the language. If you want to learn about new features introduced in Java 9 or Java 10 itself, then of course you'd want to buy a book that covers those kinds of things.
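For what it's worth, that one headline feature looks like this (the sketch needs Java 10 or later to compile):

```java
import java.util.ArrayList;

public class VarDemo {
    public static void main(String[] args) {
        // The compiler infers the types; nothing is dynamically typed
        var message = "hello";                  // inferred as String
        var numbers = new ArrayList<Integer>(); // inferred as ArrayList<Integer>

        numbers.add(message.length());
        System.out.println(numbers);            // prints [5]
    }
}
```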

I wouldn't worry about the increased pace too much. There are plenty of programs out there that don't even do object-orientation correctly, much less take advantage of features introduced in Java 8 and later.
2 days ago

It says

Java API docs wrote:It is strongly recommended, but not strictly required that (x.compareTo(y)==0) == (x.equals(y)). Generally speaking, any class that implements the Comparable interface and violates this condition should clearly indicate this fact. The recommended language is "Note: this class has a natural ordering that is inconsistent with equals."

Since the code uses id as the basis for equals() and name as the basis for compareTo(), the above condition is violated and clients of this particular implementation may be surprised when they try to use it if they are not made aware of this.
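A small sketch of the kind of surprise this causes (Person, id, and name are stand-ins for the code in the thread): a HashSet consults equals() while a TreeSet consults compareTo(), so the two collections disagree about whether the same pair of objects are duplicates.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

class Person implements Comparable<Person> {
    final int id;
    final String name;

    Person(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // equals()/hashCode() based on id...
    @Override
    public boolean equals(Object o) {
        return o instanceof Person && ((Person) o).id == this.id;
    }

    @Override
    public int hashCode() {
        return id;
    }

    // ...but compareTo() based on name: the inconsistent ordering
    @Override
    public int compareTo(Person other) {
        return this.name.compareTo(other.name);
    }
}

public class InconsistentOrderingDemo {
    public static void main(String[] args) {
        Person a = new Person(1, "Alice");
        Person b = new Person(2, "Alice"); // different id, same name

        Set<Person> byEquals = new HashSet<>();
        byEquals.add(a);
        byEquals.add(b);

        Set<Person> byCompareTo = new TreeSet<>();
        byCompareTo.add(a);
        byCompareTo.add(b);

        System.out.println(byEquals.size());    // 2 -- equals() sees two people
        System.out.println(byCompareTo.size()); // 1 -- compareTo() sees one
    }
}
```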

2 days ago
In my experience, a huge number of constant declarations in one single public class is usually a big code smell, in terms of both scope and responsibility, especially if there are many values with limited references to them. I would also put money on many of those constants being closely related to your database, perhaps even being SQL statements or (worse) DB table column names.
2 days ago

Sam Parson wrote: ...develop good test habits for myself, so when my name is on the code I put out, it will not be something I would be ashamed of.

This is a great attitude to have and one that I take very seriously. I have done TDD for several years now. In fact, I'm arguably the most vocal proponent for the practice around here. I'm also very much about craftsmanship and clean code and TDD is one of the best ways for me to practice craftsmanship and write clean code. It's not the testing that gives you all that though, it's everything about design thinking that testing leads you to do that results in clean code and good designs.
2 days ago

Sam Parson wrote:
So this is where he and I had a bit of a disagreement on. He said I should test the min, max, and middle... For example:
In a range of integers 1 through 10...
His tests would be -> test 1, test 5, and test 10.
My tests would be -> test -2147483648, test -2147483647, test -1, test 0, test 1, test 2, test 5, test 9, test 10, test 11, test 12, test 2147483646, and test 2147483647
Now should I be expected to write tests for every number available outside the boundaries? What if number 1715385483 fails? How would I know it would be a test that fails without writing a test for every single number outside the boundary (also imagine how long these tests would take to execute all at once)? This is something I would hope QA would catch before release to customers.

This is where doing TDD as a pair or as a mob creates synergy. Again, you'd be talking about the design, in this case, the design of the tests themselves. What conditions are you testing with all those different cases? Does each of those cases represent a unique set of conditions that could result in incorrect behavior in the program, or are some of them redundant? Your test cases should not be arbitrary "just in case" deals; they should have a specific goal and focus, and they should reflect your understanding of the program and the implementation. Redundant tests reflect a lack of understanding of the exact behavior of your code, and that's not only wasteful, it's also dangerous.

So, when you ask "should I write tests for every single number outside the boundary?" you're either being snide or you don't fully understand what the code is doing and what its limitations are, or a combination of both.
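To make that concrete with the 1-through-10 example (inRange is a made-up implementation): if each test targets a distinct condition, four cases cover every boundary transition, and extra values on either side exercise conditions a test already covers.

```java
public class RangeCheck {
    // Made-up implementation of the 1..10 range check under discussion
    static boolean inRange(int n) {
        return n >= 1 && n <= 10;
    }

    public static void main(String[] args) {
        // One case per distinct condition; each can fail independently
        check(!inRange(0),  "just below the lower boundary");
        check( inRange(1),  "the lower boundary itself");
        check( inRange(10), "the upper boundary itself");
        check(!inRange(11), "just above the upper boundary");
        // Integer.MIN_VALUE / MAX_VALUE earn their keep only if the
        // implementation could overflow, e.g. if it computed n - 1 or n + 1
        System.out.println("boundary tests pass");
    }

    static void check(boolean condition, String label) {
        if (!condition) {
            throw new AssertionError("failed: " + label);
        }
    }
}
```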
2 days ago

Sam Parson wrote:
Also, another good video talking about what is actually happening now between devs and QA -> (begin at 32:40 on his talk about the relationship between devs and QA).

It's good that you're watching all these Uncle Bob videos because, in fact, these are pretty much what I base my coaching on. FYI, my day job is as an Agile technical coach and I teach folks things like TDD and refactoring and pair programming.  The things I teach are based largely on Uncle Bob's writings and talks and those of people like him. They are also based on my own experience with development teams where we practiced all these things that Uncle Bob talks about.

I was almost going to write "QA should expect to find nothing" in my original responses, but I thought that might be too radical an idea for you. That's why I said the attitude/relationship should be more cooperative, with test-focused engineers working closely with development engineers so that both sides understand the kinds of rigors the code and design will be put through during downstream testing activities such as integration, performance/load, and penetration testing.

OP wrote:what is actually happening now between devs and QA

In my experience as a long-time developer, tech lead, and coach, the majority of teams still don't have this kind of relationship between QA and dev. It took a while for the team I worked with in my last job to get to "QA should expect to find nothing" because that only happens when teams work together to squash all existing bugs, refactor their designs, and start producing clean code and clean designs developed through techniques like TDD. Once the team gets to where they are doing TDD effectively, the QA folks can reasonably and confidently expect to find nothing. That's when their reason to exist boils down primarily to manual exploratory testing: testing the things that can't be tested, or don't make sense to test, through automated means.
2 days ago
Thank you for posting that link to Uncle Bob's talk. Nowhere in that talk does he say that TDD is a testing technique. In fact, much of what he says supports the argument that TDD is a design technique.

At 12:20, he says, "When you are following the three laws of TDD, what you will produce in the unit tests are the code examples of the entire system."
A little later he adds, "That's what unit tests are: they are little snippets of code that explain how the system works. And because you are doing TDD there is a snippet in there that explains how every part of the system works."
In other words, if you do TDD properly, the test code becomes a complete, detailed design specification of the code that you wrote. See Jack Reeves's essays on "Code is Design."

At 15:30, he says, "Inevitably, however, you will come to the code that's hard to test. It's hard to test because you did not design it to be testable."
At 18:14, he says, "You have to design all your functions to be easy to test because you're writing the tests first."

He also talks about bad code and messy code. He talks about cleaning up the messes we make. One thing that the three laws don't talk about is the very important, and arguably THE most important step of TDD: Refactor. Refactoring is defined as "Improving the DESIGN of existing code."

Writing the test before writing production code only makes sense if you're writing test code so you can express an initial DESIGN and try it out.
Writing only enough production code to make a failing test pass only makes sense if you are doing incremental and emergent DESIGN.
Refactoring after you get your tests to pass is to clean up any mess that you made and make your DESIGN better.

Therefore, TDD is primarily a DESIGN technique that uses tests as the motivation and impetus for thinking about, experimenting with, and improving your DESIGN.
2 days ago

Sam Parson wrote:...and not actual unit testing or tdd. He said unit testing and tdd is running tests just to make sure it works.

This is an incomplete understanding of TDD as it relates to unit testing. TDD is a design technique, not a testing technique. It uses tests to drive how you think about design and refactoring helps you get to better designs when the tests show you something that isn't quite right. When properly done, unit tests show you examples of how your class is used and interacts with other classes to get a job done. All this is squarely in the domain of responsibility of the developer. Modern agile teams work together with "QA" or test-focused engineers. There should be no "trying to break your code" kind of attitudes but rather something more like "let's see what else we might have missed" type relationships on good teams.
3 days ago

Sam Parson wrote:My mentor/friend was saying this would be more for QA testing and not actual unit testing or tdd. He said unit testing and tdd is running tests just to make sure it works. But running tests outside the boundary scope is more the job of QA testers who should try and break your program. I'd like to think I can automate this somehow, but if pressured for time, managing test methods at the end of every class before deployment would probably cause a headache, and I would leave it to the QA people.

This is a bit odd. If you are doing TDD, leaving boundary conditions to someone else seems like a cop-out. Leaving boundary conditions to the QA folks is a very traditional-style development attitude. Look up CORRECT testing of boundary conditions.

QA folks should really spend more of their time doing exploratory testing.

3 days ago
Trying to print to the console when you're using JUnit is like kicking a hole in the floor of your car and trying to make it go "Fred Flintstone" style.  The point of using a tool like JUnit is to make your tests self-verifying. That is, the test code itself will report "pass" or "fail" and it should be unnecessary for a human to eyeball the results as printed out to the console.
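Here's the difference in miniature (add is a stand-in for whatever the code under test does):

```java
public class SelfVerifyingDemo {
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // NOT self-verifying: a human has to read this output and
        // decide whether the number looks right
        System.out.println("add(2, 3) = " + add(2, 3));

        // Self-verifying: the code itself reports pass or fail, the
        // way JUnit's assertEquals(5, add(2, 3)) would
        if (add(2, 3) != 5) {
            throw new AssertionError("expected 5");
        }
        System.out.println("PASS");
    }
}
```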

Secondly, your code is way too complex. As mentioned earlier, that code should be refactored so that you can test small parts of it separately.
4 days ago