Hani Suleiman

Recent posts by Hani Suleiman

Sometimes it's not possible (or realistic) to go through a huge refactoring process to enable this, though I agree that it's definitely the most 'correct' solution.

If you're using EJB3, then there are a number of containers that can be embedded in tests, such as Spring Pitchfork. Our book covers this quite extensively.

For EJB2, the solutions aren't quite as good, but there are some libraries to allow you to test session beans at least, with some restrictions.

Another alternative is to use in-container testing; it's far from ideal, but better some tests than none!
17 years ago
I'd strongly recommend having test classes in the same package, but different source tree.

The huge benefit of doing so is that you can use package-private visibility, thereby making your code testable without breaking encapsulation.
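To make the layout concrete, here's a minimal sketch (the package, class, and method names are made up for illustration):

```java
// src/main/java/com/example/order/OrderValidator.java
package com.example.order;

public class OrderValidator {
    // package-private: visible to tests in the same package, hidden from everyone else
    boolean isWithinCreditLimit(double amount) {
        return amount <= 10000;
    }
}
```

```java
// src/test/java/com/example/order/OrderValidatorTest.java -- same package, different source tree
package com.example.order;

import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class OrderValidatorTest {

    @Test
    public void smallOrdersAreWithinLimit() {
        // the package-private method is directly accessible here, no need to widen its visibility
        assertTrue(new OrderValidator().isWithinCreditLimit(500));
    }
}
```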
17 years ago
Hi Walter,

Have you had a look at DbUnit? It does help a bit with database-related testing, and makes things like populating data somewhat less painful than it could otherwise be.

The book does cover DbUnit integration with TestNG.
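For what it's worth, a rough sketch of the kind of setup DbUnit gives you (the in-memory HSQLDB URL and the customers.xml dataset are placeholders, and the exact constructor names may vary between DbUnit versions):

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;
import org.testng.annotations.BeforeMethod;

public class CustomerDaoTest {

    @BeforeMethod
    public void populateDatabase() throws Exception {
        // a plain JDBC connection, wrapped by DbUnit
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // seed data is described declaratively in an XML file on the classpath
        IDataSet dataSet = new FlatXmlDataSet(getClass().getResourceAsStream("/customers.xml"));

        // wipe the tables named in the dataset and insert the seed rows
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }

    // ... tests that query the known seed data go here ...
}
```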
17 years ago
The problem with allowing significant chunks of uncommitted code is that the habit becomes easy to fall into, and in my experience, you very quickly end up with people not committing for days.

It's much better (and safer) to commit regularly, and just make sure your tests are tagged appropriately.
17 years ago
Hi Robert,

Some of the techniques are definitely applicable to JUnit. In particular, much of the integration and EE testing principles and tools can be used directly with JUnit.

However, many of the patterns we discuss are just too awkward to implement with JUnit. For example, anything that uses dependencies (like async testing) or timeouts/success percentages (QoS tests).
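To give a flavour of what I mean by QoS-style tests, a sketch along these lines (the method body is invented for illustration):

```java
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class LookupLatencyTest {

    // run the check 100 times; the test passes if at least 95% of the invocations
    // succeed, and any single invocation is killed if it takes longer than 2 seconds
    @Test(invocationCount = 100, successPercentage = 95, timeOut = 2000)
    public void lookupRespondsQuickly() {
        long start = System.currentTimeMillis();
        // call the service under test here, e.g. directoryService.lookup("bob")
        assertTrue(System.currentTimeMillis() - start < 2000);
    }
}
```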

The reason for using TestNG is probably very similar to your current frustrations. JUnit simply didn't cut it for some of the testing scenarios we were encountering, hence TestNG was born.

I'd be curious to hear what specific situations you're running across where JUnit isn't helping. It's also possible that you have something more fundamental going on that makes testing very difficult!
17 years ago
Probably nothing exciting here unfortunately, no secret hidden features!

My personal 'easy' favourites are two things (related to groups, boringly enough): stateful tests, and dependencies.

I can't imagine writing non-stateful tests. I don't know how people manage without them. When testing a blahManager, for example, do you really create a new one every time? How about db connections? Environment setup? Spring initialization?

Dependencies likewise mean you can have tests that build on other tests in a meaningful way. The send/receive example in another thread here highlights that pattern. Likewise, depending on named initialization groups means you can express such concepts as 'this test needs db connectivity, a jms provider, and a servlet engine'.
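As a concrete sketch of the 'named initialization groups' idea (the group and method names are just examples):

```java
import org.testng.annotations.Test;

public class CheckoutFlowTest {

    // environment checks expressed as tests belonging to named groups
    @Test(groups = "init.db")
    public void databaseIsUp() {
        // open a connection and fail fast if the database isn't available
    }

    @Test(groups = "init.jms")
    public void jmsProviderIsUp() {
        // verify the message broker is reachable
    }

    // this only runs once everything in the named groups has passed
    @Test(dependsOnGroups = { "init.db", "init.jms" })
    public void checkoutPublishesOrderMessage() {
        // exercise the checkout path and assert on the resulting message
    }
}
```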

I'll also name a more 'hidden' neat feature, which is the object factory concept. The interesting thing here is that this allows you to control test instantiation. Why is this useful? Well, if you can control creating the test, you can do all sorts of funky stuff before returning the instance to testng. You could return a bytecode instrumented version, you could use Spring or Guice to inject its dependencies in, all sorts of craziness!
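Roughly, the hook looks like this (signature from memory, so check it against your TestNG version; what you actually do with the instance is entirely up to you, so the interesting part is left as comments):

```java
import java.lang.reflect.Constructor;

import org.testng.IObjectFactory;

public class InjectingObjectFactory implements IObjectFactory {

    // TestNG calls this whenever it needs an instance of a test class
    public Object newInstance(Constructor constructor, Object... params) {
        try {
            Object testInstance = constructor.newInstance(params);
            // hook point: hand the instance to Spring/Guice for injection,
            // or return an instrumented subclass instead
            return testInstance;
        } catch (Exception e) {
            throw new RuntimeException("Could not create test instance", e);
        }
    }

    // the factory is registered via a method annotated with @ObjectFactory
    // (or on the command line), e.g.:
    //   @ObjectFactory
    //   public IObjectFactory createFactory() { return new InjectingObjectFactory(); }
}
```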
17 years ago
Hi Rogerio,

Yes, we cover testing of various layers that are commonly present in a Java EE application. While testing backend objects is generally fairly straightforward, we also cover more interesting issues like testing servlets, and running TestNG inside of a container so you can get at components that were not designed to run outside of one.

We also cover testing of Swing code. This is usually done via the excellent Abbot framework, and we cover its integration with TestNG. Abbot allows you to write test code against Swing UIs by simulating button clicks and mouse interaction, and then making assertions against the resulting UI. Basically it works very similarly to htmlunit and other web testing tools, but is a fair bit nicer to work with, since Swing isn't constrained by that silly request/response paradigm that makes the web world so annoying.
17 years ago
Hi Lasse,

I'll be honest and say that every method I've tried for functional testing of webapps has, basically, sucked.

There are some guidelines that I think help a bit, but I still don't feel that we (the developer community) have a robust solution that encourages this sort of testing and makes it easy.

For example, using htmlunit/selenium to drive a webapp, and making assertions on the resulting responses/pages is one approach. However, it's quite a fragile approach since it's often tricky to find the right balance between specificity (where UI changes can break your assertions) and vague generalizations (where tests pass even though something's broken).
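A minimal example of the htmlunit flavour of this, assuming the app is deployed at a placeholder URL (API names are from the HtmlUnit versions of this era, so adjust to your version):

```java
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class WelcomePageTest {

    @Test
    public void welcomePageGreetsTheUser() throws Exception {
        WebClient webClient = new WebClient();
        // the URL is a placeholder for wherever the app happens to be deployed
        HtmlPage page = webClient.getPage("http://localhost:8080/myapp/welcome");

        // the balancing act: specific enough to catch regressions,
        // general enough not to break on every markup tweak
        assertTrue(page.asText().contains("Welcome"));
        webClient.closeAllWindows();
    }
}
```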

Another interesting approach for testing JavaScript is to do it in the browser itself: write JavaScript tests (which in turn leads you towards testable design for JavaScript, a highly under-explored topic!) and invoke those instead.
17 years ago
Hi Daniel,

In covering Guice, we show an introduction to it and how it works, then show how you can use it to wire up your test instances. We also show an integration approach for large test codebases, where you can section off groups of tests with their dependencies.

Testing an application's use of Guice is pretty trivial (thanks to Guice's lovely testable design!). More interesting is using it to instantiate your test objects themselves, so you don't have to worry about sharing manager objects across tests, for example, and can declare each test's dependencies without having to manually pull them in from a central repository.
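A rough sketch of what that looks like in practice; OrderService and ServiceModule are hypothetical stand-ins for whatever your application defines:

```java
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

public class OrderServiceTest {

    @Inject
    private OrderService orderService;   // hypothetical collaborator under test

    @BeforeClass
    public void wireDependencies() {
        // ServiceModule is a placeholder for your production (or test-specific) bindings
        Injector injector = Guice.createInjector(new ServiceModule());
        injector.injectMembers(this);
    }

    @Test
    public void newOrdersStartEmpty() {
        // use the injected collaborator instead of constructing it by hand
        // assertEquals(orderService.pendingOrders().size(), 0);
    }
}
```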
17 years ago
Jeff, I'm curious, did you always have automated tests?

In my experience this is a relatively new development, where developers are empowered to write tests. Previously (and still by far the most common setup), testing was done by QA engineers, people whose sole job is to test. In such cases, a failure is not dismissed, because the science of QA has progressed enough that you know there's a real issue (also due to factors such as real people testing the actual product, instead of treating it like a white box the way automated tests do).

On the other hand, when developers are writing the tests themselves, we definitely have different categories and different 'alarm' levels. None of us ship products that are 100% bug free; the more you know about the bugs you're shipping, the better off you are. So for this case, isn't it nice to have tests that fail to show that something is a known bug? Then when someone comes around to fixing it, there's a test ready to verify the fix. This to me suggests a 'knownbug' group, for example, and a triage group could regularly review these and see if it's feasible to start fixing them for the next release, and so on.
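Concretely, that could look something like the following (the class and group names are just examples); the group is then excluded from the default run, e.g. via an exclude entry in testng.xml, until someone picks up the fix:

```java
import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class DiscountCalculatorTest {

    // documents a known bug: negative quantities currently produce a negative discount.
    // the test is written and committed now, and the 'knownbug' group is excluded
    // from the default run until the fix is scheduled.
    @Test(groups = "knownbug")
    public void negativeQuantityGetsNoDiscount() {
        assertEquals(new DiscountCalculator().discountFor(-3), 0.0);
    }
}
```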

Traditionally, people would comment out such tests, or even not write them until the fix is ready to be implemented. Having a test that is known to break is far more valuable and informative than either of those solutions.

I don't think it's ever OK to have a random test failure and shrug it off. It's important to always know why a test is failing. What we're saying is that you don't have to drop everything to fix it; you already have additional information provided by the failing test, so you're better off than you would be without it.
17 years ago
As an aside, my personal experience (so the usual disclaimer applies) is that when tests break, a significant percentage of the time it's not indicative of any actual bug, other than a broken test. Sometimes the code changes but the test isn't updated. Obviously this isn't a good thing, but it isn't the end of the world for the developer to mark this as something that should be addressed.

As Jeff mentioned though, this approach can only work if you have regular triage and a process where these are not allowed to accumulate beyond a certain point.

Basically, nobody likes broken tests, but the reality of the work we do means that they do happen (sadly), and that they shouldn't get in the way of delivering software!
17 years ago
It depends on how far you want to go to automate it.

Here's an example of an implementation I've come across (which I think has gone a bit too far, but works very well if you have the requirements process very formalised)...

Every test has to have a tracking number (in this case, a JIRA issue) as one of its groups. The svn repo has a pre-commit script that enforces this; tests checked in without an issue number are not allowed in. The IDE used (IDEA) also had a JIRA plugin which hyperlinked the issue numbers, so it was easy to get to the original document via a single click.
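The tagging itself is trivial; the issue key below is obviously just a placeholder:

```java
import org.testng.annotations.Test;

public class RefundServiceTest {

    // "PROJ-1234" stands in for whatever key your issue tracker assigns;
    // the pre-commit hook (or a reviewer) simply checks that such a group is present
    @Test(groups = { "functional", "PROJ-1234" })
    public void refundIsRejectedAfterThirtyDays() {
        // ...
    }
}
```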

This is a bit extreme, but it's actually pretty easy to require this sort of thing without enforcing it via pre-commit scripts. Assuming you have any sort of basic code review process, people would just be nagged every time they commit a test without the tag, until it becomes part of the team culture.
17 years ago
I've mentioned this in a couple of other threads, and the short answer is no, I don't think Cactus is a good idea at all.

There are a number of other solutions that you can investigate. The ideal, of course, is to develop your code such that it can run outside of a container. That way it's much easier to test, and you eliminate the need for all these 'middle man' frameworks that are often awkward and uncomfortable to use. The barrier to writing tests is also lower, which makes them more likely to get written.

This is particularly true for servlets. Any decent web services engine has a servlet as a very thin frontend to the actual 'meat' of the implementation, to ensure that it isn't tied to that environment. Your code should be the same!

Similarly for pretty much any other container bound technology. In the book we go through a whole range of Java EE APIs and show how they can be tested outside of a container, including JPA, JMS, JTA, JDBC (DataSources), JNDI, and so on.

If you do want to test in-container, then we also show how that can be done, via embedding TestNG into your app. That way when tests are run, they are run inside of the container, and you can have whatever UI you want to drive them or to get reports and so on.
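Embedding is just a matter of driving TestNG through its API from whatever component you deploy alongside the app; a bare-bones sketch (DataSourceTest is a placeholder for your real in-container tests):

```java
import org.testng.TestListenerAdapter;
import org.testng.TestNG;

// invoked from, say, a servlet or JMX bean deployed with the application,
// so the tests run against the container's real JNDI tree, datasources, and so on
public class InContainerTestRunner {

    public String runTests() {
        TestListenerAdapter listener = new TestListenerAdapter();

        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { DataSourceTest.class });  // placeholder test class
        testng.addListener(listener);
        testng.run();

        return listener.getPassedTests().size() + " passed, "
                + listener.getFailedTests().size() + " failed";
    }
}
```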
17 years ago
Yes, it's actually very easy to do this sort of test via dependencies, we cover this quite extensively in the book (both in terms of async testing in general, and illustrating the same techniques later on with JMS).

The basic idea is to have test pairs, one for sending and one for receiving. The receiver depends on the sender, and has a timeout value so you don't end up blocking forever. In code, the receiving test's annotation looks like @Test(dependsOnMethods = "sendRequest", timeOut = 1000).

TestNG will run the first test (or configuration method, if it's not an actual test), which will send the request. Then the second test is run, which will wait for the response, read it, and verify the results. TestNG guarantees that the tests are run in that order, so you don't end up waiting for a response before you've sent the request out.
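Putting the pair together, something like this (OrderClient is a made-up async client standing in for whatever sends and receives your messages):

```java
import static org.testng.Assert.assertNotNull;

import org.testng.annotations.Test;

public class RequestReplyTest {

    private final OrderClient client = new OrderClient();  // hypothetical async client

    @Test
    public void sendRequest() {
        // fire the request and return immediately, no waiting here
        client.submitOrder("order-42");
    }

    // runs only after sendRequest has passed; fails if no reply arrives within a second
    @Test(dependsOnMethods = "sendRequest", timeOut = 1000)
    public void receiveResponse() throws InterruptedException {
        assertNotNull(client.awaitConfirmation());
    }
}
```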
17 years ago
Hah! How about... real men test, wimps System.out.println in mains!

The one thing we harp on about (in an informative, educational, and highly relevant style, needless to say) is that testing is a tool. Rejecting it is like refusing to use a debugger and instead using System.out.println to debug. Yes, you can do just fine, but why choose to be so inefficient? It'll make you produce better code without much of an overhead (if you do it right), and you'll feel a lot less sheepish when people report bugs.
17 years ago