Around here where I work, we don't share servers. Just run what you need on your desktop/dev machine and you can do what you want. If I need to "bounce" the server for some reason, I can do it without messing anyone else up. And if I need to test something in a cluster, I'll just start up a cluster on my desktop (using several ports). Or I can "check out" machines from the Lab if I need something more powerful or if I need a networked cluster.
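Just as a sketch, the one-desktop cluster idea looks something like this — the script name, server names, and port numbers here are all placeholders (the real launch command depends on your app server), so the actual start line is left as a comment:

```shell
# Hypothetical sketch: a small "cluster" on one machine by giving each
# server instance its own listen port. Script name, server names, and
# ports are assumptions, not from the post.
ADMIN_URL="http://localhost:7001"   # assumed admin server location

start_instance() {
  # One instance per (name, port) pair, all on the same box.
  name=$1; port=$2
  echo "starting $name on port $port (admin at $ADMIN_URL)"
  # ./startManagedServer.sh "$name" "$ADMIN_URL" "$port" &   # placeholder command
}

start_instance server-a 7101
start_instance server-b 7102
```

As long as each instance gets its own port (and its own logs/temp directories), they coexist fine on one machine.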
To help with this, we have some basic domain configurations (config.xml, etc) checked into source control and an ant-based build that "localizes" them (if necessary) for individual dev environments (machine names, databases, etc).
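The "localize" step is really just token substitution. Here it is sketched with sed rather than the ant build described above, and the template name, @TOKENS@, and database URL are made-up examples:

```shell
# Hypothetical sketch of "localizing" a checked-in config template for one
# developer's machine. Template name, tokens, and values are assumptions.
cat > config.xml.template <<'EOF'
<server name="@HOSTNAME@" db-url="@DBURL@"/>
EOF

HOST_VAL=$(hostname)
DBURL="jdbc:mysql://localhost/dev"   # each developer points at their own database

sed -e "s|@HOSTNAME@|$HOST_VAL|g" \
    -e "s|@DBURL@|$DBURL|g" \
    config.xml.template > config.xml

cat config.xml
```

The template (not the localized result) is what goes into source control, so nobody's machine-specific settings ever get checked in.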
But of course, at some point you have to get all that work from individual developers together in one place. So individual developers will check in their stuff to source control. Developers are also responsible for keeping their local work area up to date and built against a reasonably recent version of the whole project (check out others' stuff). Of course we have build scripts and regression (checkin) tests to help with this.
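The developer's side of that boils down to a short "update, build, test" routine before every checkin. A sketch, assuming a cvs repository and ant targets named `build` and `checkin-tests` (the names are mine, not from the post):

```shell
# Hypothetical pre-checkin routine: sync with everyone else's work,
# rebuild, and run the checkin tests before committing anything.
pre_checkin() {
  cvs update -d     || { echo "update FAILED"; return 1; }
  ant build         || { echo "build FAILED";  return 1; }
  ant checkin-tests || { echo "tests FAILED";  return 1; }
  echo "safe to check in"
}
```

If any step fails you fix your area first; nothing goes into source control from a broken workspace.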
To check the integration, we have several things. One, we have an automated service that checks out, builds, and runs a simple set of tests every few hours. We also have a daily build that runs a more extensive set of regression tests. And then every few weeks we produce a kit with an installer that represents the "latest and greatest" from dev and hand this off to the QA group, where it gets more abuse.
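The automated every-few-hours service is the same idea run from cron against a clean checkout. A sketch — the module name, ant targets, schedule, and mail address are all assumptions:

```shell
# Hypothetical periodic integration check: fresh checkout, build, smoke tests.
WORKDIR=/tmp/ci-build.$$

ci_cycle() {
  rm -rf "$WORKDIR" && mkdir -p "$WORKDIR" && cd "$WORKDIR" || return 1
  cvs checkout myproject || { echo "checkout FAILED"; return 1; }
  cd myproject           || return 1
  ant build              || { echo "build FAILED"; return 1; }
  ant smoke-tests        || { echo "tests FAILED"; return 1; }
  echo "cycle PASSED"
}

# Run from cron every few hours, e.g.:
# 0 */3 * * * /home/build/ci_cycle.sh 2>&1 | mail -s "build status" dev-team
```

The daily build is the same loop with the full regression suite swapped in for the smoke tests.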
This doesn't work for everyone, so if you have to share a single domain for some reason, you are just going to have to cooperate with each other. If you establish some sort of naming scheme and trust everyone to follow it, that should work - for example, you should never deploy "MyEjb"; instead you should deploy "Kory_Lasker_MyEjb" and use the "Kory_Lasker_ConnectionPool", etc. Resources owned by a group (rather than an individual) would be named after the group.