Alexander Gunnerhell

Greenhorn, member since Mar 12, 2003

Recent posts by Alexander Gunnerhell

I would appreciate it if someone could help me out with a couple of tricky problems related to packaging.

I want my server and client to be in completely separate jars. However, I cannot find a way to get all my metadata files read and/or written from within the jar. Reading a properties file through "this.getClass().getResourceAsStream("aPackagepath")" works fine. However, at the end of a session both the client and the server want to persist session-unique metadata to the properties files, but there is no way to get an OutputStream through the ClassLoader.

If I call "this.getClass().getResource("aPackagepath")" instead, I get a URL whose string representation looks like this: "file:/<filesystempath>/client.jar!/com/fbns/fibs/srv/netsrv/currentSettings.properties". Note the exclamation mark in the path above. No other file-related class seems to accept this path as valid, so it doesn't help with writes either.

Is it possible to write to a file inside a jar without unjarring it first?
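For what it's worth, the usual way around this is to treat the jar as read-only: load defaults from the classpath, and persist session-unique data to a plain file outside the jar. A minimal sketch, assuming a hypothetical defaults.properties resource and an external file chosen by the caller:

```java
import java.io.*;
import java.util.Properties;

// Sketch: read-only defaults ship inside the jar; writable session data
// lives in an external file. The resource name is hypothetical.
class SettingsStore {
    private static final String DEFAULTS = "/defaults.properties";

    static Properties load(File external) throws IOException {
        Properties props = new Properties();
        // Defaults packaged in the jar, if present on the classpath.
        InputStream in = SettingsStore.class.getResourceAsStream(DEFAULTS);
        if (in != null) {
            try { props.load(in); } finally { in.close(); }
        }
        // Session-unique overrides persisted outside the jar.
        if (external.exists()) {
            InputStream fin = new FileInputStream(external);
            try { props.load(fin); } finally { fin.close(); }
        }
        return props;
    }

    static void save(Properties props, File external) throws IOException {
        OutputStream out = new FileOutputStream(external);
        try { props.store(out, "session settings"); } finally { out.close(); }
    }
}
```

At startup the defaults from the jar are overlaid with whatever was persisted last session; at shutdown only the external file is written, so nothing inside the jar ever needs to change.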

After this attempt I thought, fine, I can keep all my metadata outside the respective jars. So I put the metadata files in the same directory as the respective jar. Now the metadata files cannot be found at all (using the same calls as in the previous case), probably because they're now outside the classpath the ClassLoader sees.

Is there a way of including them in the jar file's classpath somehow?
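On this second question: the standard mechanism is the Class-Path attribute in the jar's manifest, whose entries are resolved relative to the jar's own location. A sketch (the main class name here is hypothetical); the entry "." puts the jar's directory on the classpath, so files placed next to the jar become visible to the ClassLoader again:

```
Main-Class: com.fbns.fibs.client.Main
Class-Path: .
```

Files found this way can still be opened for writing as ordinary files, since they live on the filesystem rather than inside the jar.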

It should be noted that all the scenarios above work fine without jar packaging.

My final question is: would it be OK to just skip executable jars altogether, package everything in one distribution jar, and let the examiner unjar and run the code directly? I would be very grateful if someone who knows for sure would answer this question.
OK, maybe I should clarify my thoughts about this:
As Jack mentioned, the assignment is quite clear about the use of non-standard APIs. The reason I asked anyway is that something like SwingWorker really should have been in the standard APIs a long time ago (and I believe a more general version of SwingWorker will appear in the 1.5 APIs in the guise of FutureResult from util.concurrent).
Sure, I can use SwingUtilities.invokeXXX, but this will create duplicated, explicit threading code here and there in my controllers. So the question remains: could SwingWorker be an exception to the general rule of not using non-standard APIs?
OK, that's good. I was thinking that only standard libs/APIs could be used.
This would mean that SwingUtilities.invokeXXX could be used but not SwingWorker, since it's not part of the standard APIs.
But if you're absolutely certain I will give SwingWorker a go.
It seems that exception handling is a bit tricky with SwingWorker. Could you elaborate on how to handle exceptions in the thread that runs SwingWorker.construct()?
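Since SwingWorker's construct() runs on a background thread, an exception thrown there never reaches the caller by itself; one common pattern is to catch it on the worker thread, store it, and rethrow it when the result is fetched. A stripped-down sketch of that idea, without the Swing plumbing (class and method names here are mine, not SwingWorker's API):

```java
// Minimal analogue of SwingWorker's construct()/finished() pair, showing
// one way to surface exceptions thrown on the worker thread.
abstract class Worker {
    private Object value;        // result of construct()
    private Throwable error;     // exception thrown by construct(), if any
    private Thread thread;

    // Runs on the background thread; may throw.
    protected abstract Object construct() throws Exception;

    public void start() {
        thread = new Thread(new Runnable() {
            public void run() {
                try {
                    value = construct();
                } catch (Throwable t) {
                    error = t;   // remember it instead of losing it
                }
            }
        });
        thread.start();
    }

    // Blocks until construct() is done; rethrows any captured failure.
    public Object get() throws Exception {
        thread.join();
        if (error != null) {
            throw new Exception("construct() failed", error);
        }
        return value;
    }
}
```

In an actual SwingWorker the same trick works by catching Throwable inside construct(), keeping it in a field, and checking that field in finished(), which runs on the event-dispatch thread.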
Has anyone passed the exam using SwingWorker?
Mark,
yes, you're right, it's hard to justify this approach given the requirements. I guess I just hoped that someone would state the opposite so I had a reason to try this idea out.
The custom-types design went in the trash bin yesterday.
I'm thinking of implementing custom business types for attributes belonging to a flight entity (flightnumber, price etc).
Admittedly, at present, constraints on these types are not needed/possible, since the only flight-related values entered in the GUI are origin and destination, and neither of these has any constraints specified in the assignment description. In my opinion the constraints cannot be deduced by looking at a database dump.
However, my rationale is to support future enhancements through a small type framework.
The idea behind this framework is that both the business-logic tier in the client and the GUI model will share these types, so that future constraints on them can be implemented in one place. Furthermore, it "feels" wrong to expose, for example, DataInfo[] as the return type of a business method.
My problem now is that several custom types would basically be a kind of String (or Float or Integer) with support for future additional constraints. However, String is final, so I cannot reuse it by inheritance.
Another solution is to create an abstract StringType which implements all the interfaces that String implements. The StringType then delegates all the necessary methods to an internal String attribute. Furthermore, StringType would have an abstract method that validates the argument to its constructor before an instance of StringType is created.
Now, to create the custom business type FlightNumber one would inherit from StringType and implement the abstract validate method in FlightNumber.
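A minimal sketch of that idea (the FlightNumber constraint below is invented for illustration; the assignment specifies none):

```java
// Sketch of the delegating approach: StringType wraps an immutable String
// and forces each subclass to validate it before an instance can exist.
abstract class StringType implements Comparable, java.io.Serializable {
    private final String value;

    protected StringType(String value) {
        // Note: this calls an overridable method from the constructor, so
        // subclasses must not rely on their own fields inside validate().
        if (!validate(value)) {
            throw new IllegalArgumentException(
                    getClass().getName() + ": invalid value \"" + value + "\"");
        }
        this.value = value;
    }

    // Each concrete business type supplies its own constraint.
    protected abstract boolean validate(String value);

    public String toString() { return value; }
    public int compareTo(Object o) { return value.compareTo(((StringType) o).value); }
    public boolean equals(Object o) {
        return o instanceof StringType && o.getClass() == getClass()
                && value.equals(((StringType) o).value);
    }
    public int hashCode() { return value.hashCode(); }
}

// Hypothetical constraint: two-letter airline code plus one to four digits.
class FlightNumber extends StringType {
    public FlightNumber(String value) { super(value); }
    protected boolean validate(String value) {
        return value != null && value.matches("[A-Z]{2}[0-9]{1,4}");
    }
}
```

With this in place an invalid instance can never exist: the constructor rejects any value that fails the subclass's constraint.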
Q1) Do you consider this a design that supports the requirements in the assignment?
Q2) Is this an ok way of supporting custom business types in a system in general? If not could you point me to an alternative approach?
Q3) Why are all the wrapper types final in Java? It forces me into a much more code-intensive approach that also seems a bit awkward.
OK, today I have thought all this through and decided to skip the scalability mechanisms.
First remember that the reasons for having the FIFO-queue and the pool with db-threads were to:
1) limit the max number of threads in the server (this reason became moot when I decided to use Sun RMI, which spawns at least one thread per client anyway)
2) maximize availability, so that a client could issue a typical sequence of operations without waiting for each operation to finish (for example lock->modify->unlock)
The main reason for scrapping scalability showed up when I discovered that there's a clear possibility for the sequence lock->db-operation->unlock to be executed out-of-order.
According to my current design, each of the above operations would be a separate request to the server and would therefore be executed by separate worker threads. Now, the thread executing lock could be switched off the CPU after it has dequeued the lock request from the queue but immediately before it has actually acquired the lock in the db. Just after that dequeue, another worker thread dequeues the db operation and tries to execute it without the associated lock, which would result in an error.
This can be fixed (this was also my plan from the beginning) by "packaging" a sequence of logically related operations in one request to guarantee the correct ordering and execution in one thread.
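A sketch of such a packaged request (the lock-manager stub here is simplified to a synchronized set of locked record numbers):

```java
import java.util.HashSet;

// Simplified stand-in for the lock manager: a synchronized set of locked
// record numbers (the real design has queued locks per record).
class LockManagerStub {
    private final HashSet locked = new HashSet();

    synchronized void lock(int recordNo) throws InterruptedException {
        Integer key = Integer.valueOf(recordNo);
        while (locked.contains(key)) wait();
        locked.add(key);
    }

    synchronized void unlock(int recordNo) {
        locked.remove(Integer.valueOf(recordNo));
        notifyAll();
    }
}

// The whole lock -> operation -> unlock sequence travels as one request and
// runs on one worker thread, so its internal order can never be interleaved.
interface DbOperation {
    Object execute() throws Exception;
}

class PackagedRequest {
    private final int recordNo;
    private final DbOperation operation;
    private final LockManagerStub locks;

    PackagedRequest(int recordNo, DbOperation operation, LockManagerStub locks) {
        this.recordNo = recordNo;
        this.operation = operation;
        this.locks = locks;
    }

    Object run() throws Exception {
        locks.lock(recordNo);
        try {
            return operation.execute();   // modify happens under the lock
        } finally {
            locks.unlock(recordNo);       // unlock always follows the operation
        }
    }
}
```

Because lock, operation, and unlock all execute inside one run() call on one worker thread, no other thread can interleave with the intermediate states.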
However, the unfortunate side effect of this is that reason 2) above becomes moot as well, which makes it impossible to defend the FIFO queue and the pool of db threads.
So I will have to redesign according to the typical "RMI-threads only" approach.
Mmm, I have looked through a lot of profilers and decided to start with JInsight, since it's free.
Thanks for the tip about which collection implementation to use; however, I was referring to a hashtable in the generic sense, not a particular implementation (perhaps I used capital letters when describing it earlier, which is confusing of course).
Yes, this hashtable will potentially be huge, just like an index/lock structure would be in a real database. Are you questioning whether a hashtable is a suitable collection, whether any of the hashtable implementations would cope, or perhaps the memory requirements?
I'm thinking that the tests would consist of three main test cases:
1) benchmark ordered reads after a warm-up period; this would ensure that most if not all data is cached, so I can measure the effectiveness of the different parts of the application without being affected by the disks
2) random reads (80%) and writes (20%), to simulate a more realistic load; here it's important to avoid sustained data streams, since those would just put the bottleneck on the disk platters
3) tests 1) or 2), but measuring the primary-memory requirements for a huge database. This is important since I have absolutely no idea how many bytes each element in the hashtable will use. There's a risk that memory consumption turns out to be unrealistically high.
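A minimal harness for these measurements might look like this (warm-up and iteration counts are placeholders):

```java
// Warm up, time a run of operations, and snapshot heap use. The iteration
// counts and the operation itself are placeholders for the real test cases.
class Benchmark {
    interface Op { void run(); }

    static long timeMillis(Op op, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) op.run();   // fill caches, let JIT kick in
        long start = System.currentTimeMillis();
        for (int i = 0; i < measured; i++) op.run();
        return System.currentTimeMillis() - start;
    }

    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();  // best effort only; gc() is just a hint to the VM
        return rt.totalMemory() - rt.freeMemory();
    }
}
```

currentTimeMillis and a gc() hint are crude instruments, but for spotting order-of-magnitude problems, such as a hashtable whose per-element overhead blows up the heap, they are usually enough.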
Yes, using the Data class as a semaphore makes all my other efforts moot.
Therefore my initial design of the database put all the locking in the LockManager, which is basically a hashtable with a key corresponding to the key in the file and a reference to an object containing some sort of locking mechanism. A db-operation would then lock on a single element in that hashtable.
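A coarse sketch of such a LockManager, using one monitor for the whole table for simplicity (per-element lock objects, deadlock detection, and time-outs are left out):

```java
import java.util.Hashtable;

// Sketch of the LockManager: a Hashtable keyed by record number, mapping to
// the current lock owner. One monitor guards the whole table for simplicity.
class LockManager {
    private final Hashtable locks = new Hashtable();  // recordNo -> owner

    public synchronized void lock(int recordNo, Object owner) throws InterruptedException {
        Integer key = Integer.valueOf(recordNo);
        while (locks.containsKey(key)) {
            wait();                        // record held by someone else
        }
        locks.put(key, owner);
    }

    public synchronized void unlock(int recordNo, Object owner) {
        Integer key = Integer.valueOf(recordNo);
        if (owner.equals(locks.get(key))) {
            locks.remove(key);
            notifyAll();                   // let blocked threads re-check their key
        }
    }
}
```

lock() blocks only while its specific record is held; threads waiting on different records share the monitor but re-check their own key after every notifyAll().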
You can't compare my thread pool with WebLogic's; mine is static at runtime, i.e. you have to decide before execution what the optimal number of threads is, according to the capabilities of the underlying platform. I'm pretty sure WebLogic's thread pooling is much more advanced.
A discussion about what features are available in a typical J2EE server is really beyond the scope of this thread, so let's just agree to disagree on this issue.
It's nice of you to offer to pay my re-submission, but I'm not paying myself anyhow. Besides I would feel like being under pressure if you paid. So thanks, but no thanks.
I have talked to my DBA colleague now, and he thinks it will be tricky (as expected) to make the db scale as well as the netserver probably will.
What I can do, however, is to extend my prototype with my redesigned database-package and do some benchmarks.
What I'm prepared to do is to include the above LockManager and queued read/write locks. If that's not enough to make the scaling balanced, I'll give up on my current approach.
I believe the main problem will be that searches will require traversing the hashtable and setting read locks for each and every element included in the search. This could be tackled by implementing escalating locks, so that I could set a read lock for the whole table in one single operation, but that is simply too much.
Do you know a good tool for finding "hotspots" in an application? Preferably I'd like a tool that I can use to instrument my code and then have it report how much time is spent in different parts of the code.
You're correct about the RMI calls, this is basically how I described it in my first post.
The reason for having multiple threads for executing client requests in the "database" is that a thread in the db will typically be blocked for most of its scheduled time, since it's doing I/O. Now, instead of just waiting for that single thread to stop blocking, why not let other threads get a chance to run? Of course this won't be possible in all cases.
The reason for managing those multiple threads in a pool is to be able to limit the number of threads executing client requests, to avoid the overhead of too many threads.
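The FIFO queue plus fixed pool described above can be sketched roughly like this, in pre-1.5 style with wait/notify (from 1.5 on, java.util.concurrent would do this better):

```java
import java.util.LinkedList;

// A fixed number of worker threads drain a FIFO queue of Runnables, so the
// db-side thread count stays bounded no matter how many clients enqueue.
class WorkerPool {
    private final LinkedList queue = new LinkedList();
    private boolean shutdown = false;

    public WorkerPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        Runnable job;
                        synchronized (queue) {
                            while (queue.isEmpty() && !shutdown) {
                                try { queue.wait(); } catch (InterruptedException e) { return; }
                            }
                            if (queue.isEmpty()) return;   // shut down and drained
                            job = (Runnable) queue.removeFirst();
                        }
                        job.run();   // execute outside the queue lock
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    public void submit(Runnable job) {
        synchronized (queue) {
            queue.addLast(job);
            queue.notify();   // wake one idle worker
        }
    }

    public void shutdown() {
        synchronized (queue) {
            shutdown = true;
            queue.notifyAll();
        }
    }
}
```

Clients (or RMI connection threads) call submit() and return immediately; only nThreads threads ever touch the database.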
You're correct about the callbacks. In the prototype the client observes a remotely observable server-object. When that observable server object changes state (i.e. a clientrequest has finished) it executes a method on the remotely exported observer object in the client.
I know this is more than what is required, I think I have already explained why I'm doing this and how I try to fit it into the assignment without violating the "rules".
I'd also like to point out again that I have already proven my "architecture" (from a functional point of view) in the prototype. So it's not an impossible task that lies ahead. But it might still be very hard to make the database as scalable as the rest of the server.
The prototype only had a stub simulating the database part. But the FIFO-queue, the threadpool and the callback mechanism were working fine.
Comparing my "server" with a J2EE application server is very exaggerated, but I guess you were joking about that part as well as the part about paying my re-submission.
Tell me when you're ready to pay
I'm still waiting for my colleague's thoughts about the database, but he's on vacation right now.
Even if he thinks too many advanced mechanisms are necessary to scale the database and I have to change approach, I'm still very glad I've done all this thinking/design as well as the prototype. I have really learned a lot and have had great fun so far.
Clarification:
No, this is not what I mean. RMI will handle creation and enqueueing of client requests (i.e. no pooling whatsoever) by means of its internal threading. Dequeueing and execution of the client requests in the "database" will be managed by my own thread pool. These threads will also handle the callbacks to the clients.
I'm still waiting for insights and comments on the db-scalability problem from a colleague of mine (he's a DBA) before deciding how to proceed.
I realize not many people are interested in the issues above, but I'll keep posting just in case someone else comes here later on looking for this kind of information.
I've been busy with other stuff for a while but today I had some time to continue my research:
I managed to dig up some additional information about SUN's RMI threading:
http://forum.java.sun.com/thread.jsp?thread=323360&forum=58&message=1315076
According to the thread, RMI creates one accept thread for each server and listening port. In addition, RMI creates one thread per connection (ConnectionThreads). The ConnectionThreads are supposed to time out a while after the client closes its connection.
Some conclusions:
- since there's only one accept thread per remotely exported object, it becomes a precious resource; therefore callbacks are a good thing (this is assuming it works the same as with sockets, where I think the accept thread is normally used for the response back to the client as well)
- there will be a potentially indefinite number of ConnectionThreads, i.e. there's no pool, just a timer for each thread's life cycle. The big question here is which is the lesser evil:
1) having a threadpool for creating requests on top of all this (with added overhead) or
2) minimizing overhead and disregarding the potential situation where RMI creates too many threads
I think 2) is closer to the business requirements, which means RMI will handle creation and enqueueing of client-requests and my own threadpool will execute db-operations.
Aside from RMI, I have also researched thread reuse in the JVM in general, and I haven't found a single indication that current Sun JVMs have this feature. So the db thread pool stays.
Now, what's left is to think about the potential risk of the db becoming a bottleneck, which might be a nightmare to resolve, if it's even possible.
After that I'm ready to decide how to proceed.
BJ Grau,
I agree, in reality this would probably mean both scalability and extensibility. I have deliberately chosen to interpret it as scalability only, to be able to fit my scalability exercises into the exam. I don't think that this is a problem (as long as you clearly state your rationale).
I think the requirements are so vague that you can come up with many different interpretations, so it's not strange that most people can get away with disregarding scalability altogether.
Your second thought about db.db is more worrying; they might fail me just because the design is unbalanced, i.e. I have wasted time and money on scaling the server, but the database becomes a bottleneck anyhow.
I'm currently designing the db part and was planning to use a LockManager based on a hashtable with queued read-write locks for each row.
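A minimal sketch of such a per-row read/write lock, with waiting writers given preference so a steady stream of readers cannot starve them (queueing order beyond that is not enforced):

```java
// Many concurrent readers or one exclusive writer per row; waiting writers
// are preferred so a steady stream of readers cannot starve them.
class ReadWriteLock {
    private int readers = 0;
    private boolean writing = false;
    private int waitingWriters = 0;

    public synchronized void lockRead() throws InterruptedException {
        while (writing || waitingWriters > 0) wait();
        readers++;
    }

    public synchronized void unlockRead() {
        if (--readers == 0) notifyAll();
    }

    public synchronized void lockWrite() throws InterruptedException {
        waitingWriters++;
        try {
            while (writing || readers > 0) wait();
        } finally {
            waitingWriters--;
            notifyAll();   // readers may proceed if this writer gave up
        }
        writing = true;
    }

    public synchronized void unlockWrite() {
        writing = false;
        notifyAll();
    }
}
```

One instance of this would sit behind each row entry in the LockManager's hashtable; escalation to table-level locks is a separate problem.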
However, this is still far from what could be done in terms of escalating locks to tables or the whole database, to avoid row-lock overhead when searching, for example. Not to mention a richer set of lock types, for example intent locks. Also, there are loads of theories about queued locks and about which kinds of db operations to prioritize dynamically at runtime.
Clearly, the latter is way beyond my knowledge and the time I'm willing to put into this exam.
So would this make my design unbalanced? I don't know, since I'm not experienced enough to get a gut feeling for this just by looking at the design. The only way to answer the question is to run a white-box test and identify the bottlenecks.
What's at stake for me is not only the course fee, it's the time invested and the fact that I probably won't have the energy and interest to re-do the exam the way it's supposed to be done if I fail with my current design.
I haven't decided yet...
Eugene,
you're absolutely right: my questions and thoughts are irrelevant for the assignment in itself.
The problem for me, however is that I also wanted to learn to implement some scalability "patterns" in java along the way. This in turn means that I MUST investigate and understand parts of the SUN VM and SUN RMI threading capabilities. Otherwise I might produce a solution that is less scalable than pure RMI threading. If that is the case I will for sure fail the exam. This is clearly stated in the instructions.
I have noticed the choking you talk about; however, a lot of other stuff was going on in the server apart from receiving remote method invocations, so I cannot really say that it was the same cause.
I'll think about your comments for a day or two and then decide. In the meantime there's at least a small chance that someone posts a supporting reply. But it doesn't look like my prototype will survive.
BJ Grau,
I realize that there's a common truth on this forum on what the assignment is really about in practice.
But I still think that if I put up a clear reasoning, based on the description in the assignment, why these scalability enhancements are implemented in my design documents, SUN must accept this.
Also my server-prototype is tested and is working fine and is ready to be refined for use in the assignment, so I'm not really worried about not getting it to work from a pure functional point of view.
What I am worried about, though, is that my thread pool for executing db queries might be moot if the Sun VM is actually re-using threads internally. If it does, this might mean that my thread pool (and the FIFO queue) is just overhead and nothing more.
Furthermore, there's still the question of how Sun's RMI implementation behaves, although after reading this forum I'm fairly sure that it will multithread if the underlying platform has this capability. This would mean I could use RMI to concurrently create and enqueue new client requests.
But I guess if not a single person on this forum believes that this is an approach that SUN will approve, I'll give it up and just do it as it is meant to be done.
Anyone?