Adrian Engler

since Sep 18, 2006

Recent posts by Adrian Engler

The financial consequences of this decision for Oracle are questionable. It will lead to somewhat more people taking courses (if a course is not required for a certification, I see no reason to take such expensive instructor-led training), but on the other hand, the number of people getting the OCMJD and OCMEA certifications will be much lower. I think the current prices are acceptable for many people, but it would not be acceptable to pay about $2000 more and, what would be worse for me, either take a few days of holiday or convince my employer to let me attend these courses during working time. Since OCMJD and OCMEA are not really cheap now (which is probably justified, since these exams require actual manual work by the people who assess the submissions), it could well be that Oracle ends up earning less.

In theory, the value of a certification might go up if fewer people hold it. But it also depends on whether the right people hold it. Even now, I have the impression that very few good Java developers and architects (at least here in Switzerland) have Java certifications. If the exam becomes much more expensive and people have to take holidays for useless courses, the certifications will become even more exotic than they already are (as far as I know, Microsoft certifications are much more common among .NET developers) and thereby lose value. If Oracle were serious about raising the value, they could, for instance, have raised the passing score, or introduced something that really deserves the name "hands-on" - not a course in which people just sit for several days, but perhaps half a day in which candidates have to go to a center and develop something right there. The selection of qualifying courses is rather ridiculous (e.g. a course on programming in Java qualifies someone to be certified as an Enterprise Architect) - obviously, people will not take the most suitable course, but simply the cheapest one that is allowed to fulfill the "hands-on" requirement. That is simply absurd.

Whether this decision, which will probably make the OCMJD and OCMEA certifications very rare, is financially advantageous for Oracle is questionable. I see two explanations: a) as was mentioned above in this thread, it is a lot of work to mark submissions, and rather than just increasing the price a bit, they prefer to get rid of these exams; b) the reason really is what was stated - they just want to bring these certifications in line with the Oracle certifications, which have had such a requirement for a long time. In that case, it is a bureaucratic decision, and they probably have not really thought through the consequences.

I personally think option b) is the case. I cannot judge whether requiring courses for the old Oracle certifications is just as pointless or whether it makes more sense for DBA certifications (I once took exams for Oracle SQL and PL/SQL, which I can show to anyone who wants proof that I have some knowledge of PL/SQL, but I do not have a certification for it because that would have required attending a course as well). It seems obvious that such courses do not make sense for OCMJD and OCMEA. Apparently they are now simply applying principles from the Oracle exams to the Java developer and architect exams for organizational and bureaucratic reasons. If that is the case, organized resistance might lead them to rethink their decision. Does anyone have an idea how to organize a protest?

I have just taken the part I exam for OCMEA and have now started with part II. In all likelihood, I will manage to finish by July (I wanted to anyway; now I have one more reason to finish my project in July), but I am convinced that the value of OCMJD and OCMEA will decrease if in the future the only people who hold them are those whose employers sent them to courses (or who take holidays specifically for this and are willing to spend a lot of money on a useless course) - that is not a good criterion for restricting access to a certification.
On Monday, I passed the SCBCD test with 100% correct answers.

I had been reading books about the subject for a long time, but rather irregularly. Last year, I used web-based training from Sun (it wasn't bad, but I'm not sure whether it was really worth the price). I read several books, including the one by Bill Burke and Richard Monson-Haefel, the German book by Oliver Ihns, Dierk Harbeck, Stefan M. Heldt, and Holger Koschek, the book EJB 3 in Action by Debu Panda, Reza Rahman, and Derek Lane, and the book about JPA by Mike Keith and Merrick Schincariol. I had some practical experience with JPA, but not with Enterprise JavaBeans. I used practice tests from Whizlabs (there are some mistakes, but on the whole, I found them rather useful). One of the most important things was reading the specs.

I am glad about the excellent result; I probably would have passed with much less preparation.
14 years ago
Stubs and mocks, indeed, have a lot in common. They are not the "real" objects, and they can record information about what has been done with them.

According to Martin Fowler (see link above), the basic difference between stubs and mocks is that stubs use state verification and mocks use behavior verification.

In practice, if you write a class that implements the same interface as the "real" object and provides some mechanism for recording state (e.g. how many times a method has been called), that is a stub. With mocks, in contrast, *every* method call is checked; there is no mechanism for recording specific events - instead, expectations about *everything* (possibly with some exceptions) are set up and compared with what has actually been called. Usually, it does not make sense to write mocks from the ground up; rather, you should use a framework like EasyMock. If, however, you really did implement mocks from scratch, this would, in contrast to stubs, imply heavy use of reflection.
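To make the distinction concrete, here is a minimal hand-written stub in Java. The MailService interface and all names are my own invented example, not from the discussion above; the point is only that the stub records state which the test inspects afterwards - state verification, as opposed to a mock's behavior verification.

```java
// Hypothetical interface and a hand-written stub for it.
interface MailService {
    void send(String to, String body);
}

// The stub records state (a call count) but enforces no expectations.
class MailServiceStub implements MailService {
    private int sendCount = 0;

    @Override
    public void send(String to, String body) {
        sendCount++;  // just record what happened
    }

    int getSendCount() { return sendCount; }
}

public class StubDemo {
    public static void main(String[] args) {
        MailServiceStub stub = new MailServiceStub();
        // the code under test would receive the stub instead of a real MailService
        stub.send("alice@example.com", "hello");
        stub.send("bob@example.com", "hi");
        // state verification: inspect recorded state after the fact
        System.out.println(stub.getSendCount()); // prints 2
    }
}
```

A mock framework like EasyMock would instead have you declare the expected calls up front and fail the test on any unexpected interaction.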
15 years ago
I think that is the main point: to be able to use the server/db functionality either over the network or locally, and to do so with as little code duplication as possible (so you should not implement the same functionality twice, with and without RMI, but be able to bypass RMI).
I had a similar number of classes (even after reducing the code significantly). I found it really strange to work without a deadline - if you have unlimited time and want to do it well, there is a danger that the project keeps growing.
Yes, different approaches have their advantages and disadvantages, but a combination of two approaches can eliminate the disadvantages. In my project, I used a combination of these two strategies:

2) Using a progressive long number
3) Using a random number

I simply combined a progressive long number (a counter incremented each time) with a random number. The random part and the counter part occupied different bit ranges of the long value. That way:

a) the (already very small) likelihood that the same cookie value is generated twice by accident drops to zero, because a progressive number is used in addition to the random part (with a random long value, the likelihood of generating the same value twice is actually so low that we might not have to care, but it is still theoretically possible)

b) because of the random part, a malicious client cannot guess cookie values

Such a combination of two strategies that eliminates their disadvantages is probably not strictly required, but I found it made sense.
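A sketch of this combined strategy in Java might look as follows. The class name and the choice of bit widths (32 bits of counter in the high half, 32 random bits in the low half) are my own assumptions for illustration, not the original implementation:

```java
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicLong;

// Combined cookie strategy: high bits hold a progressive counter
// (guarantees uniqueness), low bits hold a random part (unguessable).
public class CookieGenerator {
    private static final AtomicLong counter = new AtomicLong();
    private static final SecureRandom random = new SecureRandom();

    public static long nextCookie() {
        long progressive = counter.incrementAndGet();      // unique part
        long randomPart = random.nextInt() & 0xFFFFFFFFL;  // 32 random bits
        return (progressive << 32) | randomPart;           // counter in high bits
    }

    public static void main(String[] args) {
        long a = nextCookie();
        long b = nextCookie();
        // the counter part guarantees that the two values differ
        System.out.println(a != b);                        // prints true
        System.out.println((a >>> 32) + 1 == (b >>> 32));  // prints true
    }
}
```

With this split, two cookies can never collide as long as the counter stays within its 32-bit range, and the random half still prevents a client from predicting the next value.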
It took me much more time last year; I probably did rather too much. It is always difficult to say what is better. Generally, it is recommended not to do anything that is not required, but it is sometimes hard to say whether an additional improvement or enhancement is still somehow along the lines of the requirements even though it may not be explicitly required... Then, of course, it depends on how much time you can devote to the assignment; if you have little time besides work and other things, it can take quite a while.

I'm not sure about the value; for me, it might have had some influence when I was looking for a new job last year, but only together with work experience and other certificates. What was definitely an advantage of the SCJD exam is that I had some code that I had written entirely on my own and that looks quite good (documentation, design) - two potential employers (one of them is where I am working now) wanted to see a sample of something I had programmed, and the SCJD assignment was, of course, ideal for that purpose (the SCJD exam also let me show some provable Swing experience - I have known Swing for a long time, but had never used it at work).
I suppose it would be hard to find a chapter in Head First EJB that is still fully valid for EJB 3.0 and the new SCBCD exam - the changes really affect most subjects. As far as I know, message-driven beans have changed less than other bean types (but even there, some things have changed; for instance, the use of annotations affects all subjects, even message-driven beans).

So, if you only want to read things that are still relevant in EJB 3.0, it probably does not make sense to use Head First EJB at all. However, even though there are many changes in almost all areas, EJB 3.0 is still based on earlier versions of EJB, and it may make sense to learn both the legacy versions and EJB 3.0 (there are still many more applications out there using earlier versions of EJB). This is a certain detour, and you must make sure you know the differences between the versions well (there are dedicated books about the differences, and books about EJB 3.0 often describe the differences from earlier versions). The detour probably means you need more time overall to prepare, but on the other hand, it may be useful to know not only the most recent version of EJB but also its history, and for that Head First EJB is a very useful book. I don't know whether such a detour makes sense for everyone, but I already have books about EJB 2 and 3 and about the differences, and I am in no rush to take the SCBCD exam, so I have decided to take this route: first practice the old EJB (mainly with Head First EJB), then work through a book for people moving from EJB 2 to EJB 3, and only then practice EJB 3.
I did not submit the file, either. First, it should be created in the user's working directory, and we cannot know what the user's working directory will be until the software actually runs. Second, the file contains settings, such as the database location, for which I don't know any sensible default.

Originally posted by John Mattman:
Mike, thank you for your reply. But I still could not interpret the wording "networked form".

"this mode must use the database and GUI from the networked form"

I would interpret it the following way: In local mode, the same code for the database and for the GUI must be used as in networking mode.

In other words, the places in the code where a distinction between local and networking mode is made should be reduced to the connection between GUI and database; apart from that communication layer, the GUI and the database part should not care (or even know) whether they are operating in local or networking mode.
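One way to sketch this idea in Java: the GUI depends only on a common interface, and the local-vs-networked decision is made once, at startup. The interface name DBClient, its method, and the two implementations below are my own invented illustration of the principle, not code from any assignment:

```java
// The GUI talks only to DBClient; whether calls stay local or would
// cross the network (e.g. via RMI) is decided once, at startup.
interface DBClient {
    String[] read(int recNo);
}

class LocalDBClient implements DBClient {
    public String[] read(int recNo) {
        return new String[] { "record-" + recNo }; // direct local access
    }
}

// In a real application this would wrap an RMI remote reference; here it
// merely delegates, to show that the calling code is identical either way.
class NetworkDBClient implements DBClient {
    private final DBClient remote;
    NetworkDBClient(DBClient remote) { this.remote = remote; }
    public String[] read(int recNo) { return remote.read(recNo); }
}

public class ModeDemo {
    public static void main(String[] args) {
        boolean networked = args.length > 0 && args[0].equals("server");
        DBClient client = networked
                ? new NetworkDBClient(new LocalDBClient())
                : new LocalDBClient();
        // everything below this point is mode-agnostic
        System.out.println(client.read(7)[0]); // prints record-7
    }
}
```

The single conditional at startup is then the only place where the two modes differ.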

Originally posted by John Mattman:
The interface that sun provided me has a find method that returns an array of record numbers.

public int [] find(String [] criteria)

In my version of the assignment, I had a find method with this signature, too. I came to the conclusion that there are several problems with this signature:

1. To find out whether a record matches the search criteria, the record contents have to be read, and the client needs the record contents for displaying anyway. So it is a waste of resources to first read the record contents to determine the record numbers of the matching records and then read the records again based on those record numbers.

2. Unless the whole database is locked while a search is made (which would certainly not be a good idea), it is possible that the database contents are modified by other users between the call to find(...) and the - potentially many - read(...) calls (even if everything is done on the server). The consequences would be that a) records whose contents no longer match the search criteria may be displayed in the client's table, and b) one might have to deal with RecordNotFoundExceptions for records that existed when find(...) was called but were deleted before read(...) was called for their record number. This can only be prevented if determining whether a record matches the criteria and reading its contents happen in the same operation.

3. The find(...) method in the interface provided by Sun does not have additional parameters for search options (depending on the spec, such options may not be necessary, but in my assignment there was a mismatch between the client and server requirements in the description of the search - whether only the beginning of the string or the whole field should match - so I made both search modes selectable by the client to be on the safe side).

For these reasons, I implemented a find(...) method with a different signature - instead of just the record number, it returns an object containing both the record number and the record contents (and it has additional parameters for the search mode). This method was used by the client. Since the find(...) method defined in the interface provided by Sun still had to be implemented, I simply delegated the call to my other find(...) method, retrieved the record numbers from the record objects that it returned and returned them as an array.
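The delegation described above can be sketched roughly as follows. The Record class, the toy in-memory "database", and the prefix-matching logic are my own stand-ins to keep the example self-contained; only the int[] signature comes from the interface discussed above:

```java
import java.util.ArrayList;
import java.util.List;

// A search result carrying both the record number and the record contents,
// so matching and reading happen in one operation.
class Record {
    final int recNo;
    final String[] data;
    Record(int recNo, String[] data) { this.recNo = recNo; this.data = data; }
}

public class FindDemo {
    // toy in-memory "database"
    private static final String[][] DB = { {"Fred"}, {"Frank"}, {"Mary"} };

    // extended find used by the client: returns numbers plus contents
    static List<Record> findRecords(String prefix) {
        List<Record> result = new ArrayList<>();
        for (int i = 0; i < DB.length; i++) {
            if (DB[i][0].startsWith(prefix)) {
                result.add(new Record(i, DB[i]));
            }
        }
        return result;
    }

    // the required int[] signature, implemented by delegation
    public static int[] find(String[] criteria) {
        List<Record> records = findRecords(criteria[0]);
        int[] recNos = new int[records.size()];
        for (int i = 0; i < recNos.length; i++) {
            recNos[i] = records.get(i).recNo;
        }
        return recNos;
    }

    public static void main(String[] args) {
        int[] matches = find(new String[] { "Fr" });
        System.out.println(matches.length);                 // prints 2
        System.out.println(matches[0] + "," + matches[1]);  // prints 0,1
    }
}
```

The required method thus stays a thin wrapper, while the client-facing method avoids the double read and the window between find and read.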


Originally posted by Dmitri Christo:
...the interesting thing here is that there are conversations like this
where they suggest the opposite...

Isn't the user 'tricked' in some way if they start doing a bunch of work on the record only to find they can't complete since that record is locked by someone else?

[ March 16, 2008: Message edited by: Dmitri Christo ]

Yes, that may be the case, but the alternative - someone keeping a record locked while going to lunch or going home - is probably much worse. The effort of booking a record is not that big, so users would probably put up with occasionally working on a booking only to find out that the record has already been booked by someone else.

I solved this by making two checks: when a user selects a record for booking, fresh data for that record are loaded from the server, and if booking is no longer possible, the user is notified right away. If booking is still possible, the user can go on editing (the record is not locked), and when the data are saved, a second check happens (first a lock is obtained, then the data are read to make sure they have not been changed in a way that makes booking impossible; if the record data are OK, the booking is saved and the lock is released). That way, the record never stays locked for long, and users only waste time on an update that will fail if they really edit the record at the same time - not if they merely have stale data from an earlier search but are not editing the record concurrently.

However, making two checks is probably unnecessary - the specification does not require it, and it may make the implementation unnecessarily complex.
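As a rough sketch of the two-check flow, under my own simplifying assumptions (a single record held in an array, field 1 standing in for the customer id, and a plain monitor standing in for the record-locking system):

```java
// Two-check booking: check availability before editing, then
// lock / re-check / update / unlock only around the save itself.
public class BookingDemo {
    static String[] db = { "room", "" };          // field 1 = customer id
    static final Object recordLock = new Object(); // stand-in for the lock system

    static boolean isBookable() { return db[1].isEmpty(); }

    static boolean book(String customerId) {
        if (!isBookable()) return false;          // first check, before editing
        // ... the user edits booking details here; the record is NOT locked ...
        synchronized (recordLock) {               // lock held only for the save
            if (!isBookable()) return false;      // second check, on fresh data
            db[1] = customerId;                   // save the booking
            return true;
        }                                         // lock released
    }

    public static void main(String[] args) {
        System.out.println(book("1234"));  // prints true: record was free
        System.out.println(book("5678"));  // prints false: already booked
    }
}
```

The lock is held for only a few statements, while the user's editing time happens entirely outside it.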

Originally posted by David Winters Junior:

I'm stuck on a particular issue here regarding how I can propagate exceptions back to the client. This issue is caused by the limitations of the interface provided by Sun.

Here is one of the methods in the interface provided by Sun:
public void update(int recNo, String[] data) throws RecordNotFoundException


I would say there are basically three options:
- catching checked exceptions and throwing unchecked exceptions (extending RuntimeException) in their place
- exposing an interface other than the one provided by Sun to the client (I don't know whether this differs in other versions of the specification, but in my version it was only required that the interface provided by Sun be implemented; it did not have to be the one exposed to the client) - in that case, any exceptions, including checked exceptions, can of course be declared. However, since the interface provided by Sun still has to be implemented, this would hardly lead to a clean and simple solution.
- using checked exceptions that extend the checked exceptions declared in the interface provided by Sun (such as RecordNotFoundException); in most cases, this is probably not a good solution (unless the exception really means that a record has not been found).

The best solution in that case is probably to wrap checked exceptions in exceptions that extend RuntimeException.
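The wrapping approach can be sketched like this. The exception name DataAccessException and the simulated IOException are my own illustration; the fixed update signature is the constraint from the provided interface:

```java
// An unchecked wrapper that carries the original checked cause.
class DataAccessException extends RuntimeException {
    DataAccessException(Throwable cause) { super(cause); }
}

public class WrapDemo {
    // The signature is fixed by the provided interface, so checked failures
    // other than RecordNotFoundException must be wrapped in an unchecked type.
    static void update(int recNo, String[] data) {
        try {
            throw new java.io.IOException("disk failure"); // simulated failure
        } catch (java.io.IOException e) {
            throw new DataAccessException(e); // unchecked: no declaration needed
        }
    }

    public static void main(String[] args) {
        try {
            update(1, new String[] { "x" });
        } catch (DataAccessException e) {
            // the original cause is preserved for logging or display
            System.out.println(e.getCause().getMessage()); // prints disk failure
        }
    }
}
```

Keeping the original exception as the cause means no diagnostic information is lost even though the declared signature stays unchanged.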

In the client, I would not catch java.lang.Exception, but only the exceptions that can really reach the client (both the checked exceptions like RecordNotFoundException declared in the interface provided by Sun and the additional unchecked exceptions).


Originally posted by Rasmus Larsen:

I was planning on just using the logical locks on record level (as suggested by the supplied interface) and then have my internal file-db use a read/write lock to order the IO operations.. Does this sound completely off?

I simply cannot see a way to make the file-db thread-safe unless you do some kind of lock on both reads and writes... But at least this will allow concurrent reads. One operation in particular comes to mind when securing the supplied interfaces... Consider 2 threads both creating new records as fast as they can... The create method on the interface doesn't require a lock (since no record id exists yet) - so unless there's a "physical" IO lock, this is bound to go wrong...

Yes, I agree that the locks belonging to the locking system related to the interface provided by Sun are not enough for thread-safe reading and writing. In my version of the assignment, this locking system was mainly used for optimistic locking (preventing users from saving records based on stale data); for thread-safe reading and writing, additional synchronization was required.

With your type of cache this may be different, but in my implementation I used a RandomAccessFile and had a synchronized block around setting the file pointer and reading/writing - so even reads were not really parallel. This was not a problem (at least I received maximum points for the locking part). I think it is a trade-off: a lock around both reading and writing operations is not a big problem as long as it is held only for a short time (just two statements), but I made sure that the other locks, which are held for longer, are only used when necessary and not for all operations.
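The short-lived IO lock described above might look roughly like this; the record length, class name, and byte-array record format are my own assumptions for the sketch:

```java
import java.io.File;
import java.io.RandomAccessFile;

// Positioning the file pointer and the read/write form one atomic unit,
// so concurrent threads cannot interleave seek() and read() on the file.
public class FileAccessDemo {
    private final RandomAccessFile raf;
    private final Object ioLock = new Object();
    private static final int RECORD_LENGTH = 8; // assumed fixed record size

    FileAccessDemo(File f) throws Exception {
        raf = new RandomAccessFile(f, "rw");
    }

    void writeRecord(int recNo, byte[] data) throws Exception {
        synchronized (ioLock) {  // lock held only for these two statements
            raf.seek((long) recNo * RECORD_LENGTH);
            raf.write(data, 0, RECORD_LENGTH);
        }
    }

    byte[] readRecord(int recNo) throws Exception {
        byte[] data = new byte[RECORD_LENGTH];
        synchronized (ioLock) {
            raf.seek((long) recNo * RECORD_LENGTH);
            raf.readFully(data);
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("db", ".bin");
        tmp.deleteOnExit();
        FileAccessDemo db = new FileAccessDemo(tmp);
        db.writeRecord(2, "ABCDEFGH".getBytes());
        System.out.println(new String(db.readRecord(2))); // prints ABCDEFGH
    }
}
```

Because the critical section is tiny, serializing reads costs little; the expensive locks are the logical record locks, which is why those should be held as briefly as possible.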

Originally posted by Mary John:
Great score......

How long did it take you to finish?

[ March 17, 2008: Message edited by: Mary John ]

I don't know exactly how much time I worked on the SCJD assignment. From the time I downloaded the assignment (December 2006) to the time I uploaded it, it took about a year, but that does not mean much, because in early 2007, after having implemented the basics, I put the SCJD aside for a few months and prepared for the SCWCD exam instead, which I took a year ago (I wanted a second certificate in addition to the SCJP quickly, and it was clear that I would reach that goal faster with the SCWCD exam than with the SCJD assignment). Then, from April to December 2007, I could only work on the SCJD assignment from time to time, because during some months I had so much work at my job that it would have been too much to work overtime as a Java developer and then also spend the little time left on evenings and weekends developing a Java application. I probably did too much, especially in the GUI - I had many more classes than other people who have written on this subject (e.g. 35 top-level classes/interfaces for the GUI, 32 for the DB/server) - a simpler implementation would probably have meant both less work and losing fewer points for general considerations and GUI. Somehow, working on a project without a deadline was too unfamiliar to me, so the implementation grew too large, especially in the GUI part. I had an "almost" complete implementation relatively quickly, but then reworked and extended it over and over again.