I took the SCBCD exam at the start of April, and I still find myself asking, "which questions did I get wrong?" The reason I'm unsure is that I'm pretty comfortable with the subject matter, and I can remember the questions I was at all unsure of (I checked them after the exam to make sure I had answered correctly). Does anybody know how often the questions on the certification exams are reviewed for obscure or unclear wording? I suspect one of the big problems is that people pass the exam and then don't actually worry about the questions they missed. I must admit I didn't comment on any of the questions, since I thought I was doing OK on them.
Mike, your question hasn't garnered a response from someone who knows anything, so I'll take a shot at it ;-) I wouldn't imagine that Sun's test is perfect; no certification exam with public answers has escaped at least some flak over bad questions, so I imagine that extends to tests with non-public answers. I'm sure they keep statistics on both how many people miss a given question and what the overall scores of the people who missed it ended up being. With that information, it would seem easy to identify questions that high scorers tended to miss a lot, and put those under review. The good news is that, for the most part, all test takers are at the same disadvantage. Although I can imagine a question that tests a general concept that most "ordinary" test takers would get "right", but that super-knowledgeable experts would get wrong precisely because they know too much (reading something unintended into the question). --Dale--
Howdy, good question. We do an extensive beta test, administered not by Sun but by a third-party psychometrics company that evaluates the questions in a hundred different ways and analyzes all of the statistics. We lose quite a few questions during the beta for a wide range of reasons, including:
* Too hard (statistically, too many people missed it)
* Too easy (statistically, too many people got it right)
* The experienced people didn't get it right, but the inexperienced people DID (based on the initial self-evaluation)
* The question was misleading (too many people guessed the same wrong answer)
* Comments by test takers indicated the question might be wrong, etc.
Once the exam is out, we continue to take input and feedback from test takers. I can say, though, that the EJB exam had the most rigorous beta, and as a result we *rarely* get any comments on the questions that made it into the final exam. We ALWAYS take any input/comments very seriously, though. And again, it all goes through a third-party company whose specialty is exams. cheers, Kathy
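For anyone curious what those statistical screens look like in practice, here is a rough sketch of classic item analysis: per-question difficulty (fraction correct) and upper/lower-group discrimination (do high scorers do better on the item than low scorers?). The function name, thresholds, and group-split are invented for illustration; real psychometric review uses more sophisticated tools (point-biserial correlations, IRT models), so treat this as a toy model of the idea, not what Sun's vendor actually runs.

```python
# Hypothetical item-analysis sketch. Thresholds (0.95, 0.20) and the
# half-split of test takers are illustrative assumptions, not real values.

def item_stats(responses):
    """responses: one list of 0/1 answers per test taker.

    Returns, per question: (difficulty, discrimination, flags).
    """
    n_takers = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]

    # Split test takers into bottom and top halves by total score.
    order = sorted(range(n_takers), key=lambda i: totals[i])
    half = n_takers // 2
    low, high = order[:half], order[-half:]

    stats = []
    for q in range(n_items):
        # Difficulty: fraction of all takers who got the item right.
        difficulty = sum(r[q] for r in responses) / n_takers
        # Discrimination: top-half correct rate minus bottom-half rate.
        disc = (sum(responses[i][q] for i in high)
                - sum(responses[i][q] for i in low)) / half
        flags = []
        if difficulty > 0.95:
            flags.append("too easy")
        if difficulty < 0.20:
            flags.append("too hard")
        if disc < 0:
            # Experienced takers missed it more than inexperienced ones.
            flags.append("negative discrimination -- review")
        stats.append((difficulty, disc, flags))
    return stats
```

A question everyone answers correctly gets flagged "too easy"; a question where low scorers beat high scorers gets flagged for review, which matches the "experienced people didn't get it right, but the inexperienced people DID" criterion above.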