Ryan Kade

Ranch Hand
since Aug 16, 2005

Recent posts by Ryan Kade

I hadn't thought of that, Mark; thank you for the suggestion. I guess I was hoping for something a little more "out of the box", but we've already written our own UserDetailsService, so I'm sure we could do a custom filter as well. Thanks again!
15 years ago
I've searched the Spring Security forums, the JavaRanch forums, the docs, and Googled this topic with about every variation I can think of, and have come up short. If this is addressed somewhere, my apologies for missing it.

I'd like to have Spring Security remember an authenticated session for clients that have cookies disabled. It doesn't seem to me that Spring Security supports tokens passed in a URL (for a GET) or in the request body (for a POST). Is this correct?

The exact situation is this: we are authenticating in a browser window, and that part works as expected. However, following authentication, the user is given the option to launch a Flash application which will make HTTP requests to RESTful web services. Those requests must also be authenticated, but to the best of my knowledge, Flash cannot modify Cookie headers in an HTTP request (per this URL):

http://livedocs.adobe.com/flash/9.0/...estHeader.html

Thus, we need a way for Flash to make authenticated requests, without having access to the original username and password. The options I have uncovered so far:

- Use BlazeDS. I'm not a Flash guy, so I'm not sure what this entails, but I've seen several people do it, so I assume it works. It may require Flex, and it may require using Flash to do the original authentication? A lot of unanswered questions here for me.

- Use a Pre-Authentication Filter as specified here:

http://static.springsource.org/sprin...h.html#d4e1923
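For what it's worth, that pre-auth route can be sketched roughly as below. This is only a sketch: TokenService and the authToken parameter are invented placeholders (your app would have to hand the token to Flash after browser login), and the base-class package shown is from Spring Security 3.x; it differs in 2.x.

```java
import javax.servlet.http.HttpServletRequest;
import org.springframework.security.web.authentication.preauth.AbstractPreAuthenticatedProcessingFilter;

// TokenService is a hypothetical helper mapping tokens handed to Flash
// back to usernames; it is not part of Spring Security.
interface TokenService {
    String usernameFor(String token);
}

public class UrlTokenFilter extends AbstractPreAuthenticatedProcessingFilter {
    private final TokenService tokens;

    public UrlTokenFilter(TokenService tokens) {
        this.tokens = tokens;
    }

    @Override
    protected Object getPreAuthenticatedPrincipal(HttpServletRequest request) {
        // Flash appends ?authToken=... to each REST request; resolve it here.
        String token = request.getParameter("authToken");
        return (token == null) ? null : tokens.usernameFor(token);
    }

    @Override
    protected Object getPreAuthenticatedCredentials(HttpServletRequest request) {
        return "N/A"; // pre-authentication ignores credentials
    }
}
```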

Neither of these is ideal, although they ARE legitimate options. Can someone clarify for me definitively: does Spring Security support sessions without cookies, or are cookies required?

Thank you!
Ryan
15 years ago
I have B&S, and with the exception of the implementation of the DB calls, I use a Contractor object rather than a String[] to manipulate and pass around my record values. Any time a client interacts with the DB interface, it's with Contractors, which are converted into String[]s under the hood (in the DAO). It's thus very easy to ensure that all fields default to non-null values.

Now I suppose it's hypothetically possible that a new class could access the DB directly via my underlying DAO rather than going through my abstracted interface, and thus break my code, but I think that's overkill for this assignment. One way I mitigate this is by making the DAO have default access.
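A minimal sketch of that Contractor-to-String[] conversion (the field names here are my own assumptions, not the actual B&S schema):

```java
// Hypothetical Contractor value object; field names are assumptions.
class Contractor {
    final String name;
    final String location;

    Contractor(String name, String location) {
        // Defaulting to "" guarantees no null ever reaches the String[] form.
        this.name = (name == null) ? "" : name;
        this.location = (location == null) ? "" : location;
    }

    // Conversion done under the hood in the DAO, as described above.
    String[] toRecord() {
        return new String[] { name, location };
    }

    static Contractor fromRecord(String[] record) {
        return new Contractor(record[0], record[1]);
    }
}
```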

Originally posted by Alex Belisle Turcot:

Yes! That is your design choice. I preferred to throw a runtime exception instead.
For all I know (opinion), this could result in losing points in the OO section or general considerations.
Your RecordNotFoundException wouldn't represent the real thing..



This is true, and is a sticking point for me. Wrapping them in RecordNotFoundException wasn't my favorite solution. I suppose a runtime exception would work just as well; I just preferred that the exceptions (IOException among others) be declared. But perhaps your suggestion is truer to the spirit of what the exceptions are supposed to represent.

Originally posted by Alex Belisle Turcot:

I would be careful here. There's meeting the spec and meeting the spec with points. I think it would meet the spec alright but I'm unsure how the Data class is graded (let alone general considerations).. Why not throw "IllegalArgument" when someone calls delete on a record that is already deleted, or not locked ?
But again, reading the description, you don't have to.



I guess in my opinion, not throwing an exception is better, assuming there is a graceful alternative. For example I can represent "no search results found" as easily in an empty List as I can by throwing an exception, and then no try-catch block is required. But again, the spirit of the spec seems to suggest it needs to be thrown if the user tries to delete an already deleted row.
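A minimal sketch of that "graceful alternative" (Finder and the record layout, with the name in column 0, are placeholders for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// find() returns an empty List instead of throwing when nothing
// matches, so callers need no try-catch for the "no results" case.
class Finder {
    private final List<String[]> records = new ArrayList<String[]>();

    void add(String[] record) {
        records.add(record);
    }

    List<String[]> find(String name) {
        List<String[]> matches = new ArrayList<String[]>();
        for (String[] record : records) {
            if (record[0].equals(name)) {
                matches.add(record);
            }
        }
        return matches; // empty list means "no results found"
    }
}
```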

Thanks for your comments, Alex!

I have one other thought for anyone reading ... about Javadocing the custom exceptions (RecordNotFound, DuplicateKey, etc). Is it poor form to just copy the pertinent text from the official Javadoc for Exception, and use that to document my own methods? With the exception of the purpose for which these custom exceptions are thrown, all the functionality (getMessage(), etc.) is identical.
For my search, I adopted someone else's idea: instead of including a text box where the user can type in search parameters, I created a drop-down prepopulated with all possible permutations. Then I don't need to do any validation, and it's guaranteed to be an "exact match".

Yes, I realize it's not scalable. I've documented that, and I'm not sure that it's otherwise significant. It certainly makes for an extremely user friendly UI.
Have a quick question about the spec on RecordNotFoundException. It reads (B&S 2.3.2):

Any methods that throw RecordNotFoundException should do so if a specified record does not exist or is marked as deleted in the database file.



The methods it refers to are: read, update, delete, find, lock, and unlock.

My question is, how flexible is this requirement? For example, I throw RecordNotFoundException in the following circumstances:

read
- Attempt to read a record marked deleted
- *EOFException (if reading past end-of-file)
- *IOException (generic)

update
- Attempt to update a record past the end of the file
- Attempt to update a record marked deleted
- *InvalidLockException (custom exception if thread doesn't own lock)
- *IOException (generic)

delete
- *InvalidLockException (custom exception if thread doesn't own lock)
- *IOException (generic)

find
- If no matching records are discovered at all
- *IOException (generic)

lock
- Does not get thrown

unlock
- If a thread tries to unlock a record that is not locked

* These exceptions are wrapped inside a RecordNotFoundException and unwrapped further up the code.
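The wrapping in question looks roughly like this (the read method, record layout, and 100-record file size are invented for the demo; only the wrap-and-rethrow pattern is the point):

```java
import java.io.IOException;

// RecordNotFoundException mirrors the B&S spec's checked exception.
class RecordNotFoundException extends Exception {
    RecordNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Low-level I/O failures are wrapped in RecordNotFoundException here
// and unwrapped further up the code.
class Db {
    String read(int recNo) throws RecordNotFoundException {
        try {
            return readRaw(recNo);
        } catch (IOException e) {
            throw new RecordNotFoundException("Record " + recNo + " unavailable", e);
        }
    }

    private String readRaw(int recNo) throws IOException {
        if (recNo > 100) {
            throw new IOException("read past end of file");
        }
        return "record-" + recNo;
    }
}
```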

Three questions:
1. Is wrapping exceptions into a RecordNotFoundException consistent with the spec?

2. My design doesn't call for lock()/unlock() to know anything about the underlying record structure. I really don't want my lock() method to have to verify that a given record is legitimate; that shouldn't be its job. "You want a lock on record 9000? Fine, go ahead!" It's the other methods' duty to know when a record is invalid. But is this consistent with the spec?

3. Similar to #2, my delete does not throw an exception if the user tries to delete a row that has already been deleted, or a row beyond the end of the file. It just returns gracefully. Is this out of harmony with the spec?

The word "must" is not used, so I'm wondering how strict we need to be in following these guidelines. (Naturally, I can always document it in my choices.txt, but I want to make sure it's not an "automatic" failure or deduction of points).

Any thoughts?
[ March 28, 2008: Message edited by: Ryan Kade ]
Thanks for the prompt reply!

Yes, I agree: there's no easy way to retrofit existing code for unit testing. Some of our business objects would require complete refactoring, and I think the potential to introduce risk is greater than the risk posed by leaving them without unit tests.

The standards-checking suggestion is great. I've seen Checkstyle used in some other open source projects I've downloaded, so I'll check it out.

I neglected to mention, we do also have an automated deploy to a testing environment working, thanks for that suggestion as well.

I appreciate your input!
Ryan
Welcome Paul and Andy, thanks for your time.

We just started using CruiseControl about two months ago. We had an automated build script already in place (via Ant) but no CI process at all: you only found out the build was broken after you updated from version control and tried it yourself. Now we find out immediately, and it works great.

But I'm told that discovering broken commits is just the tip of the iceberg in benefits one can draw from CI. The next step I'd like to go to is getting some more Unit Tests written for our code--sadly, a Herculean task all by itself--so that they can be run by CruiseControl during each build as well. Someday I'd even like to migrate our culture to TDD.

What other starter steps do you recommend we take to maximize the CI benefit?

Thanks!
Ryan
[ August 28, 2007: Message edited by: Ryan Kade ]
Hello everyone,

JBoss still seems to have a problem knowing when to recompile JSPs on new deployments. Whenever I deploy an EAR containing JSPs with an earlier compile date (for example, when I'm rolling back to a previous version out of version control), JBoss does not recompile them, and thus the previously compiled (but newer time-stamped) JSPs stay active.

The only way to avoid this is to (1) "touch" all the JSPs each time we do a build (even when reverting to an old build out of VC), (2) delete the /work directory manually, or (3) do a restart after each deploy. None of these is desirable, particularly when dealing with a production environment.
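Workaround (1) is at least easy to script; a minimal sketch, where deploy/ is a placeholder path standing in for wherever the EAR is exploded:

```shell
# Demo setup: fake an exploded deployment containing one stale JSP
# ("deploy" is a placeholder path, not a real JBoss directory name).
mkdir -p deploy/web
touch -t 200001010000 deploy/web/index.jsp

# The actual workaround: push every JSP's mtime forward so Jasper's
# staleness check sees the source as newer than the compiled class.
find deploy -name '*.jsp' -exec touch {} +
```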

It's actually a Tomcat issue that has been logged in their bug-tracking system (bug #33453) and also in JBoss's JIRA (#JBAS-3081). The Tomcat folks appear not to want to integrate a fix, and the solution instead was to have JBoss (or Tomcat) actually delete the /work directory automatically after each deploy.

This would work for us, and was supposed to be available in 4.0.4. We're running 4.0.5.GA and I can confirm the /work directory is NOT being deleted. I even downloaded a fresh JBoss install and pushed a dummy WAR just to test it.

Is anyone else seeing this problem? Thanks!

Ryan
[ August 17, 2007: Message edited by: Ryan Kade ]
17 years ago
Hi Manuel,

No, I've been reading but haven't had time to respond. Exclusions are one way to do it, although I don't think you have the syntax right: you have to specify the <exclusion> inside the <dependency>. At least, that's what the online docs say; it's possible a top-level <exclusion> tag exists and they just haven't updated the docs.

Another (and apparently better) way to do it is by specifying the transitive dependency as optional. In your example, the POM for Project A should include:
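The code sample was lost from this post; based on the surrounding text, the Project A POM snippet would have looked roughly like this (the groupId and version are placeholders):

```xml
<dependency>
  <groupId>sample.group</groupId>
  <artifactId>x</artifactId>
  <version>1.0</version>
  <!-- optional: consumers of Project A will not inherit x.jar -->
  <optional>true</optional>
</dependency>
```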



Then when Project B adds Project A as a dependency, it won't also grab x.jar.

You also may be able to use the "provided" scope to accomplish this, depending on what you're doing.

More information is here:

Optional Dependencies and Dependency Exclusions
17 years ago
After further research, I decided to do this in reverse. Rather than setting the scope of all the libraries needed by my WARs (but located in the EAR) to "provided" in the master POM, I do it in each individual WAR's POM. This functions as an effective exclusion (for transitive dependencies), while still allowing the dependency to be declared in both the WAR and the EAR.

This strikes me as more consistent with the use of the provided scope.

Example:
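The example didn't survive; a sketch of what a WAR-level POM entry would look like under this approach (commons-lang is just an illustrative artifact, with its version assumed to come from the master POM's dependencyManagement):

```xml
<!-- In the WAR's POM: compile against the shared library, but keep it
     out of WEB-INF/lib; the EAR packages it instead. -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <scope>provided</scope>
</dependency>
```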



[ June 27, 2007: Message edited by: Ryan Kade ]
17 years ago
I'm migrating our build process from Ant 1.7.0 to Maven 2.0.6, and I've got a whole series of JARs that I've written POMs for, most of which reside inside one of two WARs, which in turn live inside a single EAR.

Some of the projects inside the two WARs share libraries, and so I don't want to have those libraries duplicated in the WAR WEB-INF/lib directories, but rather include them in the EAR and make a reference to them in the WAR manifest. Maven doesn't support this directly, but they do have a workaround as described here:

Creating Skinny WARs

I don't like this solution, since it requires me to specify by name the libraries I want to exclude (using the warSourceExcludes parameter of the maven-war-plugin), which sort of breaks Maven's flexibility, no? Now I have to specify filenames that I'd rather have Maven construct based on my groupId-artifactId-version.

So I concocted an alternative solution. All my POMs for this particular project (JARs, WARs, EAR) inherit from one "master" POM (not the Super POM). I have a huge dependencyManagement section which specifies all the libraries any of the sub-projects would need, along with preferred versions.

When I run across a library that needs to be in the EAR and not in a WAR's WEB-INF/lib, I set its scope to "provided" in the master POM. Then, in the dependencies section of the EAR, I override it to "compile". For example:
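The example block was lost; a reconstruction of the pattern described above, using commons-lang as a stand-in artifact:

```xml
<!-- Master POM, inside dependencyManagement: default the shared
     library to "provided" so WARs compile against it without
     packaging it. -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.3</version>
  <scope>provided</scope>
</dependency>

<!-- EAR POM, inside dependencies: override back to "compile" so the
     EAR actually bundles the JAR. -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <scope>compile</scope>
</dependency>
```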



I still have to add the libraries to the manifest--no getting out of that--but then I don't have to specifically exclude them.

In this way, my WAR has no problem compiling, but still looks to the EAR for those joint dependencies.

My question: is this a bad way to do this? Am I hijacking the purpose of the "provided" scope and potentially introducing problems for myself down the road?

What do y'all think?
17 years ago
No ... that's their Head First book, which I'm sure is good. But for the SCJP book, they used a different publisher. This is the one I'm talking about:

http://www.amazon.com/Certified-Programmer-310-055-Certification-Guides/dp/0072253606
My company has a Safari account to view online books, but I don't see the K&B SCJP book there. Is it available as a part of a different online subscription anywhere?

Note: not asking for it for free ... looking for a paid (legal) online option.

Ryan
Did some searching both in the forums and on Google and didn't find a good answer to this question: how do I access an enum using EL in a JSP? Is it even possible?

If I have an enum:
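The original code sample was lost; suppose the enum looked something like this (Status and its constants are placeholders):

```java
// Hypothetical enum standing in for the post's lost code sample.
enum Status {
    ACTIVE, INACTIVE, PENDING
}
```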



I'd like to be able to access it in the JSP:
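The JSP snippet was also lost; it presumably looked roughly like the following. Note that EL of this era has no direct way to reference an enum constant by name; the usual workaround is to compare against the constant's string form (whether the coercion works depends on the container's EL version), with user.status being an assumed bean property:

```jsp
<%-- user.status is an assumed bean property of type Status --%>
<c:if test="${user.status == 'ACTIVE'}">
    Welcome back!
</c:if>
```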



Any suggestions?
18 years ago