
Tim Holloway

Bartender
since Jun 25, 2001
Likes: Android, Eclipse IDE, Linux
Forum Moderator
Long-time moderator for the Tomcat and JavaServer Faces forums. Designer and manager for the mousetech.com enterprise server farm, which runs VMs, a private cloud and a whole raft of Docker containers.

These days, doing a lot of IoT stuff with Arduinos and Raspberry Pi's.
Jacksonville, Florida USA
Cows and Likes
Cows: 85 received (4 in the last 30 days), 11 given
Likes: 1277 received (20 in the last 30 days), 40 given (1 in the last 30 days)

Recent posts by Tim Holloway

Welcome to the Ranch, Meenaski!

Using tools to assist in designing, building, installing, monitoring, debugging, and tuning code is one thing. We've been doing that for many years now.

But so far every attempt to eliminate the programmer entirely has failed, and there have been many, many efforts, going back at least to the 1970s if not before. I could list names, but most people wouldn't recognize them.

Most code generation tools can solve a single problem. They usually keep a set of templates to which you add selections, and the result is a functional module or program in 5 minutes or less.

And then the user says "That's great, but can you...?"

Or, as I am fond of quoting, the deadliest words in IT are "All You Have To Do Is..."

Users invariably come up with bizarre but often very reasonable requests to do things that the code generator's creators could never have imagined. And in the case of most code generators, trying to modify the generated code will be more effort than writing the code from scratch would be. On top of which, since the code was machine-generated, the "programmer" who's trying to make the changes probably doesn't understand what's really being done in the code, much less how to modify it safely.

It's why my signature line for these last few years has been "An IDE is no substitute for an Intelligent Developer."

As I said, management has been trying to eliminate those expensive and troublesome software developers for a long, long time and has so far completely failed. As the tools have become more sophisticated, so have the demands on the software. I don't see that changing until software either becomes totally standardized and immutable "black boxes" or some sort of advanced Artificial Intelligence is developed that can function as flexibly and as intelligently as a human mind.
2 hours ago
The "j_security_check" is the key. The only thing you have to do is produce an HTML or JSP page containing a form whose action is "j_security_check" and has the j_username and j_password controls on it.

As I said before, you never request this page yourself. It is automatically displayed when a user needs to be authenticated. The server knows what to do without any help.

The server is also responsible for checking the username/password against whatever security database ("Realm") you've defined as part of the server configuration. Again, no application logic is written.

The login doesn't know or care that the application is a Struts application, and it works exactly the same whether it's Struts 1, Struts 2, JavaServer Faces, or even just brute-force servlets and JSPs. You don't write code, and you don't have to include any special JARs in the WEB-INF/lib directory. Everything is already written, debugged, and installed in the server except for the login and login-fail form page templates and the security rules in web.xml.

Because the security is external to the application, bad requests can be bounced before they can get in and exploit the application, and you can change your credentials "database" without modifying the application. You can keep the credentials in a file, a JDBC database, an LDAP/Active Directory server, or even use a single sign-on manager, with no changes.
2 hours ago
"Form Based Authorization" usually means Container-Managed Authentication - the J2EE standard security system as configured in WEB-INF/web.xml.

You should not have to change anything between Struts versions or even if you switched from Struts to some other application framework entirely. Because with container-based authentication, it isn't the application program that processes the form, it's the webapp server itself. And because of that, Struts does not process the login. In fact, Struts never even sees the login.

When you use J2EE standard security, an attempt to access a secured URL (one that matches one of the secured URL path patterns in web.xml) makes the web application server halt the request dead and park it. The server sends the login page defined in web.xml to the user, accepts the user's response, and checks the credentials. If they don't match, it sends out the login-fail page, and it keeps doing so until the user submits valid credentials or goes away. Once the user enters valid credentials, the original parked URL request is "un-parked" and resumed as though login had never been requested.
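
The corresponding web.xml entries look roughly like this (the URL pattern, page names, and role name are placeholders):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected Area</web-resource-name>
        <url-pattern>/secure/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>user</role-name>
    </auth-constraint>
</security-constraint>

<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.jsp</form-login-page>
        <form-error-page>/loginfail.jsp</form-error-page>
    </form-login-config>
</login-config>

<security-role>
    <role-name>user</role-name>
</security-role>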

As a consequence, you should never attempt to access the login or login-fail pages via a direct URL request, because that won't be properly handled by the server's login services.

Because 100% of the login logic is built into the server and not the application, there's no way for the Struts servlet to see the request or process it. For best results and maximum security, a login page should have no Struts-specific features on it at all. Otherwise you risk either triggering security exploits or running into problems where you're requesting resources before you're allowed to use them.

Note that this is how things work when you use the J2EE standard container security. If your "form based authorization" is in fact just a do-it-yourself login built on Struts, then it's not going to require any changes beyond what any other web page managed by Struts would. Although I do have to make my standard disclaimer: unless you're a full-time, professionally-trained security professional, there's about a 95% chance that your work of genius has a major security hole in it, and probably better than 75% odds that it's something non-technical persons can break through in under 15 minutes. Which is why you should always use either the J2EE standard security, a well-tested security product, or both.
22 hours ago
It seems to me like you want a central control for all possible URL paths of interest. One thing I learned from Struts (and so did the people who designed Struts) was that it can be a real nuisance to separate strongly-related components.

So unless I'm missing something (which is very likely), you'd need your Interceptor either to translate a URL path fragment to a resource path using some simple mechanism like what I showed earlier, or to keep a dictionary that maps the URL path fragments to their corresponding resource paths. That means doing something like keeping a mapping file that the Interceptor loads into a Map at startup, along the lines of the sketch below. And it means that every time you add a new JSON-related URL or change the location of a resource for an existing URL, you'd have to change that map file. And, properly, if you delete a URL from the list you should delete its mapping as well, though that's less critical.
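
As a rough sketch (the file and class names here are hypothetical), the startup loading might look like this:

import java.io.IOException;
import java.io.InputStream;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

public class JsonResourceMappings {
    private final Map<String, String> mappings = new ConcurrentHashMap<>();

    // Load "urlFragment=resourcePath" pairs from the classpath at startup.
    public void load() throws IOException {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/json-mappings.properties")) {
            if (in == null) {
                throw new IOException("json-mappings.properties not found");
            }
            props.load(in);
        }
        for (String key : props.stringPropertyNames()) {
            mappings.put(key, props.getProperty(key));
        }
    }

    public String resourceFor(String urlFragment) {
        return mappings.get(urlFragment);
    }
}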

So the disadvantage of doing things this way in the Interceptor is that you'd potentially have to edit two files instead of one every time you did maintenance: one being the class with the logic for handling the URL, the other being the mapping file.
1 day ago
Hmmm. That's a bit much for me to make sense of this early in the morning. Let me approach it the way I'd expect to do it (based on similar projects).

First, it looks like you may be using Maven to build your project. If you want to build a WAR, the resource files for the WAR would normally be placed under src/main/webapp, not src/main/resources. Everything in src/main/webapp gets copied over to the WAR, so src/main/webapp/WEB-INF/classes/log4j.xml, for example, would end up in the WAR as /WEB-INF/classes/log4j.xml.

When I'm doing ReST, I have dedicated classes for major functions and annotated methods in those classes for minor functions. So, for example, I'd have a com.xyx.payment.Bank class with delete, update and add methods, each one mapped to a ReST URL. So the easiest way to get a unique JSON resource for these methods would be to simply provide a shared method in the class, a base class, or a static utility class that implemented a getJsonResource method and just hard-code the resource path as an argument. Like so (a sketch only; the resource paths and helper are hypothetical):



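import java.util.Scanner;
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Sketch: each ReST method hands its own hard-coded resource path
// to a shared helper. All paths and names here are invented.
@Path("/bank")
public class Bank {

    @DELETE
    @Path("/{id}")
    public Response delete(@PathParam("id") long id) {
        String json = getJsonResource("/json/bank/delete.json");
        // ... perform the delete, then respond ...
        return Response.ok(json).build();
    }

    // update() and add() would follow the same pattern, each passing
    // its own hard-coded resource path.

    // Shared helper; could equally live in a base class or static utility.
    private String getJsonResource(String resourcePath) {
        try (Scanner s = new Scanner(getClass().getResourceAsStream(resourcePath), "UTF-8")) {
            return s.useDelimiter("\\A").next();
        }
    }
}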
I notice that you're doing stuff with general Spring web functions, but I think maybe you should be using Spring ReST. As you can see from the above example, coding to use the JSR-standard ReST annotations is probably going to be simpler even before you add Spring-specific support to it.
2 days ago
Just to be clear, when you serialize an object, the JVM writes the contents of the object out in a flat (serial) format. The details of that format are not public and tend to vary between Java versions, so a file produced by serializing from one JVM version might not be de-serializable by another. It gets worse when you are serializing between different JVMs on a network, since there's a greater chance of version mismatch.

When you write out a JPA Entity object, however, it is not serialized. Instead, the property values of that object are used to build a SQL INSERT or REPLACE statement - or more often, to be passed as parameters to a previously-built parameterized statement. So the transportation mechanisms and formats are very different between JPA and java.io.Serializable.
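
To make the contrast concrete, here's a sketch (the Invoice entity is made up, and assumed to implement Serializable and to be mapped with JPA):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import javax.persistence.EntityManager;

public class PersistContrast {
    static void save(Invoice invoice, EntityManager em) throws IOException {
        // Serialization: the JVM writes its own private flat format,
        // which is sensitive to JVM and class version differences.
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream("invoice.ser"))) {
            out.writeObject(invoice);
        }

        // JPA: no serialization at all. The entity's property values
        // become SQL parameters, roughly:
        //   INSERT INTO INVOICE (ID, AMOUNT, ...) VALUES (?, ?, ...)
        em.persist(invoice); // inside an active transaction
    }
}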
2 days ago
https://docs.oracle.com/javase/tutorial/jdbc/basics/connecting.html

Note that you do not have to register a driver anymore. The practice of using Class.forName to load and register JDBC drivers has been obsolete for a very long time now. All up-to-date JDBC drivers register themselves automatically.
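
With a JDBC 4.0 or later driver on the classpath, connecting is just this (the URL and credentials are examples):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectDemo {
    public static void main(String[] args) throws SQLException {
        // No Class.forName() needed: the driver registers itself
        // via the ServiceLoader mechanism when it's on the classpath.
        String url = "jdbc:mysql://localhost:3306/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}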
2 days ago
OpenJDK is a very large project, so it's hard to say. Generally you want as much of a language environment to be native to that language as possible, but Java does have a lot of classes that have to tap into OS-specific and/or hardware-specific features (native code classes) and it's hard to write the core interpreter in Java because you've got the chicken-and-egg conundrum there. Probably the memory manager, security manager and JIT compiler have substantial native code in them as well.

OpenJDK isn't archived in git or svn like most projects. They use Mercurial. Mercurial is somewhat like git (the two appeared at about the same time). The command-line client program is named "hg" (chemistry fans will appreciate this). Mercurial was also used for the source archives of the Xen virtual machine manager, if memory serves.
4 days ago
Well, if the relationship type were simple, you could have a ManyToMany linkage between people. But even then it would be easier to maintain a link table (at least when you're working with JPA). So for your needs, a link table where the IDs of the two people are foreign key columns and the relationship type is a value column would be best. You can further constrain the relationship by making both person IDs part of a compound key or unique constraint on the link table.
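
A sketch of that link table as a JPA entity (all names here are hypothetical):

import javax.persistence.*;

@Entity
@Table(name = "PERSON_RELATIONSHIP",
       uniqueConstraints = @UniqueConstraint(columnNames = {"PERSON_A_ID", "PERSON_B_ID"}))
public class PersonRelationship {
    @Id
    @GeneratedValue
    private Long id;

    // The two people being related: foreign keys to the Person table.
    @ManyToOne(optional = false)
    @JoinColumn(name = "PERSON_A_ID")
    private Person personA;

    @ManyToOne(optional = false)
    @JoinColumn(name = "PERSON_B_ID")
    private Person personB;

    // The relationship type as a plain value column.
    @Column(name = "RELATIONSHIP_TYPE", nullable = false)
    private String relationshipType; // e.g., "PARENT", "SIBLING"

    // getters/setters omitted
}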

There is, by the way, an entire class of database where relationships are as important as (or more important than) the data itself. The Neo4J database is an example of this.

Unlike the databases you'd reach through JDBC, Neo4J is not a SQL DBMS, and it has its own rather idiosyncratic query language, since SQL isn't well-suited to the kinds of queries you'd do. It is well-documented, though, and there is a Spring Data sub-project devoted to it.
OK. I didn't recognize it thanks to the embedded outer join and lack of a WHERE clause. Definitely legal, and definitely what you almost never want. And don't want in this particular case.

So no bug. My solution still stands as I originally stated it, though. Lose the nameRoot.join() line. It's not appropriate here.
JDBC connections and queries are two of the best uses for try-with-resources. It makes coding much simpler and more reliable.
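
For example (the table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    public String findName(DataSource ds, long id) throws SQLException {
        String sql = "SELECT NAME FROM CUSTOMER WHERE ID = ?";
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("NAME") : null;
            }
        } // connection, statement, and result set all close automatically,
          // even if an exception is thrown
    }
}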
This is a classic code-and-lookup operation. Historically it's been a more efficient use of disk space to do it that way. These days I'd not be so sure, since modern databases could certainly track values and compress them (possibly into internal codes) without getting significantly more complex or intelligent than they already are. In fact, they might even use space more efficiently that way. But as a general rule, I still prefer the old explicit lookup, which you can map to a Java enum for programming convenience if the set of codes is fixed.

For relatively small sets of code values, a code foreign key to a lookup table wouldn't be all that inefficient if the system cached the lookup table in RAM. This is one of the things you get pretty much for free when using an ORM.
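
A sketch of the enum mapping for a fixed code set (the codes here are invented):

public enum OrderStatus {
    PENDING("P"), SHIPPED("S"), CANCELLED("X");

    private final String code;

    OrderStatus(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }

    // Translate a database code back to the enum constant.
    public static OrderStatus fromCode(String code) {
        for (OrderStatus s : values()) {
            if (s.code.equals(code)) {
                return s;
            }
        }
        throw new IllegalArgumentException("Unknown code: " + code);
    }
}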
AH. Now I see. I think maybe there's a bug in the SQL generation, since I don't think that's legal SQL, but the actual problem is that third statement which explicitly joins the addressTable to the query.

As I said earlier, the @OneToOne annotation takes care of the join. The addition of an explicit nameRoot.join is not merely redundant, but in this case it's actually harmful.

The whole point of having an ORM is so that you DON'T have to build up your own SQL operations.
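
To illustrate the shape of the fix (the entity names are guesses based on the thread):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;

@Entity
public class Name {
    @Id
    private Long id;

    // The @OneToOne mapping already tells the ORM how to join
    // to the address table; no explicit criteria join is needed.
    @OneToOne
    @JoinColumn(name = "ADDRESS_ID")
    private Address address;
}

// In the criteria query, just select the entity and navigate:
//   CriteriaQuery<Name> cq = cb.createQuery(Name.class);
//   Root<Name> nameRoot = cq.from(Name.class);
//   cq.select(nameRoot);    // no nameRoot.join(...) line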
The main difference between gmake and make is that gmake has more fancy features. The original Unix make was OK for automated compiling and linking of C and assembly code, but gmake added features that made it more convenient: fewer and shorter (if more cryptic) rules, conditional operations in the build process, stuff like that.

Make/gmake is often assisted in the build process. The "configure" tool can be used to check whether all of the build dependencies are present in the system and to construct a makefile. Here dependencies aren't just libraries: configure also checks whether certain tools are installed, and it does things like probe the compiler being used to make sure that certain features are supported. Unlike, say, Maven, the configure utility does not attempt to resolve dependencies, just make sure that they are satisfied. Then there are the "make makers", such as cmake, which help generate makefiles.

There's a classic triad of commands used to build many of the Linux applications and libraries. It goes like this:
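
./configure        # verify dependencies and generate the makefile
make               # compile
sudo make install  # install into the system directories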
Often people will use the "tee" command on the make stage so that build output not only displays on the console, but also gets recorded to a log file.
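
For example:

make 2>&1 | tee build.log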
5 days ago

Arun Singh Raaj wrote: Thanks! Actually I want to know the "logic" for the search operation.

That depends on what you want to search and how. It could be anything from a servlet doing a simple database lookup all the way up to a Lucene full-text search.
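
At the simple end, it might be nothing more than a servlet handing the search term to a database lookup. A sketch (everything here, including the SearchDao helper, is hypothetical):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SearchServlet extends HttpServlet {
    // Hypothetical DAO that runs something like a SQL LIKE query.
    private final SearchDao searchDao = new SearchDao();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String term = req.getParameter("q");
        req.setAttribute("results", searchDao.find(term));
        req.getRequestDispatcher("/results.jsp").forward(req, resp);
    }
}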