
large amount of information

 
Brendan Kennedy
Ranch Hand
Posts: 65
I want to use Java XSLT as a way of displaying results from a search. Is it faster to use a SQL database and then convert the results to XML, or to use one large XML file, or many small XML files?
The information base will eventually be very big, so I am wondering what the best way to store it is if I'm always going to be using XSLT for my front end.
On an unrelated topic, does anyone know if the whitespace problem of JAXB (it strips all whitespace when you unmarshal your document) has been fixed yet?
Best Regards,
Brendan
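For context, here is a minimal sketch of the SQL-to-XML-to-XSLT route being asked about, using JDBC plus the standard javax.xml.transform (TrAX) API. The JDBC URL, the items table and its columns, and the results.xsl stylesheet are placeholders for illustration only, not details from the post.

import java.io.StringReader;
import java.io.StringWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SearchResultsToHtml {

    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL, table and column names -- adjust to your own schema and driver.
        Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:test", "sa", "");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT name, description FROM items WHERE name LIKE 'foo%'");

        // Build a small result document in memory from the ResultSet.
        StringBuilder xml = new StringBuilder("<results>");
        while (rs.next()) {
            xml.append("<item>")
               .append("<name>").append(escape(rs.getString("name"))).append("</name>")
               .append("<desc>").append(escape(rs.getString("description"))).append("</desc>")
               .append("</item>");
        }
        xml.append("</results>");
        rs.close();
        stmt.close();
        con.close();

        // Apply an XSLT stylesheet (results.xsl is a placeholder) to turn the results into HTML for display.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("results.xsl"));
        StringWriter html = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml.toString())), new StreamResult(html));
        System.out.println(html);
    }

    // Minimal escaping so markup characters in the data don't break the generated XML.
    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}

The same Transformer could just as well be applied directly to a stored XML file via a StreamSource, which is the alternative being weighed in the question.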
 
Balaji Loganathan
author and deputy
Bartender
Posts: 3150
Originally posted by Brendan Kennedy:
I want to use Java XSLT as a way of displaying results from a search. Is it faster to use a SQL database and then convert the results to XML, or to use one large XML file, or many small XML files?
The information base will eventually be very big, so I am wondering what the best way to store it is if I'm always going to be using XSLT for my front end.

For me, searching a 3.46 MB XML file with 36,345 nodes in its node list using SAX took 2.645 seconds.
My XML structure looks like this:
<root>
  <data>
    <name>foo</name>
    <desc>foo desc</desc>
  </data>
  ...
  ...
</root>
I'm searching for a matching <name> value.
Just FYI.
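A search like the one timed above can be written as a small SAX handler; here is a sketch of that approach. The element names follow the structure shown, while the file name data.xml and the target value "foo" are placeholder assumptions.

import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class NameSearchHandler extends DefaultHandler {

    private final String target;                 // value we are looking for
    private final StringBuilder text = new StringBuilder();
    private boolean inName;
    private final List<String> matches = new ArrayList<String>();

    public NameSearchHandler(String target) {
        this.target = target;
    }

    public void startElement(String uri, String localName, String qName, Attributes atts) {
        if ("name".equals(qName)) {
            inName = true;
            text.setLength(0);                    // reset the buffer for this <name> element
        }
    }

    public void characters(char[] ch, int start, int length) {
        if (inName) {
            text.append(ch, start, length);       // the parser may deliver text in several chunks
        }
    }

    public void endElement(String uri, String localName, String qName) {
        if ("name".equals(qName)) {
            inName = false;
            String value = text.toString().trim();
            if (value.equals(target)) {
                matches.add(value);
            }
        }
    }

    public List<String> getMatches() {
        return matches;
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        NameSearchHandler handler = new NameSearchHandler("foo");
        parser.parse("data.xml", handler);        // data.xml is a placeholder file name
        System.out.println("Matches found: " + handler.getMatches().size());
    }
}

Because the document is never held in memory as a tree, this scales to very large files, at the cost of having to track state (the inName flag and text buffer) by hand.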
 
Jayadev Pulaparty
Ranch Hand
Posts: 662
Balaji,
A little question I have in this context: what exactly are the driving conditions for choosing SAX, XSLT, or DOM as our approach for extracting the results? I know that DOM is very memory-intensive, but doesn't XSLT also build a DOM tree and then carry out the transformation? Implementing a SAX search mechanism to extract the required info (while streaming past the rest of the XML document) can become very tricky. Isn't that so?
Please clarify,
Thanks.
 
Balaji Loganathan
author and deputy
Bartender
Posts: 3150
Originally posted by Jayadev Pulaparty:
Balaji,
A little question I have in this context: what exactly are the driving conditions for choosing SAX, XSLT, or DOM as our approach for extracting the results? I know that DOM is very memory-intensive, but doesn't XSLT also build a DOM tree and then carry out the transformation? Implementing a SAX search mechanism to extract the required info (while streaming past the rest of the XML document) can become very tricky. Isn't that so?
Please clarify,
Thanks.

That's a difficult question for me, and one which I've been trying to figure out for a long time myself.
At present I'm using SAX for very large XML documents and DOM for small documents. I'm currently learning the SAX API for searching XML, so I'll get back to you soon with my findings.
Originally posted by Jayadev Pulaparty: I know that DOM is very memory-intensive, but doesn't XSLT also build a DOM tree and then carry out the transformation?
I'm not sure whether Xalan uses a DOM tree for transforming XML, but I found that the FOP processor uses the SAX API for transforming XML into PDF with XSL-FO. FOP in turn uses JARs from Xerces, Xalan, Batik, etc. This document on the Xalan design may help you: http://xml.apache.org/xalan-j/design/design2_0_0.html#overarch
There might be some articles on this; would someone please share them with us?

Thank you.
Regards
Balaji
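Related to the DOM-versus-SAX question above, here is a sketch of feeding a TrAX transformer its input as a SAXSource rather than a pre-built DOM. The file names report.xsl and data.xml are placeholders; note that the XSLT processor (Xalan or otherwise) may still build its own internal representation of the document, since a stylesheet can revisit nodes in any order.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.xml.sax.InputSource;

public class SaxInputTransform {

    public static void main(String[] args) throws Exception {
        // report.xsl and data.xml are placeholder file names.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(new StreamSource("report.xsl"));

        // The input document is parsed as a stream of SAX events rather than
        // handed over as a DOM tree built by the application.
        SAXSource input = new SAXSource(new InputSource("data.xml"));

        transformer.transform(input, new StreamResult(System.out));
    }
}

This at least avoids holding a full org.w3c.dom tree in application code; whatever the processor builds internally is its own structure (the Xalan design document linked above describes a table-based model, DTM, rather than a standard DOM).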
 