You'll need to write the code that saves the page to disk yourself. Note that the visit method does not currently do that. The ImageCrawler example does it for all the images - it's probably easier to extend that example to also save the HTML, since that code already shows how to handle file names.
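To give you a starting point, here's a minimal sketch of what such a visit method could look like, loosely following the file-naming idea of the ImageCrawler example. The storage folder and the ".html" fallback extension are my assumptions, not anything crawler4j prescribes:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;

public class SavingCrawler extends WebCrawler {

    // Assumption: where the downloaded files end up; adjust to taste.
    private static final File STORAGE_FOLDER = new File("/tmp/crawl");

    @Override
    public void visit(Page page) {
        String url = page.getWebURL().getURL();

        // Derive an extension from the URL; fall back to ".html" for
        // URLs that don't end in something file-like.
        int dot = url.lastIndexOf('.');
        String extension =
                (dot > url.lastIndexOf('/')) ? url.substring(dot) : ".html";

        try {
            STORAGE_FOLDER.mkdirs();
            // getContentData() holds the raw bytes of the fetched page.
            File target = new File(STORAGE_FOLDER, UUID.randomUUID() + extension);
            Files.write(target.toPath(), page.getContentData());
        } catch (IOException e) {
            logger.warn("Could not save {}", url, e);
        }
    }
}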
Note that the example as-is does not work: it assumes that all URLs start with "http://uci.edu/", which, due to the redirect to "https://uci.edu/", is not correct. But that's an easy fix.
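Concretely, the fix is a one-line change in shouldVisit. A sketch, assuming the two-argument signature of recent crawler4j versions:

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.url.WebURL;

// Inside your WebCrawler subclass: accept the post-redirect https URLs too.
@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String href = url.getURL().toLowerCase();
    return href.startsWith("https://uci.edu/")
            || href.startsWith("http://uci.edu/");
}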
OK, so you want the HTML, JS and CSS, but not the images. You may need to enable binary content in the config, as crawler4j seems to regard part of what that site serves as binary. (There's an error message to that effect in its output.)
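Enabling that is a single setting on the CrawlConfig; the storage folder path below is just a placeholder:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;

CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("/tmp/crawler4j"); // placeholder path
// Without this, crawler4j skips content it classifies as binary.
config.setIncludeBinaryContentInCrawling(true);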
Apart from that, you'll need to alter the "visit" method to save HTML, JS and CSS files. I had already mentioned where to find example code for that.
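One way to do that is to filter by extension at the top of visit and reuse the saving code from the sketch above. The pattern here is an assumption covering the usual HTML/JS/CSS extensions; it goes inside the crawler class:

import java.util.regex.Pattern;

// Matches the file types we want to keep; everything else (images etc.)
// is skipped.
private static final Pattern HTML_JS_CSS =
        Pattern.compile(".*\\.(html?|js|css)$");

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL().toLowerCase();
    if (!HTML_JS_CSS.matcher(url).matches()) {
        return; // not HTML, JS or CSS - ignore
    }
    // ... save the file as shown in the earlier sketch ...
}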
OK, so you DO want the images after all. In that case, starting with the image crawler example might be easier, you'll just need to adapt it to store HTML, JS and CSS as well.
Note that this particular web site also uses an uncommon extension (".ece"), so the code needs to accommodate it and treat it as HTML. But that, too, is a small change.
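For example, the pattern from the earlier sketch could be widened, and the extension mapped to ".html" when saving; the mapping is an assumption about how you want to store the local copies:

// Widen the filter so ".ece" pages are crawled like HTML...
private static final Pattern PAGE_PATTERN =
        Pattern.compile(".*\\.(html?|ece|js|css)$");

// ...and when saving (see the first sketch), store them as .html so a
// browser will open the local copies:
if (extension.equals(".ece")) {
    extension = ".html";
}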
Let us know if you have specific questions about making these changes. I don't know if crawler4j actually supports this use case - it would mean keeping file names in sync so that the HTML files reference the corresponding JS, CSS and image files; have you found anything regarding this?
"How many licks ..." - I think all of this dog's research starts with these words. Tasty tiny ad:
a bit of art, as a gift, the permaculture playing cards