
Gary W. Lucas

Ranch Hand
since Jun 25, 2014
Cows and Likes
Cows: 10 received (0 in the last 30 days), 0 given
Likes: 16 received (0 in the last 30 days), 13 given (2 in the last 30 days)

Recent posts by Gary W. Lucas

I am designing a file format that will include CRC checks.  Although I will be working in Java, eventually I will swap files with C/C++ and Python implementations.  So I wish to be sure that the CRCs generated by Java will match those created in other environments.

To that end, I would like to know whether Java's CRC32 class follows a specific standard.

I am pretty sure that it does because (a) I've done some testing with other implementations and (b) Java's access to zip files probably wouldn't work if it didn't.  But the Oracle Javadoc for the CRC32 class does not cite a standard (an omission which is often a sign of trouble).

Does anyone know this for sure?

Thanks.

P.S.  I've also found some interesting example code at https://introcs.cs.princeton.edu/java/61data/CRC32.java.html that suggests that the Java CRC32 follows a standard, though again I am not absolutely sure.
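As a quick sanity check (my own test, not something from the Javadoc): the widely used CRC-32 (the one shared by zlib, gzip, PNG, and the ZIP format) has the well-known check value 0xCBF43926 for the ASCII string "123456789".  If Java's class matches that, it should interoperate with typical C/C++ and Python (zlib.crc32) implementations:

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Check {
    public static void main(String[] args) {
        CRC32 crc = new CRC32();
        crc.update("123456789".getBytes(StandardCharsets.US_ASCII));
        // The standard CRC-32 check value for this input is 0xCBF43926
        System.out.printf("CRC32 = 0x%08X%n", crc.getValue());
    }
}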

2 weeks ago
Thanks.  I suspected that the situation was going to be pretty much what you described in your answer, but it is good to get confirmation from a knowledgeable person.  
6 months ago
I've been working with the Apache Commons Imaging folks to make some updates to the API and the question has come up about making the library compatible with Android.  I was wondering if this idea is practical or even desirable.  Does anyone have experience with this issue? What do you recommend?

Commons Imaging is a pure Java implementation of image-related operations.  I use it because it offers support for the TIFF image format, which was formerly available only through the JAI add-on.

As I understand it, the java.awt package is not supported under Android.  That presents a challenge because Commons Imaging uses java.awt.BufferedImage extensively. It also uses Color and some of the more specialized color operations such as ICC_Profile. A few other AWT classes, such as GraphicsDevice and GraphicsEnvironment, are used to a small degree.


Thanks in advance for your help.
6 months ago
Thanks for your analysis. You picked up on a lot of the nuances of the problem.

I think that I may be leaning toward your suggestion for the non-zero winding rule.   Based on your example of the C-shape, I can see where it might be less likely to produce a terrible-looking graphic in the event of a malformed polygon. Ideally, as long as the contouring software works, things like that shouldn't happen. But practically, I suspect that floating-point round-off or just the truncation to integer pixel coordinates could lead to issues.

In terms of nesting...

For contour polygons, you can have any number of nested polygons (as in a set of concentric rings).  The standard way of dealing with this is to have each enclosing polygon include holes only for the outermost of its nested children. So in the pond-inside-an-island case, you have:

1. The mainland polygon contains the lake as a hole
2. The lake polygon contains the island as a hole
3. The island contains the pond as a hole
4. The pond stands alone

In mapping and Geographic Information Systems (GIS), there is a standard file format for this called the Shapefile format, which gives enclosing polygons in clockwise order and enclosed polygons in counterclockwise order.  So an enclosed polygon appears in the file twice: once as itself (clockwise) and once as a hole (counterclockwise).  The contouring implementation that I wrote has orientation information available to it (though it uses the opposite convention: regular polygons are counterclockwise and holes are clockwise).  So it is easy enough for me to adapt to whatever conventions are required for either winding rule.
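For reference, here is a generic shoelace-formula sketch (not taken from my contouring code) showing how ring orientation can be determined.  With y increasing upward, a positive result means counterclockwise vertex order and a negative result means clockwise:

public class RingOrientation {
    // Returns twice the signed area of a simple polygon given by its vertices.
    // Positive: counterclockwise (regular polygon); negative: clockwise (hole).
    static double signedAreaX2(double[] x, double[] y) {
        double sum = 0;
        int n = x.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;   // wrap around to close the ring
            sum += x[i] * y[j] - x[j] * y[i];
        }
        return sum;
    }
}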



Thanks.

Gary

6 months ago
Thanks for the responses.  Basically, you've confirmed what I suspected.

In testing with a large number of polygons, I haven't seen any significant performance difference between the two approaches.  And, certainly, the graphical part of the process is so fast that it's not worth worrying about.

I was more concerned about robustness in the face of problematic geometries.  Since the contours are built from real-world data sources, locally steep gradients can lead to closely spaced contours. Occasionally,  there might be "spike" features where the angle between two subsequent segments is small.  I have pre-processing I can run to eliminate some of this, but I am sure sooner or later something will slip through. I wondered if one approach might be less likely than the other to produce strange-looking graphics.

In theory, and by definition, contouring will never give rise to polygons that self-intersect or cross other polygons.  But the edges of a contour may contain small-scale features that potentially complicate the rendering.  You can see some of this in the "smoothing" image in the web article I cited (picture below).

6 months ago
I've written a contouring class that produces "nested polygons", and I would like to create Path2D objects that can be used for rendering area-fill operations with embedded "holes" for the interior polygons.  Path2D provides the option of using two different winding rules, WIND_EVEN_ODD and WIND_NON_ZERO.  I can use either, but I was wondering whether one or the other would be preferred in terms of robustness and speed.  While the polygons I produce are generally non-self-intersecting, they can be rather complex and have fine-level details along their borders.  I am a bit concerned about issues with closely spaced polygons or numerical errors.


If you would like to see more about what I'm working on, I've posted some description at https://github.com/gwlucastrig/Tinfour/wiki/Using-The-Tinfour-Contouring-API
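To make the question concrete, here is a minimal sketch (illustrative coordinates only, not my actual contour output) of an outer ring with one interior hole ring wound in the opposite direction.  With opposite orientations both rules leave the hole open; if the two rings were wound the same way, WIND_NON_ZERO would fill the hole while WIND_EVEN_ODD would still leave it open:

import java.awt.geom.Path2D;

public class WindingRuleDemo {
    public static void main(String[] args) {
        System.out.println("even-odd: hole open? " + hasHole(Path2D.WIND_EVEN_ODD));
        System.out.println("non-zero: hole open? " + hasHole(Path2D.WIND_NON_ZERO));
    }

    static boolean hasHole(int rule) {
        Path2D.Double p = new Path2D.Double(rule);
        // Outer ring
        p.moveTo(0, 0);
        p.lineTo(100, 0);
        p.lineTo(100, 100);
        p.lineTo(0, 100);
        p.closePath();
        // Inner ring, wound in the opposite direction
        p.moveTo(25, 25);
        p.lineTo(25, 75);
        p.lineTo(75, 75);
        p.lineTo(75, 25);
        p.closePath();
        // Test a point inside the inner ring; false means it renders as a hole
        return !p.contains(50, 50);
    }
}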
6 months ago
I've just updated the documentation for the Tinfour open-source Java project.  Tinfour supports the creation of triangular mesh structures from unstructured data sets (collections of randomly positioned data points). The focus of the project is the Delaunay triangulation. And while that topic is a bit specialized, I believe there is enough general material in the documentation that visitors here at CodeRanch may find it interesting (and maybe even useful).

You can visit my main documentation page at The Tinfour Documentation Page

In particular, I've updated notes on Natural Neighbor Interpolation which is a technique for creating smooth surfaces from unstructured data such as elevation surveys, weather observations, etc.  I've also posted new material describing the algorithms used by the interpolator (see A Fast and Accurate Algorithm for Natural Neighbor Interpolation)

Finally, I've received a lot of support from folks here at CodeRanch, and I'd like to thank you all for your many helpful suggestions and encouragement.

Gary
8 months ago
Thanks for your thoughtful reply.    I think the information you provide will be useful.

I'm looking forward to reading your book.

Gary

Also, a bit of an apology...  After I made my original post, it occurred to me that you might not be able to find the question buried in all that text I wrote.  Sorry the post wasn't a bit more to-the-point.

9 months ago
First off, your book looks like it will be very useful and I am looking forward to reading it.

So far, I've only taken a high-level approach to Python (basically, treating it as a scripting language).  I am kicking around the idea of attempting a more ambitious programming effort. I have a software library written in Java for performing custom data compression and other kinds of analysis on raster data (particularly geophysical data). I would like to implement a compatible Python solution. Since my typical data set involves millions (and sometimes billions) of data values, I am concerned about throughput and efficiency.

And here I find myself in unfamiliar (and confusing) territory.  As I understand it, NumPy is written in C/C++ with Python bindings.  Is this the right model for what I am doing, or can I accomplish it entirely in Python? Some of the processing I perform involves tight loops and repetitive computations over large grids. Most of the math is plain arithmetic (not much trig or use of log functions).

My other consideration is that I want my work to be in a form that people can actually use without too much trouble. I would also want it to be reasonably compatible with other tool sets like NumPy, SciPy, etc.

Thanks in advance for your consideration.  And good luck with your book!

Gary
10 months ago
I've posted some new articles on techniques for lossless data compression of raster data. The techniques were implemented in Java and source code is also available.  I've experimented with data compression for both integer and floating-point data types.  So far, I've been mostly working with geophysical information (elevation, ocean currents, surface temperature), but the techniques should be useful for a reasonably broad range of numerical data applications.
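To give a flavor of the general idea, here is a generic illustration of predictive residual coding (not the optimal-predictor scheme described in the articles): each sample is replaced by its residual from a simple predictor, and for smooth raster data the residuals cluster near zero and compress much better than the raw values.

public class PredictorSketch {
    // Residuals from a trivial previous-value predictor (illustration only).
    static int[] residuals(int[] samples) {
        int[] r = new int[samples.length];
        int previous = 0;                 // initial prediction is zero
        for (int i = 0; i < samples.length; i++) {
            r[i] = samples[i] - previous; // residual from the prediction
            previous = samples[i];        // next prediction is the prior sample
        }
        return r;
    }
}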

If you are interested, you can read more at

Lossless Compression for Raster Data using Optimal Predictors

and

Lossless Compression for Floating-Point Data  

Feel free to let me know if you have any questions or suggestions.

Gary
11 months ago
Recently, I contributed an enhancement to the open-source Apache Commons Imaging project to enable it to read high-resolution elevation data from the U.S. Geological Survey (USGS) Cloud-Optimized GeoTIFF files. GeoTIFFs are a variation of the TIFF image format that includes information that allows their imagery to be applied to map-based applications.

Anyway, I just posted an article and some example Java code describing how to use the GeoTIFF files to create shaded-relief map imagery. The article includes a basic algorithm for lighting and color rendering as well as some pictures that show off the quality of the USGS data. It also includes a bit of background on the GeoTIFF standard and links to some code for a B-Spline surface fitting class implemented in Java.
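In case it helps to see the general shape of the computation, here is a rough sketch of conventional gradient-based Lambertian hill-shading.  This is not the article's exact algorithm; the cell size and light angles are illustrative, and the sign of the y gradient depends on the grid's row ordering:

public class HillshadeSketch {
    // Returns a brightness value in [0, 1] for each interior grid cell.
    static double[][] shade(double[][] z, double cellSize,
                            double azimuthDeg, double elevationDeg) {
        int rows = z.length;
        int cols = z[0].length;
        double az = Math.toRadians(azimuthDeg);
        double el = Math.toRadians(elevationDeg);
        // Unit vector pointing toward the light source
        double lx = Math.cos(el) * Math.sin(az);
        double ly = Math.cos(el) * Math.cos(az);
        double lz = Math.sin(el);
        double[][] s = new double[rows][cols];
        for (int i = 1; i < rows - 1; i++) {
            for (int j = 1; j < cols - 1; j++) {
                // Central-difference slopes in x and y
                double dzdx = (z[i][j + 1] - z[i][j - 1]) / (2 * cellSize);
                double dzdy = (z[i + 1][j] - z[i - 1][j]) / (2 * cellSize);
                // Dot product of the unit surface normal with the light vector
                double len = Math.sqrt(dzdx * dzdx + dzdy * dzdy + 1);
                double dot = (-dzdx * lx - dzdy * ly + lz) / len;
                s[i][j] = Math.max(0, dot);  // clamp cells facing away from the light
            }
        }
        return s;
    }
}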

If you are interested in these topics, you can find the article at  The Gridfour Project's Elevation GeoTIFFs Article
1 year ago
Thanks for your reply.  I think your advice will be useful in my investigations.  I'm thinking about a mobile platform that starts out with some basic behaviors, but learns to optimize them over time.  For example, if I were implementing a walker (and that's just an example, I'm not that good), it would start off with some basic gaits pre-programmed into the system, but would gradually improve them based on experience negotiating its environment.   So that seems to fit the pattern you suggested with the Training and Inference areas.

I look forward to reading your book and, no doubt, significantly revising some of my ideas.

Gary
I have been looking into machine learning for small mobile robot applications.  Do you have recommendations on the best way to apply deep reinforcement learning techniques with more modest processors?

I see that you discuss a bi-pedal walker in your Appendix B.  I'm particularly looking forward to reading about that one.

Thanks.  And good luck with your book!

Gary
Good point on the determinant.  It's kind of a reminder that, if the three points define a valid triangle, they can also be used to construct the axes for a 3D coordinate system.

Incidentally, the area computation I posted earlier is algebraically equivalent to the determinant (with an appropriate assignment of variables to the vertices).
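For reference (my own notation, not quoted from the earlier post): with vertices (x0, y0), (x1, y1), (x2, y2), twice the signed area is

2 * Area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

which is exactly the 2-by-2 determinant of the two edge vectors; it is positive for counterclockwise vertex order, negative for clockwise, and zero when the points are collinear.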
1 year ago