I turn this into a list of coordinates sorted in descending order by probability. Now I want to choose a random entry from the list such that any entry *could* be chosen, but weighted towards the beginning of the list, where the probabilities are higher.

This is one attempt I made, where the coordinates are added to the list multiple times depending on the probability. It sort of works, but isn't weighted enough.

I also tried using probability squared, but this was way too heavily weighted towards the higher probabilities.

Any suggestions for a compromise approach?

Here's another set of probabilities after a few shots (no ships yet).

The most difficult part of wild life photography is getting them to sign the model release.

I am afraid my thinking goes along this line: since you have probabilities and the ships have finite sizes, I would overlay a grid and pick the highest probabilities from that grid. In classic Battleship you have 2-, 3-, 4-, and 5-hole boats, so I would start with the 2 grid and choose the coordinates with the highest probability.

Les

Out on HF and heard nobody, but didn't call CQ? Nobody heard you either. 73 de N7GH

So lay the grid out (I like the 2x2), take a random shot, then search, say, 2 or 3 (or even a random number of) grids in each direction from the random shot, and choose the highest probability in the area.
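
A minimal sketch of that neighborhood scan, in case it helps. The class and method names are my own invention, and `prob` is assumed to hold the per-cell probabilities:

```java
import java.util.Random;

public class GridSearchShot {
    // Hypothetical sketch of the neighborhood idea: pick a random
    // starting cell, scan `radius` cells in each direction from it,
    // and take the highest-probability coordinate found there.
    static int[] pickShot(double[][] prob, int radius, Random rnd) {
        int rows = prob.length, cols = prob[0].length;
        int r0 = rnd.nextInt(rows), c0 = rnd.nextInt(cols);
        int bestR = r0, bestC = c0;
        for (int r = Math.max(0, r0 - radius); r <= Math.min(rows - 1, r0 + radius); r++) {
            for (int c = Math.max(0, c0 - radius); c <= Math.min(cols - 1, c0 + radius); c++) {
                if (prob[r][c] > prob[bestR][bestC]) {
                    bestR = r;
                    bestC = c;
                }
            }
        }
        return new int[] { bestR, bestC };
    }

    public static void main(String[] args) {
        // Tiny 3x3 board with a hot spot in the middle; with radius 1,
        // every starting cell's neighborhood includes the center.
        double[][] prob = {
            { 0.1, 0.2, 0.1 },
            { 0.2, 0.5, 0.2 },
            { 0.1, 0.2, 0.1 }
        };
        int[] shot = pickShot(prob, 1, new Random(42));
        System.out.println(shot[0] + "," + shot[1]);
    }
}
```

The random start keeps the shots unpredictable, while the local maximum search pulls them towards the high-probability cells.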

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

`nextGaussian()` has a mean of zero and a standard deviation of one, so it regularly generates values greater than one (and less than minus one).

You need to tweak my code snippet so the bell curve has a cutoff point after the last index of your list. You can do this by simply re-rolling when the index is greater than or equal to the size of the list, though I'm not sure how that influences the distribution.

Anyway, try it out, generate a bunch of indices and let us know if the distribution is fair enough.
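
Here's a hedged sketch of that re-roll idea; the scale factor of `size / 3.0` is my own guess at how to map the bell curve onto the list, so tune it to taste:

```java
import java.util.Random;

public class GaussianIndex {
    // Fold the bell curve onto [0, size) by taking the absolute value,
    // scaling it, and re-rolling whenever the result falls past the end
    // of the list.
    static int pickIndex(int size, Random rnd) {
        while (true) {
            // |gaussian| is almost always below 3, so dividing by 3
            // puts most of the mass inside the list, front-loaded
            // toward index 0.
            int index = (int) (Math.abs(rnd.nextGaussian()) * size / 3.0);
            if (index < size) {
                return index;
            }
            // Otherwise re-roll; truncating the tail this way boosts
            // all remaining indices proportionally.
        }
    }

    public static void main(String[] args) {
        // Generate a bunch of indices and eyeball the distribution.
        Random rnd = new Random(1);
        int[] counts = new int[10];
        for (int i = 0; i < 100_000; i++) {
            counts[pickIndex(10, rnd)]++;
        }
        // The front of the list should be hit far more often than the back.
        System.out.println(counts[0] > counts[9]);
    }
}
```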

Check out this class from the Apache commons lib: http://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/distribution/LaplaceDistribution.html

First: the distribution structure that I have in mind is this one (sort of inverse cumulative density function):

say we have the cells A, B and C each with probability 20%, D and E each with 15%, and F with 10%. Then I use a TreeMap&lt;Double, List&lt;Cell&gt;&gt; with natural key ordering, keyed on the cumulative probability consumed so far:

`<0, List(A, B, C)>`

`<.6, List(D, E)>`

`<.9, List(F)>`

Now, drawing from the uniform distribution with draw = random.nextDouble() and using TreeMap.floorEntry(draw) will give us the relevant list to pick a cell from.
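
A small self-contained sketch of that lookup, with cells as plain strings for brevity; natural key ordering makes floorEntry return the greatest key not exceeding the draw, which is exactly the bucket we want:

```java
import java.util.List;
import java.util.Random;
import java.util.TreeMap;

public class CumulativeDraw {
    public static void main(String[] args) {
        // Keys are the cumulative probability consumed so far;
        // floorEntry on a uniform draw then lands in the matching bucket.
        TreeMap<Double, List<String>> buckets = new TreeMap<>();
        buckets.put(0.0, List.of("A", "B", "C")); // 20% each
        buckets.put(0.6, List.of("D", "E"));      // 15% each
        buckets.put(0.9, List.of("F"));           // 10%

        Random rnd = new Random();
        double draw = rnd.nextDouble(); // in [0, 1)
        List<String> bucket = buckets.floorEntry(draw).getValue();
        String cell = bucket.get(rnd.nextInt(bucket.size()));
        System.out.println(cell.matches("[A-F]"));
    }
}
```

Since nextDouble() returns values in [0, 1) and 0.0 is a key, floorEntry can never return null here.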

If this 'natural' distribution is not good enough, then the easiest way to do something about it is to tweak or change the uniform distribution somewhat.

Stephan already gave some examples where he corrected the standard normal distribution, because that distribution gives a chance of about 32% of getting a value outside of (-1, 1).

But both that and the Laplace are fairly complicated to apply, since they need to be truncated.

There are easier ways. Think of the graph of the CDF of a Uniform(0,1) distribution. That is simply the graph of y = x, for x in the segment [0, 1].

It follows that **any** function that starts in (0, 0), ends in (1, 1) and is non-decreasing is a candidate CDF for generating a random value between 0 and 1.

For instance, take this function: f(x) = 2x for x in (0, 1/4), f(x) = (2/3)x + 1/3 for x in (1/4, 1), and let the random variable X have this CDF. Then P[X < 1/4] = 1/2 and P[X < 1/2] = 2/3, so that P[1/4 < X < 1/2] = 1/6.

As you see, the chances are shifted a little towards the lower values. The rule in this case is: draw x from U(0,1), and if x < 1/4 then take 2x as outcome, otherwise (2/3)x + 1/3. Within constraints, you can vary the slopes and the breakpoint until you get the desired shift.

Other simple candidates are: y = x^2 (already tried by Carey), y = sqrt(x), which shifts the chances towards the end, y = sin(x * pi / 2), et cetera. Just make a drawing of y = x and take any non-decreasing function starting in (0, 0) and ending in (1, 1). Should be a lot of fun.
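
If it helps to see the shift empirically, this little check (my own sketch) compares the two simplest transforms of a uniform draw:

```java
import java.util.Random;

public class TransformSkew {
    public static void main(String[] args) {
        // Quick empirical check of how each transform shifts a uniform
        // draw: squaring pushes values toward 0 (the front of the
        // sorted list), square root pushes them toward 1 (the end).
        Random rnd = new Random(3);
        int n = 100_000;
        double sumSq = 0, sumRoot = 0;
        for (int i = 0; i < n; i++) {
            double u = rnd.nextDouble();
            sumSq += u * u;          // expected mean 1/3
            sumRoot += Math.sqrt(u); // expected mean 2/3
        }
        System.out.println(sumSq / n < 0.5 && sumRoot / n > 0.5);
    }
}
```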

But I must correct a blunder of mine.

If you have a random variable X with CDF F(x) = 2x for x in (0, 1/4) and F(x) = (2/3)x + 1/3 for x in (1/4, 1), then the rule for drawing a value x with x = random.nextDouble() is:

if x <= .5 then outcome = .5x, else outcome = 1.5x - .5.

I forgot to express X in terms of U(0,1).
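
Putting the corrected rule into code, here is a quick sanity check (my own sketch) that the low quarter really soaks up half the draws:

```java
import java.util.Random;

public class PiecewiseInverse {
    // Inverse-transform sampling of the piecewise CDF
    // F(x) = 2x on (0, 1/4), F(x) = (2/3)x + 1/3 on (1/4, 1):
    // the corrected rule maps a uniform draw u through the inverse CDF.
    static double draw(Random rnd) {
        double u = rnd.nextDouble();
        return (u <= 0.5) ? 0.5 * u : 1.5 * u - 0.5;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        int below = 0, n = 100_000;
        for (int i = 0; i < n; i++) {
            if (draw(rnd) < 0.25) {
                below++;
            }
        }
        // P[X < 1/4] should come out close to 0.5, i.e. the low quarter
        // of the list gets about half the shots.
        System.out.println(below > 45_000 && below < 55_000);
    }
}
```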

I don't want to talk about maths anymore, so I leave it to the interested reader to explain this.

For the very enthusiastic reader: if we take sin(x * pi/2) as the CDF, how must we transform our outcome of random.nextDouble()?

And now, shut up, Souris! Ay ay sir...