
Random between 0 and 1 inclusive

 
Ted Schrey
Ranch Hand
Posts: 57
  • Mark post as helpful
  • send pies
  • Quote
  • Report post to moderator
I need an inclusive range for use in a probability determination. Specifically, the probability that something exists: if 0.0, it definitely does not exist; anything between 0.0 and 1.0 means it might exist to that degree of probability; and if 1.0, it definitely exists. I know I can't use Math.random(), since it excludes 1.0.

Is there a way to do this?
 
Simon Roberts
Author
Ranch Hand
Posts: 182
9
Java Linux Netbeans IDE Scala Ubuntu
  • Likes 2
  • Report post to moderator
Well, I guess it depends on how exact you need a pure rectangular (uniform) distribution to be.

Maybe there's a standard way to do this, but if so, I don't know it.

Can you simply say that > 0.99999999 (or some suitably chosen figure to fit the characteristics of IEEE 754 64-bit notation) counts as a one? Can you "round" it at a certain number of decimal places, effectively ensuring that 1 might occur, and shifting the spread appropriately? Are you proving something for a PhD thesis, or is this a high school simulation? Or somewhere in between?

Maybe you can take Math.random() * 2 and if it's > 1, throw it away and retry.

Maybe you can use UUID.randomUUID() and divide it appropriately.

I'd be interested to hear if there's a standard mathematician's approach to this, though I've never needed anything accurate enough to care (the probability of hitting exactly 1, or exactly 0, seems pretty darn small anyway for practical purposes, even though one of them (zero) is actually possible).
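The retry idea is plain rejection sampling; a minimal sketch, with the class and helper names being assumptions:

```java
import java.util.Random;

public class InclusiveUnit {
    private static final Random RNG = new Random();

    // Returns a double in [0.0, 1.0] inclusive by sampling [0.0, 2.0)
    // and retrying whenever the draw lands above 1.0.
    static double nextInclusive() {
        while (true) {
            double d = RNG.nextDouble() * 2.0;
            if (d <= 1.0) {
                return d;
            }
            // d > 1.0: discard and retry
        }
    }

    public static void main(String[] args) {
        double d = nextInclusive();
        System.out.println(d >= 0.0 && d <= 1.0); // true
    }
}
```

On average half the draws are discarded, so each call costs about two calls to nextDouble().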
 
Paweł Baczyński
Bartender
Posts: 2083
44
Firefox Browser IntelliJ IDE Java Linux Spring
The probability of hitting 1.0 (if it were included in the range) would be 1/7036874417766 (source).
It is negligible.
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Hello,

This sounds like differential calculus, where you need to treat the greatest possible value (which tends toward 1) as == one. Something like this could work. You can use BigInteger with many int randoms if you need more possibilities, but you will always have to treat the greatest possible value as == 1.
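The code block here did not survive archiving. Judging from the output below (every value is a multiple of 1/9) and the later reply explaining that nextInt(possibilities) never returns possibilities itself, it was presumably something along these lines; the class name and the loop count of 1000 are assumptions:

```java
import java.util.Random;

public class InclusiveRandomDemo {
    public static void main(String[] args) {
        Random random = new Random();
        int possibilities = 10;   // nextInt(10) yields 0..9 inclusive
        int zeroCount = 0, oneCount = 0;
        for (int i = 0; i < 1000; i++) {
            // Dividing by (possibilities - 1) maps 0..9 onto
            // 0.0, 1/9, 2/9, ..., 1.0 -- both ends included.
            double value = (double) random.nextInt(possibilities) / (possibilities - 1);
            if (i < 10) {
                System.out.println(value);
            }
            if (value == 0.0) zeroCount++;
            if (value == 1.0) oneCount++;
        }
        System.out.println("zeroCount: " + zeroCount);
        System.out.println("oneCount: " + oneCount);
    }
}
```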



output:
0.6666666666666666
0.7777777777777778
1.0
0.1111111111111111
1.0
0.1111111111111111
0.0
0.4444444444444444
0.1111111111111111
0.1111111111111111
zeroCount: 114
oneCount: 87


 
A.J. Côté
Ranch Hand
Posts: 417
Java
Also, be careful about double precision:

System.out.println( (double)Integer.MAX_VALUE-1 == Integer.MAX_VALUE-1);

output:
true

So you are fine, you will get 1.0...
 
A.J. Côté
Ranch Hand
Posts: 417
Java
More like this:

System.out.println( ((double)(Integer.MAX_VALUE-1))/(Integer.MAX_VALUE-1));

output:
1.0

Kind of useless, because I guess the right part is cast to a double anyway before computing the result.

Anyway, keep the precision factor in mind; it can sometimes play nasty tricks on you...

Example:



output:
false
22.330000000000002
22.33
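The snippet for this example was also lost in archiving, and the exact operands that produced 22.330000000000002 cannot be recovered. The same class of surprise can be reproduced with operands that are known to misbehave in binary floating point (these values are a stand-in, not the originals):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;          // neither 0.1 nor 0.2 is exactly representable in binary
        System.out.println(sum == 0.3);  // false
        System.out.println(sum);         // 0.30000000000000004
        System.out.println(0.3);         // 0.3
    }
}
```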
 
Stephan van Hulst
Saloon Keeper
Posts: 7987
143
If that degree of precision is necessary, then double is the wrong data type in the first place.

I learned in Calculus that 0.999~ = 1.0. This means that if you have infinite precision, your range will be inclusive.
 
Campbell Ritchie
Marshal
Posts: 56545
172
So what do you want? Randomly‑chosen numbers between 0.0 and 1.0 inclusive?

As Stephan said, if you have infinite precision then you cannot distinguish 0.999999… from 1.0. But you cannot have infinite precision on a computer.

This question is too difficult for “beginning” so I shall move it.
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Stephan van Hulst wrote:If that degree of precision is necessary, then double is the wrong data type in the first place.

I learned in Calculus that 0.999~ = 1.0. This means that if you have infinite precision, your range will be inclusive.


He just wants random data between 0 and 1 inclusive. I don't think he needs precision greater than a double. I was just giving an old school example of what can happen using doubles, diverging a bit from the OP topic, I agree. He doesn't need to add those numbers, he just wants a random value between 0 and 1 inclusive after all.

About infinite precision; including 1 if you do have infinite precision: I mentioned something similar in my first reply:

This sounds like differential calculus, where you need to treat the greatest possible value (which tends toward 1) as == one...


Take care,
 
A.J. Côté
Ranch Hand
Posts: 417
Java
In the end, we can't ever be absolutely sure that something is going to happen, hence no 1s possible in our Universe ;-)
 
Campbell Ritchie
Marshal
Posts: 56545
172
A.J. Côté wrote: . . . System.out.println( (double)Integer.MAX_VALUE-1 == Integer.MAX_VALUE-1); . . .
A double can represent exactly any integer which will fit into the 53 bits' precision available in the IEEE754 format. So all values in the range of an int can be converted to a double safely, but the larger values in the range of a long mostly cannot be represented as doubles. So that should work nicely.

But why the -1? That appears unnecessary. What is wrong with using this method of the Random object?
Here follows a new version of AJC's formula omitting the − 1 parts.
double myRandom = (1.0 * randomObject.nextInt() - Integer.MIN_VALUE) / (2.0 * Integer.MAX_VALUE + 1.0);
The 1.0 * bit converts the int to a double; you can use a cast instead. That should give you 2³² different values, subject to the usual precision problems you get with doubles. Because of the asymmetrical nature of two's complement numbers you cannot get quite the nice distribution between 0.0 and 1.0 you would like.
System.out.println((1.0 * Integer.MIN_VALUE - Integer.MIN_VALUE) /
        (2.0 * Integer.MAX_VALUE + 1.0) == 0);
System.out.println((1.0 * Integer.MAX_VALUE - Integer.MIN_VALUE) /
        (2.0 * Integer.MAX_VALUE + 1.0) == 1);

Alternatively, try
randomObject.nextInt(1000000001) * 1.0 / 1000000000.0
That should give a distribution between 0.0 and 1.0 in steps of approx 0.000000001. For any precision write your own Random class or try this. That is about the only way I can find which doesn't have the problem of not exactly reaching 1.0. The method with BigInteger suffers from a similar problem: its largest possible result still falls just short of 1.0.
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Hi Campbell,

The minus 1 (-1) is because:

myRandom.nextInt(possibilities);

will never return possibilities, the max value returned will be possibilities-1.

Please view my first reply to this thread for context.

Cheers,
 
Campbell Ritchie
Marshal
Posts: 56545
172
I meant the −1 here.
(double)Integer.MAX_VALUE-1
Sorry for the confusion. Because you have −1 on both sides, it cancels itself out. It was the division of MAX_VALUE by itself that I was using. Not possibilities − 1, which you were correct about.
 
Henry Wong
author
Sheriff
Posts: 23295
125
C++ Chrome Eclipse IDE Firefox Browser Java jQuery Linux VI Editor Windows
Simon Roberts wrote:
Maybe you can take Math.random() * 2 and if it's > 1, throw it away and retry.


Agree with Simon. I would also go for the bigger-range-and-discard technique... although I would likely choose a range only slightly bigger than 1.0, and hence have a lower chance of a discard.

Henry
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Ted Schrey wrote:I need an inclusive range for use in a probability determination. Specifically, the probability that something exists: if 0.0, it definitely does not exist; anything between 0.0 and 1.0 means it might exist to that degree of probability; and if 1.0, it definitely exists. I know I can't use Math.random(), since it excludes 1.0.

Like the others, I suspect that Double is not what you want, since it's a 64-bit value, but the seed used by Random (which, these days, is what Math.random() actually uses) is only 48 bits, which means that it's quite possible that neither 0.0 nor (1 - 1 ulp) will ever get generated. And the same would be true of generating very large "probability" integers.

On the other hand, if you're not too worried about the granularity of your probability, why not use Random.nextInt(101) (or 1001, or 10001)? This will return an integer between 0 and 100 inclusive, with a good probability that both 0 and 100 are as likely to appear as any other value.

HIH

Winston
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
I'm a little out of practice on floating-point terminology, so I may get some of the terms scrambled, but I think I need to outline what's really happening here and how it shapes the options.

A floating-point number in binary form consists of 2 components: the exponent and the mantissa. I don't recall offhand what the exponent ranges for Java are, but it's not important. The random-number generator is designed so that the exponent is always zero. The random number bits are all in the mantissa. The algorithm is such that all bit combinations are equally probable. To crunch it down to 8 bits, that would mean that the valid mantissa bits would range from 0x00 to 0xFF. This provides 256 unique random values, but, owing to the way we interpret them, their integral values would only range from 0-255 and EXCLUDE 256.

I think it was Einstein who once remarked that it's interesting, although counter-intuitive, that a ruler marked into 10ths of an inch should contain 11 markings. However, our marking container doesn't permit such excursions, and that's why, after scaling, the generated range runs from 0.0 inclusive to 1.0 exclusive.

The normal way to get around that is to use smoke and mirrors. For example, for dice, you multiply the 0-1.0 range by 6, truncate to an integer and add 1.
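That smoke-and-mirrors dice trick, spelled out as a minimal sketch:

```java
public class DiceRoll {
    public static void main(String[] args) {
        // Math.random() is in [0.0, 1.0): multiplying by 6 gives [0.0, 6.0),
        // truncation gives 0..5, and adding 1 gives a die face 1..6.
        int roll = (int) (Math.random() * 6) + 1;
        System.out.println(roll);
    }
}
```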

Technically, floating-point random number generation is sinful (to steal Knuth's quote) to begin with, since floating-point is not a smooth progression but a series of very tiny steps. And in any event, most floating-point operations are fuzzy enough that actually including 1.0 wouldn't be considered.

You can try to scale to get a full range including both 0 and 1.0, but it's so close to the original that at best, it's likely to interfere with the randomness of the generated sample.

So, my own personal recommendation is to review the requirements to see if that's really essential, and if so, drag out Knuth Volume 2.
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Winston Gutkowski wrote:This will return an integer between 0 and 100 inclusive, with a good probability that both 0 and 100 are as likely to appear as any other value.

I should add that, if you need to, you can turn that into a probability by dividing by 100 (or whatever granularity you choose), eg:
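The code block itself was lost from this archive. Given the later reference to "my probability() method above" called with 16777215, it presumably looked something like this (the method and class names are assumptions):

```java
import java.util.Random;

public class ProbabilityDemo {
    private static final Random RANDOM = new Random();

    // Returns one of (granularity + 1) equally likely values:
    // 0/granularity, 1/granularity, ..., granularity/granularity == 1.0.
    public static double probability(int granularity) {
        return RANDOM.nextInt(granularity + 1) / (double) granularity;
    }

    public static void main(String[] args) {
        System.out.println(probability(100));       // 0.0 to 1.0 in steps of 0.01
        System.out.println(probability(16777215));  // roughly 7 decimal places of granularity
    }
}
```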
HIH

Winston
 
Campbell Ritchie
Marshal
Posts: 56545
172
Tim Holloway wrote:. . .
A floating-point number in binary form consists of . . . the exponent and the mantissa.
. . .
… and the sign bit at the extreme left of the number.
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Campbell Ritchie wrote:
Tim Holloway wrote:. . .
A floating-point number in binary form consists of . . . the exponent and the mantissa.
. . .
… and the sign bit at the extreme left of the number.

And the exponent sign bit, usually.

I'd be careful about asserting the bit position of the mantissa sign, though!
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Tim Holloway wrote:I'm a little out of practice on floating-point terminology, so I may get some of the terms scrambled, but I think I need to outline what's really happening here and how it shapes the options.

I worry that we're getting a bit tangential here. The problem (as I see it) is that OP wants a random probability expressed as a value between 0.0 and 1.0 inclusive, where the likelihood of either extreme appearing is (a) pretty good, and (b) the same as any other distinct value in between them.

So this isn't simply a problem of what the random() method returns, but also of the limitations of our PRNG (in this case, Random); and since the number of sequences that can be returned is limited by the number of possible seeds (2⁴⁸), which is fewer than the number of possible Double values between 0 and 1 (2⁵³ - 1), it seems perfectly possible to me that Math.random() might never return 0.0.

@Ted: You can approximate what Random.nextFloat() would return by calling my probability() method above with the value 16777215 (2²⁴ - 1), which would give you a probability to roughly 7 decimal places of granularity while still being well inside the seed limit. Would that be enough for you?

Winston
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Perhaps in the IEEE format. I was using IBM's floating point years before that, and it has some weird things in it. Modern IBM mainframes support both FP formats, although in Java, you have to specifically request "native" format, as the write-once/run-anywhere constraint demands IEEE format.

I wasn't actually referencing any specific FP format here, though. Presence or absence - much less location - of any sign bits is immaterial to the scope of the random number generator. As is location or size of the exponent and mantissa fields - or even the explicit presence of an exponent at all. Strictly speaking, the problem exists in any collection of bits taken as an unsigned number whose minimum value is zero and whose maximum value is all 1s.
 
Guillermo Ishi
Ranch Hand
Posts: 789
C++ Linux Python
What would you have if you read the number, and then randomly added either 0 or the difference between your highest number and 1 to the result?
 
Stephan van Hulst
Saloon Keeper
Posts: 7987
143
Then you'd have a double value in the range [0,10), which doesn't even have a uniform distribution.

[edit]

I posted this as Guillermo edited his post.

I'm not sure what adding this difference would achieve, besides messing with the probability distribution.
 
Guillermo Ishi
Ranch Hand
Posts: 789
C++ Linux Python
Stephan van Hulst wrote:Then you'd have a double value in the range [0,10), which doesn't even have a uniform distribution.

[edit]

I posted this as Guillermo edited his post.

I'm not sure what adding this difference would achieve, besides messing with the probability distribution.

Yes, it was my second or third idea.
Adding the difference or 0 randomly would allow you to have 0..1 inclusive. I don't think it would mess with the distribution, except that it would reduce the probability of getting 0 unless you fixed that.

Also, you might be able to multiply it by some number to simply scale the range up by .000000001 (or whatever) amount.
 
Stephan van Hulst
Saloon Keeper
Posts: 7987
143
Guillermo Ishi wrote:Adding the difference or 0 randomly would allow you to have 0.. 1 inclusive. I don't think it would mess with the distribution except that it would reduce the probability of getting 0 unless you fixed that.

It's like rolling two dice and adding the numbers together. You're going to get something that approaches the Gaussian distribution.

Also, you might be able to multiply it by some number to simply scale the range .000000001 (or whatever) amount.

Yes, but you'd need to scale by some value that also depends on the precision you're going for. If that precision is less than what Random provides, you might as well just round the number and be done with it. And a higher precision is useless, because Random simply won't generate some values at all.

As a matter of fact, I think that's the solution to this problem, (other than the one Simon proposed, which I like). Just round the number to the intended precision, using the appropriate rounding mode.
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Guillermo Ishi wrote:What would you have if you read the number, and then randomly you added 0 or the difference between your highest number and 1 to the result?


If I'm not mistaken, that would cut the probability of pulling a zero in half, double the probabilities for all the intervening values, except for 1.0, which would again have half the probability of any intermediate value.

So, distorted probability set.
 
Stephan van Hulst
Saloon Keeper
Posts: 7987
143
Geez Louise, rounding would achieve exactly the same. I think it's clear: Simon's way is the way to go. It's correct, intuitive, and the probability distribution is uniform.
 
Paul Clapham
Sheriff
Posts: 22829
43
Eclipse IDE Firefox Browser MySQL Database
Ted Schrey wrote:I need an inclusive range for use in a probability determination. Specifically, the probability that something exists: if 0.0, it definitely does not exist; anything between 0.0 and 1.0 means it might exist to that degree of probability; and if 1.0, it definitely exists. I know I can't use Math.random(), since it excludes 1.0.


You're concerned about something which will happen on average once every 7036874417766 tries? My recommendation would be to just use Math.random() anyway and tell the users that it could perhaps produce 1.0 but they just haven't waited long enough for that to happen. Remember you have to wait a long time for 0.0 to be chosen too, you're not likely to see that in your lifetime.

Or if the actual number of alternatives you want to choose between is smaller than that, say 100000, then choose a number between 0 and 100000 inclusive and divide by 100000.
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Stephan van Hulst wrote:Geez Louise, rounding would achieve exactly the same. I think it's clear: Simon's way is the way to go. It's correct, intuitive, and the probability distribution is uniform.

I disagree. Sorry to keep harping on this, but a simple rounding of the value, whether multiplied by some "granularity" factor or not, does not solve the problem, for reasons already mentioned by Tim - namely, that the extremes (0.0 and 1.0) would be only half as likely to appear as any other value.

The fact is that Random is a pseudo-random source of bits, and Random.nextDouble() - which is what Math.random() is based on - is already a "hybrid" method based on two calls to next(int) that converts the resulting 53-bit integer to a double.

So why not just use nextInt() and divide? Providing the upper limit (ie, your "granularity") is reasonable, it ensures that the extremes OP wants are built in to the algorithm from the start, and don't rely on any fancy rounding to achieve them.

Winston
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Henry Wong wrote:
Simon Roberts wrote:
Maybe you can take Math.random() * 2 and if it's > 1, throw it away and retry.


Agree with Simon. I would also go for the bigger-range-and-discard technique... although I would likely choose a range only slightly bigger than 1.0, and hence have a lower chance of a discard.

Henry


Yup, it makes perfect sense to get even distribution.
 
Stephan van Hulst
Saloon Keeper
Posts: 7987
143
Winston Gutkowski wrote:I disagree. Sorry to keep harping on this, but a simple rounding of the value, whether multiplied by some "granularity" factor or not, does not solve the problem, for reasons already mentioned by Tim - namely, that the extremes (0.0 and 1.0) would be only half as likely to appear as any other value.

Sorry if I was unclear, I was exasperated with myself :P

I realized what Tim said applied to my solution as well.
 
Tim Halloran
Greenhorn
Posts: 5
Android C++ Java
Winston Gutkowski wrote:
So why not just use nextInt() and divide? Providing the upper limit (ie, your "granularity") is reasonable, it ensures that the extremes OP wants are built in to the algorithm from the start, and don't rely on any fancy rounding to achieve them.

Winston


Be careful here; it is easy to get this wrong. For a description, see what nextInt(int n) says about this in its Javadoc. One draft of Effective Java (second edition) went into great depth on the issues of constructing a correct nextInt(int n) method for Random; Josh presented, formally, how and why dividing goes wrong (with regard to the distribution), but I think that got mostly cut from the book at publication.

That said, examining how nextInt(int n) works may be helpful.
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Of all the suggestions so far, Mssr. Côté's seems most viable. I have a nasty suspicion that there's a "gotcha" even there, but I can't think what it might be.

On the other hand, it's not ideal because it cuts the granularity approximately in half. And statistics says that you could end up with a long run of numbers being rejected for being over the limit. Unlike the standard random function, which always returns a value in constant time, this approach has an extremely small, but nevertheless real, probability that it could take an unbounded amount of time to return an acceptable result!
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Tim Halloran wrote:Be careful here it is easy to get this wrong ... One draft of Effective Java (second edition) went into great depth on the issues of constructing a correct nextInt(int n) method for Random, Josh presented, formally, how and why dividing went wrong (with regard to the distribution) but I think that got mostly cut from the book at publication.

Fair enough, and I see your point; but one has to assume that the writers of Random were competent enough for it to produce a decent statistical spread of numbers.

I actually tested Random.nextInt(10000001) a few times over a billion calls, and it produced both 0 and 10000000 within predictable limits (100 ± a few) on every occasion, so Random.nextInt(10000001) / 10000000.0 should return a reasonable spread of doubles between 0.0 and 1.0, since you can be assured that 0 / 10000000.0 will return 0.0 and 10000000 / 10000000.0 will return 1.0.
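A sketch of that kind of empirical check, scaled down from a billion iterations so it runs quickly (the class name and counts are assumptions):

```java
import java.util.Random;

public class ExtremesCheck {
    public static void main(String[] args) {
        Random random = new Random();
        int bound = 10_000_001;   // nextInt(bound) yields 0..10_000_000 inclusive
        int n = 10_000_000;       // scaled down from the billion calls described above
        int zeros = 0, maxes = 0;
        for (int i = 0; i < n; i++) {
            int v = random.nextInt(bound);
            if (v == 0) zeros++;
            if (v == bound - 1) maxes++;
        }
        // Each extreme is expected about n / bound times (~1 here; ~100 at a billion).
        System.out.println("zeros: " + zeros + ", maxes: " + maxes);
    }
}
```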

Winston
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Simon Roberts wrote:Maybe you can take Math.random() * 2 and if it's > 1, throw it away and retry...

Doh! The penny finally dropped. Yes, I like that solution too.

Not too sure about making the factor less than 2 though, because you might run into skews resulting from the seed only being 48 bits.

I have a question for the IEEE geeks here: why does nextDouble() divide a 53-bit random number by (double) (1L << 53) when double only has a 52-bit mantissa? I assume there's a good reason, but darned if I know what it is.

Winston
 
Campbell Ritchie
Marshal
Posts: 56545
172
But double has a 53 bit mantissa. Unless you have NaN or ±∞ or you are in the subnormal range, i.e. where |d| < Double.MIN_NORMAL, you impute a 1. at the left end of the mantissa, which makes it into 53 bits. That 1 bit isn't there, but the remaining 52 bits represent the fractional part of the mantissa after imputing the 1.
When you get into the subnormal range, you do not impute the 1. so you get 0. on the left and the however many bits there are in the mantissa as the fractional part only. As the bits get fewer and farther to the right, the precision declines.
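The 53-bit significand (52 stored bits plus the implicit leading 1) can be observed directly: 2⁵³ is the last point at which consecutive integers are still distinguishable as doubles. A small check:

```java
public class MantissaDemo {
    public static void main(String[] args) {
        long p53 = 1L << 53;
        // 2^53 + 1 needs 54 significant bits, so it rounds back to 2^53:
        System.out.println((double) p53 == (double) (p53 + 1));  // true
        // 2^53 - 1 still fits in 53 bits, so it remains distinct:
        System.out.println((double) (p53 - 1) == (double) p53);  // false
    }
}
```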
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Winston Gutkowski wrote:
Fair enough, and I see your point; but one has to assume that the writers of Random were competent enough for it to produce a decent statistical spread of numbers.


And that, precisely, is the problem. The random function isn't picking numbers like you pick playing cards from a deck. It's picking numbers like you throw dice. Meaning that there is a finite and calculable probability that you'll do the equivalent of throwing "snake eyes" 32 times in a row.

Or, to put it another way: if you flip a coin 6 times and it comes up heads 6 times, do you know what the probability is that it will come up heads on the 7th throw? It's 50%! The actual probability of getting 7 heads in a row is something like 0.5x0.5x0.5x0.5x0.5x0.5x0.5, and I'll let you feed that to your calculator, but it's the reason why you cannot reasonably rule out a true random number generator (one that only returns when a number meeting the selected criteria is generated) getting stuck forever. The odds are mighty darned small, but they're never 0.

Incidentally, some of the later comments on the quirks of floating point remind me of the limitations of the IBM float and long-float formats: "6½" digits for (short) float and "14½" digits for long float. And if your browser didn't render the character right, that's six-and-a-half.
 
Winston Gutkowski
Bartender
Posts: 10575
66
Eclipse IDE Hibernate Ubuntu
Tim Holloway wrote:And that, precisely, is the problem. The random function isn't picking numbers like you pick playing cards from a deck. It's picking numbers like you throw dice. Meaning that there is a finite and calculable probability that you'll do the equivalent of throwing "snake eyes" 32 times in a row.

I'm not quite sure where the discussion about coins comes into play.

Unfortunately, Ted hasn't returned to comment on our posts, but my assumption, based on his post and the subject line, is that he wants to generate a random "probability" value between 0.0 and 1.0 inclusive, where the probability of either extreme occurring is the same as that of any other value in between.

My solution (nextInt(n + 1) / (double) n) fits the bill, because nextInt(n + 1) returns an integer between 0 and n inclusive, with each possible integer being equally likely to appear (providing n isn't too huge). And that appears to be borne out in my tests.

And thinking about it, I do actually have a worry about Simon's solution: namely, that it assumes nextDouble() will eventually return 0.0, and I'm not at all sure that it must, because it seems perfectly possible that a 48-bit LCG might never produce two consecutive 0s from successive calls to next(26) and next(27).

Fascinating little problem though. Thanks Ted!

Winston
 
Tim Holloway
Saloon Keeper
Posts: 18795
74
Android Eclipse IDE Linux
Winston Gutkowski wrote:
I'm not quite sure where the discussion about coins comes into play.


That applies to A.J. Côté's suggestion of taking random(2*n) and discarding values > n. Which, as I said, is probably the most likely way to get a passable and random result, but not in an absolutely determinable amount of time, since it has to keep trying until a passable result comes back from the generator.

The solution (nextInt(n) / (double) (n - 1)) is questionable, because if the random function being used is what I suspect it is (I haven't read the actual official JVM source, but I'm assuming it's based on Knuth), then the nextInt(n) method is actually floor(nextValue()*n) in pseudo-code. In other words, invoking the floating-point random function, scaling up, then truncating down to an integer value. Which means that nextInt(n) / (double) (n - 1) would be equivalent to floor(nextFloat()*n) / (double) (n - 1), and if you reduce out the formula, you'd be better off using nextFloat(n+1). In all cases, the actual internal random value runs between zero inclusive and 1 exclusive, and all else is just shuffling and scaling. Both of which have an impact on precision and randomness.
 