This week's giveaway is in the Cloud/Virtualization forum.
We're giving away four copies of Production-Ready Serverless (Operational Best Practices) and have Yan Cui on-line!
See this thread for details.

Mike Simmons

Ranch Hand
since Mar 05, 2008

Recent posts by Mike Simmons

I see you're replacing the value of readTextFile[6], and then printing habitatFile[6].  What's the difference between these?  How are these variables related?
2 days ago
Try this program instead, to see what it does:
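The program itself didn't survive in this quote; here's a minimal reconstruction of the kind of test I mean (the array contents are made up -- the point is that the two arrays are separate objects):

```java
// Hypothetical sketch: assigning into one array does not affect a copy of it.
public class ArrayCopyDemo {
    public static void main(String[] args) {
        String[] readTextFile = {"a", "b", "c", "d", "e", "f", "g"};
        String[] habitatFile = readTextFile.clone();   // a separate copy

        readTextFile[6] = "changed";
        System.out.println(readTextFile[6]);  // the new value
        System.out.println(habitatFile[6]);   // still "g" -- the copy is unaffected
    }
}
```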
Favorites in our household:

Stranger Things
La Casa de Papel (boringly renamed "Money Heist")
The Last Kingdom
La Reina del Sur (Queen of the South, the original)
Pablo Escobar, El Patron del Mal
El Ministerio del Tiempo (The Ministry of Time)

Tim Holloway wrote:Anyway, while there's a lot of stuff I like on Netflix, one recent favorite was produced by Spanish Television: El Ministerio del Tiempo (Ministry of Time)

Well, this sort of became a Netflix series, after the fact.  For season 3 at least.  They've been doing a lot of partnerships with stations and production companies in various countries.  

The Last Kingdom also fits this description, originally coming solely from BBC, but now co-produced with Netflix.  So now we get more elaborate muddy English villages ("cities", such as they were...) to have the battles in. :) Great fun.  Likewise La Casa de Papel, originally produced by a Spanish studio, later acquired by Netflix for further production.  Eagerly awaiting the next installment of that one...
1 week ago
Meh - the overhead of creating an extra object here is pretty trivial, I think.  Especially as there's no escaped reference to it, it's easy for a modern JVM to free it immediately after the code completes.  I'm much more concerned with whether it improves readability or not... and I would say it doesn't, really.  Too bad - because this is the sort of thing that happens pretty frequently, and it would be nice to have a short, non-repetitive way to handle it.

For comparison, the Kotlin equivalent is much nicer:
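Roughly like this (the names are made up, just to show the shape):

```kotlin
// made-up names: a nullable person with a name property
val name = person?.name ?: "Unknown"
// from here on the compiler treats `name` as a non-null String
```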

And you end up with "name" being guaranteed by the compiler to be non-null.
1 month ago
I agree with the general response that null probably should be aggressively prevented here.  But I know there are plenty of times when something may be null and we need to work around that anyway.  Here's the most concise Optional code I can see for that:
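With hypothetical names (an employee reference that may be null, a getName() accessor, and an "Unknown" default):

```java
// employee may be null; Employee and getName() are made-up stand-ins
String name = Optional.ofNullable(employee)
                      .map(Employee::getName)
                      .orElse("Unknown");
```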

It's a bit more verbose than the original with null check:
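The plain null-check version, with the same made-up names:

```java
// same hypothetical names as above
String name = employee != null ? employee.getName() : "Unknown";
```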

The only advantage I see of Optional here is that the Optional code only uses "employee" once, while the null check code uses it twice.  So in cases where "employee" is actually a much longer expression, the Optional version can concisely express what you want with no repetition, which is kind of nice.  So:
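With a made-up longer expression standing in for "employee" -- written out only once:

```java
// departments.get(deptId).findManager() is a hypothetical stand-in for
// some longer expression that may evaluate to null
String name = Optional.ofNullable(departments.get(deptId).findManager())
                      .map(Employee::getName)
                      .orElse("Unknown");
```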

compared to
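the null-check version, which has to write the long expression out twice (a made-up expression, for illustration -- note it also gets evaluated twice):

```java
// the long expression appears twice, and is evaluated twice
String name = departments.get(deptId).findManager() != null
        ? departments.get(deptId).findManager().getName()
        : "Unknown";
```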



1 month ago

Piet Souris wrote:Java allows you to write 50_000_000_000, to prevent all possible misery ;)

Well, that still wouldn't prevent me from mixing up "million" and "billion" in my head, which I believe was the issue here.  But it's a good point nonetheless.
4 months ago
@Piet - regarding Day 12, I had a similar problem, kept getting "too small" for my answer.  Finally discovered that for "fifty billion" I had typed 50000000 rather than 50000000000.  Not to imply that anyone *else* would ever be so foolish... but don't forget to recheck the little things that seem obvious.

The thing about day 12 is, I was actually pretty proud of the nice efficient implementation that I had, which would calculate each new generation with a minimum of computation.  Except... it turned out to be no help at all in the actual problem. :| Classic example of premature optimization.

As for the rest, I'm way behind.  I expect to be coming back to these for some time though - they're pretty fun.
4 months ago
@Tim, I had the same experience for day 11 part 2.  I let version 1 keep running while I worked on some speed enhancements for part 2... but then it eventually completed before I was done with the revisions.  I lost motivation to continue optimizing after that since I'm behind on other problems.

But I don't think it's luck, exactly, that the answer is found early on.  More like regression to the mean - as the rectangles get bigger and bigger, they become more "average" in a sense, taking in more of a mix of positive and negative values.  Still, it's at least theoretically possible for a max to occur later, so we need to complete the search, I guess.
4 months ago
Sorry, I misspoke - I meant that doing it kN times, proportionate to the length, becomes O(N^2) overall.  Each individual remove in an ArrayList is indeed O(N), unless it's at the end.

Of course this can also be slightly improved by using an ArrayDeque, O(1) at both ends, but that still doesn't solve the problem in the middle, which is what was needed for day 5.

And obviously, to use LinkedList effectively here you have to use its ListIterator, not any indexed method.  That's precisely why it worked so much better than ArrayList here.
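My exact code isn't posted here, but the shape was something like this (day 5 is the polymer-reaction puzzle: remove adjacent pairs of the same letter in opposite case; this is a from-memory sketch):

```java
import java.util.LinkedList;
import java.util.ListIterator;

// Walk the list with a ListIterator, removing adjacent "reacting" pairs
// (same letter, opposite case) -- no indexed access anywhere.
static int react(String polymer) {
    LinkedList<Character> units = new LinkedList<>();
    for (char c : polymer.toCharArray()) units.add(c);

    ListIterator<Character> it = units.listIterator();
    Character prev = null;
    while (it.hasNext()) {
        char c = it.next();
        if (prev != null && prev != c
                && Character.toUpperCase(prev) == Character.toUpperCase(c)) {
            it.remove();                  // O(1): drop the current unit
            it.previous();
            it.remove();                  // O(1): drop the one before it
            if (it.hasPrevious()) {       // back up so the new neighbors meet
                prev = it.previous();
                it.next();
            } else {
                prev = null;
            }
        } else {
            prev = c;
        }
    }
    return units.size();
}
```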
4 months ago

Stephan van Hulst wrote:I really liked that one, because it's the first real example I've seen of a case where a LinkedList greatly outperforms an ArrayList.

Really?  That's a little surprising - I would guess that may just mean that the list lengths aren't usually big enough to notice how big the difference can be.  But as an example, my first solution to day 5 part 2 used a LinkedList, and part 2 runs in 0.25 seconds.  Replacing the LinkedList with an ArrayList causes the code to run in 3.2 seconds.  If you make the input string longer, it gets much worse, as it's fundamentally an O(N^2) operation to delete from the ArrayList anywhere but the end.

To be clear, I wasn't using the Lists as stacks, but rather, removing from the list as I went.  I eventually refactored to use a stack instead, which didn't seem to change performance substantially from the original 0.25 seconds - though other changes have gotten it down to about 0.18 seconds.  Hard to tell how much the stack itself contributed really.

Unfortunately I'm way behind since then on the other challenges.  Don't you people have anything else you need to be doing? ;)  I look forward to catching up over time.  Thanks for posting about it and bringing it to my attention.
4 months ago

Campbell Ritchie wrote:If you iterate the entry set, you may get a different order of iteration.

I should hope not.  From the javadoc for java.util.SortedMap:

The map is ordered according to the natural ordering of its keys, or by a Comparator typically provided at sorted map creation time. This order is reflected when iterating over the sorted map's collection views (returned by the entrySet, keySet and values methods).

4 months ago
Well, in the first place, I wouldn't be so concerned about creating a new Map now and then - sometimes it's warranted. Especially if the keys are changing, as was the case in your original code.  Didn't you originally want a TreeMap<Double, T> rather than a TreeMap<T, Double>?  If that's the ultimate goal, you might as well do it all at once, and the cost of a new map is essentially nothing, since you need to reinsert each entry anyway.  I originally did this with that custom remap() function I wrote on the fly, but I now realize Collectors.toMap() works as well:
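For example (hypothetical names: counts is the Map<T, Long> you got from counting(), total is the number of observations):

```java
// invert the map while converting counts to probabilities --
// the new TreeMap is keyed by probability instead of by T
TreeMap<Double, T> byProbability = counts.entrySet().stream()
        .collect(Collectors.toMap(
                e -> e.getValue() / (double) total,  // new key: the probability
                Map.Entry::getKey,                   // new value: the old key
                (a, b) -> a,                         // arbitrary pick if two probabilities tie
                TreeMap::new));
```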

However if you want a TreeMap<T, Double>, and are willing to mutate the input map (fine here since you just created it with the counting collector, and no one else has a reference) then you can use some creative casting to accomplish what you want even faster:
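Something along these lines (a sketch -- toProbabilities and the parameter names are mine):

```java
import java.util.TreeMap;

// Mutate the freshly created TreeMap<T, Long> in place, then "change" its type.
// The raw-type cast generates unchecked warnings -- that's deliberate here.
@SuppressWarnings({"unchecked", "rawtypes"})
static <T> TreeMap<T, Double> toProbabilities(TreeMap<T, Long> counts, long total) {
    TreeMap raw = counts;                                    // drop the value type
    raw.replaceAll((k, v) -> ((Long) v).doubleValue() / total);
    return (TreeMap<T, Double>) raw;                         // same map, new value type
}
```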

The above code generates warnings for unchecked casts, which can be cleaned up with the previously shown coerciveCast util method:
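Assuming coerciveCast is the usual hide-the-unchecked-cast helper (my reconstruction -- the actual util was shown earlier in the thread, not here):

```java
import java.util.TreeMap;

// my guess at the shape of the earlier helper: one unchecked cast, hidden away
@SuppressWarnings("unchecked")
static <A, B> B coerciveCast(A a) { return (B) a; }

static <T> TreeMap<T, Double> toProbabilities(TreeMap<T, Long> counts, long total) {
    TreeMap<T, Object> map = coerciveCast(counts);   // same map, values seen as Objects
    map.replaceAll((k, v) -> ((Long) v).doubleValue() / total);
    return coerciveCast(map);                        // and now they "really are" Doubles
}
```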

This works as long as you know the thing passed in really is a TreeMap, and the values can really be any Object type.  So the same map can have new types for its values.  But make sure no one tries to access the original map using the TreeMap<T, Long> reference, as that will probably throw ClassCastException if you try to access any Long values in the map - they aren't Longs any more.
5 months ago
Hi Piet - thanks for clarifications.

I'm not clear which code you're using that latest code with, but I don't believe it works.  The second argument to collectingAndThen() needs to be something like a  

Function<TreeMap<T, Long>, TreeMap<T, Double>>

whereas you have a

Function<Long, Double>
5 months ago
Or, just for fun, the condensed version:
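Something in this spirit (a guess at the shape, with made-up names -- the counting and the Long-to-Double conversion in one collector):

```java
import java.util.List;
import java.util.TreeMap;
import java.util.stream.Collectors;

// count each T with groupingBy/counting, then convert the counts to
// probabilities in the collectingAndThen finisher
static <T extends Comparable<T>> TreeMap<T, Double> probabilities(List<T> data) {
    return data.stream().collect(Collectors.collectingAndThen(
            Collectors.groupingBy(t -> t, TreeMap::new, Collectors.counting()),
            m -> {
                TreeMap<T, Double> result = new TreeMap<>();
                m.forEach((k, v) -> result.put(k, v / (double) data.size()));
                return result;
            }));
}
```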
5 months ago
Stefan, regarding your Accumulator code - nice!  

I see you're still doing a CDF, which is to say a map of observation to cumulative probability, rather than the inverse function.  I.e. Map<T, BigDecimal> rather than Map<BigDecimal, T>.  I think the inverse function is what Stefan actually needs, as previously noted, but I'll go with your interpretation here.

Also I've already noted my feelings on BigDecimal for this problem, but here I'll accept it and move on.  I guess there is a possible benefit in being able to pass in a MathContext, at least for some applications.

I found one small optimization to make in the combine() method, always merging the smaller map into the bigger one:
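Something like this, assuming the Accumulator wraps a TreeMap<T, BigDecimal> of counts (my guess at the surrounding shape -- only combine() is the point):

```java
import java.math.BigDecimal;
import java.util.TreeMap;

// hypothetical Accumulator shape, for illustration
static class Accumulator<T extends Comparable<T>> {
    final TreeMap<T, BigDecimal> counts = new TreeMap<>();

    Accumulator<T> combine(Accumulator<T> other) {
        // always merge the smaller map into the bigger one,
        // so we do fewer lookups and insertions
        if (counts.size() < other.counts.size()) {
            return other.combine(this);
        }
        other.counts.forEach((k, v) -> counts.merge(k, v, BigDecimal::add));
        return this;
    }
}
```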

As for the general design... I see you're aggressively reusing the same Map instance throughout.  That can work.  But I feel it imposes a lot of costs as well: doing all the counting by adding BigDecimal.ONE for every single observation, when long would be much faster, and doing a log(N) TreeMap lookup for every access, when you really only need the sorted nature of the map after the counting has been done.  I'm thinking it's better to let Collectors.groupingBy() and counting() do most of that work.  If BigDecimal is desired, it's really only needed for the division; that can be done in a downstream transformation.  And if we really want to reuse the map rather than recopying it, we can still do that too, with a little... ummm... questionable casting. ;)

Of course, if you reverse the map as I think Piet intended, then you might as well make at least one new Map along the way, since you need to map on different keys.  But here we're assuming that is not what is needed.
5 months ago