Warren Dew

blacksmith
since Mar 04, 2004

Recent posts by Warren Dew

The bytes going out are supposed to be a copy of the bytes going in. That obviously means that the server must receive the bytes before it can send them back.

In the XKCD example, how does the server decide to send something back after it has received the 3 bytes of "HAT"? Why did it wait until it received 3 bytes - why didn't it stop at 2, or 1, and allocate and send back the buffer starting with just "H" or "HA"? The server must know somehow that the actual data length is only 3, as opposed to 500, to know to reply after 3 bytes, even if it mistakenly uses the 500 for the amount of memory to put in the reply.

But hey, I decided to look it up myself in RFC 6520. It turns out that the client does provide the length redundantly, just as I suspected. There is a length defined for the overall message, and then within the message, there is a length defined for the payload of the message, with the remainder presumed to be padding. It's this defining of two dependent lengths that allows the client to send inconsistent lengths to cause the bug.

RFC 6520 even explicitly warns against this bug:

"If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently."
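The two dependent lengths, and the check the RFC mandates, can be sketched in Java. This is a simplified illustration of the message layout, not OpenSSL's actual code; the parsing details are assumptions:

```java
import java.nio.ByteBuffer;

// Simplified sketch of an RFC 6520 HeartbeatMessage as carried in a TLS record.
// The record layer supplies the total message length; the message itself
// carries a second, client-controlled payload_length field.
class HeartbeatCheck {

    // Returns the payload to echo, or null if the message must be discarded.
    static byte[] extractPayload(byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message);
        byte type = buf.get();                        // request(1) / response(2)
        int payloadLength = buf.getShort() & 0xFFFF;  // client-supplied length

        // RFC 6520: type(1) + payload_length(2) + payload + at least 16 bytes
        // of padding must fit in the record the server actually received.
        if (payloadLength > message.length - 3 - 16) {
            return null;  // "MUST be discarded silently" -- the missing check
        }
        byte[] payload = new byte[payloadLength];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        // 3-byte payload "HAT" plus 16 bytes of padding, but payload_length
        // claims 500 -- the inconsistency behind the bug.
        ByteBuffer lie = ByteBuffer.allocate(1 + 2 + 3 + 16);
        lie.put((byte) 1).putShort((short) 500).put("HAT".getBytes());
        System.out.println(extractPayload(lie.array()));  // null: discarded
    }
}
```

The server can answer after 3 bytes of payload because the record layer's length, not payload_length, tells it when the message is complete; the bug was trusting payload_length for the reply buffer.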

But how does the server know that the client has finished sending the data? If it has been sent a length of 500, why isn't it waiting until 500 bytes come in?

Does the client send the length redundantly? If so, it sounds like the real bug is in the TLS spec.
It's a good basic explanation, but it doesn't answer my question.
I thought I understood the bug, but then I realized I still had a question about it. So the client sends a length for the heartbeat, then sends data. The bug is what happens when the data is shorter than the length. But why doesn't the server wait until the full length's worth of bytes comes through? Why does it echo the heartbeat when, from its point of view, it doesn't have the full heartbeat yet?

Jeanne Boyarsky wrote:Ugh. And thanks.


I agree with the reaction.

I've found a site that provides a method to use an integration test directory separate from the unit test directory:

http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html

This method appears to work for the most part, though I wasn't able to get the resource files moved using his method. For future reference, I used a slight variation:
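The exact variation isn't reproduced here; a typical arrangement of build-helper-maven-plugin along his lines, with the resource directory added explicitly, looks roughly like this (directory names are assumptions):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-integration-test-sources</id>
      <phase>generate-test-sources</phase>
      <goals><goal>add-test-source</goal></goals>
      <configuration>
        <sources>
          <source>src/integration-test/java</source>
        </sources>
      </configuration>
    </execution>
    <execution>
      <id>add-integration-test-resources</id>
      <phase>generate-test-resources</phase>
      <goals><goal>add-test-resource</goal></goals>
      <configuration>
        <resources>
          <resource>
            <directory>src/integration-test/resources</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```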


13 years ago

Marco Zanini wrote:2) I create an object of that class by calling the constructor, at this point in my program there are 2 objects running: the main class and the Runnable one.


Perhaps this is where the misconception is. At this point, there is still only one thread running, the main thread. The Runnable is just an object that got created; it's not "running". If you constructed a String, it wouldn't be "running" either; it would also just be an object that got created.

It isn't until you call Thread.start() that there is more than one thread. Then that new thread calls Runnable.run(), so the run() method is running in the new thread. Meanwhile, the main thread is still around, doing other things.
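To make that concrete, a minimal sketch (class and thread names are mine):

```java
// A Runnable is just an object until some thread actually runs it.
class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
            "run() executing in: " + Thread.currentThread().getName());

        // One thread so far: main. Constructing the Runnable (or even the
        // Thread) starts nothing, just as constructing a String would not.
        Thread worker = new Thread(task, "worker");

        worker.start();  // now a second thread exists; it calls run()
        worker.join();   // main thread waits for it, then continues
        System.out.println("main thread still here: "
            + Thread.currentThread().getName());
    }
}
```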
In an ideal world, things would work as you describe. In the real world, they don't, quite. I'll give you two examples:

1. Maven's own repository has removed older versions of artifacts. This has resulted in unexpected build breakages for organizations that didn't keep their own repository when the newer version was incompatible with their build. Reason enough to have your own organizational repo, to back it up, and perhaps even to keep it under version control if you ever change it, such as by removing old versions of artifacts or adding new versions. The artifacts themselves don't change, but the repo does.

2. The Maven team is notorious for releasing bug fixes to their own plugins only as snapshots. That has often meant that people had to use snapshots. They seem to be doing better on this recently, but again, in the real world, sometimes one may have to compromise on strict version specification.

If the Maven team itself can't maintain perfect discipline, why should anyone else be expected to? Sometimes people have to do the practical things needed to work in the real world.
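The organizational repo from point 1 can be made the single source of artifacts by mirroring everything in settings.xml; a sketch, with the id and URL as placeholders:

```xml
<!-- ~/.m2/settings.xml: route all artifact requests through the
     organization's own repository (id and URL are placeholders) -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-repo</id>
      <mirrorOf>*</mirrorOf>
      <url>http://repo.example.com/maven2</url>
    </mirror>
  </mirrors>
</settings>
```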
13 years ago
Versions of artifacts selected for a Maven build can depend on what's available in the repository. This can result in a build breaking unexpectedly when the repository is updated. For that reason, it can be useful to have a history of what was in a repository at a given point in time. That's what I think the justification was in that project.

Granted, adequate specification of version numbers in pom files alleviates that issue.
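For instance, pinning an exact version in the pom (the artifact here is chosen only as an example) takes the repository's current state out of the equation:

```xml
<dependency>
  <groupId>commons-logging</groupId>
  <artifactId>commons-logging</artifactId>
  <!-- exact version, not a range, LATEST, or RELEASE -->
  <version>1.1.1</version>
</dependency>
```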
13 years ago

Peter Johnson wrote:1. You place JARs into SCM? Oh my. How many copies of commons-logging do you have in there wasting space?


This is hardly fixed by Maven. On my one previous major Maven project before the current one, the architect and build people's fix for the random Maven-induced build breakages included our own Maven repo, something which I think is mandatory for any serious commercial use of Maven. Then, guess what, our Maven repo was checked in to our version control system.

Disk space is cheap. Checking binaries in to a version control system is not a problem in this day and age.
13 years ago
On some projects, the distinction between unit tests and integration tests is clear, even if it isn't drawn where most people expect. In particular, on my current project, unit tests are the tests that can be run without JBoss running; integration tests are the ones that require the JBoss container.

This appears to be the distinction that the Maven lifecycle envisions, with a setup phase (pre-integration-test) that allows deploying into the container and starting it; that phase runs after the package phase but before the integration-test phase.

That said, there doesn't seem to be any established convention for where the tests should go. You can use some plugins and just name the tests differently, keeping them all in the test directory; other plugins envision an integration test directory in parallel to the test directory.

To me, the latter seems to be more consistent with the philosophy of maven, as the idea behind having a separate test directory in the first place is to distinguish tests on something other than just the name. I haven't actually gotten to the point where I'm converting integration tests to maven yet, though.
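For the naming-convention route, the failsafe plugin binds differently named tests to the integration-test phase; a minimal sketch:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- runs **/*IT.java tests, after the package phase -->
        <goal>integration-test</goal>
        <!-- fails the build on integration test failures -->
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```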

Has anyone actually gotten to that point, what was your situation, and how did you handle it?
13 years ago
According to the article, Gafter was already working at Google, not Sun.

I don't think C# can ever displace Java as a server side development platform. C# ties you to Windows, and there's just too much of a financial advantage to using a free operating system for those running big server farms. Cross-platform development is a huge advantage, and Microsoft's business model prevents them from actively supporting a cross-platform language.

That's not to say Java won't ever be displaced. Lots of people think Python or Ruby are the way to go now. I personally don't agree, but something will undoubtedly come along in time.

I do think that the Java community process has flaws, and has resulted in questionable feature sets in the last couple of releases. On the other hand, I also don't think that Microsoft is capable of producing a better language.

I do think it's a pity Sun can't keep up their level of support. I'm not sure they need to, though. Other players - notably IBM - also have a vested interest in the success of Java.
16 years ago
And here I was thinking the answer to the original question was, "it depends on whether you want a Short (upper case S) object, or a short (lower case s) value".
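A line or two of Java makes the difference visible (values chosen arbitrarily):

```java
class ShortDemo {
    public static void main(String[] args) {
        short s = 7;                     // primitive value
        Short boxed = Short.valueOf(s);  // object wrapper; can be null
        Short nothing = null;            // legal for the object, not the value
        System.out.println(s + " " + boxed + " " + (nothing == null));
    }
}
```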
16 years ago
So, the following is thread safe due to immutability:
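(The original code blocks from this post didn't survive the archive; the classes in this thread are reconstructions in the spirit of the question, not the originals.) For the immutable case:

```java
// Reconstruction (not the original post's code): thread safe because the
// only state is final and set once in the constructor -- immutability.
final class Immutable {
    private final int value;

    public Immutable(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }
}
```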


And the following is safe due to synchronization:
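A reconstruction of the synchronized case (again, not the original post's code):

```java
// Thread safe because every access to the mutable field, read and write,
// goes through the object's monitor.
class Synced {
    private int value;

    public synchronized void setValue(int value) {
        this.value = value;
    }

    public synchronized int getValue() {
        return value;
    }
}
```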

Now consider the following class:
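A reconstruction of the mixed case; the original block wasn't preserved, but the edit note at the end of the post implies a class named Mixed of roughly this shape:

```java
// One final field read through a plain accessor, one nonfinal field
// guarded by synchronization. The question: does the final field's
// initialization-safety guarantee still hold with nonfinal fields present?
class Mixed {
    private final int fixed;
    private int variable;

    public Mixed(int fixed, int variable) {
        this.fixed = fixed;
        this.variable = variable;
    }

    public int getFixed() {          // no synchronization: relies on final
        return fixed;
    }

    public synchronized int getVariable() {
        return variable;
    }

    public synchronized void setVariable(int variable) {
        this.variable = variable;
    }
}
```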

Two questions:

1. Is this thread safe?

2. Can someone point to the relevant part of the Java Language Specification?

The reason I ask is that I can only find comments about "final" ensuring thread safety in the context of immutable objects, by ensuring a store barrier before exiting the constructor. Is such a barrier ensured even if only some of the member data is final? Does the barrier come at the end of the constructor, or might it occur before the initialization of nonfinal members?

(Edited to remove an unintended "synchronized" before the accessor of the final variable in the Mixed class.)
[ October 22, 2008: Message edited by: Warren Dew ]
I'm presently using YourKit for the first time, to address some memory usage issues. I've had a couple of issues that cause me a bit of concern.

First, it has thrown some stack traces in a window that requests them to be sent to the company. That would be a good sign that they care about their product, but it raises some concerns when combined with my second issue.

Second, the included memory calculations (size of object plus descendants) seem questionable in some cases. In particular, I was surprised to find several items stored in our session having widely varying included sizes when I expected them to be identical. Digging a bit deeper, it looks like the descendant objects add up properly for the larger items, but they add up to far more than is listed for the smaller items.

Has anyone else seen anything similar? Am I missing something, or is there reason to worry about the dependability of the results? Should I push my company to buy another product, even though they've paid for a couple of YourKit licenses already?
18 years ago
I like the fact that you distinguish between unit tests and functional tests. I think this is a key distinction.

However, I disagree about what is more important. Unit tests can help get the code working more easily, but it's the functional test suite that's really important.

Think about it for a minute ... suppose you have a beautiful suite of 2000 unit tests, all with nice mocks to insulate them from the other tests, that runs in 20 seconds. It tests every single method using mock objects. You run the whole thing frequently, every time you compile. What are you getting out of it?

Well, you're getting something out of the one test that tests the unit you're working on at the moment. With all the tests made fully independent through the use of mocks and such, though, the other 1999 tests are doing exactly nothing. You're not touching the code they test, so their results cannot change based on the work you're doing. You might as well not run them.

The time would better be spent running one integration test for the unit that you're working on. One integration test won't take any more time than a full suite of unit tests, and it does you a lot more good. It not only tests your own code, but also the interfaces with the code that your code uses. It's in those interfaces that the more difficult bugs generally hide; without integration tests, you'd never catch them at all. Then run the half-hour integration suite before you check the code in, over lunch or something.

As for the case where one bug fix broke 47 tests - that can only happen if the 47 tests, or the units they were testing, were dependent on the bug that was fixed. If the tests are written correctly, those 47 units will need to be changed to reflect the fact that the bug they were depending on no longer exists. If the tests are written incorrectly, the tests themselves should be fixed. Making integration tests into independent unit tests just sweeps the problem under the rug, where the bugs can breed and come back to bite you harder later. Better to fix the problem now, even if it means fixing 47 more files.
[ March 30, 2006: Message edited by: Warren Dew ]
18 years ago