Java Code Review and Psychology

 
Greenhorn
Posts: 7
Hi All!

We are running an experiment to improve how people do code review. We have an idea rooted in psychology, and we are testing whether it is actually useful and helpful.
If you have 15 minutes to spare between now and Wednesday (Aug 21) and have some Java skills, could you help us out and participate in this experiment?

https://codereview-experiment.eu/

The experiment involves doing a code review and a mind-bending psychological task. Of course, your data will only be stored and used in anonymized form.

Please let me know if you have further questions!

PS: We are going to donate 5 USD to a non-profit organization on the participants' behalf (see more about this on the first page of the experiment)!

Thanks a lot!
Alberto
 
Marshal
Posts: 79177
Welcome to the Ranch

Please tell us who is conducting this experiment, and what it is in aid of. Plain publication? PhD thesis? Something else?
 
Rancher
Posts: 261
Since I have a PhD myself, I have always been intrigued by academic scientific research. The site states that participants need to stay focused until the end for the data to be valid. Based on what objective criteria will you include/exclude data to limit the bias in your study? I'm just curious...
 
Saloon Keeper
Posts: 10705
I would prefer to see all the questions first before committing to spending the time answering them.
 
Alberto Bacchelli
Greenhorn
Posts: 7
Hi Campbell Ritchie,

Thank you for your reply! Sure, I am happy to give you more details:

- Who is doing it: Davide (Ph.D. student at Delft University of Technology, The Netherlands), Gul (Assistant Professor at University of Gothenburg, Sweden), and Me (SNSF Professor at University of Zurich, Switzerland).
- What it is in aid of: This work is part of a scientific article that we are going to submit to a software engineering venue. It is going to be both an article and part of Davide's PhD thesis. As soon as the article is accepted (hopefully soon!) we will post a link here to the free version of the article (that is, not behind a paywall), so that we can discuss what we found.

Hope this clarifies it!

Thank you,
Alberto
 
Alberto Bacchelli
Greenhorn
Posts: 7

Carey Brown wrote:I would prefer to see all the questions first before committing to spending the time answering them.



Sure, I understand your concern. I cannot give you all the content beforehand, as that would defeat the purpose of the experiment, but I am very happy to let you know how it is structured:

1. We will ask some demographic questions (e.g., how much you program, how many code reviews you do, ...). These are mostly multiple-choice questions, so very quick to answer.
2. Then, we ask you to do the code review, asking you to focus on the bugs (that is, errors that make the code not work in all scenarios) rather than maintainability issues (e.g., readability).
3. Then, we move to the psychology part: We ask you to pay attention to words that we display on the screen very quickly and write 3 of them. We ask you to do this 3 times at increasing speed.
4. Finally, we ask you some more multiple-choice questions that should not take more than 1 minute to complete.
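To make the bug-vs-maintainability distinction in step 2 concrete, here is a small Java sketch (the classes and numbers are invented for illustration, not taken from the experiment's actual review code): the first method fails outright in one scenario, while the second is correct but merely hard to read.

```java
public class ReviewExample {
    // Bug: works for most inputs but crashes on an empty array
    // (ArrayIndexOutOfBoundsException) -- the kind of defect the
    // experiment asks reviewers to focus on.
    static int max(int[] xs) {
        int m = xs[0];                // fails when xs is empty
        for (int i = 1; i < xs.length; i++) {
            if (xs[i] > m) m = xs[i];
        }
        return m;
    }

    // Maintainability issue only: correct for all inputs, but the
    // name `f` and the magic number hurt readability -- out of scope
    // for the experiment's review task.
    static double f(double x) {
        return x * 0.0825;            // what is 0.0825? a tax rate?
    }

    public static void main(String[] args) {
        System.out.println(max(new int[] {3, 1, 4}));  // 4
        System.out.println(f(100.0));                  // 8.25
    }
}
```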

I hope this clarifies your doubts!

Thank you,
Alberto
 
Alberto Bacchelli
Greenhorn
Posts: 7

Brecht Geeraerts wrote:Since I have a PhD myself, I have always been intrigued by academic scientific research. On the site it is stated that the participants need to stay focussed until the end for the data to be valid. Based on what objective criteria will you include/exclude data to limit the bias in your study? i'm just curious...



Hi Brecht,

Thank you for your question! Is it OK if I come back to you after we close the experiment? I would not want to influence the behavior of any of the participants. Otherwise, I would also be happy to discuss it with you in private.

Thank you,
Alberto
 
Marshal
Posts: 8857
Are these related to some extent?

https://coderanch.com/t/714433/code-reviews/engineering/Experiment-Code-Reviews-GitLab (from 6 days ago)
https://coderanch.com/t/697113/open-source/Automation-Modern-Code-Review (from 27 July 2018)

And I'm sure I saw more of this kind from even earlier times.
 
Alberto Bacchelli
Greenhorn
Posts: 7
Hi Liutauras!

Liutauras Vilda wrote:Are these related to some extent?

https://coderanch.com/t/714433/code-reviews/engineering/Experiment-Code-Reviews-GitLab (from 6 days ago)




The one above is related, but at the moment we are focusing on the one in this thread.


Liutauras Vilda wrote:
https://coderanch.com/t/697113/open-source/Automation-Modern-Code-Review (from 27 July 2018)

And I'm sure I saw more of this kind from even earlier times.



The one above is not related to us. It is from a former colleague at UZH.

In general, as we are studying how to improve software development, we have found the CodeRanch community to be very interested and responsive. For this reason, from time to time, we like to involve you in our studies. If you are interested, here are two other example studies where people from CodeRanch participated in our surveys, providing us with very useful data: ESEC/FSE 2019 and ICSE 2018.

Cheers,
Alberto
 
Liutauras Vilda
Marshal
Posts: 8857
Alberto Bacchelli, thanks for clarifying. Good luck with your research.
 
Sheriff
Posts: 17644

Alberto Bacchelli wrote:
2. Then, we ask you to do the code review, asking you to focus on the bugs (that is, errors that make the code not work in all scenarios) rather than maintainability issues (e.g., readability).


Interesting. I do exactly the opposite for code reviews I conduct. My thinking is that bugs should be addressed by unit testing. Team-level code reviews are for ensuring everyone who looks at the code comes to the same understanding of its intent. That often doesn't happen when code is not expressive and readable.

In my book, bugs are caused by misunderstanding. If you focus first on bugs without making the code readable and understandable, then you are ignoring the root of the problem.
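A minimal Java sketch of this view (hypothetical code, not from any real project): each check documents one piece of the agreed-upon intent, including the boundary case where misunderstandings tend to hide, so a functional regression surfaces as a failing check rather than relying on a reviewer's eye.

```java
public class DiscountTest {
    // Production code under review (hypothetical).
    static double discountedPrice(double unitPrice, int quantity) {
        double rate = quantity >= 10 ? 0.10 : 0.0;  // bulk discount from 10 items
        return unitPrice * quantity * (1.0 - rate);
    }

    public static void main(String[] args) {
        // Each check documents one agreed behavior, including the
        // boundary at exactly 10 items.
        check(discountedPrice(10.0, 9), 90.0);   // below threshold: no discount
        check(discountedPrice(10.0, 10), 90.0);  // at threshold: 10% off
        check(discountedPrice(10.0, 11), 99.0);  // above threshold: 10% off
        System.out.println("all checks passed");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```

A reviewer who reads only these checks already knows the discount rule and its threshold, which is the shared understanding the review is meant to establish.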
 
Ranch Hand
Posts: 65
Done! Good luck with your research!
 
Alberto Bacchelli
Greenhorn
Posts: 7

Michael Krimgen wrote:Done! Good luck with your research!



Thank you so much!
 
Alberto Bacchelli
Greenhorn
Posts: 7

Junilu Lacar wrote:
Interesting. I do exactly the opposite for code reviews I conduct. My thinking is that bugs should be addressed by unit testing. Team-level code reviews are for ensuring everyone who looks at the code comes to the same understanding of its intent. That often doesn't happen when code is not expressive and readable.

In my book, bugs are caused by misunderstanding. If you focus first on bugs without making the code readable and understandable, then you are ignoring the root of the problem.



This is definitely reasonable to do and we interviewed many developers who do it as well.
In this case, we decided to focus on errors, but the other types of maintainability issues (readability and understandability) are subject of our future work!
 
Liutauras Vilda
Marshal
Posts: 8857

Junilu Lacar wrote:My thinking is that bugs should be addressed by unit testing. Team-level code reviews are for ensuring everyone who looks at the code comes to the same understanding of its intent.


That's something to engrave.

So the review feedback shouldn't point out the presence of bugs, but rather the lack of unit tests for the particular parts of the code that are not covered (hence bugs slipped in).
 
Brecht Geeraerts
Rancher
Posts: 261
I also made my contribution!

I'd definitely be interested to read the peer-reviewed manuscript once it is published!  
 
Junilu Lacar
Sheriff
Posts: 17644
300
Mac Android IntelliJ IDE Eclipse IDE Spring Debian Java Ubuntu Linux
  • Mark post as helpful
  • send pies
    Number of slices to send:
    Optional 'thank-you' note:
  • Quote
  • Report post to moderator

Liutauras Vilda wrote:So the review feedback shouldn't point out a presence of bugs, but the lack of unit tests for the particular parts of the code which are not tested (hence bugs slipped in).


Yes, pretty much.

We start out code reviews by running all the tests. That way, we at least know that there are no known bugs.

Then we go through the tests and make sure we all have the same understanding of what the intent is for each one. We don't have the code's author walk us through the code. Rather, the author stands back and lets the reviewers tell the story as they understand it from what has been written, then confirms whether we understood it correctly. Any misunderstanding could indicate that bugs are already in the code or could be introduced later. In that case we either refactor the code (test and, possibly, production) so that it expresses intent more clearly, and/or write more tests to show that we have a shared understanding of the intent and of any problems that could arise from misunderstanding it.

For what it's worth, I failed to identify any of the bugs in the sample code. Maybe I'm just getting old or maybe I've just become too reliant on automated unit tests to find bugs for me.
 
Greenhorn
Posts: 9

Brecht Geeraerts wrote:I also made my contribution!

I'd definitely be interested to read the peer-reviewed manuscript once it is published!  



Definitely! As soon as we publish the manuscript, we will share it in this thread.
 
Davide Preosti
Greenhorn
Posts: 9

Junilu Lacar wrote:

We start out code reviews by running all the tests. That way, we at least know that there are no known bugs.

Then we go through the tests and make sure we all have the same understanding of what the intent is for each one.



That's very interesting! So you read the test code first and then the production code? We did a study last year (yep, also here on CodeRanch, among other places) where we looked at exactly this. We called it "TDR" (Test-Driven Review). You can find it here: https://pure.tudelft.nl/portal/files/51391076/icse2019.pdf.

TLDR: very few people apply this practice. Those who do reported that they use it to get an idea of what the code does before diving into the real implementation. They use tests as "documentation": indeed, a unit test can be seen as the smallest possible use case. However, the main drawback of this practice is that developers write horrible tests (sometimes they don't write them at all), which would make TDR impossible to apply.

BTW: we need 10 more people to participate; do you know anyone who could be interested in such a study?

Results are almost ready
 
Junilu Lacar
Sheriff
Posts: 17644
Sturgeon's Law pretty much applies: 90% of developers write crap code, period.

I am an experienced developer and I did not see any of the bugs you had in the survey code. As I said, maybe I'm getting old or maybe I have grown too reliant on unit tests to catch bugs for me.

Just because most developers write crappy tests or don't write them at all doesn't mean that Test-Driven Reviews won't work. You just have to spend time teaching people how to do it properly. My gut and experience tell me that relying on code reviews to catch bugs is unreliable at best.
 
Junilu Lacar
Sheriff
Posts: 17644
The test code is indeed regarded as a design specification and an example of how the production code is supposed to work.

Reading the test code first helps everyone on the team gain a common understanding of the intent of the code. I teach people to start debugging from the tests. If there's a problem with production code then there should be at least one test that fails. If there is a problem but no failing test, then you're missing a test. So you need to understand what the problem is first by writing a failing test to verify your understanding. Only then can you confidently change production code to fix the bug and make the failing test pass.
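That debugging workflow can be sketched in a few lines of Java (the `average` example and names are invented for illustration): first reproduce the reported problem as a failing expectation, and only then change production code until it passes.

```java
public class DebugFromTests {
    // Step 0 (bug report): average(1, 2) returns 1.0 instead of 1.5.
    static double averageBuggy(int a, int b) {
        return (a + b) / 2;       // bug: integer division truncates
    }

    // Step 2: the fix is made only after the failing check below has
    // confirmed our understanding of the defect.
    static double averageFixed(int a, int b) {
        return (a + b) / 2.0;     // widen to double before dividing
    }

    public static void main(String[] args) {
        // Step 1: a check that fails against the buggy code verifies
        // we understand the problem before touching production code.
        System.out.println("buggy: " + averageBuggy(1, 2));  // 1.0 -- the failing expectation
        System.out.println("fixed: " + averageFixed(1, 2));  // 1.5 -- the check now passes
    }
}
```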
 