
Is it the same in Hadoop to have two data nodes of 50 GB each and one data node of 100 MB?

 
Monica Shiralkar
Ranch Hand
Is it the same in Hadoop to have two data nodes of 50 GB each and one data node of 100 MB? If not, which one is better (i.e. gives faster processing)?

thanks
 
chris webster
Bartender
Hadoop is all about distributing your data and your processing across multiple cheap machines. The data is replicated so that there are, say, 3 copies of each block of data, with different copies on different machines. If you have more nodes than replicas, e.g. 3 replicas across 6 nodes, then on average each node only contains half the total original data volume. Hadoop knows where your data is replicated, so it can decide to process different subsets of your data on different nodes at the same time. This is how Hadoop allows you to exploit the power of distributed processing.

If you only have two nodes, and your replication factor is 2 or more, then each node contains all your data anyway, so Hadoop cannot decide how to break up the processing in this way. And if you only have one node, then nothing is distributed at all.
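To make the replica arithmetic above concrete, here is a rough sketch in plain Python (not Hadoop API code), assuming HDFS spreads the replicas evenly across the nodes; the function name is just made up for illustration:

```python
def fraction_per_node(replicas: int, nodes: int) -> float:
    """Average fraction of the total data that ends up on each node."""
    # A node can hold at most one full copy of the data.
    return min(replicas / nodes, 1.0)

print(fraction_per_node(3, 6))  # 0.5 -> scheduler can choose among nodes
print(fraction_per_node(2, 2))  # 1.0 -> both nodes hold all the data
print(fraction_per_node(3, 1))  # 1.0 -> single node, nothing distributed
```

Once that fraction hits 1.0, every node holds a full copy, so block placement no longer gives the scheduler any real choice about where data-local processing runs. On a real cluster you can see where the blocks of a file actually landed with hdfs fsck /some/path -files -blocks -locations.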
 
Vikrama Sanjeeva
Ranch Hand
chris webster wrote: Hadoop is all about distributing your data and your processing across multiple cheap machines. The data is replicated so that there are, say, 3 copies of each block of data, with different copies on different machines. If you have more nodes than replicas, e.g. 3 replicas across 6 nodes, then on average each node only contains half the total original data volume. Hadoop knows where your data is replicated, so it can decide to process different subsets of your data on different nodes at the same time. This is how Hadoop allows you to exploit the power of distributed processing.

If you only have two nodes, and your replication factor is 2 or more, then each node contains all your data anyway, so Hadoop cannot decide how to break up the processing in this way. And if you only have one node, then nothing is distributed at all.


In the first case you mentioned, i.e. 3 replicas across 6 nodes, you said Hadoop can decide what to process where.

Whereas in your last example, i.e. two nodes with a replication factor of 2 or more, you said Hadoop cannot decide how to break up the processing.

My question: why can't Hadoop decide in the second case? If both nodes are deployed on two separate machines, and one machine is heavily loaded and doesn't have as good resources as the other, don't you think YARN will select the second machine to process the task?

Thanks.

Viki.
 
Jesper de Jong
Java Cowboy
Saloon Keeper
One node with 100 MB will be faster than two nodes with 50 GB each, because in the first case there is only 1/1000th as much data (100 MB versus 100 GB in total).

You probably meant 100 GB instead of 100 MB.
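Just to spell out the arithmetic behind that 1000x (a trivial check, nothing Hadoop-specific):

```python
total_two_nodes_mb = 2 * 50 * 1000  # two 50 GB nodes = 100,000 MB in total
single_node_mb = 100                # one node with only 100 MB
print(total_two_nodes_mb // single_node_mb)  # 1000 -> 1000x as much data
```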
 
Monica Shiralkar
Ranch Hand
Jesper de Jong wrote: One node with 100 MB will be faster than two nodes with 50 GB each, because in the first case there is only 1/1000th as much data (100 MB versus 100 GB in total).

You probably meant 100 GB instead of 100 MB.


Yes, I meant 100 GB. So will one node of 100 GB be faster, or two nodes of 50 GB each?

thanks
 