It will work, but 1~2 GB is really a small amount of data.
Hadoop shines when the dataset is -really, really- big.
Check your requirements and write a prototype to see whether Hadoop is going to be helpful or not.
You might find that a hand-crafted solution is more viable (and less memory-hungry) than Hadoop.
You can certainly run Hadoop with small amounts of data on a single machine, e.g. to explore how it works. But Hadoop and map-reduce in general are really designed for "big data" problems that suit this kind of parallel batch processing across lots of machines, e.g. big ETL processes, analysing huge amounts of data, etc. In your example, 2GB is not "Big Data" - 2GB will fit in RAM on most laptops these days, and you could store more data than that on your phone! - so as Hussein says, a normal database might be a better solution. Also, you need to think about what you want to do with your data: map-reduce might not be particularly relevant for your needs if you are doing lots of OLTP or serving data to busy websites, for example.
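To make the map-reduce idea concrete, here's a minimal single-machine sketch of the classic word-count job in Python - no Hadoop required, and all names are illustrative, not part of any Hadoop API. It shows the three phases Hadoop parallelises across machines: map, shuffle (group by key), and reduce.

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce stages.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reducer: combine all counts for one word into a total.
    return (key, sum(values))

def word_count(lines):
    pairs = [pair for line in lines for pair in map_phase(line)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = word_count(["big data is big", "data is data"])
print(counts)  # {'big': 2, 'data': 3, 'is': 2}
```

For 2GB of input, a loop like this on one machine is perfectly adequate; Hadoop earns its overhead only when the `lines` above are terabytes spread over many disks.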
No more Blub for me, thank you, Vicar.