The size of a table depends on many factors: the data types (numbers and dates generally take less space than character data; VARCHAR only uses the space it needs), the data itself (sparseness, distribution of values, etc.), indexes, partitioning, per-row overhead, the storage block size in your database, replication, rollback requirements, and so on. Many of these vary from one database to another (Oracle vs. MySQL, etc.). If you just want a rough idea, Jeanne's approach is as good as any. If you want a more accurate estimate, look at the recommendations on how to estimate table space for your specific database, and talk to your DBA.
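Another way to get a ballpark figure is to measure rather than calculate: load a representative sample of rows and see how much space it actually takes. A rough sketch using SQLite from Python's standard library (the table name, columns, and row count here are invented for illustration; your real database will have different overheads, so treat the result as an order-of-magnitude guide only):

```python
import os
import sqlite3
import tempfile

# Use a throwaway database file so we can measure real on-disk size.
path = os.path.join(tempfile.mkdtemp(), "estimate.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, name TEXT, created TEXT)")
conn.executemany(
    "INSERT INTO sample (name, created) VALUES (?, ?)",
    [("user%06d" % i, "2024-01-01") for i in range(10_000)],
)
conn.commit()

# Pages allocated * page size = space the database actually occupies,
# including per-row overhead and the primary-key index.
pages = conn.execute("PRAGMA page_count").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
bytes_per_row = pages * page_size / 10_000
print("approx bytes per row:", bytes_per_row)
conn.close()
```

Multiply the bytes-per-row figure by your expected row count for a first estimate; real databases (MySQL, Oracle) also expose their own size views, e.g. information_schema or DBA_SEGMENTS.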
Also, you seem to be confusing memory requirements with disk requirements. Unless you are using an in-memory database, your database stores its data on disk, so you don't need to worry about how much memory a table needs unless you plan to read the whole thing into your application, e.g. with SELECT * FROM my_table;. In general it is your queries, not your table size, that determine how much data you fetch into RAM. The database itself does use plenty of memory, though, so you or your DBA will need to take that into account when making sure your database has enough resources.
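To illustrate the point that your queries, not your table size, drive memory use: most database APIs let you stream results instead of materialising them all at once. A small sketch with Python's sqlite3 module (the table and row sizes are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO my_table (payload) VALUES (?)",
    [("x" * 100,) for _ in range(50_000)],
)

# Memory-hungry for big tables: pulls every row into a Python list at once.
# all_rows = conn.execute("SELECT * FROM my_table").fetchall()

# Better: iterate the cursor, so only a handful of rows are in memory at a time.
total = 0
cursor = conn.execute("SELECT payload FROM my_table")
for (payload,) in cursor:
    total += len(payload)  # process each row, then let it be garbage-collected

print(total)  # 50,000 rows * 100 characters each -> prints 5000000
conn.close()
```

The same idea applies in JDBC (setFetchSize and iterating the ResultSet) or any other client library: the table can be far bigger than your RAM as long as each query only pulls what it needs.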
Prabhu wrote: What would be the best/least expensive data structure to load the below example data to the DB?
Don't try to optimise the physical storage by hand: the database will store the data efficiently according to its own internal rules. One of the many good reasons to use a database is that, as a developer, you don't have to worry about its internal physical storage mechanisms; you work with the abstractions provided by the relational data model and SQL (tables, not files). Concentrate on getting your data model right for your application, make sure your queries are optimised to use indexes properly, and let the database get on with doing its job.
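On "make sure your queries are optimised to use indexes": most databases will show you their query plan so you can check an index is actually being used. A sketch using SQLite's EXPLAIN QUERY PLAN (MySQL has EXPLAIN and Oracle has EXPLAIN PLAN for the same purpose); the orders table here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1_000)],
)

# Without an index on customer_id, this filter forces a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan)  # the plan mentions a SCAN of the table

# Add an index on the column we filter by...
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# ...and the planner now reports an index search instead of a scan.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan_after)  # the plan mentions SEARCH ... USING INDEX
conn.close()
```

Checking the plan like this, before and after adding an index, is exactly the kind of optimisation that is worth your time, unlike second-guessing the storage engine.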