Sorry to seem unhelpful, Dhaval, but this is way too big a question to resolve through a few forum queries. It sounds like you need to find an experienced data architect to work with you on this project. I suspect dimensional modelling may well be overkill for this project - your local DB expert will have to advise you on that.
Here are a few thoughts, but you really should get a local DB expert to help you with this stuff.
Is the set of parameters (readings?) from each weather station reasonably fixed i.e. do you get the same 20 (?) parameters (with different values) each hour from each station?
If so, then
you should probably store each set of readings from a single station as a single record in your table, with each parameter in a different column, because this will make it much easier to correlate them, e.g. to see the temperature when it's cloudy at station #1. You can always add extra columns for new parameters later if necessary. For example:
Station ID | Date       | Hour | Rainfall (cm) | Cloud Cover (%)
1          | 30/04/2011 | 11   | 3.5           | 100
2          | 30/04/2011 | 11   | 1.5           | 65
1          | 30/04/2011 | 12   | 2.0           | 80
2          | 30/04/2011 | 12   | 2.5           | 90
The other advantage here is that it's really easy to compare the same parameter across different stations, dates etc., e.g. to get the average rainfall for a set of stations or dates.
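For example, a query like this would do it (I'm inventing the table and column names here to match the sample above - adjust to taste):

-- Average rainfall per station for April 2011
-- (weather_reading and its column names are illustrative assumptions)
SELECT station_id,
       AVG(rainfall) AS avg_rainfall
FROM   weather_reading
WHERE  reading_date BETWEEN DATE '2011-04-01' AND DATE '2011-04-30'
GROUP BY station_id;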
A dimensional model is very flexible, but has some costs and disadvantages. For example, if you had a dimensional model with a separate record for each Station/Date/Hour/Parameter, you might have to do a lot of fairly expensive "pivot" operations to put the logical record back together to do things like "find me the station readings where rainfall > 3 cm and cloud cover < 80%". These queries are much easier if you store each set of readings in a single record instead, as described above.
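With the single-record layout, that query is just a simple filter - again a sketch against the assumed weather_reading table:

-- Heavy rain with broken cloud: one row per reading, no pivoting needed
SELECT station_id, reading_date, reading_hour, rainfall, cloud_cover
FROM   weather_reading
WHERE  rainfall > 3
AND    cloud_cover < 80;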
Also, it seems like your data is basically all "facts", with only a few dimensions: most of the data is observed values (facts), with just the station/date/time as potential dimensions, and these "dimensions" are the primary key for a given set of observations anyway. So your main data table is already a "fact" table in any case.
So let's assume you keep the readings as single records for each station/date/time combination, with each parameter in a different column e.g. Rainfall, Sunshine Hours, Cloud Cover etc. The unique business key on this table would probably be something like: Station ID, Reading Date, Reading Hour (1 to 24, or 0 to 23?). You could consider having a surrogate key (numeric ID) as the primary key instead, but to be honest I'm not sure there would be any advantage in this case, as you'd still need a unique index on the business key anyway.
Station ID is a foreign key pointing to the parent Weather Station (your "client" entity?) record in a separate table. The primary key for the Weather Station should be a numeric ID (generated from a sequence or similar) i.e. a surrogate key, as this will make it easier to manage any changes to your weather station details without impacting the large volume of data in your main table of readings. You'd probably want to index the Station ID in your main data table to make it easier to find all the readings for a given station.
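Putting that together, the two tables might look roughly like this in Oracle-style SQL - a sketch only, with all names and column types my own assumptions:

-- Parent table: one row per weather station, with a surrogate primary key
CREATE TABLE weather_station (
    station_id    NUMBER        PRIMARY KEY,  -- populated from the sequence below
    station_name  VARCHAR2(100) NOT NULL
);

CREATE SEQUENCE weather_station_seq;

-- Main readings table: one row per station/date/hour, one column per parameter
CREATE TABLE weather_reading (
    station_id    NUMBER     NOT NULL REFERENCES weather_station (station_id),
    reading_date  DATE       NOT NULL,
    reading_hour  NUMBER(2)  NOT NULL,  -- 0 to 23
    rainfall      NUMBER(5,1),          -- cm
    cloud_cover   NUMBER(3),            -- percent
    -- ...further columns for the other parameters...
    CONSTRAINT weather_reading_pk
        PRIMARY KEY (station_id, reading_date, reading_hour)
);

Note that because Station ID is the leading column of the primary key, the primary key index also serves "all readings for a given station" queries, so you may not need a separate index for that.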
You might have other tables e.g. for the organisation(s) who manage individual stations, staff contact details, or whatever else you need, but these should be easy to design.
The volume of data and the kind of operations you want to do on the readings are also significant for your design. How much data will you be storing?
If you have 24 readings per day * 365 days per year then that's only 8760 readings per year from each weather station, or less than 90,000 per station over 10 years. If you have 10,000 stations, that's still less than a billion records for all your weather station readings over 10 years. This is a significant but not unmanageable amount of data for an enterprise RDBMS. Also, because the "fact" records are basically all numeric values, individual records do not take up much storage space either.
But if your team decides the volume of data is too much for a single table, then you would have to think about e.g. putting the data for each weather station in a separate table (but this makes it a little bit harder to compare/aggregate data across different stations), or data for each year in a separate table (harder to compare data from 31 December 2010 to 1 January 2011). Your approach to this question would depend on your requirements, but my instinct would still be to keep the readings together i.e. any given set of readings for a given station/date/time should still be held as a single record, whichever table it ends up in.
Either way, you would certainly need to work with your DBAs to ensure the database is optimised for your needs e.g. use of indexes, partitioning etc to improve performance and maintenance.
Table partitions are like physical "sub-tables" based on a particular key e.g. date: each set of data for a particular value of the partition key is held in a separate physical section of the table. So when you query the table for a given date, for example, the query engine knows it only has to read the relevant partition instead of searching the whole index/table. It is also possible to back up/restore partitions individually so your DBA can back up last week's data easily, without having to do anything with last year's data. On the other hand, if you do a lot of queries that hit most of your partitions, then partitioning can sometimes slow down your query performance. You'd need to talk to your data architect/DBA about this.
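As a rough illustration of what that looks like in Oracle syntax (11g or later, using monthly interval partitioning; the table and column names are the same assumed ones as above):

-- Readings table partitioned by month of reading_date (illustrative sketch)
CREATE TABLE weather_reading (
    station_id    NUMBER     NOT NULL,
    reading_date  DATE       NOT NULL,
    reading_hour  NUMBER(2)  NOT NULL,
    rainfall      NUMBER(5,1),
    cloud_cover   NUMBER(3)
)
PARTITION BY RANGE (reading_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    -- everything before 2011 goes here; later months get their own
    -- partitions created automatically
    PARTITION p_initial VALUES LESS THAN (DATE '2011-01-01')
);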
Meanwhile, back to summary data. If the weather station readings are fixed i.e. the readings for a particular station/time are never updated, then you should be able to extract and store summary (roll-up) data regularly, without having to worry about updates to the original source values. Even if the source values are occasionally updated (e.g. to correct errors in the readings), you can still design processes to manage these changes, e.g. by refreshing the related summary data.
So you might have separate tables containing e.g. average or total readings per station per month, which would make it easier for people to run queries for this information without having to trawl through all the source records every time. You would need to speak to your users to find out what queries they would want to run regularly, and design suitable tables and processes to generate and store this data regularly.
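A monthly roll-up might be populated with something like this (the summary table name and columns are, again, assumptions for illustration):

-- Roll up average rainfall and cloud cover per station per month
INSERT INTO monthly_weather_summary
       (station_id, summary_month, avg_rainfall, avg_cloud_cover)
SELECT station_id,
       TRUNC(reading_date, 'MM'),  -- first day of the month
       AVG(rainfall),
       AVG(cloud_cover)
FROM   weather_reading
GROUP BY station_id, TRUNC(reading_date, 'MM');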
If you're using Oracle, you can use materialised views, which are "snapshots" i.e. they consist of a SQL SELECT query to define the data you want to see, and a physical store for that data, which you can refresh regularly to bring in the latest data from your source tables. If you use MVs, it's a good idea to coordinate them with your table partitioning strategy, because Oracle can do clever stuff here e.g. it can tell if it only has to refresh a single partition of data in the MV, rather than the entire data set.
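The same monthly roll-up as an MV might look roughly like this - an illustrative sketch (a fast-refreshable MV would also need materialised view logs on the source table, which I've left out):

-- Materialised view version of the monthly roll-up,
-- refreshed on demand (e.g. from a scheduled job)
CREATE MATERIALIZED VIEW mv_monthly_weather
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT station_id,
       TRUNC(reading_date, 'MM') AS summary_month,
       AVG(rainfall)             AS avg_rainfall,
       AVG(cloud_cover)          AS avg_cloud_cover
FROM   weather_reading
GROUP BY station_id, TRUNC(reading_date, 'MM');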
You might want to think about archiving strategies as well. How long do you want to keep the readings online, or how often do you want to archive them off to a separate store (if at all)? If you archive data, how do you provide access to it if necessary? Would you bother archiving your summary data, or would you keep it available throughout the life of the system? If you've archived the original data, how do you manage things like summary tables e.g. if you need to re-build them for some reason?
Anyway, I think that's enough from me. Others may have different advice to give you, but I hope this will at least give you some idea of the issues you need to talk about with your local DB expert.
Good luck.
Chris