Hi, I have an issue I'd like some help with. I have some ideas to fix it, but I'd like your thoughts...
There are some big queries performing a lot of joins and a lot of filtering (WHERE clauses), and they are taking too much time, around an hour per query, all as part of an EJB scheduler job...
When these jobs start, the whole system lags; when they end, the system returns to normal...
Analyzing the queries, I see there are big tables involved, with around a million records each.
Some of you might suggest optimizing the query, and yes, I'm analyzing that. But for the sake of argument, assume the query can't change much. These tables have been growing throughout their history and won't stop growing (I know this is not right, and of course the correct thing to do is move old rows into history tables). So even if I optimize the query, at some point it will get slow again simply because the tables hold so many records.
I'm thinking of using a cache, like Redis, to hold these huge tables, and taking the records from the cache instead of from the DB, and gaining performance from there...
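To make the idea concrete, here is a minimal cache-aside sketch. A `ConcurrentHashMap` stands in for Redis (with Redis you would swap the map operations for GET/SETEX calls and add a TTL); the class and method names are made up for illustration, and `dbLoader` simulates the expensive query:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside sketch: try the cache first, fall back to the DB on a miss,
// then populate the cache so the next lookup is cheap.
public class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String lookup(String key, Function<String, String> dbLoader) {
        // 1. Try the cache first.
        String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        // 2. Cache miss: run the expensive query, then store the result.
        String fresh = dbLoader.apply(key);
        cache.put(key, fresh);
        return fresh;
    }
}
```

Note this only helps if the job reads the same rows repeatedly; a Redis cache also has to be kept in sync when the underlying tables change, which is part of the trade-off discussed below.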
There is no magic bullet that solves all query performance issues.
You have ever-expanding tables, so can you partition them? That would reduce the number of rows that need to be handled.
How about a materialised view?
Are all the indexes kept up to date (depends on the db)?
Are there indexes?
And so on.
Because, unless the cache is holding only a selection of the rows, I really can't see what you'd gain.
And if it is a selection then you can achieve the same thing by partitioning or a materialised view.
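To illustrate those suggestions, the DDL might look roughly like this (Oracle-style syntax; `big_table`, `created_at`, and the other names are all made up, and the exact syntax varies by database):

```sql
-- Range partitioning by date: the optimiser can skip whole partitions
-- that the WHERE clause rules out (partition pruning).
CREATE TABLE big_table (
    id          NUMBER PRIMARY KEY,
    created_at  DATE NOT NULL,
    status      VARCHAR2(20)
)
PARTITION BY RANGE (created_at) (
    PARTITION p2022    VALUES LESS THAN (DATE '2023-01-01'),
    PARTITION p2023    VALUES LESS THAN (DATE '2024-01-01'),
    PARTITION p_future VALUES LESS THAN (MAXVALUE)
);

-- An index on the columns the WHERE clauses filter on.
CREATE INDEX idx_big_table_status ON big_table (status, created_at);

-- A materialised view precomputes an expensive aggregation once,
-- refreshed on demand instead of recomputed by every job run.
CREATE MATERIALIZED VIEW mv_job_input
REFRESH COMPLETE ON DEMAND
AS SELECT status, COUNT(*) AS cnt
   FROM big_table
   GROUP BY status;
```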
Thanks Dave, I'll go with the materialised view, and maybe the cache too; that should make an impact. I'll let you know how it went...