posted 13 years ago
- Does your user really need all those rows in one go? Consider paginating the results and using a LIMIT clause to cap how much data is fetched per request (see the pagination sketch after this list).
- Are you sending "select *" queries? A query should ask only for the columns the app actually needs for that use case.
- If the query has joins and WHERE clauses, restructure it so that the most restrictive conditions are applied first, so fewer rows are carried into the later joins.
- Your DB will have an explain plan command; go through its documentation for your DB. The plan describes how the DB is going to process your query and roughly how many rows each stage will touch.
You can use it to find the clauses or joins that hit too many rows. Iteratively add table indexes and re-examine the plan until it can't be optimized further (see the explain-plan sketch after this list). This step usually improves query times dramatically.
- Increase your DB's memory cache sizes in its configuration. The more memory you give it, the more it can cache and serve from memory without hitting the slower storage drive.
- Finally, you can increase the web server's socket timeouts as a last-ditch measure, but it's a bad idea. The focus should be on reducing that 12 minutes to 30 seconds or a minute, not on a hack to accommodate 12 minutes.
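
Here's a minimal sketch of the pagination and column-selection points above, assuming a JDBC connection and a hypothetical orders table with id, customer_id, and total columns; the LIMIT/OFFSET syntax is MySQL/PostgreSQL style, so adjust for your DB.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderDao {

    private static final int PAGE_SIZE = 50;

    // Fetch one page of orders, asking only for the columns the page actually displays.
    public void printOrdersPage(Connection conn, int pageNumber) throws SQLException {
        String sql = "SELECT id, customer_id, total "   // no SELECT *
                   + "FROM orders "
                   + "ORDER BY id "
                   + "LIMIT ? OFFSET ?";                // MySQL/PostgreSQL-style paging
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, PAGE_SIZE);
            ps.setInt(2, pageNumber * PAGE_SIZE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%d %d %s%n",
                            rs.getLong("id"), rs.getLong("customer_id"), rs.getBigDecimal("total"));
                }
            }
        }
    }
}
```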
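
And a rough sketch of checking the explain plan from code, assuming MySQL, where "EXPLAIN SELECT ..." returns a result set (other DBs use a different command, e.g. EXPLAIN PLAN FOR on Oracle); the table and index names in the comment are hypothetical.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class ExplainPlanCheck {

    // Print each row of the plan so you can spot stages that scan too many rows.
    public static void printPlan(Connection conn, String query) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("EXPLAIN " + query)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(md.getColumnLabel(i)).append('=').append(rs.getString(i)).append("  ");
                }
                System.out.println(row);
            }
        }
    }

    // If the plan shows a full scan on a join/WHERE column, add an index and re-check, e.g.:
    // CREATE INDEX idx_orders_customer_id ON orders (customer_id);
}
```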
After all that, you may find the query is still too slow, perhaps under load when many users are hitting it at once. Then it's time to solve it at the architectural level, with something like memcached. The idea is the same: cache as much as possible in memory if you want quick response times.
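
For the memcached idea, here's a minimal read-through cache sketch, assuming the spymemcached client is on the classpath and a memcached daemon is running on localhost:11211; the key scheme, TTL, and the loadReportFromDb() helper are hypothetical.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class ReportCache {

    private static final int TTL_SECONDS = 600; // keep cached entries for 10 minutes

    private final MemcachedClient memcached;

    public ReportCache() throws IOException {
        this.memcached = new MemcachedClient(new InetSocketAddress("localhost", 11211));
    }

    // Return the cached result if present; otherwise run the slow query once and cache it.
    public String getReport(int reportId) {
        String key = "report:" + reportId;
        String cached = (String) memcached.get(key);
        if (cached != null) {
            return cached;                         // served from memory, no DB hit
        }
        String fresh = loadReportFromDb(reportId); // the slow query, hypothetical helper
        memcached.set(key, TTL_SECONDS, fresh);
        return fresh;
    }

    private String loadReportFromDb(int reportId) {
        // ... run the (optimized) query via JDBC and build the result ...
        return "report-" + reportId;
    }
}
```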