I am getting this error when my Java code tries to access the DB (Oracle 9i) through the JDBC driver. It happens only in the production environment, and only occasionally. We cannot reproduce the problem on our test servers, which have the exact same configuration. Here is the trace.
java.sql.SQLException: Closed Statement
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:158)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:305)
    at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:3945)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3000)
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3059)
I have searched all my code for any place where a statement might be closed before executeQuery() is called, and found nothing.
Please suggest any other places where the connection could end up being closed.
Could you describe the following in more detail:
1. the configuration of your data source
2. your use of transaction managers
3. which container you are running in
4. the passage of code where your application executes the statement
It could be a concurrency problem, a transaction management problem, or a database configuration problem.
The application initiates statement preparation, but the database server does the real work: it parses the statement and keeps it for as long as that particular connection exists. When the connection closes, all prepared statements belonging to it are closed immediately. For the lifetime of an open connection, a statement prepared once can be executed many times with different parameter values.
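The reuse described above looks roughly like the sketch below: prepare once, then loop over the bind values. This is an illustration only; the table and column names (`event_log`, `feed`) are made up, not from the poster's schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class PreparedReuse {
    // Prepare the statement once; only the bind value changes per execution.
    // Closing the Connection would also close this statement implicitly.
    public static int insertFeeds(Connection conn, int[] feedIds) throws SQLException {
        int total = 0;
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO event_log (feed) VALUES (?)")) {
            for (int id : feedIds) {
                ps.setInt(1, id);            // change the parameter value
                total += ps.executeUpdate(); // reuse the already-parsed statement
            }
        }
        return total;
    }
}
```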
I think the most likely explanation is that your JDBC objects are referenced by instance variables or, even worse, class variables. So when one thread closes a Connection or Statement, another thread can no longer use that Statement.
It is best to reference JDBC objects through local variables: create them as late as possible and close them as soon as possible after use. If this causes problems because Connections are opened too frequently, use connection pooling.
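A minimal sketch of that advice: every JDBC object is a local variable, created late and closed by try-with-resources in reverse order of creation, so no other thread can ever see or close them. The DataSource stands in for a connection pool, and the query and table name are assumptions for the example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

class LocalJdbc {
    // Connection, statement, and result set are all method-local, so a
    // concurrent caller gets its own set and cannot trigger "Closed Statement".
    public static int countEvents(DataSource ds, int agreement) throws SQLException {
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT COUNT(*) FROM events WHERE agreement = ?")) {
            ps.setInt(1, agreement);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        } // everything is closed here, automatically, even on exceptions
    }
}
```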
We also had faced this problem when tried to connect and execute the statements using remote JDBC Connection. We changed it to RPC call and then have a local JDBC call to resolve this problem. Our application has regional servers located in each region talking to its own DB. Not sure if your configuration is also something similar in nature.
You may want to verify that the class that opens the connection does not declare the connection variable as static.
A static variable is shared across all instances of the class, so multiple threads will interfere with each other through it.
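The failure mode with a shared static statement can be shown with a toy stand-in (this is a simulation, not real JDBC): once any caller closes the shared object, every later execute sees the equivalent of Oracle's "Closed Statement" error.

```java
class SharedStatementDemo {
    // Toy stand-in for a JDBC Statement: once closed, any further
    // execute fails, mimicking the driver's "Closed Statement" SQLException.
    static class FakeStatement {
        private volatile boolean closed = false;
        void close() { closed = true; }
        void executeQuery() {
            if (closed) throw new IllegalStateException("Closed Statement");
        }
    }

    // Anti-pattern: one statement shared by the whole class via a static field.
    static FakeStatement shared = new FakeStatement();

    public static String raceOnSharedStatement() {
        shared.close();            // "request A" finishes and cleans up...
        try {
            shared.executeQuery(); // ...while "request B" is still mid-flight
            return "ok";
        } catch (IllegalStateException e) {
            return e.getMessage(); // exactly the symptom from the stack trace
        }
    }
}
```

With method-local statements instead of the static field, the second request would hold its own open statement and never hit this error.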
I have made some progress in understanding the problem...
Basically, I have been looking at the code from a threading perspective. Only three components are involved: log4j, Oracle XML Publisher, and Hibernate. Discounting log4j, the Oracle object is created, opened, and closed within the procedure, so I concluded that Hibernate was the most likely cause of the problem (I know, there's an element of educated guesswork in this).
Anyway, I found the Hibernate timeouts (all around 300 seconds) and increased them to 3000 seconds... and lo and behold, the reports are generated successfully, although some of them take over 20 minutes each.
So, this is a performance problem with the SQL itself:
, ROUND(SUM(DECODE(op, 3, 0, count))* :uplift,0) x_events
, ROUND(SUM(DECODE(op, 3, count, 0))* :uplift,0) y_events
WHERE ref_time >= :period_start
AND ref_time < :period_end
AND feed = 13
AND agreement = :agreement
AND line = 1
GROUP BY service, reply
ORDER BY 1,2
The SQL in question is querying data for a specific month from a monthly partitioned table that contains about 11 million records a month. We currently have 99 partitions.
Executing the SQL in Toad / SQL Developer / SQL*Plus takes around 20 seconds... so why 20 minutes from the application?
Looking in the session browser, I can see that the 20-minute query is scanning all 99 partitions, whereas the 20-second query only touches the single partition for the month in question.
I am definitely using the same query in my tests as I am cutting and pasting from the current active query.
I'm not sure if this is heading off topic for the forum thread, but the question now is...
Why is Oracle using a clearly inefficient execution plan when we run the query with bind variables, and how do we get it to use a more efficient one?
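One plausible explanation, assuming `ref_time` is the partition key, is that with `:period_start`/`:period_end` as bind variables the optimizer cannot prove at parse time which partition the range hits, whereas the literal dates pasted into Toad allow static pruning. Gentler first steps would be comparing EXPLAIN PLAN output for the literal and bind versions and checking optimizer statistics on the partitioned table. A blunter workaround is to inline the month boundaries as date literals for this one heavy report query, accepting the loss of cursor sharing for it. The helper below is a sketch of that idea; the `TO_DATE` format mask and column name are assumptions, and in real code the literal values must come from trusted inputs, never user-supplied strings.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class PruneSql {
    // Build the date-range predicate with inlined TO_DATE literals so the
    // optimizer can prune to a single partition at parse time. Trade-off:
    // each distinct month produces a distinct SQL text (no cursor sharing).
    public static String monthPredicate(LocalDate periodStart, LocalDate periodEnd) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        return "ref_time >= TO_DATE('" + f.format(periodStart) + "', 'YYYY-MM-DD')"
             + " AND ref_time < TO_DATE('" + f.format(periodEnd) + "', 'YYYY-MM-DD')";
    }
}
```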