Mike:
There is a mistake in your code: you are using a *local* KeyDB instance to get a key.
If multiple requests are processed simultaneously, you can get duplicate keys, because two or more KeyDB instances may read the same table at the same time and retrieve the same *next* key.
The solution is to synchronize on a single shared *KeyDB* object (or on the KeyDB class), so that only one thread at a time can fetch and increment the next key.
public class SearchController extends HttpServlet implements Serializable
{
    public void doPost(HttpServletRequest request,
                       HttpServletResponse response)
        throws ServletException, IOException
    {
        processAddToList(request, response);
    }

    private void processAddToList(HttpServletRequest request,
                                  HttpServletResponse response)
        throws ServletException, IOException
    {
        // Data bean that can also do inserts and updates of itself to the db
        KeyDB k = new KeyDB();
        // This method sets the table that the PK number is needed for
        k.setTable(ListCONSTANT);
        // This determines the next PK number and sets it into k.id
        k.getNextUniqueVal();
        // Data bean that can also do inserts/updates/deletes of itself to the db
        ListDB lst = new ListDB();
        lst.setID(k.getID());
        try
        {
            lst.insert();
        }
        catch (Exception e)
        {
            // This periodically throws the following error:
            //
            // One or more values in the INSERT statement, UPDATE statement,
            // or foreign key update caused by a DELETE statement are not
            // valid because the primary key, unique constraint or unique
            // index identified by 1 constrains table TEST.LIST from having
            // duplicate rows for those columns. SQLSTATE 23505
        }
    }
}
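To illustrate the race and the fix, here is a minimal, self-contained sketch. The `KeyDB` below is a hypothetical stand-in for your bean (its `getNextUniqueVal` mimics the SELECT-then-UPDATE against the key table); the real class obviously talks to the database instead. The point is only the `synchronized` block around the read-and-increment:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the app's KeyDB bean. The static field plays
// the role of the "next key" row in the database table.
class KeyDB {
    static int nextKey = 1;
    private int id;

    void getNextUniqueVal() {
        id = nextKey;       // read the current next key (the SELECT)
        nextKey = id + 1;   // write back the incremented key (the UPDATE)
    }

    int getID() { return id; }
}

public class Main {
    public static void main(String[] args) throws Exception {
        Set<Integer> keys = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                KeyDB k = new KeyDB();
                // Without this synchronized block, two threads can read the
                // same nextKey and hand out duplicate ids -- which is exactly
                // what produces the SQLSTATE 23505 duplicate-key insert error.
                synchronized (KeyDB.class) {
                    k.getNextUniqueVal();
                }
                keys.add(k.getID());
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // All 1000 generated keys are distinct because the
        // read-then-increment step was made atomic.
        System.out.println(keys.size());
    }
}
```

I synchronized on `KeyDB.class` here because each request creates its own local instance, so locking on the instance itself would not serialize anything; a shared singleton KeyDB would work just as well. (A database sequence or identity column, if DB2 offers one for your table, would sidestep the problem entirely.)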