Some of the things that happen when an update occurs are the following:
- If unique indexes exist (e.g., primary key), verify that the new key (if changed) is unique.
- If RI is involved, make sure no foreign key relationships are violated.
- If there are BEFORE (or INSTEAD OF) triggers, fire those triggers.
- Update each modified index key.
- Write the record to the table.
- If RI is involved (e.g., cascade operations), modify all “child” tables (and recurse downward as necessary).
- If AFTER triggers exist, fire those.
- If replication is involved, write the record to the replication queue.
- Unlock the record if necessary.
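The ordering of those steps can be sketched with a small simulation. This is an illustrative Python sketch only; the class and step names are made up for this example and are not Advantage internals:

```python
# Minimal runnable sketch of the update sequence described above.
# The names here are illustrative, not real Advantage APIs.

class UpdateSimulator:
    def __init__(self, has_triggers=True, has_ri=True, replicated=True):
        self.has_triggers = has_triggers
        self.has_ri = has_ri
        self.replicated = replicated
        self.log = []  # records the order the steps would run in

    def update(self, key_changed=True):
        if key_changed:
            self.log.append("verify unique key")
        if self.has_ri:
            self.log.append("check foreign keys")
        if self.has_triggers:
            self.log.append("fire BEFORE triggers")
        self.log.append("update modified index keys")
        self.log.append("write record")
        if self.has_ri:
            self.log.append("cascade to child tables")
        if self.has_triggers:
            self.log.append("fire AFTER triggers")
        if self.replicated:
            self.log.append("write to replication queue")
        self.log.append("unlock record")
        return self.log

steps = UpdateSimulator().update()
print(steps)
```

Note that the BEFORE triggers, RI checks, and index maintenance all happen while the record is still locked, which is why anything slow in that window delays other users.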
Of those things, the potentially expensive ones are triggers and RI. A trigger could obviously run a long time (it is user defined), but it would not typically block other operations. RI is relatively expensive, but one update should not typically take longer than another.
Here is what might be happening:
- Some user A causes a large index scan on this table. An obvious way this could happen is by setting a filter that covers most or all of the index (e.g., a query/filter with “EMPID > 1”, which would cover the entire table). Setting such an AOF could end up reading the entire index (while holding a read lock). Another possibility would be reading the table in index order while a restrictive filter is set: scanning from one record to the next record that passes the filter could read a large number of index pages.
- While user A scans the index, user B updates the table. User B's update would block behind that index scan while waiting for the write lock.
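The blocking pattern above can be reproduced with two threads and a lock. This is a simplified sketch: the server would actually use a read/write lock (many readers, one writer), but a plain exclusive lock is enough to show how the writer stalls behind a long scan:

```python
import threading
import time

lock = threading.Lock()  # stands in for the index lock
timeline = []            # records the observed order of events

def long_index_scan():
    """User A: holds the lock for the duration of a long scan."""
    with lock:
        timeline.append("A: scan start")
        time.sleep(0.2)  # pretend we are reading many index pages
        timeline.append("A: scan done")

def update_record():
    """User B: requests the write lock while A is still scanning."""
    time.sleep(0.05)     # start after A has acquired the lock
    timeline.append("B: update requested")
    with lock:
        timeline.append("B: update done")

a = threading.Thread(target=long_index_scan)
b = threading.Thread(target=update_record)
a.start(); b.start()
a.join(); b.join()
print(timeline)
```

B's update is requested in the middle of A's scan but cannot complete until the scan releases the lock, which is exactly the symptom of an update that "sometimes" takes much longer than usual.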
If the index is highly fragmented, an index scan (the first time through) could be very expensive. Once the index is cached, subsequent operations would be much faster (probably).
If this is the problem, it would make sense to re-index the table. You might also consider using a larger index page size (if you are using ADT tables); a larger page size can help mitigate the cost of large index scans on fragmented indexes.
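To see why page size matters, some back-of-envelope arithmetic helps: a full scan touches roughly (total keys ÷ keys per page) pages, and keys per page grows with page size. The key size and overhead figures below are made-up illustrative numbers, not actual ADT index layout:

```python
import math

def pages_to_scan(num_keys, page_size, key_size=10, overhead=0.1):
    """Rough estimate of index pages read by a full scan.

    key_size and overhead are hypothetical values chosen for
    illustration; real index page layout differs.
    """
    usable_bytes = int(page_size * (1 - overhead))
    keys_per_page = usable_bytes // key_size
    return math.ceil(num_keys / keys_per_page)

small = pages_to_scan(1_000_000, 512)
large = pages_to_scan(1_000_000, 16384)
print(small, large)
```

Fewer, larger pages mean fewer separate reads, so a scan over a fragmented index pays the random-I/O penalty far less often.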