Issue


A duplicate key error (loadid/batchid) is raised during authoring.
As a result, batches and loads cannot be submitted, which prevents integration jobs from running.

Cause

The logs show errors such as:

Caused by: org.springframework.dao.DuplicateKeyException: PreparedStatementCallback; SQL [INSERT INTO DL_BATCH (B_LOADID, B_STATUS, B_BATCHID, B_SUBMITDATE, B_LOADCREATOR, B_LOADTYPE, B_JOBCOMPLETIONDATE, B_SUBMITTER, B_LOADCREDATE) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?) ON CONFLICT (B_LOADID) DO UPDATE SET B_STATUS = ?, B_BATCHID = ?, B_SUBMITDATE = ?, B_LOADCREATOR = ?, B_LOADTYPE = ?, B_JOBCOMPLETIONDATE = ?, B_SUBMITTER = ?, B_LOADCREDATE = ?]; ERROR: duplicate key value violates unique constraint "i1dl_batch"
  Detail: Key (b_batchid)=(3061) already exists.; nested exception is org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "i1dl_batch"
  Detail: Key (b_batchid)=(3061) already exists.


This is caused by the next value of the batchid SQL sequence already existing in the table (typically because a batch was created on another environment). That value therefore cannot be used as a primary key.

Solution

One way to fix this is to manually restart the sequence at a higher value.

Identify current values

First, get the current value of the SQL sequence and the highest value used as batchid. (In a healthy situation, these two values are equal.)

select last_value from semarchy_repository.seq_batchid;
select max(batchid) from semarchy_repository.mta_integ_batch;
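
As a sketch (PostgreSQL syntax, using the same table and sequence names as above), the two values can also be compared in a single query to make the discrepancy obvious:

-- Compare the sequence position with the highest batchid in use.
-- A negative or zero gap means the next sequence value would collide.
SELECT s.last_value AS sequence_value,
       b.max_batchid,
       s.last_value - b.max_batchid AS gap
FROM   semarchy_repository.seq_batchid s,
       (SELECT max(batchid) AS max_batchid
        FROM   semarchy_repository.mta_integ_batch) b;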

Modify the sequence

Then, fix the sequence so that the newly generated batchid values are available as primary keys.

alter sequence semarchy_repository.seq_batchid
restart with <number>;

Replace <number> with a value of at least max(batchid) + 1.
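
Alternatively (a sketch assuming PostgreSQL), setval can compute the restart point directly from the table, avoiding the manual lookup:

-- Move the sequence to the highest existing batchid.
-- The next call to nextval() will then return max(batchid) + 1.
SELECT setval('semarchy_repository.seq_batchid',
              (SELECT max(batchid) FROM semarchy_repository.mta_integ_batch));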


Tip: Identify sequences

To retrieve information about sequences:

In Postgres

SELECT * FROM information_schema.sequences;

In SQL Server

SELECT * FROM sys.sequences;
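
If the repository runs on SQL Server, the sequence can be restarted with the same ALTER SEQUENCE syntax (a sketch; the schema name is assumed to match the PostgreSQL example, and RESTART WITH only accepts a constant, so the value must be computed from max(batchid) beforehand):

-- SQL Server: restart the sequence above the highest existing batchid.
-- Replace <number> with a value of at least max(batchid) + 1.
ALTER SEQUENCE semarchy_repository.seq_batchid RESTART WITH <number>;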