
node: Avoid duplicate key constraints when changing chain shard#6199

Open
erayack wants to merge 3 commits into graphprotocol:master from erayack:fix-chain-shard-duplicate-key-constraint

Conversation

@erayack

@erayack erayack commented Oct 16, 2025

Description

This PR fixes issue #6196 by implementing a robust backup naming system that prevents duplicate key constraint violations when reverting chain shard changes.

Problem

When changing a chain's shard using graphman chain change-shard, the existing chain is renamed to <chain>-old. However, if a subsequent attempt is made to revert the chain's shard back to the original shard, the operation fails due to a unique constraint violation because <chain>-old already exists in the database.

Solution

  • Robust backup naming: Implemented next_backup_name() function that generates unique backup names by appending numeric suffixes when conflicts exist
  • Backup reuse optimization: When reverting to the original shard, the system now reuses the existing backup instead of creating a new one
  • Backup preservation: Previous backups are preserved with unique names to maintain data integrity
  • Logging: Added logging to track backup operations and outcomes

Changes

  • Added ChainSwapOutcome struct to track the results of chain swap operations
  • Implemented next_backup_name() function for conflict-free backup naming
  • Enhanced change_block_cache_shard() function with robust backup handling logic
  • Added detailed logging for backup operations
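The naming scheme from the bullet list above can be sketched roughly as follows. This is an illustrative stand-alone version, not the PR's actual code: the `existing` set stands in for the chain names already present in the database, which the real implementation would have to query from Postgres.

```rust
use std::collections::HashSet;

// Sketch of the backup-naming idea: start from `<chain>-old` and append
// `-1`, `-2`, ... until the name is unused. `existing` is a stand-in for
// the chain names already stored in the `chains` table.
fn next_backup_name(chain: &str, existing: &HashSet<String>) -> String {
    let base = format!("{}-old", chain);
    if !existing.contains(&base) {
        return base;
    }
    (1..)
        .map(|i| format!("{}-{}", base, i))
        .find(|name| !existing.contains(name))
        .expect("the suffix sequence is unbounded, so a free name exists")
}
```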

Testing

The fix handles the following scenarios:

  1. Initial shard change (creates <chain>-old)
  2. Reverting to original shard (reuses existing backup)
  3. Multiple shard changes (preserves all backups with unique names)

Fixes #6196

@github-actions

This pull request hasn't had any activity for the last 90 days. If there's no more activity over the course of the next 14 days, it will automatically be closed.

@github-actions github-actions Bot added the Stale label Jan 15, 2026
@github-actions github-actions Bot closed this Jan 29, 2026
@lutter
Collaborator

lutter commented Jan 29, 2026

We really need to review this; it shouldn't have been closed

@lutter lutter reopened this Jan 29, 2026
@github-actions github-actions Bot removed the Stale label Jan 30, 2026
@erayack erayack force-pushed the fix-chain-shard-duplicate-key-constraint branch 2 times, most recently from c140bbc to ed4e528 on April 26, 2026 09:22
- Implement robust backup naming system to avoid conflicts
- Add support for reusing existing backups when appropriate
- Preserve previous backups with unique names
- Add comprehensive logging for backup operations

Fixes graphprotocol#6196
@erayack erayack force-pushed the fix-chain-shard-duplicate-key-constraint branch from ed4e528 to c5439e8 on April 26, 2026 09:33
@erayack
Author

erayack commented Apr 26, 2026

Hi @lutter, I rebased PR #6199 onto current master and resolved the conflict in node/src/manager/commands/chain.rs.

The PR is now mergeable. I kept the original fix intent, but ported it to the current async chain-store code, preserved the safe chain-store creation ordering, simplified the backup-name selection flow, and added focused tests for backup-name formatting.

@erayack erayack changed the title Fix duplicate key constraint when reverting chain shard changes fix: duplicate key constraint when reverting chain shard changes Apr 26, 2026
@erayack erayack changed the title fix: duplicate key constraint when reverting chain shard changes node: Avoid duplicate key constraints when changing chain shard Apr 26, 2026
Comment thread node/src/manager/commands/chain.rs Outdated

// Update the current chain name to chain-old
update_chain_name(conn, &chain_name, &new_name).await?;
let previous_backup_final_name = if let Some(backup) = existing_backup.as_ref() {
Member

@dimitrovmaksim dimitrovmaksim Apr 27, 2026


Your code will hit the same constraint in the following case:
mainnet on shard_a
mainnet-old on shard_b

Then running:
graphman chains change-shard mainnet shard_b

  • Line 326 computes the next free backup name, e.g. mainnet-old-1.
  • Line 327 renames mainnet-old -> mainnet-old-1.
  • Line 330 then tries to rename mainnet-old-1 -> mainnet.

At that point the original mainnet row still exists, so the rename to mainnet violates the unique constraint on chains.name. The current mainnet -> mainnet-old-{next} rename only happens later on lines 339-340.

More generally, I’m not sure preserving multiple old caches is the right approach here. Moving a chain cache between shards should be a rare maintenance operation, and old caches are only useful as an immediate rollback. Reusing an existing backup on the target shard could make sense for an immediate rollback, although in other cases it may be too outdated. For a sequence like A -> B -> C, keeping the old cache on A does not seem useful, so probably dropping that previous backup before preserving the current B cache as the new mainnet-old would make the behavior simpler and avoid having multiple stale caches around the shards.
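The failing ordering described in this comment can be checked against a toy model of the unique constraint on `chains.name`. Everything here is illustrative, not graph-node code: a `BTreeSet` stands in for the table, and the `rename` helper fails exactly when the target name is already taken, mirroring the duplicate-key error.

```rust
use std::collections::BTreeSet;

// Toy model of the `chains.name` unique constraint: renaming to a name
// that already exists fails, like the duplicate-key violation would.
fn rename(names: &mut BTreeSet<String>, from: &str, to: &str) -> Result<(), String> {
    if names.contains(to) {
        return Err(format!("duplicate key: `{}` already exists", to));
    }
    names.remove(from);
    names.insert(to.to_string());
    Ok(())
}
```

Starting from `{mainnet, mainnet-old}`, the first rename (`mainnet-old` to `mainnet-old-1`) succeeds, but the second (`mainnet-old-1` to `mainnet`) fails because the original `mainnet` row is still present.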

Author


I updated the flow to keep only one rollback backup and to order the reuse case as mainnet-old -> temp, mainnet -> mainnet-old, temp -> mainnet so the unique constraint cannot fire.

One thing I’m less certain about is policy: when an older mainnet-old exists and we are not reusing it, do you want that old backup dropped only after the new change-shard succeeds, or should it be deleted earlier as part of simplifying the behavior? Also, are you okay with updating the in-memory BlockStore mapping explicitly during these renames, or would you prefer a simpler approach even if it leaves cleanup to a reload?

Member

@dimitrovmaksim dimitrovmaksim Apr 28, 2026


Sorry for the late reply, I was contemplating what we really want to happen here so we don't make you go back and forth. What I'm thinking is:

  • Add a check if {chain}-old already exists and:
    • If its shard matches the target shard, prompt the user that it will re-use that backup (they can decline and remove it with chain remove if they want)
  • If it is on a different shard than the target, print a hint to use chain remove.

The point is to give more control to the indexers rather than doing things for them in the background

@erayack
Author

erayack commented Apr 29, 2026

Implemented the behavior you suggested. @dimitrovmaksim

Current flow is now:

  • if {chain}-old exists on a different shard, the command aborts and tells the user to remove it explicitly with graphman chain remove
  • if {chain}-old exists on the target shard, the command prompts before reusing it
  • otherwise it keeps the current cache as {chain}-old and creates a fresh cache on the target shard

I also changed the reuse ordering to avoid the unique constraint issue: {chain}-old -> temp, {chain} -> {chain}-old, temp -> {chain}

I added focused unit tests for the boundary decision logic.
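Under a toy model of the `chains.name` unique constraint (a set of names plus an illustrative `rename` helper — a sketch, not the actual async chain-store code), the three-step ordering never targets a taken name:

```rust
use std::collections::BTreeSet;

// Illustrative model: renaming to an existing name fails, like the
// duplicate-key constraint would.
fn rename(names: &mut BTreeSet<String>, from: &str, to: &str) -> Result<(), String> {
    if names.contains(to) {
        return Err(format!("duplicate key: `{}` already exists", to));
    }
    names.remove(from);
    names.insert(to.to_string());
    Ok(())
}

// The reuse ordering: `{chain}-old` -> temp, `{chain}` -> `{chain}-old`,
// temp -> `{chain}`. At every step the target name is free.
fn reuse_backup(names: &mut BTreeSet<String>, chain: &str) -> Result<(), String> {
    let old = format!("{}-old", chain);
    let temp = format!("{}-old-temp", chain);
    rename(names, &old, &temp)?;
    rename(names, chain, &old)?;
    rename(names, &temp, chain)
}
```

Starting from `{mainnet, mainnet-old}`, the sequence completes and ends with exactly `mainnet` (the reused backup) and `mainnet-old` (the previous active cache).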

.execute(conn).await?;
Ok(())

Ok(ChainSwapOutcome {
Member


I think this return value is redundant. The transaction does not compute any new state; ChainSwapOutcome just returns reuse_existing_backup and allocated_chain.is_some(), which are already known before entering the transaction. This can return () and the struct can be completely removed.

.await
.ok_or_else(|| anyhow!("unknown chain: {}", &chain_name))?;
let new_name = format!("{}-old", &chain_name);
let ident = chain_store.chain_identifier().await?;
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

One thing I missed is that the current chain and the backup identities may differ. graphman provides a command to update the genesis hash, graphman chain update-genesis, which is part of the ChainIdentifier, so we should fetch the backup's identifier as well, compare them, and abort if they differ.

Comment on lines +319 to +353
let existing_backup_disposition = existing_backup_disposition(
    existing_backup.as_ref().map(|backup| backup.shard.as_str()),
    target_shard.as_str(),
);
let reuse_existing_backup = matches!(
    existing_backup_disposition,
    ExistingBackupDisposition::PromptReuse
);

conn.transaction::<(), StoreError, _>(|conn| {
    async {
        let shard = Shard::new(shard.to_string())?;
        match existing_backup_disposition {
            ExistingBackupDisposition::ProceedFresh => {}
            ExistingBackupDisposition::AbortWrongShard(backup_shard) => {
                bail!(
                    "`{}` already exists on shard `{}`. Remove it with `graphman chain remove {}` before changing `{}` to shard `{}`",
                    canonical_backup_name,
                    backup_shard,
                    canonical_backup_name,
                    chain_name,
                    target_shard,
                );
            }
            ExistingBackupDisposition::PromptReuse => {
                let prompt = format!(
                    "`{}` already exists on shard `{}` and will be reused as the active `{}` chain.\nProceed?",
                    canonical_backup_name, target_shard, chain_name
                );
                if !prompt_for_confirmation(&prompt)? {
                    println!(
                        "Aborting. Remove `{}` with `graphman chain remove {}` if you want to create a fresh cache on shard `{}`.",
                        canonical_backup_name, canonical_backup_name, target_shard
                    );
                    return Ok(());
                }
            }
        }
Member


The ExistingBackupDisposition enum seems a little over-modeled for this control flow. It only wraps the None / same shard / wrong shard cases.
It can be inlined to something like

let reuse_existing_backup = match existing_backup.as_ref() {
    None => false,
    Some(backup) if backup.shard != target_shard => {
        bail!(...);
    }
    Some(backup) => {
        ...
    }
};

This will remove the need to manage the lifetime on the enum.


// Create a new chain with the name in the destination shard
let _ = add_chain(conn, &chain_name, &shard, ident).await?;
let temp_backup_name = if let Some(backup) = existing_backup.as_ref() {
Member


The temp backup name helpers may be too much for what we need. It's only a transient name for the three-way rename, so something hardcoded like {chain}-old-temp should probably be enough.

update_chain_name(conn, &chain_name, &canonical_backup_name).await?;

if reuse_existing_backup {
debug_assert!(temp_backup_name.is_some());
Member

@dimitrovmaksim dimitrovmaksim Apr 29, 2026


I would not use debug_assert! here; in a --release build it is compiled out, and the code will panic on the unwrap() right after it anyway.



Development

Successfully merging this pull request may close these issues.

[Bug] Cannot Revert Chain Shard Change Due to Duplicate Key Constraint
