Bug: duplicate key value violates unique constraint "PK_SMStreams" #12

Open
Aandree5 opened this issue Feb 20, 2025 · 12 comments
Labels
bug (Something isn't working), needs more details (Unclear or needing more details)

Comments

@Aandree5
Contributor

Describe the Bug

I've noticed this a few times. I just updated the container to version 0.7.3, got this error, and it cannot start up. It has happened in the past too, not necessarily when updating, but also when restarting.

Could this be because a migration is not compatible with 0.3.1 (the version I was on), or was there duplicate data, and if so, how was it allowed?
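For context on how a duplicate could arise here: the migration (visible in the log below) derives each stream's new "Id" as an MD5 of a key (for M3UKey 0, the "Url") joined with the "M3UFileId" via generate_md5/generate_m3u_key_value. A minimal Python sketch of that derivation, assuming the same concat behavior as the PL/pgSQL function, shows why two source rows with the same key collide on PK_SMStreams:

```python
import hashlib

def generate_md5(key: str, m3u_file_id: int) -> str:
    # Mirrors the PL/pgSQL generate_md5: md5(concat(key, '_', M3UFileId))
    return hashlib.md5(f"{key}_{m3u_file_id}".encode()).hexdigest()

# Two streams with the same key (e.g. the same Url under M3UKey = 0) in the
# same M3U file map to the same new Id, so the second INSERT into "SMStreams"
# violates the primary key.
a = generate_md5("http://example.com/stream", 1)
b = generate_md5("http://example.com/stream", 1)
print(a == b)  # True: identical inputs produce identical Ids
```

The same collision happens if the migration partially ran once (inserting some re-keyed rows) without recording the didIDMigration flag, since a rerun would regenerate the same Ids.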

Stream Master Version

0.7.3

Relevant Logs

2025-02-20 09:13:02.766 UTC [83] ERROR:  duplicate key value violates unique constraint "PK_SMStreams"
2025-02-20 09:13:02.766 UTC [83] DETAIL:  Key ("Id")=(dfbfc4a9ddfbf50d6b3349e7a97f0101) already exists.
2025-02-20 09:13:02.766 UTC [83] CONTEXT:  SQL statement "INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                         "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                         "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                         "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                         "NeedsDelete", "ChannelName", "ChannelId", 
                                         "CommandProfileName", "TVGName", "ExtInf")
                SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                       s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                       s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                       s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                       s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
                FROM temp_batch_update t
                INNER JOIN "SMStreams" s ON t.old_id = s."Id""
        PL/pgSQL function inline_code_block line 28 at SQL statement
2025-02-20 09:13:02.766 UTC [83] STATEMENT:  DO $$
        DECLARE
            duplicate_count INTEGER;
        BEGIN
            -- Only proceed if migration hasn't been done
            IF NOT EXISTS (SELECT 1 FROM "SystemKeyValues" WHERE "Key" = 'didIDMigration') THEN
                -- Create temporary tables for streams and m3ufiles data
                CREATE TEMP TABLE temp_SMStreams AS
                SELECT "Id", "Url", "CUID", "ChannelId", "EPGID", "TVGName", "Name", "M3UFileId"
                FROM "SMStreams";

                CREATE TEMP TABLE temp_M3UFiles AS
                SELECT "Id", COALESCE("M3UKey", 0) AS "M3UKey"
                FROM "M3UFiles";

                -- Create a temporary table for batch processing
                CREATE TEMP TABLE temp_batch_update (old_id TEXT, new_id TEXT, m3ufileid INT);

                -- Insert new IDs into the batch update table
                INSERT INTO temp_batch_update (old_id, new_id, m3ufileid)
                SELECT s."Id", generate_m3u_key_value(f."M3UKey", s."M3UFileId", s."Url", s."CUID", 
                                                      s."ChannelId", s."EPGID", s."TVGName", s."Name"), s."M3UFileId"
                FROM temp_SMStreams s
                LEFT JOIN temp_M3UFiles f ON s."M3UFileId" = f."Id"
                WHERE s."M3UFileId" IS NOT NULL AND s."M3UFileId" >= 0;

                -- Update SMStreams with new IDs
                INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                         "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                         "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                         "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                         "NeedsDelete", "ChannelName", "ChannelId", 
                                         "CommandProfileName", "TVGName", "ExtInf")
                SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                       s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                       s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                       s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                       s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
                FROM temp_batch_update t
                INNER JOIN "SMStreams" s ON t.old_id = s."Id";

                -- Update SMChannelStreamLinks with new IDs
                INSERT INTO "SMChannelStreamLinks" ("SMStreamId", "SMChannelId", "Rank")
                SELECT t.new_id, l."SMChannelId", l."Rank"
                FROM temp_batch_update t
                INNER JOIN "SMChannelStreamLinks" l ON t.old_id = l."SMStreamId";

                -- Delete old SMChannelStreamLinks
                DELETE FROM "SMChannelStreamLinks"
                WHERE "SMStreamId" IN (SELECT old_id FROM temp_batch_update);

                -- Delete old SMStreams
                DELETE FROM "SMStreams"
                WHERE "Id" IN (SELECT old_id FROM temp_batch_update);

                -- Drop temporary tables
                DROP TABLE temp_batch_update;
                DROP TABLE temp_SMStreams;
                DROP TABLE temp_M3UFiles;

                -- Add the didIDMigration entry to SystemKeyValues
                INSERT INTO "SystemKeyValues" ("Key", "Value") VALUES ('didIDMigration', 'true');

                RAISE NOTICE 'Migration completed successfully.';
            ELSE
                -- Check for duplicate didIDMigration entries
                SELECT COUNT(*) INTO duplicate_count
                FROM "SystemKeyValues"
                WHERE "Key" = 'didIDMigration';

                IF duplicate_count > 1 THEN
                    -- Keep the first entry and delete the rest
                    WITH ordered_keys AS (
                        SELECT ctid
                        FROM "SystemKeyValues"
                        WHERE "Key" = 'didIDMigration'
                        ORDER BY ctid
                        LIMIT 1
                    )
                    DELETE FROM "SystemKeyValues"
                    WHERE "Key" = 'didIDMigration'
                    AND ctid NOT IN (SELECT ctid FROM ordered_keys);

                    RAISE NOTICE 'Cleaned up % duplicate didIDMigration entries.', duplicate_count - 1;
                END IF;

                RAISE NOTICE 'Migration has already been performed. No action needed.';
            END IF;
        END $$
fail: Microsoft.EntityFrameworkCore.Database.Command[20102]
      Failed executing DbCommand (27ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
      BEGIN;
      
      -- Function to generate MD5 hash
      CREATE OR REPLACE FUNCTION generate_md5(key TEXT, M3UFileId INT)
      RETURNS TEXT AS $$
      DECLARE
          hash TEXT;
      BEGIN
          SELECT md5(concat(key, '_', M3UFileId)) INTO hash;
          RETURN hash;
      END;
      $$ LANGUAGE plpgsql;
      
      -- Function to generate M3UKey value
      CREATE OR REPLACE FUNCTION generate_m3u_key_value(M3UKey INT, M3UFileId INT, Url TEXT, 
                                                        CUID TEXT, ChannelId TEXT, EPGID TEXT, 
                                                        TVGName TEXT, Name TEXT)
      RETURNS TEXT AS $$
      DECLARE
          key TEXT;
      BEGIN
          CASE M3UKey
              WHEN 0 THEN key := Url;
              WHEN 1 THEN key := CUID;
              WHEN 2 THEN key := ChannelId;
              WHEN 3 THEN key := EPGID;
              WHEN 4 THEN key := COALESCE(TVGName, Name);
              WHEN 5 THEN 
                  IF TVGName IS NOT NULL AND EPGID IS NOT NULL THEN
                      key := TVGName || '_' || EPGID;
                  END IF;
              WHEN 6 THEN key := Name;
              WHEN 7 THEN 
                  IF Name IS NOT NULL AND EPGID IS NOT NULL THEN
                      key := Name || '_' || EPGID;
                  END IF;
              ELSE
                  RAISE EXCEPTION 'Invalid M3UKey value: %', M3UKey;
          END CASE;
          
          IF key IS NOT NULL THEN
              RETURN generate_md5(key, M3UFileId);
          ELSE
              RETURN NULL;
          END IF;
      END;
      $$ LANGUAGE plpgsql;
      
      DO $$
      DECLARE
          duplicate_count INTEGER;
      BEGIN
          -- Only proceed if migration hasn't been done
          IF NOT EXISTS (SELECT 1 FROM "SystemKeyValues" WHERE "Key" = 'didIDMigration') THEN
              -- Create temporary tables for streams and m3ufiles data
              CREATE TEMP TABLE temp_SMStreams AS
              SELECT "Id", "Url", "CUID", "ChannelId", "EPGID", "TVGName", "Name", "M3UFileId"
              FROM "SMStreams";
      
              CREATE TEMP TABLE temp_M3UFiles AS
              SELECT "Id", COALESCE("M3UKey", 0) AS "M3UKey"
              FROM "M3UFiles";
      
              -- Create a temporary table for batch processing
              CREATE TEMP TABLE temp_batch_update (old_id TEXT, new_id TEXT, m3ufileid INT);
      
              -- Insert new IDs into the batch update table
              INSERT INTO temp_batch_update (old_id, new_id, m3ufileid)
              SELECT s."Id", generate_m3u_key_value(f."M3UKey", s."M3UFileId", s."Url", s."CUID", 
                                                    s."ChannelId", s."EPGID", s."TVGName", s."Name"), s."M3UFileId"
              FROM temp_SMStreams s
              LEFT JOIN temp_M3UFiles f ON s."M3UFileId" = f."Id"
              WHERE s."M3UFileId" IS NOT NULL AND s."M3UFileId" >= 0;
      
              -- Update SMStreams with new IDs
              INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                       "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                       "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                       "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                       "NeedsDelete", "ChannelName", "ChannelId", 
                                       "CommandProfileName", "TVGName", "ExtInf")
              SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                     s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                     s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                     s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                     s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
              FROM temp_batch_update t
              INNER JOIN "SMStreams" s ON t.old_id = s."Id";
      
              -- Update SMChannelStreamLinks with new IDs
              INSERT INTO "SMChannelStreamLinks" ("SMStreamId", "SMChannelId", "Rank")
              SELECT t.new_id, l."SMChannelId", l."Rank"
              FROM temp_batch_update t
              INNER JOIN "SMChannelStreamLinks" l ON t.old_id = l."SMStreamId";
      
              -- Delete old SMChannelStreamLinks
              DELETE FROM "SMChannelStreamLinks"
              WHERE "SMStreamId" IN (SELECT old_id FROM temp_batch_update);
      
              -- Delete old SMStreams
              DELETE FROM "SMStreams"
              WHERE "Id" IN (SELECT old_id FROM temp_batch_update);
      
              -- Drop temporary tables
              DROP TABLE temp_batch_update;
              DROP TABLE temp_SMStreams;
              DROP TABLE temp_M3UFiles;
      
              -- Add the didIDMigration entry to SystemKeyValues
              INSERT INTO "SystemKeyValues" ("Key", "Value") VALUES ('didIDMigration', 'true');
      
              RAISE NOTICE 'Migration completed successfully.';
          ELSE
              -- Check for duplicate didIDMigration entries
              SELECT COUNT(*) INTO duplicate_count
              FROM "SystemKeyValues"
              WHERE "Key" = 'didIDMigration';
      
              IF duplicate_count > 1 THEN
                  -- Keep the first entry and delete the rest
                  WITH ordered_keys AS (
                      SELECT ctid
                      FROM "SystemKeyValues"
                      WHERE "Key" = 'didIDMigration'
                      ORDER BY ctid
                      LIMIT 1
                  )
                  DELETE FROM "SystemKeyValues"
                  WHERE "Key" = 'didIDMigration'
                  AND ctid NOT IN (SELECT ctid FROM ordered_keys);
      
                  RAISE NOTICE 'Cleaned up % duplicate didIDMigration entries.', duplicate_count - 1;
              END IF;
      
              RAISE NOTICE 'Migration has already been performed. No action needed.';
          END IF;
      END $$;
      
      COMMIT;
Error executing script 012_migrate_new_channel_ids.sql: 23505: duplicate key value violates unique constraint "PK_SMStreams"

DETAIL: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
fail: StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext[0]
      An error occurred during database initialization
      Npgsql.PostgresException (0x80004005): 23505: duplicate key value violates unique constraint "PK_SMStreams"
      
      DETAIL: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
         at Npgsql.Internal.NpgsqlConnector.ReadMessageLong(Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
         at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult()
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery(Boolean async, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery()
         at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteNonQuery(RelationalCommandParameterObject parameterObject)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, IEnumerable`1 parameters)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, Object[] parameters)
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.ApplyCustomSqlScripts() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 74
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.MigrateDatabaseAsync() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 34
        Exception data:
          Severity: ERROR
          SqlState: 23505
          MessageText: duplicate key value violates unique constraint "PK_SMStreams"
          Detail: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
          Where: SQL statement "INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                       "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                       "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                       "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                       "NeedsDelete", "ChannelName", "ChannelId", 
                                       "CommandProfileName", "TVGName", "ExtInf")
              SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                     s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                     s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                     s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                     s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
              FROM temp_batch_update t
              INNER JOIN "SMStreams" s ON t.old_id = s."Id""
      PL/pgSQL function inline_code_block line 28 at SQL statement
          SchemaName: public
          TableName: SMStreams
          ConstraintName: PK_SMStreams
          File: nbtinsert.c
          Line: 664
          Routine: _bt_check_unique
fail: Microsoft.Extensions.Hosting.Internal.Host[9]
      BackgroundService failed
      Npgsql.PostgresException (0x80004005): 23505: duplicate key value violates unique constraint "PK_SMStreams"
      
      DETAIL: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
         at Npgsql.Internal.NpgsqlConnector.ReadMessageLong(Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
         at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult()
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery(Boolean async, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery()
         at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteNonQuery(RelationalCommandParameterObject parameterObject)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, IEnumerable`1 parameters)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, Object[] parameters)
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.ApplyCustomSqlScripts() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 74
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.MigrateDatabaseAsync() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 34
         at StreamMaster.API.Services.PostStartup.ExecuteAsync(CancellationToken cancellationToken) in /src/StreamMaster.API/Services/PostStartup.cs:line 40
         at Microsoft.Extensions.Hosting.Internal.Host.TryExecuteBackgroundServiceAsync(BackgroundService backgroundService)
        Exception data:
          Severity: ERROR
          SqlState: 23505
          MessageText: duplicate key value violates unique constraint "PK_SMStreams"
          Detail: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
          Where: SQL statement "INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                       "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                       "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                       "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                       "NeedsDelete", "ChannelName", "ChannelId", 
                                       "CommandProfileName", "TVGName", "ExtInf")
              SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                     s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                     s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                     s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                     s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
              FROM temp_batch_update t
              INNER JOIN "SMStreams" s ON t.old_id = s."Id""
      PL/pgSQL function inline_code_block line 28 at SQL statement
          SchemaName: public
          TableName: SMStreams
          ConstraintName: PK_SMStreams
          File: nbtinsert.c
          Line: 664
          Routine: _bt_check_unique
crit: Microsoft.Extensions.Hosting.Internal.Host[10]
      The HostOptions.BackgroundServiceExceptionBehavior is configured to StopHost. A BackgroundService has thrown an unhandled exception, and the IHost instance is stopping. To avoid this behavior, configure this to Ignore; however the BackgroundService will not be restarted.
      Npgsql.PostgresException (0x80004005): 23505: duplicate key value violates unique constraint "PK_SMStreams"
      
      DETAIL: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
         at Npgsql.Internal.NpgsqlConnector.ReadMessageLong(Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
         at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
         at Npgsql.NpgsqlDataReader.NextResult()
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteReader(Boolean async, CommandBehavior behavior, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery(Boolean async, CancellationToken cancellationToken)
         at Npgsql.NpgsqlCommand.ExecuteNonQuery()
         at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteNonQuery(RelationalCommandParameterObject parameterObject)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, IEnumerable`1 parameters)
         at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.ExecuteSqlRaw(DatabaseFacade databaseFacade, String sql, Object[] parameters)
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.ApplyCustomSqlScripts() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 74
         at StreamMaster.Infrastructure.EF.PGSQL.PGSQLRepositoryContext.MigrateDatabaseAsync() in /src/StreamMaster.Infrastructure.EF.PGSQL/PGSQLRepositoryContext.cs:line 34
         at StreamMaster.API.Services.PostStartup.ExecuteAsync(CancellationToken cancellationToken) in /src/StreamMaster.API/Services/PostStartup.cs:line 40
         at Microsoft.Extensions.Hosting.Internal.Host.TryExecuteBackgroundServiceAsync(BackgroundService backgroundService)
        Exception data:
          Severity: ERROR
          SqlState: 23505
          MessageText: duplicate key value violates unique constraint "PK_SMStreams"
          Detail: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
          Where: SQL statement "INSERT INTO "SMStreams" ("Id", "ClientUserAgent", "FilePosition", "IsHidden", 
                                       "IsUserCreated", "M3UFileId", "ChannelNumber", 
                                       "M3UFileName", "Group", "EPGID", "Logo", "Name", 
                                       "Url", "StationId", "IsSystem", "CUID", "SMStreamType", 
                                       "NeedsDelete", "ChannelName", "ChannelId", 
                                       "CommandProfileName", "TVGName", "ExtInf")
              SELECT t.new_id, s."ClientUserAgent", s."FilePosition", s."IsHidden", 
                     s."IsUserCreated", t.m3ufileid, s."ChannelNumber", s."M3UFileName", 
                     s."Group", s."EPGID", s."Logo", s."Name", s."Url", s."StationId", 
                     s."IsSystem", s."CUID", s."SMStreamType", s."NeedsDelete", s."ChannelName", 
                     s."ChannelId", s."CommandProfileName", s."TVGName", s."ExtInf"
              FROM temp_batch_update t
              INNER JOIN "SMStreams" s ON t.old_id = s."Id""
      PL/pgSQL function inline_code_block line 28 at SQL statement
          SchemaName: public
          TableName: SMStreams
          ConstraintName: PK_SMStreams
          File: nbtinsert.c
          Line: 664
          Routine: _bt_check_unique
info: Microsoft.Hosting.Lifetime[0]
      Application is shutting down...
info: StreamMaster.Application.Statistics.Commands.SetIsSystemReadyRequest[0]
      System build 0.7.2
@Aandree5 Aandree5 added the bug Something isn't working label Feb 20, 2025
@carlreid
Owner

I'm not too sure. In newer releases of StreamMaster (that is, since we have commit history), the container ran a script called did_migrate_streams.sh with this content:

did_migrate_streams.sh
#!/bin/bash

# Check if $PGDATA directory exists, if not set a default value
if [ ! -d "/config/DB" ]; then
    echo "/config/DB directory does not exist. Assuming test"
    . "/var/lib/postgresql/data/env.sh"
    PGDATA=/var/lib/postgresql/data
else
    . /env.sh
fi

# Variables
batchSize=5000 # Increased batch size
dbDir="$PGDATA"
tempFileStreams="$dbDir/streams.csv"
tempFileM3UFiles="$dbDir/m3ufiles.csv"
tempFileBatch="$dbDir/batch_update.csv"
errorFile="$dbDir/did_errors.log"
tempTable="temp_batch_update"
delimiter='^' # Custom delimiter

# Clean up any existing files
rm -f "$errorFile" "$tempFileStreams" "$tempFileM3UFiles" "$tempFileBatch"

# Ensure PostgreSQL connection details are set
if [[ -z "$POSTGRES_USER" || -z "$POSTGRES_PASSWORD" || -z "$POSTGRES_DB" || -z "$POSTGRES_HOST" ]]; then
    echo "Missing required PostgreSQL environment variables."
    exit 1
fi

# Database connection command
PG_CMD="psql -h $POSTGRES_HOST -U $POSTGRES_USER -d $POSTGRES_DB"

# Query to check if the record exists
checkMigration=$($PG_CMD -t -c "SELECT COUNT(*) FROM \"SystemKeyValues\" WHERE \"Key\" = 'didIDMigration';")
checkMigration=$(echo "$checkMigration" | xargs) # Trim whitespace

if [[ "$checkMigration" -eq 0 ]]; then
    echo "didIDMigration does not exist. Proceeding with migration..."
else
    exit 0
fi

# Function to generate MD5 hash
generate_md5() {
    local key=$1
    local M3UFileId=$2
    echo -n "${key}_${M3UFileId}" | md5sum | awk '{print $1}'
}

# Function to generate M3UKey value
generate_m3u_key_value() {
    local M3UKey=$1
    local M3UFileId=$2
    local Url=$3
    local CUID=$4
    local ChannelId=$5
    local EPGID=$6
    local TVGName=$7
    local Name=$8

    local key

    case $M3UKey in
    0) key=$Url ;;
    1) key=$CUID ;;
    2) key=$ChannelId ;;
    3) key=$EPGID ;;
    4) key=${TVGName:-$Name} ;;
    5)
        if [[ -n $TVGName && -n $EPGID ]]; then
            key="${TVGName}_${EPGID}"
        fi
        ;;
    6) key=$Name ;;
    7)
        if [[ -n $Name && -n $EPGID ]]; then
            key="${Name}_${EPGID}"
        fi
        ;;
    *)
        echo "Invalid M3UKey value: $M3UKey" >&2
        ;;
    esac

    if [[ -n $key ]]; then
        generate_md5 "$key" "$M3UFileId"
    else
        echo ""
    fi
}

# Step 1: Fetch SMStreams and M3UFiles from PostgreSQL
echo "Fetching SMStreams and M3UFiles from the database..."
$PG_CMD -c "\COPY (SELECT \"Id\", \"Url\", \"CUID\", \"ChannelId\", \"EPGID\", \"TVGName\", \"Name\", \"M3UFileId\" FROM \"SMStreams\") TO '$tempFileStreams' WITH CSV HEADER DELIMITER '$delimiter';"
$PG_CMD -c "\COPY (SELECT \"Id\", COALESCE(\"M3UKey\", 0) AS \"M3UKey\" FROM \"M3UFiles\") TO '$tempFileM3UFiles' WITH CSV HEADER DELIMITER '$delimiter';"

if [[ $? -ne 0 ]]; then
    echo "Failed to fetch data from the database."
    exit 1
fi

if [[ ! -f "$tempFileStreams" || ! -s "$tempFileStreams" ]]; then
    echo "Error: Stream data file $tempFileStreams not created or is empty."
    exit 1
fi
echo "Stream data file $tempFileStreams fetched successfully."

# Step 2: Build M3UFile mapping
declare -A m3uKeyMapping
while IFS="$delimiter" read -r Id M3UKey; do
    m3uKeyMapping["$Id"]=$M3UKey

done < <(tail -n +2 "$tempFileM3UFiles") # Skip the header line

# Step 3: Process streams and generate new IDs
processedCount=0
totalCount=$(wc -l <"$tempFileStreams")
((totalCount--)) # Subtract the header line
>"$tempFileBatch"

while IFS="$delimiter" read -r Id Url CUID ChannelId EPGID TVGName Name M3UFileId; do
    # Skip the header line
    [[ "$Id" == "Id" ]] && continue

    # Handle edge cases where M3UFileId is empty or invalid
    if [[ -z "$M3UFileId" || "$M3UFileId" -lt 0 ]]; then
        M3UKey="0"
    else
        M3UKey=${m3uKeyMapping["$M3UFileId"]}
    fi

    if [[ -z $M3UKey ]]; then
        M3UKey="0"
    fi

    # Generate the new ID
    newId=$(generate_m3u_key_value "$M3UKey" "$M3UFileId" "$Url" "$CUID" "$ChannelId" "$EPGID" "$TVGName" "$Name")

    if [[ -n $newId ]]; then
        echo "$Id,$newId,$M3UFileId" >>"$tempFileBatch"
        ((processedCount++))
    fi

    # Process batch when size limit is reached
    if [[ $((processedCount % batchSize)) -eq 0 ]]; then
        echo "Updating batch of $batchSize records... (Processed: $processedCount/$totalCount)"
        $PG_CMD <<EOF
-- Step 1: Create a temporary table for batch processing
CREATE TEMP TABLE $tempTable (old_id TEXT, new_id TEXT, m3ufileid INT);

-- Step 2: Copy batch data into the temporary table
\COPY $tempTable FROM '$tempFileBatch' WITH CSV;

-- Step 3: Recreate SMStreams with new IDs
INSERT INTO "SMStreams" (
    "Id",
    "ClientUserAgent",
    "FilePosition",
    "IsHidden",
    "IsUserCreated",
    "M3UFileId",
    "ChannelNumber",
    "M3UFileName",
    "Group",
    "EPGID",
    "Logo",
    "Name",
    "Url",
    "StationId",
    "IsSystem",
    "CUID",
    "SMStreamType",
    "NeedsDelete",
    "ChannelName",
    "ChannelId",
    "CommandProfileName",
    "TVGName",
    "ExtInf"
)
SELECT 
    temp.new_id,
    streams."ClientUserAgent",
    streams."FilePosition",
    streams."IsHidden",
    streams."IsUserCreated",
    temp.m3ufileid,
    streams."ChannelNumber",
    streams."M3UFileName",
    streams."Group",
    streams."EPGID",
    streams."Logo",
    streams."Name",
    streams."Url",
    streams."StationId",
    streams."IsSystem",
    streams."CUID",
    streams."SMStreamType",
    streams."NeedsDelete",
    streams."ChannelName",
    streams."ChannelId",
    streams."CommandProfileName",
    streams."TVGName",
    streams."ExtInf"
FROM $tempTable temp
INNER JOIN "SMStreams" streams
ON temp.old_id = streams."Id";

-- Step 4: Recreate SMChannelStreamLinks with new IDs
INSERT INTO "SMChannelStreamLinks" ("SMStreamId", "SMChannelId", "Rank")
SELECT
    temp.new_id,
    links."SMChannelId",
    links."Rank"
FROM $tempTable temp
INNER JOIN "SMChannelStreamLinks" links
ON temp.old_id = links."SMStreamId";

-- Step 5: Delete old SMChannelStreamLinks
DELETE FROM "SMChannelStreamLinks"
WHERE "SMStreamId" IN (SELECT old_id FROM $tempTable);

-- Step 6: Delete old SMStreams
DELETE FROM "SMStreams"
WHERE "Id" IN (SELECT old_id FROM $tempTable);

-- Step 7: Drop the temporary table
DROP TABLE $tempTable;

EOF
        >"$tempFileBatch"
    fi
done < <(tail -n +2 "$tempFileStreams") # Skip the header line

# Step 4: Update remaining records in the batch
if [[ -s "$tempFileBatch" ]]; then
    echo "Updating final batch of records... (Processed: $processedCount/$totalCount)"
    $PG_CMD <<EOF
-- Step 1: Create a temporary table for batch processing
CREATE TEMP TABLE $tempTable (old_id TEXT, new_id TEXT, m3ufileid INT);

-- Step 2: Copy batch data into the temporary table
\COPY $tempTable FROM '$tempFileBatch' WITH CSV;

-- Step 3: Recreate SMStreams with new IDs
INSERT INTO "SMStreams" (
    "Id",
    "ClientUserAgent",
    "FilePosition",
    "IsHidden",
    "IsUserCreated",
    "M3UFileId",
    "ChannelNumber",
    "M3UFileName",
    "Group",
    "EPGID",
    "Logo",
    "Name",
    "Url",
    "StationId",
    "IsSystem",
    "CUID",
    "SMStreamType",
    "NeedsDelete",
    "ChannelName",
    "ChannelId",
    "CommandProfileName",
    "TVGName",
    "ExtInf"
)
SELECT 
    temp.new_id,
    streams."ClientUserAgent",
    streams."FilePosition",
    streams."IsHidden",
    streams."IsUserCreated",
    temp.m3ufileid,
    streams."ChannelNumber",
    streams."M3UFileName",
    streams."Group",
    streams."EPGID",
    streams."Logo",
    streams."Name",
    streams."Url",
    streams."StationId",
    streams."IsSystem",
    streams."CUID",
    streams."SMStreamType",
    streams."NeedsDelete",
    streams."ChannelName",
    streams."ChannelId",
    streams."CommandProfileName",
    streams."TVGName",
    streams."ExtInf"
FROM $tempTable temp
INNER JOIN "SMStreams" streams
ON temp.old_id = streams."Id";

-- Step 4: Recreate SMChannelStreamLinks with new IDs
INSERT INTO "SMChannelStreamLinks" ("SMStreamId", "SMChannelId", "Rank")
SELECT
    temp.new_id,
    links."SMChannelId",
    links."Rank"
FROM $tempTable temp
INNER JOIN "SMChannelStreamLinks" links
ON temp.old_id = links."SMStreamId";

-- Step 5: Delete old SMChannelStreamLinks
DELETE FROM "SMChannelStreamLinks"
WHERE "SMStreamId" IN (SELECT old_id FROM $tempTable);

-- Step 6: Delete old SMStreams
DELETE FROM "SMStreams"
WHERE "Id" IN (SELECT old_id FROM $tempTable);

-- Step 7: Drop the temporary table
DROP TABLE $tempTable;

EOF
fi

# Add the didIDMigration entry to SystemKeyValues
echo "Adding the didIDMigration entry to SystemKeyValues..."
$PG_CMD -c "INSERT INTO \"SystemKeyValues\" (\"Key\", \"Value\") VALUES ('didIDMigration', 'true');"

if [[ $? -eq 0 ]]; then
    echo "Successfully added the didIDMigration entry."
else
    echo "Failed to add the didIDMigration entry." >>"$errorFile"
    exit 1
fi

echo "Migration completed successfully. Processed $processedCount streams."
echo "Temporary files retained in $dbDir."

I converted this into a SQL migration script, rather than having some kind of rogue shell script handle this DB-specific logic. It should only run once; afterwards didIDMigration is set in the SystemKeyValues table.
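The run-once behaviour can be sketched roughly like this (a sketch only: the psql flags and the stubbed PG_CMD are assumptions for illustration, not the script's actual invocation):

```shell
#!/usr/bin/env sh
# Sketch of the run-once guard; PG_CMD is stubbed here so the flow can be
# exercised without a database. The real script points PG_CMD at psql.
PG_CMD() { echo "1"; }  # pretend SystemKeyValues already contains didIDMigration

already=$(PG_CMD -tAc "SELECT 1 FROM \"SystemKeyValues\" WHERE \"Key\" = 'didIDMigration';")
if [ "$already" = "1" ]; then
    echo "ID migration already applied, skipping."
else
    echo "Running ID migration..."
fi
```

With the stub in place the script prints the "skipping" branch; against a fresh database the SELECT returns nothing and the migration proceeds.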

My guess, based on your log line DETAIL: Key ("Id")=(dfbfc4a9ddfbf50d6b3349e7a97f0101) already exists., is that this migration logic is producing an ID collision: some combination of fields in your list of channels must be generating the same ID.

Could you try this SQL script on your DB and see whether it reports any collisions?

WITH generated_ids AS (
    WITH temp_SMStreams AS (
        SELECT "Id", "Url", "CUID", "ChannelId", "EPGID", "TVGName", "Name", "M3UFileId"
        FROM "SMStreams"
        WHERE "M3UFileId" IS NOT NULL AND "M3UFileId" >= 0
    ),
    temp_M3UFiles AS (
        SELECT "Id", COALESCE("M3UKey", 0) AS "M3UKey"
        FROM "M3UFiles"
    )
    SELECT 
        s."Id" as old_id,
        CASE f."M3UKey"
            WHEN 0 THEN md5(concat(s."Url", '_', s."M3UFileId"))
            WHEN 1 THEN md5(concat(s."CUID", '_', s."M3UFileId"))
            WHEN 2 THEN md5(concat(s."ChannelId", '_', s."M3UFileId"))
            WHEN 3 THEN md5(concat(s."EPGID", '_', s."M3UFileId"))
            WHEN 4 THEN md5(concat(COALESCE(s."TVGName", s."Name"), '_', s."M3UFileId"))
            WHEN 5 THEN CASE 
                WHEN s."TVGName" IS NOT NULL AND s."EPGID" IS NOT NULL 
                THEN md5(concat(s."TVGName" || '_' || s."EPGID", '_', s."M3UFileId"))
                ELSE NULL
            END
            WHEN 6 THEN md5(concat(s."Name", '_', s."M3UFileId"))
            WHEN 7 THEN CASE 
                WHEN s."Name" IS NOT NULL AND s."EPGID" IS NOT NULL 
                THEN md5(concat(s."Name" || '_' || s."EPGID", '_', s."M3UFileId"))
                ELSE NULL
            END
        END as new_id,
        s."M3UFileId" as m3ufileid,
        f."M3UKey" as key_type,
        s."Name" as channel_name
    FROM temp_SMStreams s
    LEFT JOIN temp_M3UFiles f ON s."M3UFileId" = f."Id"
)
SELECT 
    g1.old_id,
    g1.new_id,
    g1.m3ufileid,
    g1.key_type,
    g1.channel_name,
    COUNT(*) OVER (PARTITION BY g1.new_id) as collision_count
FROM generated_ids g1
WHERE EXISTS (
    SELECT 1 
    FROM generated_ids g2 
    WHERE g1.new_id = g2.new_id 
    AND g1.old_id != g2.old_id
)
ORDER BY g1.new_id, g1.m3ufileid;

@Aandree5
Contributor Author

Just tried the script and it spits out tons of collisions, some with counts of 800+!
But it's odd; for example, I only have one M3U file for PlutoTV, with around 212 streams, and this comes back with 209 collisions.

I've trimmed the channel names slightly to help with formatting:

 6a83377bca0f09d3f6e7d1aa8d716f40 | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Conspi |             209
 7193da7307604409545b1ceab046c31a | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Scienc |             209
 d16b6ec929d2190d1ef44c0e4df79d1b | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Space  |             209
 b3957ccbd8287817c4b41485027afd7d | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Nature |             209
 52277753efe0ecbd8ba469a9e6e6a6dc | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Animal |             209
 618dd15a3db3b0c2566df674bdb4e4e5 | f806252045a28179caaf0ec8f0b5f298 |         3 |        4 | Pluto TV Cult F |             209

@carlreid
Owner

Hmm, that does seem like a lot... The new_id column there is generating the same ID for the various channel names.

From your output, the key_type column is 4, which is the M3UKey value. That means the following is used to generate new_id:

md5(concat(COALESCE(s."TVGName", s."Name"), '_', s."M3UFileId"))

Given that your channel_name output differs, this indicates that TVGName is the same for all of these channels, causing the collision.
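As a quick illustration (the tvg-name value "PlutoTV" is a made-up example, not taken from your playlist), two streams with different Names but an identical TVGName collapse to one key under this scheme:

```shell
#!/usr/bin/env sh
# Two streams with different display Names but the same tvg-name produce
# the same md5 when the key is TVGName + '_' + M3UFileId.
key_for() { printf '%s_%s' "$1" "$2" | md5sum | awk '{print $1}'; }

a=$(key_for "PlutoTV" 3)  # e.g. "Pluto TV Conspiracy", tvg-name "PlutoTV"
b=$(key_for "PlutoTV" 3)  # e.g. "Pluto TV Science",    tvg-name "PlutoTV"

[ "$a" = "$b" ] && echo "collision: $a"
```

Since Name never enters the hash, every stream sharing one tvg-name hashes to the same new_id, which would match the single repeated new_id in your output.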

I would just say: why not combine TVGName and Name to create a more unique ID? Though the backend does the same thing (M3UKey.TvgName is 4):

    private static string? GenerateM3UKeyValue(M3UKey m3uKey, SMStream smStream)
    {
        string? key = m3uKey switch
        {
            M3UKey.URL => smStream.Url,
            M3UKey.CUID => smStream.CUID,
            M3UKey.ChannelId => smStream.ChannelId,
            M3UKey.TvgID => smStream.EPGID,
            M3UKey.TvgName => string.IsNullOrEmpty(smStream.TVGName) ? smStream.Name : smStream.TVGName,
            M3UKey.Name => smStream.Name,
            M3UKey.TvgName_TvgID =>
                (!string.IsNullOrEmpty(smStream.TVGName) && !string.IsNullOrEmpty(smStream.EPGID))
                    ? $"{smStream.TVGName}_{smStream.EPGID}"
                    : null,
            M3UKey.Name_TvgID =>
                (!string.IsNullOrEmpty(smStream.Name) && !string.IsNullOrEmpty(smStream.EPGID))
                    ? $"{smStream.Name}_{smStream.EPGID}"
                    : null,
            _ => throw new ArgumentOutOfRangeException(nameof(m3uKey), m3uKey, null),
        };
        return string.IsNullOrEmpty(key) ? null : FileUtil.EncodeToMD5(key);
    }

So that can't be the way to fix it, or else the key the back-end generates would be misaligned, causing further issues. It could be changed in both places, though I'm not sure whether some users with working setups would then break without running a new ID-generation migration.

@Aandree5
Contributor Author

I see what you mean; it could be that it has been fixed in a newer update, maybe.
I had this issue when testing a fork of xTeVe whose name I can't remember now! It worked fine in xTeVe, though. My thinking is that if this tool is to be used to manage EPG and XML files, it should handle things like this; sometimes we don't control the EPG source and use this tool to merge, edit, and serve a cleaned-up version.

I'm happy to test and help fix any issues that would arise from this change. This was one of the reasons I could not use the fork I mentioned: some streams were missing because the TVGName was missing from the EPG file, so it would only load one stream.

I'm not sure how you could do it another way. Do you think that joining both fields as a single Id would be feasible?

@carlreid
Owner

if this tool is to be used to manage EPG and XML files it should handle it things like this, sometime we don't control the EPG source and use this tool to merge, edit and serve a cleaned out version.

I agree that it makes sense that it should be able to handle such cases, ideally. I assume that TVGName stands for "TV Guide Name", meaning it should be more accurate than just the channel name? This would come from tvg-name in the M3U.

If the M3U Key is used to locate a channel in an M3U, then I don't know why you wouldn't just combine all of the values available per channel into a single identifier. That should avoid these types of issues.
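A combined key could be sketched like this (a sketch only, not the project's code; the field order and the `|` separator are arbitrary assumptions):

```shell
#!/usr/bin/env sh
# Sketch: hash every available field, so two streams only collide when every
# field matches. A separator guards against ambiguous concatenations.
combined_id() { # url cuid channel_id tvg_id tvg_name name m3ufileid
    printf '%s|%s|%s|%s|%s|%s|%s' "$@" | md5sum | awk '{print $1}'
}

a=$(combined_id "http://x/1" "c1" "ch1" "tvg1" "PlutoTV" "Pluto TV Space"  3)
b=$(combined_id "http://x/2" "c2" "ch2" "tvg2" "PlutoTV" "Pluto TV Nature" 3)

[ "$a" != "$b" ] && echo "distinct ids"
```

The trade-off raised above would still apply: the backend's GenerateM3UKeyValue would have to produce the same combined key, or existing IDs would be misaligned.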

I do see, though, that if you edit an M3U there is a "Channel Name Field" option here:
[screenshot: the "Channel Name Field" dropdown in the M3U edit dialog]

I wonder if this is what drives the m3uKey value. In which case, yours might be set to TvgName, where changing it to something else like TvgId might then work.

@Aandree5
Contributor Author

That's a really good point! You're probably right; because the dropdowns weren't working for me at the beginning, I completely forgot about those options. I will give that a go tomorrow to see if it helps when running the previous script.

@Aandree5
Contributor Author

Ok so, I removed all the M3U files I had, then added 6 different M3U files for testing, and a couple of things happened:

  1. Picked the TvgName_TvgID option to work as the Id, and no file loaded; it kept complaining that the file did not have streams (not sure if it was coming back with an invalid Id). Logs below:
09:23:47","Information","0","","Adding M3U \u0027TESTING\u0027",\n\n"
09:23:47","Information","0","","Reading m3uFile TESTING",\n"
09:23:47","Critical","0","","Exception M3U \u0027TESTING\u0027 contains no streams",\n"
09:23:47","Error","0","","M3U \u0027TESTING\u0027 contains no streams",\n\n"
  2. Picked 'Name_TvgID' and the files actually loaded. Added a few streams to channels and ran the previous SQL script you provided, and it came back with no duplicates!!! Seems to have worked!

Now I'm not sure what happened with 1, but I think this issue is fixed by picking the correct fields as the Id. It would be interesting if the tool could help with choosing this, because apparently you won't notice until it's too late and you're migrating. Maybe a check that would read something like "Found channel collisions, please pick another combination of Id fields". I'm not sure how much work that would be, or if we could even advise on the best combination to use, but it would be a nice feature.
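A minimal sketch of such a pre-flight check (the sample data and the single candidate-key column are assumptions; a real check would parse the uploaded M3U's chosen Id field):

```shell
#!/usr/bin/env sh
# Sketch: before accepting an M3U, count repeated candidate keys and warn
# up front instead of failing later during migration.
keys_file=$(mktemp)
printf 'PlutoTV\nPlutoTV\nCBS News\n' > "$keys_file"   # sample tvg-name column

dupes=$(sort "$keys_file" | uniq -d | wc -l | tr -d ' ')
if [ "$dupes" -gt 0 ]; then
    echo "Found channel collisions, please pick another combination of Id fields"
fi
rm -f "$keys_file"
```

`uniq -d` only prints lines that repeat in the sorted input, so the count is the number of distinct colliding keys.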

@carlreid
Owner

not sure how much work that would be, or if we could even advise on the best combination to use, but it would be a nice feature.

I guess that the "Add EPG" dialog could have some kind of "Test" button, or some kind of verification that runs automatically before the "accept" button becomes enabled.

The issue, mostly, is that the M3U needs to exist in some form to be able to try and parse its contents and validate whether a collision would happen. Though even the "M3U Key Mapping" dropdown should probably have options disabled if they don't exist in the M3U, too.

@Aandree5
Contributor Author

It's a fair point; this might be a lot more work than it looks.

Repository owner locked and limited conversation to collaborators Feb 27, 2025
@carlreid carlreid converted this issue into a discussion Feb 27, 2025
@carlreid carlreid reopened this Feb 27, 2025
@carlreid
Owner

Hmm, wonder why it's not letting me convert it to a discussion. Will try again later...

Otherwise I think it would be good to move this to a discussion under "Ideas" as a feature request, where more upfront validation is performed before committing to the initial save.

Repository owner unlocked this conversation Mar 2, 2025
@burnbrigther

burnbrigther commented Mar 2, 2025

What is the solution for this? I tried the migration procedure and I'm running into this. As I mentioned earlier in the duplicate bug, I could get the errors to go away by simply dropping the streammaster db and letting the scripts rebuild it, then restoring the various configuration files in the db, but that feels like a poor hack and may be the source of other issues I'm running into, such as visibility of stream groups not working right.
I guess I could always start fresh.

@carlreid
Owner

carlreid commented Mar 2, 2025

If you have a way to reproduce going from state A -> B, in terms of some kind of upgrade causing problems, then I am eager to know. Starting fresh really isn't ideal in my eyes.

Could you add a bit about which version you were running before, and what migration steps you have taken? It could be that this fork is attempting to run a previous migration again, which is causing the conflict. Though as I wrote earlier in this issue, that should be avoided so long as didIDMigration is stored in the SystemKeyValues table.

@carlreid carlreid added the needs more details Unclear or needing more details label Mar 5, 2025