By Max Vernon


2013-03-13 17:25:47

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last-used ID (LastID) for the field named in IDName.

Occasionally the stored proc hits a deadlock. I believe I've built an appropriate error handler; however, I'm interested to see whether this methodology works as I think it does, or whether I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.
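For reference, the current setting can be confirmed from sys.databases (a quick check, assuming sufficient permissions):

```sql
-- 1 = READ_COMMITTED_SNAPSHOT is enabled for the current database
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();
```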

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] IDENTITY(1,1) NOT NULL 
        CONSTRAINT PK_tblIDs 
        PRIMARY KEY CLUSTERED,
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] 
ON [dbo].[tblIDs]
(
    [IDName] ASC
) 
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) 
    VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) 
    VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs
        for a given IDName
        Author:         Max Vernon
        Date:           2012-07-19
    */

    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID 
                FROM tblIDs 
                WHERE IDName = @IDName),0)+1;
            IF (SELECT COUNT(IDName) 
                FROM tblIDs 
                WHERE IDName = @IDName) = 0 
                    INSERT INTO tblIDs (IDName, LastID) 
                    VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs 
                SET LastID = @NewID 
                WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
                BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE()
                RAISERROR (@EN,@ES,@ET);
                END
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_IDName is not being used by the SP; I assume the query processor is using the clustered index, since it needs the value stored in LastID. In any case, this new index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID 
ON dbo.tblIDs
(
    IDName ASC
) 
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100
    , ONLINE=ON
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;
SET @NewID = COALESCE((SELECT LastID 
        FROM dbo.tblIDs 
        WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) 
    VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs 
    SET LastID = @NewID 
    WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID, we can assume that if @NewID is 1 the intention is to append a new ID to the list; otherwise, we are updating an existing row in the list.


@A-K 2013-03-15 20:37:26

First, I would avoid making a round trip to the database for every value. For example, if your application knows it needs 20 new IDs, do not make 20 round trips. Make only one stored procedure call, and increment the counter by 20. Also it might be better to split your table into multiple ones.
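A batched reservation could be sketched roughly as follows (GetNextIDRange is a hypothetical name, not part of the original procedure; it reuses the compound-assignment UPDATE discussed elsewhere in this thread):

```sql
CREATE PROCEDURE dbo.GetNextIDRange
    @IDName nvarchar(255),
    @RangeSize int
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @LastID int;

    -- Reserve @RangeSize IDs in one atomic statement; the caller
    -- owns the range (@LastID - @RangeSize + 1) through @LastID.
    UPDATE dbo.tblIDs
    SET @LastID = LastID = LastID + @RangeSize
    WHERE IDName = @IDName;

    SELECT @LastID AS LastID;
END
```

The application then hands out the reserved values locally, cutting round trips by a factor of @RangeSize.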

It is possible to avoid deadlocks altogether. I have no deadlocks at all in my system. There are several ways to accomplish that. I will show how I would use sp_getapplock to eliminate deadlocks. I have no idea if this will work for you, because SQL Server is closed source, so I cannot see the source code, and as such I do not know if I have tested all possible cases.

The following describes what works for me. YMMV.

First, let us start with a scenario where we always get a considerable number of deadlocks. Second, we shall use sp_getapplock to eliminate them. The most important point here is to stress test your solution. Your solution may be different, but you need to expose it to high concurrency, as I will demonstrate later.

Prerequisites

Let us set up a table with some test data:

CREATE TABLE dbo.Numbers(n INT NOT NULL PRIMARY KEY); 
GO 

INSERT INTO dbo.Numbers 
    ( n ) 
        VALUES  ( 1 ); 
GO 
DECLARE @i INT; 
    SET @i=0; 
WHILE @i<21  
    BEGIN 
    INSERT INTO dbo.Numbers 
        ( n ) 
        SELECT n + POWER(2, @i) 
        FROM dbo.Numbers; 
    SET @i = @i + 1; 
    END;  
GO

SELECT n AS ID, n AS Key1, n AS Key2, 0 AS Counter1, 0 AS Counter2
INTO dbo.DeadlockTest FROM dbo.Numbers
GO

ALTER TABLE dbo.DeadlockTest ADD CONSTRAINT PK_DeadlockTest PRIMARY KEY(ID);
GO

CREATE INDEX DeadlockTestKey1 ON dbo.DeadlockTest(Key1);
GO

CREATE INDEX DeadlockTestKey2 ON dbo.DeadlockTest(Key2);
GO

The following two procedures are quite likely to deadlock against each other:

CREATE PROCEDURE dbo.UpdateCounter1 @Key1 INT
AS
SET NOCOUNT ON ;
SET XACT_ABORT ON;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION ;
UPDATE dbo.DeadlockTest SET Counter1=Counter1+1 WHERE Key1=@Key1;
SET @Key1=@Key1+10000;
UPDATE dbo.DeadlockTest SET Counter1=Counter1+1 WHERE Key1=@Key1;
COMMIT;
GO

CREATE PROCEDURE dbo.UpdateCounter2 @Key2 INT
AS
SET NOCOUNT ON ;
SET XACT_ABORT ON;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION ;
SET @Key2=@Key2+10000;
UPDATE dbo.DeadlockTest SET Counter2=Counter2+1 WHERE Key2=@Key2;
SET @Key2=@Key2-10000;
UPDATE dbo.DeadlockTest SET Counter2=Counter2+1 WHERE Key2=@Key2;
COMMIT;
GO

Reproducing deadlocks

The following loops should reproduce more than 20 deadlocks every time you run them. If you get fewer than 20, increase the number of iterations.

In one tab, run this:

DECLARE @i INT, @DeadlockCount INT;
SELECT @i=0, @DeadlockCount=0;

WHILE @i<5000 BEGIN ;
  BEGIN TRY 
    EXEC dbo.UpdateCounter1 @Key1=123456;
  END TRY
  BEGIN CATCH
    SET @DeadlockCount = @DeadlockCount + 1;
    IF @@TRANCOUNT > 0 ROLLBACK;
  END CATCH ;
  SET @i = @i + 1;
END;
SELECT 'Deadlocks caught: ', @DeadlockCount ;

In another tab, run this script:

DECLARE @i INT, @DeadlockCount INT;
SELECT @i=0, @DeadlockCount=0;

WHILE @i<5000 BEGIN ;
  BEGIN TRY 
    EXEC dbo.UpdateCounter2 @Key2=123456;
  END TRY
  BEGIN CATCH
    SET @DeadlockCount = @DeadlockCount + 1;
    IF @@TRANCOUNT > 0 ROLLBACK;
  END CATCH ;
  SET @i = @i + 1;
END;
SELECT 'Deadlocks caught: ', @DeadlockCount ;

Make sure you start both within a couple of seconds.

Using sp_getapplock to eliminate deadlocks

Alter both procedures, rerun the loop, and see that you no longer have deadlocks:

ALTER PROCEDURE dbo.UpdateCounter1 @Key1 INT
AS
SET NOCOUNT ON ;
SET XACT_ABORT ON;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION ;
EXEC sp_getapplock @Resource='DeadlockTest', @LockMode='Exclusive';
UPDATE dbo.DeadlockTest SET Counter1=Counter1+1 WHERE Key1=@Key1;
SET @Key1=@Key1+10000;
UPDATE dbo.DeadlockTest SET Counter1=Counter1+1 WHERE Key1=@Key1;
COMMIT;
GO

ALTER PROCEDURE dbo.UpdateCounter2 @Key2 INT
AS
SET NOCOUNT ON ;
SET XACT_ABORT ON;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION ;
EXEC sp_getapplock @Resource='DeadlockTest', @LockMode='Exclusive';
SET @Key2=@Key2+10000;
UPDATE dbo.DeadlockTest SET Counter2=Counter2+1 WHERE Key2=@Key2;
SET @Key2=@Key2-10000;
UPDATE dbo.DeadlockTest SET Counter2=Counter2+1 WHERE Key2=@Key2;
COMMIT;
GO

Using a table with one row to eliminate deadlocks

Instead of invoking sp_getapplock, we can serialize on a row in the following table:

CREATE TABLE dbo.DeadlockTestMutex(
ID INT NOT NULL,
CONSTRAINT PK_DeadlockTestMutex PRIMARY KEY(ID),
Toggle INT NOT NULL);
GO

INSERT INTO dbo.DeadlockTestMutex(ID, Toggle)
VALUES(1,0);

Once we have this table created and populated, we can replace the following line

EXEC sp_getapplock @Resource='DeadlockTest', @LockMode='Exclusive';

with this one, in both procedures:

UPDATE dbo.DeadlockTestMutex SET Toggle = 1 - Toggle WHERE ID = 1;

You can rerun the stress test, and see for yourself that we have no deadlocks.

Conclusion

As we have seen, sp_getapplock can be used to serialize access to other resources. As such it can be used to eliminate deadlocks.

Of course, this can significantly slow down modifications. To address that, we need to choose the right granularity for the exclusive lock, and whenever possible, work with sets instead of individual rows.

Before using this approach you need to stress test it yourself. First, make sure you get at least a couple dozen deadlocks with your original approach. Second, you should get no deadlocks when you rerun the same repro script using the modified stored procedures.

In general, I do not think there is a good way to determine if your T-SQL is safe from deadlocks just by looking at it or looking at the execution plan. IMO the only way to determine if your code is prone to deadlocks is to expose it to high concurrency.

Good luck with eliminating deadlocks! We do not have any deadlocks in our system at all, which is great for our work-life balance.

@Mark Storey-Smith 2013-03-16 01:07:10

+1 as sp_getapplock is a useful tool that isn't well known. Given an 'orrible mess that might take time to pick apart, it's a handy trick to serialise a process that's deadlocking. But, should it be the first choice for a case like this which is easily understood and can (perhaps should) be dealt with by standard locking mechanisms?

@A-K 2013-03-16 01:35:32

@MarkStorey-Smith It is my first choice because I have researched and stress tested it only once, and I can reuse it in any situation - the serializing has already happened, so everything that happens after sp_getapplock does not affect the outcome. With standard locking mechanisms, I can never be so sure - adding an index or just getting another execution plan can cause deadlocks where before there were none. Ask me how I know.

@Dale K 2019-09-13 04:14:47

I guess I'm missing something obvious, but how does using UPDATE dbo.DeadlockTestMutex SET Toggle = 1 - Toggle WHERE ID = 1; prevent deadlocks?

@Max Vernon 2013-03-14 20:26:01

Mike Defehr showed me an elegant way to accomplish this in a very lightweight way:

ALTER PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs for a given IDName
        Author:         Max Vernon / Mike Defehr
        Date:           2012-07-19

    */

    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            UPDATE dbo.tblIDs 
            SET @NewID = LastID = LastID + 1 
            WHERE IDName = @IDName;

            IF @NewID IS NULL
            BEGIN
                SET @NewID = 1;
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
            END
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
                BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE()
                RAISERROR (@EN,@ES,@ET);
                END
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

(For completeness, here is the table associated with the stored proc)

CREATE TABLE [dbo].[tblIDs]
(
    IDName nvarchar(255) NOT NULL,
    LastID int NULL,
    CONSTRAINT [PK_tblIDs] PRIMARY KEY CLUSTERED 
    (
        [IDName] ASC
    ) WITH 
    (
        PAD_INDEX = OFF
        , STATISTICS_NORECOMPUTE = OFF
        , IGNORE_DUP_KEY = OFF
        , ALLOW_ROW_LOCKS = ON
        , ALLOW_PAGE_LOCKS = ON
        , FILLFACTOR = 100
    ) 
);
GO

This is the execution plan for the latest version:

(execution plan image)

And this is the execution plan for the original version (deadlock susceptible):

(execution plan image)

Clearly, the new version wins!

For comparison, the intermediate version with the (XLOCK) hint, etc., produces the following plan:

(execution plan image)

I'd say that's a win! Thanks for everyone's help!

@Mark Storey-Smith 2013-03-14 23:32:33

Should indeed work but you're using SERIALIZABLE where it's not applicable. Phantom rows can't exist here, so why use an isolation level that exists to prevent them? Also, if someone calls your procedure from another or from a connection where an outer transaction was started, any further actions they initiate will proceed at SERIALIZABLE. That can get messy.

@Paul White says GoFundMonica 2014-12-24 06:46:10

SERIALIZABLE doesn't exist to prevent phantoms. It exists to provide serializable isolation semantics, i.e. the same persistent effect on the database as if the transactions involved had executed serially in some unspecified order.

@Mike DeFehr 2013-03-15 14:00:22

Not to steal Mark Storey-Smith's thunder, but he is onto something with his post above (which has, incidentally, received the most upvotes). The advice I gave Max was centered around the "UPDATE SET @variable = column = column + value" construct, which I find really cool, but which I think may be undocumented (it has to be supported, though, because it is there specifically for the TPC benchmarks).
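As a minimal illustration of that construct (using the table and sample data from the question):

```sql
DECLARE @NewID int;

-- Read-and-increment in a single atomic statement: the variable
-- assignment and the column assignment happen in one UPDATE.
UPDATE dbo.tblIDs
SET @NewID = LastID = LastID + 1
WHERE IDName = 'SomeTestID';

SELECT @NewID AS NewID;
```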

Here is a variation of Mark's answer - because you are returning the new ID value as a recordset, you can do away with the scalar variable entirely, no explicit transaction should be necessary either, and I would agree that messing around with isolation levels is unnecessary as well. The result is very clean and pretty slick...

ALTER PROC [dbo].[GetNextID]
  @IDName nvarchar(255)
  AS
BEGIN
SET NOCOUNT ON;

DECLARE @Output TABLE ([NewID] INT);

UPDATE dbo.tblIDs SET LastID = LastID + 1
OUTPUT inserted.[LastId] INTO @Output
WHERE IDName = @IDName;

IF(@@ROWCOUNT = 1)
    SELECT [NewID] FROM @Output;
ELSE
    INSERT dbo.tblIDs (IDName, LastID)
    OUTPUT INSERTED.LastID AS [NewID]
    VALUES (@IDName,1);
END

@Mark Storey-Smith 2013-03-16 14:50:13

Agreed this should be immune to deadlock but it is prone to a race condition on the insert, if you omit the transaction.

@Mark Storey-Smith 2013-03-13 19:57:05

Use of the XLOCK hint, on either your SELECT approach or the following UPDATE, should make it immune to this type of deadlock:

DECLARE @Output TABLE ([NewId] INT);
DECLARE @NewId INT;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;

UPDATE
    dbo.tblIDs WITH (XLOCK)
SET 
    LastID = LastID + 1
OUTPUT
    INSERTED.[LastId] INTO @Output
WHERE
    IDName = @IDName;

IF(@@ROWCOUNT = 1)
BEGIN
    SELECT @NewId = [NewId] FROM @Output;
END
ELSE
BEGIN
    SET @NewId = 1;

    INSERT dbo.tblIDs
        (IDName, LastID)
    VALUES
        (@IDName, @NewId);
END

SELECT [NewId] = @NewId ;

COMMIT TRANSACTION;

Will return with a couple of other variants (if not beaten to it!).

@Dale K 2019-09-13 04:13:14

While XLOCK will prevent an existing counter from being updated from multiple connections, wouldn't you need a TABLOCKX to prevent multiple connections from adding the same new counter?

@Mark Storey-Smith 2019-09-13 13:29:14

@DaleBurrell No, you'd have PK or unique constraint on IDName.
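For the original two-column version of the table (where IDName was not the key), such a constraint could look something like the sketch below; in the final version of the table above, IDName is already the clustered primary key, so nothing extra is needed:

```sql
-- With this in place, two connections racing to insert the same new
-- counter name cannot both succeed; the loser gets a duplicate-key
-- error (2627) and can fall back to an UPDATE.
ALTER TABLE dbo.tblIDs
ADD CONSTRAINT UQ_tblIDs_IDName UNIQUE (IDName);
```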

@Aaron Bertrand 2013-03-13 17:31:11

I fixed a similar deadlock in a system last year by changing this:

IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0 
  INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
ELSE
  UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;

To this:

UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
IF @@ROWCOUNT = 0
BEGIN
  INSERT ...
END

In general, selecting a COUNT just to determine presence or absence is quite wasteful. In this case since it is either 0 or 1 it's not like it's a lot of work, but (a) that habit can bleed into other cases where it will be much costlier (in those cases, use IF NOT EXISTS instead of IF COUNT() = 0), and (b) the additional scan is completely unnecessary. The UPDATE performs essentially the same check.
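For cases where a separate existence check genuinely is needed, the EXISTS form mentioned above looks like this (same table and variables as in the question):

```sql
-- EXISTS stops at the first matching row instead of counting them all.
IF NOT EXISTS (SELECT 1 FROM dbo.tblIDs WHERE IDName = @IDName)
    INSERT INTO dbo.tblIDs (IDName, LastID)
    VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs
    SET LastID = @NewID
    WHERE IDName = @IDName;
```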

Also, this looks like a serious code smell to me:

SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;

What is the point here? Why not just use an identity column or derive that sequence using ROW_NUMBER() at query time?

@Max Vernon 2013-03-13 17:35:49

Most tables we have are using an IDENTITY. This table supports some legacy code written in MS Access that would be fairly involved to retrofit. The SET @NewID= line simply increments the value stored in the table for the given ID (but you already know that). Can you expand on how I could use ROW_NUMBER()?

@Aaron Bertrand 2013-03-13 17:37:00

@MaxVernon not without knowing what LastID really means in your model. What is its purpose? The name isn't exactly self-explanatory. How does Access use it?

@Max Vernon 2013-03-13 17:39:03

A function in Access wants to add a row to any given table that does not have an IDENTITY. First Access calls GetNextID('WhatevertheIDFieldIsCalled') to get the next ID to use, then inserts it into the new row along with whatever data is needed.

@Max Vernon 2013-03-13 17:42:28

I will implement your change. A pure case of "less is more"!

@A-K 2013-03-13 18:09:06

Your fixed deadlock may re-emerge. Your second pattern is also vulnerable: sqlblog.com/blogs/alexander_kuznetsov/archive/2010/01/12/… To eliminate deadlocks I would use sp_getapplock. My mixed-load system with hundreds of users does not have deadlocks.

@Max Vernon 2013-03-13 18:23:23

I don't know if you saw, but my original code is using SERIALIZABLE (not sure if that makes any difference to your comment). Interesting that the documentation for sp_getapplock indicates that a return value of 3 is The lock request was chosen as a deadlock victim. I wonder how often THAT happens!

@Aaron Bertrand 2013-03-13 18:35:52

@AlexKuznetsov yes, it may, it is something Max will have to test. It may also emerge with MERGE (no pun intended), a change which will also have to be tested. Surely you, of all people, are not suggesting that MERGE is any less vulnerable to various bugs and concurrency issues than any of the other patterns?

@A-K 2013-03-13 18:42:09

Sure, MERGE is not free of bugs. I would suggest trying out sp_getapplock - it works for me.
