By Vivekh


2014-12-30 12:11:33 8 Comments

Is there any way to determine why it takes so long, about 3 minutes, to delete 50K records from a database hosted in SQL Azure? I have a clustered index on the primary key and a non-clustered index on the ID column used in the WHERE clause of the DELETE query:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
BEGIN
    -- @ID holds the ScenarioID whose rows are being removed
    DELETE TOP (100) FROM [table] WHERE ScenarioID = @ID
    SET @RowsDeleted = @@ROWCOUNT
END

Is there any way to know whether there are any locks on this table?

indexes:

ALTER TABLE [dbo].[ActCost] ADD  CONSTRAINT [PK_ActCost] PRIMARY KEY CLUSTERED 
(
    [ActCostID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON)
GO



CREATE NONCLUSTERED INDEX [IX_ActCost_ScenarioID] ON [dbo].[ActCost] ([ScenarioID] ASC)
INCLUDE ( <list_all_other_columns_returned> );
GO


CREATE TABLE [dbo].[ActCost](
    [ActCostID] [int] IDENTITY(1,1) NOT NULL,
    [ActID] [int] NOT NULL,
    [ActCostTypeID] [int] NOT NULL,
    [Description] [nvarchar](200) NOT NULL,
    [Cost] [float] NOT NULL,
    [CostPerProductUnit] [float] NOT NULL,
    [CostPerEndProductUnit] [float] NOT NULL,
    [OtherValue] [float] NULL,
    [OtherID] [int] NULL,
    [Comment1] [nvarchar](200) NULL,
    [Comment2] [nvarchar](200) NULL,
    [OPerProductUnit] [float] NULL,
    [OPerHour] [float] NULL,
    [OCostPerUnit] [float] NULL,
    [OCostPerHour] [float] NULL,
    [PerfEnh_ProcessID] [int] NOT NULL,
    [PerfEnh_MillID] [int] NOT NULL,
    [ScenarioID] [int] NOT NULL,
 CONSTRAINT [PK_ActCost] PRIMARY KEY CLUSTERED 
(
    [ActCostID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON)
)

GO

Azure space: 2 GB available, 500 MB used

I have another database with the same data but more available space, and there the same job works fine, taking just 20-30 seconds. Is that going to be a factor?
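(For reference, the allocated vs. used numbers can be checked from inside the database itself; this is just a sketch, assuming sys.dm_db_partition_stats and sp_spaceused are available on your SQL Database version:)

-- Approximate space used by the whole database, in MB (pages are 8 KB)
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_mb
FROM sys.dm_db_partition_stats;

-- Space used by the ActCost table specifically
EXEC sp_spaceused N'dbo.ActCost';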


Blocking inspection: (screenshot of the blocking DMV output)


@Aaron Bertrand 2014-12-30 15:05:14

The title says blocking, but the question says locking. I'll deal with the former since the latter should be a given: yes, locks need to be taken in order to perform a delete.

To check for blocking, in the window where you're performing the deletes:

SELECT @@SPID; -- make note of this

In another window:

SELECT blocking_session_id, wait_type
  FROM sys.dm_exec_requests
  WHERE session_id = <spid from above>;

I know this DMV was available in the last two SQL Database refreshes, but I'm not quite sure what version you're on.
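If in doubt, a quick check like this should tell you what you're running on (a sketch; these server properties are exposed on SQL Database):

SELECT @@VERSION AS version_string,
       SERVERPROPERTY('EngineEdition') AS engine_edition,   -- 5 = Azure SQL Database
       SERVERPROPERTY('ProductVersion') AS product_version;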

Now, if you get a blocker, check what they are doing.

SELECT r.[status], r.command, r.blocking_session_id,
    r.wait_type, query = SUBSTRING(CONVERT(NVARCHAR(MAX), t.[text]),
    r.statement_start_offset/2 + 1,   -- offsets are byte-based; SUBSTRING is 1-based
    (COALESCE(NULLIF(r.statement_end_offset, -1),
    DATALENGTH(t.[text])) - r.statement_start_offset)/2 + 1)
  FROM sys.dm_exec_requests AS r
  OUTER APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS t
  WHERE r.session_id = <blocking session>;

You can't run DBCC INPUTBUFFER in this case because in SQL Database this is somewhat protected - you can only run this against your own @@SPID.
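(The own-session form is still fine, for what it's worth:)

DBCC INPUTBUFFER(@@SPID);  -- allowed: inspecting your own session's last batch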

It might also be that log file autogrowth is the problem - if the clustered index delete here actually has to go and delete rows from many indexes and/or indexed views, the problem could be a log bottleneck. My usual go-to here is the default trace, but I don't believe you have access to that in Azure.

Thankfully in your scenario we can see that this is the case due to the wait type shown - your query is actually being slowed down intentionally by the system because you're generating too much log activity. I'm not sure exactly how this algorithm works, but if it is happening when you delete 100 rows, it may be time to migrate to an instance with better "guaranteed" performance (which may mean paying more). Simply spinning up a new instance on the same plan may even help, as you'll likely end up on newer generation hardware which should naturally perform log operations quicker and may help you avoid hitting the governor.
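To confirm the throttling while the delete is running, watch the wait type on that session and the database-level wait stats (a sketch; both DMVs exist in SQL Database, and <spid of the delete window> is the value you noted earlier):

-- What is the delete session waiting on right now?
SELECT session_id, [status], wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id = <spid of the delete window>;

-- Cumulative waits for this database; look for LOG_RATE_GOVERNOR near the top
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;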

You should also be committing transactions at some level of frequency within the loop. Otherwise chunking doesn't really help because you're likely still holding the locks on the first 100 rows while you're trying to delete the last 100 rows. Committing smaller chunks more frequently will have less impact on the log and potentially on locking/blocking too.

DECLARE @RowsDeleted INT = 1, @counter INT = 1;

BEGIN TRANSACTION;

WHILE (@RowsDeleted > 0)
BEGIN
    DELETE TOP (100) FROM dbo.[table] WHERE ScenarioID = @ID;
    SELECT @RowsDeleted = @@ROWCOUNT, @counter += 1;

    -- commit every 10 chunks so locks and log records are released periodically
    IF @counter % 10 = 0
    BEGIN
      COMMIT TRANSACTION;
      BEGIN TRANSACTION;
    END
END

COMMIT TRANSACTION;

You can also put a waitfor in the loop - maybe pausing long enough will prevent tripping the log rate governor.
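For example (just a sketch - the delay length is arbitrary and worth tuning):

-- same chunked loop, with a pause between chunks
WHILE (@RowsDeleted > 0)
BEGIN
    DELETE TOP (100) FROM dbo.[table] WHERE ScenarioID = @ID;
    SET @RowsDeleted = @@ROWCOUNT;
    WAITFOR DELAY '00:00:00.500';  -- pause half a second; adjust as needed
END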

@Vivekh 2014-12-30 16:21:46

No locking, I guess, as the 2nd query returned nothing, and using commit transactions also didn't help much.

@Aaron Bertrand 2014-12-30 16:28:49

Returned nothing or returned blocking_session_id = 0? Make sure you run it multiple times, because you might have run it in between the actual DELETE operations. And make sure you pass the right session_id.

@Vivekh 2014-12-30 16:34:09

Yes, I have used the correct session ID, and no, it doesn't return anything - 0 rows.

@Aaron Bertrand 2014-12-30 16:44:53

Then you aren't running it while the delete is actually happening, sorry.

@Vivekh 2014-12-30 17:27:07

I have run it while the delete was running, I mean along with the delete.

@Aaron Bertrand 2014-12-30 22:54:07

@Vivekh I don't know how else to say this, but if you ran it from a different window with the original window's spid and while the delete was running, there's no way it should return 0 rows. One of these things is not true.


@Kris Gruttemeyer 2014-12-30 13:04:34

3 minutes for 50k records using a 'batch style' delete is actually not too bad. It's also a great way to minimize locking/blocking. Any time you run a DELETE operation, there will be locks. What you have done here is minimize that by only doing 100 records at a time.

So, to answer your question, yes, there are locks taken on the table while the delete operation is running. Those locks are minimized because you are doing the delete in 100-record batches. Remember that when you update a table (any delete/update/insert operation, really), you also have to go make those same changes on the associated indexes.
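If you want to actually watch those locks while the loop runs, something like this from a second window should show them (a sketch; sys.dm_tran_locks is available on SQL Database):

-- Summary of locks currently held or requested in this database, per session
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY request_session_id, resource_type, request_mode, request_status
ORDER BY request_session_id;

With 100-row chunks you would expect to see a modest number of KEY/PAGE locks plus intent locks on the object, rather than a table-level exclusive lock - which is exactly the point of batching.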
