By Abe Miessler


2018-11-12 23:49:56 8 Comments

I'm attempting to add a NOT NULL constraint to a table I have (so I can use the column as part of a primary key, so I can create a spatial index...), but it is taking a really long time: 1 hour 31 minutes and counting.

I had read this post, where the answer seemed to be that it was taking so long because the person was adding a default value, but I'm not doing that:

ALTER TABLE unacastVisitRaw ALTER COLUMN timestamp varchar(64) NOT NULL

The table is rather large (290 million records), but that still seems like a long time for just adding a constraint and not changing any values.

Is there anything I can do to speed up this operation? Why would it be taking so long if I'm not updating the existing records?

UPDATE: Here are the wait stats for the session:

[screenshot: session wait stats]

2 Answers

@Joe Obbish 2018-11-18 04:12:58

I'm testing on a slightly different version than you (2019 developer), but I've been told that the storage engine internals are mostly the same. Setting aside trivial causes like blocking (a quick check for that is sketched after the list below), I'd say there are three main reasons an ALTER TABLE statement might run longer than you'd prefer:

  1. The underlying table is large and there isn't a covering index available.
  2. The ALTER TABLE statement requires most or all pages to be modified.
  3. The ALTER TABLE statement has a lot of work to do and must run at MAXDOP 1.
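To rule out the trivial case before digging into the other three, here is a minimal blocking check; it assumes the ALTER TABLE is running in a separate session and uses only documented columns of sys.dm_exec_requests:

-- List requests that are currently blocked and the session blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

If that returns nothing for the session running the ALTER, blocking isn't the problem.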

Your ALTER TABLE statement hits all three. Some ALTER TABLE operations require all pages, or even all rows, to be modified, and changing a column definition from NULL to NOT NULL is one of those. I couldn't find a good way to show this directly, so I'll resort to undocumented commands which may not work on your version. First, create a small table:

DROP TABLE IF EXISTS dbo.ALTER_ME_FAST;

CREATE TABLE dbo.ALTER_ME_FAST (
    COL1 BIGINT NULL,
    COL2 BIGINT NULL,
    COL3 BIGINT NULL,
    COL4 BIGINT NULL,
    COL5 BIGINT NULL,
    COL6 BIGINT NULL,
    COL7 BIGINT NULL,
    COL8 BIGINT NULL,
    COL9 BIGINT NULL
);

INSERT INTO dbo.ALTER_ME_FAST WITH (TABLOCK)
SELECT TOP (687) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 0, 0, 0, 0, 0, 0, 0, 0
FROM master..spt_values;

I can see all of the allocated pages with the undocumented DMV sys.dm_db_database_page_allocations (using DB_ID() here so the query runs in whatever database holds the table):

SELECT allocated_page_file_id, allocated_page_page_id, page_free_space_percent
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.ALTER_ME_FAST'), 0, NULL, 'DETAILED')
WHERE page_type = 1; -- data pages only

The page ids stay the same after running this command:

ALTER TABLE dbo.ALTER_ME_FAST ALTER COLUMN COL5 BIGINT NOT NULL;

A before and after diff using DBCC PAGE shows that the old column was dropped and replaced with a new column for all rows:

[screenshot: DBCC PAGE before/after diff]
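If you want to reproduce that diff yourself, a rough sketch follows; DBCC PAGE is undocumented, the database name is a placeholder, and the file id and page id (1 and 312 here, purely illustrative) should come from the allocation query above:

-- Route DBCC output to the client instead of the error log
DBCC TRACEON(3604);

-- Dump one data page with per-row detail (print option 3);
-- substitute your own database name, file id, and page id
DBCC PAGE('YourDatabase', 1, 312, 3);

Running this before and after the ALTER COLUMN and diffing the output produces the comparison shown above.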

On my machine, doing the same thing on a table with 25 million rows takes 18 seconds. Here are the session wait stats for me:

[screenshot: session wait stats]

In my case, the resources available to the query aren't getting throttled. It simply has a lot of CPU work to do. SQL Server isn't giving us a lot of clues, so to understand where all of that CPU time is going we need to look deeper. One way to do that is through ETW tracing. Using PerfView during the ALTER TABLE operation shows the call stacks that use the most CPU cycles all relate to changing data on the pages of the heap:

[screenshot: PerfView CPU call stacks]
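You can watch the same thing from inside SQL Server while the statement runs: if cpu_time climbs roughly in step with total_elapsed_time, the statement is CPU-bound rather than waiting on resources. A small sketch, with 121 standing in for the session id of the ALTER:

-- Compare CPU time to elapsed time for the session running the ALTER
SELECT session_id, cpu_time, total_elapsed_time, logical_reads, writes
FROM sys.dm_exec_requests
WHERE session_id = 121; -- hypothetical session id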

This might seem unhelpful, but it's not. On my server I don't have the option of reducing throttling to make the ALTER TABLE faster. One way to look at it: if I need to modify all of the rows anyway, why not do it in parallel instead of serially? The following code ends up with the same result, but it can run in parallel:

DROP TABLE IF EXISTS dbo.ALTER_ME_NEW;

CREATE TABLE dbo.ALTER_ME_NEW (
    COL1 BIGINT NULL,
    COL2 BIGINT NULL,
    COL3 BIGINT NULL,
    COL4 BIGINT NULL,
    COL5 BIGINT NOT NULL,
    COL6 BIGINT NULL,
    COL7 BIGINT NULL,
    COL8 BIGINT NULL,
    COL9 BIGINT NULL
);

INSERT INTO dbo.ALTER_ME_NEW WITH (TABLOCK)
SELECT *
FROM dbo.ALTER_ME_SLOW -- the 25-million-row test table mentioned above
OPTION (MAXDOP 4);

On my machine the new code takes only six seconds. In practice, coming out ahead with such a technique requires sufficient server resources, a maintenance window, and a table structure that lends itself well to it. You mentioned in your question that you need to change the primary key. If your primary key is your clustered index then you'll need to rewrite the table's data again anyway, so making a copy in a heap might not be that bad. You can then create the primary key and clustered index that you wanted, in parallel, after the insert has finished, as sketched below.
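A minimal sketch of that last step; the constraint name and key column are placeholders, and MAXDOP is a standard index option on the constraint build:

-- Build the clustered primary key after the parallel load;
-- constraint name and key column are illustrative
ALTER TABLE dbo.ALTER_ME_NEW
ADD CONSTRAINT PK_ALTER_ME_NEW PRIMARY KEY CLUSTERED (COL5)
WITH (MAXDOP = 4);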

I can't say for sure if the parallel insert approach is appropriate for your scenario. If you are already getting throttled with just an ALTER TABLE statement then it probably won't be. But it could be a good option to consider in the future if you need similar operations to be as fast as possible.

@eckes 2018-11-18 20:39:26

Any idea why the nullability change isn't metadata-only, and where to find out which changes require a rewrite?

@David Browne - Microsoft 2018-11-13 00:41:49

Why would it be taking so long?

You can look at the session wait stats to see. For example:

declare @session_id int = 121;

-- Top waits accumulated by the session
select top 10 *
from sys.dm_exec_session_wait_stats
where session_id = @session_id
order by wait_time_ms desc;

-- Current state of the running request
select command, wait_type, wait_time, last_wait_type, cpu_time, writes, total_elapsed_time
from sys.dm_exec_requests
where session_id = @session_id;

where 121 is the session id of the session running the query.
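If you don't know the session id offhand, one way to find it is to look for the running ALTER; the filter below is just an assumption about how the command shows up:

-- Find sessions currently executing an ALTER statement
SELECT session_id, status, command, start_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE command LIKE 'ALTER%';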

Why would it be taking so long if I'm not updating the existing records?

It probably wouldn't be taking so long if it weren't touching the existing records. Changing the nullability is a change to the data type, not a constraint, and it appears to require a rewrite of your table.
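As an aside, you can see that nullability lives in the column metadata rather than in a constraint; a minimal check, with the table and column names taken from the question:

-- Nullability is recorded per column in the catalog, not as a constraint
SELECT name, is_nullable
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.unacastVisitRaw')
  AND name = N'timestamp';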

@Abe Miessler 2018-11-13 00:48:49

Thanks for the tip - I updated my question with the stats. Does that look like it's rewriting the table? Is there any way to avoid that?

@David Browne - Microsoft 2018-11-13 15:52:57

It looks like it is rewriting the table. In the future you can scale up to speed up such operations.
