By kristof

2008-11-24 17:17:56 8 Comments

Possible Duplicate:
Split string in SQL

I have seen a couple of questions related to string concatenation in SQL. I wonder how you would approach the opposite problem: splitting a comma-delimited string into rows of data:

Let's say I have the table:

userTypedTags(userID,commaSeparatedTags) 'one entry per user

And I want to insert data into the table

userTag(userID,tagID) 'multiple entries per user

Inspired by the Which tags are not in the database? question


Thanks for the answers. Actually more than one deserves to be accepted, but I can only pick one, and the solution presented by Cade Roux with recursion seems pretty clean to me. It works on SQL Server 2005 and above.

For earlier versions of SQL Server the solution provided by miies can be used. For working with the text data type, wcm's answer will be helpful. Thanks again.


@Nathan Wheeler 2010-05-14 21:10:33

You can also achieve this effect using XML, as seen here, which removes the recursion that the other answers all seem to rely on in some fashion. The version I've written here allows for a delimiter of up to 32 characters, but that could be increased to however large it needs to be.

CREATE FUNCTION [dbo].[Split] (@sep VARCHAR(32), @s VARCHAR(MAX))
RETURNS @result TABLE (Item VARCHAR(MAX))
AS
BEGIN
    INSERT INTO @result (Item)
    SELECT r.value('.','VARCHAR(MAX)') as Item
    FROM (SELECT CONVERT(XML, N'<root><r>' + REPLACE(REPLACE(REPLACE(@s,'&','&amp;'),'<','&lt;'), @sep, '</r><r>') + '</r></root>') as valxml) x
    CROSS APPLY x.valxml.nodes('//root/r') AS RECORDS(r)

    RETURN
END

Then you can invoke it using:

SELECT * FROM dbo.Split(' ', 'I hate bunnies')

Which returns:

|I        |
|hate     |
|bunnies  |

I should note, I don't actually hate bunnies... it just popped into my head for some reason.
The following is the closest thing I could come up with using the same method in an inline table-valued function. DON'T USE IT, IT'S HORRIBLY INEFFICIENT! It's just here for reference's sake.

CREATE FUNCTION [dbo].[Split] (@sep VARCHAR(32), @s VARCHAR(MAX))
RETURNS TABLE
AS
RETURN
(
    SELECT r.value('.','VARCHAR(MAX)') as Item
    FROM (SELECT CONVERT(XML, N'<root><r>' + REPLACE(@s, @sep, '</r><r>') + '</r></root>') as valxml) x
    CROSS APPLY x.valxml.nodes('//root/r') AS RECORDS(r)
)

@p.campbell 2010-05-14 21:13:16

@md5sum: Great solution!

@Cade Roux 2010-05-14 22:06:59

@md5sum - if you could get this into an inline table-valued function, I'd be onboard. Typically scalar functions perform horribly on SQL Server. I'd love to see this benchmarked against the ITVF solution.

@Nathan Wheeler 2010-05-17 16:38:42

@Cade Roux - I'm sorry, but I don't think I understand exactly what you're asking here... this is a table-valued function. Unless you're just wanting it to return the selected value rather than inserting it into the temp table?

@Cade Roux 2010-05-17 16:59:41

@md5sum - inline table-valued functions (without a BEGIN) generally perform poorer and are handled completely differently than multi-statement table-valued functions (as this is). Of course, you never can tell until you benchmark them...

@Cade Roux 2010-05-17 17:01:28

@md5sum I see I mentioned scalar functions - I probably forgot to finish my thought - performance on UDFs generally is best on inline TVF, then multi-statement TVF, then scalar functions (which are completely horrible).

@Nathan Wheeler 2010-05-17 17:27:59

@Cade Roux - The inline version (the best incarnation I could come up with) is a complete joke in comparison. I even tried it a couple extra times just to ensure that something hadn't kicked off on the server to screw with my results. Version posted above splitting 3168 rows of "I HATE BUNNIES" consistently takes ~110ms. The inline version using the same parameters consistently takes ~1:25.175. I can post the inline version if you'd like. I may have done something horribly wrong when I built it, but as far as I can tell I did it the only proper way possible.

@Cade Roux 2010-08-26 12:54:50

@md5sum I'm late voting you up, but wanted to note that I'd be interested to see how it scales for a lot more rows. I know that even simple scalar functions can perform horribly versus inline code on hundreds of thousands of rows.

@Nathan Wheeler 2010-08-26 14:21:34

@Cade Roux - I'm not really sure. I know that I've used it for around 1000 and it performs fine. However, I would argue that SQL Server isn't the place to do that amount of string manipulation anyway (at least not in any reasonable application architecture design patterns).

@Cade Roux 2010-08-26 20:05:35

@md5sum I agree this is probably a modeling/design issue. However, if you've got terabytes of data, sometimes you don't want to pull them into yet another system to do something with them. Obviously the design of the data should be improved to facilitate the anticipated processing. Sometimes it's not so easy to stream millions of rows through something better than T-SQL. Which is why design defects are 1000x more difficult to compensate for when discovered in the production stage of the life-cycle.

@Tabloo Quijico 2010-11-10 16:38:01

@md5sum thanks for the solutions, but the first example needs the parameters renamed to @sep and @s respectively. (couldn't change it myself)

@Nathan Wheeler 2010-11-10 17:07:19

@Tabloo Quijico - amazing nobody else caught the mismatched parameter names. Thanks!

@J.B. 2011-06-24 20:10:47

Awesome solution!

@arviman 2011-10-25 20:31:09

In this example, you're limiting the text to 5 characters. So your output should return I | hate | bunni

@jkirkwood 2011-10-28 18:25:57

If you have ampersands (&) in your data, this will not work unless you string-replace them with '&amp;' first.

@Nathan Wheeler 2012-02-02 22:26:31

@arviman - Yeah, A friend of mine pointed that out... I just never got around to updating the answer here.

@Nathan Wheeler 2012-02-02 22:26:46

@jkirkwood - Yes, since you're converting it to XML, you have to convert all XML "special characters" to their appropriate escaped entities. So it would apply to < and > as well.
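A minimal sketch of that escaping step (illustrative input values, not from the thread; note that '&' must be replaced first, or the '&' inside the already-produced '&lt;' and '&gt;' entities would get double-escaped):

```sql
DECLARE @s VARCHAR(MAX) = 'cats & dogs,a<b,x>y';

-- Escape XML special characters before building the XML fragment.
SET @s = REPLACE(REPLACE(REPLACE(@s, '&', '&amp;'), '<', '&lt;'), '>', '&gt;');

-- Now the conversion to XML succeeds even with &, < and > in the data.
SELECT r.value('.', 'VARCHAR(MAX)') AS Item
FROM (SELECT CONVERT(XML, N'<root><r>' + REPLACE(@s, ',', '</r><r>') + '</r></root>') AS valxml) x
CROSS APPLY x.valxml.nodes('//root/r') AS RECORDS(r);
```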

@Ryan 2013-01-14 16:42:39

This is a great solution! I just changed the VARCHAR(5) to NVARCHAR(MAX) and it works great for me! Thanks!

@Jeremy Thompson 2013-04-17 23:55:50

It actually returns I hate bunni due to the VARCHAR(5) :)

@Nathan Wheeler 2013-04-18 16:50:48

@JeremyThompson - I could have sworn that was fixed at one time... you can see from the comments above that it was mentioned before :)

@Colin Niu 2013-11-15 10:41:05

Got bad performance when I changed it to uniqueidentifier: it cost me 2s to split 250 GUID values. Don't know why...

@Nathan Wheeler 2014-02-28 22:35:28

@ColinNiu - What did you change to uniqueidentifier? You could check the execution plan, but I'm betting that the process of converting the strings to uniqueidentifiers is what's eating your performance, rather than the process of splitting them up.

@JoshBerke 2014-08-12 22:15:29

Edit number 7 changed the first function to an inline function, which is confusing because the answer discusses the differences between the two solutions. Just in case anyone else is confused and sees this comment.

@Tim Friesen 2015-08-18 17:21:49

Strange: in my testing I found that the table-valued function version (I went into the history of this answer to get it) performed more than 8 times WORSE than the inline version. I will stick with the inline version, thank you.

@Tim Friesen 2015-08-18 17:29:19

@JoshBerke, I have edited this answer to fix the issue you mentioned.

@Tim Friesen 2015-08-18 19:12:29

Nevermind, my edit was rejected even though the goal of my edit was to bring back the original intent that the author was conveying. Poor reviewing in my opinion.

@Darren 2012-03-15 05:32:02

I up-voted Nathan Wheeler's answer, as I found Cade Roux's answer did not work above a certain string size.

A couple of points:

- I found adding the DISTINCT keyword improved performance for me.

- Nathan's answer only works if your identifiers are 5 characters or less; of course you can adjust that. If the items you are splitting are INT identifiers, as mine are, you can use the same approach as below:

CREATE FUNCTION dbo.Split
(
    @sep VARCHAR(32),
    @s VARCHAR(MAX)
)
RETURNS @result TABLE (
    Id INT NULL
)
AS
BEGIN
    DECLARE @xml XML
    SET @xml = N'<root><r>' + REPLACE(@s, @sep, '</r><r>') + '</r></root>'

    INSERT INTO @result(Id)
    SELECT DISTINCT r.value('.','int') as Item
    FROM @xml.nodes('//root//r') AS RECORDS(r)

    RETURN
END


@Zarepheth 2013-12-18 21:18:35

Depending upon what is being split, there can be negative consequences to using DISTINCT. Namely, the resulting table may be supposed to contain repeated values, but with DISTINCT it will only contain unique values.

@Marek 2012-04-24 12:43:14

I usually do this with the following code:

create function [dbo].[Split](@string varchar(max), @separator varchar(10))
returns @splited table ( stringPart varchar(max) )
with execute as caller
as
begin
    declare @stringPart varchar(max);
    set @stringPart = '';

    while charindex(@separator, @string) > 0
    begin
        set @stringPart = substring(@string, 0, charindex(@separator, @string));
        insert into @splited (stringPart) values (@stringPart);
        set @string = substring(@string, charindex(@separator, @string) + len(@separator), len(@string) + 1);
    end

    -- emit the remainder after the last separator
    insert into @splited (stringPart) values (@string);
    return;
end


You can test it with this query:

declare @example varchar(max);
set @example = 'one;string;to;rule;them;all;;';

select * from [dbo].[Split](@example, ';');

@sayap 2011-07-22 10:37:37

Using CLR, here's a much simpler alternative that works in all cases, yet is 40% faster than the accepted answer:

using System;
using System.Collections;
using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public class UDF
{
    [SqlFunction(FillRowMethodName = "FillRow")]
    public static IEnumerable RegexSplit(SqlString s, SqlString delimiter)
    {
        return Regex.Split(s.Value, delimiter.Value);
    }

    public static void FillRow(object row, out SqlString str)
    {
        str = new SqlString((string) row);
    }
}

Of course, it is still 8 times slower than PostgreSQL's regexp_split_to_table.
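For comparison, the PostgreSQL built-in mentioned above does the whole job in one call (illustrative input, same as the example earlier in this answer's thread):

```sql
-- PostgreSQL: split a string into rows on a regex delimiter.
SELECT regexp_split_to_table('I hate bunnies', E'\\s+');
```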

@Iain Samuel McLean Elder 2012-03-16 13:01:22

How did you benchmark this?

@sayap 2012-03-18 21:16:23

EXPLAIN ANALYSE with PostgreSQL, and the poor man's version of it, i.e. checking "Include Actual Execution Plan" in SSMS with SQL Server. Exact same table with millions of records in both databases.

@karaken12 2012-07-30 14:13:13

I like this (although I haven't benchmarked it yet), but you should include the TableDefinition property on the SqlFunction attribute so that data tools can generate a proper function definition.

@Tomalak 2011-03-24 17:38:29

Here's a Split function that is compatible with SQL Server versions prior to 2005.

CREATE FUNCTION dbo.Split(@data nvarchar(4000), @delimiter nvarchar(100))  
RETURNS @result table (Id int identity(1,1), Data nvarchar(4000)) 
AS
BEGIN
    DECLARE @pos   INT
    DECLARE @start INT
    DECLARE @len   INT
    DECLARE @end   INT

    SET @len   = LEN('.' + @delimiter + '.') - 2
    SET @end   = LEN(@data) + 1
    SET @start = 1
    SET @pos   = 0

    WHILE (@pos < @end)
    BEGIN
        SET @pos = CHARINDEX(@delimiter, @data, @start)
        IF (@pos = 0) SET @pos = @end

        INSERT @result (data) SELECT SUBSTRING(@data, @start, @pos - @start)
        SET @start = @pos + @len
    END

    RETURN
END

@Zarepheth 2013-12-18 21:33:21

+1 for avoiding recursion (since SQL Server does it so poorly), avoiding XML (since SQL does not have an easy API for escaping special XML characters), and also avoiding CLR code (since some companies' data centers don't allow custom code on shared SQL Server instances).

@user39603 2008-11-24 17:22:49

I use this function (SQL Server 2005 and above).

create function [dbo].[Split]
(
    @string nvarchar(4000),
    @delimiter nvarchar(10)
)
returns @table table
(
    [Value] nvarchar(4000)
)
as
begin
    declare @nextString nvarchar(4000)
    declare @pos int, @nextPos int

    set @nextString = ''
    set @string = @string + @delimiter

    set @pos = charindex(@delimiter, @string)
    set @nextPos = 1
    while (@pos <> 0)
    begin
        set @nextString = substring(@string, 1, @pos - 1)

        insert into @table ([Value]) values (@nextString)

        set @string = substring(@string, @pos + len(@delimiter), len(@string))
        set @nextPos = @pos
        set @pos = charindex(@delimiter, @string)
    end
    return
end

@kristof 2008-11-25 12:47:57

Thanks for that. I think that will also work in SQL Server 2000.

@user39603 2008-11-26 12:24:21

You're right. I thought table-valued functions were introduced with SQL Server 2005, but they weren't.

@David 2010-05-04 00:31:29

@commaCheck is not used, other than an assignment, so it can be removed.

@David 2010-05-04 00:43:11

Also, set @string = substring(@string, @pos + 1, len(@string)) should be set @string = substring(@string, @pos + len(@delimiter), len(@string))

@Martin Smith 2010-07-31 12:18:37

For the particular case of splitting strings into words I've come across another solution for SQL Server 2008.

WITH testTable AS
(
SELECT 1 AS Id, N'how now brown cow' AS txt UNION ALL
SELECT 2, N'she sells sea shells upon the sea shore' UNION ALL
SELECT 3, N'red lorry yellow lorry' UNION ALL
SELECT 4, N'the quick brown fox jumped over the lazy dog'
)
SELECT display_term, COUNT(*) AS Cnt
 FROM testTable
CROSS APPLY sys.dm_fts_parser('"' + txt + '"', 1033, 0, 0)
GROUP BY display_term
ORDER BY Cnt DESC


display_term                   Cnt
------------------------------ -----------
the                            3
brown                          2
lorry                          2
sea                            2

@quetzalcoatl 2013-01-08 12:32:16

Interesting, but one must note that it requires "Full-Text Search" to be installed & available

@Martin Smith 2013-01-08 12:37:09

@quetzalcoatl - And it requires sysadmin permissions as well. Still might be useful for somebody.

@Peter PitLock 2014-10-15 09:49:08

Absolutely perfect. This is like the opposite of the "Stuff" command. No need for testing a barrage of split functions only to doubt whether you really want to install it on production. Perfect for my requirements. Thanks!

@Rory 2009-10-17 16:49:36

Slight modification of the solution above so it works with variable-length delimiters.

CREATE FUNCTION dbo.fn_Split2 (@sep nvarchar(10), @s nvarchar(4000))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
      SELECT 1, 1, CHARINDEX(@sep, @s)
      UNION ALL
      SELECT pn + 1, stop + (datalength(@sep)/2), CHARINDEX(@sep, @s, stop + (datalength(@sep)/2))
      FROM Pieces
      WHERE stop > 0
    )
    SELECT pn,
      SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop-start ELSE 4000 END) AS s
    FROM Pieces
)

NB: I've used datalength() since len() reports incorrectly if there are trailing spaces.

@Cade Roux 2008-11-24 18:01:53

There is a wide variety of solutions to this problem documented here, including this little gem:

CREATE FUNCTION dbo.Split (@sep char(1), @s varchar(512))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
      SELECT 1, 1, CHARINDEX(@sep, @s)
      UNION ALL
      SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
      FROM Pieces
      WHERE stop > 0
    )
    SELECT pn,
      SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop-start ELSE 512 END) AS s
    FROM Pieces
)

@Rory 2009-10-17 16:47:02

Fantastic function. Could do with using nchar() and nvarchar(). Also see below for suggestion with variable-length delimiter.

@Mike Schenk 2010-02-12 17:23:17

On SQL Server 2008, this version fails when there are more than 101 items in the list: "The statement terminated. The maximum recursion 100 has been exhausted before statement completion."

@Cade Roux 2010-02-12 18:20:07

@MikeSchenk You can use the OPTION (MAXRECURSION n) hint to change the level of recursion - however, it's not allowed in UDF definitions. This question (…) would imply that you can specify it outside the UDF and have it still work.

@cbp 2010-04-14 05:39:46

Warning: fails with larger input strings (above about 1000 characters). "The maximum recursion 100 has been exhausted before statement completion."

@Nathan Wheeler 2010-05-14 21:11:03

I realize I'm jumping into this incredibly late... like 1.5 years, but there's a pretty interesting solution that doesn't require any recursion that wasn't mentioned here. I've posted it below.

@Lajos Arpad 2012-12-21 15:36:38

How can I call this function from a select query? I get the following error: Cannot find either column "dbo" or the user-defined function or aggregate "dbo.fn_Split", or the name is ambiguous.

@Cade Roux 2012-12-21 20:40:48

@LajosArpad You should post your entire example (with the actual code you are trying) in another question.

@Yann Semet 2008-11-24 17:53:15

SELECT substring(commaSeparatedTags,0,charindex(',',commaSeparatedTags))

will give you the first tag. You can proceed similarly to get the second one and so on, by combining substring and charindex one layer deeper each time. That's an immediate solution, but it works only with very few tags, as the query grows quickly in size and becomes unreadable. Then move on to functions, as outlined in other, more sophisticated answers to this post.
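The "one layer deeper" step can be sketched like this, using the table from the question (illustrative only; it already shows why the nesting becomes unreadable past two or three tags):

```sql
-- First tag:
SELECT substring(commaSeparatedTags, 0, charindex(',', commaSeparatedTags))
FROM userTypedTags;

-- Second tag: strip the first tag off, then apply the same pattern to the remainder.
SELECT substring(rest, 0, charindex(',', rest))
FROM (
    SELECT substring(commaSeparatedTags,
                     charindex(',', commaSeparatedTags) + 1,
                     len(commaSeparatedTags)) AS rest
    FROM userTypedTags
) t;
```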

@wcm 2008-11-24 17:35:20

I wrote this a while back. It assumes the delimiter is a comma and that the individual values aren't bigger than 127 characters. It could be modified pretty easily.

It has the benefit of not being limited to 4,000 characters.

Good luck!

ALTER Function [dbo].[SplitStr] ( 
        @txt text 
) 
Returns @tmp Table 
        ( 
                value varchar(127) 
        ) 
AS 
BEGIN 
        declare @str varchar(8000) 
                , @Beg int 
                , @last int 
                , @size int 

        set @size=datalength(@txt) 
        set @Beg=1 

        set @str=substring(@txt,@Beg,8000) 
        IF len(@str)<8000 set @Beg=@size 
        ELSE BEGIN 
                set @last=charindex(',', reverse(@str)) 
                set @str=substring(@txt,@Beg,8000-@last) 
                set @Beg=@Beg+8000-@last+1 
        END 

        declare @workingString varchar(25) 
                , @stringindex int 

        while @Beg<=@size BEGIN 
                WHILE LEN(@str) > 0 BEGIN 
                        SELECT @StringIndex = CHARINDEX(',', @str) 

                        SELECT 
                                @workingString = CASE 
                                        WHEN @StringIndex > 0 THEN SUBSTRING(@str, 1, @StringIndex-1) 
                                        ELSE @str 
                                END 

                        INSERT INTO @tmp (value) 
                        VALUES 
                                (cast(rtrim(ltrim(@workingString)) as varchar(127)))

                        SELECT @str = CASE 
                                WHEN CHARINDEX(',', @str) > 0 THEN SUBSTRING(@str, @StringIndex+1, LEN(@str)) 
                                ELSE '' 
                        END 
                END 
                set @str=substring(@txt,@Beg,8000) 

                if @Beg=@size set @Beg=@Beg+1 
                else IF len(@str)<8000 set @Beg=@size 
                ELSE BEGIN 
                        set @last=charindex(',', reverse(@str)) 
                        set @str=substring(@txt,@Beg,8000-@last) 
                        set @Beg=@Beg+8000-@last+1 
                END 
        END 

        RETURN 
END 

@321X 2012-06-05 12:58:45

Great + not being limited!! Thanks
