By Jeff Atwood

2008-12-03 16:16:43 8 Comments

How do I parameterize a query containing an IN clause with a variable number of arguments, like this one?

WHERE Name IN ('ruby','rails','scruffy','rubyonrails')

In this query, the number of arguments could be anywhere from 1 to 5.

I would prefer not to use a dedicated stored procedure for this (or XML), but if there is some elegant way specific to SQL Server 2008, I am open to that.


@Lukasz Szozda 2016-05-02 10:20:19

In SQL Server 2016+ you can use the STRING_SPLIT function:

DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails';

SELECT *
FROM Tags
WHERE Name IN (SELECT [value] FROM STRING_SPLIT(@names, ','));


Or with a join:

DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails';

SELECT t.*
FROM Tags t
JOIN STRING_SPLIT(@names, ',') s
  ON t.Name = s.[value];


The accepted answer will of course work, and it is one way to go, but it is an anti-pattern.

E. Find rows by list of values

This is a replacement for common anti-patterns such as creating a dynamic SQL string in the application layer or in Transact-SQL, or using the LIKE operator:

SELECT ProductId, Name, Tags
FROM Product
WHERE ',1,2,3,' LIKE '%,' + CAST(ProductId AS VARCHAR(20)) + ',%';


To improve the STRING_SPLIT table function's row estimation, it is a good idea to materialize the split values into a temporary table or table variable:

DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails,sql';

CREATE TABLE #t (val NVARCHAR(100) PRIMARY KEY);

INSERT INTO #t(val) SELECT s.[value] FROM STRING_SPLIT(@names, ',') s;

SELECT tg.*
FROM Tags tg
JOIN #t t
  ON t.val = tg.Name

SEDE - Live Demo

Related: How to Pass a List of Values Into a Stored Procedure

The original question specified SQL Server 2008. Because this question is often used as a duplicate target, I've added this answer as a reference.

@Daniel 2019-06-28 14:12:27

I haven't perf tested this, but I feel like this is the cleanest 2016+ solution. I'd still like to be able to just pass an array of int, but until then...

@Adel Mourad 2019-07-12 20:47:05

Step 1: Build a pipe-delimited string in the application code:

string[] Ids = new string[] { "3", "6", "14" };
string IdsSP = string.Format("'|{0}|'", string.Join("|", Ids));

Step 2: Declare a string parameter on the stored procedure:

@CurrentShipmentStatusIdArray [nvarchar](255) = NULL

Step 3: Use the parameter in the WHERE clause:

Where @CurrentShipmentStatusIdArray is null or @CurrentShipmentStatusIdArray LIKE '%|' + convert(nvarchar, Shipments.CurrentShipmentStatusId) + '|%'

If the column is already a string type, the CONVERT is unnecessary:

Where @CurrentShipmentStatusIdArray is null or @CurrentShipmentStatusIdArray LIKE '%|' + Shipments.CurrentShipmentStatusId + '|%'

@Mili 2019-02-25 16:57:00

In SQL SERVER 2016 or higher you can use STRING_SPLIT.

DECLARE @InputParameters VARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails'
DECLARE @Delimiter VARCHAR(10) = ','

SELECT T.*
FROM Tags T
INNER JOIN STRING_SPLIT(@InputParameters, @Delimiter) SS ON T.Name = SS.value
ORDER BY Count DESC

I use this because a join is sometimes faster than the LIKE operator in my queries. In addition, you can pass an unlimited number of inputs in whatever delimited format you like.

@Bryan 2016-01-28 04:07:52

You can do this in a reusable way by doing the following -

public static class SqlWhereInParamBuilder
{
    public static string BuildWhereInClause<T>(string partialClause, string paramPrefix, IEnumerable<T> parameters)
    {
        string[] parameterNames = parameters.Select(
            (paramText, paramNumber) => "@" + paramPrefix + paramNumber.ToString()).ToArray();

        string inClause = string.Join(",", parameterNames);
        string whereInClause = string.Format(partialClause.Trim(), inClause);

        return whereInClause;
    }

    public static void AddParamsToCommand<T>(this SqlCommand cmd, string paramPrefix, IEnumerable<T> parameters)
    {
        string[] parameterValues = parameters.Select((paramText) => paramText.ToString()).ToArray();

        string[] parameterNames = parameterValues.Select(
            (paramText, paramNumber) => "@" + paramPrefix + paramNumber.ToString()).ToArray();

        for (int i = 0; i < parameterNames.Length; i++)
        {
            cmd.Parameters.AddWithValue(parameterNames[i], parameterValues[i]);
        }
    }
}

For more details have a look at this blog post - Parameterized SQL WHERE IN clause c#

@Gowdhaman008 2012-11-23 18:13:47

If the values are passed as a single comma-delimited string, we can use the CHARINDEX function to match them. If you use .NET, you can map the string with a SqlParameter.

DDL Script:

CREATE TABLE Tags
    ([ID] int, [Name] varchar(20));

INSERT INTO Tags
    ([ID], [Name])
VALUES
    (1, 'ruby'),
    (2, 'rails'),
    (3, 'scruffy'),
    (4, 'rubyonrails');


DECLARE @Param nvarchar(max)

SET @Param = 'ruby,rails,scruffy,rubyonrails'

SELECT *
FROM Tags
WHERE CharIndex(Name, @Param) > 0

You can use the above statement in your .NET code and map the parameter with SqlParameter.

SQL Fiddle demo

EDIT: Create the table called SelectedTags using the following script.

DDL Script:

Create table SelectedTags
(Name nvarchar(20));

INSERT INTO SelectedTags values ('ruby'),('rails')


DECLARE @list nvarchar(max)
SELECT @list = coalesce(@list + ',', '') + st.Name FROM SelectedTags st

SELECT *
FROM Tags
WHERE CharIndex(Name, @list) > 0

@John Saunders 2012-11-30 23:34:27

Can you show an example of this working where there is not a hard-coded list of possible values?

@Gowdhaman008 2012-12-03 13:32:09

@JohnSaunders, I have edited the script without using any hardcoded list. Please verify.

@Richard Vivian 2014-05-17 07:38:30

One limitation with this option: CHARINDEX matches substrings, while IN matches only exact terms. CHARINDEX on "Stack" will match the term "StackOverflow"; IN will not. There is a minor tweak to this answer using PATINDEX that encloses names in delimiters ('<' + name + '>') to overcome this limitation. A creative solution to this problem, though.

@Joel Spolsky 2008-12-03 16:41:17

Here's a quick-and-dirty technique I have used:

SELECT * FROM Tags
WHERE '|ruby|rails|scruffy|rubyonrails|'
LIKE '%|' + Name + '|%'

So here's the C# code:

string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
const string cmdText = "select * from tags where '|' + @tags + '|' like '%|' + Name + '|%'";

using (SqlCommand cmd = new SqlCommand(cmdText)) {
   cmd.Parameters.AddWithValue("@tags", string.Join("|", tags));
}

Two caveats:

  • The performance is terrible. LIKE "%...%" queries are not indexed.
  • Make sure you don't have any tags containing |, or any blank or null tags, or this won't work

There are other ways to accomplish this that some people may consider cleaner, so please keep reading.
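As a sanity check, the technique can be exercised end-to-end against an in-memory SQLite database (used here purely as a stand-in for SQL Server; note that SQLite concatenates strings with || where T-SQL uses +, and the table contents are illustrative):

```python
import sqlite3

# In-memory stand-in for the Tags table (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT)")
conn.executemany("INSERT INTO Tags VALUES (?)",
                 [("ruby",), ("rails",), ("python",), ("rubyonrails",)])

tags = ["ruby", "rails", "scruffy", "rubyonrails"]
# A single parameter holds the whole pipe-delimited list.
sql = ("SELECT Name FROM Tags "
       "WHERE '|' || ? || '|' LIKE '%|' || Name || '|%'")
rows = sorted(r[0] for r in conn.execute(sql, ("|".join(tags),)))
print(rows)  # ['rails', 'ruby', 'rubyonrails']
```

Only the rows whose Name appears between pipes in the parameter come back; "python" is filtered out because '%|python|%' never matches the delimited list.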

@Matt Rogish 2008-12-03 16:43:31

That will be hella slow

@Jeff Atwood 2008-12-03 16:48:04

yeah, it is 10x slower, but it's very easily parameterized, heh. Not sure how much faster it would be to call fnSplit() as proposed by Longhorn213's answer

@Will Hartung 2008-12-03 16:48:30

Yes, this is a table scan. Great for 10 rows, lousy for 100,000.

@StingyJack 2008-12-03 16:50:38

@Matt, I agree. The method from Mark Brackett will likely scale better.

@Mike Shepard 2008-12-03 16:51:36

I agree...this is a good solution for a small table. Doesn't require any temp tables or a bunch of parameters.

@Tony Andrews 2008-12-03 16:51:47

Longhorn213's fnSplit function would be called once, taking a little time, but is then able to take advantage of an index on Tags.Name. Joel's solution probably requires a full scan of Tags, which may be slow for a big table. Having said that, I do use Joel's method myself for small tables.

@Joel Coehoorn 2008-12-03 17:16:29

Make sure you test on tags that have pipes in them.

@tvanfosson 2008-12-03 20:14:44

This doesn't even answer the question. Granted, it's easy to see where to add the parameters, but how can you accept this as a solution if it doesn't even bother to parameterize the query? It only looks simpler than @Mark Brackett's because it isn't parameterized.

@Matt Rogish 2008-12-03 20:37:22

tvanfosson: Good point. You're not using parameters, but actually still just strings...

@Jeff Atwood 2008-12-03 21:10:22

"Granted, it's easy to see where to add the parameters" it's like the np-complete thing.. we've reduced the query to a typical form which is trivial to parameterize. The problem with IN is the inherent variability, how many INs can we have? 50? 1000? 10000?

@tvanfosson 2008-12-05 17:40:06

Apparently in MS-SQL the number is so large that they don't say what it is. If you're getting upwards of 10K, then the table join solution is probably better. This particular query is just going to keep getting worse and worse as the number increases. Imagine scanning a 50K char string each time.

@Joel Coehoorn 2008-12-11 14:26:30

In this case, we're obviously talking about tags, and the SO system limits you to 5 total, so it probably won't be that bad.

@Mark Brackett 2008-12-18 14:05:17

@Joel - there's actually 2 inefficiencies in this solution. The parsing of the char string (the '|' + @tags + '|'), and the size of the table - since this needs a table scan. The former shouldn't be an issue with SO's tag system, but the latter certainly could be (there's about 16500 tags now)

@Robert C. Barth 2008-12-18 17:29:48

I've used this method with success in the past. I've also tested it. On a "typical" table of 500k rows, this method takes about four seconds. You can optimize by pre-creating the piped parameter and storing it as a field. Doing so reduces the query time by about half.

@spencer7593 2009-05-29 21:27:06

@Joel: Clever, and it works. So what if it's going to do an index scan, performance only has to be "good enough". Not knowing the constraints on the Name column, I'm going to consider the edge cases (null, empty string, contains pipe character), as well as the obscure corner case, a Name value containing a wildcard e.g. 'pe%ter' is going to match '|peanut|butter|' but not '|butter|peanut|'. (Yes, it's an obscure case, one that isn't going to be tested in QA, but will get exercised in production.) It's a fairly easy workaround (in some DBMS) to escape the wildcards.

@A-K 2009-08-19 22:21:57

What if your tag is 'ruby|rails'? It will match, which would be wrong. When you roll out such solutions, you need to either make sure tags do not contain pipes, or explicitly filter them out: select * from Tags where '|ruby|rails|scruffy|rubyonrails|' like '%|' + Name + '|%' AND Name not like '%|%'

@Stephen Holt 2015-09-14 10:19:47

Agree with the above comments... this is not a full or complete answer to the problem. If you catered for the case where the string contains pipes (which you can, using the above approach, but it's a bit more complex), then the answer would be better.

@Major 2018-01-03 14:34:22

Working with strings in SQL is very slow. You should avoid it.

@guru008 2017-07-30 18:44:15

Create a temp table where names are stored, and then use the following query:

select * from Tags 
where Name in (select distinct name from temp)
order by Count desc

@Kurt Miller 2018-09-24 10:01:08

Hmm... I see a temp table solution here. Where I work, when you encounter a difficulty that you don't understand or know how to properly circumvent, we simply create a table field (a flag) or a temp table... Do such solutions make you a better software engineer? That's the question.

@Marc Gravell 2011-06-15 11:04:06

If you are calling from .NET, you could use Dapper dot net:

string[] names = new string[] {"ruby","rails","scruffy","rubyonrails"};
var tags = dataContext.Query<Tags>(@"
select * from Tags 
where Name in @names
order by Count desc", new {names});

Here Dapper does the thinking, so you don't have to. Something similar is possible with LINQ to SQL, of course:

string[] names = new string[] {"ruby","rails","scruffy","rubyonrails"};
var tags = from tag in dataContext.Tags
           where names.Contains(tag.Name)
           orderby tag.Count descending
           select tag;

@Sam Saffron 2011-06-15 11:09:49

which happens to be what we use on this page, for the actual question asked (dapper)

@cs0815 2014-10-20 14:41:59

This falls over if names is long

@Bartosz X 2017-04-12 09:49:47

There is a nice, simple and tested way of doing that:

/* Create a table type to hold the string list: */
CREATE TYPE [String_List] AS TABLE ([Your_String_Element] varchar(100) PRIMARY KEY);

/* Create a procedure which takes this table as a parameter: */
CREATE PROCEDURE [dbo].[usp_ListCheck]
@String_List_In [String_List] READONLY
AS
SELECT a.*
FROM [dbo].[Tags] a
JOIN @String_List_In b ON a.[Name] = b.[Your_String_Element];

I started using this method to fix issues we had with Entity Framework (it was not robust enough for our application), so we decided to give Dapper (as used by Stack Overflow) a chance. Specifying your string list as a table with a PK column also improves your execution plans a lot. Here is a good article on how to pass a table into Dapper - all fast and CLEAN.

@Mark Brackett 2008-12-03 16:35:54

You can parameterize each value, so something like:

string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
string cmdText = "SELECT * FROM Tags WHERE Name IN ({0})";

string[] paramNames = tags.Select(
    (s, i) => "@tag" + i.ToString()).ToArray();

string inClause = string.Join(", ", paramNames);
using (SqlCommand cmd = new SqlCommand(string.Format(cmdText, inClause))) {
    for(int i = 0; i < paramNames.Length; i++) {
       cmd.Parameters.AddWithValue(paramNames[i], tags[i]);
    }
}

Which will give you:

cmd.CommandText = "SELECT * FROM Tags WHERE Name IN (@tag0, @tag1, @tag2, @tag3)"
cmd.Parameters["@tag0"] = "ruby"
cmd.Parameters["@tag1"] = "rails"
cmd.Parameters["@tag2"] = "scruffy"
cmd.Parameters["@tag3"] = "rubyonrails"

No, this is not open to SQL injection. The only text injected into the CommandText is not based on user input; it is based solely on the hardcoded "@tag" prefix and the index of an array. The index will always be an integer, is not user-generated, and is safe.

The user inputted values are still stuffed into parameters, so there is no vulnerability there.
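The same build-one-parameter-per-value pattern carries over to any client. A minimal sketch in Python against an in-memory SQLite database (placeholder syntax differs by driver: SQLite uses :name where SQL Server uses @name; the table contents are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT)")
conn.executemany("INSERT INTO Tags VALUES (?)",
                 [("ruby",), ("rails",), ("python",)])

tags = ["ruby", "rails", "scruffy", "rubyonrails"]
# One named parameter per value: :tag0, :tag1, ...
placeholders = ", ".join(f":tag{i}" for i in range(len(tags)))
sql = f"SELECT Name FROM Tags WHERE Name IN ({placeholders})"
params = {f"tag{i}": t for i, t in enumerate(tags)}
rows = sorted(r[0] for r in conn.execute(sql, params))
print(rows)  # ['rails', 'ruby']
```

Only the parameter names are interpolated into the SQL text; every value travels through the driver's binding mechanism, just as in the C# version above.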


Injection concerns aside, take care to note that constructing the command text to accommodate a variable number of parameters (as above) impedes SQL Server's ability to take advantage of cached queries. The net result is that you almost certainly lose the value of using parameters in the first place (as opposed to merely inserting the predicate strings into the SQL itself).

Not that cached query plans aren't valuable, but IMO this query isn't nearly complicated enough to see much benefit from it. While the compilation costs may approach (or even exceed) the execution costs, you're still talking milliseconds.

If you have enough RAM, I'd expect SQL Server would probably cache a plan for the common counts of parameters as well. I suppose you could always add five parameters, and let the unspecified tags be NULL - the query plan should be the same, but it seems pretty ugly to me and I'm not sure it'd be worth the micro-optimization (although, on Stack Overflow, it may very well be worth it).

Also, SQL Server 7 and later will auto-parameterize queries, so using parameters isn't really necessary from a performance standpoint - it is, however, critical from a security standpoint - especially with user inputted data like this.

@tvanfosson 2008-12-03 16:53:02

Basically the same as my answer to the "related" question and obviously the best solution since it is constructive and efficient rather than interpretive (much harder).

@Mark Cidade 2008-12-18 18:55:35

This is how LINQ to SQL does it, BTW

@Pure.Krome 2009-01-02 02:15:45

Isn't there a max number of Parameters? so if the user doesn't know how many tags, it might go over the max_number (around 200 or 255 params?). Secondly, why is using params better than just a dynamic sql with the values constructed on the fly (replace @Tag1 with the value, in the above example)?

@Ray 2009-02-04 23:27:47

@Pure: The whole point of this is to avoid SQL Injection, which you would be vulnerable to if you used dynamic SQL.

@Mark 2009-08-19 19:01:55

Injection concerns aside, take care to note that constructing the command text to accommodate a variable number of parameters (as above) impedes SQL Server's ability to take advantage of cached queries. The net result is that you almost certainly lose the value of using parameters in the first place (as opposed to merely inserting the predicate strings into the SQL itself).

@Mark Brackett 2010-02-11 12:17:14

@God of Data - Yes, I suppose if you need more than 2100 tags you'll need a different solution. But Basarb's could only reach 2100 if the average tag length was < 3 chars (since you need a delimiter as well).

@Nick Craver 2011-06-15 15:58:14

@Mark - that's only half true, as it would cache a plan for each version and even that may be fine (if not, why not optimize for an ad hoc workload?). For example in a paging scenario most of your queries will be using the page size number of parameters when populating things (an SO question list, for example).

@Ian Boyd 2012-06-19 21:00:36

i've read it four times, and i still have no idea what it's doing. QuotedStr() it is!

@Erik Hart 2014-01-11 12:54:56

Auto parameterization in SQL Server is by default only enabled for single parameter queries. Everything more complex is treated as an ad-hoc query. It is possible to force parameterization, which may give you problems elsewhere. So a parameterized query is still best.

@Erik Hart 2014-01-11 13:00:09

This is a good solution (insert a parameter placeholder for each IN value). However, SQL Server will reuse query plans by string equality, causing a new plan to be created for every different number of parameters. If the IN clause contains only a few, it's not bad. You may get a dozen query plans for a maximum of 12 values, but for a maximum of 1000 values, there may be up to 1000 query plans necessary. Some object-relational mappers use specific algorithms to split such queries into multiple, with recurring numbers of parameters to match existing query plans.

@bonCodigo 2014-06-27 13:20:31

Assume the tags are dynamic, e.g. a multi-value extended-select-mode list box. The user is allowed to choose one or more, so the number of tags varies. How can we define the number of tags to be passed in the SQL string? cmd.CommandText = "SELECT * FROM Tags WHERE Name IN (@tag0,@tag1,@tag2,@tag3, ....., @tagN)" where N is variable, based on user selection. What's the catch?

@Mark Brackett 2014-06-27 13:42:07

@bonCodigo - your selected values are in an array; you just loop over the array and add a parameter (suffixed with the index) for each one.

@bonCodigo 2014-06-27 13:47:53

The SQL query is in a static class as a static string, e.g. ..."WHERE TS.[SESSIONE] IN (@SessionList) AND ...". Array iteration is clear to me, and I have built a set of parameters based on your answer. However, connecting them to the above query is an issue, since the parameter is @SessionList whereas the array-created parameters are @Session1, @Session2... etc. Hm... Did I just miss out {0} to replace @SessionList?

@monojohnny 2014-10-31 15:37:33

This will also go wrong in the (admittedly unusual) case where the client DB has the option DECIMAL=COMMA - you will need to add a trailing space after every comma when generating the string to avoid this... ("1,5" means one-and-a-half, not "one then a five"; "1, 5" (comma-space) means "one then a five").

@Mark Brackett 2017-01-05 23:54:29

@monojohnny - I'd suspect that decimal = comma wouldn't be a problem, as the commas are delimiters between parameter names. Though it's trivial (and, arguably, better) to string.Join(", ") to make it more human readable...

@Suncat2000 2018-11-30 13:23:55

That's a C# solution, not SQL.

@Harshil Doshi 2020-03-24 12:43:18

any workaround in case of negative values?

@Derek Greer 2016-12-30 03:34:02

This is a reusable variation of the solution in Mark Brackett's excellent answer.

Extension Method:

public static class ParameterExtensions
{
    public static Tuple<string, SqlParameter[]> ToParameterTuple<T>(this IEnumerable<T> values)
    {
        var createName = new Func<int, string>(index => "@value" + index.ToString());
        var paramTuples = values.Select((value, index) =>
            new Tuple<string, SqlParameter>(createName(index), new SqlParameter(createName(index), value))).ToArray();
        var inClause = string.Join(",", paramTuples.Select(t => t.Item1));
        var parameters = paramTuples.Select(t => t.Item2).ToArray();
        return new Tuple<string, SqlParameter[]>(inClause, parameters);
    }
}

Usage:

string[] tags = {"ruby", "rails", "scruffy", "rubyonrails"};
var paramTuple = tags.ToParameterTuple();
var cmdText = $"SELECT * FROM Tags WHERE Name IN ({paramTuple.Item1})";

using (var cmd = new SqlCommand(cmdText))
{
    cmd.Parameters.AddRange(paramTuple.Item2);
}

@Metaphor 2013-04-06 02:39:54

Here is another alternative. Just pass a comma-delimited list as a string parameter to the stored procedure and:

CREATE PROCEDURE [dbo].[sp_myproc]
    @UnitList varchar(MAX) = '1,2,3'
AS
select column from table
where ph.UnitID in (select * from CsvToInt(@UnitList))

And the function:

CREATE Function [dbo].[CsvToInt] ( @Array varchar(MAX))
returns @IntTable table
    (IntValue int)
AS
begin
    declare @separator char(1)
    set @separator = ','
    declare @separator_position int
    declare @array_value varchar(MAX)

    set @array = @array + ','

    while patindex('%,%' , @array) <> 0
    begin
        select @separator_position = patindex('%,%' , @array)
        select @array_value = left(@array, @separator_position - 1)

        Insert @IntTable
        Values (Cast(@array_value as int))

        select @array = stuff(@array, 1, @separator_position, '')
    end
    return
end
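For reference, the loop's logic (append a trailing comma, then repeatedly peel off the value before the first separator) can be mirrored in client code. A hypothetical Python equivalent, for illustration only:

```python
def csv_to_int(array):
    """Mirror of the T-SQL loop: append a trailing comma, then
    repeatedly peel off the value before the first separator."""
    values = []
    array += ","                          # set @array = @array + ','
    while "," in array:                   # while patindex('%,%', @array) <> 0
        pos = array.index(",") + 1        # @separator_position (1-based)
        values.append(int(array[:pos - 1]))   # left(@array, pos - 1)
        array = array[pos:]               # stuff(@array, 1, pos, '')
    return values

print(csv_to_int("1,2,3"))  # [1, 2, 3]
```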

@Martin Smith 2015-11-07 14:57:20

In SQL Server 2016+ another possibility is to use the OPENJSON function.

This approach is blogged about in OPENJSON - one of best ways to select rows by list of ids.

A full worked example below:

CREATE TABLE dbo.Tags
(
     Name  VARCHAR(50),
     Count INT
);

INSERT INTO dbo.Tags
VALUES      ('VB',982), ('ruby',1306), ('rails',1478), ('scruffy',1), ('C#',1784);

GO

CREATE PROC dbo.SomeProc
@Tags NVARCHAR(MAX)
AS
SELECT T.*
FROM   dbo.Tags T
WHERE  T.Name IN (SELECT J.Value COLLATE Latin1_General_CI_AS
                  FROM   OPENJSON(CONCAT('[', @Tags, ']')) J)
ORDER BY T.Count DESC;

GO

EXEC dbo.SomeProc @Tags = '"ruby","rails","scruffy","rubyonrails"'

DROP TABLE dbo.Tags
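On the client side, the @Tags string is safest built with a real JSON serializer rather than by hand-quoting, so embedded quotes get escaped correctly. A sketch in Python (illustrative; the bracket-stripping matches the CONCAT('[', @Tags, ']') wrapping in the procedure above):

```python
import json

tags = ["ruby", "rails", "scruffy", "rubyonrails"]
# Serialize the whole list once; the server-side procedure re-wraps
# the string in '[' and ']', so strip the outer brackets here.
payload = json.dumps(tags)[1:-1]
print(payload)  # "ruby", "rails", "scruffy", "rubyonrails"
```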

@Erik Hart 2014-01-12 00:32:38

(Edit: if table-valued parameters are not available.) The best approach seems to be to split a large number of IN parameters into multiple queries of fixed lengths, so you have a number of known SQL statements with fixed parameter counts and no dummy/duplicate values, and no parsing of strings, XML, and the like.

Here's some code in C# I wrote on this topic:

public static T[][] SplitSqlValues<T>(IEnumerable<T> values)
{
    var sizes = new int[] { 1000, 500, 250, 125, 63, 32, 16, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
    int processed = 0;
    int currSizeIdx = sizes.Length - 1; /* start with last (smallest) */
    var splitLists = new List<T[]>();

    var valuesDistSort = values.Distinct().OrderBy(v => v).ToList(); /* remove redundant values, then sort */
    int totalValues = valuesDistSort.Count;

    while (totalValues > sizes[currSizeIdx] && currSizeIdx > 0)
        currSizeIdx--; /* bigger size, by array pos. */

    while (processed < totalValues)
    {
        while (totalValues - processed < sizes[currSizeIdx])
            currSizeIdx++; /* smaller size, by array pos. */

        var partList = new T[sizes[currSizeIdx]];
        valuesDistSort.CopyTo(processed, partList, 0, sizes[currSizeIdx]);
        splitLists.Add(partList);
        processed += sizes[currSizeIdx];
    }
    return splitLists.ToArray();
}

(You may have further ideas: e.g. omit the sorting, or use valuesDistSort.Skip(processed).Take(size[...]) instead of the list/array CopyTo.)

When inserting parameter variables, you create something like:

foreach (var partList in splitLists)
{
    /* here: question mark for param variable; use named/numbered params if required */
    string sql = "select * from Items where Id in ("
        + string.Join(",", partList.Select(p => "?"))
        + ")"; /* comma-separated ?, one for each partList entry */

    /* create command with sql string, set parameters, execute, merge results */
}

I've watched the SQL generated by the NHibernate object-relational mapper (when querying data to create objects from), and it looks best with multiple queries. In NHibernate, one can specify a batch size; if many object data rows have to be fetched, it tries to retrieve the number of rows equivalent to the batch size:

SELECT * FROM MyTable WHERE Id IN (@p1, @p2, @p3, ... , @p[batch-size])

instead of sending hundreds or thousands of

SELECT * FROM MyTable WHERE Id = @p1

When the number of remaining IDs is less than the batch size, but still more than one, it splits into smaller statements, but still of certain fixed lengths.

If you have a batch size of 100, and a query with 118 parameters, it would create 3 queries:

  • one with 100 parameters (batch-size),
  • then one with 12
  • and another one with 6,

but none with 118 or 18. This way, it restricts the possible SQL statements to likely known statements, preventing too many different, thus too many query plans, which fill the cache and in great parts never get reused. The above code does the same, but with lengths 1000, 500, 250, 125, 63, 32, 16, 10-to-1. Parameter lists with more than 1000 elements are also split, preventing a database error due to a size limit.
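The 118-parameter example can be sketched as a greedy walk down a ladder of allowed sizes. The exact ladder an ORM uses is an implementation detail; the one below is merely chosen to reproduce the 100/12/6 split described above:

```python
def split_batches(n, sizes=(100, 50, 25, 12, 6, 3, 2, 1)):
    """Greedily cover n parameters using only the allowed batch sizes,
    so every generated statement has a previously-seen parameter count."""
    batches = []
    while n > 0:
        size = next(s for s in sizes if s <= n)  # largest allowed size that fits
        batches.append(size)
        n -= size
    return batches

print(split_batches(118))  # [100, 12, 6]
```

Every query issued then has one of a small, fixed set of parameter counts, so its plan can be found in the cache.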

Anyway, it's best to have a database interface which sends parameterized SQL directly, without a separate Prepare statement and handle to call. Databases like SQL Server and Oracle remember SQL by string equality (the values change, but the binding parameters in the SQL text do not!) and reuse query plans, if available. No need for separate prepare statements, and no tedious maintenance of query handles in code! ADO.NET works like this, but it seems like Java still uses prepare/execute by handle (not sure).

I had my own question on this topic, originally suggesting to fill the IN clause with duplicates, but then preferring the NHibernate style statement split: Parameterized SQL - in / not in with fixed numbers of parameters, for query plan cache optimization?

This question is still interesting, even more than 5 years after being asked...

EDIT: I noted that IN queries with many values (like 250 or more) still tend to be slow, in the given case, on SQL Server. While I expected the DB to create a kind of temporary table internally and join against it, it seemed like it only repeated the single-value SELECT expression n times. Time was up to about 200 ms per query - even worse than joining the original ID-retrieval SELECT against the other, related tables. Also, there were some 10 to 15 CPU units in SQL Server Profiler, something unusual for repeated execution of the same parameterized queries, suggesting that new query plans were created on repeated calls. Maybe ad-hoc individual queries are no worse after all. I would have to compare these queries to non-split queries with changing sizes for a final conclusion, but for now, it seems like long IN clauses should be avoided anyway.

@spencer7593 2009-05-29 23:18:15

The original question was "How do I parameterize a query ..."

Let me state right here, that this is not an answer to the original question. There are already some demonstrations of that in other good answers.

With that said, go ahead and flag this answer, downvote it, mark it as not an answer... do whatever you believe is right.

See the answer from Mark Brackett for the preferred answer that I (and 231 others) upvoted. The approach given in his answer allows 1) for effective use of bind variables, and 2) for predicates that are sargable.

Selected answer

What I want to address here is the approach given in Joel Spolsky's answer, the answer "selected" as the right answer.

Joel Spolsky's approach is clever. And it works reasonably well: it's going to exhibit predictable behavior and predictable performance, given "normal" values and the normative edge cases, such as NULL and the empty string. And it may be sufficient for a particular application.

But in terms of generalizing this approach, let's also consider the more obscure corner cases, like when the Name column contains a wildcard character (as recognized by the LIKE predicate). The wildcard character I see most commonly used is % (a percent sign). So let's deal with that here now, and later go on to other cases.

Some problems with % character

Consider a Name value of 'pe%ter'. (For the examples here, I use a literal string value in place of the column name.) A row with a Name value of 'pe%ter' would be returned by a query of the form:

select ...
 where '|peanut|butter|' like '%|' + 'pe%ter' + '|%'

But that same row will not be returned if the order of the search terms is reversed:

select ...
 where '|butter|peanut|' like '%|' + 'pe%ter' + '|%'

The behavior we observe is kind of odd. Changing the order of the search terms in the list changes the result set.

It almost goes without saying that we might not want pe%ter to match peanut butter, no matter how much he likes it.

Obscure corner case

(Yes, I will agree that this is an obscure case. Probably one that is not likely to be tested. We wouldn't expect a wildcard in a column value. We may assume that the application prevents such a value from being stored. But in my experience, I've rarely seen a database constraint that specifically disallowed characters or patterns that would be considered wildcards on the right side of a LIKE comparison operator.)

Patching a hole

One approach to patching this hole is to escape the % wildcard character. (For anyone not familiar with the escape clause on the operator, here's a link to the SQL Server documentation.)

select ...
 where '|peanut|butter|'
  like '%|' + 'pe\%ter' + '|%' escape '\'

Now we can match the literal %. Of course, when we have a column name, we're going to need to dynamically escape the wildcard. We can use the REPLACE function to find occurrences of the % character and insert a backslash character in front of each one, like this:

select ...
 where '|pe%ter|'
  like '%|' + REPLACE( 'pe%ter' ,'%','\%') + '|%' escape '\'

So that solves the problem with the % wildcard. Almost.

Escape the escape

We recognize that our solution has introduced another problem: the escape character. We see that we're also going to need to escape any occurrences of the escape character itself. This time, we use the ! as the escape character:

select ...
 where '|pe%t!r|'
  like '%|' + REPLACE(REPLACE( 'pe%t!r' ,'!','!!'),'%','!%') + '|%' escape '!'

The underscore too

Now that we're on a roll, we can add another REPLACE to handle the underscore wildcard. And just for fun, this time, we'll use $ as the escape character.

select ...
 where '|p_%t!r|'
  like '%|' + REPLACE(REPLACE(REPLACE( 'p_%t!r' ,'$','$$'),'%','$%'),'_','$_') + '|%' escape '$'

I prefer this approach to escaping because it works in Oracle and MySQL as well as SQL Server. (I usually use the \ backslash as the escape character, since that's the character we use in regular expressions. But why be constrained by convention!)
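The nested REPLACE calls boil down to one rule: escape the escape character itself first, then each wildcard. A client-side sketch of the same ordering (the function name and the choice of $ are illustrative, matching the example above):

```python
def escape_like(term, esc="$"):
    """Client-side version of the nested REPLACEs: escape the escape
    character itself first, then the LIKE wildcards % and _."""
    for ch in (esc, "%", "_"):
        term = term.replace(ch, esc + ch)
    return term

print(escape_like("p_%t!r"))  # p$_$%t!r
```

Reversing the order would double-escape the escape characters that the wildcard step just inserted, which is exactly why the innermost REPLACE in the SQL handles the escape character.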

Those pesky brackets

SQL Server also allows for wildcard characters to be treated as literals by enclosing them in brackets []. So we're not done fixing yet, at least for SQL Server. Since pairs of brackets have special meaning, we'll need to escape those as well. If we manage to properly escape the brackets, then at least we won't have to bother with the hyphen - and the caret ^ within the brackets. And we can leave any % and _ characters inside the brackets escaped, since we'll have basically disabled the special meaning of the brackets.

Finding matching pairs of brackets shouldn't be that hard. It's a little more difficult than handling the occurrences of singleton % and _. (Note that it's not sufficient to just escape all occurrences of brackets, because a singleton bracket is considered to be a literal, and doesn't need to be escaped. The logic is getting a little fuzzier than I can handle without running more test cases.)

Inline expression gets messy

That inline expression in the SQL is getting longer and uglier. We can probably make it work, but heaven help the poor soul who comes behind and has to decipher it. As much of a fan as I am of inline expressions, I'm inclined not to use one here, mainly because I don't want to have to leave a comment explaining the reason for the mess, and apologizing for it.

A function where?

Okay, so, if we don't handle that as an inline expression in the SQL, the closest alternative we have is a user-defined function. And we know that won't speed things up any (unless we can define an index on it, like we could with Oracle). If we've got to create a function, we might do better to do that in the code that calls the SQL statement.

And that function may have some differences in behavior, dependent on the DBMS and version. (A shout out to all you Java developers so keen on being able to use any database engine interchangeably.)
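If we do push the escaping into the calling code, it's a small helper. Here's a minimal sketch in Python; the choice of $ as the escape character follows the example above, and `escape_like` is a made-up name for illustration:

```python
def escape_like(value, escape_char="$"):
    """Escape LIKE wildcards so `value` is matched literally.

    The escape character itself must be escaped first, then the
    wildcard characters. '[' is included because SQL Server also
    treats bracketed sets as wildcards.
    """
    for ch in (escape_char, "%", "_", "["):
        value = value.replace(ch, escape_char + ch)
    return value

print(escape_like("p_%t!r"))  # -> p$_$%t!r
```

The escaped value is then safe to embed between the delimiters in the LIKE pattern, with `ESCAPE '$'` on the SQL side.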

Domain knowledge

We may have specialized knowledge of the domain for the column (that is, the set of allowable values enforced for the column). We may know a priori that the values stored in the column will never contain a percent sign, an underscore, or bracket pairs. In that case, we just include a quick comment that those cases are covered.

The values stored in the column may allow for % or _ characters, but a constraint may require those values to be escaped, perhaps using a defined character, such that the values are LIKE comparison "safe". Again, a quick comment about the allowed set of values, and in particular which character is used as an escape character, and go with Joel Spolsky's approach.

But, absent the specialized knowledge and a guarantee, it's important for us to at least consider handling those obscure corner cases, and consider whether the behavior is reasonable and "per the specification".

Other issues recapitulated

I believe others have already sufficiently pointed out some of the other commonly considered areas of concern:

  • SQL injection (taking what would appear to be user-supplied information and including that in the SQL text rather than supplying it through bind variables). Using bind variables isn't required; it's just one convenient approach to thwart SQL injection. There are other ways to deal with it.

  • optimizer plan using index scan rather than index seeks, possible need for an expression or function for escaping wildcards (possible index on expression or function)

  • using literal values in place of bind variables impacts scalability


I like Joel Spolsky's approach. It's clever. And it works.

But as soon as I saw it, I immediately saw a potential problem with it, and it's not my nature to let it slide. I don't mean to be critical of the efforts of others. I know many developers take their work very personally, because they invest so much into it and they care so much about it. So please understand, this is not a personal attack. What I'm identifying here is the type of problem that crops up in production rather than testing.

Yes, I've gone far afield from the original question. But where else to leave this note concerning what I consider to be an important issue with the "selected" answer for a question?

@Luis Siquot 2012-04-19 14:18:31

Can you please let us know if you use or like parameterized queries? In this particular case, is it correct to jump over the rule of 'use parameterized queries' and sanitize with the original language? Thanks a lot

@spencer7593 2012-04-23 20:55:37

@Luis: yes, I prefer using bind variables in SQL statements, and will only avoid bind variables when using them causes a performance problem. My normative pattern for the original problem would be to dynamically create the SQL statement with the required number of placeholders in the IN list, and then bind each value to one of the placeholders. See the answer from Mark Brackett, which is the answer that I (and 231 others) upvoted.
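That normative pattern is easy to sketch. Here's an illustrative version in Python, with sqlite3 standing in for SQL Server (an assumption for runnability; with ADO.NET the idea is identical, only the placeholder syntax differs):

```python
import sqlite3

tags = ["ruby", "rails", "scruffy", "rubyonrails"]

# In-memory sample data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT, Count INTEGER)")
conn.executemany("INSERT INTO Tags VALUES (?, ?)",
                 [("ruby", 10), ("rails", 7), ("python", 3)])

# One placeholder per value; the values never touch the SQL text.
placeholders = ",".join("?" for _ in tags)  # "?,?,?,?"
sql = "SELECT Name FROM Tags WHERE Name IN ({0})".format(placeholders)
rows = conn.execute(sql, tags).fetchall()
print(rows)  # [('ruby',), ('rails',)]
```

Only the placeholder list is generated dynamically; every value is still bound, so there's no injection surface and no hand-rolled quoting.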

@Eli Ekstein 2015-06-11 15:16:32

I'd approach this by default with passing a table valued function (that returns a table from a string) to the IN condition.

Here is the code for the UDF (I got it from Stack Overflow somewhere; I can't find the source right now)

CREATE FUNCTION [dbo].[Split] (@sep char(1), @s varchar(8000))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
      SELECT 1, 1, CHARINDEX(@sep, @s)
      UNION ALL
      SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
      FROM Pieces
      WHERE stop > 0
    )
    SELECT pn,
      SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop-start ELSE 512 END) AS s
    FROM Pieces
)

Once you have this, your code would be as simple as this:

select * from Tags 
where Name in (select s from dbo.split(';','ruby;rails;scruffy;rubyonrails'))
order by Count desc

Unless you have a ridiculously long string, this should work well with the table index.

If needed you can insert it into a temp table, index it, then run a join...

@Kent Fredric 2008-12-03 17:04:00

This is possibly a half-nasty way of doing it; I used it once, and it was rather effective.

Depending on your goals it might be of use.

  1. Create a temp table with one column.
  2. INSERT each look-up value into that column.
  3. Instead of using an IN, you can then just use your standard JOIN rules. ( Flexibility++ )

This has a bit of added flexibility in what you can do, but it's more suited for situations where you have a large table to query, with good indexing, and you want to use the parametrized list more than once. Saves having to execute it twice and have all the sanitation done manually.

I never got around to profiling exactly how fast it was, but in my situation it was needed.
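For illustration, the three steps above could be sketched like this in Python, again with sqlite3 standing in for SQL Server (an assumption; there you'd use a #temp table or a table variable):

```python
import sqlite3

# In-memory sample data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT, Count INTEGER)")
conn.executemany("INSERT INTO Tags VALUES (?, ?)",
                 [("ruby", 10), ("rails", 7), ("python", 3)])

# 1. Create a temp table with one column.
conn.execute("CREATE TEMP TABLE LookupValues (Name TEXT)")

# 2. INSERT each look-up value (still parameterized, so no hand sanitizing).
conn.executemany("INSERT INTO LookupValues VALUES (?)",
                 [("ruby",), ("rails",), ("scruffy",)])

# 3. Use standard JOIN rules instead of IN.
rows = conn.execute(
    "SELECT t.Name, t.Count FROM Tags t "
    "JOIN LookupValues v ON v.Name = t.Name").fetchall()
print(rows)  # [('ruby', 10), ('rails', 7)]
```

The temp table can also be indexed and reused across several queries in the same session, which is where this approach pays off.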

@SQL Police 2015-06-10 06:17:09

This is not nasty at all! Even more, it is IMHO a very clean way. And if you look into the execution plan, you see that it is the same as the IN clause. Instead of a temp table, you could also create a fixed table with indexes, where you store the parameters together with the SESSIONID.

@Jodrell 2012-08-15 16:32:20

If you've got SQL Server 2008 or later I'd use a Table Valued Parameter.

If you're unlucky enough to be stuck on SQL Server 2005 you could add a CLR function like this,

[SqlFunction(
    IsDeterministic = true,
    SystemDataAccess = SystemDataAccessKind.None,
    IsPrecise = true,
    FillRowMethodName = "SplitFillRow",
    TableDefinition = "s NVARCHAR(MAX)")]
public static IEnumerable Split(SqlChars separator, SqlString s)
{
    if (s.IsNull)
        return new string[0];

    return s.ToString().Split(separator.Buffer);
}

public static void SplitFillRow(object row, out SqlString s)
{
    s = new SqlString(row.ToString());
}

Which you could use like this,

declare @desiredTags nvarchar(MAX);
set @desiredTags = 'ruby,rails,scruffy,rubyonrails';

select * from Tags
where Name in (select s from [dbo].[Split](',', @desiredTags))
order by Count desc

@George Stocker 2015-03-12 18:11:18

I use a more concise version of the top voted answer:

List<SqlParameter> parameters = tags.Select((s, i) => new SqlParameter("@tag" + i.ToString(), SqlDbType.NVarChar, 50) { Value = s }).ToList();

var whereCondition = string.Format("tags in ({0})", String.Join(",",parameters.Select(s => s.ParameterName)));

It does loop through the tag parameters twice; but that doesn't matter most of the time (it won't be your bottleneck; if it is, unroll the loop).

If you're really interested in performance and don't want to iterate through the loop twice, here's a less beautiful version:

var parameters = new List<SqlParameter>();
var paramNames = new List<string>();
for (var i = 0; i < tags.Length; i++)
{
    var paramName = "@tag" + i;
    paramNames.Add(paramName);

    // Include size and set value explicitly (not AddWithValue),
    // because SQL Server may use an implicit conversion if it doesn't know
    // the actual size.
    var p = new SqlParameter(paramName, SqlDbType.NVarChar, 50) { Value = tags[i] };
    parameters.Add(p);
}

var inClause = string.Join(",", paramNames);

@brykneval 2014-01-01 18:32:02

Use a dynamic query. The front end only needs to generate the required format:

DECLARE @invalue VARCHAR(100)
SELECT @invalue = '''Bishnu'',''Gautam'''

DECLARE @dynamicSQL VARCHAR(MAX)
SELECT @dynamicSQL = 'SELECT * FROM #temp WHERE [name] IN (' + @invalue + ')'
EXEC (@dynamicSQL)

SQL Fiddle

@Martin Smith 2015-11-07 15:24:02

This is not safe against SQL injection.

@Brian Knoblauch 2016-07-01 20:16:15

Please don't do this. While this may sometimes be done in a not totally unreasonable way, it truly is an accident (incident) waiting to happen if done in the wrong place with the wrong data (which is usually the case).

@mangeshkt 2012-07-26 05:39:30

Use the following stored procedure. It uses a custom split function, which can be found here.

CREATE PROCEDURE GetSearchMachingTagNames
    @PipeDelimitedTagNames varchar(max),
    @delimiter char(1)
AS
BEGIN
    SELECT * FROM Tags
    WHERE Name IN (SELECT data FROM [dbo].[Split](@PipeDelimitedTagNames, @delimiter))
END

@ASP.Net Developer 2015-03-18 07:43:22

CREATE FUNCTION [dbo].[ConvertStringToList]
    (@str VARCHAR (MAX), @delimeter CHAR (1))
RETURNS @result TABLE ([ID] INT NULL)
AS
BEGIN
    DECLARE @x XML
    SET @x = '<t>' + REPLACE(@str, @delimeter, '</t><t>') + '</t>'

    INSERT INTO @result
    SELECT DISTINCT x.i.value('.', 'int') AS token
    FROM @x.nodes('//t') x(i)
    ORDER BY 1

    RETURN
END

select * from table where id in (SELECT ID FROM [dbo].[ConvertStringToList](YOUR comma separated string, ','))

@Peter Mortensen 2015-05-18 19:23:01

Some explanation would be nice.

@David Robbins 2008-12-03 17:11:52

We have a function that creates a table variable that you can join to:

ALTER FUNCTION [dbo].[Fn_sqllist_to_table](@list  AS VARCHAR(8000),
                                           @delim AS VARCHAR(10))
RETURNS @listTable TABLE(
  Position INT,
  Value    VARCHAR(8000))
AS
  BEGIN
      DECLARE @myPos INT

      SET @myPos = 1

      WHILE Charindex(@delim, @list) > 0
        BEGIN
            INSERT INTO @listTable
            VALUES     (@myPos, LEFT(@list, Charindex(@delim, @list) - 1))

            SET @myPos = @myPos + 1

            IF Charindex(@delim, @list) = Len(@list)
              INSERT INTO @listTable
              VALUES     (@myPos, '')

            SET @list = RIGHT(@list, Len(@list) - Charindex(@delim, @list))
        END

      IF Len(@list) > 0
        INSERT INTO @listTable
        VALUES     (@myPos, @list)

      RETURN
  END


DECLARE @Name varchar(8000) = null -- parameter for search values

select * from Tags
where Name in (SELECT Value FROM dbo.Fn_sqllist_to_table(@Name, ','))
order by Count desc

@Mark Brackett 2008-12-03 16:53:19

For SQL Server 2008, you can use a table valued parameter. It's a bit of work, but it is arguably cleaner than my other method.

First, you have to create a type

CREATE TYPE dbo.TagNamesTableType AS TABLE ( Name nvarchar(50) )

Then, your ADO.NET code looks like this:

string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
cmd.CommandText = "SELECT Tags.* FROM Tags JOIN @tagNames as P ON Tags.Name = P.Name";

// value must be IEnumerable<SqlDataRecord>
cmd.Parameters.AddWithValue("@tagNames", tags.AsSqlDataRecord("Name")).SqlDbType = SqlDbType.Structured;
cmd.Parameters["@tagNames"].TypeName = "dbo.TagNamesTableType";

// Extension method for converting IEnumerable<string> to IEnumerable<SqlDataRecord>
public static IEnumerable<SqlDataRecord> AsSqlDataRecord(this IEnumerable<string> values, string columnName) {
    if (values == null || !values.Any()) return null; // Annoying, but SqlClient wants null instead of 0 rows
    var firstRecord = values.First();
    var metadata = SqlMetaData.InferFromValue(firstRecord, columnName);
    return values.Select(v =>
    {
        var r = new SqlDataRecord(metadata);
        r.SetString(0, v);
        return r;
    });
}

@Mark 2009-08-19 19:03:33

You can't [easily] use TVPs with LINQ to SQL, so you need to fall back on the good old SqlCommand object. I'm having to do exactly this right now to get around LINQ to SQL's lousy round-trip update/insert habit.

@Jeff Atwood 2011-04-04 05:19:37

we tested this and table valued parameters are DOG slow. It is literally faster to execute 5 queries than it is to do one TVP.

@Hash 2011-07-18 00:27:56

Any idea how to prepare this statement? I get this error when I call cmd.Prepare(): "Prepare method requires all variable length parameters to have an explicitly set non-zero length."

@Nick Chammas 2011-10-13 19:47:49

@JeffAtwood - Have you tried reshuffling the query to something like SELECT * FROM tags WHERE Name IN (SELECT name FROM @tvp)? In theory, this really should be the fastest approach. You can use relevant indexes (e.g. an index on tag name that INCLUDEs count would be ideal), and SQL Server should be doing a few seeks to grab all the tags and their counts. What does the plan look like?

@Fredrik Johansson 2013-05-02 13:49:48

I've also tested this and it is FAST AS LIGHTNING (compared to constructing a large IN string). I had some problems setting the parameter though since I was constantly getting "Failed to convert parameter value from a Int32[] to a IEnumerable`1.". Anyway, solved that and here's a sample I made

@Mark Brackett 2013-05-02 18:17:27

@FredrikJohansson - Out of 130 upvotes, you may be the only run that's actually tried to run this! I made a mistake reading the docs, and you actually need an IEnumerable<SqlDataRecord>, not just any IEnumerable. Code has been updated.

@Fredrik Johansson 2013-05-03 06:57:52

@MarkBrackett Great with an update! Actually this code really saved the day for me since I'm querying a Lucene search index and it sometimes returns more than 50,000 or so hits that need to be double-checked against SQL Server - so I create an array of int[] (document/SQL keys) and then the code above comes in. The whole op now takes less than 200 ms :)

@Lucero 2013-10-18 10:15:47

@Keith, there is no practical limit when using TVPs - that's one of the good reasons to use them.

@jjxtra 2014-08-26 22:49:34

Was talking to a db admin today and he suggested string split over TVP. I tried it out and the split was indeed faster. Granted I'm passing in large strings that split to over 10,000 values, but it still surprised me. Looking at the raw sql generated, TVP simply generates SQL that manually inserts one row at a time into a table parameter, so there can end up being a lot of query code to parse.

@Martin Smith 2015-11-07 15:26:36

@JeffAtwood - Regarding performance create a primary key on the TVP so it uses an index. If it is likely to have many rows using OPTION (RECOMPILE) might also help.

@Doug 2018-03-28 17:00:32

I've been using this code for years before encountering this problem: Using the first record to set the SqlMetaData gives you a four-character string (see SqlMetaData.cs source code ). So the other three tags don't match anything (or match only "ruby", "rail", and "scru"). Anyone else seeing this behavior?

@Rich 2019-05-17 10:47:01

I just got tripped up by the bug @Doug mentioned in the sample code: the metadata MaxLength limits the compared characters to 4 in the case of first word "ruby." The minimal fix was to use the longest value to set the data type: var firstRecord = values.OrderByDescending(v=> v?.Length ?? 0).First();

@Doug 2019-05-17 15:37:40

@Rich - I ended up using metadata = New SqlMetaData(columnName, SqlDbType.NVarChar, -1) for strings which sets the type to nvarchar(max). Not sure if this affects performance or not.

@Jason Kleban 2012-08-10 14:29:44

Here's a cross-post to a solution to the same problem. More robust than reserved delimiters - includes escaping and nested arrays, and understands NULLs and empty arrays.

C# & T-SQL string[] Pack/Unpack utility functions

You can then join to the table-valued function.

@Darek 2013-06-03 22:54:08

Here is another answer to this problem.

(new version posted on 6/4/13).

private static DataSet GetDataSet(SqlConnectionStringBuilder scsb, string strSql, params object[] pars)
{
    var ds = new DataSet();
    using (var sqlConn = new SqlConnection(scsb.ConnectionString))
    {
        var sqlParameters = new List<SqlParameter>();
        var replacementStrings = new Dictionary<string, string>();
        if (pars != null)
        {
            for (int i = 0; i < pars.Length; i++)
            {
                if (pars[i] is IEnumerable<object>)
                {
                    List<object> enumerable = (pars[i] as IEnumerable<object>).ToList();
                    replacementStrings.Add("@" + i, String.Join(",", enumerable.Select((value, pos) => String.Format("@_{0}_{1}", i, pos))));
                    sqlParameters.AddRange(enumerable.Select((value, pos) => new SqlParameter(String.Format("@_{0}_{1}", i, pos), value ?? DBNull.Value)).ToArray());
                }
                else
                {
                    sqlParameters.Add(new SqlParameter(String.Format("@{0}", i), pars[i] ?? DBNull.Value));
                }
            }
        }
        strSql = replacementStrings.Aggregate(strSql, (current, replacementString) => current.Replace(replacementString.Key, replacementString.Value));
        using (var sqlCommand = new SqlCommand(strSql, sqlConn))
        {
            if (pars != null)
                sqlCommand.Parameters.AddRange(sqlParameters.ToArray());
            else
                //Fail-safe, just in case a user intends to pass a single null parameter
                sqlCommand.Parameters.Add(new SqlParameter("@0", DBNull.Value));
            using (var sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                sqlDataAdapter.Fill(ds);
        }
    }
    return ds;
}


@Rockfish 2012-05-29 23:49:39

Here's a technique that recreates a local table to be used in a query string. Doing it this way eliminates all parsing problems.

The string can be built in any language. In this example I used SQL since that was the original problem I was trying to solve. I needed a clean way to pass in table data on the fly in a string to be executed later.

Using a user-defined type is optional. The type only needs to be created once, and that can be done ahead of time. Otherwise just add a full table type to the declaration in the string.

The general pattern is easy to extend and can be used for passing more complex tables.

-- Create a user defined type for the list.
CREATE TYPE [dbo].[StringList] AS TABLE(
    [StringValue] [nvarchar](max) NOT NULL
)

-- Create a sample list using the list table type.
DECLARE @list [dbo].[StringList]; 
INSERT INTO @list VALUES ('one'), ('two'), ('three'), ('four')

-- Build a string in which we recreate the list so we can pass it to exec
-- This can be done in any language since we're just building a string.
DECLARE @str nvarchar(max);
SET @str = 'DECLARE @list [dbo].[StringList]; INSERT INTO @list VALUES '

-- Add all the values we want to the string. This would be a loop in C++.
SELECT @str = @str + '(''' + StringValue + '''),' FROM @list

-- Remove the trailing comma so the query is valid sql.
SET @str = substring(@str, 1, len(@str)-1)

-- Add a select to test the string.
SET @str = @str + '; SELECT * FROM @list;'

-- Execute the string and see that we've passed the table correctly.
EXEC(@str)

@eulerfx 2008-12-03 16:30:13

I would pass a table type parameter (since it's SQL Server 2008), and do a where exists, or inner join. You may also use XML, using sp_xml_preparedocument, and then even index that temporary table.

@crokusek 2011-12-13 23:52:06

Ph.E's answer has an example building temp table (from csv).

@Peter Meyer 2008-12-19 05:40:15

I heard Jeff/Joel talk about this on the podcast today (episode 34, 2008-12-16 (MP3, 31 MB), 1 h 03 min 38 secs - 1 h 06 min 45 secs), and I thought I recalled Stack Overflow was using LINQ to SQL, but maybe it was ditched. Here's the same thing in LINQ to SQL.

var inValues = new [] { "ruby","rails","scruffy","rubyonrails" };

var results = from tag in Tags
              where inValues.Contains(tag.Name)
              select tag;

That's it. And, yes, LINQ already looks backwards enough, but the Contains clause seems extra backwards to me. When I had to do a similar query for a project at work, I naturally tried to do this the wrong way by doing a join between the local array and the SQL Server table, figuring the LINQ to SQL translator would be smart enough to handle the translation somehow. It didn't, but it did provide an error message that was descriptive and pointed me towards using Contains.

Anyway, if you run this in the highly recommended LINQPad, and run this query, you can view the actual SQL that the SQL LINQ provider generated. It'll show you each of the values getting parameterized into an IN clause.

@ArtOfCoding 2010-02-12 19:22:03

Another possible solution is instead of passing a variable number of arguments to a stored procedure, pass a single string containing the names you're after, but make them unique by surrounding them with '<>'. Then use PATINDEX to find the names:

SELECT *
FROM Tags 
WHERE PATINDEX('%<' + Name + '>%','<jo>,<john>,<scruffy>,<rubyonrails>') > 0
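As a sketch of the same delimiter-matching idea, here it is in Python with sqlite3's instr() standing in for PATINDEX (an assumption for runnability; the table and data are made up for illustration):

```python
import sqlite3

# In-memory sample data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (Name TEXT)")
conn.executemany("INSERT INTO Tags VALUES (?)",
                 [("jo",), ("john",), ("ruby",)])

# Surrounding each name with '<>' makes matches unambiguous:
# '<jo>' cannot accidentally match inside '<john>'.
names = "<jo>,<john>,<scruffy>,<rubyonrails>"
rows = conn.execute(
    "SELECT Name FROM Tags WHERE instr(?, '<' || Name || '>') > 0",
    (names,)).fetchall()
print(rows)  # [('jo',), ('john',)]
```

Note that the single string of names is passed as one ordinary bind parameter, so this variant stays parameterized regardless of how many names it carries.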
