By Peter Mortensen

2009-10-05 17:52:54 8 Comments

How do I iterate through each line of a text file with Bash?

With this script:

echo "Start!"
for p in (peptides.txt)
    echo "${p}"

I get this output on the screen:

./ line 3: syntax error near unexpected token `('
./ line 3: `for p in (peptides.txt)'

(Later I want to do something more complicated with $p than just output to the screen.)

The environment variable SHELL is (from env):


/bin/bash --version output:

GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

cat /proc/version output:

Linux version ([email protected]) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006

The file peptides.txt contains:



@codeforester 2017-01-14 03:30:41

A few more things not covered by other answers:

Reading from a delimited file

# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
  # process the fields
  # if the line has fewer than three fields, the missing variables will be set to an empty string
  # if the line has more than three fields, `field3` will get the rest of the line: the third field plus the extra delimiter(s) and fields
done < input.txt
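For a concrete feel of the field-count behavior described in the comments, here is a self-contained sketch using a here-document with made-up data instead of input.txt:

```shell
# Three fields expected; line 2 has four, line 3 has only two.
while IFS=: read -r f1 f2 f3; do
  printf '1=[%s] 2=[%s] 3=[%s]\n' "$f1" "$f2" "$f3"
done <<'EOF'
a:b:c
a:b:c:d
a:b
EOF
# prints:
# 1=[a] 2=[b] 3=[c]
# 1=[a] 2=[b] 3=[c:d]
# 1=[a] 2=[b] 3=[]
```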

Reading from the output of another command, using process substitution

while read -r line; do
  # process the line
done < <(command ...)

This approach is better than command ... | while read -r line; do ... because the while loop here runs in the current shell rather than a subshell as in the case of the latter. See the related post A variable modified inside a while loop is not remembered.
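A minimal demonstration of that subshell difference, with placeholder data generated by printf:

```shell
count=0
printf '%s\n' a b c | while read -r line; do count=$((count + 1)); done
echo "pipe: count=$count"        # still 0: the loop ran in a subshell

count=0
while read -r line; do count=$((count + 1)); done < <(printf '%s\n' a b c)
echo "procsub: count=$count"     # 3: the loop ran in the current shell
```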

Reading from a null delimited input, for example find ... -print0

while read -r -d '' line; do
  # logic
  # use a second 'read ... <<< "$line"' if we need to tokenize the line
done < <(find /path/to/dir -print0)

Related read: BashFAQ/020 - How can I find and safely handle file names containing newlines, spaces or both?
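A self-contained sketch of the null-delimited pattern, using a throwaway temp directory holding deliberately awkward file names:

```shell
dir=$(mktemp -d)
touch "$dir/plain" "$dir/with space" "$dir/with
newline"                          # a name containing a real newline
count=0
while IFS= read -r -d '' f; do    # -d '' splits on NUL, not newline
  count=$((count + 1))
done < <(find "$dir" -type f -print0)
echo "found $count files"         # 3, each name intact
rm -rf "$dir"
```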

Reading from more than one file at a time

while read -u 3 -r line1 && read -u 4 -r line2; do
  # process the lines
  # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt

Based on @chepner's answer here:

-u is a bash extension. For POSIX compatibility, each call would look something like read -r X <&3.
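A runnable sketch of the two-file pattern, feeding both descriptors from process substitutions instead of real files:

```shell
# The loop pairs up lines and stops at EOF of the shorter input, because of the &&.
while read -u 3 -r a && read -u 4 -r b; do
  printf '%s-%s\n' "$a" "$b"
done 3< <(printf '%s\n' 1 2 3) 4< <(printf '%s\n' x y)
# prints 1-x then 2-y; the third line of the first input is never read
```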

Reading a whole file into an array (Bash versions earlier than 4)

my_array=()
while read -r line; do
  my_array+=("$line")
done < my_file

If the file ends with an incomplete line (newline missing at the end), then:

my_array=()
while read -r line || [[ $line ]]; do
  my_array+=("$line")
done < my_file

Reading a whole file into an array (Bash versions 4x and later)

readarray -t my_array < my_file

or

mapfile -t my_array < my_file

And then

for line in "${my_array[@]}"; do
  # process the lines
done

@masterxilo 2019-03-07 14:00:26

note that instead of command < input_filename.txt you can always do input_generating_command | command or command < <(input_generating_command)

@Bruno De Fraine 2009-10-05 18:00:20

One way to do it is:

while read p; do
  echo "$p"
done <peptides.txt

As pointed out in the comments, this has the side effects of trimming leading whitespace, interpreting backslash sequences, and skipping the last line if it's missing a terminating linefeed. If these are concerns, you can do:

while IFS="" read -r p || [ -n "$p" ]; do
  printf '%s\n' "$p"
done < peptides.txt
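To see why the || [ -n "$p" ] test matters, here is a sketch with a temp file (standing in for peptides.txt) whose last line has no terminating newline:

```shell
tmp=$(mktemp)
printf 'first\nsecond' > "$tmp"   # note: no newline after the last line
count=0
while IFS="" read -r p || [ -n "$p" ]; do
  count=$((count + 1))
  last=$p
done < "$tmp"
echo "$count lines, last line: $last"   # 2 lines, last line: second
rm -f "$tmp"
```

Without the extra test, read fails on the final unterminated line and the loop sees only one line.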

Exceptionally, if the loop body may read from standard input, you can open the file using a different file descriptor:

while read -u 10 p; do
  echo "$p"
done 10<peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).
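A sketch of the situation this solves: a command in the loop body (here cat stands in for something like ssh that slurps standard input) would otherwise swallow the loop's own input:

```shell
tmp=$(mktemp)
printf '%s\n' alpha beta > "$tmp"
count=0
while read -u 10 -r p; do
  count=$((count + 1))
  cat > /dev/null   # stand-in for a command that consumes stdin
done 10< "$tmp" < /dev/null
echo "iterations: $count"   # 2: the loop input on fd 10 is untouched
rm -f "$tmp"
```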

@Peter Mortensen 2009-10-05 18:16:05

How should I interpret the last line? File peptides.txt is redirected to standard input and somehow to the whole of the while block?

@Warren Young 2009-10-05 18:30:07

"Slurp peptides.txt into this while loop, so the 'read' command has something to consume." My "cat" method is similar, sending the output of a command into the while block for consumption by 'read', too, only it launches another program to get the work done.

@Karl Katzke 2013-07-30 16:27:28

This didn't work for me. The second ranked answer, which used cat and a pipe, did work for me.

@xastor 2013-11-07 07:48:39

This method seems to skip the last line of a file.

@Dss 2014-01-14 16:53:10

Can this be done in reverse starting at the bottom of the file?

@Bruno De Fraine 2014-01-16 09:28:11

@Dss Then I would use a solution based on cat but replace cat by tac.

@Dss 2014-01-16 14:58:26

@BrunoDeFraine I've tried that but tac seems to make each space a new line. I need the full line delimited by the newline char. maybe I'm doing it wrong.

@Dss 2014-01-16 16:38:35

@BrunoDeFraine Ok I found this: ..change cat to tac and it works. Thanks!

@Bruno De Fraine 2014-01-16 19:25:48

@Dss I meant the solution from Warren Young ; just replace cat by tac and you should read the lines in reverse.
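A small sketch of that suggestion (tac is from GNU coreutils; on BSD/macOS, tail -r is the usual substitute):

```shell
tmp=$(mktemp)
printf '%s\n' 'line one' 'line two' > "$tmp"
tac "$tmp" | while IFS= read -r line; do   # IFS= keeps whole lines intact
  printf '[%s]\n' "$line"
done
# prints [line two] then [line one]
rm -f "$tmp"
```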

@Mike Q 2014-08-19 17:01:57

Double quote the lines !! echo "$p" and the file.. trust me it will bite you if you don't!!! I KNOW! lol

@Jose Antonio Alvarez Ruiz 2016-08-31 10:35:52

That -u option made my day ;) Thanks!

@dawg 2016-09-07 14:15:52

Both versions fail to read a final line if it is not terminated with a newline. Always use while read p || [[ -n $p ]]; do ...

@Veda 2017-01-17 11:11:56

This does not work for lines that end with a backslash "\". Lines ending with a backslash will be prepended to the next line (and the \ will be removed).

@Egor Hans 2017-11-12 14:37:04

@Veda Now that's weird. What I would expect is, you get an extra n after the backslash, and the lines get concatenated. Because that would mean, the backslash escapes the backslash of \n, causing it to be interpreted literally rather than as a newline. But the fact that the backslash disappears, as well as the newline, means it's consumed for some kind of escaping like expected, but gets merged with the original newline character into something that isn't printed... Do you have a tool that displays unprinted characters in some way? Would interest me what that results in.

@Veda 2017-11-13 15:32:16

@EgorHans the \ escapes the "\n" character which is a single character. Google for an "ascii table". Character 10 is \n and character 13 is \r. Linux "xxd" tool will show you the characters. A file with a\na\n\\n will look like: 610a 610a 5c0a (0a is hex for 10, so \n). So the last case the "5c" character or the "\" is escaping a single character.

@Egor Hans 2017-11-14 16:30:35

@Veda Ah OK, now I understand better. Didn't realize the file content gets dumped into the execution flow the way it's inside the file, where of course \n is one single character. For some reason I've been thinking it gets backresolved to the control sequence while processed. Still, it's somewhat weird that an escaped \n is something without a printed representation. One would expect it to resolve to the char sequence "\n" when escaped.

@Shmiggy 2018-10-03 12:21:36

Your soul be blessed for that different file descriptor command, made me happy, wasted 8 days on an error generated by standard input replacement. +1

@Miguel Ortiz 2019-03-19 13:26:31

The first example is lacking an ";" after done.

@Alexander Mills 2019-06-04 06:33:33

can you please mention what the -r flag does?

@Bruno De Fraine 2019-06-05 07:13:17

@AlexanderMills -r disables the interpretation of backslashes as escape sequences. The empty IFS prevents read from splitting the line into fields. And because read fails when it encounters end-of-file before the line ends, we also test for a non-empty line.
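A quick sketch of the -r difference, using a throwaway temp file:

```shell
tmp=$(mktemp)
printf '%s\n' 'one\two' > "$tmp"
read p < "$tmp";    echo "$p"   # onetwo  -- the backslash is consumed as an escape
read -r p < "$tmp"; echo "$p"   # one\two -- preserved verbatim
rm -f "$tmp"
```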

@Stan Graves 2009-10-05 18:18:47

Option 1a: While loop: Single line at a time: Input redirection

echo Start
while read p; do 
    echo $p
done < $filename

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

exec 4<$filename
echo Start
while read -u4 p ; do
    echo $p
done
Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works because the given input file lines are single-word tokens. If there were more than one token per line, then this method would not work. Also, reading the full file into a single variable is not a good strategy for large files.

filelines=`cat $filename`
echo Start
for line in $filelines ; do
    echo $line
done
@Peter Mortensen 2009-10-05 20:03:56

For option 1b: does the file descriptor need to be closed again? E.g. the loop could be an inner loop.

@Stan Graves 2009-10-05 21:09:15

The file descriptor will be cleaned up when the process exits. An explicit close can be done to reuse the fd number. To close a fd, use another exec with the &- syntax, like this: exec 4<&-
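A minimal sketch of opening, reading from, and closing a numbered descriptor (the file here is a throwaway temp file):

```shell
tmp=$(mktemp)
printf '%s\n' a b > "$tmp"
exec 4< "$tmp"          # open fd 4 for reading
read -u 4 first
read -u 4 second
exec 4<&-               # close fd 4 so the number can be reused
echo "$first $second"   # a b
rm -f "$tmp"
```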

@masgo 2014-06-04 13:50:49

Thank you for Option 2. I ran into huge problems with Option 1 because I needed to read from stdin within the loop; in such a case Option 1 will not work.

@Egor Hans 2017-11-12 16:44:57

You should point out more clearly that Option 2 is strongly discouraged. @masgo Option 1b should work in that case, and can be combined with the input redirection syntax from Option 1a by replacing done < $filename with done 4<$filename (which is useful if you want to read the file name from a command parameter, in which case you can just replace $filename by $1).

@user5359531 2018-11-12 23:21:43

I need to loop over file contents such as tail -n +2 myfile.txt | grep 'somepattern' | cut -f3, while running ssh commands inside the loop (consumes stdin); option 2 here appears to be the only way?

@dawg 2016-02-03 19:15:14

Suppose you have this file:

$ cat /tmp/test.txt
Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR

There are four elements that will alter the meaning of the file output read by many Bash solutions:

  1. The blank line 4;
  2. Leading or trailing spaces on two lines;
  3. Maintaining the meaning of individual lines (i.e., each line is a record);
  4. The line 6 not terminated with a CR.

If you want to read the text file line by line, including blank lines and a final line not terminated with a CR, you must use a while loop and you must have an alternate test for the final line.

Here are the methods that may change the file (in comparison to what cat returns):

1) Lose the last line and leading and trailing spaces:

$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
'Line 5 (follows a blank line) and has trailing space'

(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR)

2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:

$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR'

(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)

The most robust and simplest way to read a file line-by-line and preserve all spacing is:

$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'    Line 2 has leading space'
'Line 3 followed by blank line'
'Line 5 (follows a blank line) and has trailing space    '
'Line 6 has no ending CR'

If you want to strip leading and trailing spaces, remove the IFS= part:

$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'

(A text file without a terminating \n, while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)

More at the BASH FAQ

@dawg 2018-11-04 16:30:40

May I ask why the downvote?

@Jahid 2015-06-09 15:09:00

Use a while loop, like this:

while IFS= read -r line; do
   echo "$line"
done <file


  1. If you don't set the IFS properly, you will lose indentation.

  2. You should almost always use the -r option with read.

  3. Don't read lines with for

@David C. Rankin 2015-06-23 02:31:22

Why the -r option?

@Jahid 2015-06-23 06:01:40

@DavidC.Rankin The -r option prevents backslash interpretation. Note #2 is a link where it is described in detail...

@Florin Andrei 2017-02-17 00:06:23

Combine this with the "read -u" option in another answer and then it's perfect.

@Jahid 2017-02-17 05:37:47

@FlorinAndrei : The above example doesn't need the -u option, are you talking about another example with -u?

@Egor Hans 2017-11-12 16:49:02

Looked through your links, and was surprised there's no answer that simply links your link in Note 2. That page provides everything you need to know about that subject. Or are link-only answers discouraged or something?

@Jahid 2017-11-13 02:00:39

@EgorHans : link only answers are generally deleted.

@Egor Hans 2017-11-14 16:34:18

Ah. Alright, never suggesting a link-only answer again. Maybe there even were some, we'll never know.

@Anjul Sharma 2016-03-08 16:10:51

If you don't want your read to be broken by newline character, use -

while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line"
done < "$1"

Then run the script with file name as parameter.

@Alan Jebakumar 2015-08-30 05:00:05

@Peter: This could work out for you-

echo "Start!"; for p in $(cat ./pep); do
  echo $p
done
This would return the output-


@fedorqui 2016-06-16 10:43:25

@codeforester 2017-01-14 02:55:12

This answer is defeating all the principles set by the good answers above!

@dawg 2017-05-02 17:18:50

Please delete this answer.

@Egor Hans 2017-11-12 14:08:38

Now guys, don't exaggerate. The answer is bad, but it seems to work, at least for simple use cases. As long as that's provided, being a bad answer doesn't take away the answer's right to exist.

@Charles Duffy 2018-09-20 16:36:07

@EgorHans, I disagree strongly: The point of answers is to teach people how to write software. Teaching people to do things in a way that you know is harmful to them and the people who use their software (introducing bugs / unexpected behaviors / etc) is knowingly harming others. An answer known to be harmful has no "right to exist" in a well-curated teaching resource (and curating it is exactly what we, the folks who are voting and flagging, are supposed to be doing here).

@Charles Duffy 2018-09-20 16:40:36

@EgorHans, ...incidentally, the worst data-loss incident I've been personally witness to was caused by ops staff doing something that "seemed to work" in a script (using an unquoted expansion for a filename to be deleted -- when that name was supposed to be able to contain only hex digits). Except a bug in a different piece of software wrote a name with random contents, which had a whitespace-surrounded *, and a massive trove of billing-data backups was lost.

@Whome 2015-06-30 08:15:45

Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop the double quotes from a variable, and use that variable outside of the loop. I guess quite many are asking these questions sooner or later.

##Parse FPS from first video stream, drop quotes from fps variable
FPS="unknown"
while read -r line; do
  if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
    echo ParseFPS $line
    FPS="parse"
  fi
  if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
    echo ParseFPS $line
    FPS=${line##*=}    # split off the right side of the last '='
    FPS=${FPS#\"}      # drop the first quote
    FPS=${FPS%\"}      # drop the last quote
  fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then 
  echo ParseFPS Unknown frame rate
  exit 1
fi
echo Found $FPS

Declaring the variable outside of the loop, setting its value inside the loop, and using it outside the loop requires the done <<< "$(...)" syntax. The application needs to run within the context of the current console. The quotes around the command keep the newlines of the output stream.

The loop first matches for substrings, then reads the name=value pair, splits off the right-hand side of the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
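The quote-stripping steps described above can be sketched with parameter expansion on a sample ffprobe-style line (the line itself is made up):

```shell
line='streams.stream.0.r_frame_rate="30000/1001"'
value=${line##*=}     # drop everything up to the last '='
value=${value#\"}     # drop the leading quote
value=${value%\"}     # drop the trailing quote
echo "$value"         # 30000/1001
```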

@Egor Hans 2017-11-12 14:14:37

While the answer is correct, I do understand how it ended up down here. The essential method is the same as proposed by many other answers. Plus, it completely drowns in your FPS example.

@Sine 2013-11-14 14:23:16

# Change the file name from "test" to desired input file 
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt); do
    echo $x
done

@Toby Speight 2015-06-08 16:32:22

This answer needs the caveats mentioned in mightypile's answer, and it can fail badly if any line contains shell metacharacters (due to the unquoted "$x").

@Egor Hans 2017-11-12 14:17:01

I'm actually surprised people didn't yet come up with the usual Don't read lines with for...

@mightypile 2013-10-04 13:30:51

This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.

for word in $(cat peptides.txt); do echo $word; done

This format allows me to put it all in one command-line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.

for word in $(cat peptides.txt); do $word; $word; done

Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.

for word in $(cat peptides.txt); do $word; $word; done > outfile.txt

I've used these as written above because I have used text files where I've created them with one word per line. (See comments) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:

OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do $line; $line; done > outfile.txt; IFS=$OLDIFS

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.

Best of luck!

@Joao Costa 2013-10-30 12:37:32

This doesn't meet the requirement (iterate through each line) if the file contains spaces or tabs, but can be useful if you want to iterate through each field in a tab/space separated file.

@maxpolk 2013-12-08 17:58:50

The bash $(<peptides.txt) is perhaps more elegant, but it's still wrong; what Joao said is correct: you are performing command substitution logic where a space or a newline is the same thing. If a line has a space in it, the loop executes TWICE or more for that one line. So your code should properly read: for word in $(<peptides.txt); do .... If you know for a fact there are no spaces, then a line equals a word and you're okay.

@mightypile 2013-12-22 15:49:26

@JoaoCosta,maxpolk : Good points that I hadn't considered. I've edited the original post to reflect them. Thanks!

@mklement0 2013-12-22 16:09:52

Using for makes the input tokens/lines subject to shell expansions, which is usually undesirable; try this: for l in $(echo '* b c'); do echo "[$l]"; done - as you'll see, the * - even though originally a quoted literal - expands to the files in the current directory.

@Toby Speight 2015-06-08 16:34:12

Don't forget to quote your usages of "$word" and "$line"...

@dblanchard 2015-11-13 19:18:45

Joao and maxpolk, you are addressing the issue I'm having, but I'm still getting a separate iteration for each half of each line with a space:

> cat linkedin_OSInt.txt
"foo bar"
"baz bux"
> for url in $(<linkedin_OSInt.txt); do echo "$url"; done
"foo bar"
"baz bux"

I'll try the other approaches here, but would like to understand why this one doesn't work.

@mightypile 2015-11-24 00:53:16

@dblanchard: The last example, using $IFS, should ignore spaces. Have you tried that version?

@Znik 2015-11-27 12:44:47

Please notice: if you change cat peptides.txt to find /, the for loop will not start before the inner command finishes. In between these steps the entire output has to be buffered, which can overflow.

@Egor Hans 2017-11-12 14:23:33

The way this command gets a lot more complex as crucial issues are fixed presents very well why using for to iterate file lines is a bad idea. Plus, the expansion aspect mentioned by @mklement0 (even though that probably can be circumvented by bringing in escaped quotes, which again makes things more complex and less readable).

@David Tabernero M. 2018-06-07 23:07:41

This is, in a readable way, the only answer that also reads the last line of a file, which is a pro.

@Warren Young 2009-10-05 17:54:38

cat peptides.txt | while read line
do
   # do something with $line here
done

@JesperE 2009-10-05 18:02:21

In general, if you're using "cat" with only one argument, you're doing something wrong (or suboptimal).

@Peter Mortensen 2009-10-05 18:10:41

I have tried it and it works (as well as Bruno De Fraine's).

@Warren Young 2009-10-05 18:12:45

Yes, it's just not as efficient as Bruno's, because it launches another program, unnecessarily. If efficiency matters, do it Bruno's way. I remember my way because you can use it with other commands, where the "redirect in from" syntax doesn't work.

@Gordon Davisson 2009-10-06 00:57:27

There's another, more serious problem with this: because the while loop is part of a pipeline, it runs in a subshell, and hence any variables set inside the loop are lost when it exits. This can be very annoying (depending on what you're trying to do in the loop).

@Ogre Psalm33 2011-11-21 16:35:05

@JesperE would you care to elaborate with an alternative example?

@Warren Young 2011-11-21 16:37:55

@Ogre: He means you should be doing it like Bruno did in his accepted answer. Both work. Bruno's way is just a bit more efficient, since it doesn't run an external command to do the file reading bit. If the efficiency matters, do it Bruno's way. If not, then do it whatever way makes the most sense to you.

@JesperE 2011-11-22 10:38:59

@OgrePsalm33: Warren is right. The "cat" command is used for concatenating files. If you are not concatenating files, chances are that you don't need to use "cat".

@Ogre Psalm33 2011-11-22 21:43:54

Ok, makes sense. I wanted to make a point of it because I see a lot of overused examples in scripts and such, where "cat" simply serves as an extra step to get the contents of a single file.

@mat kelcey 2014-02-26 21:33:56

I use "cat file | " as the start of a lot of my commands purely because I often prototype with "head file |"

@ACK_stoverflow 2014-06-18 19:25:16

@matkelcey Also, how else would you put an entire file into the front of a pipeline? Bash gives you here strings, which are awesome (especially for things like if grep -q 'findme' <<< "$var") but not portable, and I wouldn't want to start a large pipeline with one. Something like cat ifconfig.output | grep inet[^6] | grep -v '' | awk '{print $2}' | cut -d':' -f2 is easier to read, since everything follows from left to right. It's like strtoking with awk instead of cut because you don't want empty tokens - it's sort of an abuse of the command, but that's just how it's done.

@Savage Reader 2014-12-22 13:02:48

This may be not that efficient, but it's much more readable than other answers.

@tishma 2015-09-03 09:59:35

+1 for readability, and also modularity - this code can easily be put into a more complex pipeline by replacing 'cat ...' with output of something else.

@Znik 2015-11-27 12:42:15

This is a much better solution than the one Bruno has written. It is especially useful when the data is created dynamically by a command. With Bruno's solution, the loop only receives the data after the command has completely finished. This solution feeds the command's output to the loop line by line, without buffering it all in the system first. For example, replace 'cat peptides.txt' with 'find /', or in the previous solution 'done <peptides.txt' with 'done < <(find /)'; otherwise execution can fail because of a buffer overflow or consuming all the memory.

@Ryan 2018-02-28 01:20:24

By the time you care about the difference in performance you won't be asking SO these sorts of questions.

@Mike D 2018-06-05 14:09:45

< peptides.txt | while read line...

@Cory Ringdahl 2018-08-29 00:27:12

This is, however, great for grep, sed, or any other text manipulation prepending the read.

@user5359531 2018-11-12 22:58:37

this does not work if any of the commands inside your loop run commands via ssh; the stdin stream gets consumed (even if ssh is not using it), and the loop terminates after the first iteration.

@Charles Duffy 2018-11-30 03:08:20

@MikeD, that's a zsh-ism; it doesn't work in bash.

@tripleee 2019-01-30 18:38:50

As in the accepted answer, this will have unpleasant surprises without read -r in some corner cases. Basically always use read -r unless you specifically require the quirky behavior of plain legacy read.

@januarvs 2019-04-18 21:41:31

It skips the last line. So as a workaround, you must add an empty line at the end.

@Warren Young 2019-04-19 02:43:19

@januarvs: It only does that if the last line of your file has no LF terminator, which will cause lots of other things to fail, too.
