By Tomas Sedovic

2009-03-03 12:23:01 8 Comments

I'm using this code to get standard output from an external program:

>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]

The communicate() method returns the output as a bytes object:

>>> command_stdout
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n'

However, I'd like to work with the output as a normal Python string. So that I could print it like this:

>>> print(command_stdout)
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2

I thought that's what the binascii.b2a_qp() method is for, but when I tried it, I got the same byte array again:

>>> binascii.b2a_qp(command_stdout)
b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n'

Does anybody know how to convert the bytes value back to string? I mean, using the "batteries" instead of doing it manually. And I'd like it to be ok with Python 3.


@dF. 2009-03-03 12:28:31

You need to decode the byte string and turn it into a character (unicode) string.

On Python 2:

encoding = 'utf-8'
b'hello'.decode(encoding)

On Python 3:

encoding = 'utf-8'
str(b'hello', encoding)

@anatoly techtonik 2014-12-17 14:23:09

If you don't know the encoding, then to read binary input into a string in a Python 3 and Python 2 compatible way, use the ancient MS-DOS cp437 encoding:

import sys

PY3K = sys.version_info >= (3, 0)

lines = []
for line in stream:
    if not PY3K:
        lines.append(line)          # Python 2: already a byte string
    else:
        lines.append(line.decode('cp437'))

Because encoding is unknown, expect non-English symbols to translate to characters of cp437 (English chars are not translated, because they match in most single byte encodings and UTF-8).
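A quick check of that property (my sketch, not from the original answer): cp437 assigns a character to every possible byte value, so decoding arbitrary binary data never raises and round-trips losslessly.

```python
# cp437 maps every byte 0-255 to some character, so decoding
# arbitrary binary data never raises and is fully reversible
data = bytes(range(256))
text = data.decode('cp437')
roundtrip = text.encode('cp437')
assert roundtrip == data
```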

Decoding arbitrary binary input to UTF-8 is unsafe, because you may get this:

>>> b'\x00\x01\xffsd'.decode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid
start byte

The same applies to latin-1, which was popular (default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with infamous ordinal not in range.

UPDATE 20150604: There are rumors that Python 3 has surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests [binary] -> [str] -> [binary] to validate both performance and reliability.
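A minimal version of the [binary] -> [str] -> [binary] round-trip check described above (my sketch, checking reliability only, not performance):

```python
# surrogateescape smuggles undecodable bytes through as lone
# surrogate code points, so bytes -> str -> bytes is lossless
data = b'\x00\x01\xffsd'
text = data.decode('utf-8', 'surrogateescape')
restored = text.encode('utf-8', 'surrogateescape')
assert restored == data
```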

UPDATE 20170116: Thanks to comment by Nearoo - there is also a possibility to slash escape all unknown bytes with backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions:

import sys

PY3K = sys.version_info >= (3, 0)

lines = []
for line in stream:
    if not PY3K:
        lines.append(line)          # Python 2: keep the byte string as-is
    else:
        lines.append(line.decode('utf-8', 'backslashreplace'))


UPDATE 20170119: I decided to implement a slash-escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.

# --- preparation

import codecs

def slashescape(err):
    """codecs error handler. err is a UnicodeDecodeError instance.
    Returns a tuple with a replacement for the undecodable part of
    the input and the position where decoding should continue."""
    thebyte = err.object[err.start:err.end]
    # zero-pad each byte; iterate to cope with multi-byte error ranges
    repl = u''.join(u'\\x%02x' % (b if isinstance(b, int) else ord(b))
                    for b in thebyte)
    return (repl, err.end)

codecs.register_error('slashescape', slashescape)

# --- processing

stream = [b'\x80abc']

lines = []
for line in stream:
    lines.append(line.decode('utf-8', 'slashescape'))

@anatoly techtonik 2015-02-20 09:04:01

I really feel like Python should provide a mechanism to replace missing symbols and continue.

@wallyk 2015-05-27 21:19:25

Brilliant! This is much faster than @Sisso's method for a 256 MB file!

@user2284570 2015-10-20 23:02:01

@techtonik : This won’t work on an array like it worked in python2.

@anatoly techtonik 2015-10-22 07:25:20

@user2284570 do you mean a list? And why should it work on arrays? Especially arrays of floats.

@Antonis Kalou 2016-07-06 12:14:08

You can also just ignore unicode errors with b'\x00\x01\xffsd'.decode('utf-8', 'ignore') in python 3.

@Nearoo 2017-01-16 10:40:37

@anatolytechtonik There is the possibility to leave the escape sequence in the string and move on: b'\x80abc'.decode("utf-8", "backslashreplace") will result in '\\x80abc'. This information was taken from the unicode documentation page which seems to have been updated since the writing of this answer.

@anatoly techtonik 2017-01-16 14:53:17

@Nearoo updated the answer. Unfortunately it doesn't work with Python 2 - see…

@Kevin 2019-06-03 13:58:42

"Decoding arbitrary binary input to UTF-8 is unsafe... The same applies to latin-1". Can you elaborate on this? b'\x00\x01\xffsd'.decode("latin-1") runs without crashing on my machine (tested in 2.7.11 and 3.7.3). Can you give an example of a bytes object that crashes with "ordinal not in range" when you try to latin1-decode it?

@Sisso 2012-08-22 12:57:08

I think this way is easy:

>>> bytes_data = [112, 52, 52]
>>> "".join(map(chr, bytes_data))
'p44'

@leetNightshade 2014-05-10 00:28:58

Thank you, your method worked for me when none other did. I had a non-encoded byte array that I needed turned into a string. Was trying to find a way to re-encode it so I could decode it into a string. This method works perfectly!

@Martijn Pieters 2014-09-01 16:25:49

@leetNightshade: yet it is terribly inefficient. If you have a byte array you only need to decode.

@leetNightshade 2014-09-01 17:06:38

@Martijn Pieters I just did a simple benchmark against these other answers, doing multiple 10,000-iteration runs, and the above solution was actually much faster every single time. For 10,000 runs in Python 2.7.7 it takes 8 ms, versus the others at 12 ms and 18 ms. Granted, there could be some variation depending on input, Python version, etc. It doesn't seem too slow to me.

@Martijn Pieters 2014-09-01 17:11:03

@leetNightshade: yet the OP here is using Python 3.

@leetNightshade 2014-09-01 17:13:46

@Martijn Pieters Fair enough. In Python 3.4.1 x86 this method takes 17.01ms, the others 24.02ms, and 11.51ms for the bytearray to string cast. So it's not the fastest in that case.

@Martijn Pieters 2014-09-01 17:20:45

@leetNightshade: you also appear to be talking about integers and bytearrays, not a bytes value (as returned by Popen.communicate()).

@leetNightshade 2014-09-01 17:28:19

@Martijn Pieters Yes. So with that point, this isn't the best answer for the body of the question that was asked. And the title is misleading, isn't it? He/she wants to convert a byte string to a regular string, not a byte array to a string. This answer works okay for the title of the question that was asked.

@Martijn Pieters 2014-09-01 17:32:44

@leetNightshade: the title can indeed be misleading, I'll edit.

@Sasszem 2016-10-01 22:53:43

It can convert bytes read from a file opened with "rb" to a string, and it's handy when you don't know the encoding.

@jfs 2016-11-16 03:16:05

@Sasszem: this method is a perverted way to express: a.decode('latin-1') where a = bytearray([112, 52, 52]). ("There Ain't No Such Thing as Plain Text." If you've managed to convert bytes into a text string, then you used some encoding; latin-1 in this case.)

@Mr_and_Mrs_D 2017-10-11 15:14:29

For python 3 this should be equivalent to bytes([112, 52, 52]) - btw bytes is a bad name for a local variable exactly because it's a p3 builtin

@Martijn Pieters 2018-07-03 12:01:04

@leetNightshade: For completeness sake: bytes(list_of_integers).decode('ascii') is about 1/3rd faster than ''.join(map(chr, list_of_integers)) on Python 3.6.
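For reference, the two approaches being compared produce identical text; a small equivalence check (my sketch, using the sample data from the answer above):

```python
# both convert a list of small ints (0-127) into the same string
ints = [112, 52, 52]
via_join = ''.join(map(chr, ints))       # chr() each int, then join
via_decode = bytes(ints).decode('ascii') # build bytes, decode once
assert via_join == via_decode
```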

@Zhichang Yu 2014-01-11 07:15:18


To write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').
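The same .buffer attribute exists on any text stream; a sketch using an in-memory stream so the effect is visible (sys.stdout wraps sys.stdout.buffer the same way):

```python
import io

# a text stream over an in-memory binary buffer, mirroring how
# sys.stdout wraps the binary sys.stdout.buffer
raw = io.BytesIO()
text_stream = io.TextIOWrapper(raw, encoding='utf-8')

text_stream.write('hello ')          # text goes through the encoder
text_stream.flush()                  # push encoded text into raw
text_stream.buffer.write(b'world')   # bytes bypass the text layer

assert raw.getvalue() == b'hello world'
```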

@Martijn Pieters 2014-09-01 17:34:19

The pipe to the subprocess is already a binary buffer. Your answer fails to address how to get a string value from the resulting bytes value.

@serv-inc 2015-11-13 10:24:21

While @Aaron Maenpaa's answer just works, a user recently asked:

Is there any more simply way? '"ASCII")' [...] It's so long!

You can use:

command_stdout.decode()

decode() has a standard argument:

codecs.decode(obj, encoding='utf-8', errors='strict')
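For instance (my examples, not from the original answer):

```python
import codecs

# default encoding is utf-8
decoded_default = codecs.decode(b'caf\xc3\xa9')
# an explicit encoding can be passed
decoded_latin = codecs.decode(b'caf\xe9', encoding='latin-1')
# as can an error strategy
decoded_replace = codecs.decode(b'\xffabc', errors='replace')
```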

@Broper 2017-11-22 04:20:55

If you get the following error when trying decode():

AttributeError: 'str' object has no attribute 'decode'

You can also specify the encoding type straight in a cast:

>>> my_byte_str
b'Hello World'

>>> str(my_byte_str, 'utf-8')
'Hello World'

@HCLivess 2019-06-01 02:30:56

If you want to convert any bytes, not just string converted to bytes:

import base64
import json

# base85-encode the raw bytes into an ASCII byte string
with open("bytesfile", "rb") as infile:
    encoded = base64.b85encode(infile.read())

# or serialize the byte values as a JSON list of integers
with open("bytesfile", "rb") as infile:
    encoded2 = json.dumps(list(infile.read()))

This is not very efficient, however: it will turn a 2 MB picture into 9 MB.
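The overhead is easy to quantify (my sketch with synthetic data standing in for a file's contents): Base85 adds a fixed 25%, while the JSON list of integers costs far more.

```python
import base64
import json

raw = bytes(range(256)) * 4        # 1024 bytes of sample data
b85 = base64.b85encode(raw)        # every 4 input bytes -> 5 ASCII chars
as_json = json.dumps(list(raw))    # every byte becomes its decimal digits

assert len(b85) == len(raw) * 5 // 4   # exactly 25% larger
assert len(as_json) > len(b85)         # JSON is far bigger still
```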

@lmiguelvargasf 2016-06-29 14:21:21

In Python 3, the default encoding is "utf-8", so you can use directly:

b'hello'.decode()

which is equivalent to

b'hello'.decode(encoding="utf-8")

On the other hand, in Python 2, encoding defaults to the default string encoding. Thus, you should use:

b'hello'.decode(encoding)

where encoding is the encoding you want.

Note: support for keyword arguments was added in Python 2.7.

@Leonardo Filipe 2018-06-03 22:44:45

def toString(string):
    try:
        return string.decode("utf-8")
    except (UnicodeDecodeError, AttributeError):
        return string

b = b'97.080.500'
s = '97.080.500'

@Dev-iL 2018-06-04 05:37:06

While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. Remember that you are answering the question for readers in the future, not just the person asking now! Please edit your answer to add an explanation, and give an indication of what limitations and assumptions apply. It also doesn't hurt to mention why this answer is more appropriate than others.

@wim 2018-05-31 17:52:19

Since this question is actually asking about subprocess output, you have a more direct approach available since Popen accepts an encoding keyword (in Python 3.6+):

>>> from subprocess import Popen, PIPE
>>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0]
>>> type(text)
<class 'str'>
>>> print(text)
total 0
-rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt

The general answer for other users is to decode bytes to text:

>>> b'abcde'.decode()
'abcde'

With no argument, sys.getdefaultencoding() will be used. If your data is not in that encoding, then you must specify the encoding explicitly in the decode call:

>>> b'caf\xe9'.decode('cp1250')
'café'

@Boris 2018-12-24 19:04:25

Or with Python 3.7 you can pass text=True to decode stdin, stdout and stderr using the given encoding (if set) or the system default otherwise. Popen(['ls', '-l'], stdout=PIPE, text=True).

@bers 2018-03-16 13:28:25

When working with data from Windows systems (with \r\n line endings), my answer is

String = Bytes.decode("utf-8").replace("\r\n", "\n")

Why? Try this with a multiline Input.txt:

Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8")
open("Output.txt", "w").write(String)

All your line endings will be doubled (to \r\r\n), leading to extra empty lines. Python's text-read functions usually normalize line endings so that strings use only \n. If you receive binary data from a Windows system, Python does not have a chance to do that. Thus,

Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8").replace("\r\n", "\n")
open("Output.txt", "w").write(String)

will replicate your original file.

@mhlavacka 2019-02-20 09:45:43

I was looking for the .replace("\r\n", "\n") addition for so long. This is the answer if you want to render HTML properly.

@eafloresf 2016-06-01 00:03:04

I made a function to clean a list

def cleanLists(self, lista):
    lista = [x.strip() for x in lista]
    lista = [x.replace('\n', '') for x in lista]
    lista = [x.replace('\b', '') for x in lista]
    lista = [x.encode('utf8') for x in lista]
    lista = [x.decode('utf8') for x in lista]

    return lista

@Taylor Edmiston 2017-06-11 19:04:17

You can actually chain all of the .strip, .replace, .encode, etc calls in one list comprehension and only iterate over the list once instead of iterating over it five times.
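That single-pass version might look like this (my sketch; the utf-8 encode/decode round trip from the original is a no-op for str values, so it is dropped):

```python
def clean_lists(lista):
    # strip whitespace and drop newline/backspace chars in one pass
    return [x.strip().replace('\n', '').replace('\b', '') for x in lista]

cleaned = clean_lists(['  a\n ', 'b\bc'])
assert cleaned == ['a', 'bc']
```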

@JulienD 2017-07-28 07:13:59

@TaylorEdmiston Maybe it saves on allocation but the number of operations would remain the same.

@Inconnu 2017-01-18 07:21:09

For Python 3, this is a much safer and Pythonic approach to convert from bytes to string:

def byte_to_str(bytes_or_str):
    if isinstance(bytes_or_str, bytes):  # check if it is a bytes object
        print(bytes_or_str.decode('utf-8'))
    else:
        print("Object not of byte type")

byte_to_str(b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2\n')


total 0
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2

@bodangly 2018-02-10 07:15:21

Checking types is one of the least Pythonic things I can imagine...

@cosmicFluke 2018-05-25 19:51:26

1) As @bodangly said, type checking is not pythonic at all. 2) The function you wrote is named "byte_to_str" which implies it will return a str, but it only prints the converted value, and it prints an error message if it fails (but doesn't raise an exception). This approach is also unpythonic and obfuscates the bytes.decode solution you provided.

@jfs 2016-11-16 09:43:26

To interpret a byte sequence as a text, you have to know the corresponding character encoding:

unicode_text = bytestring.decode(character_encoding)


>>> b'\xc2\xb5'.decode('utf-8')
'µ'

ls command may produce output that can't be interpreted as text. File names on Unix may be any sequence of bytes except slash b'/' and zero b'\0':

>>> open(bytes(range(0x100)).translate(None, b'\0/'), 'w').close()

Trying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.

It can be worse. The decoding may fail silently and produce mojibake if you use a wrong incompatible encoding:

>>> '—'.encode('utf-8').decode('cp1252')
'â€”'

The data is corrupted but your program remains unaware that a failure has occurred.

In general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others and therefore chardet module exists that can guess the character encoding. A single Python script may use multiple character encodings in different places.

ls output can be converted to a Python string using os.fsdecode() function that succeeds even for undecodable filenames (it uses sys.getfilesystemencoding() and surrogateescape error handler on Unix):

import os
import subprocess

output = os.fsdecode(subprocess.check_output('ls'))

To get the original bytes, you could use os.fsencode().
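A round trip on a name that is not valid UTF-8 (my sketch; on Unix this relies on the surrogateescape handler mentioned above, so the bytes survive intact):

```python
import os

# a byte filename that cannot be decoded as utf-8
name_bytes = b'caf\xe9'
name_text = os.fsdecode(name_bytes)   # undecodable byte -> lone surrogate
assert os.fsencode(name_text) == name_bytes
```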

If you pass universal_newlines=True parameter then subprocess uses locale.getpreferredencoding(False) to decode bytes e.g., it can be cp1252 on Windows.

To decode the byte stream on-the-fly, io.TextIOWrapper() could be used: example.

Different commands may use different character encodings for their output e.g., dir internal command (cmd) may use cp437. To decode its output, you could pass the encoding explicitly (Python 3.6+):

output = subprocess.check_output('dir', shell=True, encoding='cp437')

The filenames may differ from os.listdir() (which uses the Windows Unicode API) e.g., '\xb6' can be substituted with '\x14'—Python's cp437 codec maps b'\x14' to control character U+0014 instead of U+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string.

@Aaron Maenpaa 2009-03-03 12:26:18

You need to decode the bytes object to produce a string:

>>> b"abcde"
b'abcde'

# utf-8 is used here because it is a very common encoding, but you
# need to use the encoding your data is actually in.
>>> b"abcde".decode("utf-8")
'abcde'

@mcherm 2011-07-18 19:48:00

Yes, but given that this is the output from a windows command, shouldn't it instead be using ".decode('windows-1252')" ?

@nikow 2012-01-03 15:20:55

Using "windows-1252" is not reliable either (e.g., for other language versions of Windows), wouldn't it be best to use sys.stdout.encoding?

@Wookie88 2013-04-16 13:27:01

Maybe this will help somebody further: sometimes you use a byte array, e.g. for TCP communication. If you want to convert a byte array to a string, cutting off trailing '\x00' characters, the following answer is not enough. Use b'example\x00\x00'.decode('utf-8').strip('\x00') then.

@anatoly techtonik 2013-04-28 14:40:18

I've filed a bug about documenting this; feel free to propose a patch. If it is hard to contribute, comments on how to improve it are welcome.

@CMCDragonkai 2014-04-16 02:59:41

what other decoding options does the binary object possess?

@martineau 2014-05-18 20:12:06

Python 2.7.6 doesn't handle b"\x80\x02\x03".decode("utf-8") -> UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte.

@wallyk 2015-05-27 21:21:46

If the content is random binary values, the utf-8 conversion is likely to fail. Instead see @techtonik answer (below)

@user2284570 2015-10-20 23:02:58

@AaronMaenpaa : This won’t work on an array like it worked in python2.

@serv-inc 2015-11-13 10:25:25

@Profpatsch: it's kinda hidden. See answer below for a reference to documentation. It's also in the bytes-docstring (help(command_stdout)).

@Kevin Shea 2017-10-09 12:03:02

@nikow: small update on using sys.stdout.encoding - this is allowed to be None which will cause encode() to fail.

@Jessica Warren 2018-01-01 21:20:15

I have some code for a networking program: [def dataReceived(self, data): print(f"Received quote: {data}")]. It's printing out "Received quote: b'\x00&C:\\Users\\.pycharm2016.3\\config\x00&C:\\users\\pycharm\\system\x00\x03--'". How would I change my code to fix this? When I write print(f"Received quote: {data}".decode('utf-8')) that does not do the trick.

@Steve Hollasch 2018-04-10 21:38:55

See @borislav-sabev 's answer below. Much better solution.

@Shayne 2018-07-04 17:39:08

While this is generally the way to go, you need to be certain you've got the encoding right, or your code might end up vomiting all over itself. To make it worse, data from the outside world can contain unexpected encodings. The chardet library can help you with this, but again, always program defensively: sometimes even chardet can get it wrong, so wrap your junk with some appropriate exception handling.

@Shihabudheen K M 2018-07-27 06:46:50

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 168: invalid start byte

@Charlie Parker 2019-03-14 22:25:06

why doesn't str(text_bytes) work? This seems bizarre to me.

@Charlie Parker 2019-03-14 22:29:46

is this expected? I get AttributeError: 'str' object has no attribute 'decode', but the string has a b at the beginning: b'(Answer 1 Ack)\n' huh?!

@ContextSwitch 2014-01-21 15:31:09

Set universal_newlines to True, i.e.

command_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0]

@twasbrillig 2014-03-01 22:43:00

I've been using this method and it works. Although it's just guessing at the encoding based on user preferences on your system, so it's not as robust as some other options. This is what it's doing, per the documentation: "If universal_newlines is True, [stdin, stdout and stderr] will be opened as text streams in universal newlines mode using the encoding returned by locale.getpreferredencoding(False)."

@Boris 2019-01-13 17:02:29

On 3.7 you can (and should) do text=True instead of universal_newlines=True.

@mcherm 2011-07-18 19:51:15

I think what you actually want is this:

>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
>>> command_text = command_stdout.decode(encoding='windows-1252')

Aaron's answer was correct, except that you need to know WHICH encoding to use. And I believe that Windows uses 'windows-1252'. It will only matter if you have some unusual (non-ascii) characters in your content, but then it will make a difference.

By the way, the fact that it DOES matter is the reason that Python moved to using two different types for binary and text data: it can't convert magically between them, because it doesn't know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation.

@jfs 2014-02-21 17:00:20

open() function for text streams or Popen() if you pass it universal_newlines=True do magically decide character encoding for you (locale.getpreferredencoding(False) in Python 3.3+).

@tripleee 2017-02-17 07:32:29

'latin-1' is a verbatim encoding with all code points set, so you can use that to effectively read a byte string into whichever type of string your Python supports (so verbatim on Python 2, into Unicode for Python 3).
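That property can be demonstrated directly (my sketch): latin-1 maps every byte 0-255 to the code point of the same number, so any byte string decodes without error and round-trips losslessly.

```python
# latin-1 is a 1:1 mapping between bytes and code points U+0000..U+00FF
data = bytes(range(256))
text = data.decode('latin-1')
assert text.encode('latin-1') == data
assert ord(text[0xe9]) == 0xe9  # byte value equals code point ('é')
```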
