By freshWoWer


2008-09-18 01:35:30 8 Comments

How do you call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script?

30 comments

@N.Nonkovic 2019-11-28 13:40:49

MOST OF THE CASES:

For most cases, a short snippet of code like this is all you are going to need:

import subprocess
import shlex

source = "test.txt"
destination = "test_copy.txt"

base = "cp {source} {destination}"
cmd = base.format(source=source, destination=destination)
subprocess.check_call(shlex.split(cmd))

It is clean and simple.

subprocess.check_call runs the command with arguments and waits for the command to complete.

shlex.split splits the string cmd using shell-like syntax.

REST OF THE CASES:

If this does not work for some specific command, you most probably have a problem with the command-line interpreter: the operating system chose a default one that is not suitable for your type of program, or could not find an adequate one on the system executable path.

Example:

Using the redirection operator on a Unix system

input_1 = "input_1.txt"
input_2 = "input_2.txt"
output = "merged.txt"
base_command = "/bin/bash -c 'cat {input_1} {input_2} > {output}'"

cmd = base_command.format(input_1=input_1, input_2=input_2, output=output)
subprocess.check_call(shlex.split(cmd))

As it is stated in The Zen of Python: "Explicit is better than implicit."

So wrapping this in a Python >= 3.6 function would look something like this:

import subprocess
import shlex

def run_command(cmd_interpreter, command_as_str):
    base_command = f"{cmd_interpreter} -c '{command_as_str}'"
    subprocess.check_call(shlex.split(base_command))

@Tessaracter 2019-11-26 11:39:48

os.popen() is one of the easiest ways to execute a command; note, however, that the command is passed to a shell, so it should not be used with untrusted input. You can execute any command that you run on the command line. In addition, you can capture the output of the command using os.popen().read()

You can do it like this:

import os
output = os.popen('Your Command Here').read()
print (output)

An example where you list all the files in the current directory:

import os
output = os.popen('ls').read()
print (output)
# Outputs list of files in the directory

@noɥʇʎԀʎzɐɹƆ 2019-08-28 16:56:20

If you're writing a Python shell script and have IPython installed on your system, you can use the bang magic to run a command inside IPython:

!ls
filelist = !ls

@noɥʇʎԀʎzɐɹƆ 2019-11-30 01:39:23

@PeterMortensen I don't think it works in DOS, but it should work in Cygwin.

@geckos 2019-03-31 12:21:50

If you are not using user input in the commands, you can use this:

from os import getcwd
from subprocess import check_output

def sh(command):
    return check_output(command, shell=True, cwd=getcwd(), universal_newlines=True).strip()

And use it as

branch = sh('git rev-parse --abbrev-ref HEAD')

shell=True will spawn a shell, so you can use pipes and other shell features: sh('ps aux | grep python'). This is very handy for running hardcoded commands and processing their output. The universal_newlines=True makes sure the output is returned as a string instead of binary.

cwd=getcwd() will make sure that the command is run with the same working directory as the interpreter. This is handy for Git commands to work like the Git branch name example above.

Some recipes

  • free memory in megabytes: sh('free -m').split('\n')[1].split()[1]
  • free space on / in percent: sh('df -m /').split('\n')[1].split()[4][0:-1]
  • CPU load: sum(map(float, sh('ps -ef -o pcpu').split('\n')[1:]))

But this isn't safe for user input, from the documentation:

Security Considerations

Unlike some other popen functions, this implementation will never implicitly call a system shell. This means that all characters, including shell metacharacters, can safely be passed to child processes. If the shell is invoked explicitly, via shell=True, it is the application’s responsibility to ensure that all whitespace and metacharacters are quoted appropriately to avoid shell injection vulnerabilities.

When using shell=True, the shlex.quote() function can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands.

Even when using shlex.quote(), it is good to stay a little paranoid about using user input in shell commands. One option is to run a hardcoded command that produces some generic output and filter that output by the user input. In any case, using shell=False makes sure that only exactly the process you want to execute is executed, or you get a No such file or directory error.
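That "hardcoded command, filter in Python" pattern might look like this sketch (the command and filter string here are just illustrations; the child is an inline python one-liner so the example is self-contained):

```python
import subprocess
import sys

def grep_output(argv, needle):
    """Run a fixed command given as a list (shell=False) and filter its
    output lines in Python. The untrusted string is only used for
    comparison, never handed to a shell."""
    out = subprocess.check_output(argv, universal_newlines=True)
    return [line for line in out.splitlines() if needle in line]

# The command itself is hardcoded; only the filter string varies:
cmd = [sys.executable, '-c', "print('alpha'); print('beta'); print('gamma')"]
matches = grep_output(cmd, 'et')  # -> ['beta']
```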

Also, there is some performance impact with shell=True; from my tests it seems about 20% slower than shell=False (the default).

In [50]: timeit("check_output('ls -l'.split(), universal_newlines=True)", number=1000, globals=globals())
Out[50]: 2.6801227919995654

In [51]: timeit("check_output('ls -l', universal_newlines=True, shell=True)", number=1000, globals=globals())
Out[51]: 3.243950183999914

@Siddharth Satpathy 2019-02-05 00:13:13

There are two prominent ways to execute shell commands using Python. Both examples below show how one can get the name of the present working directory (pwd) using Python. You can use any other Unix command in place of pwd.

1. First method: One can use the os module from Python, and its system() function, to execute shell commands in Python.

import os
os.system('pwd')

Output:

/Users/siddharth

2. Second method: Another way is to use the subprocess module and the call() function.

import subprocess
subprocess.call('pwd')

Output:

/Users/siddharth

@Farzad Vertigo 2019-01-29 05:11:40

As some of the answers were related to previous versions of Python or used the os.system function, I post this answer for people like me who intend to use subprocess in Python 3.5+. The following did the trick for me on Linux:

import subprocess

# subprocess.run() returns a completed process object that can be inspected
c = subprocess.run(["ls", "-ltrh"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(c.stdout.decode('utf-8'))

As mentioned in the documentation, PIPE values are byte sequences, and they should be decoded for proper display. In later versions of Python, text=True and encoding='utf-8' can be added to the kwargs of subprocess.run() to receive strings directly.

The output of the abovementioned code is:

total 113M
-rwxr-xr-x  1 farzad farzad  307 Jan 15  2018 vpnscript
-rwxrwxr-x  1 farzad farzad  204 Jan 15  2018 ex
drwxrwxr-x  4 farzad farzad 4.0K Jan 22  2018 scripts
.... # Some other lines
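On Python 3.7+, the decode step can be dropped by asking subprocess.run for text directly; a minimal sketch:

```python
import subprocess

# capture_output=True replaces the two PIPE arguments (Python 3.7+),
# and text=True makes stdout/stderr str instead of bytes
c = subprocess.run(["ls", "-ltrh"], capture_output=True, text=True)
print(c.stdout)  # already a str, no .decode() needed
```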

@abhi krishnan 2018-07-27 08:10:20

Use the os module:

import os
os.system("your command")

For example,

import os
os.system("ifconfig")

@tripleee 2018-12-03 04:59:29

This duplicates a (slightly) more detailed answer from the previous April, which however also fails to point out the caveats.

@IRSHAD 2016-07-20 09:50:01

To fetch the network id from the OpenStack Neutron:

#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read()  # temp also contains a trailing newline (\n)
networkId = temp.rstrip()
print(networkId)

Output of nova net-list

+--------------------------------------+------------+------+
| ID                                   | Label      | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1      | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External   | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal   | None |
+--------------------------------------+------------+------+

Output of print(networkId)

27a74fcd-37c0-4789-9414-9531b7e3f126

@tripleee 2018-12-03 05:49:21

You should not recommend os.popen() in 2016. The Awk script could easily be replaced with native Python code.

@amehta 2014-08-24 21:46:12

A simple way is to use the os module:

import os
os.system('ls')

Alternatively, you can also use the subprocess module:

import subprocess
subprocess.check_call('ls')

If you want the result to be stored in a variable try:

import subprocess
r = subprocess.check_output('ls')

@andruso 2014-04-12 11:58:23

Use subprocess.call:

from subprocess import call

# Using list
call(["echo", "Hello", "world"])

# Single string argument varies across platforms so better split it
call("echo Hello world".split(" "))

@Jake W 2014-03-14 02:59:05

After some research, I have the following code which works very well for me. It basically prints both stdout and stderr in real time.

import subprocess
import sys
import threading

stdout_result = 1
stderr_result = 1


def stdout_thread(pipe):
    global stdout_result
    while True:
        out = pipe.stdout.read(1)
        stdout_result = pipe.poll()
        if out == '' and stdout_result is not None:
            break

        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()


def stderr_thread(pipe):
    global stderr_result
    while True:
        err = pipe.stderr.read(1)
        stderr_result = pipe.poll()
        if err == '' and stderr_result is not None:
            break

        if err != '':
            sys.stdout.write(err)
            sys.stdout.flush()


def exec_command(command, cwd=None):
    if cwd is not None:
        print '[' + ' '.join(command) + '] in ' + cwd
    else:
        print '[' + ' '.join(command) + ']'

    p = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )

    out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
    err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))

    err_thread.start()
    out_thread.start()

    out_thread.join()
    err_thread.join()

    return stdout_result + stderr_result

@jfs 2015-07-13 18:52:13

your code may lose data if the subprocess exits while some data is still buffered. Read until EOF instead; see teed_call()

@Honza Javorek 2013-04-11 17:17:53

With the standard library

Use the subprocess module (Python 3):

import subprocess
subprocess.run(['ls', '-l'])

It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.

ProTip: shlex.split can help you to parse the command for run, call, and other subprocess functions in case you don't want (or you can't!) provide them in form of lists:

import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))

With external dependencies

If you do not mind external dependencies, use plumbum:

from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())

It is the best subprocess wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum.

Another popular library is sh:

from sh import ifconfig
print(ifconfig('wlan0'))

However, sh dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh.

@Joe 2012-11-15 17:13:22

Update:

subprocess.run is the recommended approach as of Python 3.5 if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how.)
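One way to emulate a shell pipe with run is to capture the first command's stdout and feed it to the second command via input; a minimal sketch (assumes printf and sort are on PATH):

```python
import subprocess

# Roughly equivalent to the shell pipeline: printf 'b\na\n' | sort
first = subprocess.run(["printf", "b\\na\\n"], stdout=subprocess.PIPE)
second = subprocess.run(["sort"], input=first.stdout, stdout=subprocess.PIPE)
print(second.stdout.decode())  # "a" then "b", each on its own line
```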

Here's some examples from the documentation.

Run a process:

>>> subprocess.run(["ls", "-l"])  # Doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)

Raise on failed run:

>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1

Capture output:

>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')

Original answer:

I recommend trying Envoy. It's a wrapper for subprocess, which in turn aims to replace the older modules and functions. Envoy is subprocess for humans.

Example usage from the README:

>>> r = envoy.run('git config', data='data to pipe in', timeout=2)

>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''

Pipe stuff around too:

>>> r = envoy.run('uptime | pbcopy')

>>> r.command
'pbcopy'
>>> r.status_code
0

>>> r.history
[<Response 'uptime'>]

@mdwhatcott 2012-08-13 18:36:32

I quite like shell_command for its simplicity. It's built on top of the subprocess module.

Here's an example from the documentation:

>>> from shell_command import shell_call
>>> shell_call("ls *.py")
setup.py  shell_command.py  test_shell_command.py
0
>>> shell_call("ls -l *.py")
-rw-r--r-- 1 ncoghlan ncoghlan  391 2011-12-11 12:07 setup.py
-rw-r--r-- 1 ncoghlan ncoghlan 7855 2011-12-11 16:16 shell_command.py
-rwxr-xr-x 1 ncoghlan ncoghlan 8463 2011-12-11 16:17 test_shell_command.py
0

@Saurabh Bangad 2012-06-11 22:28:35

os.system does not let you store the command's output, so if you want to capture the results in a variable or list, subprocess.check_output works.
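For example, to get the results into a list (a minimal sketch):

```python
import subprocess

# check_output captures stdout; splitlines() turns it into a list
output = subprocess.check_output(['ls', '/'], universal_newlines=True)
entries = output.splitlines()
```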

@Jorge E. Cardona 2012-03-13 00:12:54

I always use fabric for things like:

from fabric.operations import local
result = local('ls', capture=True)
print "Content:\n%s" % (result, )

But this seem to be a good tool: sh (Python subprocess interface).

Look at an example:

from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)

@newtover 2010-02-12 10:15:34

Some hints on detaching the child process from the calling one (starting the child process in background).

Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.

The classical example from the subprocess module documentation is:

import subprocess
import sys

# Some code here

pid = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess

# Some more code here

The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid

UPD 2015-10-27: @eryksun notes in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010).

On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:

pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

I have not checked the code on other platforms and do not know the reasons of the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet.
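A rough cross-platform sketch of the same detaching idea (the inline python -c child here stands in for longtask.py; start_new_session is the POSIX analogue of the Windows flags discussed above and needs Python 3.2+):

```python
import os
import subprocess
import sys

kwargs = {}
if os.name == 'nt':
    # Windows: detach from the parent's console and process group
    DETACHED_PROCESS = 0x00000008
    CREATE_NEW_PROCESS_GROUP = 0x00000200
    kwargs['creationflags'] = DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP
else:
    # POSIX: start the child in a new session so it is not tied
    # to the parent's terminal or session
    kwargs['start_new_session'] = True

# Redirect the standard handles so nothing is shared with the parent
p = subprocess.Popen([sys.executable, '-c', 'print("long task running")'],
                     stdin=subprocess.DEVNULL,
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL,
                     **kwargs)
```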

@maranas 2010-04-09 08:09:26

I noticed a possible "quirk" with developing py2exe apps in PyDev + Eclipse. I was able to tell that the main script was not detached because Eclipse's output window was not terminating; even if the script executes to completion it is still waiting for returns. But when I tried compiling to a py2exe executable, the expected behavior occurs (it runs the processes as detached, then quits). I am not sure, but the executable name is not in the process list anymore. This works for all approaches (os.system("start *"), os.spawnl with os.P_DETACH, subprocess, etc.)

@Alexey Lebedev 2012-04-16 10:04:33

Windows gotcha: even though I spawned process with DETACHED_PROCESS, when I killed my Python daemon all ports opened by it wouldn't free until all spawned processes terminate. WScript.Shell solved all my problems. Example here: pastebin.com/xGmuvwSx

@jfs 2012-11-16 14:16:42

you might also need CREATE_NEW_PROCESS_GROUP flag. See Popen waiting for child process even when the immediate child has terminated

@ubershmekel 2014-10-27 21:01:35

I'm seeing import subprocess as sp;sp.Popen('calc') not waiting for the subprocess to complete. It seems the creationflags aren't necessary. What am I missing?

@newtover 2014-10-28 12:25:50

@ubershmekel, I am not sure what you mean and don't have a windows installation. If I recall correctly, without the flags you can not close the cmd instance from which you started the calc.

@ubershmekel 2014-10-30 05:45:20

I'm on Windows 8.1 and calc seems to survive the closing of python.

@SuperBiasedMan 2015-05-05 13:13:32

Is there any significance to using '0x00000008'? Is that a specific value that has to be used or one of multiple options?

@Eryk Sun 2015-10-27 00:27:30

The following is incorrect: "[o]n windows (win xp), the parent process will not finish until the longtask.py has finished its work". The parent will exit normally, but the console window (conhost.exe instance) only closes when the last attached process exits, and the child may have inherited the parent's console. Setting DETACHED_PROCESS in creationflags avoids this by preventing the child from inheriting or creating a console. If you instead want a new console, use CREATE_NEW_CONSOLE (0x00000010).

@Eryk Sun 2015-10-27 17:37:15

I didn't mean that executing as a detached process is incorrect. That said, you may need to set the standard handles to files, pipes, or os.devnull because some console programs exit with an error otherwise. Create a new console when you want the child process to interact with the user concurrently with the parent process. It would be confusing to try to do both in a single window.

@Dr_Zaszuś 2018-03-08 08:56:54

stdout=subprocess.PIPE will make your code hang up if you have long output from a child. For more details see thraxil.org/users/anders/posts/2008/03/13/…
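The usual way around that hang is communicate(), which drains both pipes before waiting for the process to exit; a minimal sketch:

```python
import subprocess
import sys

# Generate more output than a pipe buffer typically holds (~64 KiB)
p = subprocess.Popen(
    [sys.executable, '-c', "print('x' * 200000)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    universal_newlines=True)
out, err = p.communicate()  # reads both pipes to EOF, then waits
```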

@Charlie Parker 2019-02-24 19:05:29

is there not an OS-agnostic way to have the process run in the background?

@Charlie Parker 2019-02-24 19:38:51

your answer seems strange to me. I just opened a subprocess.Popen and nothing bad happened (not had to wait). Why exactly do we need to worry about the scenario you are pointing out? I'm skeptical.

@Ben Hoffstein 2008-09-18 01:43:30

Use subprocess.

...or for a very simple command:

import os
os.system('cat testfile')

@Eli Courtwright 2008-09-18 13:11:46

Here's a summary of the ways to call external programs and the advantages and disadvantages of each:

  1. os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:

    os.system("some_command < input_file | another_command > output_file")  
    

However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

  2. stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See the documentation.

  3. The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:

    print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
    

    instead of:

    print os.popen("echo Hello World").read()
    

    but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

  4. The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:

    return_code = subprocess.call("echo Hello World", shell=True)  
    

    See the documentation.

  5. If you're on Python 3.5 or later, you can use the new subprocess.run function, which is a lot like the above but even more flexible and returns a CompletedProcess object when the command finishes executing.

  6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command to the shell as a string, you are responsible for escaping it. There are serious security implications if any part of that string cannot be fully trusted, for example if a user is entering some or any part of it. If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:

print subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read()

and imagine that the user enters something like "my mama didnt love me && rm -rf /", which could erase the whole filesystem.

@Jean 2015-05-26 21:16:27

Nice answer/explanation. How is this answer justifying Python's motto as described in this article ? fastcompany.com/3026446/… "Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is " There’s More Than One Way to Do It". Python is designed to have one obvious way to do it" Seem like it should be the other way! In Perl I know only two ways to execute a command - using back-tick or open.

@phoenix 2015-10-07 16:37:18

If using Python 3.5+, use subprocess.run(). docs.python.org/3.5/library/subprocess.html#subprocess.run

@Evgeni Sergeev 2016-06-01 10:44:53

What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways, subprocess.run(..), what exactly does "This does not capture stdout or stderr by default." imply? What about subprocess.check_output(..) and STDERR?

@Charlie Parker 2017-10-24 19:08:43

which of the commands you recommended block my script? i.e. if I want to run multiple commands in a for loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them.

@Qback 2017-12-08 09:27:04

@phoenix I disagree. There is nothing preventing you from using os.system in python3 docs.python.org/3/library/os.html#os.system

@Pitto 2018-03-28 07:57:06

my mama didnt love me && rm -rf / will do nothing :D Probably my mama didnt love me || rm -rf / will be more dangerous :)

@Chris Arndt 2018-09-10 16:38:25

@Pitto yes, but that is not what gets executed by the example. Notice the echo in front of the string passed to Popen? So the full command will be echo my mama didnt love me && rm -rf /.

@tripleee 2018-12-03 06:00:46

This is arguably the wrong way around. Most people only need subprocess.run() or its older siblings subprocess.check_call() et al. For cases where these do not suffice, see subprocess.Popen(). os.popen() should perhaps not be mentioned at all, or come even after "hack your own fork/exec/spawn code".

@EmmEff 2008-09-18 18:20:46

Typical implementation:

import subprocess

p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()

You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().

@jfs 2012-11-16 14:12:18

.readlines() reads all lines at once i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there is no buffering issues) you could: for line in iter(p.stdout.readline, ''): print line,

@EmmEff 2012-11-17 13:25:15

Could you elaborate on what you mean by "if there is no buffering issues"? If the process blocks definitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering?

@jfs 2012-11-17 13:51:25

the child process may use block-buffering in non-interactive mode instead of line-buffering so p.stdout.readline() (note: no s at the end) won't see any data until the child fills its buffer. If the child doesn't produce much data then the output won't be in real time. See the second reason in Q: Why not just use a pipe (popen())?. Some workarounds are provided in this answer (pexpect, pty, stdbuf)

@jfs 2012-11-17 13:53:26

the buffering issue only matters if you want output in real time and doesn't apply to your code that doesn't print anything until all data is received

@tripleee 2018-12-03 05:39:55

This answer was fine for its time, but we should no longer recommend Popen for simple tasks. This also needlessly specifies shell=True. Try one of the subprocess.run() answers.

@Vishal 2019-10-03 04:19:48

Python 3.5+

import subprocess

p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())

Try online

@David Cournapeau 2008-09-18 01:39:35

Look at the subprocess module in the standard library:

import subprocess
subprocess.run(["ls", "-l"])

The advantage of subprocess vs. system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...).

The official documentation recommends the subprocess module over the alternative os.system():

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function [os.system()].

The Replacing Older Functions with the subprocess Module section in the subprocess documentation may have some helpful recipes.
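For instance, the recipe for replacing os.system maps onto run roughly like this (a minimal sketch; shell=True keeps the shell semantics of the original):

```python
import subprocess

# Old style:  status = os.system("echo hello")
# subprocess equivalent, with the shell invoked explicitly:
status = subprocess.run("echo hello", shell=True).returncode
```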

For versions of Python before 3.5, use call:

import subprocess
subprocess.call(["ls", "-l"])

@Kevin Wheeler 2015-09-01 23:17:07

Is there a way to use variable substitution? IE I tried to do echo $PATH by using call(["echo", "$PATH"]), but it just echoed the literal string $PATH instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash.

@SethMMorton 2015-09-02 20:38:24

@KevinWheeler You'll have to use shell=True for that to work.

@Murmel 2015-11-11 20:24:27

@KevinWheeler You should NOT use shell=True, for this purpose Python comes with os.path.expandvars. In your case you can write: os.path.expandvars("$PATH"). @SethMMorton please reconsider your comment -> Why not to use shell=True

@florisla 2017-07-12 10:02:16

The example calls ls -l but does not give access to its output (stdout is not accessible). I find that confusing -- you could use a command without stdout instead, such as touch.

@Braden Best 2017-07-24 23:13:35

Seems to be pretty clunky on Python3 + Windows. If I enter a filename with special characters like &, it will throw a FileNotFoundError. Even though the file is in the working directory where I executed python, and obviously does exist.

@Charlie Parker 2017-10-24 19:07:55

does call block? i.e. if I want to run multiple commands in a for loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them.

@sudo 2017-11-16 19:43:26

I understand using call for more advanced features, but I don't see anything wrong with using system if it does what you need.

@slehar 2018-06-16 17:15:48

To simplify at least conceptually:\n call("ls -l".split())

@Daniel F 2018-09-20 18:05:59

If you want to create a list out of a command with parameters, a list which can be used with subprocess when shell=False, then use shlex.split for an easy way to do this docs.python.org/2/library/shlex.html#shlex.split

@Daniel F 2018-09-20 18:15:30

subprocess also allows you to directly pipe two commands together, there's an example in the docs.

@Lie Ryan 2018-12-03 13:44:28

@CharlieParker: call, check_call, check_output, and run blocks. If you want non-blocking, use subprocess.Popen.

@Lie Ryan 2018-12-03 13:49:37

@sudo: you got it the other way around. system() is more advanced than Popen, it adds some friggin shell that is ready to screw you over when you least expects it. Which shell? Depends on what the user has as their SHELL env, which means the exact syntax it'll screw you is often out of your control. Popen is lower level, and is closer to what the underlying OS API looks like, and has much less surprising behavior.

@sudo 2018-12-04 03:03:56

@LieRyan I would use Popen if exact syntax were something to worry about, but if you're just running some program (maybe with a couple of constant arguments), system isn't an issue. Also, sometimes I do want the shell.

@pulse 2019-02-02 22:50:04

you forgot to say it needs python 3.5 at least. It doesn't work on python 3.4.3 for example, which is default for Ubuntu 14.04 LTS

@mark 2019-03-22 10:48:10

As already mentioned, this is only supported in recent versions of python. I think this should be stated in the answer

@tripleee 2019-03-23 12:09:23

The original answer had subprocess.call() so the person who updated the answer (not the original answerer!) should also explain this change in more detail.

@tuket 2019-09-23 14:33:35

You can also pass the command in as a single string if you add shell=True: subprocess.run("ls -l", shell=True)

@Samuel Muldoon 2019-11-07 04:28:36

If they want to name it run instead of call why not just make it an alias? Some types of backwards compatibility are difficult and allow error-prone patterns to continue, but making run == call is easy, simple, and almost no cost. Why call for versions of Python before 3.5 instead of both new and old versions?

@FeRD 2019-11-08 12:54:22

@SamuelMuldoon The issue is that subprocess.call has completely different semantics from subprocess.run. subprocess.call still exists in Python 3.5–3.8 (for backwards compatibility), it's just that if you have it available, subprocess.run is better by far.

@Tessaracter 2019-11-26 11:40:56

You can use os.popen also. stackoverflow.com/a/59050139/9789097

@Pedro Lobito 2019-08-18 14:42:20

You can also use subprocess.getoutput() and subprocess.getstatusoutput(), which are Legacy Shell Invocation Functions carried over from the deprecated commands module, i.e.:

subprocess.getstatusoutput(cmd)

Return (exitcode, output) of executing cmd in a shell.


subprocess.getoutput(cmd)

Return output (stdout and stderr) of executing cmd in a shell.
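A quick sketch of the two helpers side by side; both run the command in a shell and hand back decoded text with the trailing newline stripped:

```python
import subprocess

# getoutput returns just the command's output as a string
output = subprocess.getoutput("echo Hello")

# getstatusoutput additionally returns the exit code
status, output2 = subprocess.getstatusoutput("echo Hello")
```

Because these run through a shell, the usual caveats about untrusted input in the command string apply.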

@Zach Valenta 2019-07-01 20:46:19

Sultan is a recent-ish package meant for this purpose. It provides some niceties around managing user privileges and adding helpful error messages.

from sultan.api import Sultan

with Sultan.load(sudo=True, hostname="myserver.com") as sultan:
  sultan.yum("install -y tree").run()

@Peter Mortensen 2019-11-29 22:16:55

Re "Provides some niceties": Can you elaborate?

@Cédric 2018-10-30 11:40:43

I wrote a small library to help with this use case:

https://pypi.org/project/citizenshell/

It can be installed using

pip install citizenshell

And then used as follows:

from citizenshell import sh
assert sh("echo Hello World") == "Hello World"

You can separate stdout from stderr and extract the exit code as follows:

result = sh(">&2 echo error && echo output && exit 13")
assert result.stdout() == ["output"]
assert result.stderr() == ["error"]
assert result.exit_code() == 13

And the cool thing is that you don't have to wait for the underlying shell to exit before starting processing the output:

for line in sh("for i in 1 2 3 4; do echo -n 'It is '; date +%H:%M:%S; sleep 1; done", wait=False):
    print(">>> " + line + "!")

will print the lines as they are available thanks to the wait=False

>>> It is 14:24:52!
>>> It is 14:24:53!
>>> It is 14:24:54!
>>> It is 14:24:55!

More examples can be found at https://github.com/meuter/citizenshell

@am5 2018-03-23 02:30:43

Often, I use the following function for external commands, and it is especially handy for long-running processes. The method below tails the process output while it is running, returns the output once done, and raises an exception if the process fails.

It detects that the process is done using the poll() method on the process.

import subprocess,sys

def exec_long_running_proc(command, args):
    cmd = "{} {}".format(command, " ".join(str(arg) if ' ' not in str(arg) else str(arg).replace(' ', '\\ ') for arg in args))
    print(cmd)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # Poll process for new output until finished
    while True:
        nextline = process.stdout.readline().decode('UTF-8')
        if nextline == '' and process.poll() is not None:
            break
        sys.stdout.write(nextline)
        sys.stdout.flush()

    output = process.communicate()[0]
    exitCode = process.returncode

    if (exitCode == 0):
        return output
    else:
        raise Exception(command, exitCode, output)

You can invoke it like this:

exec_long_running_proc(command = "hive", args=["-f", hql_path])

@sbk 2018-05-17 12:08:29

You'll get unexpected results passing an arg with space. Using repr(arg) instead of str(arg) might help by the mere coincidence that python and sh escape quotes the same way

@am5 2018-11-17 00:07:21

@sbk repr(arg) didn't really help, the above code handles spaces as well. Now the following works exec_long_running_proc(command = "ls", args=["-l", "~/test file*"])
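The escaping debate above can be sidestepped entirely by not going through a shell: pass the arguments as a list with shell=False, so each element reaches the program as one argument, spaces and all. A minimal sketch of that variant (same name kept for comparison, not a drop-in replacement for the answer above):

```python
import subprocess
import sys

def exec_long_running_proc(command, args):
    # A list plus shell=False (the default) means no quoting is needed:
    # each element is delivered to the program as a single argument.
    process = subprocess.Popen(
        [command, *map(str, args)],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
    )
    output_lines = []
    for line in process.stdout:  # tail output as it arrives
        text = line.decode("utf-8")
        sys.stdout.write(text)
        output_lines.append(text)
    process.wait()
    output = "".join(output_lines)
    if process.returncode != 0:
        raise Exception(command, process.returncode, output)
    return output
```

The trade-off is that shell features (globbing, redirection) are no longer available, so `"~/test file*"` would be passed literally.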

@Valery Ramusik 2018-09-14 22:20:09

Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:

>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1

@user9074332 2019-03-12 02:00:27

This is a great library. I was trying to explain it to a coworker the other day and described it like this: invoke is to subprocess as requests is to urllib3.

@Samadi Salahedine 2018-04-30 13:47:17

It can be this simple:

import os
cmd = "your command"
os.system(cmd)

@tripleee 2018-12-03 05:02:26

This fails to point out the drawbacks, which are explained in much more detail in PEP 324. The documentation for os.system explicitly recommends avoiding it in favor of subprocess.

@dportman 2018-05-08 20:49:55

If you need to call a shell command from a Python notebook (like Jupyter, Zeppelin, Databricks, or Google Cloud Datalab) you can just use the ! prefix.

For example,

!ls -ilF
