How to use 'subprocess' in Python
Learn how to use Python's subprocess module. Explore different methods, tips, real-world applications, and common error debugging.
Python's subprocess module lets you run and manage external commands from your scripts. It's a key tool for system automation, data pipelines, and complex workflow integration.
In this article, you'll discover key techniques and practical tips. You will explore real-world applications and learn how to debug common issues to master the subprocess module.
Running basic commands with subprocess.run()
import subprocess
result = subprocess.run(["echo", "Hello, subprocess!"])
print(f"Return code: {result.returncode}")

Output:
Hello, subprocess!
Return code: 0
The subprocess.run() function is the recommended way to execute external commands. It runs the command and waits for it to complete before your script continues, making it a straightforward, blocking call.
Notice the command is passed as a list of strings, like ["echo", "Hello, subprocess!"]. This is a crucial security practice that helps prevent shell injection attacks. The function returns a CompletedProcess object, which contains details about the execution. You can check its returncode attribute to see if the command succeeded—a value of 0 indicates success.
Capturing command output
Checking the returncode confirms a command ran successfully, but to make use of its work, you need to capture its output.
Capturing command output with capture_output
import subprocess
result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
print(f"Command output:\n{result.stdout[:70]}...") # First 70 chars

Output:
Command output:
total 16
-rw-r--r-- 1 user user 2048 May 10 15:30 example.py
...
To grab the command's output, you just need to add a couple of arguments to subprocess.run(). The captured text is then available on the stdout attribute of the returned CompletedProcess object.
- capture_output=True: This tells the function to catch the standard output and standard error streams.
- text=True: This decodes the output from bytes into a human-readable string, so you don't have to do it yourself.
Any errors the command produces are similarly captured in the stderr attribute.
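As a minimal sketch (assuming a Unix-like system where listing a nonexistent directory fails), here is how you might read that error message from stderr:

```python
import subprocess

# Run a command that is expected to fail; capture_output=True
# captures stderr as well as stdout
result = subprocess.run(
    ["ls", "/nonexistent_directory"],
    capture_output=True,
    text=True,
)

# A non-zero return code signals failure; the explanation is on stderr
if result.returncode != 0:
    print(f"Error message: {result.stderr.strip()}")
```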
Using check_output() for simple output capture
import subprocess
output = subprocess.check_output(["python", "-c", "print('Hello from Python subprocess')"], text=True)
print(f"Captured: {output.strip()}")

Output:
Captured: Hello from Python subprocess
For situations where you only need the command's output and want to keep things simple, subprocess.check_output() is a great shortcut. Unlike subprocess.run(), it doesn't return a CompletedProcess object. Instead, it gives you the command's standard output directly as a string when you use text=True.
- The main difference is in error handling. If the command fails by returning a non-zero exit code, check_output() will raise a CalledProcessError exception, immediately stopping your script unless you handle it.
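Here is a hedged sketch of handling that exception, using sys.executable (the current Python interpreter) to run a deliberately failing one-liner:

```python
import subprocess
import sys

try:
    # This one-liner exits with code 3, so check_output() raises
    output = subprocess.check_output(
        [sys.executable, "-c", "import sys; sys.exit(3)"],
        text=True,
    )
except subprocess.CalledProcessError as e:
    print(f"Command failed with return code {e.returncode}")
```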
Streaming command output in real-time
import subprocess
process = subprocess.Popen(["ping", "-c", "3", "google.com"], stdout=subprocess.PIPE, text=True)
for line in process.stdout:
    print(f"Got line: {line.strip()}")

Output:
Got line: PING google.com (142.250.190.78) 56(84) bytes of data.
Got line: 64 bytes from muc11s02-in-f14.1e100.net (142.250.190.78): icmp_seq=1 ttl=57 time=15.3 ms
Got line: 64 bytes from muc11s02-in-f14.1e100.net (142.250.190.78): icmp_seq=2 ttl=57 time=14.8 ms
Got line: 64 bytes from muc11s02-in-f14.1e100.net (142.250.190.78): icmp_seq=3 ttl=57 time=15.1 ms
When you need to handle output from a long-running command in real time, subprocess.Popen is the tool for the job. Unlike subprocess.run(), it starts the command without blocking, so your script can continue running while the external process does its work.
- The key is setting stdout=subprocess.PIPE, which redirects the command's output to a stream your script can read.
- You can then iterate directly over the process.stdout object to process each line of output as it becomes available, perfect for monitoring progress.
Advanced subprocess techniques
Beyond just running commands and capturing their output, you can also control their environment, chain them together, and manage their execution with greater precision.
Setting environment variables for subprocess commands
import subprocess
import os
env = os.environ.copy()
env["CUSTOM_VAR"] = "Hello from environment"
result = subprocess.run(["python", "-c", "import os; print(os.environ.get('CUSTOM_VAR'))"],
                        env=env, text=True, capture_output=True)
print(result.stdout.strip())

Output:
Hello from environment
Sometimes a command needs specific environment variables to run correctly. You can provide a custom environment by passing a dictionary to the env argument in subprocess.run().
- First, it’s best practice to make a copy of the current environment with os.environ.copy().
- Then, you can add or change any variables in your copied dictionary, like setting CUSTOM_VAR.
The subprocess executes with this tailored environment, letting you securely pass configuration without affecting your main script.
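Note that the dictionary you pass to env replaces the child's environment entirely rather than extending it. A quick sketch (APP_MODE is just a hypothetical variable name) shows that a bare dictionary gives the subprocess only the variables you list:

```python
import subprocess
import sys

# env= replaces the inherited environment entirely, so this child
# sees only APP_MODE (plus anything the OS itself injects)
minimal_env = {"APP_MODE": "test"}

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('APP_MODE'))"],
    env=minimal_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

This is why copying os.environ first is the safer default: commands often rely on inherited variables like PATH or HOME.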
Handling input and output with pipes
import subprocess
process = subprocess.Popen(["sort"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
stdout, stderr = process.communicate("banana\napple\ncherry\n")
print(stdout)

Output:
apple
banana
cherry
Pipes let you connect a command's input and output streams directly to your script. In this example, you use subprocess.Popen to run the sort command, but with a twist.
- Setting stdin=subprocess.PIPE creates a channel to send data to the command.
- stdout=subprocess.PIPE opens a channel to receive data from it.
The process.communicate() method handles the interaction. It sends your input string to the command, waits for it to complete, and then returns the final output and any errors.
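You can take this a step further and connect two processes directly, mirroring a shell pipeline. Here is a hedged sketch (assuming the coreutils printf and sort commands are available) equivalent to printf 'banana\napple\ncherry\n' | sort:

```python
import subprocess

# First process writes three lines; its stdout will feed the second process
producer = subprocess.Popen(
    ["printf", "banana\napple\ncherry\n"],
    stdout=subprocess.PIPE,
)
# sort reads directly from the producer's stdout
consumer = subprocess.Popen(
    ["sort"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
    text=True,
)
producer.stdout.close()  # let the producer receive SIGPIPE if sort exits early
output, _ = consumer.communicate()
print(output)  # apple, banana, cherry on separate lines
```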
Using subprocess.Popen with timeouts and signal handling
import subprocess
import signal
import time
process = subprocess.Popen(["sleep", "10"])
time.sleep(2) # Let it run for 2 seconds
process.send_signal(signal.SIGTERM)
print(f"Process terminated: {process.poll() is not None}")

Output:
Process terminated: True
For long-running tasks, subprocess.Popen gives you precise control over process execution. In this case, the script starts a sleep command but doesn't wait for it to finish. Instead, your script continues running independently.
- After a two-second pause, process.send_signal(signal.SIGTERM) sends a termination signal, which is a standard way to ask a process to shut down gracefully.
- You can then use process.poll() to check the process’s status. It returns an exit code if the process has terminated or None if it’s still running.
This approach is perfect for managing background jobs or implementing custom timeouts.
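One way to sketch such a custom timeout (again assuming a Unix-like system with the sleep command) is to combine Popen with wait(timeout=...) and kill():

```python
import subprocess

# Start a command that would run for 10 seconds
process = subprocess.Popen(["sleep", "10"])

try:
    # Give it at most 2 seconds to finish on its own
    process.wait(timeout=2)
except subprocess.TimeoutExpired:
    process.kill()  # force-stop the process
    process.wait()  # reap it so no zombie process is left behind
    print("Process killed after timeout")
```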
Move faster with Replit
Replit is an AI-powered development platform that transforms natural language into working applications. You describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.
The subprocess techniques from this article are the building blocks for powerful tools. With Replit Agent, you can turn these concepts into production-ready applications from a simple description.
- Build a real-time network monitoring tool that uses commands like ping to stream diagnostics to a web dashboard.
- Create a text-processing utility that chains shell commands like grep and sort to filter and organize data.
- Deploy a system administration helper that runs commands to check disk space or list running processes and formats the output cleanly.
Describe your app idea, and Replit Agent writes the code, tests it, and fixes issues automatically. Try Replit Agent to turn your concepts into working software.
Common errors and challenges
Using subprocess effectively means navigating a few common pitfalls, from command failures to security risks and path errors.
Handling command failures with subprocess.run()
When a command executed with subprocess.run() fails, your script won't crash by default. It continues running as if everything is fine, which can lead to silent failures and incorrect outcomes. The following code demonstrates this exact problem.
import subprocess
# This command will likely fail but the script continues
subprocess.run(["ls", "/nonexistent_directory"])
print("Command completed successfully")
The ls command fails, but the script continues because subprocess.run() doesn't raise an exception by default. This leads to silent errors. The following code shows how to catch these failures automatically.
import subprocess
try:
    subprocess.run(["ls", "/nonexistent_directory"], check=True)
    print("Command completed successfully")
except subprocess.CalledProcessError as e:
    print(f"Command failed with return code {e.returncode}")
To catch failures automatically, add the check=True argument to your subprocess.run() call. This is crucial whenever the success of one command is a prerequisite for the next steps in your script.
- With check=True, the function raises a CalledProcessError if the command fails.
- You can then wrap the call in a try...except block to handle the exception, preventing your script from continuing with bad data or incorrect assumptions.
Avoiding shell injection vulnerabilities with shell=True
Using shell=True lets you run commands as a single string, which can feel convenient. However, it introduces a major security risk called shell injection, especially when you include un-sanitized user input. This can allow attackers to run arbitrary commands.
The following code demonstrates how easily this can be exploited.
import subprocess
user_input = input("Enter filename to display: ")
subprocess.run(f"cat {user_input}", shell=True)
If a user enters something like my_file.txt; rm -rf /, the shell executes both commands. The semicolon acts as a separator, creating a severe security hole. The next example demonstrates the safe way to handle user input.
import subprocess
user_input = input("Enter filename to display: ")
subprocess.run(["cat", user_input])
The secure way to handle external input is to pass the command and arguments as a list. By using ["cat", user_input], you tell the subprocess module to treat the input as a single, literal argument to the cat command. The module won't interpret any special shell characters within the input, which neutralizes the risk of shell injection. This list-based approach is the standard for running commands with variable data safely.
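If you start from a command string rather than a list, the standard library's shlex module can tokenize it the way a POSIX shell would, so you can still avoid shell=True. A small sketch:

```python
import shlex

# shlex.split() honors quoting, so the quoted filename stays one argument
command = shlex.split("cat 'my file.txt'")
print(command)  # ['cat', 'my file.txt']

# shlex.quote() goes the other way: it escapes a string so it is safe
# to embed in a shell command, if shell=True is truly unavoidable
print(shlex.quote("my_file.txt; rm -rf /"))
```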
Debugging path-related issues with executables
A common headache is the FileNotFoundError, which happens when Python can't locate the command you're trying to run. This usually means the executable isn't in the system's PATH, a list of directories your shell searches for commands.
The following code demonstrates what happens when you try to run a command that the system can't find.
import subprocess
# This might fail if 'myapp' is not in PATH
result = subprocess.run(["myapp", "--version"])
The subprocess module can't find myapp because its location isn't listed in the system's PATH environment variable, triggering a FileNotFoundError. See how to handle this scenario in the code below.
import subprocess
import shutil
myapp_path = shutil.which("myapp")
if myapp_path:
    result = subprocess.run([myapp_path, "--version"])
else:
    print("myapp executable not found in PATH")
To fix this, you can find the executable's full path before trying to run it. The shutil.which() function is perfect for this, as it searches the system's PATH for you.
- If the command is found, it returns the absolute path.
- If not, it returns None, letting you handle the error gracefully.
This check prevents FileNotFoundError and makes your script more robust, especially when it needs to run in different environments.
Real-world applications
Now that you can navigate common errors, you can apply subprocess to automate git workflows and run commands in parallel.
Automating git operations with subprocess
You can use the subprocess module to automate common Git workflows by wrapping a series of commands inside a single Python function.
import subprocess
def git_commit_and_push(commit_message):
    subprocess.run(["git", "add", "."], check=True)
    subprocess.run(["git", "commit", "-m", commit_message], check=True)
    result = subprocess.run(["git", "push"], capture_output=True, text=True, check=True)
    return result.stdout
output = git_commit_and_push("Update documentation files")
print(f"Git operation result:\n{output}")
The git_commit_and_push function automates a common Git sequence by chaining three subprocess.run calls. This allows you to stage, commit, and push your code changes in a single action.
- Using check=True on each command ensures reliability. If any step fails, the entire function stops and raises an error.
- The function captures and returns the output from the final git push command, giving you direct feedback on the operation's success.
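Because each command uses check=True, a failure anywhere in the chain raises CalledProcessError and skips the remaining steps. Here is a hedged sketch of that pattern, using a deliberately failing stand-in command so it doesn't touch a real Git repository:

```python
import subprocess

def run_steps(commands):
    # Run each step; check=True stops the chain at the first failure
    for command in commands:
        subprocess.run(command, check=True, capture_output=True, text=True)

try:
    run_steps([
        ["echo", "step one"],
        ["ls", "/nonexistent_directory"],  # stand-in for a failing git step
        ["echo", "never reached"],
    ])
except subprocess.CalledProcessError as e:
    # e.cmd identifies exactly which step failed
    print(f"Step {e.cmd} failed with return code {e.returncode}")
```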
Running multiple commands in parallel with ThreadPoolExecutor
For I/O-bound tasks like making multiple network requests, you can significantly speed up your script by running subprocess commands concurrently with a ThreadPoolExecutor.
import subprocess
from concurrent.futures import ThreadPoolExecutor
import time
def run_command(command):
    result = subprocess.run(command, capture_output=True, text=True)
    return command[0], result.stdout
commands = [
    ["curl", "-s", "https://api.github.com"],
    ["curl", "-s", "https://api.weather.gov"],
    ["curl", "-s", "https://api.publicapis.org/entries"]
]
start = time.time()
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(run_command, commands))
print(f"Completed {len(results)} requests in {time.time() - start:.2f} seconds")
for cmd, output in results:
    print(f"{cmd}: {len(output)} bytes received")
This code uses a ThreadPoolExecutor to manage and run multiple subprocesses at once. The executor.map function is the key here; it applies the run_command function to each of the curl commands, assigning each one to a separate thread.
- The run_command function is a simple wrapper that executes a command and returns its output.
- By creating a pool of three worker threads, the script handles all three network requests simultaneously, waiting for them to finish before printing the results.
Get started with Replit
Turn your new subprocess skills into a real tool. Describe what you want to build to Replit Agent, like “a web dashboard that pings websites and shows their live status” or “a script that automates Git commits.”
Replit Agent writes the code, tests for errors, and deploys your application from your description. Start building with Replit and bring your ideas to life.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.