How to run a curl command in Python
Learn how to run curl commands in Python. Explore different methods, tips, real-world examples, and common error debugging for your projects.

Python lets you execute curl commands to automate complex web requests and data retrieval. This combines the flexibility of Python scripting with the power of a classic command line tool.
In this article, we'll cover several techniques to run curl commands. You'll find practical tips, real-world applications, and advice to debug common issues you might face along the way.
Using the requests library for basic HTTP requests
import requests
response = requests.get('https://httpbin.org/get')
print(response.status_code)
print(response.json())
--OUTPUT--
200
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.28.1', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
The requests library offers a high-level, "Pythonic" alternative to shelling out to curl. The requests.get() function, for example, performs the same basic action as a simple curl command that fetches a URL.
Once the request is made, the returned response object neatly packages the server's reply. Instead of parsing raw text, you can directly access structured data like the status code with response.status_code or the JSON body with the response.json() method.
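To see how common curl flags map onto requests parameters, here is a minimal sketch that parses the -H and -d flags of a curl command string into requests-style keyword arguments. The curl_to_requests_kwargs helper is hypothetical (not a real library function) and deliberately ignores every other flag:

```python
import shlex

def curl_to_requests_kwargs(curl_cmd):
    """Translate a simple curl command string into keyword arguments
    for requests.get/requests.post. Handles only -H and -d; this is a
    sketch, not a full curl parser."""
    tokens = shlex.split(curl_cmd)
    kwargs = {"headers": {}, "data": None, "url": None}
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "curl":
            pass                      # skip the command name itself
        elif tok == "-H":             # curl -H 'Name: value'
            i += 1
            name, _, value = tokens[i].partition(":")
            kwargs["headers"][name.strip()] = value.strip()
        elif tok == "-d":             # curl -d 'request body'
            i += 1
            kwargs["data"] = tokens[i]
        elif not tok.startswith("-"): # bare token: treat as the URL
            kwargs["url"] = tok
        i += 1
    return kwargs

kwargs = curl_to_requests_kwargs(
    "curl -H 'Accept: application/json' -d 'a=1' https://httpbin.org/post"
)
print(kwargs)
```

You could then call requests.post(kwargs["url"], headers=kwargs["headers"], data=kwargs["data"]) to replay the command.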
Standard library approaches
While the requests library is a common choice, Python's standard library offers its own powerful modules for making web requests.
Using urllib for HTTP requests
import urllib.request
import json
with urllib.request.urlopen('https://httpbin.org/get') as response:
    data = json.loads(response.read().decode('utf-8'))
print(data)
--OUTPUT--
{'args': {}, 'headers': {'Accept-Encoding': 'identity', 'Host': 'httpbin.org', 'User-Agent': 'Python-urllib/3.9', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
The urllib.request module is Python's built-in tool for handling URLs. Using urlopen() makes the request, but you'll notice it requires more manual handling than requests. The response body must be explicitly processed:
- First, you read the raw data as bytes with response.read().
- Then, you convert those bytes into a string using .decode().
- Finally, json.loads() parses the JSON string into a Python dictionary.
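The module also supports custom headers, the equivalent of curl's -H flag, via a Request object. This sketch just builds and inspects one, so no network call is made until you pass it to urlopen():

```python
import urllib.request

# Build the request object up front so headers can be attached,
# mirroring curl's -H flag. Nothing is sent until urlopen(req).
req = urllib.request.Request(
    'https://httpbin.org/get',
    headers={'User-Agent': 'my-script/1.0', 'Accept': 'application/json'},
)

# urllib stores header names with only the first letter capitalized.
print(req.get_header('User-agent'))
print(req.full_url)
```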
Using http.client for lower-level control
import http.client
import json
conn = http.client.HTTPSConnection("httpbin.org")
conn.request("GET", "/get")
response = conn.getresponse()
data = json.loads(response.read().decode())
print(f"Status: {response.status}, Data: {data}")
conn.close()
--OUTPUT--
Status: 200, Data: {'args': {}, 'headers': {'Host': 'httpbin.org', 'User-Agent': 'Python-http.client/3.9', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
The http.client module offers the most granular control by operating at a lower level. Instead of just sending a request to a URL, you manage the connection lifecycle directly.
- First, you create a connection object to a host with http.client.HTTPSConnection().
- Then, you send the request details, like the method and path, using conn.request().
- Crucially, you're responsible for closing the connection with conn.close() after you've processed the response.
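Because http.client works against any host, you can exercise the same connection lifecycle against a throwaway local server, which keeps this sketch runnable offline. The tiny EchoHandler below is our own stand-in, not part of the original example:

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A throwaway local server so the example runs without internet access.
class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same connect/request/response/close lifecycle, over plain HTTP.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/get")
response = conn.getresponse()
data = json.loads(response.read().decode())
conn.close()
server.shutdown()
print(response.status, data)
```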
Using pycurl for curl-like functionality
import pycurl
from io import BytesIO
buffer = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.WRITEDATA, buffer)
c.perform()
c.close()
print(buffer.getvalue().decode())
--OUTPUT--
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "PycURL/7.45.1",
    "X-Amzn-Trace-Id": "Root=1-abc123def456"
  },
  "origin": "203.0.113.1",
  "url": "https://httpbin.org/get"
}
The pycurl library offers a Pythonic wrapper for libcurl, the engine that powers the curl command. It's a great choice if you're already familiar with curl's options, as the setup feels very similar.
- You start by creating a Curl object and configure it using setopt().
- Instead of printing to the console, you tell pycurl to write the response data into an in-memory BytesIO buffer.
- The perform() method executes the request, and close() cleans up the connection.
Advanced techniques
With the fundamentals covered, you can tackle more advanced use cases like executing raw curl commands, sending data with POST, and managing asynchronous operations.
Running actual curl commands with subprocess
import subprocess
import json
result = subprocess.run(['curl', '-s', 'https://httpbin.org/get'], capture_output=True, text=True)
data = json.loads(result.stdout)
print(f"Trace ID in headers: {data.get('headers', {}).get('X-Amzn-Trace-Id')}")
--OUTPUT--
Trace ID in headers: Root=1-abc123def456
The subprocess module is your direct line to the system's shell, letting you run command-line tools from within a Python script. It's the most straightforward way to execute a raw curl command when you need its specific features or behavior.
- The subprocess.run() function executes the command. You pass the command and its arguments as a list of strings.
- Setting capture_output=True ensures the command's output is captured instead of printed to the console.
- The captured output is available in the result.stdout attribute, which you can then parse or process as needed.
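One detail worth stressing: passing the command as a list of strings, rather than one shell string, sidesteps quoting and injection problems. The sketch below swaps curl for sys.executable so it runs even on machines without curl installed; the capture_output/text pattern is identical:

```python
import subprocess
import sys

# sys.executable stands in for 'curl' here so the example works
# everywhere. With curl you would pass something like
# ['curl', '-s', 'https://httpbin.org/get'] instead.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a subprocess')"],
    capture_output=True,
    text=True,  # decode stdout/stderr to str instead of bytes
)
print(result.returncode)
print(result.stdout.strip())
```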
Making POST requests with requests library
import requests
payload = {'key1': 'value1', 'key2': 'value2'}
headers = {'Content-Type': 'application/json', 'User-Agent': 'MyCustomAgent/1.0'}
response = requests.post('https://httpbin.org/post', json=payload, headers=headers)
print(response.json()['json'])
--OUTPUT--
{'key1': 'value1', 'key2': 'value2'}
Sending data to a server, such as submitting a form, is done with a POST request. The requests library simplifies this with its requests.post() function. You can pass your data directly as a Python dictionary, and the library handles the heavy lifting of formatting it correctly.
- The json parameter takes a dictionary and automatically serializes it into a JSON string for the request body.
- The headers parameter lets you send custom metadata, like specifying the Content-Type or a unique User-Agent.
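If you want to see exactly what json= puts on the wire without hitting a server, requests lets you prepare a request without sending it. This is a sketch for inspection only; prepared.body holds the serialized JSON:

```python
import json
import requests

payload = {'key1': 'value1', 'key2': 'value2'}

# Prepare the request without sending it. requests serializes the
# dict and sets the Content-Type header for us.
prepared = requests.Request(
    'POST', 'https://httpbin.org/post', json=payload
).prepare()

print(prepared.headers['Content-Type'])
print(prepared.body)
```

A Session.send(prepared) call would actually transmit it, which is handy when you want to log or modify requests before they go out.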
Asynchronous HTTP requests with aiohttp
import aiohttp
import asyncio
async def fetch_data():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://httpbin.org/get') as response:
            data = await response.json()
            return data['headers']['Host']

print(asyncio.run(fetch_data()))
--OUTPUT--
httpbin.org
For handling many requests at once without getting stuck, the aiohttp library is your go-to. It works with Python's asyncio framework to make non-blocking HTTP calls. This means your program can start a request and then work on something else while it waits for the server to respond, which is great for performance.
- The keywords async and await are the core of this model, signaling where the program can pause and resume.
- You create an aiohttp.ClientSession to manage connections, and then use await session.get() to make the actual request.
- Finally, asyncio.run() kicks everything off by running your main asynchronous function.
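The payoff of this model is fan-out: many requests in flight at once. Since aiohttp may not be installed everywhere, this sketch uses a stub coroutine (fake_fetch, which just sleeps the way a real request waits on I/O) to show the asyncio.gather pattern you would use with session.get():

```python
import asyncio
import time

# Stub standing in for session.get(): it sleeps 0.1s, the way a real
# request spends most of its time waiting on the network.
async def fake_fetch(url):
    await asyncio.sleep(0.1)
    return f"fetched {url}"

async def main():
    urls = [f"https://example.com/{i}" for i in range(5)]
    # gather() runs all five "requests" concurrently, so the total
    # time is ~0.1s, not 0.5s.
    return await asyncio.gather(*(fake_fetch(u) for u in urls))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results[0])
print(f"{elapsed:.2f}s for 5 concurrent fetches")
```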
Move faster with Replit
Replit is an AI-powered development platform where all Python dependencies come pre-installed, so you can skip setup and start coding instantly. This means you can go from learning a new technique to applying it without wrestling with environment configurations.
While the techniques in this article are powerful building blocks, Agent 4 helps you move from piecing them together to building complete applications. It takes your description of an application and handles the coding, API connections, and deployment for you. Instead of just running individual commands, you can build:
- An API monitoring dashboard that periodically sends requests to your critical endpoints and alerts you if one goes down.
- A data scraper that fetches information from multiple web pages asynchronously and consolidates it into a single file.
- An automated form-filler that submits data to a web service using POST requests, perfect for repetitive data entry tasks.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
When making web requests in Python, you'll often face challenges like network timeouts, unexpected server responses, and SSL certificate verification errors.
Handling timeouts with the requests library
A request can hang indefinitely if the server is slow or unresponsive. To prevent this, the requests library lets you set a timeout parameter. For example, requests.get('url', timeout=5) will wait a maximum of five seconds for a response. If the server doesn't respond in time, the library raises a requests.exceptions.Timeout error, which you can catch and handle gracefully.
Proper error handling for HTTP status codes
Not all successful connections mean a successful request—the server might still return an error code like 404 Not Found. Instead of manually checking response.status_code, you can use the response.raise_for_status() method. This function will raise an HTTPError if the request failed (with a 4xx or 5xx status code), allowing you to centralize error handling in a try...except block.
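To see raise_for_status() in action without a network call, you can construct a Response object by hand. This is a contrived sketch for illustration; in real code the response comes from requests.get():

```python
import requests

# Build a Response manually (no network) to demonstrate the behavior.
response = requests.Response()
response.status_code = 404

caught = None
try:
    response.raise_for_status()  # raises HTTPError for 4xx/5xx codes
except requests.exceptions.HTTPError as err:
    caught = err

print(f"Caught: {caught}")
```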
Dealing with SSL certificate verification issues
SSL certificate errors occur when your script can't verify the server's identity, a crucial security step. While you can bypass this by setting verify=False in your request, you should do so with extreme caution. This practice is acceptable for internal development servers with self-signed certificates but introduces major security risks if used against public or untrusted websites, as it disables protection against man-in-the-middle attacks.
Handling timeouts with the requests library
If a server is slow to respond, your script can get stuck waiting indefinitely. This is a common issue when making web requests. The following code demonstrates this problem by calling an endpoint that intentionally delays its response, causing the script to hang.
import requests
def fetch_data(url):
    response = requests.get(url)
    return response.json()

# This might hang indefinitely if the server is slow
data = fetch_data('https://httpbin.org/delay/10')
print(data)
The script hangs because the /delay/10 endpoint is designed to wait ten seconds before responding. Since the request doesn't have a timeout, your program gets stuck. The code below shows how to fix this.
import requests
def fetch_data(url):
    response = requests.get(url, timeout=5)
    return response.json()

try:
    data = fetch_data('https://httpbin.org/delay/10')
    print(data)
except requests.exceptions.Timeout:
    print("The request timed out")
The solution is to add a timeout parameter to your requests.get() call, which sets a maximum wait time. If the server exceeds this limit, a requests.exceptions.Timeout error occurs. By wrapping the request in a try...except block, you can catch this error and handle it gracefully instead of letting your script hang. This is essential when interacting with any external API or service where response times can be unpredictable.
Proper error handling for HTTP status codes
A successful network connection doesn't guarantee a successful API response. When a server returns an error like 404 Not Found, your script can crash if it tries to parse a JSON body that doesn't exist. The code below shows what happens.
import requests
response = requests.get('https://httpbin.org/status/404')
data = response.json()  # Raises an exception: the 404 response body isn't JSON
print(data)
Calling response.json() on a 404 error page causes a crash because the server sends back an empty or non-JSON body instead of the JSON your code expects. This halts your script. The code below shows how to handle this correctly.
import requests
response = requests.get('https://httpbin.org/status/404')
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print(f"Error: Received status code {response.status_code}")
The fix is to check the response.status_code before trying to parse the JSON. By wrapping the response.json() call in an if statement that confirms the status is 200, you'll avoid the error. This simple check ensures your script only processes successful responses and handles errors gracefully. It prevents unexpected crashes whenever an API returns something other than a success code, which is a common scenario.
Dealing with SSL certificate verification issues
When your script tries to connect to a site with an invalid SSL certificate, Python's security features will stop the request cold. This prevents you from communicating with a potentially insecure server. The following code demonstrates what happens when this error occurs.
import requests
# This will fail if the site has SSL certificate issues
response = requests.get('https://expired.badssl.com/')
print(response.text)
Because the target URL has an expired certificate, the requests.get() call triggers a security error and stops the script. This is a default safety feature. The code below demonstrates how to manage this for specific cases.
import requests
# Option 1: Disable verification (only use when necessary)
response = requests.get('https://expired.badssl.com/', verify=False)
print("Warning: SSL verification disabled")
print(response.status_code)
The solution is to pass verify=False to your request, which tells the library to ignore SSL certificate errors. This is a common workaround when you're working with internal development servers that use self-signed certificates. Be extremely cautious with this setting. Disabling verification on public or untrusted sites removes a critical security layer, leaving your application vulnerable to attacks.
Real-world applications
Now that you can troubleshoot requests, you can apply these skills to build practical tools for fetching data and monitoring websites.
Fetching weather data with the requests library
You can use the requests library to connect to a public API, like OpenWeatherMap, and pull real-time information such as current weather conditions.
import requests
api_key = "demo_key" # Replace with your actual API key
city = "London"
url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
response = requests.get(url)
weather_data = response.json()
print(f"Current temperature in {city}: {weather_data['main']['temp']}°C")
print(f"Weather condition: {weather_data['weather'][0]['description']}")
This script shows a practical way to fetch data from a web API. It uses an f-string to build a specific URL for the OpenWeatherMap service, inserting your city and api_key directly into the request.
- The requests.get() function sends the request to this URL.
- The server's reply is parsed from JSON into a Python dictionary with response.json().
- You can then pull out specific details, like weather_data['main']['temp'], using standard dictionary keys.
Building a simple website monitoring system
You can build a simple monitoring script that uses requests to check a site's status, time to measure its responsiveness, and datetime to log when the check happened.
import requests
import time
from datetime import datetime
websites = ["https://www.google.com", "https://www.github.com", "https://www.python.org"]
def check_website(url):
    try:
        start_time = time.time()
        response = requests.get(url, timeout=5)
        response_time = time.time() - start_time
        return response.status_code, response_time
    except requests.RequestException:
        return None, None

for site in websites:
    status_code, response_time = check_website(site)
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    if status_code:
        print(f"{timestamp} - {site}: Status {status_code}, Response time: {response_time:.2f}s")
    else:
        print(f"{timestamp} - {site}: DOWN")
This script automates checking the health of multiple websites. It loops through a list of URLs, calling the check_website function for each one to see if it's online and responsive.
- The function uses a try...except block to safely handle connection errors or timeouts. On a successful connection, it returns the HTTP status code and how long the request took.
- The main loop then prints a timestamped log, reporting either the site's status and speed or simply "DOWN" if the request failed.
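A natural next step is persisting each check so you can graph uptime later. This sketch uses hypothetical names (log_check is ours), and an in-memory StringIO stands in for a real log file:

```python
import csv
import io
from datetime import datetime

def log_check(writer, site, status_code, response_time):
    """Append one monitoring result as a CSV row."""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    writer.writerow([
        timestamp,
        site,
        status_code if status_code else "DOWN",
        f"{response_time:.2f}" if response_time else "",
    ])

# In a real monitor you'd open('uptime.csv', 'a') instead of StringIO.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["timestamp", "site", "status", "response_time_s"])
log_check(writer, "https://www.python.org", 200, 0.31)
log_check(writer, "https://example.invalid", None, None)  # a failed check

rows = list(csv.reader(io.StringIO(buffer.getvalue())))
print(rows[1][1], rows[1][2])
print(rows[2][2])
```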
Get started with Replit
Now, turn these techniques into a real tool with Replit Agent. Describe what you want to build, like “a script that checks API endpoints for a 200 status code” or “a tool that pulls weather data for a list of cities.”
Replit Agent will write the code, test for errors, and deploy your application for you. Start building with Replit.
Describe what you want to build, and Replit Agent writes the code, handles the infrastructure, and ships it live. Go from idea to real product, all in your browser.
Create & deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.
