How to send an HTTP request in Python
Learn how to send HTTP requests in Python. Explore different methods, tips, real-world applications, and how to debug common errors.

Sending HTTP requests in Python is a fundamental skill for web scraping, API interaction, and data retrieval. Python’s libraries offer powerful functions that make this process straightforward and efficient.
Here, you'll explore several techniques for sending requests. You'll find practical tips, real-world applications, and debugging advice to help you select the right approach for your project.
Using the requests library for basic HTTP GET requests
```python
import requests

response = requests.get('https://api.github.com')
print(f"Status code: {response.status_code}")
print(f"Content type: {response.headers['content-type']}")
```
Output:
```
Status code: 200
Content type: application/json; charset=utf-8
```
The requests.get() function is the simplest way to fetch data from a URL. It returns a Response object, which contains the server's complete reply—not just the data. This lets you inspect the request's outcome before you dive into the content.
- `response.status_code`: This attribute lets you check if the request was successful. A status of 200, for example, indicates success.
- `response.headers`: This dictionary contains metadata about the response. Checking the `'content-type'` key tells you how to interpret the body, such as `application/json`.
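Beyond reading `status_code` directly, the `Response` object offers two convenient checks: `response.ok` and `response.raise_for_status()`. Here's a quick sketch, using hand-built `Response` objects so it runs without a network call:

```python
import requests

# ok is True for any status code below 400
success = requests.Response()
success.status_code = 200
print(success.ok)  # True

# raise_for_status() turns 4xx/5xx responses into an HTTPError
error = requests.Response()
error.status_code = 404
try:
    error.raise_for_status()
except requests.HTTPError as exc:
    print(f"Request failed: {exc}")
```

In real code you'd call these on the response returned by `requests.get()`, typically right before parsing the body.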
Common HTTP request methods and parameters
While requests.get() is perfect for fetching data, you'll often need more control, like sending data or adding custom headers and query parameters.
Making POST requests with requests
```python
import requests

data = {'key': 'value', 'another_key': 'another_value'}
response = requests.post('https://httpbin.org/post', data=data)
print(response.json()['form'])
```
Output:
```
{'key': 'value', 'another_key': 'another_value'}
```
To send data to a server, you use requests.post(). Unlike a GET request, a POST request is used for submitting information. You simply pass a dictionary to the data parameter, and requests handles the encoding.
- The `data` parameter takes a dictionary containing the key-value pairs you want to submit. `requests` automatically form-encodes this data, a common format for web forms.
- The example uses httpbin.org to echo the submitted data, letting you confirm the server received it correctly.
Working with request headers
```python
import requests

headers = {'User-Agent': 'Python HTTP Client', 'Accept': 'application/json'}
response = requests.get('https://httpbin.org/headers', headers=headers)
print(response.json()['headers'])
```
Output:
```
{
  "Accept": "application/json",
  "Host": "httpbin.org",
  "User-Agent": "Python HTTP Client"
}
```
Custom headers let you send extra metadata with your request. You simply pass a dictionary of key-value pairs to the headers parameter in functions like requests.get(). This allows you to customize how the server sees and responds to your request.
- The `User-Agent` header identifies your client. Some servers change their response based on this value.
- The `Accept` header tells the server what kind of data format you prefer in the response, like `application/json`.
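If you want to confirm exactly which headers `requests` will transmit, you can prepare a request without sending it. This sketch builds the same headers dictionary and needs no network access:

```python
import requests

headers = {'User-Agent': 'Python HTTP Client', 'Accept': 'application/json'}

# Prepare the request without sending it, then inspect the final headers
request = requests.Request('GET', 'https://httpbin.org/headers', headers=headers)
prepared = request.prepare()
print(prepared.headers['User-Agent'])  # Python HTTP Client
print(prepared.headers['Accept'])      # application/json
```

This is a handy debugging trick when a server behaves unexpectedly and you want to rule out your own headers as the cause.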
Handling query parameters
```python
import requests

params = {'q': 'python', 'sort': 'stars'}
response = requests.get('https://api.github.com/search/repositories', params=params)
results = response.json()
print(f"Total results: {results['total_count']}")
print(f"First repository: {results['items'][0]['full_name']}")
```
Output:
```
Total results: 274198
First repository: vinta/awesome-python
```
Query parameters let you filter or customize the data you request from a URL. Instead of manually building the URL string, you can pass a dictionary to the params argument in requests.get(). The library automatically encodes and appends these parameters for you, making your code cleaner and more readable.
- The `params` dictionary contains key-value pairs that correspond to the API's filtering options.
- In this example, `{'q': 'python', 'sort': 'stars'}` tells the GitHub API to search for repositories matching "python" and sort them by stars.
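To see the URL that `requests` builds from a `params` dictionary, you can prepare the request without sending it, so no network call is required:

```python
import requests

params = {'q': 'python', 'sort': 'stars'}

# Preparing the request reveals the fully encoded URL
request = requests.Request('GET', 'https://api.github.com/search/repositories',
                           params=params)
print(request.prepare().url)
# https://api.github.com/search/repositories?q=python&sort=stars
```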
Advanced HTTP techniques in Python
For more demanding tasks, you'll move beyond single requests to manage sessions, perform asynchronous operations with aiohttp, and implement retries using urllib3.
Using sessions for multiple requests
```python
import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Python HTTP Tutorial'})
response1 = session.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
response2 = session.get('https://httpbin.org/cookies')
print(response2.json())
```
Output:
```
{
  "cookies": {
    "sessioncookie": "123456789"
  }
}
```
When you need to make several requests to the same server, a requests.Session object is your best friend. It bundles parameters and persists them across requests, so you don't have to set things like headers or authentication for every single call. Sessions also reuse the underlying connection between requests, which is a performance boost.
- The session automatically handles cookies. In the example, the first request sets a cookie, and the session ensures it's sent with the second request.
- It also keeps any headers you set with `session.headers.update()`, making your code cleaner and more efficient.
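Headers aren't the only settings a session persists. As a sketch (the credentials and query parameter below are hypothetical, purely for illustration), you can set authentication and default parameters once, then inspect what a prepared request inherits, without any network traffic:

```python
import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Python HTTP Tutorial'})
session.auth = ('demo_user', 'demo_pass')  # hypothetical credentials
session.params = {'api_version': '2'}      # hypothetical default parameter

# prepare_request shows exactly what the session would send
prepared = session.prepare_request(requests.Request('GET', 'https://httpbin.org/get'))
print(prepared.headers['User-Agent'])       # Python HTTP Tutorial
print('Authorization' in prepared.headers)  # True
print(prepared.url)                         # https://httpbin.org/get?api_version=2
```

Every request made through this session inherits all three settings automatically.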
Asynchronous HTTP requests with aiohttp
```python
import aiohttp
import asyncio

async def fetch(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def main():
    result = await fetch('https://python.org')
    print(f"Retrieved {len(result)} characters")

asyncio.run(main())
```
Output:
```
Retrieved 49531 characters
```
For tasks that involve waiting, like network requests, aiohttp offers a performance boost by handling operations asynchronously. This means your program doesn't have to halt completely while waiting for a server to respond. Instead, it can work on other tasks, making it highly efficient for I/O-bound applications.
- The `async` and `await` syntax is central to this approach. Functions defined with `async def` can be paused and resumed.
- Using `await` before a call like `session.get()` tells Python to pause execution there, free up the processor for other work, and resume once the data arrives.
- The entire process is managed by an event loop, which you kick off with `asyncio.run()`.
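The real payoff comes when you fetch several URLs at once. Here's a sketch using `asyncio.gather()` to run multiple downloads concurrently over one shared session (the URLs are just examples):

```python
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def fetch_all(urls):
    # One shared session; gather() runs all the downloads concurrently
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, url) for url in urls))

urls = ['https://example.com', 'https://www.python.org']
pages = asyncio.run(fetch_all(urls))
for url, page in zip(urls, pages):
    print(f"{url}: {len(page)} characters")
```

Sharing one `ClientSession` across requests is the recommended pattern: it reuses connections instead of opening a new one per URL.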
Implementing retry mechanisms with urllib3
```python
import urllib3
from urllib3.util import Retry
from urllib3.exceptions import MaxRetryError

# status_forcelist tells urllib3 which HTTP status codes should trigger
# a retry; without it, a 500 response is simply returned to the caller
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503])
http = urllib3.PoolManager(retries=retry)
try:
    response = http.request('GET', 'https://httpbin.org/status/500')
except MaxRetryError:
    print("Failed after 3 retry attempts")
```
Output:
```
Failed after 3 retry attempts
```
Network requests can be unreliable, but urllib3 helps you build more resilient applications. It’s a smart way to handle temporary server errors without manual intervention. You configure a Retry object and pass it to the PoolManager to automate the process.
- The `total=3` parameter sets the maximum number of retry attempts before giving up.
- `backoff_factor=0.5` introduces an exponential delay between retries. This prevents your client from overwhelming a struggling server.
- If all attempts fail, `urllib3` raises a `MaxRetryError`, which you can catch to handle the final failure gracefully.
Move faster with Replit
Replit is an AI-powered development platform that comes with all Python dependencies pre-installed, so you can skip setup and start coding instantly. Instead of piecing together the techniques you've just learned, you can use Agent 4 to build complete applications from a simple description.
Describe the app you want to build, and Agent 4 will take it from an idea to a working product. For example, you could build:
- A dashboard that pulls data from the GitHub API to track the most popular Python repositories in real-time.
- A utility that automatically submits form data to a web service and logs the response.
- A price tracker that uses a session to scrape product information from multiple pages of an e-commerce site.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
Even with powerful libraries, you'll run into issues like timeouts, SSL errors, and data formatting problems, but they're usually simple to fix.
Handling connection timeouts with the timeout parameter
Sometimes a server takes too long to respond, causing your program to hang indefinitely. To prevent this, you can set a timeout on your request. By adding the timeout parameter to your request function, you tell requests to wait only a specified number of seconds for a response before giving up.
For example, requests.get('https://example.com', timeout=5) will raise a Timeout exception if the server doesn't respond within five seconds. It’s a simple but essential practice for building robust applications that don't get stuck waiting forever.
Dealing with SSL certificate verification using verify=False
By default, requests verifies SSL certificates to ensure your connection is secure. If a server's certificate is invalid, you'll get an SSLError. While you can bypass this check by setting verify=False, you should do so with extreme caution.
Disabling verification exposes your application to security risks, such as man-in-the-middle attacks. This option is useful for testing against a local server with a self-signed certificate, but for production code, the correct solution is to fix the server's certificate configuration.
Correctly sending JSON data with the json parameter
A common mistake is using the data parameter to send JSON payloads. While data is great for submitting form-encoded data, APIs that expect JSON will reject it. Instead, you should use the json parameter.
When you pass a dictionary to the json parameter—for example, requests.post(url, json=my_dict)—the library automatically handles two key steps. It serializes your dictionary into a JSON string and sets the Content-Type header to application/json, ensuring the server understands your request.
Handling connection timeouts with the timeout parameter
If a server is slow to respond, your request can hang indefinitely, freezing your entire application. This is a common pitfall when you don't explicitly set a time limit. The code below demonstrates this problem with a simple get_data function.
```python
import requests

def get_data(url):
    response = requests.get(url)
    return response.json()

# This could hang indefinitely if the server is slow
data = get_data('https://example.com/api/data')
print(data)
```
The get_data function is risky because it doesn't tell requests.get() when to give up. If the server never responds, your program will wait forever. See how to fix this in the next example.
```python
import requests

def get_data(url):
    response = requests.get(url, timeout=5)  # 5 second timeout
    return response.json()

try:
    data = get_data('https://example.com/api/data')
    print(data)
except requests.exceptions.Timeout:
    print("Request timed out. The server might be slow or unavailable.")
```
The fix is simple: add the timeout parameter to your request. By setting timeout=5, you tell requests to give up if the server doesn't respond within five seconds. This prevents your application from hanging indefinitely on a slow connection.
Wrapping the call in a try...except block lets you catch the requests.exceptions.Timeout error. This allows you to handle the failure gracefully, like notifying the user or retrying the request. It's a crucial practice for any external API call.
Dealing with SSL certificate verification using verify=False
By default, requests won't connect to a server if its SSL certificate is invalid. While this is a great security feature, it can cause errors during development with local servers. The code below shows what happens when a request runs into this issue.
```python
import requests

# This will fail if the site has SSL certificate issues
response = requests.get('https://localhost:8000/api')
print(response.status_code)
```
This request fails because it targets a local HTTPS server, which likely uses a self-signed certificate that requests won't trust. The next example shows how you can bypass this check specifically for development environments.
```python
import requests

# For development only - NOT recommended for production!
response = requests.get('https://localhost:8000/api', verify=False)
print(response.status_code)

# Better approach for production - specify a CA bundle
# response = requests.get('https://example.com/api', verify='/path/to/certfile')
```
Setting verify=False lets you bypass SSL certificate checks, a quick fix that’s useful for development servers with self-signed certificates. You should never use this in production, as it exposes your application to security risks. For a live app, the correct solution is to fix the server's certificate or point the verify parameter to a trusted certificate authority (CA) bundle, keeping your connection secure.
Correctly sending JSON data with the json parameter
It's a common pitfall to send a dictionary to an API using the data parameter. This method form-encodes the payload, which is not the JSON format most APIs expect. The code below demonstrates this frequent mistake in action.
```python
import requests

data = {'name': 'John', 'age': 30}
# This sends form data, not JSON
response = requests.post('https://api.example.com/users', data=data)
print(response.status_code)
```
Because the request uses the data parameter, the Content-Type header is incorrect for a JSON API. The server receives form data instead of a JSON object, causing it to fail. The correct implementation is straightforward.
```python
import requests

data = {'name': 'John', 'age': 30}
# Use the json parameter to send JSON data
response = requests.post('https://api.example.com/users', json=data)
print(response.status_code)
```
Using the json parameter is the correct approach because it signals your intent to the API. The library handles the tedious work of serializing the dictionary and setting the Content-Type header, which prevents the server from misinterpreting your request as form data. You'll need to use this method anytime you're interacting with a modern REST API that requires a JSON payload, which is a very common scenario.
Real-world applications
With these fundamentals in place, you can build powerful tools that solve real-world problems, like tracking the ISS or monitoring website uptime.
Tracking the International Space Station with requests
You can find the ISS's current location with a single requests.get() call to a public API that provides its live coordinates.
```python
import requests

response = requests.get('http://api.open-notify.org/iss-now.json')
location = response.json()

latitude = location['iss_position']['latitude']
longitude = location['iss_position']['longitude']
print(f"The ISS is currently at latitude {latitude}, longitude {longitude}")
```
This script demonstrates a practical API call. It uses requests.get() to fetch data from the Open Notify API, which returns a JSON object.
- The `response.json()` method is crucial here. It automatically decodes the JSON response into a Python dictionary, so you can work with it natively.
- The code then navigates this dictionary, accessing the nested `iss_position` key to extract the latitude and longitude values.
This pattern—request, parse, and extract—is fundamental for interacting with most web APIs.
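You can wrap this request-parse-extract pattern in a small reusable helper. A sketch (the `get_json` name is ours, not part of any library):

```python
import requests

def get_json(url, **params):
    # Request, check for errors, then parse - all in one place
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

# For example, against the GitHub API root endpoint:
data = get_json('https://api.github.com')
print(len(data), "keys in the response")
```

Centralizing the timeout and the `raise_for_status()` check means every call site gets sensible error handling for free.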
Creating a simple website monitoring tool with timeout parameter
By combining requests.get() with the timeout parameter inside a loop, you can create a simple yet effective tool for monitoring a website's uptime.
```python
import requests
import time

url = "https://www.python.org"
check_interval = 2  # seconds between checks
max_checks = 2

for i in range(max_checks):
    try:
        print(f"Check {i+1}: Requesting {url}")
        response = requests.get(url, timeout=5)
        print(f"Status: {response.status_code}, Length: {len(response.text)} characters")
    except requests.RequestException as e:
        print(f"Error: {e}")
    if i < max_checks - 1:
        print(f"Waiting {check_interval} seconds...")
        time.sleep(check_interval)
```
This script repeatedly checks a website's status, demonstrating a robust way to handle network requests. It uses a for loop to run a set number of checks, making it a practical example of automated monitoring.
- The `try...except` block gracefully handles any `requests.RequestException`, preventing the program from crashing if the site is down.
- Using `time.sleep()` pauses execution between attempts, which is a good practice to avoid overwhelming the server with rapid-fire requests.
This combination makes the script reliable for repeated network tasks.
Get started with Replit
Now, turn your knowledge into a real tool. Tell Replit Agent to "build a website uptime monitor" or "create a dashboard that tracks GitHub repository stats using their API."
Replit Agent writes the code, tests for errors, and deploys your app from a single prompt. Start building with Replit.
Describe what you want to build, and Replit Agent writes the code, handles the infrastructure, and ships it live. Go from idea to real product, all in your browser.
Create & deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.
