How to call an API in Python

Learn how to call an API in Python. Explore different methods, tips, real-world examples, and common error debugging to master API calls.

Published on: Thu, Feb 12, 2026
Updated on: Tue, Feb 24, 2026
By The Replit Team

An API call in Python connects your application to external data and services. It's a core skill for developers who build dynamic, data-driven software and want to integrate powerful functionalities.

In this article, you’ll explore key techniques and practical tips for API integration. You’ll also find real-world applications and debugging advice to help you confidently make your first API call.

Using the requests library for a basic API call

import requests

response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
data = response.json()
print(data)

--OUTPUT--
{'userId': 1, 'id': 1, 'title': 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit', 'body': 'quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto'}

The requests library is the standard for making HTTP calls in Python. The code first uses requests.get() to fetch data from a public test API, storing the server's reply in the response object.

Next, the response.json() method parses the JSON content from that response. This is a critical step that converts the raw data string into a Python dictionary, making it easy to access and use the API's data in your code.
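Once parsed, the result is an ordinary Python dictionary, so you can pull out individual fields with normal dict indexing. A stdlib-only sketch, using json.loads on a sample string shaped like the API's payload (this is roughly the step response.json() performs on the response body):

```python
import json

# A sample payload shaped like the API's response (truncated for clarity)
raw = '{"userId": 1, "id": 1, "title": "sunt aut facere"}'

# response.json() performs roughly this step on the response body
data = json.loads(raw)

print(data["title"])   # access any field with normal dict indexing
print(data["userId"])
```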

Basic API techniques

Beyond fetching a single resource, you'll need to master techniques like passing query parameters, sending data with POST requests, and managing API authentication.

Working with query parameters using requests

import requests

params = {"postId": 1, "_limit": 3}
response = requests.get("https://jsonplaceholder.typicode.com/comments", params=params)
comments = response.json()
print(f"Found {len(comments)} comments")
print(comments[0]["email"])

--OUTPUT--
Found 3 comments
Eliseo@gardner.biz

Query parameters let you filter and customize the data you request from an API. Instead of building a URL string by hand, you can simply pass a dictionary to the params argument in requests.get(). The library automatically handles the encoding, making your code cleaner and more readable.

  • The postId parameter filters for comments associated with a specific post.
  • The _limit parameter restricts the response to a maximum of three comments.
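Under the hood, requests turns the params dictionary into a percent-encoded query string. You can see the same transformation with the standard library's urllib.parse.urlencode (a stdlib sketch of what requests does for you; the sample values are made up):

```python
from urllib.parse import urlencode

# A value with characters that need encoding (space and ampersand)
params = {"postId": 1, "q": "coffee & tea"}

# requests performs this encoding automatically when you pass params=
query = urlencode(params)
print(query)  # postId=1&q=coffee+%26+tea
```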

Making POST requests with JSON data

import requests

new_post = {
    "title": "My New Post",
    "body": "This is the content of my post",
    "userId": 1
}
response = requests.post("https://jsonplaceholder.typicode.com/posts", json=new_post)
print(f"Status code: {response.status_code}")
print(response.json())

--OUTPUT--
Status code: 201
{'title': 'My New Post', 'body': 'This is the content of my post', 'userId': 1, 'id': 101}

When you need to create a new resource on a server, you'll use a POST request. The code sends a Python dictionary containing a new post to the API endpoint using the requests.post() method.

  • By passing your data to the json parameter, you let the requests library handle the conversion to a JSON string and set the appropriate headers for you.
  • A status code of 201 indicates success. The API's response confirms creation by returning the new post, now with a server-assigned id.
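Passing json=new_post is shorthand for serializing the dictionary and setting the header yourself. A stdlib sketch of the two steps requests performs on your behalf:

```python
import json

new_post = {"title": "My New Post", "body": "This is the content of my post", "userId": 1}

# What requests does when you pass json=new_post:
body = json.dumps(new_post)                     # 1. serialize the dict to a JSON string
headers = {"Content-Type": "application/json"}  # 2. label the body as JSON

print(body)
```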

Handling API authentication

import requests

api_key = "your_api_key_here"
headers = {"Authorization": f"Bearer {api_key}"}
response = requests.get(
    "https://api.example.com/data",
    headers=headers
)
print(f"Status: {response.status_code}")
print("Authenticated request completed")

--OUTPUT--
Status: 200
Authenticated request completed

Most APIs require authentication to protect their data. This example shows a common method where you send a secret API key in the request headers to prove your identity.

  • You build a headers dictionary containing an Authorization key.
  • The value is a string that includes your key, often prefixed with "Bearer ", which specifies the authentication type.

By passing this dictionary to the headers parameter in your request, you securely authenticate your call. A status code of 200 confirms that the server accepted your key and granted access.
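Bearer tokens are only one scheme. Another common one, HTTP Basic auth, encodes a username and password into the same Authorization header (requests can build this for you via its auth parameter; the stdlib sketch below just shows what the header contains, with made-up credentials):

```python
import base64

username, password = "alice", "secret"   # hypothetical credentials

# Basic auth: base64-encode "username:password" and prefix with "Basic "
token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}

print(headers["Authorization"])
```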

Advanced API techniques

Once you've got the basics down, you can write more scalable and robust code by using asynchronous calls, creating reusable clients, and handling errors gracefully.

Working with async API calls using aiohttp

import aiohttp
import asyncio

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

async def main():
    tasks = [
        fetch_data(f"https://jsonplaceholder.typicode.com/posts/{i}")
        for i in range(1, 4)
    ]
    results = await asyncio.gather(*tasks)
    for result in results:
        print(f"Post title: {result['title'][:30]}...")

asyncio.run(main())

--OUTPUT--
Post title: sunt aut facere repellat provi...
Post title: qui est esse...
Post title: ea molestias quasi exercitatio...

For tasks that involve waiting, like network requests, asynchronous code can dramatically boost performance. The aiohttp library, combined with asyncio, allows your program to make multiple API calls concurrently instead of one by one.

  • The code first creates several API call “tasks” using a list comprehension.
  • asyncio.gather() then runs all these tasks at roughly the same time.
  • This approach is far more efficient because your application doesn’t sit idle waiting for each server to respond—it processes them as they complete.
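You can see the concurrency win without touching the network: swap the HTTP call for asyncio.sleep and time the batch. Three 0.1-second "requests" finish in roughly 0.1 seconds total, not 0.3 (a stdlib sketch; fake_fetch is a stand-in for the aiohttp call above):

```python
import asyncio
import time

async def fake_fetch(i):
    await asyncio.sleep(0.1)   # stand-in for one network round trip
    return i

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_fetch(i) for i in range(1, 4)))
    elapsed = time.perf_counter() - start
    print(f"Fetched {len(results)} results in {elapsed:.2f}s")  # ~0.1s, not ~0.3s
    return elapsed

elapsed = asyncio.run(main())
```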

Creating a reusable API client class

import requests

class ApiClient:
    def __init__(self, base_url, api_key=None):
        self.base_url = base_url
        self.session = requests.Session()
        if api_key:
            self.session.headers.update({"Authorization": f"Bearer {api_key}"})

    def get(self, endpoint, params=None):
        url = f"{self.base_url}/{endpoint}"
        response = self.session.get(url, params=params)
        response.raise_for_status()
        return response.json()

client = ApiClient("https://jsonplaceholder.typicode.com")
users = client.get("users")
print(f"Retrieved {len(users)} users")
print(f"First user: {users[0]['name']}")

--OUTPUT--
Retrieved 10 users
First user: Leanne Graham

An ApiClient class organizes your code by bundling API logic into a reusable object. This approach centralizes configuration like the base URL and authentication, so you don't have to repeat yourself in every request. It makes your code cleaner and much easier to maintain.

  • The __init__ method sets up a requests.Session() object, which efficiently reuses connections for multiple calls and stores headers.
  • Methods like get simplify making requests and include response.raise_for_status() to automatically check for HTTP errors, making your code more robust.

Implementing error handling and retries

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
session.mount('https://', HTTPAdapter(max_retries=retries))

try:
    response = session.get("https://api.example.com/potentially-unstable-endpoint")
    response.raise_for_status()
    data = response.json()
except requests.exceptions.RequestException as e:
    print(f"API request failed: {e}")

--OUTPUT--
API request failed: HTTPSConnectionPool(host='api.example.com', port=443): Max retries exceeded with url: /potentially-unstable-endpoint

APIs can be unreliable, so robust code anticipates temporary failures. This example configures a requests.Session to automatically retry failed requests. The Retry object is the key component here.

  • It's set to retry up to three times using total=3.
  • It waits between attempts, with the delay controlled by backoff_factor.
  • It only retries on specific server errors like 500 or 502, defined in status_forcelist.

The try...except block acts as a safety net, catching a RequestException if all retries ultimately fail and preventing your application from crashing.
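The delays grow exponentially. A rough sketch of the schedule for backoff_factor=0.5 (urllib3's exact formula, and whether the very first retry sleeps at all, varies slightly between versions, so treat this as an approximation):

```python
# Approximate exponential backoff schedule: each retry waits about
# backoff_factor * 2**attempt seconds before the next try.
backoff_factor = 0.5
delays = [backoff_factor * (2 ** attempt) for attempt in range(3)]
print(delays)  # [0.5, 1.0, 2.0]
```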

Move faster with Replit

Replit is an AI-powered development platform that transforms natural language into working applications. You can describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.

The agent can turn the API integration techniques from this article into production-ready tools. For example, you could ask it to build:

  • A real-time stock tracker that uses requests.get() to pull data from a financial API and display it in a simple dashboard.
  • A content submission tool that automates posting to a blog or social media platform using authenticated requests.post() calls.
  • A flight deal finder that uses aiohttp to concurrently search multiple airline APIs for the cheapest fares.

Simply describe your app idea, and Replit Agent will write the code, handle testing, and fix issues for you. Try turning your own concept into a working application with Replit Agent.

Common errors and challenges

Even with the right tools, you'll run into common issues like timeouts, bad responses, and encoding errors when working with APIs.

Sometimes an API server is slow or unresponsive. To prevent your application from waiting indefinitely, you can set a limit using the timeout parameter in your request. For example, requests.get(url, timeout=5) tells the library to give up if the server doesn't respond within five seconds. This simple addition is a crucial step to make your application more resilient.

A common mistake is to immediately call .json() on a response without checking if the request was successful. If the API returns an error like a 404 Not Found or 500 Internal Server Error, the response body won't contain valid JSON, and your code will crash. Always check that response.status_code is in the successful range (like 200) before you try to parse the data.

URLs can't contain special characters like spaces or ampersands; they must be "percent-encoded." While the requests library handles this for you when you use the params argument, you can run into issues if you build URL strings manually. Sticking with the params dictionary is the safest way to avoid malformed URLs and unexpected errors.

Handling timeout errors with the timeout parameter

Without a timeout, your application will wait indefinitely for a slow server to respond, freezing the program. This creates a frustrating user experience and can make your service seem unreliable. The following code shows this problem in action, where the request hangs.

import requests

response = requests.get("https://api.example.com/data")
data = response.json()
print("Data retrieved successfully")

This code hangs because the requests.get() call lacks a time limit, forcing it to wait indefinitely for a slow server. The following example demonstrates how to prevent your application from freezing.

import requests

response = requests.get("https://api.example.com/data", timeout=5)
data = response.json()
print("Data retrieved successfully")

By adding timeout=5 to the requests.get() call, you give the server five seconds to respond. If it takes longer, the request will fail with a timeout error instead of hanging indefinitely. This simple change makes your application more resilient. It's a good practice to include a timeout in all your API requests. This is especially true when you're connecting to external services you don't control, as their response times can be unpredictable.
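To handle that failure gracefully, wrap the call in try/except. With requests you would catch requests.exceptions.Timeout; the sketch below uses a generic fetch callable instead (slow_fetch is a made-up stand-in that always times out) so the pattern is visible without a live server:

```python
def get_with_fallback(fetch, url, timeout=5):
    """Return the fetched data, or None if the request times out."""
    try:
        return fetch(url, timeout=timeout)
    except TimeoutError:  # with requests, catch requests.exceptions.Timeout here
        return None

def slow_fetch(url, timeout):
    # Stand-in for a server that never answers within the time limit
    raise TimeoutError(f"no response from {url} within {timeout}s")

print(get_with_fallback(slow_fetch, "https://api.example.com/data"))  # None
```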

Checking status codes before calling json()

Calling .json() directly on a response is risky. If the request fails (for example, by hitting a nonexistent endpoint), the server sends back an error response, not the JSON data you expect. Parsing it anyway can crash your program or quietly hand it meaningless data. The code below shows what happens when you don't check the status first.

import requests

response = requests.get("https://jsonplaceholder.typicode.com/nonexistent")
data = response.json()
print(data)

The request to a nonexistent endpoint returns a 404 error. Depending on the API, the error body may be empty, an HTML error page, or (as with this one) an empty JSON object, so calling .json() either raises an exception or silently returns data that isn't what you asked for. The next example demonstrates a safer approach.

import requests

response = requests.get("https://jsonplaceholder.typicode.com/nonexistent")
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print(f"Error: {response.status_code}, {response.text[:50]}")

This safer approach first checks if response.status_code is 200, which confirms the request was successful. Only then does it call .json() to parse the data. If the status code indicates an error, the else block prints a helpful message instead of letting the program crash. It's a simple but crucial check to perform every time you expect a JSON response, as it makes your application far more robust.
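"Successful" means the whole 2xx range, not just 200: a POST that creates a resource returns 201, for example. A small helper makes the check explicit (a sketch only; is_success is not part of the requests library):

```python
def is_success(status_code):
    # HTTP status codes 200-299 indicate success (200 OK, 201 Created, 204 No Content, ...)
    return 200 <= status_code < 300

print(is_success(200), is_success(201))  # True True
print(is_success(404), is_success(500))  # False False
```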

Handling URL encoding issues with special characters

URLs can't handle special characters like spaces or ampersands (&) directly. They must be encoded, or "escaped," to be included safely. If you build a URL string by hand and forget this step, you'll run into unexpected errors. The code below shows what happens.

import requests

search_term = "coffee & tea"
response = requests.get(f"https://api.example.com/search?q={search_term}")
results = response.json()
print(f"Found {len(results)} results")

The f-string inserts the & character directly into the URL, which the server misinterprets as a parameter separator instead of part of the search term. This breaks the query. The next example shows the correct approach.

import requests
from urllib.parse import quote

search_term = "coffee & tea"
encoded_term = quote(search_term)
response = requests.get(f"https://api.example.com/search?q={encoded_term}")
results = response.json()
print(f"Found {len(results)} results")

The fix is to use the quote() function from Python's urllib.parse module. This function correctly escapes special characters, like converting & to %26, so the server doesn't misinterpret your URL. You'll need to do this anytime you manually build a URL string. However, the best practice is to pass a dictionary to the params argument in your request, as the requests library will handle the encoding for you automatically.

Real-world applications

These API skills translate directly into real-world tools, like a URL shortener client or a script for analyzing large data sets.

Creating a simple URL shortener client with requests

This example shows how a single requests.get() call to the TinyURL API is all it takes to build a functional URL shortener.

import requests

def shorten_url(long_url):
    api_url = f"https://tinyurl.com/api-create.php?url={long_url}"
    response = requests.get(api_url)
    return response.text

original_url = "https://www.example.com/really/long/url/that/is/hard/to/share"
short_url = shorten_url(original_url)
print(f"Original URL: {original_url}")
print(f"Shortened URL: {short_url}")

The shorten_url function constructs a request URL by embedding a long link directly into the TinyURL API endpoint. It then sends this request using requests.get().

  • Unlike previous examples, this API doesn't return JSON. Instead, it sends back the shortened URL as plain text.
  • The code accesses this raw text directly with response.text and returns it.

This demonstrates how to work with simple APIs that provide a direct text-based response, making the integration straightforward.
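One caveat: this example embeds long_url into the query string by hand, so the encoding advice from earlier applies. If the link contains characters like ? or &, encode it first with urlencode (or pass it via requests' params argument). A stdlib sketch with a made-up link:

```python
from urllib.parse import urlencode

long_url = "https://www.example.com/page?ref=news&id=42"  # contains ? and &

# urlencode escapes the embedded URL so the API sees it as one value
api_url = "https://tinyurl.com/api-create.php?" + urlencode({"url": long_url})
print(api_url)
```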

Downloading and analyzing JSON data sets

This script pulls population data for European countries from a public API, showing how you can quickly turn raw JSON into useful insights.

import requests
import statistics

def analyze_population_data():
    url = "https://restcountries.com/v3.1/region/europe?fields=name,population"
    response = requests.get(url)
    countries = response.json()

    populations = [country['population'] for country in countries]

    return {
        'count': len(populations),
        'min': min(populations),
        'max': max(populations),
        'mean': statistics.mean(populations)
    }

stats = analyze_population_data()
print(f"Analyzed {stats['count']} European countries:")
print(f"Smallest population: {stats['min']:,}")
print(f"Largest population: {stats['max']:,}")
print(f"Average population: {stats['mean']:,.0f}")

The analyze_population_data function calls the REST Countries API, requesting only the name and population for European countries. It then uses a list comprehension to efficiently extract all population figures from the JSON response into a new list.

  • The script calculates key metrics on this list using Python’s built-in functions and the statistics.mean() method.
  • Finally, it returns these calculations in a dictionary, packaging the results for use elsewhere in your application.

Get started with Replit

Turn your new skills into a working tool with Replit Agent. Try prompts like "build a real-time currency converter" or "create a weather dashboard that pulls data from a public API."

The agent writes the code, tests for errors, and deploys your application for you. Start building with Replit.

Get started free

Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.
