How to create a log file in Python
Learn how you can create a log file in Python. This guide covers methods, tips, real-world uses, and how to debug common errors.

Log files are essential to monitor applications, debug issues, and analyze performance. Python makes it simple to create these files and record events, errors, and operational data for review.
Here, you'll learn several techniques to set up logging. You'll get practical tips, see real-world applications, and receive debugging advice to help you manage your application logs effectively.
Using open() to create a basic log file
log_file = open("application.log", "w")
log_file.write("INFO: Application started\n")
log_file.write("ERROR: Something went wrong\n")
log_file.close()

Output: (No visible output - content is written to application.log file)
The simplest method for logging involves Python's built-in open() function. Using the "w" mode creates a new file named application.log. A key detail here is that "w" mode will overwrite the file's contents every time the program runs. This isn't ideal for continuous logging but works for simple, one-off scripts.
Each write() call adds a string to the file, and the newline character (\n) is essential for separating entries to keep the log readable. You must call close() to flush the buffered content to disk and release the file handle.
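A safer variant of the same approach, sketched below, uses a with statement so the file is closed automatically even if an error occurs, plus append mode ("a") so earlier entries survive repeated runs; the filename is the same example as above.

```python
# Append mode ("a") keeps entries from previous runs; the with block
# closes the file automatically, even if an exception is raised.
with open("application.log", "a") as log_file:
    log_file.write("INFO: Application started\n")
    log_file.write("INFO: Task completed\n")
# No explicit close() call is needed here.
```

This pattern avoids the most common mistake with manual file handling: forgetting to call close() when the program exits early.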
Basic logging methods
For more control than the open() function offers, you can use Python's logging module to categorize messages by severity and customize their format.
Using the logging module
import logging
logging.basicConfig(filename="app.log", level=logging.INFO)
logging.info("Application started")
logging.error("An error occurred")

Output: (No visible output - content is written to app.log file)
The logging module provides a more robust way to handle logs. You set it up once with logging.basicConfig(), which configures the logging system for your entire application. This approach gives you much more control than simply writing to a file.
- The filename="app.log" argument tells the logger where to save messages. Unlike using open() in write mode, this appends to the file by default, preserving your log history across runs.
- Setting level=logging.INFO establishes a threshold. It ensures that any message with a severity of INFO or higher, such as WARNING or ERROR, gets recorded, while less critical messages are ignored.
Once configured, you can call functions like logging.info() or logging.error() anywhere in your code to add formatted entries to your log file.
Working with different log levels
import logging
logging.basicConfig(filename="levels.log", level=logging.DEBUG)
logging.debug("Debug information")
logging.info("Informational message")
logging.warning("Warning: potential issue detected")
logging.error("Error: something went wrong")
logging.critical("Critical: application failure")

Output: (No visible output - all levels are written to levels.log file)
The logging module uses severity levels to help you categorize and filter messages. By setting level=logging.DEBUG, you ensure that all messages are recorded, which is useful during development.
- Levels range from DEBUG (most detailed) to CRITICAL (most severe).
- You can change the level in basicConfig() to control what gets logged. For example, setting it to logging.WARNING would ignore DEBUG and INFO messages.
This lets you tailor your logging output for different environments—like getting detailed logs in development but only critical errors in production.
Formatting log messages
import logging
format_str = '%(asctime)s - %(levelname)s - %(message)s'
logging.basicConfig(filename="formatted.log", level=logging.INFO, format=format_str)
logging.info("Application started")
logging.error("An error occurred")

Output: (No visible output - formatted content is written to formatted.log file)
You can customize your log entries by passing a format string to the format parameter in basicConfig(). This makes your logs much more informative by adding useful context to every message. The format string uses special placeholders to include dynamic data.
- %(asctime)s adds the time the event occurred.
- %(levelname)s includes the log's severity level, like INFO or ERROR.
- %(message)s is the actual message you logged.
This setup automatically structures each entry, making your logs consistent and easier to read.
Advanced logging techniques
Now that you can format messages, you can take your logging further by automatically managing file sizes, organizing complex setups, and sending logs to multiple places.
Using rotating log files
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("app")
handler = RotatingFileHandler("rotating.log", maxBytes=2000, backupCount=5)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("This message goes to the rotating log file")

Output: (No visible output - content is written to rotating.log file)
Log files can grow indefinitely, but RotatingFileHandler keeps them manageable. It automatically archives the current log and starts a new one when it hits a specific size. This approach uses a named logger via getLogger() and attaches the handler directly, offering more modular control than the global basicConfig().
- The maxBytes argument defines the file's size limit. When the log reaches this size, it gets archived and a new one begins.
- backupCount determines how many old log files are kept. Here, it will save up to five backups.
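To watch rotation happen, you can write more data than maxBytes allows; the tiny limit below is deliberate for demonstration, and demo.log is just an example filename.

```python
import logging
import os
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("rotation_demo")
# A deliberately tiny maxBytes so rotation triggers quickly.
handler = RotatingFileHandler("demo.log", maxBytes=200, backupCount=2)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Write enough entries to exceed maxBytes several times.
for i in range(50):
    logger.info("Log entry number %d", i)

# demo.log now holds the newest entries; demo.log.1 and demo.log.2
# are the rotated archives.
print(os.path.exists("demo.log.1"))
```

Once backupCount archives exist, the oldest archive is deleted on each new rollover, so total disk usage stays bounded.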
Configuring logging with a dictionary
import logging.config
config = {
    'version': 1,
    'handlers': {'file': {'class': 'logging.FileHandler', 'filename': 'config.log', 'level': 'INFO'}},
    'root': {'handlers': ['file'], 'level': 'INFO'}
}
logging.config.dictConfig(config)
logging.info("Configured using dictionary")

Output: (No visible output - content is written to config.log file)
For complex applications, configuring your logger with a dictionary using logging.config.dictConfig() provides a clean and organized solution. This method is especially powerful because you can load the configuration from a file, such as a JSON or YAML file, which keeps your logging setup separate from your application code.
- The handlers key defines how and where log messages are sent. Here, it sets up a logging.FileHandler to write messages to config.log.
- The root key configures the default logger, specifying which handlers to use and setting its minimum logging level to INFO.
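To keep that configuration fully outside your code, the same dictionary can live in a JSON file loaded at startup; logging.json is a hypothetical filename, and this sketch writes it first only so the example is self-contained.

```python
import json
import logging
import logging.config

# Write the sample config file (in a real project it would already exist).
config = {
    "version": 1,
    "handlers": {"file": {"class": "logging.FileHandler",
                          "filename": "config.log", "level": "INFO"}},
    "root": {"handlers": ["file"], "level": "INFO"},
}
with open("logging.json", "w") as f:
    json.dump(config, f)

# Load and apply the configuration at startup.
with open("logging.json") as f:
    logging.config.dictConfig(json.load(f))

logging.info("Configured from a JSON file")
```

With this split, operations teams can change log destinations or levels by editing the JSON file, without touching application code.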
Logging to multiple destinations
import logging
import sys
logger = logging.getLogger("multi")
file_handler = logging.FileHandler("multi.log")
console_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(file_handler)
logger.addHandler(console_handler)
logger.setLevel(logging.INFO)
logger.info("This message goes to both file and console")

Output: This message goes to both file and console
You can direct logs to multiple destinations simultaneously by attaching several handlers to a single logger. This setup is perfect for when you need to see output in real-time while also keeping a permanent record for later analysis.
- A FileHandler writes messages to a file, in this case multi.log.
- A StreamHandler sends them to a stream, such as the console via sys.stdout.
By calling addHandler() for each one, every log message from your logger is automatically sent to both the console and the file.
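Handlers can also carry their own levels, so one logger can write everything to the file while the console shows only problems; a sketch of that split follows (full.log is an example filename).

```python
import logging
import sys

logger = logging.getLogger("split_levels")
logger.setLevel(logging.DEBUG)  # Let every message reach the handlers.

file_handler = logging.FileHandler("full.log")
file_handler.setLevel(logging.DEBUG)       # The file records everything.

console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.WARNING)  # The console shows warnings and up.

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug("Goes to the file only")
logger.warning("Goes to both the file and the console")
```

The logger's own level acts as the first filter; each handler's level then filters again before writing to its destination.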
Move faster with Replit
Replit is an AI-powered development platform that transforms natural language into working applications. You can describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.
For the logging techniques we've explored, Replit Agent can turn them into production-ready tools:
- Build a web application monitor that uses RotatingFileHandler to log HTTP requests and errors, preventing log files from consuming too much disk space.
- Create a data pipeline utility that logs each stage of a job, like extraction, transformation, and loading, to both the console and a file for real-time feedback and permanent records.
- Deploy a security dashboard that tracks user authentication events, using different log levels to categorize successful logins, failed attempts, and critical account lockouts for auditing.
Describe your next application, and Replit Agent will write the code, set up the infrastructure, and deploy it for you—all from a single prompt.
Common errors and challenges
When creating log files in Python, you might encounter a few common issues, but they're all straightforward to fix.
- A common frustration is when log messages seem to vanish. This usually happens because the logger's level is set too high. For instance, if you configure your logger with level=logging.WARNING, it will ignore less severe messages like logging.info() or logging.debug(). To see everything during development, ensure the level is set to logging.DEBUG.
- Losing your log history with each run is another frequent issue, especially when using the open() function with write mode ("w"). To preserve your logs, use append mode ("a") instead, which adds new entries to the end of the file. The logging module defaults to append mode, making it a safer choice for continuous logging.
- If you see duplicate log entries, it's likely because a handler has been added to a logger multiple times. This can happen if logger.addHandler() is called repeatedly. To avoid this, structure your code to configure loggers only once when your application starts. Also, remember that logging.basicConfig() only works the first time it's called; subsequent calls in the same program run will have no effect.
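A simple guard against the duplicate-handler problem is to check logger.handlers before attaching anything; the get_logger() helper below is a sketch of that pattern and stays safe even if called repeatedly.

```python
import logging

def get_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    # Attach a handler only the first time this logger is requested.
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
    return logger

# Calling twice returns the same logger without duplicating the handler.
log_a = get_logger("my_app")
log_b = get_logger("my_app")
print(len(log_a.handlers))  # 1
```

Because getLogger() returns the same object for the same name, the guard makes the setup idempotent no matter how many modules call it.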
Troubleshooting missing log messages due to incorrect level
When your log messages don't appear, the first thing to check is the configured logging level. If it's set too high, it will filter out any messages with a lower severity. The following example demonstrates this exact scenario.
import logging
logging.basicConfig(filename="debug.log", level=logging.WARNING)
logging.debug("This debug message is important")
logging.info("This info message won't appear either")
logging.warning("Only this warning will be logged")
Since the level is set to logging.WARNING, the logger ignores the less severe logging.debug() and logging.info() calls. Only messages at this level or higher get recorded. The following code demonstrates the simple fix.
import logging
logging.basicConfig(filename="debug.log", level=logging.DEBUG)
logging.debug("This debug message will now be logged")
logging.info("This info message will also appear")
logging.warning("This warning continues to be logged")
By setting the level to logging.DEBUG, you lower the filter to its most inclusive setting. This ensures that messages from all severity levels are captured in your log file. It’s a common practice during development to get a complete picture of what’s happening. You can then raise the level in production to focus only on important events like warnings or errors, keeping your logs clean and relevant.
Preventing log file overwrites with append mode
It's frustrating when your log history vanishes with every program run. This happens when the log file is opened in write mode ("w"), which erases existing content. The code below passes filemode="w" to basicConfig(), so previous logs are unexpectedly lost.
import logging
# filemode="w" truncates the log file each time the program runs
logging.basicConfig(filename="data.log", filemode="w", level=logging.INFO)
logging.info("New run - previous logs are lost")
Because filemode="w" re-initializes data.log on every run, its existing content is erased. The following example shows how to adjust this behavior to preserve your log history.
import logging
# Using filemode="a" appends to the existing log file
logging.basicConfig(filename="data.log", filemode="a", level=logging.INFO)
logging.info("New run - appended to existing logs")
The fix is to explicitly set the file mode to append. By adding filemode="a" to your logging.basicConfig() configuration, you tell Python to add new log entries to the end of the file instead of truncating it. Append is in fact basicConfig()'s default, so being explicit also guards against a stray filemode="w" elsewhere in your setup. It’s a simple but essential step for maintaining a continuous log history, which is vital for tracking application behavior over time.
Eliminating duplicate log entries
Duplicate log entries can clutter your output and make debugging confusing. It's a common side effect of adding a handler to a logger that already inherits one from the root configuration set by logging.basicConfig(). The following code demonstrates this issue.
import logging
# Configure the root logger
logging.basicConfig(level=logging.INFO)
# Create a module logger
logger = logging.getLogger("my_module")
logger.setLevel(logging.INFO)
# Add a handler (causes duplication)
handler = logging.StreamHandler()
logger.addHandler(handler)
logger.info("This message appears twice")
The basicConfig() call attaches a handler to the root logger. A message sent to the my_module logger is handled once by its own StreamHandler, then propagates up to the root logger, whose handler emits it again, creating the duplicate. The corrected code shows how to prevent this.
import logging
# Configure the root logger
logging.basicConfig(level=logging.INFO)
# Create a module logger with propagate=False
logger = logging.getLogger("my_module")
logger.propagate = False
logger.setLevel(logging.INFO)
# Add a handler (no duplication now)
handler = logging.StreamHandler()
logger.addHandler(handler)
logger.info("This message appears only once")
By setting logger.propagate = False, you stop the message from traveling up to the parent logger. Since logging.basicConfig() configures the root logger, this simple change prevents it from also handling the message, which is what creates the duplicate output. You'll want to watch for this when you're creating specific loggers for different parts of your application but have also set up a global logger configuration.
Real-world applications
Now that you can solve common logging issues, you can apply these skills to practical uses like monitoring web apps and automating error alerts.
Logging HTTP requests in a Flask application
In a web framework like Flask, you can use the logging module to automatically record details about incoming requests, such as the visitor's IP address.
from flask import Flask, request
import logging
app = Flask(__name__)
logging.basicConfig(filename='webapp.log', level=logging.INFO)
@app.route('/')
def home():
    logging.info(f"Request from: {request.remote_addr}")
    return "Hello, World!"
This example integrates Python's logging module into a simple Flask application. The logging.basicConfig() function is configured to write all messages at the INFO level or higher into a file named webapp.log.
- When a user visits the homepage, the @app.route('/') decorator triggers the home() function.
- Inside this function, logging.info() records the visitor's IP address, accessed via request.remote_addr, creating a basic access log for every request the application receives.
Setting up automated error email alerts with SMTPHandler
For critical issues that require immediate attention, you can configure the logging module’s SMTPHandler to automatically send error details directly to your email inbox.
import logging
from logging.handlers import SMTPHandler
import sys
logger = logging.getLogger("error_alert")
mail_handler = SMTPHandler(('smtp.example.com', 587), '[email protected]',
                           ['[email protected]'], 'Error Alert')
logger.addHandler(mail_handler)
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.ERROR)
logger.error("Critical error: database connection failed")
This setup creates a logger that dispatches alerts for critical problems. It uses the SMTPHandler to configure email notifications, specifying the server, sender, recipients, and subject line for the alert.
- Two handlers are attached to the logger. The mail_handler sends the email, while a StreamHandler simultaneously prints the message to your console.
- By setting the level to logging.ERROR, you ensure only severe issues trigger these actions. When logger.error() is called, the message is sent to both destinations.
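Real mail servers almost always require authentication, which SMTPHandler supports via its credentials and secure parameters; the server address and login details below are placeholders, and no connection is attempted until a record is actually emitted.

```python
import logging
from logging.handlers import SMTPHandler

# smtp.example.com and the addresses/credentials are placeholders -
# substitute your own mail server details.
mail_handler = SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="[email protected]",
    toaddrs=["[email protected]"],
    subject="Error Alert",
    credentials=("username", "password"),  # placeholder login
    secure=(),  # an empty tuple tells the handler to use STARTTLS
)
mail_handler.setLevel(logging.ERROR)

logger = logging.getLogger("error_alert")
logger.addHandler(mail_handler)
# logger.error(...) would now connect to the server and send the alert.
```

Keep real credentials out of source code; read them from environment variables or a secrets manager instead.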
Get started with Replit
Put your new skills to work by building a real tool. Describe what you want to Replit Agent, like “a Flask app that logs every request to a rotating file” or “a script that sends email alerts for critical errors.”
It writes the code, tests for errors, and deploys your application from a single prompt. Start building with Replit.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.

