Python Logging

Reference

Logging in Python
Logging in Python Course


Warning!!

While working through this tutorial, reset your Python environment (restart the kernel) each time you re-run logging code in the same kernel. basicConfig() only takes effect the first time the root logger is configured, and the severity level functions call it for you with default settings if it has not been called yet, so any configuration applied afterwards is silently ignored.


The Logging Module

  • The logging module provides a default logger that is great for getting started.
  • To log an event, we must first assign it a severity level.
  • Then, we can log only events of a specific severity level or greater.

With the logging module imported, you can use something called a “logger” to log messages that you want to see. By default, there are 5 standard levels indicating the severity of events. Each has a corresponding method that can be used to log events at that level of severity.

The defined levels, in order of increasing severity, are the following:

  • DEBUG: Debug is the least severe
  • INFO
  • WARNING
  • ERROR
  • CRITICAL: Critical is the most severe
```python
import logging

logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
```

Output:

  • root is the name of the default logger
  • The default logger only logs events marked as WARNING or more severe
```
WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
```

Basic Configurations

  • The basicConfig(**kwargs) method is used to configure logging
  • Common parameters:
    • level: The root logger will be set to the specified severity level.
    • filename: This specifies the file to which log messages are written.
    • filemode: If filename is given, the file is opened in this mode. The default is 'a', which means append.
    • format: This is the format of the log message.

By using the level parameter, you can set which severity of log messages you want to record. Pass one of the level constants defined in the logging module (for example, logging.DEBUG), and all logging calls at or above that level will be logged.


```python
import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug('This will get logged')
```

Output:

```
DEBUG:root:This will get logged
```

```python
import logging

logging.basicConfig(
    filename='app.log',
    filemode='w',
    format='%(name)s - %(levelname)s - %(message)s'
)
logging.warning('This will get logged to a file')
```

Output:

  • A file named app.log is created containing the log message.

![[app.log]]


Available filemode values

| Character | Meaning |
| --- | --- |
| 'r' | open for reading (default) |
| 'w' | open for writing, truncating the file first |
| 'x' | open for exclusive creation, failing if the file already exists |
| 'a' | open for writing, appending to the end of the file if it exists |
| 'b' | binary mode |
| 't' | text mode (default) |
| '+' | open for updating (reading and writing) |

Details

  • basicConfig(**kwargs) only has an effect the first time the root logger is configured; later calls are ignored (unless you pass force=True on Python 3.8+).
  • The severity level functions, like debug(), call basicConfig() internally with default settings if it has not been called before.
  • This means that if we want our own configuration, we must call basicConfig() ourselves before using any of the severity level functions, as the sketch below shows.
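
A minimal sketch of that ordering pitfall (my own illustration, not from the course): once a severity level function has configured the root logger with defaults, a later basicConfig() call is a no-op.

```python
import logging

# The first logging call configures the root logger with defaults
# (level WARNING, output to stderr).
logging.warning('This call triggers the default configuration')

# Because the root logger already has a handler, this call does nothing...
logging.basicConfig(level=logging.DEBUG)

# ...so DEBUG messages are still filtered out.
logging.debug('This will NOT be shown')

# On Python 3.8+, force=True removes the existing handlers and reconfigures:
# logging.basicConfig(level=logging.DEBUG, force=True)
```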

Formatting the Output

While you can pass any variable that can be represented as a string from your program as a message to your logs, there are some basic elements that are already a part of the LogRecord and can be easily added to the output format. If you want to log the process ID along with the level and message, you can do something like this:

```python
import logging

logging.basicConfig(format='%(process)d - %(levelname)s - %(message)s')
logging.warning('This is a Warning')
```

Output:

```
8686 - WARNING - This is a Warning
```

```python
import logging

logging.basicConfig(
    format='%(asctime)s - %(message)s',
    level=logging.INFO
)
logging.warning('Admin logged in')
```

Output:

```
2021-09-01 23:05:15,202 - Admin logged in
```

```python
import logging

logging.basicConfig(
    format='%(asctime)s - %(message)s',
    datefmt='%d-%b-%y %H:%M:%S'
)
logging.warning('Admin logged in')
```

Output:

```
01-Sep-21 23:08:22 - Admin logged in
```

Logging Variable Data

In most cases, you would want to include dynamic information from your application in the logs. You have seen that the logging methods take a string as an argument, and it might seem natural to format a string with variable data on a separate line and pass it to the log method. This can instead be done directly: either pass a format string for the message with the variable data as arguments, or use an f-string as in the example below.

```python
import logging

name = 'Zacks'

logging.error(f'{name} raised an error')
```

Output:

```
ERROR:root:Zacks raised an error
```
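
For reference, a sketch of the format-string style that the paragraph above describes: the message string is only built if the record is actually emitted, which avoids unnecessary work for messages that get filtered out.

```python
import logging

name = 'Zacks'

# %-style placeholder plus argument: formatting is deferred to the logging
# machinery and only happens if the record passes the level check.
logging.error('%s raised an error', name)
```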

Capturing Stack Traces

The logging module also allows you to capture full stack traces in an application. Exception information can be captured by passing the exc_info parameter as True when calling the logging functions:

```python
import logging

a, b = 5, 0

try:
    c = a / b
except Exception as e:
    logging.error('Exception occurred', exc_info=True)
```

Output:

```
ERROR:root:Exception occurred
Traceback (most recent call last):
  File "<ipython-input-1-17c64f1e907c>", line 6, in <module>
    c = a / b
ZeroDivisionError: division by zero
```

If exc_info is not set to True, the output of the above program would not tell us anything about the exception, which, in a real-world scenario, might not be as simple as a ZeroDivisionError. Imagine trying to debug an error in a complicated codebase with a log that shows only this:

Output:

```
ERROR:root:Exception occurred
```

logging.exception

  • Alternatively, we can call logging.exception() from inside an exception handler.
  • This is equivalent to calling logging.error(exc_info=True): it logs at the ERROR level and includes the traceback.
  • If you want a different severity level, use that level's function and pass exc_info=True.
    • e.g. logging.warning('...', exc_info=True), as in the sketch below.
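
A short sketch of these variants (my own example), all called from inside an exception handler:

```python
import logging

try:
    1 / 0
except ZeroDivisionError:
    # Logs at ERROR level and appends the traceback automatically.
    logging.exception('Exception occurred')

    # Equivalent to the line above:
    logging.error('Exception occurred', exc_info=True)

    # Same idea at a different severity level:
    logging.warning('Exception occurred', exc_info=True)
```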

Classes and Functions

So far, we have seen the default logger named root, which is used by the logging module whenever its functions are called directly like this: logging.debug(). You can (and should) define your own logger by creating an object of the Logger class, especially if your application has multiple modules. Let’s have a look at some of the classes and functions in the module.

The most commonly used classes defined in the logging module are the following:

  • Logger: This is the class whose objects will be used in the application code directly to call the functions.
  • LogRecord: Loggers automatically create LogRecord objects that have all the information related to the event being logged, like the name of the logger, the function, the line number, the message, and more.
  • Handler: Handlers send the LogRecord to the required output destination, like the console or a file. Handler is a base class for subclasses like StreamHandler, FileHandler, SMTPHandler, HTTPHandler, and more. These subclasses send the logging output to the corresponding destination, like sys.stdout or a disk file.
  • Formatter: This is where you specify the format of the output by specifying a string format that lists out the attributes that the output should contain.

Out of these, we mostly deal with the objects of the Logger class, which are instantiated using the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will return a reference to the same Logger object, which saves us from passing the logger objects to every part where it’s needed. Here’s an example:

```python
import logging

# Define the logger
logger = logging.getLogger('example_logger')
logger.warning('This is a warning')
```

Output:

```
This is a warning
```
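
A quick check of the claim that repeated getLogger() calls with the same name return the same object (my own sketch):

```python
import logging

a = logging.getLogger('example_logger')
b = logging.getLogger('example_logger')

print(a is b)  # True: both names refer to the same Logger object
```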

Creating a Custom Logger

In this lesson, you’re going to get some hands-on practice. You’re going to create a custom logger as well as some handlers that will represent output destinations.

Then, you’ll attach formatters to those handlers so you can independently format your logging output based on destination. After all that, you’ll attach those handlers to the logger so that you can use it. Basically, you’re going to create a bunch of objects and then wire them together to form a logging system.

In short, changing the level in logging.basicConfig() changes the global (root logger) settings.
It is better to set the level on each logger and on each specific handler attached to that logger.

The following example displays all levels on the console and records only messages at level ERROR or above in log_file.log. Notice that the level of each handler is different.

```python
import logging

# Define logger
logger = logging.getLogger('test')

# Set level for logger
logger.setLevel(logging.DEBUG)

# Define the handler and formatter for console logging
consoleHandler = logging.StreamHandler()  # Define StreamHandler
consoleHandler.setLevel(logging.DEBUG)  # Set level
consoleFormatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')  # Define formatter
consoleHandler.setFormatter(consoleFormatter)  # Set formatter
logger.addHandler(consoleHandler)  # Add handler to logger

# Define the handler and formatter for file logging
log_file = 'log_file'
fileHandler = logging.FileHandler(f'{log_file}.log')  # Define FileHandler
fileHandler.setLevel(logging.ERROR)  # Set level
fileFormatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')  # Define formatter
fileHandler.setFormatter(fileFormatter)  # Set formatter
logger.addHandler(fileHandler)  # Add handler to logger

# Test
logger.debug('This is a debug')
logger.info('This is an info')
logger.warning('This is a warning')
logger.error('This is an error')
logger.critical('This is a critical')
```

Console output:

```
test - DEBUG - This is a debug
test - INFO - This is an info
test - WARNING - This is a warning
test - ERROR - This is an error
test - CRITICAL - This is a critical
```

File log_file.log content:

```
2021-09-22 12:50:50,938 - test - ERROR - This is an error
2021-09-22 12:50:50,938 - test - CRITICAL - This is a critical
```

To review your logger's level:

```python
logger.level
```

The result should be one of the following:

```
10  # DEBUG
20  # INFO
30  # WARNING
40  # ERROR
50  # CRITICAL
```

To review your handlers' levels:

```python
logger.handlers
```

```
[<StreamHandler stderr (DEBUG)>,
 <FileHandler ***/log_file.log (ERROR)>]
```

SMTP

How to Send Emails with Gmail using Python

Set user and password:

```python
gmail_user = 'your-gmail@gmail.com'
gmail_password = 'your-password'  # Google App Password
```

Send an email with an attachment:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = FROM_EMAIL  # sender address placeholder, e.g. gmail_user
msg["Subject"] = "Subject"
msg["To"] = TO_EMAIL  # recipient address placeholder
msg.set_content("This is the message body")

with open('file.pdf', 'rb') as f:
    content = f.read()
msg.add_attachment(content, maintype='application', subtype='pdf', filename='file.pdf')

server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.login(gmail_user, gmail_password)
server.send_message(msg)
server.close()
```
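
The same mail settings can also be wired into logging itself through logging.handlers.SMTPHandler, one of the Handler subclasses mentioned earlier. A minimal sketch, assuming Gmail's SMTP server on port 587 with STARTTLS and reusing the gmail_user / gmail_password placeholders above:

```python
import logging
from logging.handlers import SMTPHandler

gmail_user = 'your-gmail@gmail.com'   # placeholder, as above
gmail_password = 'your-password'      # placeholder Google App Password

mail_handler = SMTPHandler(
    mailhost=('smtp.gmail.com', 587),  # Gmail SMTP host and port
    fromaddr=gmail_user,
    toaddrs=[gmail_user],              # send error reports to yourself
    subject='Application error',
    credentials=(gmail_user, gmail_password),
    secure=()                          # empty tuple -> use STARTTLS
)
mail_handler.setLevel(logging.ERROR)   # only email ERROR and above

logger = logging.getLogger('mail_example')
logger.addHandler(mail_handler)
logger.error('This message would be emailed')
```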

Logger From Config File

logging.config — Logging configuration

You can configure logging using the module and class functions or by creating a config file or a dictionary and loading it using fileConfig() or dictConfig() respectively. These are useful in case you want to change your logging configuration in a running application.

Here’s an example file configuration:

Save the following content in a config file named etlLogger.conf.

```ini
[loggers]
keys=root,etlLogger

[handlers]
keys=consoleHandler, fileHandler

[formatters]
keys=consoleFormatter, fileFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_etlLogger]
level=DEBUG
handlers=consoleHandler, fileHandler
qualname=etlLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=consoleFormatter
args=(sys.stdout,)

[handler_fileHandler]
class=FileHandler
level=WARNING
formatter=fileFormatter
args=('./logs/etl.log', 'a')

[formatter_consoleFormatter]
format=%(name)s - %(levelname)s - %(message)s
class=logging.Formatter

[formatter_fileFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
class=logging.Formatter
```

In the above file, there are two loggers (root and etlLogger), two handlers, and two formatters. After their names are listed under keys, each one is configured in a section whose name joins the word logger, handler, or formatter to the component name with an underscore.


```python
import logging
import logging.config

logging.config.fileConfig(
    fname='config/etlLogger.conf',   # read etlLogger.conf
    disable_existing_loggers=False   # don't disable the existing loggers (other loggers)
)

etlLogger = logging.getLogger('etlLogger')
etlLogger.debug('This is a debug message')
etlLogger.warning("This is a warning message")
etlLogger.error("This is an error message")

!cat logs/etl.log  # IPython magic: show the file handler's output
```
Output (the first three lines are the console output; the last two are the contents of logs/etl.log printed by !cat):

```
etlLogger - DEBUG - This is a debug message
etlLogger - WARNING - This is a warning message
etlLogger - ERROR - This is an error message
2021-09-30 12:40:09,337 - etlLogger - WARNING - This is a warning message
2021-09-30 12:40:09,338 - etlLogger - ERROR - This is an error message
```

Logger From Dictionary

In this lesson, you'll see how to do what you did in the last lesson, but this time with a dictionary-based approach. The most straightforward way to write the configuration is a YAML file. Here's the same configuration in YAML format for the dictionary approach:

```yaml
version: 1
# Define a formatter called simple.
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
# Define a handler called console.
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
# Define loggers.
loggers:
  # Define a logger called sampleLogger.
  sampleLogger:
    level: DEBUG
    handlers: [console]
    propagate: no
# Define the root logger.
root:
  level: DEBUG
  handlers: [console]
```

![[config.yaml]]


```python
import logging
import logging.config
import yaml

# Load the YAML file into a dict and configure logging from it
with open('config.yaml') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)

logger.debug('This is a debug message')
```

Output:

```
2021-09-02 00:29:10,073 - __main__ - DEBUG - This is a debug message
```
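
The YAML file is only a convenient way to write the dictionary; dictConfig() also accepts a plain Python dict directly. A sketch of the same configuration expressed as a literal dict (console handler only, equivalent to the YAML above):

```python
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'simple': {'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple',
            'stream': 'ext://sys.stdout',
        },
    },
    'loggers': {
        'sampleLogger': {'level': 'DEBUG', 'handlers': ['console'], 'propagate': False},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}

logging.config.dictConfig(config)
logging.getLogger('sampleLogger').debug('Configured from a plain dict')
```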

Conclusion

Logging is a very useful tool in a programmer’s toolbox. It can help you develop a better understanding of the flow of a program and discover scenarios that you might not even have thought of while developing.

Logs provide developers with an extra set of eyes that are constantly looking at the flow that an application is going through. They can store information, like which user or IP accessed the application. If an error occurs, then they can provide more insights than a stack trace by telling you what the state of the program was before it arrived at the line of code where the error occurred.

By logging useful data from the right places, you can not only debug errors easily but also use the data to analyze the performance of the application to plan for scaling or look at usage patterns to plan for marketing.

Python provides a logging system as a part of its standard library, so you can quickly add logging to your application. In this course, you learned why using this module is the best way to add logging to your application, how to get started quickly, and you got an introduction to some of the more advanced features available.