How Node.js Libraries Can Improve Your Logging


Software development projects usually take a long time and go through several stages. All the requirements gathered at the beginning must be turned into working software features using code written by multiple developers. Due to the complexity of this process, even the most talented teams can't implement a comprehensive software system without making a single mistake on the first try. Syntax, logical, and runtime errors not recognized early may become costly to fix over time.

Logging is one of those tools that development teams can rely on at every project stage to ensure code quality. Seemingly simple, it can become a powerful ally. In this article, we'll consider how logging can improve the quality of your Node.js projects and review some popular logging libraries.

Logging in a Nutshell

What's logging? By logging, we usually mean the process of recording events that occur in an app. In the context of Node.js, logging typically involves writing messages to a file that the development team can use to diagnose and debug emerging issues.

What are the main approaches to logging? You can perform it in different ways. Console logging involves outputting messages to the console, which helps with debugging during development. It's a simple, valuable technique for learning or for building small Node.js projects. However, it's not recommended for production use: console output can be slow, and the console may be unavailable in some production environments.
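To make the idea of console logging with levels concrete, here's a minimal sketch of a level-gated console logger built only on the console API. All names here (shouldLog, makeConsoleLogger, the numeric level values) are illustrative, not part of any library:

```javascript
// Minimal sketch of level-gated console logging (names and values are illustrative).
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

// A message is emitted only if its level meets the configured threshold.
function shouldLog(messageLevel, threshold) {
  return LEVELS[messageLevel] >= LEVELS[threshold];
}

function makeConsoleLogger(threshold) {
  return {
    debug: (msg) => { if (shouldLog('debug', threshold)) console.debug(msg); },
    info:  (msg) => { if (shouldLog('info', threshold)) console.info(msg); },
    warn:  (msg) => { if (shouldLog('warn', threshold)) console.warn(msg); },
    error: (msg) => { if (shouldLog('error', threshold)) console.error(msg); },
  };
}

const log = makeConsoleLogger('info');
log.debug('hidden at the info threshold'); // filtered out
log.error('something went wrong');         // written to stderr
```

The libraries discussed below provide the same level-filtering idea, plus formatting and transports, out of the box.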

Who uses logging? It is used by on-site or dedicated Node.js development teams, DevOps engineers, and IT operations teams to diagnose and debug issues that occur in an app. By logging relevant data, these teams can gain valuable insights into how the application behaves and what might be causing problems. Also, it can be used to monitor the performance and availability of a running Node.js application after development is complete. Ensuring it's running smoothly and delivering value to users is critical.


What are the main logging challenges? If your Node.js app generates tons of messages, the files containing all this data may eventually become quite large. Large files don't necessarily mean large benefits, since they may contain data of zero importance. Regular file rotation helps to avoid oversized files. However, rotation can be challenging to implement correctly, as you must ensure you're not losing any vital information. Once you've generated log files, you also need to be able to analyze them to identify issues in your Node.js application. This can be difficult if you're dealing with large files or don't have the right tools to analyze them.

To log or not to log? Efficient logging implies that Node.js development teams know how to separate the wheat from the chaff. In other words, it's essential to understand what data should or should not go into the files. Any error messages your application generates should be logged, as they can provide valuable insights into what might be causing issues. Uncaught exceptions and unhandled rejections, in particular, must not be ignored by your logging setup. If your application has user interactions, tracking data about these events can be helpful, such as when a user enters the system, creates an account, or performs other actions. On the other hand, any sensitive data that could potentially be used to compromise your application or your users should not be logged. This includes passwords, credit card numbers, and other personally identifiable information.
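The two rules above can be sketched in a few lines of plain Node.js: hook the process-level events so fatal errors always reach the log, and strip sensitive fields before a record is written. The field names in SENSITIVE are illustrative; adjust them to your own data model:

```javascript
// Sketch: catch fatal events and redact sensitive fields before logging.
const SENSITIVE = ['password', 'creditCard', 'ssn']; // illustrative field names

// Return a shallow copy with sensitive top-level fields masked.
function redact(record) {
  const copy = { ...record };
  for (const key of SENSITIVE) {
    if (key in copy) copy[key] = '[REDACTED]';
  }
  return copy;
}

// Make sure uncaught exceptions and unhandled rejections reach the log.
process.on('uncaughtException', (err) => {
  console.error(JSON.stringify({ level: 'fatal', msg: err.message }));
  process.exit(1);
});
process.on('unhandledRejection', (reason) => {
  console.error(JSON.stringify({ level: 'error', msg: String(reason) }));
});

console.log(JSON.stringify(redact({ user: 'alice', password: 'hunter2' })));
```

A real application would route these records through its logging library rather than console.error, but the principle is the same: crashes are logged, secrets are not.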

Node.js Logging Libraries Worthy of Trying

Node.js development companies can rely on the console API and the methods it provides for logging. However, data gathered this way may be hard to analyze. Node.js logging libraries, on the other hand, provide JSON support and offer tools for formatting, filtering, and sending data to a given destination.


Most libraries can be added to your Node.js development projects using npm. Spend a little time on research, and you'll find a dozen of them. Since covering such an extensive range of tools is outside the scope of this article, we'll focus on four of them, namely Winston, Loglevel, Bunyan, and Pino. We'll cover these tools in order of popularity, according to download data from npm trends.



Winston is a powerful and flexible library for Node.js, designed to be easy to use and highly configurable. It offers a variety of log levels, customizable formats, and multiple transports for handling logs (such as Console, File, HTTP, etc.).

To use Winston (or any other logging library we’ll discuss later) for Node.js development, you'll first need to install it using npm. You can do this by running the following command in your terminal:

npm install winston

Once you've installed the library, you can start using it in your Node.js application. Here's an example of how to configure and use Winston to add data to a file:

const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  // Combine a timestamp with JSON output so each record carries one.
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  defaultMeta: { service: 'my-service' },
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

logger.log({ level: 'info', message: 'Hello, Winston!' });

Here, we've created a new logger using the winston.createLogger() method and specified the log level as info. We've also defined two transports: one that writes error messages to a file called error.log, and another that writes all messages at the info level and above to a file called combined.log.

Finally, we've called the logger.log() method to log a message. When this code runs, the message 'Hello, Winston!' is written to combined.log. Here's the JSON object that will be appended to this file when we run our Node.js app:

{"level":"info","message":"Hello, Winston!","service":"my-service","timestamp":"2023-04-18T14:30:00.000Z"}

In general, when using Winston or other Node.js logging libraries, the data written to the file depends on what you choose to include in the message. You can customize the record format using the winston.format API and include any data you want to track in your application. For example, it might be information about HTTP requests, database queries, or user interactions, depending on your current development needs.
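As a concept sketch (plain JavaScript, not the winston API), here is roughly what such a JSON record builder does under the hood, with a timestamp and arbitrary metadata merged into each record; the function name and fields are illustrative:

```javascript
// Concept sketch: building a JSON log record similar in shape to what
// winston's timestamp + json formats produce. Not winston's actual code.
function formatRecord(level, message, meta = {}) {
  return JSON.stringify({
    level,
    message,
    ...meta, // e.g. service name, request data, user id
    timestamp: new Date().toISOString(),
  });
}

// e.g. formatRecord('info', 'GET /users', { service: 'my-service', status: 200 })
```

Keeping every record a single JSON line like this is what makes the resulting files easy to search, filter, and feed into analysis tools.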


Loglevel is a lightweight logging library that provides a simple way to track events in your application. It supports multiple log levels and works both in Node.js and in the browser. Let's see how we can configure and use loglevel to log a message, this time to the console:

const log = require('loglevel');

log.setLevel('info');'Hello, loglevel!');

In this example, we've imported the loglevel library using the require() method and set the log level to 'info' using the log.setLevel() method. We've then called the method to log an info-level message to the console. When this code runs, the message 'Hello, loglevel!' is output to the console.

By default, loglevel uses the console to output data, but you can also configure it to send data to a file or another destination if desired. The logged data can help the development team diagnose issues in your Node.js application and gain insights into how users interact with it.


Pino is a fast, low-overhead logging library for Node.js, designed for high performance and production use. It uses a stream-based architecture that allows it to write directly to any destination without blocking the main Node.js event loop. Let's try to make Pino work:

const pino = require('pino');

const logger = pino({
  level: 'info',
  prettyPrint: true,
  formatters: {
    level: (label, number) => {
      return { level: label };
    }
  }
});'Hello, Pino!');

In this case, the logger is configured to log data at the "info" level or higher. The prettyPrint option is set to true, which means the output will be formatted in a human-readable way (note that in recent versions of Pino this option is deprecated in favor of the separate pino-pretty package). The formatters option is an object that allows you to customize how certain fields are formatted. Here, it specifies a custom formatter for the level field, which returns the level's textual label instead of its numeric value.

With the default configuration (that is, without prettyPrint or custom formatters), Pino writes each message to stdout as a single line of JSON with the following structure:

{"level":30,"time":1647731400000,"pid":1234,"hostname":"your_hostname","msg":"Hello, Pino!"}

Where level is a numeric value representing the logging level, time is the Unix timestamp of the event in milliseconds, pid is the ID of the Node.js process writing the message, hostname is the name of the machine it runs on, and msg is the message being logged.
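The numeric values follow Pino's documented defaults (10 = trace up through 60 = fatal). A tiny helper like the one below, written for this article, translates them back to labels when you read raw logs:

```javascript
// Pino's default numeric levels (documented defaults); helper name is ours.
const PINO_LEVELS = {
  10: 'trace',
  20: 'debug',
  30: 'info',
  40: 'warn',
  50: 'error',
  60: 'fatal',
};

function levelLabel(n) {
  return PINO_LEVELS[n] || 'unknown';
}

// levelLabel(30) → 'info'
```

Storing levels as numbers keeps records compact and makes "level >= 40" style filtering trivial.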


Bunyan is a fast, extensible, and easy-to-use logging library for Node.js development. It is particularly well-suited for use in large-scale production environments where data must be managed and analyzed at scale. Here's an example of how to configure and use Bunyan:

const bunyan = require('bunyan');

const logger = bunyan.createLogger({
  name: 'myapp',
  streams: [
    {
      path: '/var/log/myapp.log',
    },
  ],
});'Hello, Bunyan!');

Here, we initialize a new logger instance using the Bunyan Node.js logging library. The createLogger() method is called with an options object containing a name property set to myapp. It specifies the name of the logger instance, which is useful for distinguishing data from different sources in the same file.

Additionally, the options object specifies a streams property, which is an array of output streams to write logs to. In this case, there is only one stream, which writes to /var/log/myapp.log.

When the method is called, the message will be written to the specified output stream. The exact data format depends on the configuration of the logger instance and the selected output format. In this scenario, the log file will receive a JSON record looking like this:

{"name":"myapp","hostname":"your_hostname","pid":1234,"level":30,"msg":"Hello, Bunyan!","time":"2023-04-19T10:00:00.000Z","v":0}
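Because each record is one JSON line, the resulting file is easy to process programmatically. The sketch below filters newline-delimited JSON records by level, similar in spirit to what the bunyan CLI does when you pipe logs through it (the function name is ours, not part of Bunyan):

```javascript
// Sketch: filtering newline-delimited JSON logs (like Bunyan's output) by level.
function filterByLevel(ndjson, minLevel) {
  return ndjson
    .split('\n')
    .filter(Boolean)                  // drop empty trailing lines
    .map((line) => JSON.parse(line))  // each line is one JSON record
    .filter((rec) => rec.level >= minLevel);
}

// e.g. filterByLevel(fileContents, 40) keeps warn-and-above records
```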


Making complex web apps work according to requirements is a non-trivial task. When an application fails, clicking all the buttons and trying to guess what went wrong is hardly the best approach, and revising thousands of lines of code isn't much more productive. Checking the data saved in logs throughout the application's life is a decent alternative to blind guessing or digging through endless lines of code.