Node.js Logging Guide:
Advanced Concepts

Arfan Sharif - February 13, 2023

In part one of our Node.js Logging Guide Overview, we covered the basics of logging in Node.js applications, including how to view your application logs and how to use structured logs to facilitate denser log messages with more contextual information.

Here in part two, we’ll dive into more advanced Node.js logging topics, covering custom Node.js logging packages. With these loggers, you can easily control the log format and message transport and integrate them with other logging tools. We’ll also cover some Node.js logging best practices to ensure you get the most out of the logs generated by your applications.


Node.js Logging Packages

Many developers are familiar with the standard logging methods available in the Node.js console class. Although the console class is useful, it has limitations when you need to implement other common logging requirements, such as sending logs to multiple destinations or using structured log data. The npm package ecosystem offers many feature-rich loggers that can implement your custom logging requirements. Let’s briefly cover some of the currently popular npm logging packages.

Winston: A simple logger that can be used universally across Node.js applications. Winston’s feature set focuses on configuration, allowing you to implement any logging features you need, such as multiple logging transports and formats for each.

Bunyan: Allows you to configure structured logging for your application as well as stream those logs to files on disk. In keeping with that focus, the package’s manifesto states that all logs should be structured.

Pino: Aims to be a lightweight logger that you can implement anywhere, and it includes features such as customizable transports and structured log formats. Pino is an excellent option if your application requires fast or asynchronous logging.

LogLevel: A minimal, lightweight logger that replaces console.log() with level-based logging and filtering. It aims to address the downsides of the Node.js standard library console class.
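
To give a feel for these APIs, here is a minimal sketch of Pino and LogLevel usage, assuming both packages have been installed from npm. It uses only each package’s default logger and default options, not a production configuration.

// loggers-sketch.js
// Pino: structured JSON logs with contextual fields attached per call
const pino = require("pino");
const pinoLogger = pino({ level: "info" });
pinoLogger.info({ service: "user-service" }, "info log");

// LogLevel: level-based filtering as a drop-in replacement for console.log
const log = require("loglevel");
log.setLevel("warn"); // suppress debug and info messages
log.warn("warn log");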

These external packages expedite the implementation of your logging requirements, requiring only a simple configuration in your Node.js application entry point file before your application logic executes. The following example shows how to use the Winston logger to configure multiple transports, covering different destinations (the console and files), each with its own log format. You can view all transport configuration options here.

// winston.js
const winston = require("winston");

const logger = winston.createLogger({
  // Default level for logs
  level: "info",
  // Default format for logs
  format: winston.format.json(),
  // Any keys you want attached to your logs
  defaultMeta: { service: "user-service" },
  transports: [
    // Write all logs with importance level of `error` or less to `error.log` in a logstash format
    new winston.transports.File({
      filename: "error.log",
      level: "error",
      format: winston.format.logstash(),
    }),
    // Write all logs with importance level of `info` or less to `combined.log`
    new winston.transports.File({ filename: "combined.log" }),
    // Log everything to the console in a simple format
    new winston.transports.Console({ format: winston.format.simple() }),
  ],
});

// Create some log entries
logger.info("info log");
logger.error("error log");
logger.warn("warn log");

// Call your application like normal and use the above "logger" in your application
// app(logger)

Once you run the above example, you should see the following log messages in your console, formatted according to the console transport’s configuration. The file transports write the other outputs shown below.

# output in simple format
$ node ./winston.js 
info: info log {"service":"user-service"}
error: error log {"service":"user-service"}
warn: warn log {"service":"user-service"}

# output in default logger json format
$ cat ./combined.log
{"level":"info","message":"info log","service":"user-service"}
{"level":"error","message":"error log","service":"user-service"}
{"level":"warn","message":"warn log","service":"user-service"}

# output in logstash format
$ cat ./error.log
{"@fields":{"level":"error","service":"user-service"},"@message":"error log"}

As you can see, these external loggers provide an easy way to meet your logging requirements in a scalable fashion, simplifying the addition of new destinations and formats. In many cases, you can do this without making additional changes to your application code.

Integrating With Log Management Tools

Structured logging tools make it easier to integrate your application with external log management tools and services. These services can use the extra context data in your structured log messages to generate comprehensive analytics, yielding better insights into your application. This context data enables further operational activities such as correlating service requests, aggregating logs into metrics, and alerting on these metrics.
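
As a sketch of that kind of context propagation, the snippet below uses Winston’s child logger support to stamp a request ID onto every log line; the requestId field name and value are illustrative, not something required by any particular log management tool.

// request-context.js
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

// Child logger that attaches the request's ID to every message it emits
const requestLogger = logger.child({ requestId: "abc-123" });

requestLogger.info("request received");
requestLogger.info("request completed");
// Both log lines include {"requestId":"abc-123"}, so a log management
// tool can correlate them back to the same request.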

Winston is one of the easiest external loggers to integrate with these log management tools. You only need to add a new transport to your logger initialization, and that transport pushes your logs to the provider in the format its service expects. The following example shows how you can configure Winston to communicate with CrowdStrike Falcon LogScale using the humio-winston package available on npm.

// winston-humio.js
const winston = require('winston');
const HumioTransport = require('humio-winston');

const logger = winston.createLogger({
  transports: [
    new HumioTransport({
      ingestToken: '<YOUR HUMIO INGEST TOKEN>',
    }),
  ],
});
// Create some log entries
logger.info("info log");
logger.error("error log");
logger.warn("warn log");

// Call your application like normal and use the above "logger" in your application
// app(logger)

For Winston, you can find the complete list of transports, including HTTP, streams, Syslog, and many community transports, on the Winston GitHub Page. Choosing a logger with good transport support allows you to integrate with external logging services, as long as a compatible transport and log format exist.
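
For instance, the built-in HTTP transport can forward logs to any endpoint that accepts Winston’s JSON payloads. The host, port, and path below are placeholders for whatever log collector you run; this is a sketch, not a configuration for a specific service.

// winston-http.js
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  transports: [
    // Placeholder endpoint: point this at your own log collector
    new winston.transports.Http({
      host: "localhost",
      port: 8080,
      path: "/logs",
    }),
    // Keep a console transport so logs also remain visible locally
    new winston.transports.Console({ format: winston.format.simple() }),
  ],
});

logger.info("info log sent over HTTP");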

Node.js Logging Best Practices

The following section describes some of the best practices you should follow when logging in your Node.js applications. These recommendations reduce the amount of noise from logs in your applications, helping developers make the most of their log messages.

Override the default logger

In your Node.js application, if you configure a custom logger for your logging requirements, you need every other developer to use that custom logger instead of the default console class from the Node.js standard library. Otherwise, someone will eventually forget which class to use when logging, and you’ll find yourself with log messages that aren’t structured (or that are sent to the wrong place).

The easiest solution isn’t asking people to change how they emit logs in the application. Instead, you can override the default console methods at runtime with your custom logger. This keeps your logging configuration in a single place while every call site behaves the same. Below is an example of how to set up this behavior with a Winston logger.

// winston.js
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.simple(),
  defaultMeta: { service: "user-service" },
  // A transport is required; without one, Winston warns and writes nothing
  transports: [new winston.transports.Console()],
});

// Override the default logging functions with a call to the loggers respective method
console.log = (...args) => logger.info.call(logger, ...args);
console.info = (...args) => logger.info.call(logger, ...args);
console.warn = (...args) => logger.warn.call(logger, ...args);
console.error = (...args) => logger.error.call(logger, ...args);
console.debug = (...args) => logger.debug.call(logger, ...args);

// Create some log entries using the regular methods
console.info("info log");
console.error("error log");
console.warn("warn log");

// Call your application like normal, no need to pass around a custom logger class
// app()

Log errors automatically

Within a Node.js application, the global process object represents the running application and lets you attach listeners to events that occur during its lifetime. You can use this to customize how the application behaves when an unhandled error occurs. This useful safeguard ensures that errors are logged even when they are unhandled, and it lets you perform any custom cleanup steps before exiting if needed.

In the example below, we see this customization on two different error events. You can also find a complete list of Node.js process events here.

process
  .on("unhandledRejection", (reason, promise) => {
    console.error(
      reason,
      "Unhandled Promise Rejection in application",
      promise
    );
  })
  .on("uncaughtException", (err) => {
    console.error(err, "Uncaught Exception thrown in application");
    // Do any other application cleanup steps
    process.exit(1);
  });

// Call your application like normal
// app()

Use structured log messages

In most scenarios, you’ll find that a single message string is not enough information for effective debugging. This is where standardizing on structured log messages can help: it lets you pack a lot of data into a single log line. A structured log format also enables advanced features for your logging clients and servers, since you can alert, filter, and redirect messages based on the defined data structures and keys in your log messages. This is a straightforward way to standardize your logging on a single format and integrate it with other logging tools and services.
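
As an illustration, the hypothetical checkout log below carries its context as keys rather than baking everything into the message string, so a logging backend can filter or alert on fields such as durationMs without parsing free text. The field names are made up for this example.

// structured-example.js
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

// Context travels as keys alongside the message, e.g. to alert on slow checkouts
logger.info("checkout completed", {
  orderId: "order-42",
  amountCents: 1999,
  durationMs: 254,
});
// Emits a single JSON line containing level, message, orderId, amountCents, and durationMs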

Log your data with CrowdStrike Falcon Next-Gen SIEM

Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at petabyte scale, choosing between cloud-native and self-hosted deployment options. Log your data with a powerful, index-free architecture that avoids bottlenecks and allows threat hunting across over 1 PB of data ingestion per day. Ensure real-time search capabilities to outpace adversaries, achieving sub-second latency for complex queries. Benefit from 360-degree visibility, consolidating data to break down silos and enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than 1 second.

Schedule Falcon Next-Gen SIEM Demo

GET TO KNOW THE AUTHOR

Arfan Sharif is a product marketing lead for the Observability portfolio at CrowdStrike. He has over 15 years of experience driving Log Management, ITOps, Observability, Security, and CX solutions for companies such as Splunk, Genesys, and Quest Software. Arfan graduated in Computer Science at Bucks and Chilterns University and has a career spanning Product Marketing and Sales Engineering.