Node.js Logging: The Basics

Arfan Sharif - February 13, 2023

Logs are essential for understanding the context of an application’s state. They contain records about the application at runtime, events that occurred, or the execution of a specific code path. As a developer, you can use this context for application debugging. You can also use it with a log management analysis tool, processing log data for deeper automated insights into events relating to the execution of your application.

For Node.js applications, many popular packages let you use different log transport modes, customize formatting, or integrate with other log management tools. With this customization, you gain a high degree of visibility into your application and are better equipped to monitor and analyze events related to it.

This is the first part in a series on Node.js logging. In this article, we’ll introduce you to the basics of logging in Node.js applications and how you can view the different log streams available while running your application. You’ll also learn how you can use structured log messages to further analyze and correlate logging data.


Basics of Logging in Node.js

In a Node.js application, you can implement logging with the built-in console class from the standard library, without needing any external dependencies. You can use the different methods of this console class to send messages to either the standard output or standard error log streams.

  • log: The default method for uncategorized or general messages
  • info: Messages about the application’s usual operation
  • debug: Messages that provide technical information about the system at runtime (often used during development and debugging to gather additional detail)
  • warn: Potential problems caught in the application that could indicate more significant issues affecting its continued functionality
  • error: Messages relating to anything that critically affects the runtime of the application

When troubleshooting issues with your application, you can proactively add console logging statements to your code for certain edge cases. You can also add them retroactively to narrow down when and where issues are occurring. The following example shows calls to the different functions from the console class.

// index.js
console.log("Your log message");
console.info("Your info message");
console.debug("Your debug message");
console.warn("Your warn message");
console.error("Your error message");

When you run your Node.js application from a terminal, both the standard output and standard error streams are written to that terminal, and all messages appear in the format shown below:

$ node index.js 
Your log message
Your info message
Your debug message
Your warn message
Your error message

By applying a consistent categorization for each log level and its associated console class method, you simplify the task of filtering messages by log stream.

When you categorize messages with an appropriate log level, they are output to the stdout or stderr stream on your system: console.log, console.info, and console.debug write to stdout, while console.warn and console.error write to stderr. You can use these log streams to access the messages most relevant to you at the time and filter out the messages that you’re not concerned about.

Let’s look at how to access these log messages based on the log stream.

Viewing Node.js Logs

The simplest way to view the logs from your Node.js application is through the console you used to start your application. Both the stdout and stderr log streams are available to you when running the application, and you can redirect either stream to a file as needed. The example below shows these redirections.

$ node ./index.js > ./stdout-only.txt
$ node ./index.js 2> ./stderr-only.txt
$ node ./index.js > ./stdout-and-stderr.txt 2>&1

Suppose you are working with an external framework in your application, such as Express, which has its own internal configuration for filtering the framework’s log messages. For the Express framework, you can set the following environment variable when starting the application to show all the internal application and router messages. This can help when debugging complex Express applications with many middleware layers.

$ DEBUG=express:* node ./index.js
  express:application set "x-powered-by" to true +0ms
  express:application set "etag" to 'weak' +2ms
  express:application set "etag fn" to [Function: generateETag] +1ms
  express:application set "env" to 'development' +1ms
  express:application set "query parser" to 'extended' +0ms
  express:application set "query parser fn" to [Function: parseExtendedQueryString] +1ms
  express:application set "subdomain offset" to 2 +1ms
  express:application set "trust proxy" to false +0ms
  express:application set "trust proxy fn" to [Function: trustNone] +0ms
  express:application booting in development mode +1ms
  express:application set "view" to [Function: View] +1ms
  express:application set "views" to 'D:\\dev\\Node.js-logging\\views' +1ms
  express:application set "jsonp callback name" to 'callback' +0ms
  express:router use '/' query +1ms
  express:router:layer new '/' +1ms
  express:router use '/' expressInit +0ms
  express:router:layer new '/' +1ms
  express:router:route new '/' +0ms
  express:router:layer new '/' +1ms
  express:router:route get '/' +0ms
  express:router:layer new '/' +1ms
Example app listening on port 3000

In part two of this guide, we’ll examine in greater detail the external logging packages that provide additional filtering capabilities when using the library’s provided loggers.

Structured Log Messages

Once you can emit basic logs in your application, you can build on this to add more helpful context to each message. For example, if you emit a log when catching an exception, it may be helpful to add the exception message information to your log entry. You can add further context by using structured logs.
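As an illustration, a structured entry for a caught exception might look like the following sketch. The field names (`level`, `message`, `stack`, `timestamp`) are illustrative, not a required schema:

```javascript
// Sketch: emitting a structured (JSON) log entry when catching an exception.
function logError(err) {
  const entry = {
    level: "error",
    message: err.message,       // the exception message adds context
    stack: err.stack,           // stack trace for debugging
    timestamp: new Date().toISOString(),
  };
  console.error(JSON.stringify(entry));
  return entry; // returned only to make the sketch easy to inspect
}

try {
  JSON.parse("not valid json"); // deliberately throws a SyntaxError
} catch (err) {
  logError(err);
}
```

Each emitted line is a single JSON object, so a log processor can parse it and index the individual fields rather than treating the message as opaque text.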

Adding more context as structured data requires using a standardized log format that your application’s log client and your logging server or log processor can understand. With a standardized messaging format, you can more efficiently transfer information with a single log message. This allows for deeper analysis or processing to provide powerful features like alerting, querying, or correlating log messages through a complex system with many applications.

Let’s consider some common structured log formats you may encounter when working with Node.js applications.


Logfmt

If you have used Heroku, you have probably seen this log format used for all internal logging on the platform. It uses key-value pairs on a single line, letting you pack a lot of information densely into one log entry.

at=info method=GET path=/ fwd="" dyno=web.1 connect=2ms service=4ms status=200 bytes=1200
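A logfmt line like the one above can be produced from a flat object with a small helper. This is a simplified sketch (the function name is illustrative, and the quoting rule here only handles values containing whitespace):

```javascript
// Sketch: serializing a flat object to logfmt (key=value pairs on one line).
function toLogfmt(fields) {
  return Object.entries(fields)
    .map(([key, value]) => {
      const str = String(value);
      // Quote values containing whitespace so the pairs stay unambiguous.
      return /\s/.test(str) ? `${key}="${str}"` : `${key}=${str}`;
    })
    .join(" ");
}

console.log(toLogfmt({ at: "info", method: "GET", path: "/", status: 200, service: "4ms" }));
// at=info method=GET path=/ status=200 service=4ms
```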


Syslog

Most Linux operating systems ship with a Syslog client or server already installed by default. This protocol and structured log format let you push logs when needed, based on pre-configured rules. Like logfmt, the Syslog structure enables you to send multiple key-value pairs. Syslog formalizes some aspects of the message through the priority and facility keys. These keys categorize the area of the system the message originates from and the severity of the message, respectively. This structuring facilitates more advanced filtering of where to send messages when processed by a Syslog server. Because the format is standardized as RFC 5424, many log processing and management integrations work turnkey with messages supplied in the Syslog format.

<34>1 2022-10-11T12:00:00.000Z su - ID33 - 'su root' failed for nobody on /dev/sba
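The numeric priority at the start of the message (the `<34>` above) packs the facility and severity keys into one value: PRI = facility × 8 + severity, per RFC 5424. A quick sketch of encoding and decoding it:

```javascript
// RFC 5424 priority value: PRI = facility * 8 + severity.
// Facility 4 is "security/authorization messages" and severity 2 is
// "critical", so <34> decodes to an auth-related critical message.
function encodePri(facility, severity) {
  return facility * 8 + severity;
}

function decodePri(pri) {
  return { facility: Math.floor(pri / 8), severity: pri % 8 };
}

console.log(encodePri(4, 2)); // 34
console.log(decodePri(34));   // { facility: 4, severity: 2 }
```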


JSON

The JSON format can handle very dense and complex nested data structures. Each log line within the JSON format is a single JSON object. When processed, you can query these data structures and efficiently generate analytics based on their attributes. This also means you can easily define custom log schemas for these data structures.

{ "Message": "Page not found", "Code": 404 }
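Because each line is a standalone JSON object, downstream tooling can parse and filter entries by attribute directly. A minimal sketch, using illustrative sample lines in the same shape as the example above:

```javascript
// Sketch: filtering JSON log lines (one object per line) by attribute.
const lines = [
  '{ "Message": "Page not found", "Code": 404 }',
  '{ "Message": "OK", "Code": 200 }',
  '{ "Message": "Server error", "Code": 500 }',
];

// Parse each line and keep only error-class responses (Code >= 400).
const errors = lines
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.Code >= 400);

console.log(errors.map((e) => e.Message));
// [ 'Page not found', 'Server error' ]
```

This kind of per-attribute querying is what log management tools automate at scale once your application emits a consistent JSON schema.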

Log your data with CrowdStrike Falcon Next-Gen SIEM

Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at a petabyte scale, choosing between cloud-native or self-hosted deployment options. Log your data with a powerful, index-free architecture, without bottlenecks, allowing threat hunting with over 1 PB of data ingestion per day. Ensure real-time search capabilities to outpace adversaries, achieving sub-second latency for complex queries. Benefit from 360-degree visibility, consolidating data to break down silos and enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than 1 second.

Schedule Falcon Next-Gen SIEM Demo


Arfan Sharif is a product marketing lead for the Observability portfolio at CrowdStrike. He has over 15 years’ experience driving Log Management, ITOps, Observability, Security, and CX solutions for companies such as Splunk, Genesys, and Quest Software. Arfan graduated in Computer Science at Bucks and Chilterns University and has a career spanning product marketing and sales engineering.