Looking at software development before the cloud revolution, using logging frameworks was fairly common. They supported developers in formatting log messages, buffering them, and sending them to the correct sink, such as Elasticsearch. With the rise of cloud platforms, applications have tended to become smaller units, and integrating them directly into observability or log archive platforms is now possible with a few lines of configuration. It's time to review whether a logging framework still pays off, or whether a simple 'console.log' combined with forwarding the application/function output to a provider such as Datadog will do the job.
Direct Comparison
Let's look at the usual requirements a developer has for a logging facility. Loggers should be available everywhere in the code, whether you want to log specific information deep in the call stack or log unhandled exceptions at the top-level entry point. Because a log message alone might not be specific enough to understand the context in which it was written, you will enrich it with additional metadata describing a snapshot of the current context. This leads to the next point: once you serialize full objects, you need to take care of removing sensitive information such as tokens; it is up to you to decide how useful it is to log the whole object. And once your system is under load, it will produce a high volume of log messages, so the key requirement of logging is that messages remain searchable and filterable afterwards.
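To make these requirements concrete, here is a minimal sketch of a structured logger built on plain 'console.log'. It is not a real framework, just an illustration under assumed names: the `redactKeys` list, the `child` helper for context enrichment, and the JSON-per-line output that a log forwarder (and later Datadog or Elasticsearch) could index for searching and filtering are all assumptions for this example.

```typescript
type LogContext = Record<string, unknown>;

// Keys that should never end up in the log output (assumed list for this sketch).
const redactKeys = ["token", "password", "secret"];

// Recursively replace sensitive fields before serialization.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as LogContext).map(([k, v]) =>
        redactKeys.includes(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

function createLogger(baseContext: LogContext = {}) {
  const log = (level: string, message: string, context: LogContext = {}) =>
    // One JSON object per line: easy to forward to a provider and to filter by field.
    console.log(
      JSON.stringify({
        level,
        message,
        timestamp: new Date().toISOString(),
        ...(redact({ ...baseContext, ...context }) as LogContext),
      })
    );

  return {
    info: (msg: string, ctx?: LogContext) => log("info", msg, ctx),
    error: (msg: string, ctx?: LogContext) => log("error", msg, ctx),
    // child() enriches every subsequent message with scoped metadata, e.g. per request.
    child: (ctx: LogContext) => createLogger({ ...baseContext, ...ctx }),
  };
}

// Usage: a request-scoped logger deep in the call stack.
const logger = createLogger({ service: "checkout" });
const requestLogger = logger.child({ requestId: "abc-123" });
requestLogger.info("payment authorized", { user: { id: 42, token: "opaque" } });
// => {"level":"info","message":"payment authorized","timestamp":"...","service":"checkout","requestId":"abc-123","user":{"id":42,"token":"[REDACTED]"}}
```

Dedicated logging frameworks ship these pieces (child loggers, redaction paths, structured output) out of the box; the question the rest of this comparison asks is how much of this you really need to own yourself when the platform already forwards stdout.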