ISO 27001:2022 A 8.15 Logging

A log, in a computing context, is the automatically produced and time-stamped documentation of events relevant to a particular system. Virtually all software applications and systems produce log files.

On a web server, an access log lists all the individual files that people have requested from a website. These include the HTML files, their embedded graphic images and any other associated files that get transmitted. From the server’s log files, an administrator can identify the number of visitors, the domains from which they are visiting, the number of requests for each page and usage patterns according to variables such as time of day, week, month or year.

In Microsoft Exchange, a transaction log records all changes made to an Exchange database. Information to be added to a mailbox database is first written to an Exchange transaction log; afterwards, the contents of the transaction log are written to the Exchange Server database.

An audit log (also known as an audit trail) records chronological documentation of any activities that could have affected a particular operation or event. Details typically include the resources that were accessed, destination and source addresses, a timestamp and user login information for the person who accessed the resources.


Control

Logs that record activities, exceptions, faults and other relevant events should be produced, stored, protected and analysed.


Purpose

To record events, generate evidence, ensure the integrity of log information, protect against unauthorized access, identify information security events that can lead to an information security incident, and support investigations.

ISO 27002 Implementation Guidance

The organization should determine the purpose for which logs are created, what data is collected and logged, and any log-specific requirements for protecting and handling the log data. This should be documented in a topic-specific policy on logging. Event logs should include for each event, as applicable:

a) user IDs;
b) system activities;
c) dates, times and details of relevant events (e.g. log-on and log-off);
d) device identity, system identifier and location;
e) network addresses and protocols.
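As an illustration, the attributes listed above can be captured in a single structured (JSON) log entry. The sketch below assumes Python; the field names are illustrative, not mandated by the control:

```python
import json
from datetime import datetime, timezone

def make_event(user_id, activity, device_id, location, src_ip, protocol):
    """Build a structured log entry covering the attributes listed above."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time of the event
        "user_id": user_id,            # who performed the action
        "activity": activity,          # e.g. "log-on", "log-off"
        "device_id": device_id,        # device/system identifier
        "location": location,
        "network": {"src_ip": src_ip, "protocol": protocol},
    }

entry = make_event("jsmith", "log-on", "WS-0042", "HQ-2F", "10.0.0.15", "TCP")
print(json.dumps(entry))
```

A structured format like this makes the later analysis and correlation steps far easier than free-text log lines.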

The following events should be considered for logging:

a) successful and rejected system access attempts;
b) successful and rejected data and other resource access attempts;
c) changes to system configuration;
d) use of privileges;
e) use of utility programs and applications;
f) files accessed and the type of access, including deletion of important data files;
g) alarms raised by the access control system;
h) activation and de-activation of security systems, such as anti-virus systems and intrusion detection systems;
i) creation, modification or deletion of identities;
j) transactions executed by users in applications. In some cases, the applications are a service or product provided or run by a third party.

It is important for all systems to have synchronized time sources as this allows for correlation of logs between systems for analysis, alerting and investigation of an incident.
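Alongside clock synchronization, recording timestamps in one normalized form (e.g. UTC ISO 8601) makes cross-system correlation straightforward. A minimal sketch, using only the standard library:

```python
from datetime import datetime, timezone, timedelta

def to_utc(ts: datetime) -> str:
    """Normalize a timezone-aware timestamp to UTC ISO 8601 for cross-system correlation."""
    return ts.astimezone(timezone.utc).isoformat()

# Two systems logging the same moment in different local zones:
cet = timezone(timedelta(hours=1))
t1 = datetime(2024, 3, 1, 13, 0, 0, tzinfo=cet)           # system A, CET
t2 = datetime(2024, 3, 1, 12, 0, 0, tzinfo=timezone.utc)  # system B, UTC

assert to_utc(t1) == to_utc(t2)  # both normalize to the same instant
```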

Protection of logs
Users, including those with privileged access rights, should not have permission to delete or de-activate logs of their own activities, as they can potentially manipulate the logs on information processing facilities under their direct control. It is therefore necessary to protect and review the logs to maintain accountability for privileged users. Controls should aim to protect against unauthorized changes to log information and against operational problems with the logging facility, including:

  • alterations to the message types that are recorded;
  • log files being edited or deleted;
  • failure to record events or over-writing of past recorded events if the capacity of the storage media holding a log file is exceeded.

For protection of logs, the use of the following techniques should be considered: cryptographic hashing, recording in an append-only and read-only file, and recording in a public transparency file. Some audit logs may need to be archived because of requirements for data retention or for collecting and retaining evidence.
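Cryptographic hashing can be illustrated with a simple hash chain, in which each record’s hash covers the previous record’s hash, so any alteration invalidates everything that follows. This is a sketch of the idea, not a prescribed implementation:

```python
import hashlib

def append_record(chain, message):
    """Append a record whose hash covers the previous record's hash (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    chain.append({"message": message, "hash": digest})

def verify(chain):
    """Recompute every hash; any edited record invalidates all later hashes."""
    prev_hash = "0" * 64
    for rec in chain:
        if hashlib.sha256((prev_hash + rec["message"]).encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, "user=jsmith action=log-on")
append_record(log, "user=jsmith action=file-delete")
assert verify(log)
log[0]["message"] = "user=jsmith action=log-off"  # simulated tampering
assert not verify(log)
```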
Where the organization needs to send system or application logs to a vendor to assist with debugging or troubleshooting errors, logs should be de-identified where possible using data masking techniques for information such as usernames, internet protocol (IP) addresses, hostnames or organization name, before sending to the vendor. Event logs can contain sensitive data and personally identifiable information, so appropriate privacy protection measures should be taken.
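A minimal sketch of the de-identification described above, masking usernames, IP addresses and hostnames with regular expressions before logs leave the organization (the patterns are illustrative and would need tuning for real log formats):

```python
import re

# Illustrative patterns; real log formats need more careful rules.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "x.x.x.x"),      # IPv4 addresses
    (re.compile(r"user=\S+"), "user=REDACTED"),                   # usernames
    (re.compile(r"\b[\w-]+\.example\.com\b"), "host.REDACTED"),   # hostnames
]

def deidentify(line: str) -> str:
    for pattern, replacement in MASKS:
        line = pattern.sub(replacement, line)
    return line

masked = deidentify("user=jsmith login from 10.0.0.15 to srv01.example.com")
print(masked)
```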

Log analysis
Log analysis should cover the analysis and interpretation of information security events, to help identify unusual activity or anomalous behavior, which can represent indicators of compromise. Analysis of events should be performed by taking into account:

a) the necessary skills for the experts performing the analysis;
b) determining the procedure of log analysis;
c) the required attributes of each security-related event;
d) exceptions identified through the use of predetermined rules [e.g. security information and event management (SIEM) or firewall rules, and intrusion detection systems (IDSs) or malware signatures];
e) known behaviour patterns and standard network traffic compared to anomalous activity and behavior [user and entity behaviour analytics (UEBA)];
f) results of trend or pattern analysis (e.g. as a result of using data analytics, big data techniques and specialized analysis tools);
g) available threat intelligence.
Log analysis should be supported by specific monitoring activities to help identify and analyse anomalous behavior, which includes:
a) reviewing successful and unsuccessful attempts to access protected resources [e.g. domain name system (DNS) servers, web portals and file shares];
b) checking DNS logs to identify outbound network connections to malicious servers, such as those associated with botnet command and control servers;
c) examining usage reports from service providers (e.g. invoices or service reports) for unusual activity within systems and networks (e.g. by reviewing patterns of activity);
d) including event logs of physical monitoring such as entrance and exit to ensure more accurate detection and incident analysis;
e) correlating logs to enable efficient and highly accurate analysis.
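As a toy illustration of point a), reviewing unsuccessful access attempts can start as simply as counting failures per user and flagging those above a threshold (the event shape and threshold here are arbitrary assumptions):

```python
from collections import Counter

def flag_repeated_failures(events, threshold=3):
    """Return user IDs with at least `threshold` rejected access attempts."""
    failures = Counter(e["user"] for e in events if e["outcome"] == "rejected")
    return {user for user, count in failures.items() if count >= threshold}

events = [
    {"user": "jsmith", "outcome": "rejected"},
    {"user": "jsmith", "outcome": "rejected"},
    {"user": "jsmith", "outcome": "rejected"},
    {"user": "adoe", "outcome": "success"},
]
assert flag_repeated_failures(events) == {"jsmith"}
```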

Suspected and actual information security incidents should be identified (e.g. malware infection or probing of firewalls) and be subject to further investigation (e.g. as part of an information security incident management process).

Other information

System logs often contain a large volume of information, much of which is extraneous to information security monitoring. To help identify significant events for information security monitoring purposes, the use of suitable utility programs or audit tools to perform file interrogation can be considered. Event logging sets the foundation for automated monitoring systems which are capable of generating consolidated reports and alerts on system security. A SIEM tool or equivalent service can be used to store, correlate, normalize and analyse log information, and to generate alerts. SIEMs tend to require careful configuration to optimize their benefits. Configurations to consider include identification and selection of appropriate log sources, tuning and testing of rules and development of use cases. Public transparency files for the recording of logs are used, for example, in certificate transparency systems. Such files can provide an additional detection mechanism useful for guarding against log tampering. In cloud environments, log management responsibilities can be shared between the cloud service customer and the cloud service provider. Responsibilities vary depending on the type of cloud service being used.
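The “file interrogation” mentioned above can be as simple as filtering stored log lines with plaintext or regular-expression queries; the log lines in this sketch are illustrative:

```python
import re

logs = [
    "2024-03-01T12:00:00Z user=jsmith action=log-on src=10.0.0.15",
    "2024-03-01T12:05:12Z user=adoe action=file-delete src=10.0.0.7",
    "2024-03-01T12:07:44Z user=jsmith action=log-off src=10.0.0.15",
]

def search(lines, pattern):
    """Return lines matching a regular expression (or plain substring)."""
    rx = re.compile(pattern)
    return [line for line in lines if rx.search(line)]

assert len(search(logs, r"user=jsmith")) == 2
assert len(search(logs, r"action=file-\w+")) == 1
```

A SIEM replaces this linear scan with indexed storage, but the querying concept is the same.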

Log files are automatically computer-generated whenever an event with a specific classification takes place on the network. Log files exist because software and hardware developers find it easier to troubleshoot and debug their creations when they can access a textual record of the events that the system is producing. Each of the leading operating systems is uniquely configured to generate and categorize event logs in response to specific types of events. Log management systems centralize all log files to gather, sort and analyze log data, making it easy to understand, trace, and address key issues related to application performance.

Large IT organizations depend on an extensive network of IT infrastructure and applications to power key business services. Log file monitoring and analysis increase the observability of this network, creating transparency and allowing visibility into the cloud computing environment. Observability should not be treated as an end in itself; rather, it is a mechanism for achieving real business objectives:

  • Improving the reliability of systems for the end user: log files include information about system performance that can be used to determine when additional capacity is needed to optimize the user experience. Log files can help analysts identify slow queries, errors that are causing transactions to take too long, or bugs that impact website or application performance.
  • Maintaining the security posture of cloud computing environments and preventing data breaches: log files capture things like unsuccessful log-in attempts, failed user authentication, or unexpected server overloads, all of which can signal to an analyst that a cyber attack might be in progress. The best security monitoring tools can send alerts and automate responses as soon as these events are detected on the network.
  • Improving business decision-making: log files capture the behavior of users within an application, giving rise to an area of inquiry known as user behavior analytics. By analyzing the actions of users within an application, developers can optimize the application to get users to their goals more quickly, improving customer satisfaction and driving revenue in the process.

An ‘event’ is any action performed by a logical or physical presence on a computer system – e.g. a request for data, a remote login, an automatic system shutdown, a file deletion. An individual event log should contain 5 main components, in order for it to fulfil its operational purpose:

  1. User ID – Who or what account performed the actions.
  2. System activity – What happened
  3. Timestamps – Date and time of said event
  4. Device and system identifiers, and location – What asset the event occurred on
  5. Network addresses and protocols – IP information
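The five components above map naturally onto a simple record type; this sketch uses a Python dataclass with illustrative field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EventLog:
    """The five main components of an event log described above."""
    user_id: str          # 1. who or what account performed the action
    activity: str         # 2. what happened
    device_id: str        # 4. what asset the event occurred on
    ip_address: str       # 5. network address information
    timestamp: str = ""   # 3. date and time of the event

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

evt = EventLog("jsmith", "remote login", "WS-0042", "10.0.0.15")
```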

For practical purposes, it may not be feasible to log every single event that occurs on a given network. With that in mind, the 10 events below are particularly important for logging purposes, given their ability to modify risk and the part they play in maintaining adequate levels of information security:

  1. System access attempts.
  2. Data and/or resource access attempts.
  3. System/OS configuration changes.
  4. Use of elevated privileges.
  5. Use of utility programs or maintenance facilities.
  6. File access requests and what occurred (deletion, migration, etc.).
  7. Access control alarms and critical interrupts.
  8. Activation and/or deactivation of front end and back end security systems, such as client-side antivirus software or firewall protection systems.
  9. Identity administration work (both physical and logical).
  10. Certain actions or system/data alterations carried out as part of a session within an application.

It is vitally important that all logs are linked to the same synchronized time source (or set of sources), and in the case of third-party application logs, that any time discrepancies are identified and recorded.

Logs are the lowest common denominator for establishing user, system and application behavior on a given network, especially when faced with an investigation. It is therefore vitally important for organisations to ensure that users – regardless of their permission levels – do not retain the ability to delete or amend their own event logs. Individual logs should be complete, accurate and protected against any unauthorized changes or operational problems, including:

  • Message type amendments.
  • Deleted or edited log files.
  • Any failure to generate a log file, or unnecessary over-writing of log files due to prevailing issues with storage media or network performance.

Logs should be protected using the following methods:

  • Cryptographic hashing.
  • Append-only recording.
  • Read-only recording.
  • Use of public transparency files.
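The append-only and read-only methods can be sketched at the filesystem level: write the active log in append mode, then drop write permission once it is closed (POSIX permissions assumed; the path is illustrative):

```python
import os
import stat
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "audit.log")

# Append-only recording: "a" mode can only add to the end of the file.
with open(log_path, "a") as f:
    f.write("user=jsmith action=log-on\n")

# Once archived, make the file read-only so records cannot be edited in place.
os.chmod(log_path, stat.S_IRUSR | stat.S_IRGRP)

with open(log_path) as f:
    assert f.read() == "user=jsmith action=log-on\n"
```

Filesystem permissions alone are not tamper-proof against privileged users, which is why hashing and transparency files are listed alongside them.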

Organisations may need to send logs to vendors to resolve incidents and faults. Should this need arise, logs should be ‘de-identified’ and the following information should be masked:

  • Usernames
  • IP addresses
  • Hostnames

In addition to this, measures should be taken to safeguard personally identifiable information (PII) in line with the organisation’s own data privacy protocols, and any prevailing legislation.

When analysing logs for the purposes of identifying, resolving and analysing information security events – with the end goal of preventing future occurrences – the following factors need to be taken into account:

  • The expertise of the personnel carrying out the analysis.
  • How logs are analysed, in line with company procedure.
  • The type, category and attributes of each event that requires analysis.
  • Any exceptions that are applied via network rules emanating from security software, hardware and platforms.
  • The default flow of network traffic, as compared to unexplainable patterns.
  • Trends that are identified as a result of specialised data analysis.
  • Threat intelligence.

Log analysis should not be carried out in isolation, and should be done in tandem with rigorous monitoring activities that pinpoint key patterns and anomalous behavior. In order to achieve a dual-fronted approach, organisations should:

  • Review any attempts to access secure and/or business critical resources, including domain servers, web portals and file sharing platforms.
  • Scrutinise DNS logs to discover outgoing traffic linked to malicious sources and harmful server operations.
  • Collate data usage reports from service providers or internal platforms to identify malicious activity.
  • Collect logs from physical access points, such as key card/fob logs and room access information.
  • Correlate logs to enable efficient and highly accurate analysis.

Supplementary Information

Organisations should consider using specialized utility programs, such as a SIEM tool, that help them search through the vast amounts of information that system logs generate, in order to save time and resources when investigating security incidents. If an organisation uses a cloud-based platform to carry out any part of its operation, log management should be considered a shared responsibility between the service provider and the organisation itself.

System logs generated by servers and other network apparatus can create data in vast quantities, and sooner or later, attempts at managing such information in an off-the-cuff fashion are no longer viable. Consequently, information systems managers are tasked with devising strategies for taming these volumes of log data, both to remain compliant with company IT policy and to gain holistic visibility across all IT systems deployed throughout the organization. Log management means defining what you need to log, how to log it, and how long to retain the information. This ultimately translates into requirements for hardware, software and, of course, policies. The main stages of log management are as follows:

  1. Collection
    The organization needs to collect logs over encrypted channels. Their log management solution should ideally come equipped with multiple means to collect logs, but it should recommend the most reliable means of doing so. In general, organizations should use agent-based collection whenever possible, as this method is generally more secure and reliable than its agentless counterparts.
  2. Storage
    Once they have collected them, organizations need to preserve, compress, encrypt, store, and archive their logs. Companies can look for additional functionality in their log management solution such as the ability to specify where they can store their logs geographically. This type of feature can help meet their compliance requirements and ensure scalability.
  3. Search
    Organizations need to make sure they can find their logs once they’ve stored them, so they should index their records in such a way that they are discoverable via plaintext, REGEX, and API queries. A comprehensive log management solution should enable companies to optimize each log search with filters and classification tags. It should also allow them to view raw logs, conduct broad and detailed queries, and compare multiple queries at once.
  4. Correlation
    Organizations need to create rules that they can use to detect interesting events and perform automated actions. Of course, most events don’t occur on a single host in a single log. For that reason, companies should look for a log management solution that lets them create correlation rules according to the unique threats and requirements their environments face. They should also seek out a tool that allows them to import other data sources such as vulnerability scans and asset inventories.
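A toy correlation rule in the spirit described above: flag source addresses that produce several rejected attempts immediately followed by a success (the threshold and event shape are illustrative assumptions):

```python
def brute_force_then_success(events, threshold=3):
    """Flag sources with >= threshold rejections immediately followed by a success."""
    suspicious, streak = set(), {}
    for e in events:  # events assumed ordered by time
        src = e["src"]
        if e["outcome"] == "rejected":
            streak[src] = streak.get(src, 0) + 1
        else:
            if streak.get(src, 0) >= threshold:
                suspicious.add(src)
            streak[src] = 0
    return suspicious

events = [
    {"src": "203.0.113.9", "outcome": "rejected"},
    {"src": "203.0.113.9", "outcome": "rejected"},
    {"src": "203.0.113.9", "outcome": "rejected"},
    {"src": "203.0.113.9", "outcome": "success"},
    {"src": "10.0.0.5", "outcome": "success"},
]
assert brute_force_then_success(events) == {"203.0.113.9"}
```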

  5. Output
    Finally, companies need to be able to distribute log information to different users and groups using dashboards, reports, and email. Their log management solution should facilitate the exchange of data with other systems and the security team.

Effective logging allows us to reach back in time to identify events, interactions, and changes that may have relevance to the security of information resources. A lack of logs often means that we lose the ability to investigate events (e.g. anomalies, unauthorized access attempts, excessive resource use) and perform root cause analysis to determine causation. In the context of this control area, logs can be interpreted very broadly to include automated and handwritten logs of administrator and operator activities taken to ensure the integrity of operations in information processing facilities, such as data and network centers.

How do we protect the value of log information?

Effective logging strategies must also consider how to protect log data against tampering, sabotage, or deletion that devalues the integrity of log information. This usually involves role-based access controls that partition the ability to read and modify log data based on business needs and position responsibilities. In addition, timestamp information is critical when performing correlation analysis between log sources. One essential control is ensuring that all organizational systems have their clocks synchronized to a common source (often achieved via an NTP server) so that the timelining of events can be performed with high confidence.

What should we log?

The question of what types of events to log must take into consideration a number of factors, including relevant compliance obligations, organizational privacy policies, data storage costs, access control needs, and the ability to monitor and search large data sets in an appropriate time frame. When considering your overall logging strategy it can very often be helpful to “work backward”. Rather than initially attempting to catalog all event types, it can be useful to frame investigatory questions, beginning with those issues that occur on a regular basis or have the potential to be associated with significant risk events (e.g. abuse of or attacks on ERP systems). These questions can then lead to a focused review of the security event data that has the most relevance to these particular questions and issues. Ideally, event logs should include key information such as:

  • User IDs
  • System activities
  • Dates, times and details of key events
  • Device identity or location
  • Records of successful and rejected system access attempts
  • Records of successful and rejected resource access attempts
  • Changes to system configurations
  • Use of privileges
  • Use of system utilities and applications
  • Files accessed and the kind of access
  • Network addresses and protocols
  • Alarms raised by the access control system
  • Activation and de-activation of protection systems, such as AV and IDS
