ISO 27001:2022 A 8.13 Information backup

One of the best ways to protect your computer and data from malware attacks is to make regular backups. An organisation’s backup operation should encompass a broad range of efforts that improve resilience and protect against loss of business by establishing a robust and tightly managed set of backup jobs, using dedicated software and utilities, with adequate retention levels and agreed recovery times. You should always create at least two backups: one to keep offline and another to keep in the cloud. You can create a full backup using the System Image Backup tool to make a copy of your entire machine, including files, settings, apps, and the OS installation. Alternatively, if you don’t have a lot of files, you could simply make regular copies of your documents on a USB flash drive. If you’re a light user and your files don’t change very often, you should make a backup at least once a week. If you’re dealing with business files, you should be making backups at least once or twice a day.

1) Online backup: There are many ways to make backups online. OneDrive is a common example of online backup, but this solution should only be considered to protect your data against hardware failure, theft, or natural disasters. If your device gets infected with ransomware or another type of malware, OneDrive is likely to sync the changes, making the files stored in the cloud unusable. A better solution is to subscribe to a third-party online backup service, such as CrashPlan or IDrive, which allows you to schedule or trigger backups on demand and so avoid syncing infected or encrypted files. The only caveat is that most cloud storage services don’t offer bare-metal recovery. If that’s something you need, you could create a full backup as you would normally do and then upload the package to a paid cloud storage service, such as Amazon Drive, Google Drive, etc.

2) Have an offline backup: Your recovery plan must include a full backup of your system and data to keep offline, using an external hard drive or a local network location (e.g. Network-Attached Storage (NAS)). This is the kind of backup that will ensure you can recover from malware, hardware failure, errors, and natural disasters. Remember that there is no such thing as too many backups: if you can make a backup of the backup and store it offsite, do it. After creating a backup, always disconnect the external drive and store it in a safe location, or disconnect the network location where you store the backup, because if the drive stays online and accessible from your computer, malware can still infect those files.

Control

Backup copies of information, software and systems should be maintained and regularly tested in accordance with the agreed topic-specific policy on backup.

Purpose

To enable recovery from loss of data or systems.

ISO 27002 Implementation Guidance

A topic-specific policy on backup should be established to address the organization’s data retention and information security requirements. Adequate backup facilities should be provided to ensure that all essential information and software can be recovered following an incident or failure or loss of storage media. Plans should be developed and implemented for how the organization will back up information, software and systems, to address the topic-specific policy on backup. When designing a backup plan, the following items should be taken into consideration:

  a) producing accurate and complete records of the backup copies and documented restoration procedures;
  b) reflecting the business requirements of the organization (e.g. the recovery point objective), the security requirements of the information involved and the criticality of the information to the continued operation of the organization in the extent (e.g. full or differential backup) and frequency of backups;
  c) storing the backups in a safe and secure remote location, at a sufficient distance to escape any damage from a disaster at the main site;
  d) giving backup information an appropriate level of physical and environmental protection consistent with the standards applied at the main site;
  e) regularly testing backup media to ensure that they can be relied on for emergency use when necessary, testing the ability to restore backed-up data onto a test system rather than overwriting the original storage media, in case the backup or restoration process fails and causes irreparable data damage or loss;
  f) protecting backups by means of encryption according to the identified risks (e.g. in situations where confidentiality is of importance);
  g) taking care to ensure that inadvertent data loss is detected before a backup is taken.

Operational procedures should monitor the execution of backups and address failures of scheduled backups to ensure completeness of backups according to the topic-specific policy on backups. Backup measures for individual systems and services should be regularly tested to ensure that they meet the objectives of incident response and business continuity plans. This should be combined with a test of the restoration procedures and checked against the restoration time required by the business continuity plan. In the case of critical systems and services, backup measures should cover all systems information, applications and data necessary to recover the complete system in the event of a disaster.

When the organization uses a cloud service, backup copies of the organization’s information, applications and systems in the cloud service environment should be taken. The organization should determine if and how requirements for backup are fulfilled when using the information backup service provided as part of the cloud service.

The retention period for essential business information should be determined, taking into account any requirement for retention of archive copies. The organization should consider the deletion of information in storage media used for backup once the information’s retention period expires and should take into consideration legislation and regulations.

Organisations should take a topic-specific approach to backups that includes bespoke processes for each individual topic and takes into account the different types of data (and the associated risk levels) that organisations process and access throughout their operations.

Organisations should draft topic-specific policies that directly address how the organisation backs up the relevant areas of its network. Backup facilities should be implemented with the primary aim of ensuring that all business-critical data, software and systems can be recovered following any of the events below:

  • Data loss
  • Intrusion
  • Business interruption
  • Failure of systems, applications or storage media

Any backup plan created should aim to:

  • Outline clear and concise restoration procedures that cover all relevant critical systems and services.
  • Produce workable copies of any systems, data or applications that are covered under a backup job.
  • Meet the unique commercial and operational requirements of the organisation (e.g. recovery time objectives, backup types, backup frequency)
  • Store backups in an appropriate location that is environmentally protected, physically distinct from the source data (to prevent total data loss), and securely accessible for maintenance purposes.
  • Mandate for regular testing of backup jobs, in order to guarantee data availability should the need arise to restore files, systems or applications at a moment’s notice. Backup tests should be measured against the organisation’s agreed recovery times to ensure adherence in the event of data loss or system interruption.
  • Encrypt data that has been backed up, in accordance with its risk level.
  • Check for data loss before running any backup jobs.
  • Implement a reporting system that alerts maintenance staff to the status of backup jobs – including complete or partial failures – so that remedial action can be taken.
  • Include data from cloud-based platforms that are not directly managed by the organisation.
  • Store backup data in line with a topic-specific retention policy that takes into account the underlying nature and purpose of the data that’s been backed up, including transfer and/or archiving to storage media.

Organizations that implement information backup properly will experience minimal downtime and smooth recovery in the event of a failure.

Backup types defined

  1. Full backup captures a copy of an entire data set. Although considered to be the most reliable backup method, performing a full backup is time-consuming and requires many disks or tapes. Most organizations run full backups only periodically.
  2. Incremental backup offers an alternative to full backups by backing up only the data that has changed since the last backup of any type (full or incremental). The drawback is that a full restore takes longer if an incremental-based data backup copy is used for recovery.
  3. Differential backup copies the data changed since the last full backup. This enables a full restore to occur more quickly, since it requires only the last full backup and the last differential backup. For example, if you create a full backup on Monday, Tuesday’s differential backup would, at that point, contain the same data as an incremental backup. Wednesday’s backup would then capture everything that has changed since Monday’s full backup. The downside is that the progressive growth of differential backups tends to adversely affect your backup window.
  4. Synthetic full backup is a variation of differential backup. In a synthetic full backup, the backup server produces an additional full copy, which is based on the original full backup and data gleaned from incremental copies.
  5. Incremental-forever backups minimize the backup window while providing faster recovery access to data. An incremental-forever backup captures the full data set and then supplements it with incremental backups from that point forward. Backing up only changed blocks is also known as delta differencing. Full backups of data sets are typically stored on the backup server, which automates the restoration.
  6. Reverse-incremental backups are changes made between two instances of a mirror. Once an initial full backup is taken, each successive incremental backup applies any changes to the existing full backup. This essentially generates a novel synthetic full backup copy each time an incremental change is applied, while also providing reversion to previous full backups.
  7. Hot backup, or dynamic backup, is applied to data that remains available to users as the update is in process. This method sidesteps user downtime and productivity loss. The risk with hot backup is that, if the data is amended while the backup is underway, the resulting backup copy might not match the final state of the data.
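To make the distinctions between the backup types above concrete, here is a minimal sketch in Python that selects which files a full, incremental or differential job would copy, based on file modification times. The directory path and timestamps are hypothetical, and a real tool would also track deletions, metadata and backup catalogues.

```python
import os
import time

def files_to_back_up(source_dir, backup_type, last_full_time, last_backup_time):
    """Return the files a given backup type would copy.

    full         -> every file under source_dir
    incremental  -> files changed since the last backup of any type
    differential -> files changed since the last *full* backup
    """
    selected = []
    for root, _dirs, names in os.walk(source_dir):
        for name in names:
            path = os.path.join(root, name)
            mtime = os.path.getmtime(path)
            if backup_type == "full":
                selected.append(path)
            elif backup_type == "incremental" and mtime > last_backup_time:
                selected.append(path)
            elif backup_type == "differential" and mtime > last_full_time:
                selected.append(path)
    return selected

# Hypothetical usage: the last full backup ran a week ago, the last backup of any type ran yesterday.
now = time.time()
changed = files_to_back_up("/srv/data", "differential",
                           last_full_time=now - 7 * 86400,
                           last_backup_time=now - 1 * 86400)
```

Note that the differential selection keeps growing until the next full backup, which is why the guidance above warns that differential backups progressively lengthen the backup window.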

Techniques and technologies to complement data backup

  • Continuous data protection (CDP) refers to layers of associated technologies designed to enhance data protection. A CDP-based storage system backs up all enterprise data whenever a change is made. CDP tools enable multiple copies of data to be created. Many CDP systems contain a built-in engine that replicates data from a primary to a secondary backup server and/or tape-based storage. Disk-to-disk-to-tape backup is a popular architecture for CDP systems.
  • Near-continuous CDP takes backup snapshots at set intervals, which are different from array-based vendor snapshots that are taken each time new data is written to storage.
  • Data reduction lessens your storage footprint. There are two primary methods: data compression and data deduplication. These methods can be used singly, but vendors often combine the approaches. Reducing the size of data has implications on backup windows and restoration times.
  • Disk cloning involves copying the contents of a computer’s hard drive, saving it as an image file and transferring it to storage media. Disk cloning can be used for provisioning, system recovery, and rebooting or returning a system to its original configuration.
  • Erasure coding, or forward error correction, evolved as a scalable alternative to traditional RAID systems. Erasure coding most often is associated with object storage. RAID stripes data writes across multiple drives, using a parity drive to ensure redundancy and resilience. The technology breaks data into fragments and encodes it with other bits of redundant data. These encoded fragments are stored across different storage media, nodes or geographic locations. The associated fragments are used to reconstruct corrupted data using a technique known as oversampling.
  • Flat backup is a data protection scheme in which a direct copy of a snapshot is moved to low-cost storage without the use of traditional backup software. The original snapshot retains its native format and location; the flat backup replica gets mounted should the original become unavailable or unusable.
  • Mirroring places data files on more than one computer server to ensure it remains accessible to users. In synchronous mirroring, data is written to local and remote disk simultaneously. Writes from local storage are not acknowledged until a confirmation is sent from remote storage, thus ensuring the two sites have an identical data copy. Conversely, asynchronous local writes are complete before confirmation is sent from the remote server.
  • Replication enables users to select the required number of replicas, or copies, of data needed to sustain or resume business operations. Data replication copies data from one location to another, providing an up-to-date copy to hasten DR.
  • Recovery-in-place, or instant recovery, enables users to temporarily run a production application directly from a backup VM instance, thus maintaining data availability while the primary VM is being restored. Mounting a physical or VM instance directly on a backup or media server can hasten system-level recovery to within minutes. Recovery from a mounted image does result in degraded performance, since backup servers are not sized for production workloads.
  • Storage snapshots capture a set of reference markers on disk for a given database, file or storage volume. Users refer to the markers, or pointers, to restore data from a selected point in time. Because it derives from an underlying source volume, an individual storage snapshot is an instance, not a full backup. As such, snapshots do not protect data against hardware failure.

Implementing Information Backup

1) What should be backed up
The organization should decide what should be backed up, and to what level. A priority list of important information should be drawn up and protection levels assigned based on the importance of the information, for example:

  • Code repository (Level 5 protection)
  • Financial data (Level 5 protection)
  • Employee email (Level 4 protection)
  • Sales reports (Level 3 protection)

2) Define levels of backup information
Define what backup procedures need to be maintained for each of these levels, for example:

  • Level 5 – Fail-over backup, off-location backup for disaster recovery, Weekly and daily backups, Weekly Mock recovery.
  • Level 4 – Weekly and daily backups, Weekly mock recovery.
  • Level 3 – Weekly backups. Monthly mock recovery.
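One way to make such levels operational is a simple configuration that maps each protection level to its backup and mock-recovery schedule. The level numbers, asset names and frequencies below are purely illustrative, not prescribed by the standard.

```python
# Illustrative mapping of protection levels to backup requirements.
BACKUP_POLICY = {
    5: {"failover": True,  "offsite": True,  "backups": ["daily", "weekly"], "mock_recovery": "weekly"},
    4: {"failover": False, "offsite": False, "backups": ["daily", "weekly"], "mock_recovery": "weekly"},
    3: {"failover": False, "offsite": False, "backups": ["weekly"],          "mock_recovery": "monthly"},
}

# Hypothetical asset register classified by protection level.
ASSETS = {
    "code_repository": 5,
    "financial_data": 5,
    "employee_email": 4,
    "sales_reports": 3,
}

for asset, level in ASSETS.items():
    print(asset, "->", BACKUP_POLICY[level])
```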

Mock recoveries are conducted to make sure that the restoration process works well in the event of an actual failure. The extent and frequency of backups should reflect the business requirements of the organization. Ask questions such as: if this information were lost, could we restore a week-old copy? Will the information change much during a week or a month? Also consider the criticality of the information to the continued operation of the organization. Perhaps the information doesn’t change much during the week, but it has to be restored with zero downtime in the event of a failure; in such conditions, a fail-over solution would be ideal. In the case of critical systems, the backup should cover all system information, applications and data necessary to recover the complete system in the event of a disaster.

3) Log the backups and restorations
Accurate and complete records should be maintained of the backup process and the backup copies. This helps to track who performed the last backup and when. Logs should also be maintained for the mock restorations, to track whether the restorations were successful. Mock restorations help discover flaws in the backup process, for example that not all files were backed up or that a backup script was faulty.
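A minimal way to keep this kind of record is an append-only log with one entry per backup or mock-restoration attempt; the field names and log file path below are assumptions rather than requirements of the control.

```python
import json
import getpass
from datetime import datetime, timezone

def log_backup_event(job_name, action, success, detail="", log_path="backup_log.jsonl"):
    """Append one auditable record per backup or restoration attempt."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": getpass.getuser(),   # who ran the job
        "job": job_name,
        "action": action,                # "backup" or "mock_restore"
        "success": success,
        "detail": detail,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Hypothetical usage after a failed mock restoration.
log_backup_event("financial_data", "mock_restore", False,
                 detail="3 files missing from backup set")
```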

4) Backups should be stored in a remote location
Backups should be stored in a remote location, at a sufficient distance to escape any damage from a disaster at the main site. What remains to be decided is the mode of such storage: whether it has to be a fail-over server, or whether the information can simply be stored on tape drives. Consider the security requirements of the information involved. Is it safe to replicate the information to another off-site location? Perhaps your agreement with your clients doesn’t allow you to transfer the information to another location.

5) Secure your Backups
Backup information should be given an appropriate level of physical and environmental protection. This means that whatever controls you apply to media at the main site should be extended to cover the backup site. In certain cases, where confidentiality is of importance, the backups should be protected by means of encryption.

6) Test your Backups
OK, you did everything great so far, backing up your information. Now assume that the backups don’t restore well during an emergency: the entire effort of backing up information goes down the drain. Make sure that the backup media are regularly tested to ensure that they can be relied upon for emergency use when necessary. Use mock restoration procedures so that you are sure the backups are effective. Also ensure that backups can be restored within the time allocated for recovery. For example, if the operational procedure for recovery allows 2 hours, make sure that the backup can be effectively restored in 2 hours. Of course, it goes without saying that the mock restoration procedures should be logged.
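A mock restoration can be wrapped in a simple timer so that every test is automatically measured against the agreed recovery time. In this sketch, `restore_from_backup` is a placeholder for whatever restoration procedure the organisation actually uses, and the 2-hour objective is only an example.

```python
import time

RECOVERY_TIME_OBJECTIVE_SECONDS = 2 * 60 * 60  # example: a 2-hour recovery window

def timed_mock_restore(restore_from_backup):
    """Run a restore callable and report whether it met the recovery time objective."""
    start = time.monotonic()
    restore_from_backup()                        # placeholder for the real restoration steps
    elapsed = time.monotonic() - start
    within_rto = elapsed <= RECOVERY_TIME_OBJECTIVE_SECONDS
    print(f"Restore took {elapsed:.0f}s; within RTO: {within_rto}")
    return within_rto
```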

7) How long should backups be retained?
Backups should be retained for as long as the organization determines that the information is useful. Backup media is cheap, and the hours required to clean out the data may be more expensive; in most cases it may be cheaper to retain the backups. In effect, the organization needs to decide the retention period, as well as any requirement for archive copies to be permanently retained.
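Once a retention period has been agreed, expired backup files can be identified with a short script and then deleted after review. The directory layout and the 365-day period are assumptions; legal and regulatory retention duties should always be checked before anything is removed.

```python
import os
import time

RETENTION_DAYS = 365  # example retention period agreed by the organisation

def expired_backups(backup_dir, retention_days=RETENTION_DAYS):
    """List backup files older than the agreed retention period."""
    cutoff = time.time() - retention_days * 86400
    expired = []
    for name in os.listdir(backup_dir):
        path = os.path.join(backup_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            expired.append(path)
    return expired

# Review the returned list (and any archive requirements) before deleting anything.
```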

ISO 27001:2022 A 8.7 Protection against malware

Malware – short for malicious software – is software that infects your computer so that cyber criminals can infiltrate or damage your system or device. A cyber criminal may use malware to steal information or carry out malicious activities. Malware represents the single largest threat to business continuity and information security faced by businesses in the digital age. The global commercial community faces innumerable daily threats from a broad range of attack vectors that seek to gain unauthorized access to sensitive systems and data, extract information and money, dupe unassuming employees and leverage ransomed data for extortionate sums of money. An organisation’s approach to malware protection should therefore be front and centre of any information security policy: an array of measures that helps organisations to educate their employees about the dangers of malicious software, and to implement meaningful practical measures that stop internal and external attacks before they have a chance to cause disruption and data loss. Downloading programs is the most common way to infect your device with malware. For example, you may download a software application that looks legitimate but is actually malware designed to hack your computer. But direct downloads while browsing websites aren’t the only way you can get malware; you might also infect your computer or device by opening or downloading attachments or clicking on links in emails or text messages.

Control

Protection against malware should be implemented and supported by appropriate user awareness.

Purpose

To ensure information and other associated assets are protected against malware.

ISO 27002 Implementation Guidance

Protection against malware should be based on malware detection and repair software, information security awareness, appropriate system access and change management controls. Use of malware detection and repair software alone is not usually adequate. The following guidance should be considered:

  1. implementing rules and controls that prevent or detect the use of unauthorized software [e.g. application allow listing (i.e. using a list of allowed applications)] (a brief allow-listing sketch follows this list);
  2. implementing controls that prevent or detect the use of known or suspected malicious websites (e.g. blocklisting);
  3. reducing vulnerabilities that can be exploited by malware (e.g. through technical vulnerability management);
  4. conducting regular automated validation of the software and data content of systems, especially for systems supporting critical business processes; investigating the presence of any unapproved files or unauthorized amendments;
  5. establishing protective measures against risks associated with obtaining files and software either from or via external networks or on any other medium;
  6. installing and regularly updating malware detection and repair software to scan computers and electronic storage media. Carrying out regular scans that include:
    • 1) scanning any data received over networks or via any form of electronic storage media, for malware before use;
    • 2) scanning email and instant messaging attachments and downloads for malware before use. Carrying out this scan at different places (e.g. at email servers, desktop computers) and when entering the network of the organization;
    • 3) scanning web pages for malware when accessed;
  7. determining the placement and configuration of malware detection and repair tools based on risk assessment outcomes and considering:
    • 1) defense in depth principles where they would be most effective. For example, this can lead to malware detection in a network gateway (in various application protocols such as email, file transfer and web) as well as user endpoint devices and servers;
    • 2) the evasive techniques of attackers (e.g. the use of encrypted files) to deliver malware or the use of encryption protocols to transmit malware;
  8. taking care to protect against the introduction of malware during maintenance and emergency procedures, which can bypass normal controls against malware;
  9. implementing a process to authorize the temporary or permanent disabling of some or all measures against malware, including exception approval authorities, documented justification and review date. This can be necessary when the protection against malware causes disruption to normal operations;
  10. preparing appropriate business continuity plans for recovering from malware attacks, including all necessary data and software backup (including both online and offline backup) and recovery measures;
  11. isolating environments where catastrophic consequences can occur;
  12. defining procedures and responsibilities to deal with protection against malware on systems, including training in their use, reporting and recovering from malware attacks;
  13. providing awareness or training to all users on how to identify and potentially mitigate the receipt, sending or installation of malware-infected emails, files or programs [the information collected in 14) and 15) can be used to ensure awareness and training are kept up to date];
  14. implementing procedures to regularly collect information about new malware, such as subscribing to mailing lists or reviewing relevant websites;
  15. verifying that information relating to malware, such as warning bulletins, comes from qualified and reputable sources (e.g. reliable internet sites or suppliers of malware detection software) and is accurate and informative.
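As a concrete illustration of the allow-listing idea in item 1, the sketch below checks an executable's SHA-256 digest against a list of approved hashes before it is allowed to run. The hash value and file path are placeholders, and a production allow-listing control would normally be enforced by the operating system or endpoint tooling rather than a standalone script.

```python
import hashlib

# Hypothetical allow list: SHA-256 digests of executables approved for use.
ALLOWED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest only
}

def is_allowed(path):
    """Return True if the file's SHA-256 digest appears on the allow list."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest() in ALLOWED_HASHES

# Hypothetical usage:
# if not is_allowed("/usr/local/bin/new_tool"):
#     print("Blocked: executable is not on the approved application list")
```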

Other information

It is not always possible to install software that protects against malware on some systems (e.g. some industrial control systems). Some forms of malware infect computer operating systems and computer firmware such that common malware controls cannot clean the system, and a full re-imaging of the operating system software, and sometimes the computer firmware, is necessary to return to a secure state.

It’s rare for modern hackers to physically enter building premises, because of the risk of being caught. Physical facility controls have a limited purpose in information security: they simply provide a local barrier against physical intrusion. These localized barriers protect against common crimes by persons entering and leaving the facility. With the advent of the Internet, a smaller percentage of criminals will take the risk of committing a physical crime. The new persistent threat is the electronic attack. A hacker can commit the crime at a safe distance without fear of being physically caught. Attacks may originate from anywhere in the world or even be sponsored by a foreign government to gather intelligence data. Technical controls to protect against electronic attacks are usually spotty and inconsistent because of a lack of awareness of specific threats or lopsided implementations. It is very easy for technical staff to inadvertently focus on only a few areas, thereby neglecting serious threats that still exist in others. Technical threats against software are usually difficult for lay-persons to visualize in the physical world. The adage “out of sight, out of mind” also means “outside of the budget”. Let’s take a moment to understand how these electronic threats manifest in practice.

  1. Malware: This title refers to every malicious software program ever created, whether it exploits a known vulnerability or creates its own. There are so many different ones, it’s easier to just call the entire group by the title of malware. The king of the malware threat is known as the Trojan horse.
  2. Trojan Horse: A revised concept of the historical Trojan horse has been adapted to attack computers. In a tale from the Trojan war, soldiers hid inside a bogus gift known as the Trojan horse. The unassuming recipients accepted the horse and brought it inside their fortress, only to be attacked by enemy soldiers hiding within. Malicious programs frequently use the Trojan horse concept to deliver viruses, worms, logic bombs, and other rootkits through downloaded files.
  3. Virus: The goal of a virus is to disrupt operations. Users inadvertently download a program built like a Trojan horse containing the virus. The attacker’s goal is usually to damage your programs or data files. Viruses append themselves to the end-of-file (EOF) marker on computerized files.
  4. Internet Worm: An Internet worm operates in a similar manner to the Trojan or virus, with one major exception. Worm programs can freely travel between computers because they exploit unprotected data transfer ports (software programming sockets) to access other systems. Internet worms started by trying to access the automatic update (file transfer) function through software ports with poor authentication or no authentication mechanism. It is the responsibility of the IS programmer to implement the security of the ports and protocols. IT technicians for hardware and operating system support cannot fix poor programming implementations. For IT technicians, the only choice is to disable software ports, but that won’t happen if the programmer requires the port left open for the user’s application program to operate.
  5. Logic Bomb: The concept of the logic bomb is designed around a dormant program code that is waiting for a trigger event to cause detonation. Unlike a virus or worm, logic bombs do not travel. The logic bomb remains in one location, awaiting detonation. Logic bombs are difficult to detect. Some logic bombs are intentional, and others are the unintentional result of poor programming. Intentional logic bombs can be set to detonate after the perpetrator is gone.
  6. Time Bomb: Programmers can install time bombs in their program to disable the software upon a predetermined date. Time bombs might be used to kill programs on symbolic dates such as April Fools’ Day or the anniversary of a historic event. Free trial evaluation versions of software use the time bomb mechanism to disable their program after 30–60 days with the intention of forcing the user to purchase a license. Time bombs can be installed by the vendor to eliminate perpetual customer support issues by forcing upgrades after a few years. The software installation utility will no longer run or install, because the programmer’s time bomb setting disabled the program. Now when trying to run the software, a message directs the user to contact customer support to purchase an upgrade. Hackers use the same technique to disrupt operations.
  7. Trapdoor: Computer programmers frequently install a shortcut, also known as a trapdoor, for use during software testing. The trapdoor is a hidden access point within the computer software. A competent programmer will remove the majority of trapdoors before releasing a production version of the program. However, several vendors routinely leave a trapdoor in a computer program to facilitate user support. Commercial encryption software began to change in 1996 with the addition of “key recovery” features. This is basically a trap door feature to recover lost encryption keys and to allow the government to read encrypted files, if necessary.
  8. RootKit: One of the most threatening attacks is the secret compromise of the operating system kernel. Attackers embed a rootkit into the downloadable software. This malicious software will subvert security settings by linking itself directly into the kernel processes, system memory, address registers, and swap space. Rootkits operate in stealth to hide their presence. Hackers designed rootkits to never display their execution as running applications. The system resource monitor does not show any activity related to the presence of the rootkit. After the rootkit is installed, the hacker has control over the system. The computer is completely compromised. Automatic update features use the same techniques as malicious rootkits to allow the software vendor to bypass your security settings. Vendors know that using the term rootkit may alarm users. The software agent is just another name for a rootkit.
  9. Brute-Force Attack: Brute force is the use of extreme effort to overcome an obstacle. For example, an amateur could discover the combination to a safe by dialling all of the 63,000 possible combinations. There is a mathematical likelihood that the actual combination will be found after trying fewer than one-third of the possible combinations. Brute-force attacks are frequently used against user login IDs and passwords. In one particular attack, all of the encrypted computer passwords are compared against a list of all the words of a language dictionary encrypted in the same way. Once a match is identified, the attacker uses the unencrypted word that produced the matching password. This is why it is important to use passwords that do not appear in any language dictionary.
  10. Denial of Service (DoS): Attackers can disable a computer by rendering legitimate use impossible. The objective is to remotely shut down a service by overloading the system or disabling the user environment (shell), thereby preventing the normal user from processing anything on the computer. Denial-of-service (DoS) attacks may look similar to the loss of service while your system is downloading and installing vendor updates: the message “please wait, installing update 6 of 41…” makes your system unavailable for an hour or more. That is exactly how DoS operates.
  11. Distributed Denial of Service (DDoS): The denial of service has evolved to use multiple systems for targeted attacks against another computer, to force its crash. This type of attack, distributed denial of service (DDoS), is also known as the reflector attack. Your own computer is being used by the hacker to launch remote attacks against someone else. Hackers start the attack from unrelated systems that the hacker has already compromised. The attacking computers and target are drawn into the battle—similar in concept to starting a vicious rumour between two strangers, which leads them to fight each other. The hackers sit safely out of the way while this battle wages.

Organisations should adopt an approach to malware protection that encompasses four key areas:

  • Anti-malware software
  • Organisational information security awareness (user training)
  • Controlled systems and account access
  • Change management

ISO categorically points out that it is a mistake to assume that anti-malware software alone represents an adequate set of measures. Control 8.7 instead asks organisations to take an end-to-end approach to malware protection that begins with user education and ends with a tightly controlled network that minimizes the risk of intrusion across a variety of attack vectors. To achieve this goal, organisations should implement controls that:

  1. Prevent the use of unauthorized software.
  2. Block traffic to malicious or inappropriate websites.
  3. Minimize the number of vulnerabilities resident on their network that have the potential to be exploited by malware or malicious actors.
  4. Carry out regular software audits that scan the network for unauthorised software, system amendments and/or data.
  5. Ensure that data and applications are obtained with minimal risk, either internally or as an external acquisition.
  6. Establish a malware detection policy that includes regular and thorough scans of all relevant systems and files, based upon the unique risks of each area to be scanned. Organisations should adopt a ‘defence in depth’ approach that encompasses endpoint devices and gateway controls, and takes into consideration a broad range of attack vectors (e.g. ransomware).
  7. Protect against intrusions that emanate from emergency procedures and protocols – especially during an incident or high-risk maintenance activities.
  8. Draft a process that allows for technical staff to disable some or all anti malware efforts, especially when such activities are hampering the organisation’s ability to do business.
  9. Implement a robust backup and disaster recovery (BUDR) plan that allows the organisation to resume operational activity as quickly as possible following disruption (see Control 8.13). This should include procedures that deal with software that cannot be covered by anti-malware tools (e.g. software on industrial machinery).
  10. Partition off areas of the network and/or digital and virtual working environments that may cause catastrophic disruption in the event of an attack.
  11. Provide all relevant employees with anti malware awareness training that educates users on a broad range of topics, including (but not limited to):
    • Social engineering
    • Email security
    • Installing malicious software
  12. Collect industry-related information about the latest developments in malware protection.
  13. Ensure that notifications about potential malware attacks (particularly from software and hardware vendors) originate from a trusted source and are accurate.

There are numerous ways to protect computers against malware and to remove it once present. No single method is enough to ensure a computer is secure; the more layers of defense, the harder it is for hackers to compromise the computer.

  1. Install a Firewall:
    A firewall acts as a security guard. There are two types of firewalls: a software firewall and a hardware firewall. Each serves similar, but different, purposes. A firewall is the first step in securing a computer: it creates a barrier between the computer and any unauthorized program trying to come in through the Internet. If you are using a system at home, keep the firewall turned on permanently; it will alert you to any unauthorized attempts to use your system.
  2. Install Antivirus Software:
    Antivirus software is another means of protecting the computer. It helps protect the computer from unauthorized code or software that poses a threat to the system, including viruses, keyloggers and trojans. Such software might slow down the processing speed of your computer, delete important files or access personal information. Even if your system is currently virus-free, install antivirus software to protect the system from further attack. Antivirus software plays a major role in real-time protection; its ability to detect threats helps keep computers and the information on them safe. Some advanced antivirus programs provide automatic updates, which further helps to protect the PC from newly created viruses.
  3. Install Anti-Spyware Software:
    Spyware is a software program that collects personal information, or information about an organization, without approval. This information is redirected to a third-party website. Spyware is designed to be difficult to remove. Anti-spyware software is dedicated solely to combating spyware. Like antivirus software, anti-spyware software offers real-time protection: it scans all incoming information and helps block the threat once detected.
  4. Check on the Security Settings of the Browser:
    Browsers have various security and privacy settings that you should review and set to the level you desire. Recent browsers give you the ability to tell websites not to track your movements, increasing your privacy and security.
  5. Use secure authentication methods.
    The following best practices help keep accounts safe:
    • Require strong passwords with at least eight characters, including an uppercase letter, a lowercase letter, a number and a symbol in each password (a minimal policy check is sketched after this list).
    • Enable multi-factor authentication, such as a PIN or security questions in addition to a password.
    • Use biometric tools like fingerprints, voiceprints, facial recognition and iris scans.
    • Never save passwords on a computer or network. Use a secure password manager if needed.
  6. Use administrator accounts only when absolutely necessary.
    Malware often has the same privileges as the active user. Non-administrator accounts are usually blocked from accessing the most sensitive parts of a computer or network system. Therefore:
    • Avoid using administrative privileges to browse the web or check email.
    • Log in as an administrator only to perform administrative tasks, such as to make configuration changes.
    • Install software using administrator credentials only after you have validated that the software is legitimate and secure.
  7. Keep software updated.
    No software package is completely safe against malware. However, software vendors regularly provide patches and updates to close whatever new vulnerabilities show up. As a best practice, validate and install all new software patches:
    • Regularly update your operating systems, software tools, browsers and plug-ins.
    • Implement routine maintenance to ensure all software is current and check for signs of malware in log reports.
  8. Control access to systems.
    There are multiple ways to regulate your networks to protect against data breaches:
    • Install or implement a firewall, intrusion detection system (IDS) and intrusion prevention system (IPS).
    • Never use unfamiliar remote drives or media that was used on a publicly accessible device.
    • Close unused ports and disable unused protocols.
    • Remove inactive user accounts.
    • Carefully read all licensing agreements before installing software.
  9. Adhere to the least-privilege model.
    Adopt and enforce the principle of least-privilege: Grant users in your organization the minimum access to system capabilities, services and data they need to complete their work.
  10. Limit application privileges.
    A hacker only needs an open door to infiltrate your business. Limit the number of possible entryways by restricting application privileges on your devices. Allow only the application features and functions that are absolutely necessary to get work done.
  11. Implement email security and spam protection.
    Email is an essential business communication tool, but it’s also a common malware channel. To reduce the risk of infection:
    • Scan all incoming email messages, including attachments, for malware.
    • Set spam filters to reduce unwanted emails.
    • Limit user access to only company-approved links, messages and email addresses.
  12. Monitor for suspicious activity.
    Monitor all user accounts for suspicious activity. This includes:
    • Logging all incoming and outgoing traffic
    • Baselining normal user activity and proactively looking for aberrations
    • Investigating unusual actions promptly
  13. Educate your users.
    At the end of the day, people are the best line of defense. By continually educating users, you can help reduce the risk that they will be tricked by phishing or other tactics and accidentally introduce malware into your network. In particular:
    • Build awareness of common malware attacks.
    • Keep users up to date on basic cybersecurity trends and best practices.
    • Teach users how to recognize credible sites and what to do if they stumble onto a suspicious one.
    • Encourage users to report unusual system behavior.
    • Advise users to only join secure networks and to use VPNs when working outside the office.
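A minimal check for the password rules listed under item 5 (at least eight characters with an uppercase letter, a lowercase letter, a digit and a symbol) might look like the sketch below. The exact rules, and whether length or passphrases should be weighted more heavily, should follow the organisation's own policy.

```python
import string

def meets_password_policy(password):
    """Check the illustrative policy: 8+ characters with upper, lower, digit and symbol."""
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_password_policy("Sunny-Day7"))   # True
print(meets_password_policy("password"))     # False
```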

ISO 27001:2022 A 8.17 Clock synchronization

The clocks of all relevant information processing systems within an organisation or security domain must be synchronized to a single reference time source. System clock synchronization is important, especially when evidencing events as part of an investigation or legal proceeding, as it is often impossible or very difficult to prove “cause and effect” if clocks are not synchronized correctly. Being able to rely upon pinpoint-accurate, synchronized time across an organisation’s information systems is of paramount importance, not only for the ongoing operation of a company’s commercial systems but also in the event of an ICT-related incident. Accurate time representation gives an organisation the ability to provide itself and any law enforcement or regulatory bodies with a reliable account of how information has been managed, along with the actions of its employees and vendors. All systems should be configured with the same time and date; otherwise, if an incident occurs and we want to trace what has happened across the different systems involved, it can be difficult if each one has a different time configuration. The ideal scenario is therefore for systems to have a synchronized time, which can be achieved in an automated manner with time servers (technically known as NTP servers, where “NTP” stands for Network Time Protocol, an internet protocol for synchronizing system clocks).

Control

The clocks of information processing systems used by the organization should be synchronized to approved time sources.

Purpose

To enable the correlation and analysis of security-related events and other recorded data, and to support investigations into information security incidents.

ISO 27002 Implementation Guidance

External and internal requirements for time representation, reliable synchronization and accuracy should be documented and implemented. Such requirements can come from legal, statutory, regulatory, contractual, standards and internal monitoring needs. A standard reference time for use within the organization should be defined and considered for all systems, including building management systems, entry and exit systems and others that can be used to aid investigations. A clock linked to a radio time broadcast from a national atomic clock or global positioning system (GPS) should be used as the reference clock for logging systems, providing a consistent, trusted date and time source to ensure accurate time-stamps. Protocols such as network time protocol (NTP) or precision time protocol (PTP) should be used to keep all networked systems in synchronization with a reference clock. The organization can use two external time sources at the same time in order to improve the reliability of external clocks, and appropriately manage any variance. Clock synchronization can be difficult when using multiple cloud services or when using both cloud and on-premises services. In this case, the clock of each service should be monitored and the difference recorded in order to mitigate risks arising from discrepancies.

Other information

The correct setting of computer clocks is important to ensure the accuracy of event logs, which can be required for investigations or as evidence in legal and disciplinary cases. Inaccurate audit logs can hinder such investigations and damage the credibility of such evidence.

Organisations should establish a standard reference time that can be used across all commercial, logistical and maintenance-based systems as a trusted date and time source for all the organisation’s needs. Organisations should:

  • Draft internal and external requirements for three aspects of clock synchronisation:
    • Time representation
    • Reliable synchronisation
    • Accuracy

When addressing said requirements, organisations should address their needs from 6 separate angles:

  • Legal
  • Statutory
  • Regulatory
  • Contractual
  • Standards
  • Internal monitoring

Make use of a radio time broadcast linked to an atomic clock as a singular reference point, alongside the implementation of key protocols (NTP, PTP) to ensure adherence across the network. Consider managing two separate time sources to improve redundancy.

A distributed system is a collection of computers connected via a high-speed communication network. In a distributed system, the hardware and software components communicate and coordinate their actions by message passing. Each node in a distributed system can share its resources with other nodes, so resources need to be allocated properly to preserve their state and to help coordinate the various processes. To resolve such conflicts, synchronization is used. Synchronization in distributed systems is achieved via clocks: physical clocks are used to adjust the time of nodes, and each node in the system can share its local time with other nodes. Time is set based on UTC (Coordinated Universal Time), which is used as the reference time for the nodes in the system. Clock synchronization can be achieved in two ways: external and internal clock synchronization.

  • External clock synchronization is the one in which an external reference clock is present. It is used as a reference and the nodes in the system can set and adjust their time accordingly.
  • Internal clock synchronization is the one in which each node shares its time with other nodes and all the nodes set and adjust their times accordingly.

There are two types of clock synchronization algorithms: centralized and distributed.

  • Centralized algorithms use a time server as a reference. The single time server propagates its time to the nodes, and all the nodes adjust their time accordingly. Because the approach depends on a single time server, if that node fails the whole system loses synchronization. Examples of centralized algorithms are the Berkeley algorithm and passive and active time-server approaches.
  • Distributed algorithms have no centralized time server. Instead, each node adjusts its time using its local time and the average of the time differences with other nodes. Distributed algorithms overcome the issues of centralized algorithms, such as scalability and the single point of failure. Examples of distributed algorithms are the global averaging algorithm, the localized averaging algorithm and NTP (Network Time Protocol).
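The internal, Berkeley-style approach can be sketched in a few lines: a coordinator gathers the clock readings of all nodes, computes the average, and tells each node how much to adjust by. The node names and readings below are invented, and real implementations also compensate for network delay, which is omitted here.

```python
def berkeley_adjustments(clock_readings):
    """clock_readings maps node name -> current clock value in seconds.

    Returns the adjustment each node should apply so that all nodes converge
    on the average time. Network delay compensation is deliberately omitted.
    """
    average = sum(clock_readings.values()) / len(clock_readings)
    return {node: average - reading for node, reading in clock_readings.items()}

# Hypothetical readings (seconds since the Unix epoch) from three nodes.
readings = {"node-a": 1_700_000_010.0, "node-b": 1_700_000_000.0, "node-c": 1_700_000_005.0}
print(berkeley_adjustments(readings))  # node-a moves back 5s, node-b forward 5s, node-c stays put
```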

Network Time Protocol (NTP) is an internet protocol used to synchronize computer clocks to time sources on a network. NTP was developed to coordinate time across the Internet and networked computers and is part of the TCP/IP suite. The term NTP applies to both the protocol and the client-server programs that run on computers. NTP implements a hierarchical system of time references. The hierarchy level, referred to as a stratum, represents the distance of a time synchronization server from a source reference clock: the lower the stratum, the closer the reference clock. Stratum 1 devices are connected directly to a hardware clock, such as a GPS or radio time source. The following three steps are involved in the NTP time synchronization process:

  • The NTP client initiates a time-request exchange with the NTP server.
  • The client is then able to calculate the link delay and its local offset and adjust its local clock to match the clock at the server’s computer.
  • As a rule, six exchanges over a period of about five to 10 minutes are required to initially set the clock.

Once synchronized, the client updates its clock about once every 10 minutes, usually requiring only a single message exchange.
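To illustrate the request/response exchange described above, the sketch below sends a single SNTP (NTP version 3, client mode) packet over UDP and reads the server's transmit timestamp. The server name is an example, network access is assumed, and production systems should rely on a proper NTP daemon rather than ad-hoc queries.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def query_ntp_time(server="pool.ntp.org", port=123, timeout=5):
    """Send one SNTP client request and return the server time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\x00"        # LI=0, VN=3, Mode=3 (client); remaining fields zeroed
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _addr = sock.recvfrom(512)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]  # integer part of the transmit timestamp
    return transmit_seconds - NTP_EPOCH_OFFSET

server_time = query_ntp_time()
print("approximate offset from local clock (s):", server_time - time.time())
```

Full NTP uses all four timestamps in the exchange, computing the offset as ((T2 - T1) + (T3 - T4)) / 2 and the round-trip delay as (T4 - T1) - (T3 - T2); the simplified sketch above ignores the link delay entirely.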

Precision Time Protocol (PTP) is a protocol that synchronizes clocks throughout a computer network and is used to synchronize clocks across different types of devices. PTP uses a master-slave hierarchy of time sources to provide synchronization: the system consists of one or more communication devices on a network, with a single grand master device responsible for the root timing reference. The grand master transmits synchronization information to the devices on the communication medium. Some of the features of PTP are:

  • It has an alternate time-scale functionality.
  • It uses a Grand Master clock to synchronize the communication.
  • It works on master-slave architecture.
  • It makes the path of communication traceable.

ISO 27001:2022 A 8.6 Capacity management

Capacity management is the broad term describing a variety of IT monitoring, administration and planning actions that are taken to ensure that a computing infrastructure has adequate resources to handle current data processing requirements, as well as the capacity to accommodate future loads. The primary goal of capacity management is to ensure that IT resources are right-sized to meet current and future business requirements in a cost-effective manner. Capacity management, in the context of ICT, isn’t limited to ensuring that organisations have adequate space on servers and associated storage media for data access and Backup and Disaster Recovery (BUDR) purposes. Organisations need to ensure that they have the ability to operate with a set of resources that caters to a broad range of business functions, including HR, information processing, and the management of physical office locations and attached facilities. All of these functions have the ability to adversely affect an organisation’s information management controls. The use of resources must be monitored and tuned, and projections made of future capacity requirements, to ensure the system performance required to meet the business objectives. Capacity management typically looks at three primary types of capacity: data storage capacity (e.g. in database systems and file storage areas); processing power capacity (e.g. adequate computational power to ensure timely processing operations); and communications capacity (often referred to as “bandwidth”, to ensure communications are made in a timely manner). Capacity management also needs to be both pro-active (for example, using capacity considerations as part of change management) and re-active (e.g. triggers and alerts for when capacity usage is reaching a critical point, so that timely increases, temporary or permanent, can be made).

Control

The use of resources should be monitored and adjusted in line with current and expected capacity requirements.

Purpose

To ensure the required capacity of information processing facilities, human resources, offices and other facilities.

ISO 27002 Implementation Guidance

Capacity requirements for information processing facilities, human resources, offices and other facilities should be identified, taking into account the business criticality of the concerned systems and processes. System tuning and monitoring should be applied to ensure and, where necessary, improve the availability and efficiency of systems. The organization should perform stress-tests of systems and services to confirm that sufficient system capacity is available to meet peak performance requirements. Detective controls should be put in place to indicate problems in due time. Projections of future capacity requirements should take account of new business and system requirements and current and projected trends in the organization’s information processing capabilities. Particular attention should be paid to any resources with long procurement lead times or high costs. Therefore, managers, service or product owners should monitor the utilization of key system resources. Managers should use capacity information to identify and avoid potential resource limitations and dependency on key personnel which can present a threat to system security or services and plan appropriate action. Providing sufficient capacity can be achieved by increasing capacity or by reducing demand. The following should be considered to increase capacity:
a) hiring new personnel;
b) obtaining new facilities or space;
c) acquiring more powerful processing systems, memory and storage;
d) making use of cloud computing, which has inherent characteristics that directly address issues of capacity. Cloud computing has elasticity and scalability which enable on-demand rapid expansion and reduction in resources available to particular applications and services.

The following should be considered to reduce demand on the organization’s resources:
a) deletion of obsolete data (disk space);
b) disposal of hardcopy records that have met their retention period (free up shelving space);
c) decommissioning of applications, systems, databases or environments;
d) optimizing batch processes and schedules;
e) optimizing application code or database queries;
f) denying or restricting bandwidth for resource-consuming services if these are not critical (e.g. video streaming).
A documented capacity management plan should be considered for mission critical systems.

The methodologies and processes used for IT capacity management may vary, but all of them require the ability to monitor IT resources closely enough to gather and measure basic performance metrics. With that data in hand, IT managers and administrators can set baselines for operations to meet a company’s processing needs. The baselines, or benchmarks, represent average performance over a specific period of time and can be used to detect deviations from those established levels. Capacity management tools measure the volumes, speeds, latencies and efficiency of the movement of data as it is processed by an organization’s applications. All facets of data’s journey through the IT infrastructure must be monitored, so capacity management must be able to examine the operations of all the hardware and software in an environment and capture critical information about data flow. Capacity planning is typically based on the results and analysis of the data gathered during capacity management activities. By examining performance variances over time, IT management can use those performance statistics to help develop models describing anticipated processing, which can be used for short- and long-term planning. By noting which particular resources are being stressed, current configurations can be appropriately revised and IT planners can assemble purchasing plans for hardware and software that will help meet future demands. Measurement and analysis tools must be able to observe the individual performances of IT assets, as well as how these assets interact. A comprehensive capacity management process should be able to monitor and measure the following IT elements (a small monitoring sketch follows the list):

  • Servers
  • End-user devices
  • Networks and related communications devices
  • Storage systems and storage network devices
  • Cloud services
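
As an illustration of the monitoring and baselining described above, the following minimal sketch samples basic capacity metrics, averages them into a baseline and reports deviations. It assumes the third-party psutil library is installed (pip install psutil); the sample counts and the 20% tolerance are illustrative choices, not values prescribed by ISO 27002.

import time
import psutil

def sample_metrics() -> dict:
    """Take one point-in-time sample of basic capacity metrics."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),      # averaged over 1 s
        "memory_percent": psutil.virtual_memory().percent,  # RAM in use
        "disk_percent": psutil.disk_usage("/").percent,     # root volume usage
    }

def build_baseline(samples: int = 10, pause: float = 5.0) -> dict:
    """Average several samples to form a simple operating baseline."""
    totals = {"cpu_percent": 0.0, "memory_percent": 0.0, "disk_percent": 0.0}
    for _ in range(samples):
        for key, value in sample_metrics().items():
            totals[key] += value
        time.sleep(pause)
    return {key: value / samples for key, value in totals.items()}

def deviations(baseline: dict, tolerance: float = 0.20) -> list[str]:
    """Report metrics deviating more than `tolerance` above the baseline."""
    current = sample_metrics()
    return [
        f"{key}: {current[key]:.1f}% vs baseline {baseline[key]:.1f}%"
        for key in baseline
        if current[key] > baseline[key] * (1 + tolerance)
    ]

if __name__ == "__main__":
    baseline = build_baseline(samples=3, pause=1.0)
    print("Baseline:", baseline)
    print("Deviations:", deviations(baseline) or "none")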

An organisation's ability to operate as a business on an ongoing basis depends upon the following:

  1. Organisations should consider business continuity as a top priority when implementing capacity management controls, including the wholesale implementation of detective controls that flag potential issues before they cause disruption.
  2. Capacity management should be based upon the proactive functions of tuning and monitoring. Both of these elements should work in harmony to ensure that systems and business functions are not compromised.
  3. In operational terms, organisations should perform regular stress tests that interrogate a system's ability to cater to overall business needs (see the load-test sketch after this list). Such tests should be formulated on a case-by-case basis and be relevant to the area of operation they are targeted at.
  4. Capacity management controls should not be limited to an organisation’s current data or operational needs, and should include any plans for commercial and technical expansion (both from a physical and digital perspective) in order to remain as future-proof as is realistically possible.
  5. Expanding organisational resources is subject to varying lead times and costs, depending on the system or business function in question. Resources that are more expensive and more difficult to expand should be subject to a higher degree of scrutiny, in order to safeguard business continuity.
  6. Senior Management should be mindful of single points of failure relating to a dependency on key personnel or individual resources. Should any difficulties arise with either of these factors, it can often lead to complications that are markedly more difficult to rectify.
  7. Organisations should formulate a capacity management plan that deals specifically with business-critical systems and business functions.
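
The following minimal load-test sketch illustrates the kind of stress test mentioned in point 3 above: it fires concurrent requests at a test endpoint and reports latency percentiles. The URL, worker count and request volume are assumptions for illustration; such tests should only be run against systems you are authorised to test, ideally outside production.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://test.example.internal/health"  # hypothetical test endpoint

def timed_request(url: str) -> float:
    """Return the response time of one GET request in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def stress_test(url: str, requests: int = 200, workers: int = 20) -> None:
    """Issue `requests` GETs with `workers` concurrent threads and print latency stats."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_request, [url] * requests))
    latencies.sort()
    print(f"median: {statistics.median(latencies):.3f}s")
    print(f"p95:    {latencies[int(len(latencies) * 0.95) - 1]:.3f}s")
    print(f"max:    {latencies[-1]:.3f}s")

if __name__ == "__main__":
    stress_test(TARGET)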

Organisations should take a dual-fronted approach to capacity management that either increases capacity or reduces demand upon a resource or set of resources. When attempting to increase capacity, organisations should:

  • Consider hiring new employees to carry out a business function.
  • Purchase, lease or rent new facilities or office space.
  • Purchase, lease or rent additional processing, data storage and RAM (either on-premise or cloud-hosted).
  • Consider using elastic and scalable cloud resources that expand with the computational needs of the organisation, with minimal intervention.

When attempting to reduce demand, organisations should:

  • Delete obsolete data to free up storage space on servers and attached media.
  • Securely dispose of any hard copies of information that the organisation no longer needs and is not required to retain, either by law or by a regulatory body.
  • Decommission any ICT resources, applications or virtual environments that are no longer required.
  • Scrutinise scheduled ICT tasks (including reports, automated maintenance functions and batch processes) to optimise memory resources and reduce the storage space taken up by outputted data.
  • Optimise any application code or database queries that are run on a regular enough basis to have an effect on the organisation’s operational capacity.
  • Limit the amount of bandwidth that is allocated to non-critical activities within the boundaries of the organisation’s network. This can include restricting Internet access and preventing video/audio streaming from work devices.
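
A minimal sketch of the first two demand-reduction measures above: removing archived files once they pass an agreed retention period. The directory and the retention period are illustrative assumptions and should be replaced by values from the organisation's own retention schedule; the dry-run default only reports what would be deleted.

import time
from pathlib import Path

ARCHIVE_DIR = Path("/srv/archive")          # hypothetical archive location
RETENTION_DAYS = 7 * 365                    # assumed retention period

def expired_files(root: Path, retention_days: int):
    """Yield files whose last modification is older than the retention period."""
    cutoff = time.time() - retention_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

def purge(root: Path, retention_days: int, dry_run: bool = True) -> int:
    """Delete (or, on a dry run, just count) expired files."""
    count = 0
    for path in expired_files(root, retention_days):
        count += 1
        if dry_run:
            print(f"would delete: {path}")
        else:
            path.unlink()
    return count

if __name__ == "__main__":
    print(f"{purge(ARCHIVE_DIR, RETENTION_DAYS)} file(s) past retention")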

Formal capacity management processes involve conducting system tuning, monitoring the use of present resources and, with the support of user planning input, projecting future requirements. Controls in place to detect and respond to capacity problems can help lead to a timely reaction. This is often especially important for communications networks and shared resource environments (virtual infrastructure), where sudden changes in utilization can result in poor performance and dissatisfied users. To address this, regular monitoring processes should be employed to collect, measure, analyze and predict capacity metrics, including disk capacity, transmission throughput and service/application utilization. Also, periodic testing of capacity management plans and assumptions (whether tabletop exercises or direct simulations) can help proactively identify issues that may need to be addressed to preserve a high level of availability for critical services.

Whether capacity management is achieved via software, hardware or manual means (or a combination of these), it relies on the interception of data movement metrics and the internal processes of individual components. Capacity management could have a fairly narrow scope, providing high-level information on a variety of infrastructure components or, perhaps, providing detailed metrics related to one segment of the computing environment. The trend, however, is to gather as much information as possible and then to attempt to correlate those measurements into an application-centric picture that focuses on the performance and requirements of mission-critical applications across the environment, rather than how individual components are performing. Still, to achieve that application-centric view of capacity management, virtually all elements of the IT infrastructure must be monitored, and the definition of capacity must be broad enough to consider the impact an application will have on processing power, memory, storage capacity and speed for all physical and software components comprising an infrastructure.

  • Performance is a key metric in capacity management, as it may point to processing bottlenecks that affect overall application processing performance. The central processing unit (CPU) in servers and other connected devices, such as routers, storage and controllers, should be monitored to ensure that their processing capabilities are not frequently “pinning” at or near 100%. An overtaxed processor is a candidate for upgrading.
  • Memory is also a factor in capacity management. Servers and other devices use their installed memory to run applications and process data — if too little memory is installed, processing will slow down. It’s relatively easy to determine if a server has adequate memory resources, but it’s also important to monitor other devices in the environment to ensure that insufficient memory doesn’t turn them into processing bottlenecks.
  • Physical space is what is most commonly associated with capacity management, with the focus generally on storage space for applications and data. Storage systems that are near capacity will have longer response times, as it takes longer to locate specific data when drives — hard disk or solid-state — are full or nearly full. As with processor and memory measurements, it’s important to monitor space usage in devices other than servers and end-user PCs that may have installed storage that’s used for caching data.

Capacity management in networking
Managing the capacity of IT networks can be a complex process given the number of different networking elements that can be found in an enterprise environment. The number and type of networks being monitored is likely to vary as well. In addition to the wired and wireless Ethernet-based network infrastructure that connects servers to storage, end-user devices, networking gear, etc., comprehensive network capacity management must also consider dedicated storage networks based on Fibre Channel technologies; the FC networks are likely to be physically isolated from other data networks and will require different tools for monitoring and management. External networking should also be monitored. Again, different tools will be required to track traffic and performance for network connections to remote offices and users, the internet and to cloud services. The networking devices that should be monitored include network interface cards (NICs), network switches, network routers, storage network interfaces (e.g., host bus adapters), storage network switches and optical network devices. Although capacity management for networks doesn’t directly address security, it can be a good method of keeping track of network access, which can help inform security procedures.

Benefits of capacity management
Capacity management provides many benefits to an IT organization and is a factor in the overall management of a computing infrastructure. In addition to ensuring that systems are performing at adequate levels to achieve a company's goals, capacity management can often realize cost savings by avoiding over-provisioning of hardware and software resources. It can also help save money and time by identifying extraneous activities like backing up unused data or maintaining idle servers. Good capacity management can also result in more effective purchasing to accommodate future growth by being able to more accurately anticipate needs and, thus, make purchases when prices may be lower. By constantly monitoring equipment and processing, problems that might have hindered production, such as bottlenecks or imminent equipment failures, may be avoided.

Components of capacity management
The activities that support the capacity management process are crucial to the success and maturity of the process. Some of these are done on an ongoing basis, some daily, some weekly, and some at a longer, regular interval. Some are ad-hoc, based on current (or future) needs or requirements. Let’s look at those:

  • Monitoring – Keeping an eye on the performance and throughput or load on a server, cluster, or data center is extremely important. Not having enough headroom can cause performance issues. Having too much headroom can create larger-than-necessary bills for hardware, software, power, etc.
  • Analysis – Taking that measurement and monitoring data and drilling down to see the potential impact of changes in demand. As more and more data become available, having the tools needed to find the right data and make sense of it is very important.
  • Tuning – Determining the most efficient use of existing infrastructure should not be taken lightly. A lot of organizations have over-configured significant parts of the environment while under-configuring others. Simply reallocating resources could improve performance while keeping spend at current levels.
  • Demand Management – Understanding the relationship of current and future demand and how the existing (or new) infrastructure can handle this is incredibly important. Predictive analytics can provide decision support to IT management. Also, moving non-critical workloads to quieter periods can delay purchase of additional hardware (and all the licenses and other costs that go with it).
  • Capacity Planning – Determining the requirements for resources required over some future time. This can be done by predictive analysis, modeling, benchmarking, or other techniques – all of which have varying costs and levels of effectiveness.
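
To illustrate the Capacity Planning activity above, the sketch below fits a straight line to historical disk-usage samples and projects when a volume will be full. The sample data and the 2 TB capacity figure are invented for illustration; a real projection would use the metrics gathered during capacity management.

CAPACITY_GB = 2000.0

# (day number, used GB) pairs, e.g. one sample per week over ten weeks
history = [(0, 1200), (7, 1235), (14, 1260), (21, 1310), (28, 1340),
           (35, 1370), (42, 1415), (49, 1450), (56, 1480), (63, 1525)]

def linear_fit(points):
    """Least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = linear_fit(history)             # GB per day, starting level
days_to_full = (CAPACITY_GB - intercept) / slope
print(f"growth: {slope:.1f} GB/day; projected full in ~{days_to_full:.0f} days")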

ISO 27001:2022 A 8.32 Change management

Changes to information systems such as replacing a network device, creating a new database instance, or upgrading software are often necessary for improved performance, reduced costs, and higher efficiencies. However, those changes to information processing facilities and systems, if not implemented properly, may lead to the compromise of information assets stored in or processed by these facilities. Risks (seen from an information security point of view) arise when changes are performed in an uncontrolled way, i.e., the confidentiality, integrity, and availability of systems, applications, information, etc. could easily be endangered. Organisations can establish and apply change management procedures to monitor, review and control changes made to information processing facilities and systems. Change management requires that changes to the organization, business processes, information processing facilities, and systems that affect information security are controlled. Change management is a structured process for reviewing proposed IT system or service changes. This process occurs prior to implementing the requested change on an organization's network, thus minimizing or eliminating network outages. IT change management is necessary to ensure any changes to the network will not degrade its performance. Any change to the network should be a defined, purposeful action to eliminate a found vulnerability, upgrade a component on the network for improved performance, or replace a currently obsolete or faulty network component. Changes can be categorized into three types:

  1. Standard Changes: Standard changes are routine, and follow a pre-established process regarding risk analysis and pre-approvals. These changes are vetted processes that have been pre-approved for execution. Examples of standard changes include the following:
    • Upgrading RAM or hard drive size
    • Replacing a failing network device
    • Creating a new database instance
  2. Normal Changes: Normal changes do not have a pre-established process. A risk analysis and deployment plan must be submitted for approval prior to implementing these changes on the IT network. Examples of normal changes include the following:
    • Upgrading to a new compliance management system
    • Upgrading network devices for improved performance
    • Relocating a server farm
  3. Emergency Changes: Emergency changes are required when an unplanned outage has occurred, or is likely to occur, due to a discovered vulnerability that poses a significant threat to the network. Examples of emergency changes are the following:
    • Installing a security patch
    • A network device outage
    • Recovering from a major incident (i.e. fiber strand cut)
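
A minimal sketch of how the three change categories might be represented and routed to different approval paths. The class names and messages are illustrative only and do not represent a prescribed workflow.

from dataclasses import dataclass
from enum import Enum, auto

class ChangeType(Enum):
    STANDARD = auto()   # pre-approved, routine (e.g. RAM upgrade)
    NORMAL = auto()     # needs risk analysis and approval before deployment
    EMERGENCY = auto()  # expedited response to an outage or critical flaw

@dataclass
class ChangeRequest:
    summary: str
    change_type: ChangeType

def approval_path(change: ChangeRequest) -> str:
    """Return the approval route implied by the change category."""
    if change.change_type is ChangeType.STANDARD:
        return "pre-approved: execute per the established procedure"
    if change.change_type is ChangeType.NORMAL:
        return "submit risk analysis and deployment plan for approval"
    return "implement immediately, then complete a post-change security review"

print(approval_path(ChangeRequest("Install security patch", ChangeType.EMERGENCY)))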

Changes in technology are very frequent, and so are changes that affect our ISMS (not only for the sake of improvements, but also in daily business). But, if we don’t manage them according to a procedure, we might find surprises that can (often) involve an information security incident or an interruption of the business, which can also affect our customers. So, if you manage the changes, I am sure that you can improve your organization, because managing activities in any type of business is the best way to improve it – which also means that controlling the changes decreases the headaches and the costs.

Control

Changes to information processing facilities and information systems should be subject to change management procedures.

Purpose

To preserve information security when executing changes.

ISO 27002 Implementation Guidance

Introduction of new systems and major changes to existing systems should follow agreed rules and a formal process of documentation, specification, testing, quality control and managed implementation. Management responsibilities and procedures should be in place to ensure satisfactory control of all changes. Change control procedures should be documented and enforced to ensure the confidentiality, integrity and availability of information in information processing facilities and information systems, for the entire system development life cycle from the early design stages through all subsequent maintenance efforts. Wherever practicable, change control procedures for ICT infrastructure and software should be integrated. The change control procedures should include:

  1. planning and assessing the potential impact of changes considering all dependencies;
  2. authorization of changes;
  3. communicating changes to relevant interested parties;
  4. tests and acceptance of tests for the changes (see 8.29);
  5. implementation of changes including deployment plans;
  6. emergency and contingency considerations including fall-back procedures;
  7. maintaining records of changes that include all of the above;
  8. ensuring that operating documentation (see 5.37) and user procedures are changed as necessary to remain appropriate;
  9. ensuring that ICT continuity plans and response and recovery procedures are changed as necessary to remain appropriate.

Other information

Inadequate control of changes to information processing facilities and information systems is a common cause of system or security failures. Changes to the production environment, especially when transferring software from development to the operational environment, can impact the integrity and availability of applications. Changing software can impact the production environment and vice versa. Good practice includes the testing of ICT components in an environment segregated from both the production and development environments. This provides a means of having control over new software and allows additional protection of operational information that is used for testing purposes. This should include patches, service packs and other updates. The production environment includes operating systems, databases and middleware platforms. The control should be applied for changes to applications and infrastructure.

Change management processes are essential for ensuring that risks associated with significant revisions to software, systems, and key processes are identified, assessed, and weighed in the context of an approval process. It is critical that information security considerations be included as part of the change review and approval process alongside other objectives such as support and service level management. Change management is a broad subject; however, some important considerations from an information security perspective include:

  • Helping to ensure that changes are identified and recorded.
  • Assessing and reporting on information security risks relevant to proposed changes.
  • Helping classify changes according to the overall significance of the change in terms of risk.
  • Helping establish or evaluate planning, testing, and “back out” steps for significant changes.
  • Helping ensure that change communication is handled in a structured manner.
  • Ensuring that emergency change processes are well defined and communicated, and that a security evaluation of these changes is also performed post-change.

All major changes to information systems and the introduction of new systems should be subject to an agreed set of rules and procedures. These changes should be formally specified and documented. Furthermore, they should go through the testing and quality control processes. To ensure that all changes comply with the change control rules and standards, organisations should assign management responsibilities to appropriate management and should set out the necessary procedures.

  1. Planning and measuring the likely impact of proposed changes, taking into account all dependencies.
  2. Implementing authorisation controls for changes.
  3. Informing relevant internal and external parties about the planned changes.
  4. Establishing and implementing testing and acceptance testing processes for changes.
  5. Defining how the changes will be implemented, including how they will be deployed in practice.
  6. Establishing emergency and contingency plans and procedures, including a fall-back procedure.
  7. Keeping records of all changes and related activities, including all of the activities listed above (1 to 6).
  8. Reviewing and updating operating documentation and user procedures to reflect the changes.
  9. Reviewing and revising ICT continuity plans and recovery and response procedures to reflect the changes.
  10. Integrating change control procedures for software and ICT infrastructure to the maximum extent possible.

Changes to production environments such as operating systems and databases may compromise the integrity and availability of applications, particularly during the transfer of software from the development to the production environment. Another risk that organisations should guard against is that changing software in the production environment may have unintended consequences. To prevent these risks, organisations should perform tests on ICT components in an environment isolated from the development and production environments. This will enable organisations to have greater control over new software and will provide an extra layer of protection for real-world data used for testing purposes. This control should also cover patches, service packs and other updates. Steps to manage change include:

1. Request for change: Each change can be initiated as a Request – better known as a “Request for Change” or “RFC.” This request will also serve as a record and as evidence that a particular change has been requested. The change can be initiated internally (by an employee) or externally (by a customer), and will be registered in a specific form. Changes may affect assets of the organization (hardware, software, networks, etc.), but can also affect processes, services, agreements, etc. Therefore, it is important that detailed information about the type of change is recorded in the RFC. It is also important to record more information, such as the person requesting the change, the date, the department (or interested party) affected, etc.
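
A minimal sketch of the kind of RFC record described above, capturing the requester, date, affected assets and interested parties. The field names are illustrative assumptions; in practice this information is normally held in a ticketing or ITSM tool rather than ad-hoc code.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RequestForChange:
    summary: str                       # what is being changed and why
    requested_by: str                  # internal employee or external customer
    affected_assets: list[str]         # hardware, software, networks, ...
    affected_parties: list[str]        # departments or other interested parties
    requested_on: date = field(default_factory=date.today)
    rfc_id: str = ""                   # assigned when added to the change register

rfc = RequestForChange(
    summary="Upgrade production database server OS",
    requested_by="IT Operations",
    affected_assets=["db-prod-01"],
    affected_parties=["Finance", "Customer Support"],
)
print(rfc)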

2. Approval process: The RFC is received by a person who is responsible for analyzing it, so this person is the first filter. This person is only responsible for studying the details of the request and identifying the potential impact to the business, including economic impacts and impacts related to information security (e.g., if the change is to upgrade the operating system of a server that is in the production environment – that can be critical for the business). Further on, another person (typically the person responsible for changes, e.g., IT Manager or Change Manager), based on the information generated previously, will decide if the change is approved or rejected. For that decision, it is important to consider all the implications that the change may have, including internal ones (departments, compliance with information security requirements, objectives, etc.) as well as external ones (customers, suppliers, etc.). Finally, if the change is approved, another person (typically appointed for change implementation, e.g., Project Manager) is responsible for planning the change and its implementation. That same person will also plan tests that allow for checking that changes are performed in the correct way. These three persons can be the same person (this may be recommended for small companies), although it is recommended that they are different in bigger companies, because this makes it possible to separate roles/functions. Finally, not all changes are equally important, so it is necessary to classify them (for example: Low, Medium, and High). This classification can be based on the impacts to the business and to the ISMS.

3. Communication: It is also important that the company (for example, through the person responsible for changes) keeps in contact with the person who initiated the change, or interested parties involved in the change (stakeholders, users, customers, public, etc.), because they must be informed of every decision or action that is carried out in relation to the change that is being managed. These communications can be via phone or email (in order to be registered), meetings, etc.

4. Fall-back and emergency changes: Another important issue to consider is when an error takes place during the implementation of the change. In this case, it is important to have a fall-back procedure to return to the previous state. The person responsible for executing the fall-back procedure can be the same person responsible for the change implementation. Finally, this fall-back procedure can be defined during the planning-for-implementation step, establishing what needs to be done to return to the previous state. For example: the Windows 10 operating system is updated to Windows 11, but one application fails (we can think of this as an information security incident, because we lost the availability of the system), so in this case it will be necessary to return to Windows 10.
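
A minimal sketch of the fall-back logic described in this step: attempt the change, verify it, and roll back to the previous state when verification fails. The apply/verify/rollback callables are placeholders for real deployment and restore steps.

from typing import Callable

def execute_change(apply: Callable[[], None],
                   verify: Callable[[], bool],
                   rollback: Callable[[], None]) -> bool:
    """Run a change and fall back automatically when verification fails."""
    apply()
    if verify():
        return True                      # change accepted
    rollback()                           # return to the previous state
    return False                         # record as a failed change / incident

# Illustrative placeholders for an OS upgrade where one application then fails
ok = execute_change(
    apply=lambda: print("upgrading OS to the new version"),
    verify=lambda: False,                # simulated post-change test failure
    rollback=lambda: print("restoring previous OS image"),
)
print("change successful" if ok else "change rolled back")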

Changes to systems within the development lifecycle must be controlled by the use of formal change control procedures. System change control procedures should integrate with, be aligned to and support operational change control. Formal change management procedures are designed to reduce the risk of accidental or deliberate development of vulnerabilities that may allow systems to be compromised once the changes are put live. For system change control, it is important that the system owner understands what changes are being made to their system, why and by whom. It is their responsibility to ensure that their systems are not compromised through poor or malicious development. They should therefore be involved in setting the rules for authorization and pre-live testing and checking. Audit logs are required to provide evidence of the correct use of change procedures. There are many aspects of change control that should be included, ranging from simple documentation of the changes through to consideration of the timing of deployment to avoid negative business impact.

When operating platforms are changed, business-critical applications need to be reviewed and tested to ensure there is no adverse impact on organizational operations or security. When operating system platforms are changed, it is commonplace for some applications to have compatibility problems. It is therefore important to test operating system changes in a development or test environment where critical business applications can be checked for compatibility with the changed OS. Procedures for the control and testing of operating system changes should follow standard change management control.

Modifications to software packages should be discouraged, limited to necessary changes, and all changes should be strictly controlled. Vendor-supplied software packages are designed for the mass market and are not really designed for organizations making their own changes to them. In fact, most of the time the ability to make such changes is locked out by the vendor and customization is limited to within the package. Where open-source software is used, it is far more likely that changes can be made by the organization; however, this should be restricted and controlled to ensure that the changes made do not have an adverse impact on the internal integrity or security of the software.

Common Challenges

“Change Management Takes Too Much Time.” Change management processes are notoriously susceptible to becoming overly complex. Staff who conduct changes are more likely to attempt to bypass change management processes they feel are too burdensome by intentionally classifying their changes at low levels or even not reporting them. If you are starting a Change Management program it is often helpful to first focus on modelling large-scale changes and then working to find the right change level definitions which help balance risk reduction with operational agility and efficiency.

ISO 27001:2022 A 8.20 Networks security, A 8.21 Security of network services

The objective of this control is to ensure the protection of information in networks and its supporting information processing facilities. Communications encompass the breadth of digital data flows both within an organization and between external entities across network infrastructures. These flows now include data, voice, video, and all of their associated signaling protocols. Securing these information flows as they traverse Intranets, Extranets, and the Internet requires effective network infrastructure management as well as controls, policies, and procedures. This control provides guidance in planning, developing, and implementing the most essential elements of a Communications Security strategy.

Networks must be managed and controlled in order to protect information within systems and applications. Put in simple terms, the organization should use appropriate methods to ensure it is protecting any information within its systems and applications. These network controls should consider all operations of the business carefully, be adequately and proportionately designed, and be implemented according to business requirements, risk assessment, classifications, and segregation requirements as appropriate. Some examples of technical controls to consider include connection control and endpoint verification, firewalls and intrusion detection/prevention systems, access control lists, and physical, logical, or virtual segregation. It is also important to remember that when connecting to public networks or to networks of other organizations outside organizational control, the increased risk levels should be considered and managed with additional controls as appropriate. Bear in mind that the auditor will be looking to see that these implemented controls are effective and managed appropriately, including the use of formal change management procedures.

Security of network services in information security refers to the set of practices and technologies that are used to protect the various services that run on a network, such as email, web, file transfer, and database services. These services are often targeted by attackers because they are critical to the functioning of an organisation and are typically the entry point for attackers to gain access to the network.

A 8.20 Networks security

Control

Networks and network devices should be secured, managed and controlled to protect information in systems and applications.

Purpose

To protect information in networks and its supporting information processing facilities from compromise via the network.

ISO 27002 Implementation guidance

Controls should be implemented to ensure the security of information in networks and to protect connected services from unauthorized access. In particular, the following items should be considered:

  1. the type and classification level of information that the network can support;
  2. establishing responsibilities and procedures for the management of networking equipment and devices;
  3. maintaining up to date documentation including network diagrams and configuration files of devices (e.g. routers, switches);
  4. separating operational responsibility for networks from ICT system operations where appropriate;
  5. establishing controls to safeguard the confidentiality and integrity of data passing over public networks, third-party networks or wireless networks and to protect the connected systems and applications. Additional controls can also be required to maintain the availability of the network services and computers connected to the network;
  6. appropriately logging and monitoring to enable recording and detection of actions that can affect, or are relevant to, information security;
  7. closely coordinating network management activities both to optimize the service to the organization and to ensure that controls are consistently applied across the information processing infrastructure;
  8. authenticating systems on the network;
  9. restricting and filtering systems connection to the network (e.g. using firewalls);
  10. detecting, restricting and authenticating the connection of equipment and devices to the network;
  11. hardening of network devices;
  12. segregating network administration channels from other network traffic;
  13. temporarily isolating critical sub networks (e.g. with drawbridges) if the network is under attack;
  14. disabling vulnerable network protocols.

The organization should ensure that appropriate security controls are applied to the use of virtualized networks. Virtualized networks also cover software-defined networking (SDN, SD-WAN). Virtualized networks can be desirable from a security viewpoint, since they can permit logical separation of communication taking place over physical networks, particularly for systems and applications that are implemented using distributed computing.

Network security is a key component of an organisation's broader information security policy. Whilst several controls deal with individual elements of an organisation's LAN and WAN setup, this control is a series of broad protocols that deal with the concept of network security as a governing principle in all its various forms. Across all its general guidance points it is focused on two key aspects of network security: information security and protection from unauthorised access (particularly in the case of connected services). To achieve these two goals, organisations must:

  1. Categorise information across a network by type and classification, for ease of management and maintenance.
  2. Ensure that networking equipment is maintained by personnel with documented job roles and responsibilities.
  3. Maintain up to date information (including version controlled documentation) on LAN and WAN network diagrams and firmware/configuration files of key network devices such as routers, firewalls, access points and network switches.
  4. Segregate responsibilities for an organisation's network from standard ICT system and application operations, including the separation of administrative traffic from standard network traffic.
  5. Implement controls that facilitate the secure storage and transfer of data over all relevant networks (including third-party networks), and ensure the continued operation of all connected applications.
  6. Log and monitor any and all actions that directly impact information security as a whole across the network, or within individual elements.
  7. Coordinate network management duties to complement the organisation’s standard business processes.
  8. Ensure that all systems and relevant applications require authentication prior to operation.
  9. Filter traffic that flows through the network via a series of restrictions, content filtering guidelines and data rules.
  10. Ensure that all devices that connect to the network are visible, authentic and are able to be managed by ICT staff.
  11. Retain the ability to isolate business critical sub-networks in the event of a security incident.
  12. Suspend or disable network protocols that are either compromised or have become unstable or vulnerable.

Network security management includes the following controls:

  • Network controls that will ensure information communicated through the networks will be protected – for example, logging and monitoring of actions in the network, restrictions on connections to the network, authentication of the systems connected to the network, etc. Annex A doesn’t require documenting this control; however, in order to ensure effective network controls, responsibilities and procedures for managing network equipment can be documented.
  • Security of network services will be managed by defining network service agreements with relevant security parameters and requirements such as the implementation of firewall and intrusion detection systems and monitoring the performance of the network providers. This control should be documented by signing the network service agreements.
  • Segregation in networks is one of the methods to manage the security of networks. This means dividing the network into smaller separate networks that are easier to manage and protect. This division can be made based on the criticality of the domain (for public access, server domain, etc), based on the organizational departments (for example, for top management, for finance department, etc.), or some other combination suitable for the organization. ISO 27001 doesn’t require documenting this control.

Establishing Responsibility and Procedures for Network Management and Operations

Information flowing across networks cannot be secured without effective management of the physical and logical network infrastructure, including physical cabling, logical topologies, network devices, and network services. A centralized entity with appropriate responsibility and authority is generally the most effective way to ensure consistency and manageability across the organization’s Intranet and Extranets. In many organizations achieving a single point of responsibility and authority for all network infrastructure can be challenging. Management of network infrastructure includes network operations, which is a separate function from the data center or information processing operations. Network security operations are often another distinct function but must coordinate closely with network operations.

Network controls

The large scale and high complexity of modern networks in the modern organization contribute to a challenging environment for security professionals and network administrators. The fundamental aspects of network services and protocols were not designed with information confidentiality in mind. Network Controls have been designed and implemented to compensate for this lack of security and continue to evolve as threat actors and their attack methods become more sophisticated.

Methods of Attack

Before determining which controls should be implemented and in which order, it is helpful to understand the common methods of attack. Note that a risk management approach is recommended to fully analyze all threats and responses. The ultimate goal of attackers is to gain access to or modify data of value. Their targets are typically servers, workstations, or other computers connected to your University’s networks, but they will make use of networks, other computers, people, or any other tool to achieve their objectives. Their attack strategies typically involve some form of reconnaissance, followed by exploitation – attempts to bypass or disable network or host security controls by exploiting vulnerabilities, and finally data modification or exfiltration. A Denial of Service (DoS) is a specific type of attack designed to disrupt operations or make networks and systems unavailable.

Reconnaissance – Attackers use reconnaissance to discover networks, hosts, or vulnerabilities. A variety of freely available tools are available that allow scanning or probing of systems accessible to the Internet. In targeting a specific University, an attacker needs only to know publicly available information such as the range of IP subnets used by the school. Scanning often involves discovering what TCP or UDP ports are active on the various hosts within the University’s networks. Firewalls, IDS/IPS, network isolation, authentication, and logging are some of the tools network or security administrators use to limit or detect reconnaissance activities.

Exploitation – The protocols used across the Internet and within the Intranets and Extranets were designed for availability and openness rather than security and privacy. Attackers abuse and exploit the inherent lack of security of TCP/IP and the other various protocols and their associated network devices to their advantage. Their specific methods are numerous and varied, but can generally be categorized as follows:

  • Sniffing – intercepting and examining network traffic
  • Spoofing – impersonating a network host or user
  • Man-in-the-Middle – covertly impersonating an intermediary host or network service such that the parties on either end of the connection are unaware that their communications are being captured and possibly altered
  • Hijacking – taking over or re-routing one end of an otherwise valid communication between two parties
  • Replay attacks – using intercepted communications or authentication interactions to falsely authenticate
  • Password Cracking – using sophisticated or simple brute force attacks to guess weak passwords
  • System or Application exploitation – once an attacker is in contact with a system at any of the application layer protocols such as FTP, Telnet, SSH, HTTP, HTTPS, SNMP, and others, weaknesses in the Operating System or the applications can be exploited to gain unauthorized access

Data Modification and Exfiltration – Once access to systems or data is gained, the data can be modified or copied (exfiltrated). While data owners might quickly know if data is modified, data exfiltration can take place in relative secrecy unless there are sufficient monitoring and controls in place to detect it. Most Universities have reasonable protections in place to prevent or detect external attacks but are not as diligent in monitoring outbound traffic to detect confidential or sensitive data that is being copied by a successful attacker.

Control Types

Like other types of security controls, network controls can be categorized into various types, depending on their primary function.

Preventive controls seek to stop or prevent attacks or intrusions before they occur. Firewalls, Intrusion Prevention Systems, Web Gateways, and physical Isolation of network cabling and devices are all examples of preventive controls.

Detective Controls seek to detect attacks or intrusions in progress or after (ideally very soon after!) they have already taken place. Intrusion Detection Systems, Log collection and review, Security Information and Event Management (SIEM) systems, AntiVirus software, and video surveillance in data centers and communications facilities are examples of detective controls.

Administrative controls direct users – employees, faculty, students, contractors, and partners – to follow specific procedures. Examples include policies against connecting rogue hubs, switches, or routers to the network, the use of network traffic sniffers, unauthorized network services, and procedures for provisioning network access accounts.

Technical controls often enforce administrative controls, but can also limit or prevent network activity/traffic, or isolate network segments or users to increase overall security. Examples include network access control, group policy objects, strong authentication, encryption, and Virtual Private Network (VPN) technology.

Defense In-Depth

A sound network control strategy employs the concept of Defense In-Depth to provide optimal security. Firewalls at the network perimeter limit the traffic that is allowed in and out of the network. IDS/IPS devices detect and prevent traffic that is suspicious or known to be malicious. Internal network isolation limits the visibility of network traffic to devices and users by department or role. Access to wireless and wired networks is restricted to authenticated users only. Strong passwords are enforced for all network computers. Computers run host-based firewalls and AntiVirus software. Certain sensitive network traffic is encrypted so that it cannot be intercepted. All of these controls are combined together to provide a layered or In-Depth defensive strategy.

Network Design and Architecture

Centralized management of networks allows for strategic network design and architecture that can be more readily optimized for performance, availability, and security. All endpoints should terminate to network switches to remove the possibility of internal network traffic sniffing by computers and users. Highly sensitive data and traffic such as for Data Centers or communications facilities should be isolated through virtual LAN (VLAN) technology and/or Firewalls. Highly unregulated traffic such as for student residence halls should also be isolated. The architecture of the network should allow for the strategic placement of firewalls, demilitarized zones (DMZ’s), and IDS/IPS devices such that all network traffic between the University Intranet and the Internet can be adequately controlled and monitored.

Perimeter Controls

Perimeter controls must be strategically placed such that all network traffic flowing in and out of the Organization’s internal networks, i.e. its Intranet, can be controlled and monitored. These controls are critical to network functionality and security and therefore must be fault-tolerant and have redundant backups available. In addition, they must be capable of processing the anticipated peak volume of network traffic. This is especially important for larger Universities with extremely high aggregate Internet bandwidth. Typical perimeter controls include:

  • Routers – The border router is typically capable of allowing or denying connections, but its primary purpose is to route traffic at the network border or DMZ
  • Firewalls – firewalls (sometimes called border firewalls) block or limit traffic, typically by TCP/UDP port
  • IDS/IPS – An Intrusion Detection System and/or Intrusion Prevention System adds an extra layer of protection, examining, limiting, or blocking traffic that was allowed through the border firewall, but is highly suspicious or known to be malicious
  • Data Loss Prevention (DLP) – some DLP solutions inspect all network traffic to detect or block confidential data from leaving the Intranet
  • “Next Generation” Firewalls – The term “NextGen” is a marketing term used by some vendors to imply a higher level of sophistication and thus a higher level of protection. While many of these products do perform as advertised, they are essentially serving the same or combined functions like firewall and IDS/IPS technology.
  • Web Gateway – A secure web gateway does not necessarily sit at the perimeter, but does filter web-based traffic, providing more granular IDS/IPS functionality for web-based traffic or content
  • Network Address Translation (NAT) – not strictly a security control, NAT limits the visibility of endpoints within the University Intranet from potential attackers on the Internet.
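
To illustrate how a border firewall blocks or limits traffic by TCP/UDP port (the Firewalls item above), the sketch below evaluates connection attempts against an ordered, default-deny rule list where the first match wins. The rules themselves are illustrative assumptions, not a recommended rule set.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str             # "allow" or "deny"
    protocol: str           # "tcp" or "udp"
    port: Optional[int]     # None matches any port

RULES = [
    Rule("allow", "tcp", 443),   # HTTPS to published web services
    Rule("allow", "tcp", 25),    # inbound mail to the mail gateway
    Rule("deny", "tcp", None),   # everything else over TCP
    Rule("deny", "udp", None),   # everything else over UDP
]

def decide(protocol: str, port: int) -> str:
    """Return the action of the first rule matching this connection attempt."""
    for rule in RULES:
        if rule.protocol == protocol and rule.port in (None, port):
            return rule.action
    return "deny"  # default-deny if no rule matches

print(decide("tcp", 443))   # allow
print(decide("tcp", 3389))  # deny (RDP blocked at the perimeter)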

Note on encryption – while encryption is an effective control for data in transit, security administrators should also be aware that too much encryption of network traffic can severely limit many perimeter controls such as IDS/IPS, DLP, and Secure Web Gateways. Many vendors are now providing cloud-based network protection, which can supplement or replace many of the on-premise perimeter or interior controls network and security administrators have used.

Interior/Endpoint Controls

Isolation – Network segments or subnets within the University Intranet should be appropriately isolated according to the security requirements of the users and endpoints. Virtual LAN (VLAN) technology is the primary control used to isolate users and endpoints.

Endpoint Hardening – All network devices and endpoints should be hardened to reduce their attack surface. Hardening involves maintaining current patch levels, AntiVirus, host-based firewalls, host-based IDS/IPS, disabling unnecessary services, using strong passwords, and other protections as appropriate. Software whitelisting can also provide additional endpoint protection. Network and security administrators should not neglect printers, multi-function devices, and other network-attached devices which often have insecure services opened up, such as FTP, Telnet, or SNMP.

Vulnerability Management – A Vulnerability Management System can help ensure that all endpoints on the network are adequately hardened. Vulnerability Management should ideally include web-based applications to reduce vulnerability to SQL-Injection, Cross-Site Scripting, and other web-based exploits.

Network Access Control (NAC) – Registering all endpoints before allowing connection to the network can prevent unauthorized devices from connecting as well as enforce security baselines. For instance, University IT Security Policies may state that all endpoints have automatic security updating enabled, authentication must be done via the central Active Directory domain, and AntiVirus and Firewall must be active. NAC can prevent systems that do not meet these requirements from accessing all or certain portions of the network.
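
A minimal sketch of a NAC posture check along the lines of the example policy above: an endpoint reporting a fully compliant posture is granted access, a known but non-compliant endpoint is quarantined for remediation, and anything else is refused. Attribute names and decisions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EndpointPosture:
    auto_updates_enabled: bool
    domain_joined: bool          # authenticated via the central directory
    antivirus_active: bool
    firewall_active: bool

def network_access(posture: EndpointPosture) -> str:
    """Map a reported endpoint posture to an access decision."""
    required = [
        posture.auto_updates_enabled,
        posture.domain_joined,
        posture.antivirus_active,
        posture.firewall_active,
    ]
    if all(required):
        return "full access"
    if posture.domain_joined:
        return "quarantine VLAN: remediation required"
    return "access denied"

print(network_access(EndpointPosture(True, True, False, True)))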

WiFi Security Controls –  The WiFi should be protected and in most cases, isolated from all other internal networks, particularly when the Organization has chosen to make WiFi open-access. Open-access WiFi allows any computer within range to connect and therefore should be provided limited services such as Internet access only. WiFi that connects to more sensitive portions of the network should be limited to authorized users only. All WiFi should use WPA2 or stronger encryption. Note that enabling these levels of control across a large campus can be costly and require sophisticated equipment.

Remote Access – remote access to internal or Intranet networks can be a high security risk if not properly planned and secured. While a Virtual Private Network (VPN) service is an excellent way to allow remote users to securely connect to your internal networks or Intranet, it provides no assurance that the connecting endpoint computer is itself secure. Security administrators should strongly consider enforcing Network Access Control for VPN connections or strictly limiting the use of VPN to selected trusted users. Outbound VPN can also introduce the risk of opening up internal networks to potentially unsecured external networks; many organizations choose to block outbound VPN at the firewall for this reason. Other remote access tools and protocols need to be carefully controlled or limited. Remote Desktop Protocol (RDP) and Secure Shell (SSH) can introduce additional risks. RDP is best blocked at the firewall or provided through an RDP Gateway. While SSH is a secure protocol, the Linux and Unix systems that typically use SSH are often administered outside of the campus directory service and can thus have weak passwords. External attackers routinely look for open SSH ports and attempt to use rainbow tables or brute force to crack passwords. Web-based services such as LogMeIn, VNC, GoToMyPC, etc. can also introduce the risk of unauthorized remote access. Security administrators should carefully assess the risks associated with these services.

Back Doors – Remote Access protocols and services can create “back doors” of access to internal networks and should be carefully administered. Other back doors include analog modems, cellular services on smartphones and tablets, Bluetooth personal area networks, and removable media such as USB and CD/CDRW drives.

Encryption – Encryption of certain network traffic is an essential network control. All confidential or sensitive information leaving the network should be encrypted with proven, strong encryption algorithms. Authentication protocols that transmit passwords or encryption keys over the network should also be encrypted. Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL), is the common encryption protocol used for web traffic.
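
A minimal sketch of encrypting traffic in transit with TLS using Python's standard ssl module; the default context verifies the server certificate chain and hostname before any data is exchanged. The hostname used here is an example only.

import socket
import ssl

HOST, PORT = "example.com", 443

context = ssl.create_default_context()          # verifies certificate and hostname
with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("cipher suite:", tls_sock.cipher()[0])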

Network Security Policies

A strong set of network security policies complements technical controls. While policies cannot always be technically enforced, users need to be aware of behaviors that are unacceptable by the policy. Examples include:

  • Use of strong passwords
  • No sharing of user account credentials
  • Users are not allowed to install and run illegal software, such as network sniffing/scanning or P2P File Sharing software
  • All user accounts must be centrally managed and issued
  • Prohibition of rogue switches, routers, hubs
  • All network cabling and outlets must be installed by central network services
  • The limited expectation of privacy

Security policies provide a means of enforcement in the event of known violations.

Log Management and Auditing

Routers, switches, IDS/IPS, firewalls, Directory Services controllers, and other network devices have a wealth of information about activity on the network. However, the massive amount of data they produce makes it difficult to adequately correlate and review for possible intrusions or perform forensic investigations. A Security Information and Event Management (SIEM) solution can greatly reduce the effort and expense involved and provide a much higher level of visibility for security.
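
The sketch below shows, in miniature, the kind of log correlation a SIEM automates: parsing authentication log lines and flagging source addresses with repeated failures. The log format and the failure threshold are illustrative assumptions.

import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

sample_log = [
    "Jan 10 10:01:01 sshd[1]: Failed password for root from 203.0.113.9",
    "Jan 10 10:01:03 sshd[1]: Failed password for admin from 203.0.113.9",
    "Jan 10 10:01:05 sshd[1]: Accepted password for alice from 198.51.100.7",
    "Jan 10 10:01:07 sshd[1]: Failed password for root from 203.0.113.9",
]

def suspicious_sources(lines, threshold: int = 5) -> dict:
    """Count failed logins per source IP and keep those at or above the threshold."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(suspicious_sources(sample_log, threshold=3))  # {'203.0.113.9': 3}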

Penetration Testing

All network controls should be routinely validated by an authorized external third party. The process is typically referred to as Penetration Testing (Pen Tests). A qualified Pen Tester can help ensure that the controls you have carefully implemented are working effectively. Many organizations are required to perform such testing on an annual or biennial basis.

A 8.21 Security of network services

Control

Security mechanisms, service levels and service requirements of network services should be identified, implemented and monitored.

Purpose

To ensure security in the use of network services.

Implementation guidance

The security measures necessary for particular services, such as security features, service levels and service requirements, should be identified and implemented (by internal or external network service providers). The organization should ensure that network service providers implement these measures. The ability of the network service provider to manage agreed services in a secure way should be determined and regularly monitored. The right to audit should be agreed between the organization and the provider. The organization should also consider third-party attestations provided by service providers to demonstrate they maintain appropriate security measures. Rules on the use of networks and network services should be formulated and implemented to cover:

  1. the networks and network services which are allowed to be accessed;
  2. authentication requirements for accessing various network services;
  3. authorization procedures for determining who is allowed to access which networks and networked services;
  4. network management and technological controls and procedures to protect access to network connections and network services;
  5. the means used to access networks and network services [e.g. use of virtual private network (VPN) or wireless network];
  6. time, location and other attributes of the user at the time of the access;
  7. monitoring of the use of network services.

The following security features of network services should be considered:

  1. technology applied for security of network services, such as authentication, encryption and network connection controls;
  2. technical parameters required for secured connection with the network services in accordance with the security and network connection rules;
  3. caching (e.g. in a content delivery network) and its parameters that allow users to choose the use of caching in accordance with performance, availability and confidentiality requirements;
  4. procedures for the network service usage to restrict access to network services or applications, where necessary.

Other information

Network services include the provision of connections, private network services and managed network security solutions such as firewalls and intrusion detection systems. These services can range from simple unmanaged bandwidth to complex value-added offerings.

A ‘network service’ can broadly be described as a system running on the ‘network application layer’, such as e-mail, printing, or a file server. Network services also include managed applications and security solutions such as firewalls or gateway antivirus platforms, intrusion detection systems and connection services. Network services often represent the most important functional parts of a network and are critical to the day-to-day operation of a modern commercial ICT network. Security is therefore paramount, and the use of network services needs to be closely monitored and directly managed to minimize the associated risk of failure, intrusion and business disruption. Three main aspects should be addressed under the broader concept of network service security:

  • Security features
  • Service levels
  • Service requirements

These three measures should be taken into account by all internal and external network service providers, and the organisation should take steps to ensure that providers are fulfilling their obligations at all times. Organisations should judge a network service provider on their ability to manage services as dictated by an unambiguous set of SLAs, and monitor adherence to the best of their ability. Part of this operational assessment should include references obtained from trusted sources that attest to a network service provider’s ability to manage services in a secure and efficient manner. Network security rules should include:

  • Any network services and associated networks that are allowed to be accessed.
  • The authentication requirements for accessing said network services, including who is authorised to access them, from where and when they are able to do so.
  • How personnel obtain prior authorisation to access network services, including final sign-off and business case analysis.
  • A robust set of network management controls that safeguard network services against misuse and unauthorised access.
  • How personnel are allowed to access network services (i.e. remotely or exclusively onsite).
  • Logging procedures that detail key information about network service access, and the personnel who utilise them – e.g. time, location and device data.
  • Monitoring the use of network services.
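
As a purely illustrative sketch (not something the control prescribes), the rules above can be captured in a machine-readable form and enforced programmatically. The Python example below assumes a hypothetical rule table – the service names, permitted groups and source subnets are made up – and applies a default-deny check:

```python
# Illustrative sketch only: which groups may access which network services,
# and from which networks. Values are hypothetical examples.
from ipaddress import ip_address, ip_network

SERVICE_RULES = {
    "vpn-gateway":  {"groups": {"staff", "it-admins"}, "networks": ["0.0.0.0/0"]},
    "file-server":  {"groups": {"staff"},              "networks": ["10.20.0.0/16"]},
    "admin-portal": {"groups": {"it-admins"},          "networks": ["10.20.30.0/24"]},
}

def access_allowed(service: str, group: str, source_ip: str) -> bool:
    """Default-deny check: access requires an explicit rule for the service."""
    rule = SERVICE_RULES.get(service)
    if rule is None or group not in rule["groups"]:
        return False
    return any(ip_address(source_ip) in ip_network(net) for net in rule["networks"])

if __name__ == "__main__":
    print(access_allowed("file-server", "staff", "10.20.5.7"))     # True
    print(access_allowed("admin-portal", "staff", "10.20.30.10"))  # False (not authorised)
```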

To increase security across all network services, including back-end functionality and user operation, organisations should consider the following:

  • Organisations should consider security features such as authentication, encryption and connection controls.
  • Rigid parameters should be established that dictate the connection to network services.
  • Users should be able to choose the amount of data cached (temporarily stored) by a network service to both increase overall performance and ensure that data isn’t excessively stored to the point of it being a tangible security risk.
  • Access to network services or applications should be restricted where necessary.

Security features of network services
Network services include the provision of connections, private network services, firewalls and intrusion detection systems. Security features of network services can also include:

  • Network security technology – This can be implemented through the segregation of networks, for example by configuring VLANs on routers/switches; where remote access is used, secure (encrypted) channels are required.
  • Configuration of technical parameters – This can be implemented through virtual private networks (VPNs), using strong encryption algorithms and establishing a secure authentication procedure (for example, with electronic certificates).
  • Mechanisms to restrict access – This can be implemented with firewalls, which can filter internal/external connections and also filter access to applications. Intrusion detection systems (IDS) can also be used here: an IDS is a hardware- or software-based device that constantly monitors connections to detect possible intrusions into the organization’s network, and it can help firewalls accept or reject connections depending on the defined rules. Note that an IDS is a passive system, because it can only detect; intrusion prevention systems (IPS) go further and can prevent intrusions. IPS are not specified by the standard, but they are very useful and can complement firewalls (a minimal illustrative sketch of this detection idea follows this list). So, to manage the security of network services, you can use these types of hardware/software:
  • Routers/switches (for example, for the implementation of VLANs)
  • Firewalls or similar perimeter security devices (for example, for the establishment of VPNs, secure channels, etc.)
  • IDS/IPS (for intrusion detection/intrusion prevention).
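
The detection idea mentioned above can be illustrated with a very small sketch. The Python example below is not a real IDS; it simply counts how many distinct ports each source address touches and flags sources that exceed an assumed threshold – the kind of crude signal a real IDS/IPS would refine with signatures and behavioural analysis:

```python
# Minimal sketch of an intrusion-detection style check: flag sources that probe
# many different ports (a crude port-scan indicator). Thresholds and events are
# hypothetical examples.
from collections import defaultdict

def flag_possible_scans(events, port_threshold=10):
    """events: iterable of (source_ip, destination_port) tuples."""
    ports_by_source = defaultdict(set)
    for source_ip, dest_port in events:
        ports_by_source[source_ip].add(dest_port)
    return [src for src, ports in ports_by_source.items() if len(ports) >= port_threshold]

if __name__ == "__main__":
    sample = [("203.0.113.5", p) for p in range(20, 35)] + [("10.0.0.8", 443)]
    print(flag_possible_scans(sample))  # ['203.0.113.5']
```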

Network services agreements
At this point, we have identified the network services, but if we want to align with the requirement of this control, we need to go one step further: these network services should be covered by network services agreements (or SLAs, service level agreements), applicable both to services provided in-house and to services that are outsourced. For the development of a network service agreement, you basically need to consider which network services are established, how they are offered (with internal or external resources), the service levels (24×7, response to and treatment of incidents, etc.), and other key components. If the network service is outsourced, it is also important to hold periodic meetings with the external company, and in these meetings the SLAs should be reviewed. The security mechanisms included in the SLA can be selected based on the results of the risk assessment (basically, the highest risks require the strongest security mechanisms), or by drawing on the organization’s contacts with special interest groups for specific environments such as government or military, where the implementation of specific regulations may be needed.

Network services include directory services, Domain Name Service (DNS), Dynamic Host Configuration Protocol (DHCP), authentication services, messaging/email, remote access, and others. These services have traditionally been provided on-premise by network and/or security administrators. Today, many organizations are turning to outsourced cloud providers for many of these services. Security mechanisms, service levels, and management requirements of all network services need to be identified and included in network services agreements, whether these services are provided in-house or outsourced. Put simply, the organization should include in its network services agreements all the security measures it is taking to secure its network services. Your auditor will want to see that the design and implementation of networks take into account both the business requirements and the security requirements, achieving a balance that is adequate and proportionate to both. They will be looking for evidence of this, along with evidence of a risk assessment.

On-Premise Services

Most organizations utilize some form of directory service, such as Microsoft Active Directory. Other essential services include DHCP, DNS, and remote access services such as VPN. Because these services perform essential functions for every network host and underpin basic connectivity, they must be well-managed and secured. Only a very small number of network administrators should have administrative access to the underlying servers. These servers must also be hardened and kept up to date with security patches. Logging to an external aggregator or SIEM is also strongly recommended.
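
As a minimal illustration of the logging recommendation above, the sketch below forwards events to an assumed central syslog collector or SIEM using Python’s standard library; the collector hostname, port and event text are hypothetical:

```python
# Minimal sketch: forwarding security-relevant events to an external syslog
# collector/SIEM. The collector address is a hypothetical example.
import logging
import logging.handlers

logger = logging.getLogger("directory-services")
logger.setLevel(logging.INFO)

# UDP syslog to an assumed central aggregator; many SIEMs accept syslog input.
handler = logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("Administrative logon to domain controller DC01 by admin.jane")
logger.warning("Failed privileged logon attempt on DHCP server from 10.0.4.77")
```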

External Network Services

Highly available Internet connectivity has opened the door for organizations to shift network and other application services to external cloud providers. While there are many reputable and very capable providers, it is nonetheless more difficult to hold an external entity accountable at the same level as internal staff. Organizations entering into agreements with cloud providers need to carefully review and negotiate the specific terms and conditions of these agreements. Service level agreements, confidentiality statements, and privacy policies are among the types of documents that must be carefully reviewed and updated. The default versions of these documents are typically written in favor of the external provider rather than its customers. External service providers should be held to the same level of security controls as those that apply to internal services. Organizations should write into their agreements language that specifies required security controls, limitation of access by the provider’s employees, confidentiality statements, the right of the organization to audit security controls, and any other provisions that reduce the risks of data disclosure, alteration, or loss.



ISO 27001:2022 A 8.22 Segregation of networks

When cyber criminals compromise computing networks, services, or devices, they do not limit themselves to the compromised assets. They leverage the initial intrusion to infiltrate an organisation’s entire network, gain access to sensitive information assets, or carry out ransomware attacks. Organisations can implement and maintain appropriate network segregation techniques to reduce risks to the availability, integrity, and confidentiality of information assets. Network segregation is a process that separates critical network elements from the internet and from other, less sensitive networks. It allows IT teams to control traffic flow between various subnets based on granular policies, and organisations can leverage it to improve network monitoring, performance, and security. In practice, network segregation divides a network into smaller parts called subnetworks or network segments. You can think of it as the division of rooms when constructing a new house: the most important things to spend time thinking about are the spacing, positioning and purpose of each room.

Control

Groups of information services, users and information systems should be segregated in the organization’s networks.

Purpose

To split the network in security boundaries and to control traffic between them based on business needs.

ISO 27002 Implementation Guidance

The organization should consider managing the security of large networks by dividing them into separate network domains and separating them from the public network (i.e. internet). The domains can be chosen based on levels of trust, criticality and sensitivity (e.g. public access domain, desktop domain, server domain, low- and high-risk systems), along organizational units (e.g. human resources, finance, marketing) or some combination (e.g. server domain connecting to multiple organizational units). The segregation can be done using either physically different networks or by using different logical networks. The perimeter of each domain should be well-defined. If access between network domains is allowed, it should be controlled at the perimeter using a gateway (e.g. firewall, filtering router). The criteria for segregation of networks into domains, and the access allowed through the gateways, should be based on an assessment of the security requirements of each domain. The assessment should be in accordance with the topic-specific policy on access control , access requirements, value and classification of information processed and take account of the relative cost and performance impact of incorporating suitable gateway technology. Wireless networks require special treatment due to the poorly-defined network perimeter. Radio coverage adjustment should be considered for segregation of wireless networks. For sensitive environments, consideration should be made to treat all wireless access as external connections and to segregate this access from internal networks until the access has passed through a gateway in accordance with network controls before granting access to internal systems. Wireless access network for guests should be segregated from those for personnel if personnel only use controlled user endpoint devices compliant to the organization’s topic-specific policies. WiFi for guests should have at least the same restrictions as WiFi for personnel, in order to discourage the use of guest WiFi by personnel.

Other information

Networks often extend beyond organizational boundaries, as business partnerships are formed that require the interconnection or sharing of information processing and networking facilities. Such extensions can increase the risk of unauthorized access to the organization’s information systems that use the network, some of which require protection from other network users because of their sensitivity or criticality.

Groups of information services, users, and information systems should be segregated on networks. Wherever possible, consider segregating the duties of network operations and computer/system operations, e.g. public domains, department X or Y domains. The network design and controls must align to and support information classification policies and segregation requirements. One way to protect your confidential and/or critical systems is to segregate your networks along physical or logical lines. Using VLANs to separate your systems creates an additional layer of security between your regular network and your most sensitive systems. This method is often used to protect data centers, credit card processing systems covered by PCI DSS, SCADA systems, and other systems considered sensitive or mission-critical. In order to properly control access to your segregated networks, you should place a firewall or router at the perimeter of each network. That way, different networks can have different access control policies based on the sensitivity classification of the data that they create, transmit, and/or store.

Special consideration should be given to wireless networks that allow anyone to connect for Internet access: if you offer an unsecured connection to your wireless network, you should take steps to ensure that wireless traffic is kept separate from the rest of your network or networks. Wireless users should not be able to access domain resources on your wired network without authenticating first; most organizations now offer a secure wireless option (sometimes in addition to a separate, cordoned-off unsecured wireless option) to help maintain the confidentiality and integrity of their wired network.

The segregation of networks is important because you do not want anyone, hostile or accidental, to reach your systems and access your files. To minimize the impact of a network intrusion, it should be difficult for the intruder to move undetected around the network and to access your information. Segregation is about identifying which systems you use and determining which networks need isolating. For example, you may have sensitive data, such as your organization’s financial records and business process workflows. These files do not need to be stored in the location where you process customer inquiries; instead, you place them in another part of the network and define access rules for devices used only by managers of the related departments. Another scenario: think of a hacker who wants to access your sensitive information, hosts, and services. This hacker may seek to create a remote connection to a server, use legitimate network admin tools, and execute malicious code on that server. Well-planned network segmentation comes in handy here, being a key security measure for preventing such activities. In this case, segregating the network and disallowing remote desktop connections or the use of admin tools from user computers, as well as configuring servers to limit the sharing of files, will be helpful.

Network segregation according to ISO 27001 also helps information security personnel with their jobs. With the rules determined, a segregated environment allows them to establish better auditing and alerting strategies for attacks, so they can identify a network intrusion and respond to incidents in a timely manner.
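
To make the idea of controlled traffic between segregated networks concrete, here is a minimal, assumption-laden sketch: the domain names, ports and rules are hypothetical, and a real deployment would enforce such a policy on the perimeter firewall or router rather than in application code:

```python
# Illustrative sketch only: a gateway-style policy table controlling which
# traffic may cross between segregated network domains (e.g. VLANs).
ALLOWED_FLOWS = {
    # (source domain, destination domain): set of permitted destination ports
    ("desktop", "server"): {443, 445},
    ("desktop", "internet"): {80, 443},
    ("guest-wifi", "internet"): {80, 443},
    # no entry allows guest-wifi -> server, so that traffic is denied by default
}

def is_flow_permitted(src_domain: str, dst_domain: str, dst_port: int) -> bool:
    """Default-deny check of inter-domain traffic, mirroring a perimeter gateway."""
    return dst_port in ALLOWED_FLOWS.get((src_domain, dst_domain), set())

if __name__ == "__main__":
    print(is_flow_permitted("desktop", "server", 445))     # True
    print(is_flow_permitted("guest-wifi", "server", 445))  # False (segregated)
```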

When implementing network segregation measures, organisations should try to strike a balance between operational needs and security concerns. When segregating the network into smaller sub-domains, organisations should consider the level of sensitivity and criticality of each network domain. Depending on this analysis, network sub-domains may be assigned to ‘public domains’, ‘desktop domains’, ‘server domains’, or ‘high-risk systems’. Furthermore, organisations can also consider business departments such as HR, marketing, and finance when segregating the network, or combine these two criteria and assign network sub-domains to categories such as ‘server domain connecting to the sales department’. Organisations should define the perimeter of each network sub-domain clearly. If access between two different network domains is required, it should be restricted at the perimeter via a gateway such as a firewall or filtering router. Organisations should assess the security requirements for each specific domain when implementing network segregation and when authorising access via the gateways. This assessment should be carried out in compliance with the access control policy and should also consider the following:

  • Level of classification assigned to information assets.
  • Criticality of information.
  • Cost and practical considerations for the use of a particular gateway technology.

Considering that defining network perimeters for wireless networks is challenging, organisations can adhere to the following practices:

  • The use of radio coverage adjustment techniques to segregate wireless networks should be assessed.
  • For sensitive networks, organisations can assume all wireless access attempts as external connections and prevent access to internal networks until the gateway control approves access.
  • Wireless network access for guests should be segregated from wireless access for personnel, where personnel only use controlled endpoint devices that comply with the organisation’s topic-specific policies.
  • Guest Wi-Fi should be subject to at least the same restrictions and controls as those imposed on personnel, to discourage personnel from using the guest network.

Organisations often enter into business partnerships with other organisations and share networks, IT devices, and other information facilities. As a result, sensitive networks may be exposed to a heightened risk of unauthorised access by other users, and organisations should take appropriate measures to prevent this risk.

Benefits of Network Segregation
While traditional flat networks are simple to set up and manage, they don’t provide reliable protection. Segregated networks, on the other hand, require an extra amount of effort to set up. Once implemented, organizations can derive numerous benefits such as:

  • Enhancing operational performance. Network segregation allows IT teams to reduce network congestion. For example, IT teams can easily stop all the network traffic in one part of the network from reaching the other to enhance the overall operational performance.
  • Limiting the damage from cyberattacks. Network segregation enhances the organization’s overall security posture by restricting how far an attack spreads within the organization. For example, you can easily restrain malware from spreading and affecting other systems in the organization.
  • Protecting vulnerable endpoints. Segregating a network can prevent harmful traffic from reaching vulnerable devices. A segregated network isolates these endpoints, restricting the risk of exposure in an organization.
  • Minimizing the scope of compliance. A segregated network can help you reduce the expenses associated with regulatory compliance because it limits the number of in-scope systems. For example, you can separate systems that process payments from those that don’t, so that compliance requirements and audit processes apply only to the payment systems and not to the rest of the network.

Best Practices of Network Segregation Implementation

  1. Network Layers: It is highly recommended that you apply segregation technologies at more than just the network layer. Each host and network should be segregated and segmented to the smallest level that is practically manageable. This strategy applies mostly to the data link layer up to (and including) the application layer, although for some sensitive information physical isolation is appropriate as well. These protective network measures should be centrally and continuously monitored.
  2. Always Use the Principle of Least Privilege. Implementing the principle of least privilege helps you complement the minimization of privileges and attack surfaces within the organization. You should assign users only the bare minimum privileges they require to access and use corporate resources. If the network does not need to communicate with another network or host, you shouldn’t allow it.
  3. Ensure You Isolate the Hosts from the Network. Separating networks from hosts based on the criticality of business operations is a wise move because it improves the overall network visibility. Depending on various security domains and classifications for particular networks or hosts, you can isolate different platforms to enhance visibility into the network infrastructure.
  4. Refine the Authorization Process. A well-defined authorization process is essential because it allows you to safeguard critical enterprise resources by permitting only authenticated and authorized users onto the network. By restricting access to authorized users via rulesets, you can more easily monitor, and if necessary disable, accounts that attempt to bypass those controls.
  5. Implement a Network Traffic Whitelisting Solution. You should allow only legitimate users to access specific enterprise resources rather than trying to deny access to every threat actor or block every unsafe service individually. Such an allow-listing framework is more effective than blacklisting malicious actors one by one, because it enhances the company’s overall capacity to detect breaches while also improving productivity.

To fulfill the recommendations for network segregation, you can consider the following models for network segmentation:

  • Criteria-based segmentation: Pre-defined rules to establish perimeters and create new segments can reduce future administration efforts. Examples of criteria are trust level (e.g., external public segment, staff segment, server segment, database segment, suppliers segment, etc.), organizational unit (e.g., HR, Sales, Customer Service, etc.), and combinations (e.g., external public access to Sales and Customer Service).
  • Use of physical and logical segmentation: Depending upon the risk level indicated in the risk assessment, it may be necessary to use physically separated infrastructures to protect the organization’s information and assets (e.g., top-secret data flowing through a fiber dedicated to management staff), or you may use solutions based on logical segmentation like Virtual Private Network (VPN).
  • Access rules for traffic flowing: Traffic between segments, including those of allowed external parties, should be controlled according to the need to transmit/receive information. Gateways, like firewalls and routers, should be configured based on information classification and risk assessment. A specific case of access control applies to wireless networks, since they have poor perimeter definition. The recommendation is to treat wireless communication as an external connection until the traffic can reach a proper wired gateway before granting access to internal network segments.

ISO 27001:2022 A 8.28 Secure coding

Code is the core of a computer program. If there are any vulnerabilities in it, the whole program may be compromised by cyber attacks. Secure coding is the practice of developing software in a way that protects against accidental security vulnerabilities. It governs the coding techniques, practices, and decisions that developers make during development, and its core aim is to ensure that the written code minimizes vulnerabilities. It involves writing code in a high-level language that adheres to strict principles of security.

Secure coding is much more than just writing, compiling, running, and releasing code into the production environment. To completely embrace secure coding, you must also create a secure development environment that is built on secure and reliable infrastructure using secured software, services, and providers. In addition, the code must not contain logic flaws, bugs, errors, or defects that can create security risks. Poor coding practices such as improper input validation and weak key generation can expose information systems to security vulnerabilities and result in cyber attacks and the compromise of sensitive information assets. Organisations should ensure that secure coding principles are followed so that poor coding practices do not lead to security vulnerabilities. When it comes to application security, even the smallest mistake can create a security risk. The goal of software security is to maintain the integrity, confidentiality, and availability of information resources, which can only be achieved by integrating security practices into development. Flaws in security can happen at any stage of the development life cycle, for example:

  • Failure to identify requirements of security upfront.
  • Errors in conceptual designs.
  • Technical vulnerabilities are introduced due to poor coding practices.
  • Implementing the software improperly, or introducing errors during maintenance or updates.

However, secure coding techniques make it easier for developers to eliminate common vulnerabilities by following security standards for coding. They also help weed out commonly exploited weaknesses, preventing cyber attacks and leaks of sensitive information.

Control

Secure coding principles should be applied to software development.

Purpose

To ensure software is written securely thereby reducing the number of potential information security vulnerabilities in the software.

ISO 27002 Implementation Guidance

General
The organization should establish organization-wide processes to provide good governance for secure coding. A minimum secure baseline should be established and applied. Additionally, such processes and governance should be extended to cover software components from third parties and open source software. The organization should monitor real world threats and up-to-date advice and information on software vulnerabilities to guide the organization’s secure coding principles through continual improvement and learning. This can help with ensuring effective secure coding practices are implemented to combat the fast-changing threat landscape.

Planning and before coding
Secure coding principles should be used both for new developments and in reuse scenarios. These principles should be applied to development activities both within the organization and for products and services supplied by the organization to others. Planning and prerequisites before coding should include:

  1. organization-specific expectations and approved principles for secure coding to be used for both in-house and outsourced code developments;
  2. common and historical coding practices and defects that lead to information security vulnerabilities;
  3. configuring development tools, such as integrated development environments (IDE), to help enforce the creation of secure code;
  4. following guidance issued by the providers of development tools and execution environments as applicable;
  5. maintenance and use of updated development tools (e.g. compilers);
  6. qualification of developers in writing secure code;
  7. secure design and architecture, including threat modelling;
  8. secure coding standards and where relevant mandating their use;
  9. use of controlled environments for development.

During coding
Considerations during coding should include:

  1. secure coding practices specific to the programming languages and techniques being used;
  2. using secure programming techniques, such as pair programming, refactoring, peer review, security iterations and test-driven development;
  3. using structured programming techniques;
  4. documenting code and removing programming defects, which can allow information security vulnerabilities to be exploited;
  5. prohibiting the use of insecure design techniques (e.g. the use of hard-coded passwords, unapproved code samples and unauthenticated web services).

Testing should be conducted during and after development. Static application security testing (SAST) processes can identify security vulnerabilities in software. Before software is made operational, the following should be evaluated:

  1. attack surface and the principle of least privilege;
  2. conducting an analysis of the most common programming errors and documenting that these have been mitigated.

Review and maintenance
After code has been made operational:

  1. updates should be securely packaged and deployed;
  2. reported information security vulnerabilities should be handled;
  3. errors and suspected attacks should be logged and logs regularly reviewed to make adjustments to the code as necessary;
  4. source code should be protected against unauthorized access and tampering (e.g. by using configuration management tools, which typically provide features such as access control and version control).

If using external tools and libraries, the organization should consider:

  1. ensuring that external libraries are managed (e.g. by maintaining an inventory of libraries used and their versions) and regularly updated with release cycles;
  2. selection, authorization and reuse of well-vetted components, particularly authentication and cryptographic components;
  3. the licence, security and history of external components;
  4. ensuring that software is maintainable, tracked and originates from proven, reputable sources;
  5. sufficiently long-term availability of development resources and artifacts.

Where a software package needs to be modified, the following points should be considered:

  1. the risk of built-in controls and integrity processes being compromised;
  2. whether to obtain the consent of the vendor;
  3. the possibility of obtaining the required changes from the vendor as standard program updates;
  4. the impact if the organization becomes responsible for the future maintenance of the software as a result of changes;
  5. compatibility with other software in use.

Other information

A guiding principle is to ensure security-relevant code is invoked when necessary and is tamper-resistant. Programs installed from compiled binary code also have these properties, but only for data held within the application. For interpreted languages, the concept only works when the code is executed on a server that is otherwise inaccessible to the users and processes that use it, and when its data is held in a similarly protected database. For example, the interpreted code can be run on a cloud service where access to the code itself requires administrator privileges. Such administrator access should be protected by security mechanisms such as just-in-time administration principles and strong authentication. If the application owner can access scripts by direct remote access to the server, so in principle can an attacker; web servers should be configured to prevent directory browsing in such cases. Application code is best designed on the assumption that it is always subject to attack, through error or malicious action. In addition, critical applications can be designed to be tolerant of internal faults. For example, the output from a complex algorithm can be checked to ensure that it lies within safe bounds before the data is used in an application such as a safety-critical or financially critical application. The code that performs the boundary checks is simple and therefore much easier to prove correct. Some web applications are susceptible to a variety of vulnerabilities that are introduced by poor design and coding, such as database injection and cross-site scripting attacks. In these attacks, requests can be manipulated to abuse the web server functionality.

Organisations should establish and implement organisation-wide processes for secure coding that apply both to software products obtained from external parties and to open source software components. Organisations should keep up to date with evolving real-world security threats and with the most recent information on known or potential software security vulnerabilities. This will enable organisations to improve and implement robust secure coding principles that are effective against evolving cyber threats. Secure coding principles should be followed both for new coding projects and for software reuse, and should be adhered to both for in-house software development activities and for software products or services supplied by the organisation to third parties.

Planning and precoding
Both new code development and code reuse require the application of secure coding principles – regardless of whether the code is written for internal software or for external products and services. This requires evaluating organization-specific expectations and defining recognised principles in advance. In addition, recommended actions for planning and precoding take into account known common and historical coding practices and errors that could lead to vulnerabilities. Development tools, such as rule-based checks in integrated development environments (IDEs), should be configured to help enforce the creation of secure code. The qualification of developers and their familiarity with secure architectures and programming standards is critical, and the inclusion of information security expertise in the development team goes without saying. When establishing a plan for secure coding principles and determining the prerequisites for secure coding, organisations should comply with the following:

  • Organisations should determine security expectations tailored to their needs and establish approved principles for secure software coding that will apply to both in-house software development and outsourced software components.
  • Organisations should detect and document the most prevalent and historical poor coding design practices and mistakes that result in compromise of information security.
  • Organisations should put in place and configure software development tools, such as integrated development environments (IDEs), to help ensure the security of all code created.
  • Organisations should follow the guidance and instructions issued by the providers of development tools and execution environments.
  • Organisations should review, maintain, and securely use development tools such as compilers.

During the coding process
During the coding process, coding practices and structured programming techniques play a primary role, taking into account the specific use case and its security needs. Insecure design techniques – for example, hard-coded passwords – should be consistently prohibited. Code should be adequately documented and reviewed to eliminate security-related bugs as far as possible. Code review should take place both during and after development, via static application security testing (SAST) or similar. Static testing methods support the “shift left” approach (“test early and often”) by checking code for rule conformance early in the lifecycle. This allows early identification of tainted code, connections to files or specific object classes, and application-level gaps that could be abused for unnoticed interaction with third-party programs – in other words, exploitable vulnerabilities. Before the software is put into operation, Control 8.28 requires an evaluation of the attack surface and the implementation of the least-privilege principle, together with an analysis of the most common programming errors and documentation showing that they have been mitigated. Secure coding practices and procedures should take into account the following for the coding process:

  • Secure software coding principles should be tailored to each programming language and techniques used.
  • Deployment of secure programming techniques and methods such as test-driven development and pair programming.
  • Use of structured programming methods.
  • Proper code documentation and removal of code defects.
  • Prohibition of insecure software coding methods such as unapproved code samples or hard-coded passwords (a brief sketch of avoiding hard-coded credentials follows this list).
  • Security testing should be performed both during and after development.
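
As a small illustration of the prohibition on hard-coded passwords referenced in the list above, the sketch below reads a credential from the environment at runtime instead of embedding it in source code; the variable name DB_PASSWORD is an assumed example, and a secrets manager could equally be used:

```python
# Sketch of one way to avoid hard-coded passwords: read credentials from the
# environment (or a secrets manager) at runtime.
import os

def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")  # assumed example variable name
    if not password:
        # Fail closed instead of falling back to an embedded default secret.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```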

Before putting the software into actual use in the live application environment, organisations should consider the following:

  • What is the attack surface?
  • Is the principle of least privilege followed?

Organisations should also carry out an analysis of the most prevalent programming mistakes and document that these risks have been mitigated.

Review and maintenance
Even after the software has gone live, the topic of secure coding remains relevant. This includes secure updates as well as checks for known vulnerabilities in one’s own code. In addition, errors and suspected attacks must be documented so that necessary adjustments can be made promptly. In any case, unauthorised access to the source code must be reliably prevented by suitable tools. After the code is put into use in the production environment: updates should be applied in a secure manner; reported security vulnerabilities should be addressed; suspected attacks on information systems and errors should be recorded, and these records reviewed at regular intervals so that appropriate changes to the code can be made; and unauthorised access to, use of, or changes to source code should be prevented via mechanisms such as configuration management tools. When organisations use external tools and libraries, they should take into account the following:

  • External libraries should be monitored and updated at regular intervals based on their release cycles (a minimal inventory-check sketch follows this list).
  • Software components should be carefully vetted, selected, and authorised, especially cryptographic and authentication components.
  • The licence, security, and history of external components should be considered.
  • Software should be tracked and maintained. Furthermore, it must be ensured that it comes from a trustworthy source.
  • Development resources should be available for the long term.
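
As a minimal illustration of keeping an inventory of external libraries (referenced in the list above), the sketch below compares installed package versions against an assumed approved list; the library names and versions are hypothetical examples:

```python
# Illustrative sketch: comparing installed third-party libraries against an
# approved inventory (a very small stand-in for a software bill of materials).
from importlib import metadata

APPROVED_LIBRARIES = {
    "requests": "2.31.0",       # hypothetical approved versions
    "cryptography": "42.0.5",
}

def report_inventory_drift() -> None:
    for name, approved_version in APPROVED_LIBRARIES.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            print(f"{name}: approved but not installed")
            continue
        if installed != approved_version:
            print(f"{name}: installed {installed}, approved {approved_version}")

if __name__ == "__main__":
    report_inventory_drift()
```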

When making changes to a software package, the following should be considered:

  • The risk of built-in controls and integrity processes being compromised.
  • Whether the vendor’s consent to the changes should be obtained.
  • Whether the required changes can be obtained from the vendor as standard program updates.
  • The impact if the organisation becomes responsible for the future maintenance of the software as a result of the changes.
  • Whether the changes would be compatible with other software components used by the organisation.

In the area of review and maintenance, the control additionally gives explicit guidance on the use of external tools and libraries, such as open source software. These code components should be managed, updated and maintained in inventories. This can be done, for example, via a software bill of materials (SBOM). An SBOM is a formal, structured record of a software product’s packages and libraries and their relationships to each other and within the supply chain, used in particular to keep track of reused code and open source components. The SBOM supports software maintainability and targeted security updates. Organisations should also ensure that security-relevant code is invoked when it is necessary and is resistant to tampering:

  • While programs installed via binary code include security-relevant code, this is limited to the data stored within the application itself.
  • The concept of security-relevant code only works when the code is run on a server that is not accessible to the users and processes that use it, and when its data is kept in a similarly protected database. For instance, interpreted code can be run on a cloud service where access to the code is restricted to privileged administrators; such access rights should be protected via methods such as just-in-time administrator privileges and robust authentication mechanisms.
  • Appropriate configurations on web servers should be implemented to prevent unauthorised access to and browsing of the directory.
  • When designing application code, you should start with the assumption that the code is always subject to attack, whether through coding errors or the actions of malicious actors, and critical applications should be designed to be tolerant of internal faults. For instance, the output produced by a complex algorithm can be checked to ensure that it lies within safe bounds before it is used in a critical application such as a financial one (a short boundary-check sketch follows this list).
  • Certain web applications are highly vulnerable to security threats because of poor design and coding practices, which expose them to attacks such as database injection and cross-site scripting.
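
The boundary-check idea referenced above can be sketched in a few lines. The limits used here are hypothetical; the point is that the check itself is simple enough to be easily proven correct even when the upstream calculation is complex:

```python
# Sketch of a boundary check: the output of a complex calculation is validated
# against simple, easy-to-verify limits before it is used. The limits chosen
# here are hypothetical examples.
def apply_interest(balance: float, computed_rate: float) -> float:
    # The check is deliberately simple, so its correctness is easy to argue
    # even if the rate calculation upstream is complex.
    if not (0.0 <= computed_rate <= 0.25):
        raise ValueError(f"Computed rate {computed_rate} is outside safe bounds")
    return balance * (1 + computed_rate)
```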

Top Secure Coding Checklist

  1. Input Validation: Conduct all input validation on a trusted system (the server), identify trusted and untrusted data sources, specify proper character sets, and canonicalise data to a common character set before validating. All validation failures should result in input rejection (a minimal sketch combining input validation with a parameterised database query follows this list).
  2. Authentication: Another important factor in maintaining the security of the code is authentication. It is crucial to require authentication for all pages and resources and to enforce authentication controls on a trusted system. After that, you can establish and use standard, well-tested authentication services. Another important consideration is to segregate authentication logic from the resource being requested. If the app you are developing manages a credential store, it should ensure that passwords are stored only as cryptographically strong, salted one-way hashes. Use only HTTP POST requests to transmit authentication credentials.
  3. Password Management: Passwords should be concealed on the user’s screen, and if someone repeatedly tries to access an account with the wrong credentials, the account should be disabled for a period of time to discourage brute-force password guessing. Temporary passwords should have a short expiration time, and users’ identities must be verified before a password reset. You can also add multi-factor authentication for highly sensitive or transactional accounts.
  4. Access Control: While developing an app you should restrict access to protected URLs, functions, services, object references, application data, user and data attributes, and security configurations to authorized users only. Moreover, the app should deny all access if the security configuration information cannot be accessed.
  5. Error Handling: Most software errors are caused by bugs, which can lead to vulnerabilities. Keeping a log of errors and handling them accordingly are two of the best ways to minimize their impact. Error handling attempts to prevent catastrophic failures by identifying errors in the code before they occur. Using error logging, developers can diagnose errors and mitigate their effects. To comply with secure coding standards, it is necessary to document and log all errors, exceptions, and failures.
  6. Data Protection: To improve data protection, protect highly sensitive information on the server side using encryption (or, where appropriate, cryptographic hashes) and prevent server-side source code from being downloaded by users. Disabling the auto-complete feature on forms containing sensitive information can also help enhance security.
  7. Communication Security: Implement encryption for sensitive data transmission by incorporating Transport Layer Security (TLS) certificates.
  8. System Configuration: System configuration can be secured by using the latest versions of frameworks, servers, and other system components; removing unnecessary files, functionality, and test code; and suppressing detailed version information in HTTP response headers. Provide access only to authorised development and test groups, and isolate development environments from the production network. Development environments are seldom as secure as production environments, which enables attackers to identify shared vulnerabilities or exploit them. It is thus imperative that you implement a software change control system to manage and track changes in both production and development environments.
  9. Database Security: For database security, use strongly typed parameterised queries, apply input and output validation, and use secure credentials to access the database. You should also change or remove default database passwords, replacing them with strong passwords and, where possible, multi-factor authentication.
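
As a minimal sketch of the input validation and database security items above, the example below validates input against an expected pattern and then uses a parameterised query; the table, column and pattern are hypothetical examples:

```python
# Minimal sketch: validate input, then query with parameter binding rather
# than string concatenation (prevents the input being interpreted as SQL).
import re
import sqlite3

USERNAME_PATTERN = re.compile(r"^[a-zA-Z0-9_.-]{3,32}$")  # assumed example rule

def find_user(conn: sqlite3.Connection, username: str):
    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("Rejected input that fails validation")
    cursor = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cursor.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    print(find_user(conn, "alice"))  # (1, 'alice')
    try:
        find_user(conn, "alice' OR '1'='1")
    except ValueError as exc:
        print(exc)  # rejected before it ever reaches the database
```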

General Coding Practices
Now that we have gone through the checklist of secure coding, it is time to have a look at general coding practices that any developer should follow:

  • Always use tested and approved managed code rather than creating new unmanaged code for common tasks.
  • Do not allow the app to issue commands directly to the operating system, especially through application-initiated command shells; instead use task-specific APIs to carry out OS-related tasks.
  • To verify the integrity of libraries, interpreted code, configuration files, and executables, use hashes or checksums (a short verification sketch follows this list).
  • Protect shared resources and variables from inappropriate concurrent access.
  • Do not pass user-supplied data to any dynamic execution function.
  • Users should not be able to alter existing code or introduce new code.
  • Review all third-party code, applications, and libraries to ensure safe functionality and business necessity.
  • Where automatic updates are used, sign the code cryptographically and ensure that download clients verify the signatures.
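
As a short illustration of the hash/checksum practice referenced in the list above, the sketch below computes a SHA-256 digest of a file and compares it with an expected value; in real use the expected digest would come from a trusted source such as the supplier:

```python
# Sketch of verifying file integrity (libraries, configuration files,
# executables) with a cryptographic hash.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path: str, expected_sha256: str) -> bool:
    # expected_sha256 should be obtained from a trusted, out-of-band source.
    return sha256_of(path) == expected_sha256
```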

ISO 27001:2022 A 8.23 Web filtering

This control concerns access to external websites, which should be managed to reduce exposure to malicious content. Web filtering is a technique that monitors and manages where users browse on the Internet, enabling an organization to either allow or block web traffic in order to protect against potential threats and enforce corporate policy. There is a risk of malware attacks on corporate networks and information systems if employees visit websites with malicious content. For example, attackers could send phishing emails to employees’ work emails, enticing them to click on links and visit websites. If an employee visits such a website, malware may be automatically downloaded to the employee’s device, enabling infiltration into corporate networks; this kind of attack, in which malware is downloaded automatically once the employee accesses a website, is referred to as a drive-by download. Organisations must therefore implement appropriate web filtering controls to restrict and control access to external websites and prevent security threats. The organization must establish policies for the use of online resources, which may include restricting access to undesirable or inappropriate websites and web-based applications, for example through an allow-list of acceptable websites or domains or a prohibited-list of websites or domains. A web filter analyzes the web applications accessed by users and restricts access to block-listed websites, content, and domains deemed malicious or inappropriate by admins. Organizations employ web filtering software to block potential cyber risks, limit user interactions on websites, and prevent unsafe or explicit content from being accessed by users.

Internet filtering is one of the layers of network security that thwart cyber risks and maintain productivity. Admins might use web filters to:

  • Block known dangerous websites or harmful URLs that may contain, for example, malware, or spyware.
  • Restrict access to potential emails containing phishing links.
  • Prevent users or students from accessing explicit content, gaming websites, or video streaming sites.
  • Block access to personal data storage applications.
  • Allow only websites or cloud applications authorized by the organization.

Control

Access to external websites should be managed to reduce exposure to malicious content.

Purpose

To protect systems from being compromised by malware and to prevent access to unauthorized web resources.

ISO 27002 Implementation Guidance

The organization should reduce the risks of its personnel accessing websites that contain illegal information or are known to contain viruses or phishing material. A technique for achieving this works by blocking the IP address or domain of the website(s) concerned. Some browsers and anti-malware technologies do this automatically or can be configured to do so. The organization should identify the types of websites to which personnel should or should not have access. The organization should consider blocking access to the following types of websites:

  1. websites that have an information upload function unless permitted for valid business reasons;
  2. known or suspected malicious websites (e.g. those distributing malware or phishing contents);
  3. command and control servers;
  4. malicious websites identified through threat intelligence;
  5. websites sharing illegal content.

Prior to deploying this control, the organization should establish rules for safe and appropriate use of online resources, including any restriction to undesirable or inappropriate websites and web-based applications. The rules should be kept up-to-date. Training should be given to personnel on the secure and appropriate use of online resources including access to the web. The training should include the organization’s rules, contact point for raising security concerns, and exception process when restricted web resources need to be accessed for legitimate business reasons. Training should also be given to personnel to ensure that they do not overrule any browser advisory that reports that a website is not secure but allows the user to proceed.

Other information

Web filtering can include a range of techniques including signatures, heuristics, list of acceptable websites or domains, list of prohibited websites or domains and bespoke configuration to help prevent malicious software and other malicious activity from attacking the organization’s network and systems.

This control requires you to manage which websites your users access, in order to protect your IT systems. This way, you can prevent your systems from being compromised by malicious code, and also prevent users from accessing illegal material on the Internet. You could use tools that block access to particular IP addresses, which could include the use of anti-malware software, or you could use non-technical methods such as publishing a list of forbidden websites and asking users not to visit them. You should set up processes that determine which types of websites are not allowed and how the web filtering tools are maintained. Make employees aware of the dangers of using the Internet and where to find guidelines for safe use, and train your system administrators on how to perform web filtering.

Organisations should establish and implement the controls necessary to prevent employees from accessing external websites that may contain viruses, phishing materials, or other types of illegal information. One effective technique is blocking the IP address or domain of websites identified as dangerous; some browsers and anti-malware tools enable organisations to do this automatically. Organisations should determine which types of websites should not be accessed by employees. In particular, the following types of websites should be blocked:

  • Websites with information upload functionality. Access should be subject to permission and should only be granted for valid business reasons.
  • Websites that are known or suspected to contain malicious material, such as websites with malware content.
  • Command and control servers.
  • Malicious websites obtained from threat intelligence. Organisations should refer to Control 5.7 for more details.
  • Websites distributing illegal content and materials.

Before designing and implementing this Control, organisations are advised to put in place rules for safe and appropriate access to and use of online resources. This should also include imposing restrictions on websites that contain inappropriate materials. These rules should be reviewed and updated at regular intervals.

All staff should be provided with training on how to access and use online resources safely. This training should cover the organisation’s own rules and should explain how staff can raise security concerns with the relevant contact point within the organisation. Furthermore, training should address how staff can access restricted websites for valid business reasons and how this exception process works. Last but not least, training should cover browser advisories that warn users that a website is not secure but still permit them to proceed; staff should be instructed not to ignore such warnings.

Web filtering provides an organization with the ability to control the locations where users are browsing, which is important for a number of reasons:

  • Malware Protection: Phishing and other malicious sites can be used to deliver malware and other malicious content to users’ computers. Web filtering makes it possible for an organization to block access to websites that pose a threat to company and user security.
  • Data Security: Phishing sites are commonly intended to steal user credentials and other sensitive data. By blocking access to these pages, an organization limits the risk that such data will be leaked or breached.
  • Regulatory Compliance: Companies are responsible for complying with a growing number of data protection regulations, which mandate that they protect certain types of data from unauthorized access. With web filtering, an organization can manage access to sites that are likely to try to steal protected data and ones that may be used intentionally or unintentionally to leak data (such as social media or personal cloud storage).
  • Policy Enforcement: Web filtering enables an organization to enforce corporate policies for web usage. All types of web filtering can be used to block inappropriate use of corporate resources, such as visiting sites containing explicit content.

Types of Web Filtering
A web filtering service can work in a variety of ways. One of the ways by which web filtering solutions can be differentiated is by how they define acceptable content. Web filters can be defined in a few ways, including:

  1. Allow Listing: Allow lists are designed to specify the sites that a user, computer, or application is permitted to visit. All web traffic is compared to this list, and any requests with a destination not included on the list are dropped. This provides very strict control over the sites that can be visited.
  2. Block Listing: Block lists are the exact opposite of allow lists. Instead of specifying the sites that a user can visit, they list sites that should not be visited. With a block list, all traffic is inspected, and any traffic to a destination on the list is dropped. This approach is commonly used to protect against known-bad locations, such as phishing sites, drive-by malware downloads, and inappropriate content (a minimal sketch of allow-list and block-list decisions follows this list).
  3. Content Filtering: Content and keyword filtering makes decisions about whether to allow or block traffic based upon the content of a webpage. For example, an organization may have filters in place to block visits to sites containing explicit content. When a request is made, the content of the site is inspected, and the site is blocked if the policy is violated. This filtering approach enables an organization to block malicious or inappropriate sites that it does not even know exist.
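
As a minimal sketch of the allow-list and block-list approaches just described, the example below makes a simple allow/block decision on a URL’s hostname; the listed domains are hypothetical examples, and a real web filter would operate at the proxy, DNS or gateway level rather than in a script:

```python
# Illustrative sketch of allow-list and block-list filtering decisions.
from urllib.parse import urlparse

ALLOW_LIST = {"intranet.example.com", "docs.example.com"}          # hypothetical
BLOCK_LIST = {"known-phishing.example.net", "malware-drop.example.org"}

def filter_request(url: str, mode: str = "blocklist") -> str:
    host = urlparse(url).hostname or ""
    if mode == "allowlist":
        return "allow" if host in ALLOW_LIST else "block"
    return "block" if host in BLOCK_LIST else "allow"

if __name__ == "__main__":
    print(filter_request("https://docs.example.com/page", mode="allowlist"))  # allow
    print(filter_request("https://known-phishing.example.net/login"))         # block
```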

In addition to filter types, different web filtering solutions can differ in terms of where they look to apply their rules. Filters can be applied in a few different ways, such as:

  • DNS Filtering: The Domain Name Service (DNS) is the phone book of the Internet, translating domains (like google.com) to the IP addresses used by computers to route traffic. DNS filtering monitors requests for DNS lookups and allows or blocks the traffic based upon policy.
  • URL Filtering: A URL is the address of a webpage. URL filtering inspects the URLs contained within web requests and determines whether or not to allow a request to go through based on policy.
  • Content Filtering: Content filtering looks at the contents of a requested webpage. If a response violates policy, then it is blocked.

Finally, web filtering solutions can be classified by where the filter is applied. The options for this include:

  • Client-Side Filtering: Client-side web filtering is performed by software installed on a user’s computer. It inspects all outbound and inbound traffic and allows or blocks it based upon policy.
  • Server-Side Filtering: Server-side filtering is performed via a solution located either on-premises or in the cloud. All web traffic is routed through this solution, providing it with visibility and control.

ISO 27001:2022 A 8.16 Monitoring activities

Monitoring of networks, systems and applications is the cornerstone of any successful IT support and information security operation. It involves collecting and analyzing information to detect suspicious behavior or unauthorized system changes on your network, defining which types of behavior should trigger alerts, and taking action on alerts as needed. It is vitally important for organisations to promote a proactive approach to monitoring that seeks to prevent incidents before they happen and works in tandem with reactive efforts to form an end-to-end information security and incident resolution strategy. From hackers and malware, to disgruntled or careless employees, to outdated or otherwise vulnerable devices and operating systems, to mobile and public cloud computing, to third-party service providers, most companies are routinely exposed to security threats of varying severity in the normal course of conducting business. Given the ubiquitous, unavoidable nature of security risks, quick response time is essential to maintaining system security, and automated, continuous security monitoring is key to quick threat detection and response. This requires the management and monitoring of systems to identify unusual activity and to instigate appropriate incident responses. The control requires that networks, systems and applications be monitored for anomalous behavior and that appropriate actions be taken to evaluate potential information security incidents.

Control

Networks, systems and applications should be monitored for anomalous behavior and appropriate actions taken to evaluate potential information security incidents.

Purpose

To detect anomalous behavior and potential information security incidents.

ISO 27002 Implementation Guidance

The monitoring scope and level should be determined in accordance with business and information security requirements and taking into consideration relevant laws and regulations. Monitoring records should be maintained for defined retention periods. The following should be considered for inclusion within the monitoring system:

  1. outbound and inbound network, system and application traffic;
  2. access to systems, servers, networking equipment, monitoring system, critical applications, etc.;
  3. critical or admin level system and network configuration files;
  4. logs from security tools [e.g. antivirus, IDS, intrusion prevention system (IPS), web filters, firewalls, data leakage prevention];
  5. event logs relating to system and network activity;
  6. checking that the code being executed is authorized to run in the system and that it has not been tampered with (e.g. by recompilation to add additional unwanted code);
  7. use of the resources (e.g. CPU, hard disks, memory, bandwidth) and their performance.
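
As a purely illustrative example of how such a monitoring scope and its retention periods might be recorded, the snippet below sketches a simple configuration in Python. The source names and retention values are invented; in a real organisation they would be driven by the topic-specific policy and the relevant legal requirements.

    # Hypothetical monitoring scope; names and retention periods are examples only.
    MONITORING_SCOPE = {
        "network_traffic":     {"direction": ["inbound", "outbound"], "retention_days": 90},
        "system_access_logs":  {"covers": ["servers", "network equipment", "critical applications"],
                                "retention_days": 365},
        "config_file_changes": {"covers": ["critical and admin-level configuration files"],
                                "retention_days": 365},
        "security_tool_logs":  {"covers": ["antivirus", "IDS", "IPS", "web filter", "firewall", "DLP"],
                                "retention_days": 180},
        "resource_usage":      {"covers": ["cpu", "disk", "memory", "bandwidth"], "retention_days": 30},
    }

    if __name__ == "__main__":
        for source, settings in MONITORING_SCOPE.items():
            print(f"{source}: retain for {settings['retention_days']} days")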

The organization should establish a baseline of normal behavior and monitor against this baseline for anomalies. When establishing a baseline, the following should be considered:

  1. reviewing utilization of systems at normal and peak periods;
  2. usual time of access, location of access, frequency of access for each user or group of users.

The monitoring system should be configured against the established baseline to identify anomalous behavior, such as:

  1. unplanned termination of processes or applications;
  2. activity typically associated with malware or traffic originating from known malicious IP addresses or network domains (e.g. those associated with botnet command and control servers);
  3. known attack characteristics (e.g. denial of service and buffer overflows);
  4. unusual system behavior (e.g. keystroke logging, process injection and deviations in use of standard protocols);
  5. bottlenecks and overloads (e.g. network queuing, latency levels and network jitter);
  6. unauthorized access (actual or attempted) to systems or information;
  7. unauthorized scanning of business applications, systems and networks;
  8. successful and unsuccessful attempts to access protected resources (e.g. DNS servers, web portals and file systems);
  9. unusual user and system behavior in relation to expected behavior.
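
To illustrate what monitoring against a baseline can look like in its simplest form, the sketch below derives each user's usual logon hours from historical records and flags logons that fall well outside that pattern. It is written in Python with made-up data and thresholds; real monitoring tools apply the same baseline-then-deviation idea across many more signals and with far more robust statistics or machine learning.

    from collections import defaultdict
    from statistics import mean, pstdev

    # Hypothetical historical logon records: (user, hour of day). In practice these
    # would come from the authentication logs collected by the monitoring system.
    HISTORY = [("alice", h) for h in (8, 9, 9, 10, 8, 9)] + \
              [("bob", h) for h in (13, 14, 14, 15, 13, 14)]

    def build_baseline(records):
        """Mean and standard deviation of logon hour, per user."""
        per_user = defaultdict(list)
        for user, hour in records:
            per_user[user].append(hour)
        return {user: (mean(hours), pstdev(hours)) for user, hours in per_user.items()}

    def is_anomalous(user, hour, baseline, tolerance=3.0):
        """Flag logons more than `tolerance` deviations away from the user's norm."""
        if user not in baseline:
            return True                                    # unknown user: treat as anomalous
        avg, sd = baseline[user]
        return abs(hour - avg) > tolerance * max(sd, 1.0)  # floor sd to avoid a zero spread

    if __name__ == "__main__":
        baseline = build_baseline(HISTORY)
        for user, hour in (("alice", 9), ("alice", 3), ("mallory", 2)):
            print(user, f"logon at {hour:02d}:00 ->",
                  "ANOMALY" if is_anomalous(user, hour, baseline) else "normal")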

Continuous monitoring via a monitoring tool should be used. Monitoring should be done in real time or in periodic intervals, subject to organizational need and capabilities. Monitoring tools should include the ability to handle large amounts of data, adapt to a constantly changing threat landscape, and allow for real-time notification. The tools should also be able to recognize specific signatures and data or network or application behavior patterns.

Automated monitoring software should be configured to generate alerts (e.g. via management consoles, email messages or instant messaging systems) based on predefined thresholds. The alerting system should be tuned and trained on the organization’s baseline to minimize false positives. Personnel should be dedicated to respond to alerts and should be properly trained to accurately interpret potential incidents. There should be redundant systems and processes in place to receive and respond to alert notifications.

Abnormal events should be communicated to relevant parties in order to improve the following activities: auditing, security evaluation, vulnerability scanning and monitoring. Procedures should be in place to respond to positive indicators from the monitoring system in a timely manner, in order to minimize the effect of adverse events on information security. Procedures should also be established to identify and address false positives including tuning the monitoring software to reduce the number of future false positives.
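
As an illustration of alerting on predefined thresholds, the sketch below counts failed logons per source address within a sliding time window and raises an alert only once the count exceeds a threshold. The threshold, window and notification step are invented for the example; tuning them against the organisation's own baseline is exactly the false-positive reduction work described above.

    from collections import defaultdict, deque
    import time

    FAILED_LOGON_THRESHOLD = 5        # illustrative: alert above this many failures...
    WINDOW_SECONDS = 300              # ...within this sliding window

    class ThresholdAlerter:
        def __init__(self):
            self.events = defaultdict(deque)          # source IP -> recent failure timestamps

        def record_failure(self, source_ip, now=None):
            now = time.time() if now is None else now
            window = self.events[source_ip]
            window.append(now)
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()                      # drop events that aged out of the window
            if len(window) > FAILED_LOGON_THRESHOLD:
                self.alert(source_ip, len(window))

        def alert(self, source_ip, count):
            # Stand-in for a management console entry, email or instant message.
            print(f"ALERT: {count} failed logons from {source_ip} "
                  f"in the last {WINDOW_SECONDS // 60} minutes")

    if __name__ == "__main__":
        alerter = ThresholdAlerter()
        start = 1_000_000.0
        for i in range(8):                            # simulated burst of failed logons
            alerter.record_failure("203.0.113.7", now=start + i * 10)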

Other information

Security monitoring can be enhanced by:

  1. leveraging threat intelligence systems;
  2. leveraging machine learning and artificial intelligence capabilities;
  3. using block lists or allow lists;
  4. undertaking a range of technical security assessments (e.g. vulnerability assessments, penetration testing, cyber-attack simulations and cyber response exercises), and using the results of these assessments to help determine baselines or acceptable behavior;
  5. using performance monitoring systems to help establish and detect anomalous behaviour;
  6. leveraging logs in combination with monitoring systems.

Monitoring activities are often conducted using specialist software, such as intrusion detection systems. These can be configured to a baseline of normal, acceptable and expected system and network activities. Monitoring for anomalous communications helps in the identification of botnets (i.e. sets of devices under the malicious control of the botnet owner, usually used for mounting distributed denial of service attacks on computers belonging to other organizations). If a computer is being controlled by an external device, there will be communication between the infected device and the controller. The organization should therefore employ technologies to monitor for anomalous communications and take such action as necessary.
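
Command-and-control traffic of this kind often appears as "beaconing": small connections from one internal host to the same external address at very regular intervals. The sketch below, using invented flow records, flags destinations that are contacted with suspiciously regular timing; real detection would also draw on threat intelligence feeds, data volumes and protocol metadata.

    from collections import defaultdict
    from statistics import mean, pstdev

    # Hypothetical flow records: (source host, destination IP, timestamp in seconds).
    FLOWS = [("pc-17", "198.51.100.9", 60.0 * i) for i in range(20)] + \
            [("pc-17", "93.184.216.34", t) for t in (5, 130, 700, 1500, 1510, 2200)]

    def beaconing_candidates(flows, min_connections=10, max_jitter_ratio=0.1):
        """Return (source, destination) pairs whose contact intervals are very regular."""
        timestamps = defaultdict(list)
        for src, dst, ts in flows:
            timestamps[(src, dst)].append(ts)
        suspects = []
        for pair, times in timestamps.items():
            if len(times) < min_connections:
                continue
            times.sort()
            intervals = [b - a for a, b in zip(times, times[1:])]
            avg = mean(intervals)
            if avg > 0 and pstdev(intervals) / avg < max_jitter_ratio:
                suspects.append((pair, avg))
        return suspects

    if __name__ == "__main__":
        for (src, dst), period in beaconing_candidates(FLOWS):
            print(f"Possible beaconing: {src} -> {dst} roughly every {period:.0f} seconds")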

Security monitoring is the process of continuously observing activity on an organization’s network, keeping an eye on traffic and behavior intended to harm its data (for example, a data breach) or otherwise create cyber threats, and raising an alert to the security incident response process when such activity is detected. The main purpose of security monitoring is to preserve the following aspects:

  • Reputation
  • Privacy of User Data
  • Availability
  • Protection against misuse of the organization’s services

The extent to which you carry out monitoring activities should be determined in accordance with the security requirements from your risk assessment, and take into consideration relevant laws and regulations. Monitoring records should be retained for auditing purposes for a defined retention period. The following should be considered for inclusion within the monitoring system:

  1. outbound and inbound network, system and application traffic,
  2. records of access to physical and virtual systems or applications,
  3. monitoring of changes to configuration files,
  4. logs from security tools such as anti-malware or web filtering systems,
  5. event logs relating to system and user activity.

Monitoring should be first and foremost carried out in line with any regulatory requirements or prevailing legislation, and any records retained in accordance with the company retention policy. Suspect events should be reported to all relevant personnel in order to maintain network integrity and improve business continuity, alongside the following processes:

  • Auditing
  • Security and risk evaluation
  • Vulnerability scanning
  • Monitoring

Organisations should include the following in their monitoring operation:

  • Both inbound and outbound network traffic, including data to and from applications
  • Access to business critical platforms, including (but not limited to) Systems, Servers, Networking hardware, the monitoring system itself
  • Configuration files
  • Event logs from security equipment and software platforms
  • Code checks that ensure any executable programs are both authorised and tamper-free
  • Compute, storage and networking resource usage

Organisations should gain a firm understanding of normal user activity and network behavior, and use this as a baseline to identify anomalous behavior across the network, including:

  • Sudden closure or termination of processes and applications
  • Network traffic that is recognised as flowing to and/or from problematic IP addresses and/or external domains
  • Well-known intrusion methods (e.g. DDoS)
  • Malicious system behaviour (e.g. key logging)
  • Network bottlenecks and high ping and/or latency times
  • Unauthorised or unexplainable access to and/or scanning of data, domains or applications
  • Any attempts to access business critical ICT resources (e.g. domain controllers, DNS servers, file servers and web portals)
To establish a successful baseline, organisations should monitor network utilisation and access times at normal working levels.

Organisations should optimize their monitoring efforts using specialized monitoring tools that are suited to the type of network and traffic they deal with on a daily basis. Monitoring tools should be able to:

  • Handle large amounts of monitoring data
  • React to suspect data, traffic and user behavior patterns and one-off activities
  • Amend monitoring activities to react to different risk levels
  • Notify organisations of anomalous activity in real time, through a series of proactive alerts that generate a minimal number of false positives
  • Rely on an adequate level of application redundancy in order to maintain a continuous monitoring operation

Security monitoring should be optimized through:

  • Dedicated threat intelligence systems and intrusion protection platforms
  • Machine learning platforms
  • Allow lists and block lists (whitelists and blacklists) on IP management platforms and email security software
  • Combining logging and monitoring activities into one end-to-end approach
  • A dedicated approach to well-known intrusion methods and malicious activity, such as the use of botnets, or denial of service attacks

There are many methods an attacker can use to make a website or application unavailable to users or otherwise harm it, such as DDoS attacks, injecting malicious code or commands, etc.

DDoS: DDoS stands for Distributed Denial of Service. In this attack, an attacker sends a very large number of packets or requests, continuously, until the target fails or its resources are exhausted, resulting in the unavailability of the resources provided by the organization.

Injecting Malicious Code or Commands: When an attacker injects malicious code or commands into input fields or URL endpoints, the privacy of users’ data can be harmed. These kinds of commands or code should be identified and blocked; security monitoring is therefore configured to prevent, block or reject such requests, as illustrated in the sketch below.
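
The sketch below gives a rough idea of how such requests might be spotted. It scans request parameters for strings commonly associated with SQL injection or command injection and rejects matching requests; the patterns and sample requests are simplified examples, not a production rule set, and real protection relies primarily on parameterized queries, input validation and a web application firewall rather than pattern matching alone.

    import re

    # Simplified, illustrative signatures; real WAF rule sets are far more extensive.
    INJECTION_PATTERNS = [
        re.compile(r"('|\")\s*or\s+\d+\s*=\s*\d+", re.IGNORECASE),   # e.g. ' OR 1=1
        re.compile(r"union\s+select", re.IGNORECASE),                # UNION-based SQL probes
        re.compile(r";\s*(cat|rm|wget|curl)\b", re.IGNORECASE),      # shell command chaining
    ]

    def is_suspicious(value: str) -> bool:
        return any(pattern.search(value) for pattern in INJECTION_PATTERNS)

    def inspect_request(params: dict) -> str:
        """Return 'BLOCK' if any parameter looks like an injection attempt, else 'ALLOW'."""
        return "BLOCK" if any(is_suspicious(v) for v in params.values()) else "ALLOW"

    if __name__ == "__main__":
        samples = [
            {"user": "alice", "q": "quarterly report"},
            {"user": "admin' OR 1=1 --", "q": "x"},
            {"file": "report.txt; cat /etc/passwd"},
        ]
        for params in samples:
            print(inspect_request(params), params)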

Cyber security threat monitoring provides real-time visibility of the network and helps identify unusual or malicious behavior on it. This helps the cyber security or IT team take preventive steps before an attack incident occurs. Consider two main types of monitoring:

1. Endpoint Monitoring: Endpoints are the devices connected to a network, such as laptops, desktops, smartphones and IoT (Internet of Things) devices. Endpoint monitoring consists of analyzing the behavior of the devices connected to a specific network, which helps the IT team detect threats and take preventive measures when behavior is malicious, unusual or suspicious.

2. Network Monitoring: A network is the connection between different devices that allows them to communicate and share information and assets. Network monitoring entails tracking and analyzing the network and responding on the basis of the results obtained during monitoring. If network components are not working properly, for example a component is overloaded, keeps crashing or is slow, this can lead to cyber threats and leaves the system vulnerable.

Many diagnostic tools continuously check these components, keep logs of the results and, if there is any disturbance or threat, automatically notify the IT team through a variety of channels so that the team can fix the error or problem. To protect itself from these kinds of cyber-attacks, the organization has to monitor the network and the packets directed towards it, and prevent incidents before they cause damage.
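
A very small example of this kind of diagnostic check is sketched below. It uses only the Python standard library; the host, port, threshold and notification function are placeholders chosen for illustration. It checks disk headroom and whether a service port answers, records the result, and "notifies" the IT team when a check fails.

    import logging
    import shutil
    import socket

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

    DISK_USAGE_LIMIT = 0.90                  # illustrative threshold: warn above 90 % used
    SERVICE_CHECKS = [("localhost", 22)]     # placeholder host/port pairs to probe

    def notify_it_team(message: str) -> None:
        # Stand-in for an email, chat or ticketing integration.
        logging.warning("NOTIFY IT: %s", message)

    def check_disk(path: str = "/") -> None:
        usage = shutil.disk_usage(path)
        used_fraction = usage.used / usage.total
        logging.info("Disk %s is %.0f%% full", path, used_fraction * 100)
        if used_fraction > DISK_USAGE_LIMIT:
            notify_it_team(f"Disk {path} is above {DISK_USAGE_LIMIT:.0%} used")

    def check_service(host: str, port: int, timeout: float = 2.0) -> None:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                logging.info("Service %s:%d is reachable", host, port)
        except OSError as exc:
            notify_it_team(f"Service {host}:{port} unreachable: {exc}")

    if __name__ == "__main__":
        check_disk("/")
        for host, port in SERVICE_CHECKS:
            check_service(host, port)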

  1. Minimize Data Breaches: Continuous monitoring of the network helps detect a threat before an incident occurs, so the organization can prevent such attacks from affecting the information it holds about its users and employees. Continuous security monitoring is therefore highly effective.
  2. Improve Your Time to Respond to Attacks: Most organizations take security measures to prevent cyber threats and attacks, but if attackers nevertheless succeed, the organization must be ready to respond to the attack and fix it as soon as it is detected, because the organization’s assets must be available to its users 24 x 7.
  3. Address Security Vulnerabilities: Every system has loopholes (vulnerabilities). Addressing security vulnerabilities means finding the vulnerabilities in the network and fixing them before an attacker can find and exploit them. This also includes keeping all protocols and firewalls up to date. Many organizations run bug hunting (bug bounty) programmes, inviting ethical hackers to test the system ethically and report vulnerabilities so the organization can confirm and fix them; bounties, swag or a place in a hall of fame are offered according to the severity of the vulnerability.
  4. Compliance with Standards and Regulations: The most basic and fundamental terms in cyber security are Confidentiality, Integrity and Availability (CIA). An organization is required to meet these requirements for the data it holds. If even a single requirement is not met, the chance of a vulnerability existing in the network increases, which can also harm the organization’s reputation. Continuous cyber security monitoring helps address these kinds of problems.
  5. Reduce Downtime: Reducing downtime means ensuring that the organization’s network is fully functional and can handle all operations, because network downtime can harm the organization’s reputation and finances. If the organization faces a threat, it should respond and fix it as soon as possible; continuous cyber security monitoring decreases the chance of the server or network going down.
  6. The Nature of Threats has Changed: Cyber criminals are getting smarter and sharper day by day. They are always trying to get through the defences an organization sets up for its network, and they keep bringing new attacks, tricks and tactics to carry out their malicious activity. The best way to tackle this is to monitor the network continuously.
  7. Rise in Remote Work: Organizations have started using cloud services to provide the essentials to employees who are working from home. This creates an access control challenge: unauthorized people must not be able to reach the data, yet unauthorized access can still occur. It is therefore a good move to monitor the traffic, detect threats, and block or blacklist any unauthorized user who attempts access.
  8. Increase the Productivity of Employees: Employees play an important role in any organization, and every organization wants them to be productive. Focusing on the IT infrastructure boosts employee productivity, because a well-structured and secure network lets employees focus on their core skills and work, and even work faster. Keeping a security expert to handle the technical responsibilities supports this and boosts the productivity of all employees.

Effective Steps for Cyber Security Monitoring
An organization should always be careful about the traffic passing through its network, because a single malicious packet can cost the organization its reputation and money. Prevention is better than cure, so an organization should manage its network traffic by taking some effective and efficient steps.

  1. SIEM Tools and Software Solutions: A Security Information and Event Management (SIEM) platform plays an important role in any organization’s security monitoring. SIEM combines security information management and security event management in one set of software and services. Its job is to monitor and analyze log data efficiently and to bring all monitoring logs together in one place, making analysis and further assessment easier. This helps the IT team review the logs, fix issues and prepare for possible future cyber threats.
  2. Trained Experts: The tools discussed above will do their job, but on their own they are not enough. A trained expert who understands the infrastructure is important in the team, because that expert will know where to look and what to look for. An experienced expert has the knowledge, understanding and ability to identify a threat and fix it as soon as possible, and will also know how to speed up the organization’s response when a cyber threat occurs.
  3. Trained Employees: Trained employees play as vital a role in an organization’s security as a trained expert does. It is important to educate staff about how to protect the organization from the malicious and sudden attacks an attacker might attempt. A well-trained employee will know the symptoms, effects and precautions associated with common cyber-attacks, and will understand the importance of cyber security to the organization.
  4. Managed Services: Managing services carefully is important because an attacker can exploit services that are not actually required. Setting strong protocols and metrics helps improve security, and an organization should enable only the services it requires, because this reduces risk effectively. Some tools can help the organization manage and monitor the services running on its network and systems; a small mistake in managing services can lead to major reputational or financial loss.
  5. Identify Assets and Events that Need to be Logged and Monitored: Unusual events should be logged (recorded) and monitored. This gives two advantages: first, if a data compromise occurs, the investigation team can trace the attacker; second, the security team can analyze the recorded events to find and fix the underlying vulnerability.
  6. Establish an Active Monitoring, Alerting and Incident Response Plan: No organization can dedicate a team to manually blocking or rejecting every single event of the same type that could harm its systems, so three complementary activities are used:
  7. Active Monitoring: Active monitoring is the continuous monitoring of traffic using a SIEM (Security Information and Event Management) tool, which automates the monitoring process. Many SIEM tools are available on the market.
  8. Incident Response: The organization preconfigures the SIEM tool to decide which packets (requests) should be accepted, rejected or blocked (blacklisted), based on the structure or pattern of the request. Incident response is also carried out manually: if a major incident happens, security professionals create a plan and make rapid decisions to contain and overcome it. This whole activity is known as incident response.
  9. Alerting: Alerting sends notifications to the configured users or administrators when certain actions occur, for example when someone tries to upload a malicious file or brute-force the admin panel password.
  10. Define the Need for Logging and Monitoring: By using logs, the security team can improve security based on the log content. The biggest advantage of monitoring is automation: even without the involvement of a security professional, the monitoring system can block, reject or blacklist a request.
  11. Keep Monitoring Plan, Firewall and Protocols Up-to-date: It is extremely essential to keep monitoring plan, firewall and protocols up-to-date because if any attacker gets the version of any service and if it is not at the latest version then the attacker can exploit that service and harm the organization. The update contains the latest bug fixes which makes system more secure.