ISO 27001:2022 A 8.26 Application security requirements

Application security is the practice of using security software, hardware, techniques, best practices and procedures to protect computer applications from external security threats. Application software programs such as web applications, graphics software, database software, and payment processing software are vital to many critical business operations. However, these applications are often exposed to security vulnerabilities that may result in the compromise of sensitive information. Organisations can establish and apply information security requirements for the development, use, and acquisition of applications.

Application security describes security measures at the application level that aim to prevent data or code within the app from being stolen or hijacked. It encompasses the security considerations that arise during application development and design, but it also involves systems and approaches to protect apps after they are deployed. Application security may include hardware, software, and procedures that identify or minimize security vulnerabilities. A router that prevents anyone from viewing a computer’s IP address from the internet is a form of hardware application security. Security measures at the application level are also typically built into the software, such as an application firewall that strictly defines what activities are allowed and prohibited. Procedures can entail things like an application security routine that includes protocols such as regular testing.

Control

Information security requirements should be identified, specified and approved when developing or acquiring applications.

Purpose

To ensure all information security requirements are identified and addressed when developing or acquiring applications.

ISO 27002 Implementation Guidance

General

Application security requirements should be identified and specified. These requirements are usually determined through a risk assessment. The requirements should be developed with the support of information security specialists.
Application security requirements can cover a wide range of topics, depending on the purpose of the application.
Application security requirements should include, as applicable:

  1. level of trust in identity of entities (e.g. through authentication);
  2. identifying the type of information and classification level to be processed by the application;
  3. need for segregation of access and level of access to data and functions in the application;
  4. resilience against malicious attacks or unintentional disruptions [e.g. protection against buffer overflow or structured query language (SQL) injection];
  5. legal, statutory and regulatory requirements in the jurisdiction where the transaction is generated, processed, completed or stored;
  6. need for privacy associated with all parties involved;
  7. the protection requirements of any confidential information;
  8. protection of data while being processed, in transit and at rest;
  9. need to securely encrypt communications between all involved parties;
  10. input controls, including integrity checks and input validation (see the sketch after this list);
  11. automated controls (e.g. approval limits or dual approvals);
  12. output controls, also considering who can access outputs and the authorization for doing so;
  13. restrictions around content of “free-text” fields, as these can lead to uncontrolled storage of confidential data (e.g. personal data);
  14. requirements derived from the business process, such as transaction logging and monitoring, and non-repudiation requirements;
  15. requirements mandated by other security controls (e.g. interfaces to logging and monitoring or data leakage detection systems);
  16. error message handling.
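
As a concrete illustration of item 10 (and the SQL injection concern in item 4), here is a minimal Python sketch combining allow-list input validation with a parameterized database query. It is an assumption-laden example, not a definitive implementation: the table, columns, and username pattern are hypothetical.

```python
# Minimal sketch of input validation plus parameter binding (illustrative only).
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list validation

def find_user(conn: sqlite3.Connection, username: str):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")  # reject rather than silently sanitize
    # Parameter binding keeps user input out of the SQL text (SQL injection defence).
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
print(find_user(conn, "alice"))  # -> (1, 'alice')
# find_user(conn, "alice'; DROP TABLE users; --")  # would raise ValueError
```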

Transactional services

Additionally, for applications offering transactional services between the organization and a partner, the following should be considered when identifying information security requirements:

  1. the level of trust each party requires in each other’s claimed identity;
  2. the level of trust required in the integrity of information exchanged or processed and the mechanisms for identification of lack of integrity (e.g. cyclic redundancy check, hashing, digital signatures; see the sketch after this list);
  3. authorization processes associated with who can approve contents of, issue or sign key transactional documents;
  4. confidentiality, integrity, proof of dispatch and receipt of key documents and the non-repudiation (e.g. contracts associated with tendering and contract processes);
  5. the confidentiality and integrity of any transactions (e.g. orders, delivery address details and confirmation of receipts);
  6. requirements on how long a transaction is to be kept confidential;
  7. insurance and other contractual requirements.
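
As a sketch of the integrity mechanisms in item 2 (hashing, digital signatures), the following Python example protects a document with an HMAC so that any alteration is detected. The key and document are illustrative; note that an HMAC with a shared key provides integrity and authenticity but not non-repudiation, which requires asymmetric digital signatures.

```python
# Minimal sketch of a document integrity check using HMAC-SHA256 (illustrative).
import hashlib
import hmac

def sign(document: bytes, key: bytes) -> str:
    return hmac.new(key, document, hashlib.sha256).hexdigest()

def verify(document: bytes, key: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(document, key), tag)

key = b"shared-secret"  # in practice, from a key management process
doc = b"PO-1234: 100 units to 1 Example Street"
tag = sign(doc, key)
assert verify(doc, key, tag)                      # unmodified document passes
assert not verify(doc + b" (altered)", key, tag)  # any alteration is detected
```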

Electronic ordering and payment applications

Additionally, for applications involving electronic ordering and payment, the following should be considered:

  1. requirements for maintaining the confidentiality and integrity of order information;
  2. the degree of verification appropriate to verify payment information supplied by a customer;
  3. avoidance of loss or duplication of transaction information;
  4. storing transaction details outside of any publicly accessible environment (e.g. on a storage platform existing on the organizational intranet, and not retained and exposed on electronic storage media directly accessible from the internet);
  5. where a trusted authority is used (e.g. for the purposes of issuing and maintaining digital signatures or digital certificates), security is integrated and embedded throughout the entire end-to-end certificate or signature management process.

Several of the above considerations can be addressed by the application of cryptography, taking into consideration legal requirements.
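
As a minimal sketch of applying cryptography to transaction data, the example below encrypts and decrypts an order record with the third-party Python cryptography package. Generating the key in-process is purely illustrative; a real deployment would obtain keys from a key management service and take the legal requirements noted above into account.

```python
# Minimal sketch of symmetric encryption for transaction data (illustrative).
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, retrieved from a key vault
f = Fernet(key)

token = f.encrypt(b"order=42;card=XXXX-XXXX-XXXX-1234")
plaintext = f.decrypt(token)  # raises InvalidToken if the token was tampered with
assert plaintext.startswith(b"order=42")
```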

Other information

Applications accessible via networks are subject to a range of network-related threats, such as fraudulent activities, contract disputes, disclosure of information to the public, incomplete transmission, mis-routing, unauthorized message alteration, duplication or replay. Therefore, detailed risk assessments and careful determination of controls are indispensable. Required controls often include cryptographic methods for authentication and securing data transfer.

Application security is the process of developing, adding, and testing security features within applications to prevent security vulnerabilities against threats such as unauthorized access and modification. Application security is important because today’s applications are often available over various networks and connected to the cloud, increasing their exposure to security threats and breaches. There is increasing pressure and incentive to ensure security not only at the network level but also within applications themselves, partly because attackers target applications more today than in the past. Application security testing can reveal weaknesses at the application level, helping to prevent these attacks.

Security measures include improving security practices in the software development life cycle and throughout the application life cycle. All application security activities should minimize the likelihood that malicious actors can gain unauthorized access to systems, applications, or data. The ultimate goal of application security is to prevent attackers from accessing, modifying, or deleting sensitive or proprietary data. Any action taken to ensure application security is a countermeasure or security control: a safeguard or countermeasure prescribed for an information system or an organization, designed to protect the confidentiality, integrity, and availability of its information and to meet a set of defined security requirements.

Countermeasures include the following:

  • firewalls
  • encryption and decryption programs
  • antivirus programs
  • spyware detection and removal programs
  • biometric authentication systems

Types of application security
Different types of application security features include authentication, authorization, encryption, logging, and application security testing. Developers can also code applications to reduce security vulnerabilities.

Authentication: When software developers build procedures into an application to ensure that only authorized users gain access to it. Authentication procedures ensure that a user is who they say they are. This can be accomplished by requiring the user to provide a user name and password when logging in to an application. Multi-factor authentication requires more than one form of authentication—the factors might include something you know (a password), something you have (a mobile device), and something you are (a thumb print or facial recognition).
Authorization: After a user has been authenticated, the user may be authorized to access and use the application. The system can validate that a user has permission to access the application by comparing the user’s identity with a list of authorized users. Authentication must happen before authorization so that the application matches only validated user credentials to the authorized user list.
Encryption: After a user has been authenticated and is using the application, other security measures can protect sensitive data from being seen or even used by a cybercriminal. In cloud-based applications, where traffic containing sensitive data travels between the end user and the cloud, that traffic can be encrypted to keep the data safe.
Logging: If there is a security breach in an application, logging can help identify who got access to the data and how. Application log files provide a time-stamped record of which aspects of the application were accessed and by whom.
Application security testing: A necessary process to ensure that all of these security controls work properly.
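
To make the authentication measures described above concrete, here is a minimal Python sketch of salted password hashing with constant-time verification, using only the standard library. The iteration count and inline handling of credentials are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of salted password verification (illustrative, not production-ready).
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative PBKDF2 work factor

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a salted hash to store instead of the plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Constant-time comparison to avoid leaking information via timing."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong password", salt, stored)
```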

Organisations should carry out a risk assessment to determine the type of information security requirements appropriate to a particular application. While the content and types of information security requirements may vary depending on the nature of the application, the requirements should address the following:

  • The degree of trust assigned to the identity of specific entities.
  • Identification of the level of classification assigned to information assets to be stored on or processed by the application.
  • Whether there is a need to segregate the access to functions and information stored on the application.
  • Whether the application is resilient against cyber attacks such as SQL injections and against unintentional disruptions such as buffer overflows.
  • Legal, regulatory and statutory requirements and standards applicable to the transaction processed, generated, stored, or completed by the application.
  • Privacy considerations for all parties involved.
  • Requirements for the protection of confidential data.
  • Protection of information when in use, in transit, or at rest.
  • Whether secure encryption of communications between all relevant parties is necessary.
  • Implementation of input controls such as input validation or performing integrity checks.
  • Carrying out automated controls.
  • Performing output controls, taking into account who can view outputs and the authorisation for access.
  • Need to impose restrictions on the content of “free-text” fields to prevent the uncontrolled dissemination of confidential data (e.g. personal data).
  • Requirements arising out of business needs such as logging of transactions and non-repudiation requirements.
  • Requirements imposed by other security controls such as data leakage detection systems.
  • How to handle error messages.

Organisations should take into account the following seven recommendations when an application offers transactional services between the organisation and a partner:

  • The degree of trust each party in the transaction requires in the other party’s identity.
  • The degree of trust required in the integrity of data communicated or processed and identification of a proper mechanism to detect any lack of integrity, including tools such as hashing and digital signatures.
  • Establishment of an authorisation process for who is allowed to approve the content of, sign, or sign off on essential transactional documents.
  • Maintaining the confidentiality and integrity of the critical documents and proving sending and receipt of such documents.
  • Protecting and maintaining the integrity and confidentiality of all transactions such as orders and receipts.
  • Requirements for how long transactions are to be kept confidential.
  • Contractual requirements and requirements related to insurance.

When applications include payment and electronic ordering functionality, organisations should take into account the following:

  • Requirements ensuring that the confidentiality and integrity of order information are not compromised.
  • Determining an appropriate degree of verification for the payment details provided by a customer.
  • Preventing the loss or duplication of transaction information.
  • Ensuring that transaction information is stored outside of a publicly accessible environment, such as on storage media housed on the organisation’s own intranet.
  • Where organisations rely on a trusted external authority such as for the issuance of digital signatures, they must ensure that security is integrated across the entire process.

Application security controls are techniques to enhance the security of an application at the coding level, making it less vulnerable to threats. Many of these controls deal with how the application responds to unexpected inputs that a cybercriminal might use to exploit a weakness. A programmer can write application code in such a way that the programmer has more control over the outcome of these unexpected inputs. Fuzzing is a type of application security testing where developers test the results of unexpected values or inputs to discover which ones cause the application to act in an unexpected way that might open a security hole.

When applications are accessed through networks, they are vulnerable to threats such as contract disputes, fraudulent activities, mis-routing, unauthorized changes to the content of communications, and loss of confidentiality of sensitive information. Organisations should perform comprehensive risk assessments to identify appropriate controls, such as the use of cryptography, to ensure the security of information transfers. The information involved in application service transactions must be protected to prevent incomplete transmission, mis-routing, unauthorized message alteration, unauthorized disclosure, unauthorized message duplication, or replay. Additional protection is likely to be required to secure application service transactions (not necessarily just financial transactions), and may include the use of electronic signatures, encryption, and secure protocols. Ongoing monitoring of such transactions in near real time is also likely to be required.
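
The fuzzing approach mentioned above can be sketched very simply. In the Python example below, parse_record is a hypothetical stand-in for the code under test; random byte strings are fed to it and any exception is reported as a potential finding. Real fuzzing would normally use coverage-guided tooling (e.g. AFL, libFuzzer, or Atheris) rather than this naive loop.

```python
# Naive fuzz loop against a hypothetical parser function (illustrative only).
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical parser under test: expects b'key=value' records."""
    key, _, value = data.partition(b"=")
    return {key.decode(): value.decode()}

random.seed(0)  # reproducible run
for i in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
    try:
        parse_record(blob)
    except Exception as exc:  # any crash is a finding worth triaging
        print(f"input {i} caused {type(exc).__name__}: {blob!r}")
```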

Application developers perform application security testing as part of the software development process to ensure there are no security vulnerabilities in a new or updated version of a software application. A security audit can make sure the application is in compliance with a specific set of security criteria. After the application passes the audit, developers must ensure that only authorized users can access it. In penetration testing, a developer thinks like a cybercriminal and looks for ways to break into the application. Penetration testing may include social engineering or trying to fool users into allowing unauthorized access. Testers commonly administer both unauthenticated security scans and authenticated security scans (as logged-in users) to detect security vulnerabilities that may not show up in both states.

ISO 27001:2022 A 8.27 Secure system architecture and engineering principles

Principles for engineering secure systems must be established, documented, maintained, and applied to any information system implementation efforts. Secure software engineering principles exist both at a general level and specific to development platforms and coding languages. Wherever development is carried out, the selection and application of such principles should be considered, assessed, formally documented, and mandated. As with many controls, the auditor will want to see that the use of system engineering principles is considered against the risk levels identified, and will be looking for evidence to support the choices made.

Organizations apply security engineering principles primarily to new information systems under development or systems undergoing major upgrades. For legacy systems, organizations apply security engineering principles to system upgrades and modifications to the extent feasible, given the current state of hardware, software, and firmware within those systems. Security engineering principles include, for example: (i) developing layered protections; (ii) establishing sound security policy, architecture, and controls as the foundation for design; (iii) incorporating security requirements into the system development life cycle; (iv) delineating physical and logical security boundaries; (v) ensuring that system developers are trained in how to build secure software; (vi) tailoring security controls to meet organizational and operational needs; (vii) performing threat modeling to identify use cases, threat agents, attack vectors, and attack patterns, as well as the compensating controls and design patterns needed to mitigate risk; and (viii) reducing risk to acceptable levels, thus enabling informed risk management decisions.

Control

Principles for engineering secure systems should be established, documented, maintained and applied to any information system development activities.

Purpose

To ensure information systems are securely designed, implemented and operated within the development life cycle.

ISO 27002 Implementation Guidance

Security engineering principles should be established, documented and applied to information system engineering activities. Security should be designed into all architecture layers (business, data, applications and technology). New technology should be analysed for security risks and the design should be reviewed against known attack patterns. Secure engineering principles provide guidance on user authentication techniques, secure session control and data validation and sanitation. Secure system engineering principles should include analysis of:

  1. the full range of security controls required to protect information and systems against identified threats;
  2. the capabilities of security controls to prevent, detect or respond to security events;
  3. specific security controls required by particular business processes (e.g. encryption of sensitive information, integrity checking and digitally signing information);
  4. where and how security controls are to be applied (e.g. by integrating with a security architecture and the technical infrastructure);
  5. how individual security controls (manual and automated) work together to produce an integrated set of controls.

Security engineering principles should take account of:

  1. the need to integrate with a security architecture;
  2. technical security infrastructure [e.g. public key infrastructure (PKI), identity and access management (IAM), data leakage prevention and dynamic access management];
  3. capability of the organization to develop and support the chosen technology;
  4. cost, time and complexity of meeting security requirements;
  5. current good practices.

Secure system engineering should involve:

  1. the use of security architecture principles, such as “security by design”, “defense in depth”, “security by default”, “default deny”, “fail securely”, “distrust input from external applications”, “security in deployment”, “assume breach”, “least privilege”, “usability and manageability” and “least functionality”;
  2. a security-oriented design review to help identify information security vulnerabilities, ensure security controls are specified and meet security requirements;
  3. documentation and formal acknowledgement of security controls that do not fully meet requirements (e.g. due to overriding safety requirements);
  4. hardening of systems.

The organization should consider “zero trust” principles such as:

  1. assuming the organization’s information systems are already breached and thus not relying on network perimeter security alone;
  2. employing a “never trust and always verify” approach for access to information systems;
  3. ensuring that requests to information systems are encrypted end-to-end (see the sketch after this list);
  4. verifying each request to an information system as if it originated from an open, external network, even if these requests originated internal to the organization (i.e. not automatically trusting anything inside or outside its perimeters);
  5. using “least privilege” and dynamic access control techniques. This includes authenticating and authorizing requests for information or to systems based on contextual information such as authentication information, user identities, data about the user endpoint device, and data classification;
  6. always authenticating the requester and always validating authorization requests to information systems based on information including authentication information, user identities, data about the user endpoint device, and data classification, for example by enforcing strong authentication (e.g. multi-factor authentication).
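
As a sketch of items 2, 3 and 6 above, the following Python fragment (using the third-party requests package) sends each call with a short-lived bearer token over mutually authenticated TLS, so every request is verified rather than trusted by network location. The endpoint, token, and certificate paths are hypothetical placeholders.

```python
# Sketch of "never trust, always verify" request handling (illustrative).
# Requires the third-party "requests" package.
import requests

API_URL = "https://internal.example.com/api/records"  # hypothetical endpoint
TOKEN = "eyJ..."  # short-lived token issued per session or request (assumption)

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},  # every request carries credentials
    verify="/etc/pki/internal-ca.pem",  # pin the expected CA instead of trusting defaults
    cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # mutual TLS: client proves identity
    timeout=5,
)
resp.raise_for_status()  # treat any failed verification as an untrusted response
```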

The established security engineering principles should be applied, where applicable, to outsourced development of information systems through the contracts and other binding agreements between the organization and the supplier to whom the organization outsources. The organization should ensure that suppliers’ security engineering practices align with the organization’s needs. The security engineering principles and the established engineering procedures should be regularly reviewed to ensure that they are effectively contributing to enhanced standards of security within the engineering process. They should also be regularly reviewed to ensure that they remain up-to-date in terms of combating any new potential threats and in remaining applicable to advances in the technologies and solutions being applied.

Other information

Secure engineering principles can be applied to the design or configuration of a range of techniques, such as:

  • fault tolerance and other resilience techniques;
  • segregation (e.g. through virtualization or containerization);
  • tamper resistance.

Secure virtualization techniques can be used to prevent interference between applications running on the same physical device. If a virtual instance of an application is compromised by an attacker, only that instance is affected; the attack has no effect on any other application or data. Tamper resistance techniques can be used to detect tampering with information containers, whether physical (e.g. a burglar alarm) or logical (e.g. a data file). A characteristic of such techniques is that there is a record of the attempt to tamper with the container. In addition, the control can prevent the successful extraction of data through its destruction (e.g. device memory can be deleted).
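
A simple form of logical tamper evidence can be sketched with a hash chain over a data file’s records: each record’s digest incorporates the previous digest, so altering any earlier record invalidates every later link. The records below are illustrative.

```python
# Minimal sketch of tamper evidence via a SHA-256 hash chain (illustrative).
import hashlib

def chain(records: list[bytes]) -> list[str]:
    digest, links = b"", []
    for record in records:
        digest = hashlib.sha256(digest + record).digest()  # link depends on all prior records
        links.append(digest.hex())
    return links

records = [b"entry-1", b"entry-2", b"entry-3"]
original = chain(records)

records[0] = b"entry-1 (tampered)"
assert chain(records) != original  # tampering with record 0 changes every later link
```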

Organisations can mitigate security threats to information systems by creating secure system engineering principles that are applied to all phases of the information system life cycle. Organisations should maintain the security of information systems during the design, deployment, and operation stages by establishing and implementing secure system engineering principles that system engineers comply with.

Secure engineering is, in practice, how you apply security while developing your IT projects. To do that, you should take into account threats from natural disasters and humans; these may include earthquakes, tornadoes, floods, misuse, and malicious human behaviour (more examples can be found in a catalogue of threats and vulnerabilities). To manage those threats, high-level rules are defined for applying security: these are your secure engineering principles. For example, most projects deal with information, so one principle might be “Assure information protection in processing, transit, and storage.” Based on the principles, procedures are developed that define the activities in detail. For the example above, you would define, say, a backup procedure and clearly state that an incremental backup should be done every day and a full backup during the weekend. You would also define responsibilities and how to check whether the procedure is followed. It is important to know that principles apply to every phase of your development projects, and to all architectural layers of your final products (business, data, applications, and technology).

It is important that secure IT engineering procedures based on security engineering principles are defined, documented, and applied within the organization’s IT engineering activities. It is important to balance data security and accessibility across every architecture layer (including business, data, applications, and technology). New technology should be evaluated for security threats, and designs should be reviewed against documented attack patterns. To make certain that these principles and the developed engineering processes contribute effectively to improved security standards, they should be reviewed periodically. They should also be updated as needed to keep up with changes in technology and to make certain that they remain applicable to any new threats.

The established principles of security engineering should be applied, as appropriate, to outsourced information systems through contracts and other binding agreements between the organization and its suppliers. Suppliers must adhere to the same rigorous security engineering standards as the organization does. A secure engineering approach should be applied during the development of applications with input/output interfaces; using secure engineering techniques you can prevent unauthorized access to accounts, control secure access to accounts, validate and sanitise data, and remove debugging code from your systems. Organisations should embed security into all layers of information systems, including business processes, applications, and data architecture. Secure engineering principles should apply to all activities related to information systems and should be subject to regular review and updates taking into account emerging threats and attack patterns. In addition to information systems developed and operated internally, this also applies to information systems created by external service providers; organisations should therefore ensure that service providers’ practices and standards comply with their own secure engineering principles. Secure system engineering principles and the analysis behind them should cover:

  • Guidance on user authentication methods.
  • Guidance on secure session control.
  • Guidance on data sanitisation and validation procedures.
  • Comprehensive analysis of all security measures needed to protect information assets and systems against known threats.
  • Comprehensive analysis of the capabilities of security measures to prevent, detect, and respond to security threats.
  • Analyzing security measures applied to specific business activities such as encryption of information.
  • How security measures will be implemented and where. This may include the integration of a specific security control within technical infrastructure.
  • How different security measures work together and operate as a combined set of controls.

Organisations should consider the following zero-trust principles:

  • Starting with the assumption that the organisation’s systems are already compromised and the defined network perimeter security is no longer effective.
  • Adopting a “never trust and always verify” approach for providing access to information systems.
  • Providing assurance that requests made to information systems are protected with end-to-end encryption.
  • Implementing verification mechanism that assumes access requests to information systems are made from external, open networks.
  • Putting in place “least privilege” and dynamic access control techniques. This covers the authentication and authorization of access requests for sensitive information and information systems, considering contextual information such as user identities and information classification.
  • Always authenticating the identity of the requester and verifying authorization requests to access information systems. These authentication and verification procedures should take into account authentication information and user identities, and should enforce strong (e.g. multi-factor) authentication.

System engineering should cover:

  • Adopting and implementing secure architecture principles, including “security by design”, “defence in depth”, “fail securely”, “distrust input from external applications”, “assume breach”, “least privilege”, “usability and manageability” and “least functionality”.
  • Adopting and applying a security-focused design review process to detect information security vulnerabilities and guaranteeing that security measures are identified and satisfy security requirements.
  • Documenting and acknowledging the security measures that do not fulfil requirements.
  • System hardening.

Organisations should consider the following when establishing secure system engineering principles:

  • The need to integrate controls with specific security architecture.
  • Existing technical security infrastructure, including public key infrastructure, identity management, and data leakage prevention.
  • Whether the organisation is capable of building and maintaining the selected technology.
  • Cost and time required to satisfy security requirements and complexity of such requirements.
  • Existing best practices.

Organisations can put secure engineering principles into practice when configuring the following:

  • Fault tolerance and similar resilience methods.
  • Segregation techniques such as virtualization.
  • Tamper resistance.
  • Use of secure virtualization technology can help eliminate the risk of interference between two applications running on the same device.
  • Use of tamper resistance systems can help identify both the logical and physical tampering with information systems and prevent unauthorized extraction of information.

To aid in designing secure information systems, the National Institute of Standards and Technology (NIST) compiled a set of engineering principles for system security. These principles provide a foundation upon which a more consistent and structured approach to the design, development, and implementation of IT security capabilities can be constructed. While the primary focus of these principles is the implementation of technical controls, they highlight the fact that, to be effective, a system security design should also consider non-technical issues, such as policy, operational procedures, and user awareness and training. Ideally, the principles presented here would be used from the onset of a program and then employed throughout the system’s life cycle. However, these principles are also helpful in affirming or confirming the security posture of already deployed information systems. The principles are short and concise and can be used by all organizations to develop their system life-cycle policies. The principles presented herein can be used by:

  • Users when developing and evaluating functional requirements, or when operating information systems within their organizations.
  • System Engineers and Architects when designing, implementing, or modifying an information system.
  • IT Specialists during all phases of the system life-cycle.
  • Program Managers and Information Security Officers to ensure adequate security measures have been considered for all phases of the system life-cycle.

The application of security engineering principles is primarily targeted at new information systems under development or systems undergoing major upgrades and should be integrated into the system development life cycle. For legacy information systems, organizations should apply security engineering principles to system upgrades and modifications, to the extent feasible, given the current state of the hardware, software, and firmware components within the system.

Advertisements

SECURITY FOUNDATION

Principle 1: Establish a sound security policy as the “foundation” for design.
A security policy is an important document to develop while designing an information system. The security policy begins with the organization’s basic commitment to information security formulated as a general policy statement. The policy is then applied to all aspects of the system design or security solution. The policy identifies security goals (e.g., confidentiality, integrity, availability, accountability, and assurance) the system should support and these goals guide the procedures, standards and controls used in the IT security architecture design. The policy also should require definition of critical assets, the perceived threat, and security-related roles and responsibilities.

Principle 2: Treat security as an integral part of the overall system design.
Security must be considered in information system design and should be integrated fully into the system life-cycle. Experience has shown it to be both difficult and costly to introduce security measures properly and successfully after a system has been developed, so security should be implemented in the design stage of all new information systems, and where possible, in the modification and continuing operation of all legacy systems. This includes establishing security policies, understanding the resulting security requirements, participating in the evaluation of security products, and in the engineering, design, implementation, and disposal of the system.

Principle 3: Clearly delineate the physical and logical security boundaries governed by associated security policies.
Information technology exists in physical and logical locations, and boundaries exist between these locations. An understanding of what is to be protected from external factors can help ensure adequate protective measures are applied where they will be most effective. Sometimes a boundary is defined by people, information, and information technology associated with one physical location. But this ignores the reality that, within a single location, many different security policies may be in place, some covering publicly accessible information and some covering sensitive or confidential information. Other times a boundary is defined by a security policy that governs a specific set of information and information technology that can cross physical boundaries. Further complicating the matter is that, many times, a single machine or server may house both public-access and sensitive information. As a result, multiple security policies may apply to a single machine or within a single system. Therefore, when developing an information system, security boundaries must be considered and communicated in relevant system documentation and security policies.

Principle 4: Ensure that developers are trained in how to develop secure software.
Ensure that developers are adequately trained in the design, development, configuration control, integration, and testing of secure software before developing the system.

RISK BASED

Principle 5: Reduce risk to an acceptable level.
Risk is defined as the combination of (1) the likelihood that a particular threat source will exercise (intentionally exploit or unintentionally trigger) a particular information system vulnerability and (2) the resulting adverse impact on organizational operations, assets, or individuals should this occur. Recognize that the elimination of all risk is not cost-effective. A cost-benefit analysis should be conducted for each proposed control. In some cases, the benefits of a more secure system may not justify the direct and indirect costs. Benefits include more than just prevention of monetary loss; for example, controls may be essential for maintaining public trust and confidence. Direct costs include the cost of purchasing and installing a given technology; indirect costs include decreased system performance and additional training. The goal is to enhance mission/business capabilities by mitigating mission/business risk to an acceptable level. (Related Principle: 6)
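
The cost-benefit analysis described above is often approximated quantitatively; one common approach is annualized loss expectancy (likelihood expressed as an annual rate of occurrence multiplied by a single loss expectancy). The Python sketch below compares a proposed control’s cost against the risk reduction it buys; all figures are hypothetical, and this simplification is an assumption rather than part of the NIST principle itself.

```python
# Illustrative cost-benefit check for a proposed control (hypothetical figures).
def ale(annual_rate: float, single_loss: float) -> float:
    """Annualized loss expectancy: likelihood (events/year) x impact per event."""
    return annual_rate * single_loss

ale_before = ale(annual_rate=0.5, single_loss=200_000)  # without the control
ale_after = ale(annual_rate=0.1, single_loss=200_000)   # with the control in place
control_cost = 30_000  # purchase, installation, training (direct and indirect costs)

net_benefit = (ale_before - ale_after) - control_cost
print(f"Net annual benefit of the control: {net_benefit:,.0f}")  # -> 50,000
```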

Principle 6: Assume that external systems are insecure.
The term information domain arises from the practice of partitioning information resources according to access control, need, and levels of protection required. Organizations implement specific measures to enforce this partitioning and to provide for the deliberate flow of authorized information between information domains. The boundary of an information domain represents the security perimeter for that domain. An external domain is one that is not under your control. In general, external systems should be considered insecure. Until an external domain has been deemed “trusted,” system engineers, architects, and IT specialists should presume the security measures of an external system are different than those of a trusted internal system and design the system security features accordingly.

Principle 7: Identify potential trade-offs between reducing risk and increased costs and decrease in other aspects of operational effectiveness.
To meet stated security requirements, a systems designer, architect, or security practitioner will need to identify and address all competing operational needs. It may be necessary to modify or adjust security goals due to other operational requirements. In modifying or adjusting security goals, an acceptance of greater risk and cost may be inevitable. By identifying and addressing these trade-offs as early as possible, decision makers will have greater latitude and be able to achieve more effective systems. (Related: Principle 4)

Principle 8: Implement tailored system security measures to meet organizational security goals.
In general, IT security measures are tailored according to an organization’s unique needs. While numerous factors, such as overriding mission requirements and guidance, are to be considered, the fundamental issue is the protection of the mission or business from IT security-related, negative impacts. Because IT security needs are not uniform, system designers and security practitioners should consider the level of trust when connecting to other external networks and internal sub-domains. Recognizing the uniqueness of each system allows a layered security strategy to be used – implementing lower assurance solutions with lower costs to protect less critical systems and higher assurance solutions only at the most critical areas.

Principle 9: Protect information while being processed, in transit, and in storage.
The risk of unauthorized modification or destruction of data, disclosure of information, and denial of access to data while in transit should be considered along with the risks associated with data that is in storage or being processed. Therefore, system engineers, architects, and IT specialists should implement security measures to preserve, as needed, the integrity, confidentiality, and availability of data, including application software, while the information is being processed, in transit, and in storage.

Principle 10: Consider custom products to achieve adequate security.
Designers should recognize that in some instances it may not be possible to meet security goals with systems constructed entirely from commercial off-the-shelf (COTS) products. In such instances, it may be necessary to augment COTS with non-COTS mechanisms.

Principle 11: Protect against all likely classes of “attacks.”
In designing the security controls, multiple classes of “attacks” need to be considered. Those classes that result in unacceptable risk need to be mitigated. Examples of “attack” classes are: passive monitoring, active network attacks, exploitation by insiders, attacks requiring physical access or proximity, and the insertion of back doors and malicious code during software development and/or distribution.

EASY TO USE

Principle 12: Where possible, base security on open standards for portability and interoperability.
Most organizations depend significantly on distributed information systems to perform their mission or business. These systems distribute information both across their own organization and to other organizations. For security capabilities to be effective in such environments, security program designers should make every effort to incorporate interoperability and portability into all security measures, including hardware and software, and implementation practices.

Principle 13: Use common language in developing security requirements.
The use of a common language when developing security requirements permits organizations to evaluate and compare security products and features evaluated in a common test environment. When a “common” evaluation process is based upon common requirements or criteria, a level of confidence can be established that ensures product security functions conform to an organization’s security requirements. The Common Criteria (CC; available at http://www.commoncriteriaportal.org/) provides a source of common expressions for common needs and supports a common assessment methodology. Use of CC “protection profiles” and “security targets” greatly aids the development of products (and to some extent systems) that have IT security functions. The rigor and repeatability of the CC methodology provides for thorough definition of user security needs. Security targets provide system integrators with key information needed in the procurement of components and implementation of secure IT systems.

Principle 14: Design security to allow for regular adoption of new technology, including a secure and logical technology upgrade process.
As mission and business processes and the threat environment change, security requirements and technical protection methods must be updated. IT-related risks to the mission/business vary over time and undergo periodic assessment. Periodic assessment should be performed to enable system designers and managers to make informed risk management decisions on whether to accept or mitigate identified risks with changes or updates to the security capability. The lack of timely identification through consistent security solution re-evaluation and correction of evolving, applicable IT vulnerabilities results in false trust and increased risk. Each security mechanism should be able to support migration to new technology or upgrade of new features without requiring an entire system redesign. The security design should be modular so that individual parts of the security design can be upgraded without the requirement to modify the entire system.

Principle 15: Strive for operational ease of use.
The more difficult it is to maintain and operate a security control the less effective that control is likely to be. Therefore, security controls should be designed to be consistent with the concept of operations and with ease-of-use as an important consideration. The experience and expertise of administrators and users should be appropriate and proportional to the operation of the security control. An organization must invest the resources necessary to ensure system administrators and users are properly trained. Moreover, administrator and user training costs along with the life-cycle operational costs should be considered when determining the cost-effectiveness of the security control.

INCREASE RESILIENCE

Principle 16: Implement layered security (ensure no single point of vulnerability).
Security designs should consider a layered approach to address or protect against a specific threat or to reduce vulnerability. For example, the use of a packet-filtering router in conjunction with an application gateway and an intrusion detection system combine to increase the work-factor an attacker must expend to successfully attack the system. Add good password controls and adequate user training to improve the system’s security posture even more. By using multiple, overlapping protection approaches, the failure or circumvention of any individual protection approach will not leave the system unprotected. Through user training and awareness, well-crafted policies and procedures, and redundancy of protection mechanisms, layered protections enable effective protection of information technology for the purpose of achieving mission objectives. The need for layered protections is especially important when COTS products are used. Practical experience has shown that the current state-of-the-art for security quality in COTS products does not provide a high degree of protection against sophisticated attacks. It is possible to help mitigate this situation by placing several controls in series, requiring additional work by attackers to accomplish their goals.

Principle 17: Design and operate an IT system to limit damage and to be resilient in response.
Information systems should be resistant to attack, should limit damage, and should recover rapidly when attacks do occur. The principle suggested here recognizes the need for adequate protection technologies at all levels to ensure that any potential cyber attack will be countered effectively. There are vulnerabilities that cannot be fixed, those that have not yet been fixed, those that are not known, and those that could be fixed but are not (e.g., risky services allowed through firewalls) to allow increased operational capabilities. In addition to achieving a secure initial state, secure systems should have a well-defined status after failure, either to a secure failure state or via a recovery procedure to a known secure state. Organizations should establish detect and respond capabilities, manage single points of failure in their systems, and implement a reporting and response strategy. (Related: Principle 14)

Principle 18: Provide assurance that the system is, and continues to be, resilient in the face of expected threats.
Assurance is the grounds for confidence that a system meets its security expectations. These expectations can typically be summarized as providing sufficient resistance to both direct penetration and attempts to circumvent security controls. Good understanding of the threat environment, evaluation of requirement sets, hardware and software engineering disciplines, and product and system evaluations are primary measures used to achieve assurance. Additionally, the documentation of the specific and evolving threats is important in making timely adjustments in applied security and strategically supporting incremental security enhancements.

Principle 19: Limit or contain vulnerabilities.
Design systems to limit or contain vulnerabilities. If a vulnerability does exist, damage can be limited or contained, allowing other information system elements to function properly. Limiting and containing insecurities also helps to focus response and reconstitution efforts to information system areas most in need. (Related: Principle 10)

Principle 20: Isolate public access systems from mission critical resources (e.g., data, processes, etc.).
While the trend toward shared infrastructure has considerable merit in many cases, it is not universally applicable. In cases where the sensitivity or criticality of the information is high, organizations may want to limit the number of systems on which that data is stored and isolate them, either physically or logically. Physical isolation may include ensuring that no physical connection exists between an organization’s public access information resources and an organization’s critical information. When implementing logical isolation solutions, layers of security services and mechanisms should be established between public systems and secure systems responsible for protecting mission critical resources. Security layers may include using network architecture designs such as demilitarized zones and screened subnets. Finally, system designers and administrators should enforce organizational security policies and procedures regarding use of public access systems.

Principle 21: Use boundary mechanisms to separate computing systems and network infrastructures.
To control the flow of information and access across network boundaries in computing and communications infrastructures, and to enforce the proper separation of user groups, a suite of access control devices and accompanying access control policies should be used. Determine the following for communications across network boundaries:

  • What external interfaces are required
  • Whether information is pushed or pulled
  • What ports, protocols, and network services are required
  • What requirements exist for system information exchanges; for example, trust relationships, database replication services, and domain name resolution processes

Principle 22: Design and implement audit mechanisms to detect unauthorized use and to support incident investigations.
Organizations should monitor, record, and periodically review audit logs to identify unauthorized use and to ensure system resources are functioning properly. In some cases, organizations may be required to disclose information obtained through auditing mechanisms to appropriate third parties, including law enforcement authorities. Many organizations have implemented consent to monitor policies which state that evidence of unauthorized use (e.g., audit trails) may be used to support administrative or criminal investigations.
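
A periodic audit log review of the kind described above can be sketched in a few lines of Python. The log format and alert threshold here are hypothetical assumptions for illustration.

```python
# Sketch of periodic audit log review: flag repeated failed logins per user.
from collections import Counter

log_lines = [  # hypothetical "timestamp user action" records
    "2024-05-01T10:00:01 alice LOGIN_FAILED",
    "2024-05-01T10:00:05 alice LOGIN_FAILED",
    "2024-05-01T10:00:09 alice LOGIN_FAILED",
    "2024-05-01T10:01:00 bob LOGIN_OK",
]

failures = Counter(
    line.split()[1] for line in log_lines if line.endswith("LOGIN_FAILED")
)
for user, count in failures.items():
    if count >= 3:  # illustrative escalation threshold
        print(f"possible unauthorized use: {user} had {count} failed logins")
```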

Principle 23: Develop and exercise contingency or disaster recovery procedures to ensure appropriate availability.
Continuity of operations plans or disaster recovery procedures address continuance of an organization’s operation in the event of a disaster or prolonged service interruption that affects the organization’s mission. Such plans should address an emergency response phase, a recovery phase, and a return to normal operation phase. Personnel responsibilities during an incident and available resources should be identified. In reality, contingency and disaster recovery plans do not address every possible scenario or assumption. Rather, focus on the events most likely to occur and identify an acceptable method of recovery. Periodically, the plans and procedures should be exercised to ensure that they are effective and well understood.

REDUCE VULNERABILITIES

Principle 24: Strive for simplicity.
The more complex the mechanism, the more likely it may possess exploitable flaws. Simple mechanisms tend to have fewer exploitable flaws and require less maintenance. Further, because configuration management issues are simplified, updating or replacing a simple mechanism becomes a less intensive process.

Principle 25: Minimize the system elements to be trusted.
Security measures include people, operations, and technology. Where technology is used, hardware, firmware, and software should be designed and implemented so that a minimum number of system elements need to be trusted in order to maintain protection. Further, to ensure cost-effective and timely certification of system security features, it is important to minimize the amount of software and hardware expected to provide the most secure functions for the system.

Principle 26: Implement least privilege.
The concept of limiting access, or “least privilege,” is simply to provide no more authorizations than necessary to perform required functions. This is perhaps most often applied in the administration of the system. Its goal is to reduce risk by limiting the number of people with access to critical system security controls (i.e., controlling who is allowed to enable or disable system security features or change the privileges of users or programs). Best practice suggests it is better to have several administrators with limited access to security resources rather than one person with “super user” permissions. Consideration should be given to implementing role-based access controls for various aspects of system use, not only administration. The system security policy can identify and define the various roles of users or processes. Each role is assigned those permissions needed to perform its functions. Each permission specifies a permitted access to a particular resource (such as “read” and “write” access to a specified file or directory, “connect” access to a given host and port, etc.). Unless permission is granted explicitly, the user or process should not be able to access the protected resource. Additionally, identify the roles/responsibilities that, for security purposes, should remain separate (this is commonly termed “separation of duties”).
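
The role-based, default-deny model described above can be sketched as a simple permission lookup in which access is refused unless explicitly granted. The roles, permissions, and resources below are hypothetical.

```python
# Minimal sketch of role-based least privilege with default deny (illustrative).
ROLE_PERMISSIONS = {
    "auditor": {("read", "/var/log/app.log")},
    "operator": {("read", "/var/log/app.log"), ("write", "/etc/app/config")},
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    # Default deny: anything not explicitly granted is refused.
    return (action, resource) in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("auditor", "read", "/var/log/app.log")
assert not is_allowed("auditor", "write", "/etc/app/config")   # not granted
assert not is_allowed("unknown", "read", "/var/log/app.log")   # unknown role denied
```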

Principle 27: Do not implement unnecessary security mechanisms.
Every security mechanism should support a security service or set of services, and every security service should support one or more security goals. Extra measures should not be implemented if they do not support a recognized service or security goal. Such mechanisms could add unneeded complexity to the system and are potential sources of additional vulnerabilities. An example is file encryption supporting the access control service that in turn supports the goals of confidentiality and integrity by preventing unauthorized file access. If file encryption is a necessary part of accomplishing the goals, then the mechanism is appropriate. However, if these security goals are adequately supported without inclusion of file encryption, then that mechanism would be an unneeded system complexity.

Principle 28: Ensure proper security in the shutdown or disposal of a system.
Although a system may be powered down, critical information still resides on the system and could be retrieved by an unauthorized user or organization. Access to critical information systems must be controlled at all times. At the end of a system’s life-cycle, system designers should develop procedures to dispose of an information system’s assets in a proper and secure fashion. Procedures must be implemented to ensure system hard drives, volatile memory, and other media are purged to an acceptable level and do not retain residual information.

Principle 29: Identify and prevent common errors and vulnerabilities.
Many errors reoccur with disturbing regularity – errors such as buffer overflows, race conditions, format string errors, failing to check input for validity, and programs being given excessive privileges. Learning from the past will improve future results.

DESIGN WITH THE NETWORK IN MIND

Principle 30: Implement security through a combination of measures distributed physically and logically.
Often, a single security service is achieved by cooperating elements existing on separate machines. For example, system authentication is typically accomplished using elements ranging from the user interface on a workstation through the networking elements to an application on an authentication server. It is important to associate all elements with the security service they provide. These components are likely to be shared across systems to achieve security as infrastructure resources come under more senior budget and operational control.

Principle 31: Formulate security measures to address multiple overlapping information domains.
An information domain is a set of active entities (person, process, or devices) and their data objects. A single information domain may be subject to multiple security policies. A single security policy may span multiple information domains. An efficient and cost effective security capability should be able to enforce multiple security policies to protect multiple information domains without the need to separate physically the information and respective information systems processing the data. This principle argues for moving away from the traditional practice of creating separate LANs and infrastructures for various sensitivity levels (e.g., security classification or business function such as proposal development) and moving toward solutions that enable the use of common, shared, infrastructures with appropriate protections at the operating system, application, and workstation level. Moreover, to accomplish missions and protect critical functions, organizations have many types of information to safeguard. With this principle in mind, system engineers, architects, and IT specialists should develop a security capability that allows organizations with multiple levels of information sensitivity to achieve the basic security goals in an efficient manner.

Principle 32: Authenticate users and processes to ensure appropriate access control decisions both within and across domains.
Authentication is the process by which a system establishes the validity of a transmission or message, or verifies the eligibility of an individual, process, or machine to carry out a desired action, thereby ensuring that security is not compromised by an untrusted source. Adequate authentication is essential to implement security policies and achieve security goals. Additionally, the level of trust is always an issue when dealing with cross-domain interactions. The solution is to establish an authentication policy and apply it to cross-domain interactions as required. Note: a user may have rights to use more than one name in multiple domains. Further, rights may differ among the domains, potentially leading to security policy violations.

Principle 33: Use unique identities to ensure accountability.
An identity may represent an actual user or a process with its own identity, e.g., a program making a remote access. Unique identities are a required element in order to be able to:

  • Maintain accountability and traceability of a user or process
  • Assign specific rights to an individual user or process
  • Provide for non-repudiation
  • Enforce access control decisions
  • Establish the identity of a peer in a secure communications path
  • Prevent unauthorized users from masquerading as an authorized user

ISO 27001:2022 A 8.30 Outsourced development


Outsourcing and offshoring are common in software development and testing, but they pose two information security risks: the sourcing partner may obtain sensitive data it should not have, and the partner’s development and testing processes may not address information security needs properly. Outsourced development and testing therefore require supervision; the work can be outsourced, but the responsibility stays with the organisation. In general, ISO 27001 requires suppliers to be managed with regard to information security, and any supplier management process can enforce this. The controls are not specific to software development and testing, though the checks may differ slightly.

This control requires organisations to supervise and monitor all outsourcing activities and to ensure that the outsourced development process satisfies information security requirements. Where system and software development is outsourced, wholly or partly, to external parties, the security requirements must be specified in a contract or attached agreement, which is why correct non-disclosure and confidentiality provisions are important. It is also important to supervise and monitor development to gain assurance that organisational standards and requirements for security within systems are achieved. Depending on how embedded outsource partners are within the organisation, especially if their staff are located on organisational premises, it is important to include their staff in security awareness training, awareness programmes, and communications. It is critical to ensure that the internal security practices of the outsource partner, e.g. staff vetting, at least meet the assurance requirements relevant to the risk levels of the developments they will be working on. The auditor will be looking to see that, where outsourcing is used, there is evidence of due diligence before, during, and after the engagement of the outsource partner, including consideration of information security provisions.


Control

The organization should direct, monitor and review the activities related to outsourced system development.

Purpose

To ensure information security measures required by the organization are implemented in outsourced system development.

ISO 27002 Implementation Guidance

Where system development is outsourced, the organization should communicate and agree requirements and expectations, and continually monitor and review whether the delivery of outsourced work meets these expectations. The following points should be considered across the organization’s entire external supply chain:
a) licensing agreements, code ownership and intellectual property rights related to the outsourced content;
b) contractual requirements for secure design, coding and testing practices;
c) provision of the threat model to be considered by external developers;
d) acceptance testing for the quality and accuracy of the deliverables;
e) provision of evidence that minimum acceptable levels of security and privacy capabilities are established (e.g. assurance reports);
f) provision of evidence that sufficient testing has been applied to guard against the presence of malicious content (both intentional and unintentional) upon delivery;
g) provision of evidence that sufficient testing has been applied to guard against the presence of known vulnerabilities;
h) escrow agreements for the software source code (e.g. if the supplier goes out of business);
i) contractual right to audit development processes and controls;
j) security requirements for the development environment;
k) taking into consideration applicable legislation (e.g. on protection of personal data).

Organisations should continuously monitor and verify that the delivery of outsourced development work satisfies the information security requirements imposed on the external service provider. This can be achieved through the following measures:

  1. Entering into agreements, including licensing agreements, that address ownership of code and intellectual property rights.
  2. Imposing appropriate contractual requirements for secure design and coding.
  3. Establishing a threat model to be adopted by third-party developers.
  4. Carrying out an acceptance testing procedure to ensure the quality and accuracy of delivered work.
  5. Obtaining evidence that minimally required privacy and security capabilities are achieved, for example via assurance reports.
  6. Keeping evidence of how sufficient testing has been performed to protect the delivered IT system or software against malicious content.
  7. Keeping evidence of how sufficient testing has been applied to protect against identified vulnerabilities.
  8. Putting in place escrow agreements that cover the software source code. For example, it may address what will happen if the external supplier goes out of business.
  9. The agreement with the supplier should include the organisation’s right to audit development processes and controls.
  10. Establishing and implementing security requirements for the development environment.
  11. Organisations should also consider applicable laws, statutes, and regulations.

When outsourcing, the organization should conduct suitable due diligence in selecting an appropriate service provider and in monitoring its ongoing performance. It is important that entities exercise due care, skill, and diligence in the selection of service providers. The organization should be satisfied that the service provider has the ability and capacity to undertake the outsourced task effectively at all times, and should establish appropriate processes and procedures for monitoring the provider’s performance on an ongoing basis to ensure that it retains that ability and capacity. In determining the appropriate level of monitoring, the organization should consider the criticality of the outsourced task to the ongoing business and to its regulatory obligations.

The organization should enter into a legally binding written contract with each service provider, the nature and detail of which should be appropriate to the criticality of the outsourced task to the business. A legally binding written contract is the critical element underpinning the relationship between the organization and the service provider. Contractual provisions can reduce the risks of non-performance or aid the resolution of disagreements about the scope, nature, and quality of the service to be provided, and a written contract will assist the monitoring of the outsourced tasks by the organization and/or by regulators. The level of detail of the written contract should reflect the level of monitoring, assessment, inspection, and auditing required, as well as the risks, size, and complexity of the outsourced services involved. Where different regulatory requirements apply to the organization and the service provider due to the cross-border nature of the service, the service provider should recognize and accommodate the requirements of each jurisdiction in which it operates, as appropriate, and act in a manner consistent with the organization’s regulatory obligations. The organization should include written provisions relating to the termination of outsourced tasks in its contracts with service providers and ensure that it maintains appropriate exit strategies.


Steps to take when outsourcing development

Do your due diligence
It is important to conduct due diligence on a vendor before entering into an agreement with them. The scope of such due diligence should include the vendor’s reputation and any past breaches. You should also investigate the vendor’s internal risk measures, its ability to safeguard your IP, and what its response would be to a data breach within its organization. If you are hiring a vendor overseas, you should also consider how your IP will be protected by the laws of that country and the legal remedies available to you for the breach of your IP rights.

Sign Non-Disclosure Agreement (NDA)
Before hiring a vendor, or after hiring a vendor but before sharing any confidential information, you should enter into a non-disclosure agreement (NDA) with the vendor. As the name implies, NDAs place an obligation on a party or parties not to disclose certain information, including software development or technical data, to third parties. The NDA should be wide enough to cover the scope of the vendor’s services, but also narrow enough to define confidential information and what amounts to a breach. The NDA should also bind the vendor’s employees or subcontractors who would be working on the task. The non-disclosure obligations in NDAs would typically outlive your business relationship with the vendor. This may be in perpetuity – if possible – or for a number of years after the completion of the project. Therefore, at the end of your project, you should reiterate the vendor’s obligations to not disclose confidential information.

Use the applicable legal framework
The legal framework and the measures available to protect an organization’s rights differ from country to country. For example, European Union countries are required to comply with EU regulations, while companies in the U.S. are required to comply with U.S. federal and state laws and regulations. It is important to pay attention to the legal framework of the country within which the vendor operates and to know how your rights will be treated. Will your intellectual property rights be recognised by the laws of that country? Will you be able to enforce your rights in the event of a breach within that country? It is advisable to seek legal counsel before executing an outsourcing agreement.

Draft a comprehensive Master Service Agreement
A Master Service Agreement is an agreement that documents the terms of future contracts between the parties. Also known as a Framework Agreement, it states the terms of service delivery, work standards, payment terms, rights and liabilities, confidentiality, ownership of intellectual property rights to the final product, and data protection mechanisms. It is important to pay attention to ownership of the IP rights to the work product. Vendors will usually prefer to retain ownership so that they can reuse the deliverables in future projects. The contract should clearly state the creators, authors, and owners of the work product. It should identify your firm as the owner of the work product and preserve your right to use, assign, and modify the work product. The Master Service Agreement should also include dispute resolution mechanisms, in case a party breaches the agreement. You should anticipate how you would enforce your intellectual property rights under the agreement and identify the intellectual property offices you may contact where there is a violation of your IP rights. Preferably, you should state your jurisdiction as the applicable governing law and venue for the agreement, to ensure that disputes that arise from it are resolved within your jurisdiction.

Inquire about the processes of a potential partner
Verify you are working with a firm that follows correct procedures. The correct practices are ultimately what will safeguard your work, rights and information. The questions below provide an excellent starting point.

  • What contracts do they have in place with their workers and consultants?
  • Is any of their work subcontracted? If so, how do they safeguard their intellectual property?
  • Are they utilizing appropriate project management tools?
  • Where do they keep their servers and source code? Is there a backup support mechanism in place if something occurs at a local office?
  • How does their team interact and exchange documents?
  • How do they ensure that data and documents are removed from the possession of departing or dismissed employees?
  • Do they allow employees to use personal tools and email, or do they require employees to use only company-authorized resources?
  • What is their laptop and internet access security policy?
  • Do they have procedures in place for remote employees?

Limit server and data access
Limiting server and data access is another means of protecting your intellectual property when outsourcing to third parties. You should ensure that data is stored on your servers; at no point should your data reside anywhere other than on your cloud. Allow the vendors to work remotely via your cloud services, where you can closely watch everything they do and have documented proof in the event of a security or data breach. Access to your server, API, and data should also be limited to only what is necessary to complete the outsourced task. If the task requires access to all or a core part of your intellectual property, consider executing some of the task in-house or asking the in-house team to integrate the developer’s work product into the software.

Transition protocols should also be discussed before the project commences. There have been cases of vendors refusing to transfer source code to their clients when disputes arise or the vendor is moving to another firm. To forestall such occurrences, all applications should be built on your firm’s servers right from the start of the project, and source code should be stored within your firm’s account.

Key questions to ask an outsourcing vendor

  • How do you protect the confidentiality of information?
    Before you get down to discussing your ideas, a trustworthy contractor will offer to sign a Confidentiality Agreement (or Non-Disclosure Agreement). Take note of the agreement’s period, the precise categories of information that are covered and excluded from it, and whether the parties subject to the agreement are properly stated and represented.
  • How will my data be protected?
    Determine whether the software developer has the means and capacity to safeguard your intellectual property from unauthorized use, loss, or theft. At the very least, they should guarantee secure connections, two-factor authentication, a robust password policy with regular password updates, a well-configured VPN tunnel, firewalls, and disk encryption. Do they safeguard home routers for remote work as well? What antivirus do they use, and how frequently do they check for system updates?
  • How will you ensure effective IP rights transfer?
    Enquire about how the software development firm treats your collaboration and the ownership of the software product that results. This will help you avoid being forced to deal with a company that wants to create its own products at your expense.

ISO 27001:2022 A 8.33 Test information


Test information must be selected carefully, protected, and controlled. It should ideally be created in a generic form with no relation to live system information. However, it is recognised that live data must often be used to perform accurate testing. Where live data is used for testing, it should be: anonymised as far as possible; carefully selected and secured for the period of testing; and securely deleted when testing is complete. Use of live data must be pre-authorised, logged, and monitored. The auditor will expect to see robust procedures in place to protect data being used in test environments, especially if this is wholly or partly live data. While non-production test environments such as staging environments are essential to building high-quality software products and applications free of bugs or errors, the use of real data and the lack of security measures in these environments expose information assets to heightened risks. For example, developers may use easy-to-remember credentials (e.g. “admin” for both username and password) for testing environments in which sensitive data assets are stored. Cyber attackers can exploit this to gain easy access to test environments and steal sensitive information assets. Therefore, organisations should put appropriate controls and procedures in place to protect real-world data used for testing.

This control has two aims:

  • to protect and maintain the confidentiality of information used in the testing environment;
  • to ensure the selection and use of test information that will yield reliable results.

Control

Test information should be appropriately selected, protected and managed.

Purpose

To ensure relevance of testing and protection of operational information used for testing.

ISO 27002 Implementation Guidance

Test information should be selected to ensure the reliability of test results and the confidentiality of the relevant operational information. Sensitive information (including personally identifiable information) should not be copied into the development and testing environments. The following guidelines should be applied to protect the copies of operational information, when used for testing purposes, whether the test environment is built in-house or on a cloud service:
a) applying the same access control procedures to test environments as those applied to operational environments;
b) having a separate authorization each time operational information is copied to a test environment;
c) logging the copying and use of operational information to provide an audit trail;
d) protecting sensitive information by removal or masking if used for testing;
e) properly deleting operational information from a test environment immediately after the testing is complete to prevent unauthorized use of test information.
Test information should be securely stored (to prevent tampering, which can otherwise lead to invalid results) and only used for testing purposes.
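
As an illustration of guidelines b) to d), the following minimal Python sketch masks sensitive fields when operational records are copied into a test environment and writes an audit-trail entry recording who authorized the copy. The field names, mask rule, and ticket reference are illustrative assumptions, not requirements of the standard.

import datetime
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("test-data-audit")

SENSITIVE_FIELDS = {"name", "email", "phone"}  # assumption: the known PII columns

def mask(value: str) -> str:
    # Replace a sensitive value with a stable, non-reversible token.
    return "masked-" + hashlib.sha256(value.encode()).hexdigest()[:8]

def copy_to_test(records: list[dict], authorized_by: str, ticket: str) -> list[dict]:
    # Mask sensitive fields (guideline d) and log the copy for the audit
    # trail (guideline c), citing the separate authorization (guideline b).
    masked = [
        {k: (mask(str(v)) if k in SENSITIVE_FIELDS else v) for k, v in r.items()}
        for r in records
    ]
    audit_log.info(json.dumps({
        "event": "operational_data_copied_to_test",
        "authorized_by": authorized_by,
        "ticket": ticket,
        "records": len(masked),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }))
    return masked

if __name__ == "__main__":
    prod = [{"id": 1, "name": "Alice Jones", "email": "alice@example.com", "balance": 120}]
    print(copy_to_test(prod, authorized_by="j.smith", ticket="CHG-1042"))

Hashing rather than random replacement keeps masked values stable across copies, so referential integrity between related records is preserved in the test set.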

Other information

System and acceptance testing can require substantial volumes of test information that are as close as possible to operational information.


Data used in testing environments such as quality assurance, test, and development must be protected against unauthorized access. For example, test environments may be firewalled and restricted to campus systems, and accounts may be disabled so that only a subset of accounts can be used for testing. Copying data between production and test environments should be approved, and where possible the data used for testing should not contain personally identifiable information.

Generating non-meaningful test data for performance testing is not a difficult exercise, but generating meaningful data that looks and behaves like real production data for functional testing is the challenge. Meaningful data contains all the characteristics of production data, such as format, context, and referential integrity, but is anonymized for data privacy compliance. While developing an application, developers need to make sure they are testing it under conditions that closely simulate a production environment. Most tests rely on sample data. If the data is manually entered into a test environment, it cannot match the volume and variety of data that the application would normally accumulate in production; behavior may differ because data inserted into the test database does not match real-world usage, possibly leaving significant bugs undiscovered. Dev/test managers, application owners, and others know that simulated data fails to effectively support development, and manual scripting cannot keep up with the demand for fast timelines between application development and production.

Building a test database with meaningful, protected test data allows the application owner to see and assess how the application will perform once it is released. Without meaningful test data in the test environment, it is impossible to predict how the application will behave after release. Organizations testing with non-production data want data that looks real, to understand how real data would perform in their application.

  1. Choosing the right data set type
    Choosing your test data is a huge decision; choose the wrong data set type and you could land your business in hot water, particularly if you deal with sensitive data. In any testing environment, production data would be the best choice but this is no longer acceptable. Using production data is too great a risk for your business and you could be held liable for fines and penalties if that data was lost or landed in the wrong hands. With that in mind, it’s important instead to use test data that is as close to production as possible, resulting in a realistic testing environment. Test data management is key here to ensure a process of obfuscating or generating synthetic data.
  2. Creating obfuscated or synthetic data
    Managed test data can help to create test environments on demand, with synthetic data able to deliver a highly repeatable solution. In turn, this can enhance the speed of your business’ testing turnaround time. Manual test preparation can take up to 30% of the testing time, which can prevent your organisation from achieving a continuous deployment or DevOps model.
  3. Testing on demand
    Obfuscated or synthetic test data can allow you to test on demand and achieve faster, cost-efficient delivery without risking the integrity of any sensitive data. Having an automated method to support your test data management can take this to the next level; a brief sketch of synthetic data generation follows this list.
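
As a toy illustration of the synthetic-data approach described above, the sketch below generates seeded, repeatable customer records that mimic the format of production data (for example, a two-letter, eight-digit account number) without containing any real values. All field names, formats, and value ranges are invented for the example.

import random
import string

def fake_name(rng: random.Random) -> str:
    first = rng.choice(["Ana", "Ben", "Chen", "Dita", "Emeka", "Farah"])
    last = rng.choice(["Ito", "Khan", "Lopez", "Muller", "Novak", "Okafor"])
    return f"{first} {last}"

def fake_account(rng: random.Random) -> str:
    # Preserve the assumed production format (two letters + eight digits) so
    # that format validation in the application under test still passes.
    return "".join(rng.choices(string.ascii_uppercase, k=2)) + \
           "".join(rng.choices(string.digits, k=8))

def synthetic_customers(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # seeded, so test runs are repeatable
    return [
        {"id": i, "name": fake_name(rng), "account": fake_account(rng),
         "balance": round(rng.uniform(0, 10_000), 2)}
        for i in range(1, n + 1)
    ]

if __name__ == "__main__":
    for row in synthetic_customers(3):
        print(row)

Seeding the generator makes the data set reproducible, which helps when comparing results across test cycles.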

Guidelines for data de-identification or anonymization should be followed to remove sensitive information, or to modify it beyond recognition, when data is used for testing purposes. If production data is used unchanged for testing, it should be protected with the same level of controls used for the production system. Organisations should not use sensitive information, including personal data, in development and testing environments. It is noted that system and acceptance testing may require an enormous amount of test information, equivalent to operational information. To protect test information against loss of confidentiality and integrity, organisations should comply with the following:

  • Access controls applied in real-world environments should also be implemented in test environments.
  • Establishing and implementing a separate authorisation procedure for the copying of real information into test environments.
  • To keep an audit trail, all activities related to copying and use of sensitive information in test environments should be recorded.
  • If sensitive information will be used in the test environment, it should be protected with appropriate controls such as data masking or data removal.
  • Once the testing is completed, information used in the test environment should be safely and permanently removed to eliminate the risk of unauthorised access.
  • Furthermore, organisations should apply appropriate measures to ensure the secure storage of information assets.

When it comes to testing, there are many factors that require consideration to ensure the correct use and protection of data. Compliance standards, such as the Privacy Act, set out requirements for companies to ensure different types of data are carefully managed and protected. Test data should ideally be created in a generic form with no relation to live system data. However, data often needs to reflect actual real data to ensure accurate testing. If you must use “real” data for testing purposes, consider implementing a robust data masking technique to protect it. When using data for testing, an organisation should ensure it is:

  • Anonymised – Any personal or confidential information that is used should be protected either by deletion or modification.
  • Carefully selected and secured for the period of testing.
  • Securely deleted when testing is complete.
  • Agreed processes used to protect data during testing are securely managed.

Technique to mask data

  1. Data Encryption – An encryption algorithm is used to lock the data so that no one other than the person holding the key can see it. For testing purposes it is often not helpful, as it requires the system to continually lock and unlock the data, and processes need to be in place to manage and share encryption keys.
  2. Data Scrambling – Reorganize characters in the data set in a random order, replacing the original content. For example, a number such as 985467 in a production database could be replaced by 649857 in a test database. Easy to do, but can be less secure if someone figures out the process and can reverse engineer the changes.
  3. Nulling Out – Data is replaced with “null” or is deleted. Not helpful during testing if you need the data to perform certain functions or to check that outputs appear on a page correctly.
  4. Value Variance – Replace original data values by using a function, such as the difference between the lowest and highest value in a series. For example, if a list of product prices ranged between 100 and 1000, each product price could be replaced by a value within that range. This can help protect the data if anyone gains access to the original dataset.
  5. Data Substitution – Data values are substituted with fake, but realistic, alternative values. For example, real names or numbers are replaced by random names and numbers.
  6. Data Shuffling – Similar to substitution, except data values are switched within the same dataset. Data is rearranged in each column using a random sequence; for example, switching between real customer names across multiple customer records. The output set looks like real data, but it doesn’t show the real information for each individual or data record. A short code sketch of several of these techniques follows.
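
The following Python sketch gives toy implementations of several of the techniques above (scrambling, nulling, value variance, substitution, and shuffling). It is for intuition only; production anonymisation should rely on vetted masking tools, and every value and name here is invented for the example.

import random

rng = random.Random(7)

def scramble(value: str) -> str:
    # Data scrambling: reorder the characters randomly (e.g. 985467 -> 649857).
    chars = list(value)
    rng.shuffle(chars)
    return "".join(chars)

def null_out(_value):
    # Nulling out: replace the value entirely.
    return None

def value_variance(values: list[float]) -> list[float]:
    # Value variance: replace each value with a random one drawn from the
    # observed min/max range, hiding the real figures but keeping the spread.
    lo, hi = min(values), max(values)
    return [round(rng.uniform(lo, hi), 2) for _ in values]

def substitute(names: list[str]) -> list[str]:
    # Data substitution: swap real names for fake but realistic ones.
    fakes = ["Pat Reyes", "Sam Dlamini", "Noor Haddad", "Kai Tanaka"]
    return [rng.choice(fakes) for _ in names]

def shuffle_column(values: list) -> list:
    # Data shuffling: keep the real values but reassign them across records.
    shuffled = values[:]
    rng.shuffle(shuffled)
    return shuffled

if __name__ == "__main__":
    print(scramble("985467"))
    print(null_out("confidential"))
    print(value_variance([100.0, 450.0, 1000.0]))
    print(substitute(["Alice Jones", "Bob Lee"]))
    print(shuffle_column(["Alice Jones", "Bob Lee", "Cara Singh"]))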

Best practice to protect data

  1. Test Data Strategy: An effective, agile, and comprehensive test data management program must start with the strategy. You must gain an understanding of your test data landscape, and the different teams across the organization that will use the test data and contribute to it. Your plan should include test data needs, testing environments, your company’s data governance policies and relevant regulations that impact data handling. Starting with the test data management strategy will save time, overhead costs, and rework.
  2. Discovering Test Data: The first test data management best practice is to discover, and integrate, test data from multiple source systems and IT environments, across the organization. To achieve this, enterprises should identify all the relevant data channels and sources early on in the process. This includes discovering and categorizing all sensitive data and personally identifying information (PII) according to multiple data protection regulations and industry legislation.
  3. Protecting Private Data: Today, sensitive data and Personally Identifying Information (PII) is a touchy topic as people and authorities become more aware of the dangers of collecting and using people’s private information. Test data management must follow specific compliance rules, that demand high data governance standards. When sensitive data is involved, data masking keeps it protected. By using data masking tools to obfuscate production data in a way that mimics real-life data – without exposing the real data – we guarantee both authenticity and compliance. Another security aspect to consider is how the test data is stored and managed. Keeping test data accessible only to authorized personnel, and maintaining security protocols, even for apps under development, are essential.
  4. Refreshing Test Data in Real Time: Perhaps the most important factor in test data is keeping it fresh. Due to the sheer volume of enterprise data, many enterprises refresh their test data only periodically, such as once a quarter. Since extracting and provisioning test data is time-consuming, testing teams often reuse old data, over and over again. To maintain the relevance and trustworthiness of test data, a real-time synchronization mechanism is needed that does not require bulk database copying. Another important factor is ensuring that production system performance is not adversely impacted by frequent access.
  5. Ensuring Test Data Relevancy: Time isn’t the only factor impacting the relevancy of test data. Testing quality relies on the ability of the testing teams to source relevant test data to the use case at hand. Due to the complexity of this task, many test data management tools discourage testers from parameter-based subsetting, especially across multiple source systems. Testing teams should examine which data elements are necessary for their particular scenarios, and build the test data subsets accordingly. Not only will the test data sets be more relevant and focused, but they will also improve test data quality and accelerate software delivery.
  6. Maintaining Test Data: Keeping your data fresh and relevant over time leads us to the next test data management best practice, which is ongoing maintenance. In addition to relevancy and accuracy, your team would need to ensure that the data is adequately stored and remains consistent and error-free. This level of accuracy should be maintained over multiple use cases and even as the volume of data increases. Test data management at scale is challenging, so this is one area where your test data tools will have to prove their value. Monitor the cost efficiency of your test data storage solution, and perform regular audits to examine the integrity, quality, and security of your test data.
  7. Automating the Test Data Process:
    By now, you’re probably concerned about the many tedious tasks related to test data. Worry not, because many test data steps can, and should, be automated. Automation makes test data provisioning faster and helps minimize human errors. Agile software development and shift-left testing demand test data automation for integration into CI/CD pipelines. Best practices for data testing have become increasingly automated over the past years. It’s about time test data management did the same.

ISO 27001:2022 A 8.25 Secure development life cycle


Rules for the development of software and systems should be established and applied to developments within the organization. A secure development life cycle is used to ensure that development environments are themselves secure and that the processes for developing and implementing systems and system changes encourage the use of secure coding and development practices. It includes showing how security needs to be considered at all stages of the development life cycle, from design through to live implementation. Specific coding languages and development tools have different vulnerabilities and require different “hardening” techniques accordingly; it is important that these are identified and agreed upon, and that developers are made aware of their responsibilities to follow them. Compliant policies will address security checkpoints during development; secure repositories; security in version control; application security knowledge; and developers’ ability to avoid vulnerabilities, then find and fix them when they occur. The auditor will be looking here to see that security considerations are appropriate to the level of risk for the systems being developed or changed, and that those doing the development understand the need to include security considerations throughout the development life cycle. Strong initial screening in hiring for these skills, in-life management, and training of resources are essential, and practices like pair programming, peer reviews, independent quality assurance, code reviews, and testing are all positive attributes.


Control

Rules for the secure development of software and systems should be established and applied.

Purpose

To ensure information security is designed and implemented within the secure development life cycle of software and systems.

ISO 27002 Implementation Guidance

Secure development is a requirement to build up a secure service, architecture, software and system. To achieve this, the following aspects should be considered:

  1. separation of development, test and production environments;
  2. guidance on the security in the software development life cycle:
    • security in the software development methodology;
    • secure coding guidelines for each programming language used;
  3. security requirements in the specification and design phase;
  4. security checkpoints in projects;
  5. system and security testing, such as regression testing, code scan and penetration tests;
  6. secure repositories for source code and configuration;
  7. security in the version control;
  8. required application security knowledge and training;
  9. developers’ capability for preventing, finding and fixing vulnerabilities;
  10. licensing requirements and alternatives to ensure cost-effective solutions while avoiding future licensing issues.
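
As a hypothetical illustration of items 4 and 5 above, the short Python script below implements a crude pre-merge security checkpoint that scans source files for hard-coded secrets and fails the build if any are found. The patterns are illustrative assumptions; a real pipeline would use dedicated SAST and secret-scanning tools.

import re
import sys
from pathlib import Path

# Toy patterns for hard-coded credentials; real scanners ship far richer rules.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(password|secret|api[_-]?key)\s*=\s*['"][^'"]+['"]"""),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_file(path: Path) -> list[str]:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings

if __name__ == "__main__":
    findings = []
    for path in Path(sys.argv[1] if len(sys.argv) > 1 else ".").rglob("*.py"):
        findings.extend(scan_file(path))
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)  # a non-zero exit fails the checkpoint

Run from a CI job, the non-zero exit blocks the merge, which is the gating behaviour a security checkpoint is meant to provide.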

If development is outsourced, the organization should obtain assurance that the supplier complies with the organization’s rules for secure development.

Other information

Development can also take place inside applications, such as office applications, scripting, browsers and databases.


The SDLC is a process that standardizes security best practices across a range of products and/or applications. It captures industry-standard security activities, packaging them so they may be easily implemented. The lack of a standard approach to securing products causes problems. For one thing, vulnerabilities run rampant in shipped products. The second problem is that developers tend to repeat the same security mistakes, each time expecting a different response. The standard approach to SDLC includes requirements, design, implementation, test, and release/response.

  • The requirements phase: In the requirements phase, best practices for security are integrated into a product. These practices may come from industry standards or be based on responses to problems that have occurred in the past. Requirements exist to define the functional security requirements implemented in the product, and include all the activities of the SDLC.
  • The design phase: The design phase of the SDLC consists of activities that occur prior to writing code. Secure design is about quantifying an architecture (for a single feature or the entire product) and then searching for problems.
  • Implementation or coding: The next phase is implementation, or writing secure code. The SDLC contains a few things programmers must do to ensure that their code has the best chance of being secure. The process involves a mixture of standards and automated tools. Implementation tools include static application security testing (SAST) and dynamic application security testing (DAST) software.
  • The test phase: Formal test activities include security functional test plans, vulnerability scanning, and penetration testing. Vulnerability scanning uses industry-standard tools to determine if any system-level vulnerabilities exist with the application or product.
  • The final phase: Release/response. Release occurs when all the security activities are confirmed against the final build and the software is sent to customers (or made available for download). Response is the interface for external customers and security researchers to report security problems in products. Part of the response should include a product security-incident response team that focuses on training and communicating product vulnerabilities, both individual bugs and those that will require industry-wide collaboration.

To build secure software products, systems, and architectures, organisations should comply with the following:

  1. Development, test, and production environments should be segregated.
  2. Organisations should provide guidance on:
    • Security considerations in the software development methodology
    • Secure coding for each programming language.
  3. Organisations should establish and implement security requirements that apply to the specification and design phase.
  4. Organisations should define security checkpoints for projects.
  5. Organisations should perform security and system testing such as penetration tests and code scanning.
  6. Organisations should create secure repositories for storing source code and configuration.
  7. Organisations should maintain security in version control.
  8. Organisations should ensure all personnel involved in development have sufficient application security knowledge and are provided with the necessary training.
  9. Developers should be capable of identifying, preventing, and fixing security vulnerabilities.
  10. Organisations should comply with licensing requirements and should evaluate the feasibility of alternative cost-effective methods.

Furthermore, if an organisation outsources certain development tasks to external parties, it must ensure that the external party adheres to the organisation’s rules and procedures for the secure development of software and system.

Security is a requirement that must be included within every phase of a system development life cycle. A system development life cycle that includes formally defined security activities within its phases is known as a secure SDLC. Per the Information Security Policy, a secure SDLC must be utilized in the development of all applications and systems.
At a minimum, an SDLC must contain the following security activities. These activities must be documented or referenced within an associated information security plan. Documentation must be sufficiently detailed to demonstrate the extent to which each security activity is applied.

  1. Define Security Roles and Responsibilities
    Security roles must be defined and each security activity within the SDLC must be clearly assigned to one or more security roles. These roles must be documented and include the persons responsible for the security activities assigned to each role.
  2. Orient Staff to the SDLC Security Tasks
    All parties involved in the execution of a project’s SDLC security activities must understand the purpose, objectives and deliverables of each security activity in which they are involved or for which they are responsible.
  3. Establish System Criticality Level
    When initiating an application or system, the criticality of the system must be established. The criticality level must reflect the business value of the function provided by the system and the potential business damage that might result from a loss of access to this functionality.
  4. Classify Information
    As per the Information Security Policy, all information contained within, manipulated by or passing through a system or application must be classified. Classification must reflect the importance of the information’s confidentiality, integrity and availability.
  5. Establish Identity Credential Requirements
    All applications or systems which require authentication must establish a user identity credential. The identity credential must reflect the required confidence level that the person seeking to access the system is who they claim to be, and the potential impact to the security and integrity of the system if the person is not who they claim to be.
  6. Establish Security Profile Objectives
    When initiating an application or system, the security profile objectives must be identified and documented. These objectives must state the importance and relevance of identified security concepts to the system and indicate the extent and rigor with which each security concept is to be built in or reflected in the system and software. Each security concept must be considered throughout each life cycle phase and any special considerations or needs documented. The purpose behind establishing system security profiles and monitoring them throughout the lifecycle is to be actively aware of the relative priority, weight and relevance of each security concept at each phase of the system’s life cycle. Entities must verify that the security profile objectives adequately consider all federal, state and external security mandates for which the system must be compliant.
  7. Profile the System: The system or application being developed must be iteratively profiled by technical teams within the SDLC. A system profile is a high-level overview of the application that identifies the application’s attributes such as the physical topology, the logical tiers, components, services, actors, technologies, external dependencies and access rights. This profile must be updated throughout the various phases of the SDLC.
  8. Decompose the System: The system or application must be decomposed into finer components and its mechanics (i.e. the inner workings) must be documented. This activity is to be iteratively performed within the SDLC. Decomposition includes identifying trust boundaries, information entry and exit points, data flows and privileged code.
  9. Assess Vulnerabilities and Threats: Vulnerability assessments must be iteratively performed within the SDLC process. Threat assessments must consider not only technical threats, but also administrative and physical threats that could have a potential negative impact on the confidentiality, availability and integrity of the system. Threat assessments must consider and document the threat sources, threat source motivations and attack methods that could potentially pose threats to the security of the system. Threat assessments must adhere to all relevant state and federal mandates to which the entity must comply and follow industry best practices including the documentation of the assessment processes. Threat assessments and the underlying threat modeling deliverables that support the assessment must also be fully documented.
  10. Assess Risk: Risk assessments must be iteratively performed within the SDLC process. These begin as an informal, high-level process early in the SDLC and become a formal, comprehensive process prior to placing a system or software into production. All realistic threats and vulnerabilities identified in the threat assessments must be addressed in the risk assessments. The risk assessments must be based on the value of the information in the system, the classification of the information, the value of the business function provided by the system, the potential threats to the system, the likelihood of occurrence, the impact of the failure of the system and the consequences of the failure of security controls. All identified risks are to be appropriately managed by avoiding, transferring, accepting or mitigating the risk. Ignoring risk is prohibited. Risk assessments must adhere to all relevant state and federal mandates, and compliance with them must be documented. The risk assessments must be periodically reviewed and updated as necessary whenever the underlying threat assessment is modified or whenever significant changes are made to the system.
  11. Select and Document Security Controls
    Appropriate security controls must be implemented to mitigate risks that are not avoided, transferred or accepted. Security controls must be justified and documented based on the risk assessments, threat assessments and analysis of the cost of implementing a potential security control relative to the decrease in risk afforded by implementing the control. Documentation of controls must be sufficiently detailed to enable verification that all systems and applications adhere to all relevant security policies and to respond efficiently to new threats that may require modifications to existing controls. Residual risk must be documented and maintained at acceptable levels. A formal risk acceptance, with executive management sign-off, must be performed for medium and high risks that remain after mitigating controls have been implemented. Security control requirements must be periodically reviewed and updated as necessary whenever the system or the underlying risk assessment is modified.
  12. Create Test Data
    A process for the development of significant test data must be created for all applications. A test process must be available for applications to perform security and regression testing. Confidential production data should not be used for testing purposes. If production data is used, entities must comply with all applicable federal, state and external policies and standards regarding the protection and disposal of production data.
  13. Test Security Controls
    All controls are to be thoroughly tested in pre-production environments that are identical, in as much as feasibly possible, to the corresponding production environment. This includes the hardware, software, system configurations, controls and any other customization. The testing process, including regression testing, must demonstrate that all security controls have been applied appropriately, implemented correctly and are functioning properly and actually countering the threats and vulnerabilities for which they are intended. The testing process must also include vulnerability testing and demonstrate the remediation of critical vulnerabilities prior to placing the system into production. Appropriate separation of duties must be observed throughout the testing processes such as ensuring that different individuals are responsible for development, quality assurance and accreditation.
  14. Perform Accreditation
    The system security plan must be analyzed, updated, and accepted by executive management.
  15. Manage and Control Change
    A formal change management process must be followed whenever a system or application is modified in order to avoid direct or indirect negative impacts that the change might impose. The change management process must ensure that all SDLC security activities are considered and performed, if relevant, and that all SDLC security controls and documentation that are impacted by the change are updated.
  16. Measure Security Compliance
    All applications and systems are required to undergo periodic security compliance assessments to ensure they reflect a security posture commensurate with the definition of acceptable risk. Security compliance assessments must include assessments for compliance with all federal, state and external compliance standards for which the entity is required to comply. Security compliance assessments must be performed after all system and application changes and periodically as part of continuous system compliance monitoring.
  17. Perform System Disposal
    The information contained in applications and systems must be protected once a system has reached end of life. Information must be retained according to applicable federal and state mandates or other retention requirements. Information without retention requirements must be discarded or destroyed and all disposed media must be sanitized in accordance with applicable federal and state standards to remove residual information.

Responsibility for each security activity within the SDLC must be assigned to one or more security roles. To accomplish this, the default definition of an SDLC role may be expanded to include security responsibilities and/or new security roles may be defined to encompass security activities. In all cases, the assignment of security activities to roles, and the identification of persons given responsibility for these roles, must be clearly documented.


Secure development life cycle best practices

  1. Think security from the beginning: Before creating a single line of code, begin planning how you will integrate security into every phase of the SDLC. Engage the power of automation in testing and monitoring vulnerabilities from day one. Security needs to be baked into your culture and code, and there’s no better place to start than in the earliest development phase.
  2. Create a secure software development policy: This will provide a guideline for preparing your people, processes, and technology to perform secure software development. This formal policy supplies specific instructions for approaching and instrumenting security in each phase of the SDLC. In addition, it provides the governing rules and defines roles to help your people, processes, and tools minimize the vulnerability risk in software production.
  3. Employ a secure software development framework: A proven framework like NIST SSDF will add structure and consistency to your team’s effort in adhering to secure software best practices. Frameworks can help answer the “What do we do next?” question and benefit all new software developers.
  4. Design software with best practices to meet security requirements: Clearly define all security requirements, then train developers to write code in alignment with these parameters using only secure coding practices. Ensure all your third-party vendors are aware of your security requirements and demonstrate compliance, as they can provide an easy pathway for an attack.
  5. Protect code integrity: Keep all code in secure repositories allowing only authorized access to prevent tampering. Strictly regulate all contact with the code, monitor changes, and closely oversee the code signing process to preserve integrity.
  6. Review and test code early and often: Breakaway from the traditional development pattern of testing code toward the end of the SDLC. Instead, use both developer reviews and automated testing to continually examine code for flaws. Catching vulnerabilities early in the life cycle saves money and time while preventing developer frustration later on.
  7. Be prepared to mitigate vulnerabilities quickly: Vulnerabilities are a fact of life in software development. It’s not if they occur, but when, so be ready with a team, plan, and processes in place to address incidents in real-time. Remember, the faster you can identify and respond to vulnerabilities the better, shortening the window of opportunity for exploitation.
  8. Configure secure default settings: Ship software with secure settings enabled by default, since many customers remain vulnerable simply for lack of knowledge of their new software’s functions. This added customer service touch ensures the consumer will be protected in the early stages of adoption.
  9. Use checklists: There are many moving parts to track and monitor during secure software development. Help your team by using action checklists at periodic intervals such as weekly or monthly meetings to ensure all necessary security policies and procedures are current and functional.
  10. Remain agile and proactive: Wise software developers study vulnerabilities—learning their root causes, spotting patterns, preventing repeat occurrences, and updating their SDLC with improved knowledge. They also watch trends and stay up-to-date on best practices. Dave Brennan offers some parting advice here, “The big picture is knowing and staying current on the industry and best practices. The space is always evolving; security best practice approaches aren’t standing still. So no matter what you’re doing with security, it’s critical to look ahead to see what’s coming, keep learning, and identify better ways to secure your software development process.”

ISO 27001:2022 A 8.15 Logging


A log, in a computing context, is the automatically produced and time-stamped documentation of events relevant to a particular system. Virtually all software applications and systems produce log files. On a Web server, an access log lists all the individual files that people have requested from a website. These files will include the HTML files and their embedded graphic images and any other associated files that get transmitted. From the server’s log files, an administrator can identify numbers of visitors, the domains from which they’re visiting, the number of requests for each page and usage patterns according to variables such as times of the day, week, month or year. In Microsoft Exchange, a transaction log records all changes made to an Exchange database. Information to be added to a mailbox database is first written to an Exchange transaction log. Afterwards, the contents of the transaction log are written to the Exchange Server database. An audit log (also known as an audit trail) records chronological documentation of any activities that could have affected a particular operation or event. Details typically include the resources that were accessed, destination and source addresses, a timestamp and user login information for the person who accessed the resources.


Control

Logs that record activities, exceptions, faults and other relevant events should be produced, stored, protected and analysed.

Purpose

To record events, generate evidence, ensure the integrity of log information, protect against unauthorized access, identify information security events that can lead to an information security incident and to support investigations.

ISO 27002 Implementation Guidance

General
The organization should determine the purpose for which logs are created, what data is collected and logged, and any log-specific requirements for protecting and handling the log data. This should be documented in a topic-specific policy on logging. Event logs should include for each event, as applicable:

a) user IDs;
b) system activities;
c) dates, times and details of relevant events (e.g. log-on and log-off);
d) device identity, system identifier and location;
e) network addresses and protocols.
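
A minimal sketch of what such an event record can look like in practice is shown below, emitting the fields listed above as JSON so that downstream tooling (for example a SIEM) can parse them. The field names are illustrative, not mandated by the standard.

import datetime
import json
import logging
import socket

logging.basicConfig(level=logging.INFO, format="%(message)s")
event_log = logging.getLogger("security-events")

def log_event(user_id: str, activity: str, source_ip: str, protocol: str) -> None:
    event_log.info(json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),  # c)
        "user_id": user_id,                                                     # a)
        "activity": activity,                                                   # b)
        "device": socket.gethostname(),                                         # d)
        "source_ip": source_ip,                                                 # e)
        "protocol": protocol,                                                   # e)
    }))

if __name__ == "__main__":
    log_event("jdoe", "log-on", "203.0.113.10", "ssh")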

The following events should be considered for logging:

a) successful and rejected system access attempts;
b) successful and rejected data and other resource access attempts;
c) changes to system configuration;
d) use of privileges;
e) use of utility programs and applications;
f) files accessed and the type of access, including deletion of important data files;
g) alarms raised by the access control system;
h) activation and de-activation of security systems, such as anti-virus systems and intrusion detection systems;
i) creation, modification or deletion of identities;
j) transactions executed by users in applications. In some cases, the applications are a service or product provided or run by a third party.

It is important for all systems to have synchronized time sources as this allows for correlation of logs between systems for analysis, alerting and investigation of an incident.

Protection of logs
Users, including those with privileged access rights, should not have permission to delete or de-activate logs of their own activities. They can potentially manipulate the logs on information processing facilities under their direct control. Therefore, it is necessary to protect and review the logs to maintain accountability for the privileged users. Controls should aim to protect against unauthorized changes to log information and operational problems with the logging facility including:

  • alterations to the message types that are recorded;
  • log files being edited or deleted;
  • failure to record events, or over-writing of past recorded events if the capacity of the storage media holding a log file is exceeded.

For protection of logs, the use of the following techniques should be considered: cryptographic hashing, recording in an append-only and read-only file, and recording in a public transparency file. Some audit logs can be required to be archived because of requirements on data retention or requirements to collect and retain evidence.
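
One way to apply cryptographic hashing to log protection is a hash chain, in which each entry's digest covers the previous digest, so any later edit or deletion becomes detectable. The following is a minimal sketch, not a production implementation; the seed and log lines are made up for the example.

```python
import hashlib

def chain_hashes(lines, seed=b"log-chain-seed"):
    """Yield (line, digest) pairs; each digest covers the previous digest
    plus the current line, forming a tamper-evident chain."""
    prev = hashlib.sha256(seed).digest()
    for line in lines:
        h = hashlib.sha256(prev + line.encode("utf-8"))
        prev = h.digest()
        yield line, h.hexdigest()

log_lines = ["user=jdoe action=log-on", "user=jdoe action=read file=payroll"]
for line, digest in chain_hashes(log_lines):
    print(digest, line)

# Verification recomputes the chain over the stored lines and compares the
# final digest with a copy kept on a separate, write-protected system.
```
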
Where the organization needs to send system or application logs to a vendor to assist with debugging or troubleshooting errors, logs should be de-identified where possible, using data masking techniques for information such as usernames, internet protocol (IP) addresses, hostnames or organization name, before sending to the vendor. Event logs can contain sensitive data and personally identifiable information, so appropriate privacy protection measures should be taken.
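
A minimal sketch of such de-identification is shown below, assuming key=value formatted log lines; the regular expressions are simple illustrations, and real masking rules should be matched to the organization's actual log formats.

```python
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
USER = re.compile(r"\buser=\S+")   # assumes key=value formatted logs
HOST = re.compile(r"\bhost=\S+")

def mask(line: str) -> str:
    """Replace usernames, hostnames and IPv4 addresses with placeholders."""
    line = IPV4.sub("x.x.x.x", line)
    line = USER.sub("user=<masked>", line)
    line = HOST.sub("host=<masked>", line)
    return line

print(mask("2024-05-01T09:00:05Z host=erp-prod-01 user=jdoe "
           "src=203.0.113.10 action=log-on"))
# -> 2024-05-01T09:00:05Z host=<masked> user=<masked> src=x.x.x.x action=log-on
```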

Log analysis
Log analysis should cover the analysis and interpretation of information security events, to help identify unusual activity or anomalous behavior, which can represent indicators of compromise. Analysis of events should be performed by taking into account:

a) the necessary skills for the experts performing the analysis;
b) determining the procedure of log analysis;
c) the required attributes of each security-related event;
d) exceptions identified through the use of predetermined rules [e.g. security information and event management (SIEM) or firewall rules, and intrusion detection systems (IDSs) or malware signatures] (see the sketch after this list);
e) known behaviour patterns and standard network traffic compared to anomalous activity and behavior [user and entity behaviour analytics (UEBA)];
f) results of trend or pattern analysis (e.g. as a result of using data analytics, big data techniques and specialized analysis tools);
g) available threat intelligence.
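
The sketch referenced in point d) above follows: a minimal, Python-based illustration of a SIEM-style predetermined rule that alerts when one user accumulates five or more failed log-ons within a five-minute window. The threshold, window and event shape are illustrative assumptions, not values taken from the standard.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 5
failures = defaultdict(deque)  # user -> timestamps of recent failed log-ons

def on_event(user: str, outcome: str, ts: datetime):
    if outcome != "failed-logon":
        return
    q = failures[user]
    q.append(ts)
    while q and ts - q[0] > WINDOW:  # discard events outside the window
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"ALERT: {len(q)} failed log-ons for {user} within 5 minutes")

base = datetime(2024, 5, 1, 9, 0)
for i in range(6):
    on_event("jdoe", "failed-logon", base + timedelta(seconds=30 * i))
```
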
Log analysis should be supported by specific monitoring activities to help identify and analyse anomalous behavior, including:
a) reviewing successful and unsuccessful attempts to access protected resources [e.g. domain name system (DNS) servers, web portals and file shares];
b) checking DNS logs to identify outbound network connections to malicious servers, such as those associated with botnet command and control servers (see the sketch after this list);
c) examining usage reports from service providers (e.g. invoices or service reports) for unusual activity within systems and networks (e.g. by reviewing patterns of activity);
d) including event logs of physical monitoring such as entrance and exit to ensure more accurate detection and incident analysis;
e) correlating logs to enable efficient and highly accurate analysis.
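
The sketch referenced in point b) above follows: a minimal Python example that checks DNS query logs against a blocklist of known command-and-control domains. The log records and blocklist entries are made-up examples.

```python
BLOCKLIST = {"evil-c2.example", "botnet.example"}

dns_log = [
    ("2024-05-01T09:00:01Z", "10.0.0.5", "intranet.example"),
    ("2024-05-01T09:00:02Z", "10.0.0.7", "evil-c2.example"),
]

for ts, client, qname in dns_log:
    # Match the query name and every parent domain against the blocklist.
    labels = qname.split(".")
    parents = {".".join(labels[i:]) for i in range(1, len(labels) - 1)}
    if qname in BLOCKLIST or parents & BLOCKLIST:
        print(f"SUSPICIOUS: {client} queried {qname} at {ts}")
```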

Suspected and actual information security incidents should be identified (e.g. malware infection or probing of firewalls) and be subject to further investigation (e.g. as part of an information security incident management process).

Other information

System logs often contain a large volume of information, much of which is extraneous to information security monitoring. To help identify significant events for information security monitoring purposes, the use of suitable utility programs or audit tools to perform file interrogation can be considered.

Event logging sets the foundation for automated monitoring systems which are capable of generating consolidated reports and alerts on system security. A SIEM tool or equivalent service can be used to store, correlate, normalize and analyse log information, and to generate alerts. SIEMs tend to require careful configuration to optimize their benefits. Configurations to consider include identification and selection of appropriate log sources, tuning and testing of rules and development of use cases.

Public transparency files for the recording of logs are used, for example, in certificate transparency systems. Such files can provide an additional detection mechanism useful for guarding against log tampering. In cloud environments, log management responsibilities can be shared between the cloud service customer and the cloud service provider. Responsibilities vary depending on the type of cloud service being used.

Log files are automatically computer-generated whenever an event with a specific classification takes place on the network. The reason log files exist is that software and hardware developers find it easier to troubleshoot and debug their creations when they can access a textual record of the events that the system is producing. Each of the leading operating systems is uniquely configured to generate and categorize event logs in response to specific types of events. Log management systems centralize all log files to gather, sort and analyze log data, and make it easy to understand, trace, and address key issues related to application performance.

Large IT organizations depend on an extensive network of IT infrastructure and applications to power key business services. Log file monitoring and analysis increase the observability of this network, creating transparency and allowing visibility into the cloud computing environment. Observability should not be treated as an end in itself; it should be seen as a mechanism for achieving real business objectives:

  • Improving the reliability of systems for the end user: Log files include information about system performance that can be used to determine when additional capacity is needed to optimize the user experience. Log files can help analysts identify slow queries, errors that cause transactions to take too long, or bugs that impact website or application performance.
  • Maintaining the security posture of cloud computing environments and preventing data breaches: Log files capture events such as unsuccessful log-in attempts, failed user authentication, or unexpected server overloads, all of which can signal to an analyst that a cyber attack might be in progress. The best security monitoring tools can send alerts and automate responses as soon as these events are detected on the network.
  • Improving business decision-making: Log files capture the behavior of users within an application, giving rise to an area of inquiry known as user behavior analytics. By analyzing the actions of users within an application, developers can optimize the application to get users to their goals more quickly, improving customer satisfaction and driving revenue in the process.

An ‘event’ is any action performed by a logical or physical presence on a computer system – e.g. a request for data, a remote login, an automatic system shutdown, a file deletion. An individual event log should contain five main components in order to fulfil its operational purpose:

  1. User ID – who or what account performed the action.
  2. System activity – what happened.
  3. Timestamps – the date and time of the event.
  4. Device and system identifiers, and location – what asset the event occurred on.
  5. Network addresses and protocols – IP information.

For practical purposes, it may not be feasible to log every single event that occurs on a given network. With that in mind, the following 10 events should be treated as particularly important for logging purposes, given their ability to modify risk and the part they play in maintaining adequate levels of information security:

  1. System access attempts.
  2. Data and/or resource access attempts.
  3. System/OS configuration changes.
  4. Use of elevated privileges.
  5. Use of utility programs or maintenance facilities.
  6. File access requests and what occurred (deletion, migration etc).
  7. Access control alarms and critical interrupts.
  8. Activation and/or deactivation of front-end and back-end security systems, such as client-side antivirus software or firewall protection systems.
  9. Identity administration work (both physical and logical).
  10. Certain actions or system/data alterations carried out as part of a session within an application.

It is vitally important that all logs are linked to the same synchronized time source (or set of sources) and that, in the case of third-party application logs, any time discrepancies are accounted for and recorded.

Logs are the lowest common denominator for establishing user, system and application behavior on a given network, especially when faced with an investigation. It is therefore vitally important for organisations to ensure that users – regardless of their permission levels – do not retain the ability to delete or amend their own event logs. Individual logs should be complete, accurate and protected against any unauthorized changes or operational problems, including:

  • Message type amendments.
  • Deleted or edited log files.
  • Any failure to generate a log file, or unnecessary over-writing of log files due to prevailing issues with storage media or network performance.

Logs should be protected using the following methods:

  • Cryptographic hashing.
  • Append-only recording.
  • Read-only recording.
  • Use of public transparency files.

Organisations may need to send logs to vendors to resolve incidents and faults. Should this need arise, logs should be ‘de-identified’ and the following information should be masked:

  • Usernames
  • IP addresses
  • Hostnames

In addition to this, measures should be taken to safeguard personally identifiable information (PII) in line with the organisation’s own data privacy protocols and any prevailing legislation.


When analysing logs for the purposes of identifying, resolving and analysing information security events – with the end goal of preventing future occurrences – the following factors need to be taken into account:

  • The expertise of the personnel carrying out the analysis.
  • How logs are analysed, in line with company procedure.
  • The type, category and attributes of each event that requires analysis.
  • Any exceptions that are applied via network rules emanating from security software, hardware and platforms.
  • The default flow of network traffic, as compared to unexplainable patterns.
  • Trends that are identified as a result of specialised data analysis.
  • Threat intelligence.

Log analysis should not be carried out in isolation; it should be done in tandem with rigorous monitoring activities that pinpoint key patterns and anomalous behavior. To achieve this two-pronged approach, organisations should:

  • Review any attempts to access secure and/or business critical resources, including domain servers, web portals and file sharing platforms.
  • Scrutinise DNS logs to discover outgoing traffic linked to malicious sources and harmful server operations.
  • Collate data usage reports from service providers or internal platforms to identify malicious activity.
  • Collect logs from physical access points, such as key card/fob logs and room access information.

Supplementary Information

Organisations should consider using specialized utility programs, such as a SIEM tool, that help them search through the vast amounts of information that system logs generate, in order to save time and resources when investigating security incidents. If an organisation uses a cloud-based platform to carry out any part of its operations, log management should be treated as a shared responsibility between the service provider and the organisation itself.

System logs generated by servers and other network apparatus can create data in vast quantities, and sooner or later attempts at managing such information in an off-the-cuff fashion are no longer viable. Consequently, information systems managers are tasked with devising strategies for taming these volumes of log data, both to remain compliant with company IT policy and to gain holistic visibility across all IT systems deployed throughout the organization. Log management means defining what you need to log, how to log it, and how long to retain the information. This ultimately translates into requirements for hardware, software and, of course, policies. A log management process generally involves the following steps:

  1. Collection
    The organization needs to collect logs over encrypted channels. The log management solution should ideally come equipped with multiple means of collecting logs, favouring the most reliable means available. In general, organizations should use agent-based collection whenever possible, as this method is generally more secure and reliable than its agentless counterparts.
  2. Storage
    Once they have collected them, organizations need to preserve, compress, encrypt, store, and archive their logs. Companies can look for additional functionality in their log management solution such as the ability to specify where they can store their logs geographically. This type of feature can help meet their compliance requirements and ensure scalability.
  3. Search
    Organizations need to make sure they can find their logs once they’ve stored them, so they should index their records in such a way that they are discoverable via plaintext, REGEX, and API queries. A comprehensive log management solution should enable companies to optimize each log search with filters and classification tags. It should also allow them to view raw logs, conduct broad and detailed queries, and compare multiple queries at once.
  4. Correlation
    Organizations need to create rules that they can use to detect interesting events and perform automated actions. Of course, most events don’t occur on a single host in a single log. For that reason, companies should look for a log management solution that lets them create correlation rules according to the unique threats and requirements their environments face. They should also seek out a tool that allows them to import other data sources such as vulnerability scans and asset inventories.

  5. Output
    Finally, companies need to be able to distribute log information to different users and groups using dashboards, reports, and email. The log management solution should facilitate the exchange of data with other systems and the security team.

Effective logging allows us to reach back in time to identify events, interactions, and changes that may have relevance to the security of information resources. A lack of logs often means that we lose the ability to investigate events (e.g. anomalies, unauthorized access attempts, excessive resource use) and perform root cause analysis to determine causation. In the context of this control area, logs can be interpreted very broadly to include automated and handwritten logs of administrator and operator activities taken to ensure the integrity of operations in information processing facilities, such as data and network centers.

How do we protect the value of log information?

Effective logging strategies must also consider how to protect log data against tampering, sabotage, or deletion that devalues the integrity of log information. This usually involves consideration of role-based access controls that partition the ability to read and modify log data based on business needs and position responsibilities. In addition, timestamp information is extremely critical when performing correlation analysis between log sources. One essential control needed to assist with this is ensuring that organizational systems all have their clocks synchronized to a common source (often achieved via an NTP server) so that the timelining of events can be performed with high confidence.

What should we log?

The question of what types of events to log must take into consideration a number of factors, including relevant compliance obligations, organizational privacy policies, data storage costs, access control needs, and the ability to monitor and search large data sets in an appropriate time frame. When considering your overall logging strategy, it can very often be helpful to “work backward”. Rather than initially attempting to catalog all event types, it can be useful to frame investigatory questions beginning with those issues that occur on a regular basis or have the potential to be associated with significant risk events (e.g. abuse of or attacks on ERP systems). These questions can then lead to a focused review of the security event data that has the most relevance to these particular questions and issues. Ideally, event logs should include key information such as:

  • User IDs
  • System activities
  • Dates, times and details of key events
  • Device identity or location
  • Records of successful and rejected system access attempts
  • Records of successful and rejected resource access attempts
  • Changes to system configurations
  • Use of privileges
  • Use of system utilities and applications
  • Files accessed and the kind of access
  • Network addresses and protocols
  • Alarms raised by the access control system
  • Activation and de-activation of protection systems, such as AV and IDS

If you need assistance, or have any doubts and need to ask questions, contact me at preteshbiswas@gmail.com. You can also contribute to this discussion, and I shall be happy to publish your contributions. Your comments and suggestions are also welcome.

ISO 27001:2022 A 8.19 Installation of software on operational systems

Operational software can broadly be described as any piece of software that the business actively uses to conduct its operations, as distinct from test software or development projects. It is vitally important to ensure that software is installed and managed on a given network in accordance with a strict set of rules and requirements that minimize risk, improve efficiency and maintain security within internal and external networks and services.

Control

Procedures and measures should be implemented to securely manage software installation on operational systems.

Purpose

To ensure the integrity of operational systems and prevent exploitation of technical vulnerabilities.

ISO 27002 Implementation Guidance

The following guidelines should be considered to securely manage changes and installation of software on operational systems:
a) performing updates of operational software only by trained administrators upon appropriate management authorization;
b) ensuring that only approved executable code, and no development code or compilers, is installed on operational systems;
c) only installing and updating software after extensive and successful testing (see the sketch after this list);
d) updating all corresponding program source libraries;
e) using a configuration control system to keep control of all operational software as well as the system documentation;
f) defining a rollback strategy before changes are implemented;
g) maintaining an audit log of all updates to operational software;
h) archiving old versions of software, together with all required information and parameters, procedures, configuration details and supporting software as a contingency measure, and for as long as the software is required to read or process archived data.
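
The sketch referenced in item c) above follows: a minimal, hypothetical Python check that verifies an installer's SHA-256 digest against a value published through the change-approval process before installation proceeds. The package name and digest are invented for the example.

```python
import hashlib
import sys

# Hypothetical allow-list: package file -> approved SHA-256 digest.
APPROVED = {
    "erp-client-4.2.1.msi": "aa" * 32,  # placeholder digest for the example
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str) -> bool:
    name = path.replace("\\", "/").rsplit("/", 1)[-1]
    expected = APPROVED.get(name)
    return expected is not None and sha256_of(path) == expected

if __name__ == "__main__":
    target = sys.argv[1]
    if not verify(target):
        sys.exit(f"REFUSED: {target} is not an approved, verified package")
    print(f"{target} verified; proceed with the authorized installation step")
```
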
Any decision to upgrade to a new release should take into account the business requirements for the change and the security of the release (e.g. the introduction of new information security functionality or the number and severity of information security vulnerabilities affecting the current version). Software patches should be applied when they can help to remove or reduce information security vulnerabilities.

Computer software can rely on externally supplied software and packages (e.g. software programs using modules which are hosted on external sites), which should be monitored and controlled to avoid unauthorized changes, because they can introduce information security vulnerabilities. Vendor supplied software used in operational systems should be maintained at a level supported by the supplier. Over time, software vendors will cease to support older versions of software. The organization should consider the risks of relying on unsupported software.

Open source software used in operational systems should be maintained to the latest appropriate release of the software. Over time, open source code can cease to be maintained but still be available in an open source software repository. The organization should also consider the risks of relying on unmaintained open source software when used in operational systems.
When suppliers are involved in installing or updating software, physical or logical access should only be given when necessary and with appropriate authorization, and the supplier’s activities should be monitored. The organization should define and enforce strict rules on which types of software users can install. The principle of least privilege should be applied to software installation on operational systems. The organization should identify what types of software installations are permitted (e.g. updates and security patches to existing software) and what types of installations are prohibited (e.g. software that is only for personal use and software whose pedigree with regard to being potentially malicious is unknown or suspect). These privileges should be granted based on the roles of the users concerned.

In order to securely manage change and installations on their network, organisations should:

  • Ensure that software updates are only carried out by trained and competent personnel.
  • Only install robust executable code that’s free from any bugs and has safely exited the development stage.
  • Only install and/or update software after said update or patch has been successfully tested, and the organisation is confident that no conflicts or errors will ensue.
  • Maintain an up-to-date library system.
  • Utilise a ‘configuration control system’ that manages all instances of operational software, including program documentation.
  • Agree upon a ‘rollback strategy’ prior to any updates or installations, to ensure business continuity in the event of an unforeseen error or conflict.
  • Keep a log of any updates performed to operational software, including a summary of the update, the personnel involved and a timestamp.
  • Ensure that unused software – including all documentation, configuration files, system logs and supporting procedures – is securely stored for future use, should the need arise.
  • Enforce a strict set of rules on the type of software packages that users can install, based on the principle of ‘least privilege’ and in accordance with relevant roles and responsibilities.

Where vendor-supplied software is concerned (e.g. any software used in the operation of machinery or for a bespoke business function), such software should always be kept in good working order by referring to the vendor’s guidelines for safe and secure operation. It is important to note that even where software or software modules are externally supplied and managed (i.e. the organisation is not responsible for any updates), steps should be taken to ensure that third-party updates do not compromise the integrity of the organisation’s network. Organisations should avoid using unsupported vendor software unless absolutely necessary, and should consider the associated security risks of continuing to run redundant applications as opposed to upgrading to newer and more secure systems. If a vendor requires access to an organisation’s network to perform an installation or update, activity should be monitored and validated in line with all relevant authorization procedures.

The organization should establish and maintain documented procedures to manage the installation of software on operational systems. Operational system software installations should only be performed by qualified, trained administrators, and updates to operating system software should utilize only approved and tested executable code. It is ideal to utilize a configuration control system and to have a rollback strategy in place prior to any updates. Audit logs of updates, and previous versions of updated software, should be maintained. Third parties that require access to perform software updates should be monitored, and their access removed once updates are installed and tested.

Software should be upgraded, installed and/or patched in accordance with the organisation’s published change management procedures, to ensure uniformity with other areas of the business. Whenever a patch is identified that either totally eliminates a vulnerability (or series of vulnerabilities), or helps in any way to improve the organisation’s information security operation, such changes should almost always be applied (though there is still the need to assess such changes on a case-by-case basis). Where the need arises to use open source software, this should always be of the latest publicly available version that is actively maintained. Accordingly, organisations should consider the inherent risks of using unmaintained software within all business functions.

As a basic principle, the recommendation (as a best practice) is that software should be installed only by authorized personnel (usually IT staff). This can be enforced through the information security policy, or any other rules or best practices established in the organization (although this approach relies on each employee applying those rules). To verify compliance, the organization can make periodic checks to analyze the software installed on the equipment of an employee selected at random. Another way to enforce it is to limit user privileges to a minimum, although this will not always be possible, because some profiles need administrator privileges in the systems they manage. These privileges must also be checked periodically, since an employee can change area, department, etc., which can mean that new privileges have to be enabled and others disabled. The organization should establish a rule that the software installed on corporate equipment is only for professional use, because software always consumes resources. Furthermore, all types of software are exposed to threats, so the use of non-professional software in the organization could unnecessarily increase risk. For the installation of new software, the following controls may be established:

  • Employees may not download software from the Internet or bring software from home without authorization; doing so is prohibited.
  • When an employee detects the need for use of a particular software, a request needs to be transmitted to the IT department. The request can be stored as a record or as evidence.
  • The IT department shall determine whether the organization has a license for the requested software.
  • If a license exists, the IT department notifies the employee and proceeds to install the software on the computer of the user who requested it.
  • If there is no license, a responsible party must assess whether the requested software is really necessary for the performance of the employee's duties. Where the software costs money, the financial feasibility of the purchase must also be analyzed.
  • If the software costs money, an analysis should be made as to whether there is another similar tool on the market that is cheaper or even free (Total Cost of Ownership must be calculated).
  • Top management should participate in the decision on the acquisition of new software.
  • Once the decision has been made, the IT department will proceed to include the software in their inventory and will install the software.

The IT department can define a repository – for internal use only – to store all corporate, definitive versions of applications used by the organization. This repository should be accessible only by authorized personnel, and only from the internal network of the organization, which makes it easier to install the software on employees' equipment when needed. It is also important to identify all software that is installed inside the organization. For this purpose, discovery tools can be used that analyze, over the internal network, what software is installed on each computer. These tools make it possible to check whether someone has installed software in an uncontrolled way, i.e. without opening a request in accordance with the established rules.
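
A minimal sketch of such a discovery check is shown below, assuming a Debian-based host (dpkg-query is used to enumerate installed packages) and a hypothetical approved-software inventory.

```python
import subprocess

# Hypothetical approved inventory; in practice this would come from the
# IT department's software repository or asset management system.
APPROVED = {"openssh-server", "postgresql-15", "curl"}

def installed_packages() -> set[str]:
    out = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

for pkg in sorted(installed_packages() - APPROVED):
    print(f"UNAPPROVED: {pkg}")
```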

ISO 27001:2022 A 8.31 Separation of development, test and production environments

Separation of development, test and production environments is important to achieve segregation of the functions involved, and the rules for transferring software from development to production environments should be well defined and documented. Failure to properly segregate development, test, and production environments may result in loss of availability, confidentiality, and integrity of information assets. Production computing environments shall be either logically or physically separate from development and test environments. Developer access to production environments shall be prohibited or limited to troubleshooting, with all activity recorded and monitored. Logon procedures and passwords shall be different for production and development/test environments. Procedures shall exist for transferring software or hardware from development and test to production. Where physical separation for development/test is not feasible, security measures shall be equal to or higher than those required for the production environment. Organisations should therefore implement appropriate procedures and controls to securely segregate development, test, and production environments and so reduce security risk.

When development and testing staff have access to the production environment, they can introduce untested or unauthorized code, or even change the actual data of the system. In some systems, this capability could be abused to commit fraud or to introduce malicious or untested code, which can cause serious operational problems. Development and testing staff also pose a threat to the confidentiality of production information, and development and testing activities may cause unintended changes to software and information if they share the same computing environment. Separating development, test and operational resources is therefore highly desirable to reduce the risk of accidental modification of, or unauthorized access to, operational software and business data.

Control

Development, testing and production environments should be separated and secured.

Purpose

To protect the production environment and data from compromise by development and test activities.

ISO 27002 Implementation Guidance

The level of separation between production, testing and development environments that is necessary to prevent production problems should be identified and implemented. The following items should be considered:
a) adequately separating development and production systems and operating them in different domains (e.g. in separate virtual or physical environments);
b) defining, documenting and implementing rules and authorization for the deployment of software from development to production status;
c) testing changes to production systems and applications in a testing or staging environment prior to being applied to production systems;
d) not testing in production environments except in circumstances that have been defined and approved;
e) compilers, editors and other development tools or utility programs not being accessible from production systems when not required;
f) displaying appropriate environment identification labels in menus to reduce the risk of error (see the sketch after this list);
g) not copying sensitive information into the development and testing system environments unless equivalent controls are provided for the development and testing systems.
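
The sketch referenced in point f) above follows: a minimal Python example that displays a prominent environment label and, in the spirit of point d), blocks a destructive test action when it runs in production. The APP_ENV variable and function names are illustrative assumptions.

```python
import os
import sys

ENV = os.environ.get("APP_ENV", "development")

def banner() -> str:
    """A prominent label reduces the risk of acting in the wrong environment."""
    return f" {ENV.upper()} ENVIRONMENT ".center(60, "*")

def reset_test_data():
    if ENV == "production":
        # Testing in production must be explicitly defined and approved;
        # by default the operation is refused.
        sys.exit("BLOCKED: destructive test operation attempted in production")
    print("Test data reset")

print(banner())
reset_test_data()
```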

In all cases, development and testing environments should be protected considering:
a) patching and updating of all the development, integration and testing tools (including builders, integrators, compilers, configuration systems and libraries);
b) secure configuration of systems and software;
c) control of access to the environments;
d) monitoring of change to the environment and code stored therein;
e) secure monitoring of the environments;
f) taking backups of the environments.
A single person should not have the ability to make changes to both development and production without prior review and approval. This can be achieved for example through segregation of access rights or through rules that are monitored. In exceptional situations, additional measures such as detailed logging and real-time monitoring should be implemented in order to detect and act on unauthorized changes.

Other information

Without adequate measures and procedures, developers and testers having access to production systems can introduce significant risks (e.g. unwanted modification of files or the system environment, system failure, running unauthorized and untested code in production systems, disclosure of confidential data, data integrity and availability issues). There is a need to maintain a known and stable environment in which to perform meaningful testing and to prevent inappropriate developer access to the production environment. Measures and procedures include carefully designed roles in conjunction with implementing segregation-of-duty requirements and having adequate monitoring processes in place. Development and testing personnel also pose a threat to the confidentiality of production information, and development and testing activities can cause unintended changes to software or information if they share the same computing environment. Separating development, testing and production environments is therefore desirable to reduce the risk of accidental change or unauthorized access to production software and business data.

In some cases, the distinction between development, test and production environments can be deliberately blurred and testing can be carried out in a development environment or through controlled roll-outs to live users or servers (e.g. a small population of pilot users). In some cases, product testing can occur through live use of the product inside the organization. Furthermore, to reduce downtime of live deployments, two identical production environments can be supported where only one is live at any one time. Supporting processes for the use of production data in development and testing environments are necessary. Organizations can also consider the guidance provided in this section for training environments when conducting end-user training.

In the software development life cycle, three environments are commonly used: the development environment, the testing environment, and the production environment. The development environment is the place where software applications and services are created and developed. In a testing environment, developers and quality assurance professionals can test an application’s functionality, performance, and reliability before it is released to users. Software applications or services are deployed and made available to users in production environments. There are several types of environments, each of which serves a different purpose, and they differ in their level of control, access, and stability. Organisations should maintain the confidentiality, integrity, and availability of sensitive information assets by segregating development, testing, and production environments through appropriate procedures, controls, and policies.

  • Development Environment: A development environment allows developers to write code and make changes or updates without breaking anything in a live environment. New features and updates can be tried out without affecting end users. This environment helps reduce potential errors and maintain a streamlined workflow. Additionally, at this stage a lot of preliminary testing happens before moving on to the testing environment.
  • Testing Environment: A test environment is a setup where software undergoes a series of experimental tests. These environments allow you to test individual components of your app, eliminate bugs, provide accurate feedback about the quality and behavior of the app, and encourage improvement and innovation, among other things.
  • Production Environment: The production environment is the stage where all apps, software, or products are live for their intended use and users. Any bugs or issues are identified and fixed prior to this stage, so the product or update should work as intended. The production environment contains only the final version of the product or app, in order to avoid any confusion or security vulnerabilities. This type of infrastructure (development, testing, staging) allows teams to ensure high-quality products for their users while encouraging fast innovation and improvement.

In all three environments, developers and other stakeholders have varying degrees of control and access. To minimize disruptions to end users, changes to the application or service in a production environment are usually restricted to a few authorized users. Access is typically more open in the testing environment to test different scenarios and configurations, and changes can be made more quickly and easily. Typically, developers have even greater access to the code base in a development environment and can experiment and make changes as needed to explore different approaches and ideas. Organisations should take into account the potential production problems that should be prevented when determining the required level of separation between the three environments.

  1. Segregation and operation of development and production systems to an adequate degree. For example, the use of separate virtual and physical environments for production could be an effective method.
  2. Appropriate rules and authorisation procedures should be established, documented, and applied for the use of software in the production environment after going through the development environment.
  3. Organisations should review and test changes made to applications and production systems in a test environment separate from the production environment before these changes are used in the production environment.
  4. Testing in production environments should not be allowed unless such testing is defined and approved prior to testing.
  5. Development tools such as compilers and editors should not be accessible from the production environments unless this access is absolutely necessary.
  6. To minimise errors, proper environment identification labels should be prominently displayed in menus.
  7. Sensitive information assets should not be transferred into development and testing environments unless equivalent security measures are applied in the development and testing systems.

The three environments can be compared in terms of purpose, control and access, and stability and reliability:

  • Development Environment – Purpose: the environment in which software applications and services are developed and built. Control and access: developers are free to experiment with the codebase and make changes as necessary to explore different ideas. Stability and reliability: the application or service is still being built and refined, so it is less stable and reliable than in a testing environment.
  • Testing Environment – Purpose: an environment that simulates the production environment so developers and QA professionals can test the application’s performance, functionality, and reliability. Control and access: more open, to allow testing of different scenarios and configurations; changes can be made more quickly and easily. Stability and reliability: due to stress tests and scenarios, the application or service is less stable and reliable than in production.
  • Production Environment – Purpose: the live environment in which the application or service is deployed and made available to end users. Control and access: limited to a small number of authorized users, and any changes must be carefully planned and coordinated so that end users are not disrupted. Stability and reliability: minimal downtime and disruption, with stable and reliable performance.

There is also an important difference in each environment’s stability and reliability levels. Because the application is still being created and refined, it may be less stable and reliable in the development environment. As the application or service undergoes different stress tests and scenarios in the testing environment, it can also be less stable and reliable there. Application or service stability and availability are expected in the production environment, with very little downtime or disruption. Organisations should protect development and testing environments against security risks taking into account the following:

  1. All development, integration, and testing tools such as builders, integrators, and libraries should be regularly patched and updated.
  2. All systems and software should be configured securely.
  3. Access to environments should be subject to appropriate controls.
  4. Changes to environments and code stored in it should be monitored and reviewed.
  5. Environments should be securely monitored and reviewed.
  6. Environments should be backed up.
  7. No single individual should be given the privilege to make changes in both development and production environments without obtaining approval first. To enforce this, organisations can segregate access rights or establish and implement access controls.

In addition, organisations can also consider extra technical controls such as logging of all access activities and real-time monitoring of all access to these environments.

If organisations fail to implement the necessary measures, their information systems can face significant security risks. For instance, developers and testers with access to the production environment may make unwanted changes to files or system environments, execute unauthorized code, or accidentally disclose sensitive information. Therefore, organisations need a stable environment in which to perform robust testing of code, and should prevent developers from accessing the production environments where sensitive real-world data is hosted and processed. Organisations should also apply measures such as designated roles together with separation-of-duty requirements.

Development and testing personnel are another threat to the confidentiality of production data. When the development and testing teams perform their activities using the same computing devices, they may accidentally make changes to sensitive information or software programs. Organisations are advised to establish supporting processes for the use of production data in testing and development systems. Moreover, organisations should take into account the measures addressed in this control when they perform end-user training in training environments.

Organizations need to establish and appropriately protect secure development environments for system development and integration efforts that cover the entire system development life cycle. Development environments need to be protected to prevent the malicious or accidental development and update of code that may create vulnerabilities or compromise confidentiality, integrity, and availability. Protection requirements should be determined from risk assessment, business requirements, and other internal and external requirements, including legislation, regulation, contractual agreements, and policies. In particular, if any form of live data is used in development environments, it needs to be specially protected and controlled. Separating development, testing, and production environments has several benefits.

  • It ensures a controlled and predictable development and testing process for the application or service. With distinct environments for each stage, developers and QA professionals can concentrate on one aspect of the application or service without being diverted by other components of the codebase. This helps minimize errors and bugs and makes them easier to identify and resolve.
  • Separating environments also ensures that the application or service is released to users in a stable and reliable manner. Developers and quality assurance professionals can identify and fix potential problems before deploying the application to the production environment by testing the application or service in a simulated testing environment. This also helps maintain the reputation of, and trust in, the application or service by preventing disruptions and downtime.
  • Separate environments additionally bring flexibility and scalability. Developers and other stakeholders can easily switch between environments at each stage of the development process and scale each environment up and down as needed. As well as supporting agile development methods, this can make experimentation and iteration easier.

ISO 27001:2022 A 8.34 Protection of information systems during audit testing

Audit tests play a critical role in detecting and eliminating security risks and vulnerabilities in information systems. However, the audit process, whether performed in operational, testing, or development environments, can expose sensitive information to the risks of unauthorized disclosure, or loss of integrity and availability. It is important to ensure that all IT controls and information security audits are planned events, rather than reactive ‘on-the-spot’ challenges. Audit requirements and activities involving verification of operational systems need to be carefully planned and agreed to minimize disruption to business processes. Whenever tests and audit activities (e.g. vulnerability scans, penetration tests) are carried out on operational systems, consideration needs to be given to ensuring that operations are not negatively impacted. Additionally, the scope and depth of testing must be defined, and any such auditing or testing of operational systems must go through a formal and appropriately authorized process. It is highly important that audit standards for access to systems and data are agreed with appropriate management, and a technical audit team must update and control this information whenever there are changes to the technical networks.

Control

Audit tests and other assurance activities involving assessment of operational systems should be planned and agreed between the tester and appropriate management.

Purpose

To minimize the impact of audit and other assurance activities on operational systems and business processes.

ISO 27002 Implementation Guidance

The following guidelines should be observed:

  1. agreeing audit requests for access to systems and data with appropriate management;
  2. agreeing and controlling the scope of technical audit tests;
  3. limiting audit tests to read-only access to software and data. If read-only access is not available to obtain the necessary information, executing the test by an experienced administrator who has the necessary access rights on behalf of the auditor;
  4. if access is granted, establishing and verifying the security requirements (e.g. antivirus and patching) of the devices used for accessing the systems (e.g. laptops or tablets) before allowing the access;
  5. only allowing access other than read-only for isolated copies of system files, deleting them when the audit is completed, or giving them appropriate protection if there is an obligation to keep such files under audit documentation requirements (see the sketch after this list);
  6. identifying and agreeing on requests for special or additional processing, such as running audit tools;
  7. running audit tests that can affect system availability outside business hours;
  8. monitoring and logging all access for audit and test purposes.
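
The sketch referenced in guideline 5 above follows: a minimal Python example that hands the auditor an isolated copy of a file rather than access to the original, then deletes the copy when the audit is complete. The paths are hypothetical, and simple deletion is shown; where retention obligations apply, the copy would instead be protected and archived.

```python
import shutil
import tempfile
from pathlib import Path

def audited_copy(source: str) -> Path:
    """Copy a file into a dedicated temporary audit area and return the copy."""
    audit_dir = Path(tempfile.mkdtemp(prefix="audit-"))
    return Path(shutil.copy2(source, audit_dir))

copy_path = audited_copy("/var/log/app/transactions.log")  # hypothetical path
print(f"Auditor works on: {copy_path}")

# ... audit activities take place on the isolated copy, never the original ...

shutil.rmtree(copy_path.parent)  # remove the copy once the audit is completed
```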

Other information

Audit tests and other assurance activities can also happen on development and test systems, where such tests can impact, for example, the integrity of code or lead to disclosure of sensitive information held in such environments.

Most organizations undergo a series of security audits each year, ranging from financial IT controls reviews to targeted assessments of critical systems. A security audit is a systematic evaluation of the security of a company’s information system that measures how well it conforms to an established set of criteria. The following five steps are generally part of a security audit:

  • Agree on goals. Include all stakeholders in discussions of what should be achieved with the audit.
  • Define the scope of the audit. List all assets to be audited, including computer equipment, internal documentation and processed data.
  • Conduct the audit and identify threats. List potential threats related to each asset. Threats can include the loss of data, equipment or records through natural disasters, malware or unauthorized users.
  • Evaluate security and risks. Assess the risk of each of the identified threats happening, and how well the organization can defend against them.
  • Determine the needed controls. Identify what security measures must be implemented or improved to minimize risks.

A thorough audit typically assesses the security of the system’s physical configuration and environment, software, information handling processes and user practices. Auditors should take privacy regulations and risks into account when planning, performing, and reporting assurance and consulting assignments. Due to the increasing risk of reputational damage and litigation, the auditor/system tester has to take a significant spectrum of privacy issues and ramifications into account when managing the audit function. Key areas of concern are the staff management process; audit planning; collecting, handling, and storing information when performing and reporting audit results; and potential data leaks. When hiring auditors, there is an even greater need for due diligence to ensure that newly hired auditors act in accordance with relevant laws and policies when using personal information during assurance or consulting engagements. Internal auditors must understand that it may be inappropriate, and in some cases illegal, to access, retrieve, review, manipulate, or use personal information when conducting internal audit engagements.

Audit trails maintain a record of system activity both by system and application processes and by user activity of systems and applications. In conjunction with appropriate tools and procedures, audit trails can assist in detecting security violations, performance problems, and flaws in applications. The auditor can obtain valuable information about activity on a computer system from the audit trail, and audit trails improve the auditability of the computer system. Organizations must maintain a complete and accurate audit trail for network devices, servers and applications. This enables organizations to identify the root causes of issues that might introduce inaccuracy in reporting. The problem management system must also provide adequate audit trail facilities that allow tracing from an incident to its underlying cause.

IT security administration must monitor and log security activity, and identify security violations to report to senior management; this directly supports audit controls over information systems and networks. To fulfil this control objective, administrators must ensure all network devices, servers, and applications are properly configured to log to a centralized server. Administrators must also periodically review logging status to ensure that these devices, servers and applications are logging correctly. Finally, internal auditors should consider related privacy regulations, regulatory requirements, and legal considerations when reporting information outside the organization. Questions to consider include:

  1. What privacy laws and regulations impact the audit/testing?
  2. What type of personal information does the audit collect?
  3. Does it have privacy policies and procedures with respect to the collection, use, retention, destruction, and disclosure of personal information?
  4. Does the auditing process have responsibility and accountability assigned for managing a privacy program?
  5. How is personal information protected?
  6. Is any personal information collected during the audit disclosed to third parties?
  7. Are auditors properly trained in handling privacy issues and concerns?
  8. Does the organization have adequate resources to develop, implement, and maintain an effective privacy program?
  9. Does the organization complete a periodic assessment to ensure that privacy policies and procedures are being followed?

Audits that include testing activities can prove disruptive to users if any unforeseen outages occur as a result of testing or assessments. Through working with leadership, it should be possible to determine when audits will occur and obtain relevant information in advance about the specific IT controls that will be examined or tested. Develop an ‘audit plan’ for each audit that provides information relevant to each system and area to be assessed. These audit plans should take into consideration:

  • Asset Inventory with contact information for system administrators/owners;
  • Requirements for testing/maintenance windows;
  • Information about backups (if applicable) in case systems later need to be restored due to unplanned outages;
  • Checklists or other materials provided in advance by auditors, etc.

If applicable, work with IT and other departments to provide audit preparation services to ensure that everyone understands their role in the audit and how to respond to auditors’ questions, issues, and concerns. Protecting sensitive information during audits is critical, and documents provided to auditors should be recovered, where possible, shortly before audits are completed. Any audit activity that assesses an operational system should always be managed to minimize impact on the system during required hours of operation, and any testing of operational systems that could have an adverse effect on the system should be conducted during off-hours. Organisations should consider:

  • Appropriate management and the auditor should agree on access to systems and information assets.
  • Agreement on the scope of technical audit tests to be performed.
  • Organisations should provide only read-only access to information and software. If the read-only technique cannot be used, an administrator with the necessary access rights can access systems or data on behalf of the auditor.
  • If an access request is authorized, organisations should first verify that devices used to access systems meet the security requirements before they provide access.
  • Access should only be provided for isolated copies of files extracted from the system. These copies should be permanently deleted once the audit is complete unless there is an obligation to retain those files. If read-only access is possible, this control does not apply.
  • Requests by auditors to perform special processing such as deploying audit tools should be agreed upon by the management.
  • If an audit runs the risk of impacting system availability, the audit should be carried out outside of business hours to maintain the availability of information.
  • Access requests made for audits should be logged for the audit trail.

When audits are performed on testing or development environments, organisations should be cautious against the following risks:

  • Compromise of the integrity of code.
  • Loss of confidentiality of sensitive information.

ISO 27001:2022 A 8.8 Management of technical vulnerabilities


A vulnerability is defined in the ISO 27002 standard as “a weakness of an asset or group of assets that can be exploited by one or more threats”. Vulnerability management is the process by which vulnerabilities in IT are identified and the risks of those vulnerabilities are evaluated. This evaluation leads either to correcting the vulnerabilities and removing the risk, or to a formal risk acceptance by the management of the organization (e.g. where the impact of an attack would be low, or where the cost of correction outweighs the possible damage to the organization).

The term vulnerability management is often confused with vulnerability scanning. Although the two are related, there is an important difference. Vulnerability scanning consists of using a computer program to identify vulnerabilities in networks, computer infrastructure or applications. Vulnerability management is the wider process surrounding vulnerability scanning, which also takes into account aspects such as risk acceptance and remediation.

Depending on the size and structure of the organization, the approach to vulnerability scanning may differ. Small organizations with a good understanding of IT resources throughout the enterprise might centralize vulnerability scanning. Larger organizations are more likely to have some degree of decentralization, so vulnerability scanning might be the responsibility of individual units, and some organizations blend centralized and decentralized vulnerability assessment. Regardless, before starting a vulnerability scanning program it is important to have the authority to conduct the scans and to understand the targets that will be scanned. Vulnerability scanning tools and methods are often tailored to particular types of information resources and vulnerability classes. The table below shows several important vulnerability classes and some relevant tools.

Common Types of Technical Vulnerabilities | Relevant Assessment Tools
Application Vulnerabilities | Web Application Scanners (static and dynamic), Web Application Firewalls
Network Layer Vulnerabilities | Network Vulnerability Scanners, Port Scanners, Traffic Profilers
Host/System Layer Vulnerabilities | Authenticated Vulnerability Scans, Asset and Patch Management Tools, Host Assessment and Scoring Tools
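
To make the network-layer row concrete at a toy scale, the sketch below checks which TCP ports on a host accept connections, using only the Python standard library. It is a teaching aid rather than a real scanner, and, as noted above, should only be run against hosts you are authorized to scan.

```python
# Toy TCP port scan; real assessments use dedicated scanners like those above.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan only hosts you own or are authorized to assess.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```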

Information systems should be regularly reviewed for compliance with the organisation’s information security policies and standards. Automated tools are normally used to check systems and networks for technical compliance, and these should be identified and implemented as appropriate. Where such tools are used, it is necessary to restrict their use to as few authorized personnel as possible and to carefully control and coordinate when they are used, to prevent compromise of system availability and integrity.


Control

Information about technical vulnerabilities of information systems in use should be obtained, the organization’s exposure to such vulnerabilities should be evaluated and appropriate measures should be taken.

Purpose

To prevent exploitation of technical vulnerabilities.

ISO 27002 Implementation Guidance

Identifying technical vulnerabilities
The organization should have an accurate inventory of assets as a prerequisite for effective technical vulnerability management; the inventory should include the software vendor, software name, version numbers, current state of deployment (e.g. what software is installed on what systems) and the person within the organization responsible for the software. To identify technical vulnerabilities, the organization should consider:

  1. defining and establishing the roles and responsibilities associated with technical vulnerability management, including vulnerability monitoring, vulnerability risk assessment, updating, asset tracking and any coordination responsibilities required;
  2. for software and other technologies (based on the asset inventory list), identifying information resources that will be used for identifying relevant technical vulnerabilities and for maintaining awareness about them, and updating this list of information resources based on changes in the inventory or when other new or useful resources are found;
  3. requiring suppliers of information systems (including their components) to ensure vulnerability reporting, handling and disclosure, including the requirements in applicable contracts;
  4. using vulnerability scanning tools suitable for the technologies in use to identify vulnerabilities and to verify whether the patching of vulnerabilities was successful;
  5. conducting planned, documented and repeatable penetration tests or vulnerability assessments by competent and authorized persons to support the identification of vulnerabilities. Exercising caution as such activities can lead to a compromise of the security of the system;
  6. tracking the usage of third-party libraries and source code for vulnerabilities; this should be included in secure coding practices (an illustrative dependency check against a public vulnerability database follows this list).
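
As one hedged illustration of point 6, the sketch below asks the public OSV vulnerability database (https://osv.dev) whether a given library version has known advisories. It assumes network access and relies on the OSV query endpoint as publicly documented; the package name and version are examples only.

```python
# Query the OSV database for known advisories affecting one package version.
import json
import urllib.request

def query_osv(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs recorded for the given package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Example: an intentionally old version, likely to have published advisories.
print(query_osv("requests", "2.19.0"))
```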

The organization should develop procedures and capabilities to:

  1. detect the existence of vulnerabilities in its products and services including any external component used in these;
  2. receive vulnerability reports from internal or external sources.

The organization should provide a public point of contact as part of a topic-specific policy on vulnerability disclosure so that researchers and others are able to report issues. The organization should establish vulnerability reporting procedures and online reporting forms, and make use of appropriate threat intelligence or information-sharing forums. The organization should also consider bug bounty programs, where rewards are offered as an incentive to help identify vulnerabilities so that they can be appropriately remediated, and should share information with competent industry bodies or other interested parties.

Evaluating technical vulnerabilities
To evaluate identified technical vulnerabilities, the following guidance should be considered:

  1. analysing and verifying reports to determine what response and remediation activity is needed;
  2. once a potential technical vulnerability has been identified, identifying the associated risks and the actions to be taken; such actions can involve updating vulnerable systems or applying other controls.

Taking appropriate measures to address technical vulnerabilities
A software update management process should be implemented to ensure the most up-to-date approved patches and application updates are installed for all authorized software. If changes are necessary, the original software should be retained and the changes applied to a designated copy. All changes should be fully tested and documented, so that they can be reapplied, if necessary, to future software upgrades. If required, the modifications should be tested and validated by an independent evaluation body.
The following guidance should be considered to address technical vulnerabilities:

  1. taking appropriate and timely action in response to the identification of potential technical vulnerabilities; defining a timeline to react to notifications of potentially relevant technical vulnerabilities;
  2. depending on how urgently a technical vulnerability needs to be addressed, carrying out the action according to the controls related to change management or by following information security incident response procedures;
  3. only using updates from legitimate sources (which can be internal or external to the organization);
  4. testing and evaluating updates before they are installed to ensure they are effective and do not result in side effects that cannot be tolerated [i.e. if an update is available, assessing the risks associated with installing the update (the risks posed by the vulnerability should be compared with the risk of installing the update)];
  5. addressing systems at high risk first;
  6. developing remediation (typically software updates or patches);
  7. testing to confirm whether the remediation or mitigation is effective;
  8. providing mechanisms to verify the authenticity of remediation (a checksum-verification sketch follows this list);
  9. if no update is available or the update cannot be installed, considering other controls, such as:
    • applying any workaround suggested by the software vendor or other relevant sources;
    • turning off services or capabilities related to the vulnerability;
    • adapting or adding access controls (e.g. firewalls) at network borders;
    • shielding vulnerable systems, devices or applications from attack through deployment of suitable traffic filters (sometimes called virtual patching);
    • increasing monitoring to detect actual attacks;
    • raising awareness of the vulnerability.
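
One simple and widely used way to satisfy point 8 above, verifying the authenticity of remediation, is to compare a downloaded patch against a checksum published by the vendor over a trusted channel. The sketch below is illustrative; the file name and expected digest are placeholders.

```python
# Verify a downloaded patch against a vendor-published SHA-256 checksum.
import hashlib

def verify_patch(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Illustrative gate before installation (placeholder file name and digest):
# if not verify_patch("vendor-patch-1.2.3.bin", "<published digest>"):
#     raise SystemExit("Checksum mismatch - do not install this update")
```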

For acquired software, if the vendors regularly release information about security updates for their software and provide a facility to install such updates automatically, the organization should decide whether to use the automatic update or not.

Other considerations
An audit log should be kept for all steps undertaken in technical vulnerability management. The technical vulnerability management process should be regularly monitored and evaluated in order to ensure its effectiveness and efficiency. An effective technical vulnerability management process should be aligned with incident management activities, to communicate data on vulnerabilities to the incident response function and provide technical procedures to be carried out in case an incident occurs. Where the organization uses a cloud service supplied by a third-party cloud service provider, technical vulnerability management of cloud service provider resources should be ensured by the cloud service provider. The cloud service provider’s responsibilities for technical vulnerability management should be part of the cloud service agreement and this should include processes for reporting the cloud service provider’s actions relating to technical vulnerabilities (see 5.23). For some cloud services, there are respective responsibilities for the cloud service provider and the cloud service customer. For example, the cloud service customer is responsible for vulnerability management of its own assets used for the cloud services.

Other information

Technical vulnerability management can be viewed as a sub-function of change management and as such can take advantage of change management processes and procedures. There is a possibility that an update does not address the problem adequately and has negative side effects, and in some cases an update cannot easily be uninstalled once it has been applied. If adequate testing of updates is not possible (e.g. because of costs or lack of resources), a delay in updating can be considered in order to evaluate the associated risks, based on the experience reported by other users.

Where software patches or updates are produced, the organization can consider providing an automated update process in which updates are installed on affected systems or products without the need for intervention by the customer or user. If an automated update process is offered, it can allow the customer or user to choose whether to turn off the automatic update or to control the timing of the installation. Where the vendor provides such a process, the organization determines whether or not to apply it. One reason for not electing for automated updates is to retain control over when the update is performed; for example, software used for a business operation may not be updatable until the operation has completed.

A weakness of vulnerability scanning is that it may not fully account for defence in depth: two countermeasures that are always invoked in sequence can each have vulnerabilities that are masked by strengths in the other. The composite countermeasure is not vulnerable, whereas a vulnerability scanner can report that both components are vulnerable. The organization should therefore take care in reviewing and acting on vulnerability reports.

Many organizations supply software, systems, products and services not only within the organization but also to interested parties such as customers, partners or other users. These software, systems, products and services can have information security vulnerabilities that affect the security of users. Organizations can release remediation and disclose information about vulnerabilities to users (typically through a public advisory) and provide appropriate information to software vulnerability database services.


Prior to implementing any vulnerability controls, it is essential to obtain a complete and up-to-date list of the physical and digital assets that are owned and operated by the organisation. Software asset information should include (see the inventory-listing sketch after this list):

  • Vendor name
  • Application name
  • Version numbers currently in operation
  • Where the software is deployed
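
As a small, hedged illustration of collecting part of this information automatically, the sketch below lists the name and version of every Python package installed on one host using only the standard library. A real inventory would also need vendor details, operating system packages, and deployment locations.

```python
# Enumerate installed Python distributions as one slice of a software inventory.
from importlib import metadata

def installed_packages() -> list[tuple[str, str]]:
    """Return sorted (name, version) pairs for installed Python distributions."""
    return sorted(
        (dist.metadata["Name"] or "<unknown>", dist.version)
        for dist in metadata.distributions()
    )

for name, version in installed_packages():
    print(f"{name}=={version}")
```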

When attempting to pinpoint technical vulnerabilities, organisations should:

  • Clearly outline who (within the organisation) is responsible for vulnerability management from a technical perspective, in accordance with its various functions, including (but not limited to):
    • Asset management
    • Risk assessment
    • Monitoring
    • Updating
    • Software ownership (who is responsible for each piece of software within the organisation)
  • Maintain an inventory of applications and resources that will be used to identify technical vulnerabilities.
  • Ask suppliers and vendors to disclose vulnerabilities upon the supply of new systems and hardware, and specify as such in all relevant contracts and service agreements.
  • Make use of vulnerability scanning tools, including patching facilities.
  • Carry out regular, documented penetration tests – either internally or via a certified third-party.
  • Be mindful of the use of third-party code libraries and/or source code for underlying programmatic vulnerabilities.

In addition to internal systems, organisations should develop policies and procedures that detect vulnerabilities across all of their products and services, and should receive vulnerability reports relating to the supply of those products and services. Organisations should make a public effort to track down vulnerabilities and encourage third parties to engage in vulnerability management efforts through the use of bounty programs (where exploits are looked for and reported to the organisation for a reward). Organisations should make themselves available to the general public through forums, public email addresses and research activity, so that the collective knowledge of the wider public can be used to safeguard products and services at source. Where remedial action has been taken that affects users or customers, organisations should consider releasing relevant information to the affected individuals or organisations, and should engage with specialist security organisations to disseminate information about vulnerabilities and attack vectors. In addition, organisations should consider offering an automatic update procedure that customers are able to opt in or out of, based on their business needs. Adequate reporting is key to ensuring swift and effective remedial action when vulnerabilities are discovered. When evaluating vulnerabilities, organisations should:

  • Carefully analyse any reports and decide what action needs to be taken, including (but not limited to) the amendment, updating or removal of affected systems and/or hardware.
  • Agree upon a resolution that takes into account other ISO controls (particularly those related to ISO 27002:2022) and acknowledges the level of risk involved.

Software vulnerabilities are best combated with a proactive approach to software updates and patch management. Prior to implementing any amendments, organisations should ensure that incumbent software versions are retained, and that all changes are fully tested and applied to a designated copy of the software. When directly addressing vulnerabilities after they have been identified, organisations should:

  1. Seek to resolve all vulnerabilities in a timely and efficient manner.
  2. Where possible, follow the organisational procedures on change management and incident response .
  3. Only apply updates and patches that emanate from trusted and/or certified sources, particularly in the case of third-party vendor software and equipment.
  4. Where vendor software is concerned, organisations should make a judgement call based on the information available as to whether it is necessary to apply automatic updates (or parts thereof) to acquired software and hardware.
  5. Test any updates that are required prior to installation, to avoid any unforeseen incidents in an operational environment.
  6. Seek to address high risk and business-critical systems as a priority.
  7. Ensure that any remedial actions are effective and authentic.

In the event of an update not being available, or any issues that prevent an update being installed (such as cost issues), organisations should consider other measures, such as:

  • Asking the vendor for advice on a workaround or “sticking plaster” solution whilst remedial efforts are ramped up.
  • Disabling or stopping any network services that are affected by the vulnerability.
  • Implementing network security controls at critical gateway points, including traffic rules and filters.
  • Increasing the overall level of monitoring in line with the associated risk.
  • Ensuring that all affected parties are aware of the vulnerability, including suppliers and customers.
  • Delaying the update to better evaluate the associated risks, especially where operational cost may be an issue.

Organisations should keep an audit log of all relevant vulnerability management activities, in order to aid remedial efforts and improve procedures in the event of a security incident. The entire vulnerability management process should be periodically reviewed and evaluated, in order to improve performance and identify more vulnerabilities at source. If the organisation uses software hosted by a cloud service provider, it should ensure that the service provider’s stance towards vulnerability management is aligned with its own; this alignment should form a key part of any binding service agreement between the two parties, including any reporting procedures.


Steps in the Vulnerability Management Life cycle

  1. Asset Discovery
    First, you must create (or maintain) your asset directory. In this step, you take inventory of your organization’s assets, including software, hardware, operating systems, and services, taking note of the current versions and applied patches. Establish a baseline of identified vulnerabilities to serve as a reference when detecting new vulnerabilities. Periodically revisit the inventory and update it when you add new assets (software or devices).
  2. Prioritization
    Classify your assets based on their risk level and importance to business operations. Assign business values to every asset class to determine which assets should be first for vulnerability assessment; core business software and hardware should be the priority (a scoring sketch follows this list).
  3. Vulnerability Assessment
    Once you’ve established baseline risk profiles and determined the priority level of your assets, arrange them according to the degree of exposure to specific vulnerabilities. The vulnerability assessment should consider each asset’s classification, criticality, and vulnerabilities. Research publicly available vulnerability lists and risk rankings to identify the exposure level of each asset to specific vulnerabilities.
  4. Reporting
    Build an asset security strategy based on your identified risks and priority levels. Document the required remediation steps for each known vulnerability and continuously monitor suspicious behavior to lower the system’s overall risk.
  5. Remediation
    Implement the security strategy to remediate your prioritized vulnerabilities, addressing high-risk and critical assets first. This step typically involves updating software and hardware, applying vulnerability patches, modifying security configurations, and identifying vulnerable areas to protect critical assets and infrastructure. You might have to deactivate specific user accounts, provide additional security awareness training, or introduce new technologies to handle certain tasks that the IT team used to perform manually.
  6. Evaluation and Verification
    The final step in the vulnerability management life cycle involves evaluating your security strategy and verifying that your security measures have successfully reduced or eliminated your prioritized threats. This process will likely include several steps and should be a continuous effort with regular scans and assessments to ensure your vulnerability management policies are effective.
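
As a hedged illustration of the prioritization step, the sketch below ranks findings by multiplying a CVSS-style severity score by a locally assigned asset-criticality weight. The formula, weights, and CVE identifiers are illustrative assumptions, not requirements of the standard.

```python
# Rank vulnerability findings by severity weighted with asset criticality.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve_id: str
    cvss: float        # 0.0-10.0 severity, e.g. from a public database
    criticality: int   # 1 (low) to 5 (business-critical), assigned locally

def priority(finding: Finding) -> float:
    """Higher score means remediate sooner; a simple illustrative heuristic."""
    return finding.cvss * finding.criticality

findings = [
    Finding("payroll-db", "CVE-2023-0001", 9.8, 5),   # placeholder IDs
    Finding("test-server", "CVE-2023-0002", 9.8, 1),
    Finding("web-portal", "CVE-2023-0003", 6.5, 4),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.asset}: {f.cve_id} score={priority(f):.1f}")
```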

Vulnerability Assessment:

Vulnerability management and vulnerability assessment both help address and resolve security vulnerabilities. Vulnerability assessment offers visibility into the current state of the situation, while vulnerability management provides continuous, real-time intelligence, reporting, and remediation guidance. A vulnerability assessment is typically the first step in a vulnerability management process. It involves using scanners to gather information from devices and systems on the corporate network and comparing that information to known vulnerabilities disclosed by software vendors. The organization’s IT staff runs scans at regular intervals and schedules upgrades and patching.

Vulnerability management is a continuous process rather than a scheduled process performed ad hoc. It involves running an ongoing program that cycles through vulnerability assessment, prioritization, and remediation, employing multiple data sources to continually assess the situation and reassess the existing state of services and software. Whereas vulnerability assessments aim to discover vulnerabilities as quickly as possible and are more effective for detecting known vulnerabilities, penetration tests provide an in-depth assessment to identify unknown vulnerabilities. Vulnerability assessment typically involves automated, repeatable scans, which are useful for evaluating remediation attempts. Vulnerability assessments identify outdated applications or operating systems and device configuration issues such as insecure ports and weak passwords; they are best suited to detecting common vulnerabilities and exposures (CVEs) listed on public databases.

Vulnerability assessments involve automated tools that scan the network for known CVEs. There are many vulnerability scanning tools, ranging from open-source to commercial solutions. When choosing a tool, organizations should consider various factors, including the infrastructure to be tested and the configuration and support options. Vulnerability management tools scan corporate networks for vulnerabilities that potential intruders could exploit. If the scan finds weaknesses, the software suggests or initiates corrective action; in this way, vulnerability management software reduces the likelihood of a cyber attack. The most important features an organization should expect from a vulnerability assessment tool are:

  • Quality and speed—vulnerability scanning can take a long time in large networks, and can result in false positives. Test a prospective tool on your network, see how long it takes to run, and compare selected findings with a manual assessment of vulnerabilities.
  • User experience—the product should be easy to navigate and use, and vulnerability reports should be easy to understand by all relevant stakeholders.
  • Compatibility—the product’s signature database should support all major operating systems, applications, and infrastructure components used by the organization.
  • Cloud support—most organizations are running workloads in the cloud, and the tool should be able to detect vulnerabilities in IaaS, PaaS and SaaS environments.
  • Compliance—the product must support all relevant compliance standards applicable to the organization, and should provide reports in the format required by auditors.
  • Prioritization—the product should offer both manual review of vulnerabilities and automated prioritization.
  • Remediation instructions—the tool must provide actionable remediation instructions that can be easily followed by IT staff and developers.

Best Practices for an Effective Vulnerability Management Program

  1. Account for all IT Assets and Networks: All organizations have hardware or software that is not in common use, or that was deployed without the knowledge of IT staff. These outdated programs and systems may seem harmless, but they are often the most vulnerable parts of the security infrastructure, just waiting to be exploited by potential attackers. This makes it critical to perform a comprehensive inventory of all hardware and software in the organization, and to scan all assets for vulnerabilities.
  2. Establish a Vulnerability Management Policy: The purpose of a vulnerability management policy is to define rules for reviewing and evaluating vulnerabilities, applying system updates to mitigate them, and validating that the risk is no longer present. Vulnerability management policies typically cover network infrastructure, but policy scope can vary by organization size, type, and industry. They can extend to vulnerabilities affecting servers, operating systems, cloud environments, database servers, and more.
  3. Use High Quality Threat Intelligence Feeds: High quality data about threats can be a game changer in keeping your network secure. It allows IT and security teams to stay one step ahead of attackers, by being aware of the latest attack patterns and known vulnerabilities. Threat intelligence feeds can uncover newly discovered vulnerabilities and exploits. These feeds are maintained by experts who track potential threats. Continuous access to updated information is critical for augmenting automated vulnerability scanners.
  4. Perform Regular Penetration Testing: Penetration testing is conducted by ethical hackers working on behalf of an organization, and aims to identify exploitable vulnerabilities in networks or computing systems. It helps businesses find and fix high-priority vulnerabilities that attackers can actually exploit. Penetration testing can help protect your network from external attacks, and also provides unbiased, expert insight into weaknesses in security infrastructure. When combined with other threat management processes, such as vulnerability assessments, regular penetration testing is a highly effective way to identify and remediate vulnerabilities.

Common Challenges

  • “Scanning Can Cause Disruptions.” IT operations teams are, quite reasonably, very sensitive about how vulnerability scans are conducted and keen to understand any potential for operational disruption. Legacy systems and older equipment can have issues even with simple network port scans. It can therefore be useful to build confidence in the scanning process by partnering with these teams to conduct risk evaluations before initiating or expanding a scanning program. It is also important to agree the “scan windows” in which these vulnerability assessments will occur, to ensure that they do not conflict with regular maintenance schedules.
  • “Drowning In Vulnerability Data and False Positives.” Technical vulnerability management practices can produce very large data sets. Just because a tool indicates that a vulnerability is present does not mean the finding is confirmed; follow-up evaluation is frequently needed to validate it. Reviewing every reported vulnerability is infeasible for many teams, so it is very important to develop a vulnerability prioritization plan before initiating a large number of scans. These priority plans should be risk-driven, ensuring that teams spend their time on the most important vulnerabilities in terms of both likelihood of exploitation and impact.