Examples of Rules for the Development of Software and Systems

Purpose

To ensure that information security is designed and implemented within the development life cycle for applications and information systems.

Scope

All XXX applications and information systems that are business critical and/or process, store, or transmit sensitive data. This policy applies to all internal and external engineers and developers of XXX software and infrastructure.

Secure Software and System Development Policy

1 – Data Storage

This section deals with the storage of Personal Data and its availability. It describes procedures for the secure storage of information in databases and details the management of access permissions and the distribution of passwords adopted in the operation of these structures.

1.1 Procedures and Media for Data Storage

Do not use a storage medium whose read and write access is not restricted by password. Data should preferably be stored in encrypted form.

1.2 Permissions for Accessing Information in Databases

  • Applications should not have access to any database utilizing a user login with root permissions.
  • Applications should not have access to any database utilizing a user login with permissions to execute commands in Data Definition Language (DDL).
  • Applications should not have access to any database utilizing a user login with permissions beyond those strictly necessary for its operation.

1.3 Password Management and Distribution for Data Access

  • The creation of passwords that do not follow the standards established by XXX should not be allowed. Passwords must have at least 6 (six) alphanumeric characters and use special characters (@ # $ %); a validation sketch follows this list.
  • Passwords should not be stored in source code.
  • A record of the users and systems that use each issued password must be securely stored.
  • The same passwords should not be used for the development, testing, homologation (staging), and production environments.
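
As an illustration of the standard above, the following Python sketch validates a candidate password against the minimum format described in this section (length, alphanumeric characters, and at least one of the special characters @ # $ %). The function name and exact rules are assumptions for illustration, not a mandated implementation.

    import re

    # Hypothetical validator for the password standard described above:
    # at least 6 characters, letters and digits, and at least one of @ # $ %.
    SPECIAL = "@#$%"

    def meets_password_standard(password: str) -> bool:
        if len(password) < 6:
            return False
        if not re.search(r"[A-Za-z]", password) or not re.search(r"\d", password):
            return False
        return any(ch in SPECIAL for ch in password)

    # Example: "Abc12@" passes, "abcdef" does not.
    assert meets_password_standard("Abc12@")
    assert not meets_password_standard("abcdef")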

2 – Authorization and Authentication

2.1 Authorization and Authentication of Users

  • Passwords must never be stored in plain text; they must be stored using a salted, secure hash algorithm (see the sketch after this list).
  • Individual (named) user accounts with password control must be used to determine each user's identity.
  • Authentication via AD should be used whenever possible to authenticate internal users.
  • Users must be made aware of the permissions and levels of access they have.
  • Active Directory (AD) groups should be used to determine access policies and user roles.
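
To illustrate the salted-hash requirement above, here is a minimal Python sketch using PBKDF2 from the standard library. The iteration count and storage format are assumptions, not a mandated implementation.

    import hashlib, hmac, os

    def hash_password(password: str) -> str:
        # Random per-user salt plus a slow, salted hash (PBKDF2-HMAC-SHA256).
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt.hex() + ":" + digest.hex()

    def verify_password(password: str, stored: str) -> bool:
        salt_hex, digest_hex = stored.split(":")
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                        bytes.fromhex(salt_hex), 200_000)
        return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))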

2.2 Authentication on Web Systems

Since HTTP is a stateless protocol that uses cookies to maintain user sessions, it is necessary to secure both the exchange of credentials and the other pages accessed by users of web systems. The HTTPS protocol helps provide this guarantee; therefore, HTTPS must be used on all system screens.
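
As a sketch of how this requirement might be enforced in a web application, assuming a Python/Flask stack purely for illustration, unencrypted requests can be redirected to HTTPS and session cookies marked as secure:

    from flask import Flask, request, redirect

    app = Flask(__name__)
    # Session cookies are only sent over HTTPS and are not readable by JavaScript.
    app.config.update(SESSION_COOKIE_SECURE=True, SESSION_COOKIE_HTTPONLY=True)

    @app.before_request
    def force_https():
        # Redirect any plain-HTTP request to its HTTPS equivalent.
        # Behind a reverse proxy, X-Forwarded-Proto handling may also be needed.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)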

3 – Secure Communication

This section deals with the secure transmission of Sensitive Personal Data between systems, in order to safeguard the integrity, authenticity, and other attributes pertinent to the communicated data.

  • A communication channel with control of duplication and loss of information/messages must be used.
  • A communication channel that provides integrity control of the transmitted data (HTTPS) must be used.
  • A communication channel with authentication control (HTTPS, digital certificates issued by trusted authorities, VPNs) must be used.
  • A communication channel that provides confidentiality of the transmitted data (HTTPS, VPNs) must be used.
  • The data to be transmitted must be securely stored at both ends of the communication.
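
A minimal Python sketch of a client applying these channel requirements over HTTPS; the endpoint URL and certificate paths are placeholders for illustration only.

    import requests

    # Hypothetical endpoint and CA bundle; both are placeholders.
    URL = "https://api.example.internal/records"
    CA_BUNDLE = "/etc/ssl/certs/internal-ca.pem"

    # verify= enforces server certificate validation (integrity/authenticity);
    # cert= presents a client certificate for mutual authentication.
    response = requests.get(
        URL,
        verify=CA_BUNDLE,
        cert=("/etc/ssl/certs/client.crt", "/etc/ssl/private/client.key"),
        timeout=10,
    )
    response.raise_for_status()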

4 – Attacks on Systems and their Defenses

It is recommended that the main known attacks be mitigated, so that malicious attacks cannot compromise the security of the system, expose Sensitive Personal Data, or perform unauthorized operations, among other possible vulnerabilities.

  • SQL injection attacks must be prevented.
  • SQL statements should not be built by concatenating text parameters from untrusted sources, such as values entered by users or even values stored in the database.
  • Access permissions to the database for application users must be restricted.
  • Whenever possible, parameters in SQL commands (DML or DDL) must be passed using prepared statements; queries that cannot be parameterized should receive special treatment, such as escaping or hexadecimal encoding (see the sketch after this list).
  • HTML and Javascript injection attacks must be prevented.
  • Cross-site scripting (XSS) attacks should be prevented.
  • Broken Authentication and Session Management attacks must be prevented.
  • Systems must be subjected to intrusion testing tools.
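
To illustrate the prepared-statement rule above, here is a minimal Python sketch using the standard sqlite3 module; the database, table, and column names are hypothetical.

    import sqlite3

    conn = sqlite3.connect("example.db")  # hypothetical database
    cur = conn.cursor()

    user_supplied_name = "O'Brien; DROP TABLE patients;--"

    # Unsafe: building SQL by string concatenation (what the policy forbids).
    # cur.execute("SELECT id FROM patients WHERE name = '" + user_supplied_name + "'")

    # Safe: the parameter is bound by the driver and never interpreted as SQL.
    cur.execute("SELECT id FROM patients WHERE name = ?", (user_supplied_name,))
    rows = cur.fetchall()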

5 – Auditing, Tracking and Logs

This section presents guidelines for the maintenance of records/logs for subsequent auditing, tracking and consultation of incidents related to system security. Each system has a different criticality in terms of data access restriction, non-repudiation and history of operations carried out in the database. For this reason, this section does not define what information should be audited, but rather suggests possible items that can be audited, tracked or logged. These items, then, must be evaluated by product managers.

Examples of events that can be logged:

  • Login and logout operations;
  • Access to certain screens or sections of the system;
  • Access to information with some restrictions (For example: confidential documents, personal data);
  • Operations for the inclusion, alteration or deletion of records in the database;
  • Change of access profile (for systems that have access with different profiles);
  • Execution of jobs and automated tasks.

Examples of information that can be stored for each event (a logging sketch follows this list):

  • Date and time;
  • User who performed the operation;
  • IP address;
  • User session identifier (when applicable, for example: cookies);
  • Screen (page) of the system in which the operation was performed;
  • Instance identifier (for clustered systems);
  • For insertion, alteration or deletion operations, the type of operation, name of the table that was manipulated, record ID and, if applicable, previous and current values for each field;
  • Parameters provided by the user (for example, GET or POST parameters), taking care not to store Sensitive Personal Data, such as passwords;
  • System response time;
  • For the execution of jobs and automated tasks, the result of the operation: success, failure, cancellation, etc.
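
A minimal Python sketch of how a few of the events and fields listed above could be recorded as structured audit-log entries; the field names and values are illustrative, not mandated.

    import json, logging
    from datetime import datetime, timezone

    audit = logging.getLogger("audit")
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def audit_event(event: str, user: str, ip: str, **extra) -> None:
        # One JSON line per event: timestamp, actor, source address, details.
        record = {"timestamp": datetime.now(timezone.utc).isoformat(),
                  "event": event, "user": user, "ip": ip, **extra}
        audit.info(json.dumps(record))

    # Examples of logged events from the list above.
    audit_event("login", user="jdoe", ip="10.0.0.7")
    audit_event("record_update", user="jdoe", ip="10.0.0.7",
                table="patients", record_id=42, field="status",
                previous="active", current="discharged")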

6 – Prevention, Reaction, and Mitigation of Security Breaches

6.1 Backups

  • The specification of the need and the assignment of the responsibility for making backups of the database and of the system source codes, as well as the access policies for this backup, must be included in the project plan.
  • A structured procedure for restoring backups must be defined.
  • Personnel in charge of the recovery of backups must be properly designated and trained.
  • Baselines of the system versions must be created, enabling quick rollback to a previous version.
  • Simulations of data restoration must be carried out regularly.

6.2 Tests

  • Manual security tests must be carried out before releasing each software version that changes its structure (login screens, unauthenticated services, new forms with user interaction, etc.).
  • Automated tests must ensure that services and confidential data are protected and available only to the users authorized to hold the information (see the sketch after this list).
  • A specific testing policy must be developed, whether automated or not, aiming at guaranteeing non-vulnerability to the main known attacks on systems.
  • Test scenarios should be defined to guarantee the non-functional software requirements, preferably carried out by a test team different from the software development team, in order to avoid bias.
  • Test scenarios should be defined, mainly in terms of security, for cases of updates to the system architecture (application servers, database, browser versions, operating system versions, etc.).
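
As a sketch of the automated checks referenced above, a test could assert that a protected endpoint rejects unauthenticated requests. The base URL is a placeholder, and the use of pytest-style tests with the requests library is an assumption.

    import requests

    BASE_URL = "https://staging.example.internal"  # hypothetical homologation host

    def test_protected_endpoint_requires_authentication():
        # Without credentials, confidential resources must not be served.
        response = requests.get(f"{BASE_URL}/patients/42", allow_redirects=False)
        assert response.status_code in (301, 302, 401, 403)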

6.3 Incidents

  • A planned procedure must be maintained for taking the system offline immediately and performing corrective maintenance.
  • A specific policy to foster the follow-up on security breach incident response must be defined.
  • Lessons learned from past incidents should be used to review the testing policy and increase system security.

7 – Development Environment

Security is a requirement that must be included within every phase of a system development life cycle.  A system development life cycle that includes formally defined security activities within its phases is known as a secure SDLC. Per the Information Security Policy, a secure SDLC must be utilized in the development of all applications and systems. At a minimum, an SDLC must contain the following security activities. These activities must be documented or referenced within an associated information security plan. Documentation must be sufficiently detailed to demonstrate the extent to which each security activity is applied. The documentation must be retained for auditing purposes.

  1.  Define Security Roles and Responsibilities
  2.  Orient Staff to the SDLC Security Tasks
  3.  Establish a System Criticality Level
  4.  Classify Information
  5.  Establish System Identity Credential Requirements
  6.  Establish System Security Profile Objectives
  7.  Create a System Profile
  8.  Decompose the System
  9.  Assess Vulnerabilities and Threats
  10.  Assess Risks
  11.  Select and Document Security Controls
  12.  Create Test Data
  13.  Test Security Controls
  14.  Perform Certification and Accreditation
  15.  Manage and Control Change
  16.  Measure Security Compliance
  17.  Perform System Disposal

There is not necessarily a one-to-one correspondence between security activities and SDLC phases. Security activities often need to be performed iteratively as a project progresses or cycles through the SDLC. Unless stated otherwise, the placement of security activities within the SDLC may vary in accordance with the SDLC being utilized and the security needs of the application or system. Finally, it is important to note that the Secure SDLC process is comprehensive by intention, to assure due diligence, compliance, and proper documentation of security-related controls and considerations. Designing security into systems requires an investment of time and resources, and the extent to which security is applied to the SDLC process should be commensurate with the classification (data sensitivity and system criticality) of the system being developed and the risks this system may introduce into the overall environment. This assures value to the development process and deliverable. Generally speaking, the best return on investment is achieved by rigorously applying security within the SDLC process to high-risk/high-cost projects. Where it is determined that a project will not leverage the full Secure SDLC process (for example, on a lower-risk/cost project), the rationale must be documented, and the security activities that are not used must be identified and approved as part of the formal risk acceptance process.

Note: Data classification cannot be used as the sole determinant of whether or not the project is low risk/cost. For example, public-facing websites cannot be considered low risk/cost projects even if all the data is public. There is a risk that the website is compromised to inject malware onto visitors' machines or to change the content of the website to create embarrassment.

7.1 Source Code Access

A version control system with access control and recovery in case of failures must be used. (For example: Microsoft Team Foundation Server).

7.2 Separation of Environments

  • The Development/Testing/Homologation environments must be separated from the Production environment.
  • Different databases must be used for each environment.
  • Different application/web servers must be used for each environment.
  • Access to the Development/Testing/Homologation environment should only be provided to members of the development team and to those interested in the project (stakeholders).
  • Periodic tests must be carried out to ensure the security of the development/testing/homologation environment.
  • Developers should not be provided with passwords to access the production environment.

8 – Data Protection

8.1 Cryptography and Hashing

  • A cryptographic method that follows Kerckhoffs' principle should be used: the encryption method and its parameters must be public and documented, and only the cryptographic key must be kept confidential.
  • An encryption algorithm for which a practical attack faster than brute-force (trial-and-error) key search is known should not be used.
  • Electronic codebook (ECB) block encryption mode or less secure modes should not be used.
  • A key size of less than 128 bits (symmetric encryption) or 1024 bits (asymmetric encryption) should not be used.
  • The hash function should not be used without some type of salt.
  • Algorithms that are considered obsolete for cryptography and cryptographic hashing should not be used. Examples: MD5, SHA1, DES/3DES, RC2, RC4, MD4.
  • A key size of less than 192 bits (symmetric encryption) or 2048 bits (asymmetric encryption) should not be used.
  • Cryptographic keys should not be distributed without the use of a public key infrastructure and, therefore, without the use of asymmetric encryption.
  • A key size of less than 256 bits (symmetric encryption) or 4096 bits (asymmetric encryption) should not be used (see the encryption sketch after this list).
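
A minimal Python sketch consistent with the rules above (a public, documented algorithm, an authenticated mode instead of ECB, and a 256-bit key), using the third-party cryptography package. Key management and distribution are out of scope here and remain governed by the rules in this section.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # must be unique per encryption
    plaintext = b"sensitive personal data"
    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data=None)

    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext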

8.2 Passwords

  • Password size: Passwords with fewer than 6 characters should not be used.
  • Variation of symbols: At least upper- and lower-case letters must be used, together with at least one other character type (digit or symbol).
  • Randomness: Passwords should not be created without the aid of random password generator software configured to meet the parameters established in this section (see the sketch after this list).
  • Tests: Passwords must be validated by password strength checking software before use.
  • Change frequency: The same password should not be used for more than 6 (six) months.
  • Password change and recovery: Recovery should not use the same channel used for password validation, and under no circumstances should the old password be sent to users.
  • Storage (user): Passwords must not be stored unless encrypted according to the encryption standards set out in this document.
  • Number of attempts: The password validation rate must not exceed 5 attempts per minute. An account must be blocked after 5 consecutive validation failures, and its recovery must follow a specific process.
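
A sketch of a random password generator consistent with the parameters in this section, using Python's secrets module. The length of 12 characters exceeds the stated minimum and is an assumption, as is the allowed character set.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + "@#$%"

    def generate_password(length: int = 12) -> str:
        # Retry until the candidate contains upper case, lower case, a digit,
        # and at least one of the allowed special characters.
        while True:
            candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in "@#$%" for c in candidate)):
                return candidate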

9 – Software Life Cycle

9.1 Design

  • The software design model should include the following:
  • Threat modeling stage;
  • Clear definition of security risks;
  • Severity level that the compromise of Sensitive Personal Data would bring to the system and institution.
  • The definition of responsibilities for system data security, and of how this responsibility will be verified, must not be omitted during system design and execution.
  • A design schedule that includes security checkpoints during the construction of the system must be used.

9.2 Coding

Protective measures applied in the source code must be documented, including as comments in the application code, in order to indicate precisely the procedure used and its peculiarities.

9.3 Maintenance

  • Automatic updates of software or components used in the construction of a system should not be enabled; otherwise security breaches may inadvertently be introduced.
  • Third-party software should not be modified except when strictly necessary, as internal security controls may be invalidated. Whenever possible, such changes should be made by the original system developer.

9.4 Personnel

Training and qualification should be provided so that programmers acquire and periodically review computer security principles and secure software development practices.

10 – Framework for the Development of Software and Systems

10.1 Prepare the Organization

1 Define Security Requirements for Software Development

  • Identify and document all security requirements for the organization’s software development infrastructures and processes, and maintain the requirements over time.
  • Identify and document all security requirements for organization-developed software to meet, and maintain the requirements over time.
  • Communicate requirements to all third parties who will provide commercial software components to the organization for reuse by the organization’s own software.

2 Implement Roles and Responsibilities

  • Create new roles and alter responsibilities for existing roles as needed to encompass all parts of the SDLC. Periodically review and maintain the defined roles and responsibilities, updating them as needed.
  • Provide role-based training for all personnel with responsibilities that contribute to secure development. Periodically review personnel proficiency and role-based training, and update the training as needed.
  • Obtain upper management or authorizing official commitment to secure development, and convey that commitment to all with development-related roles and responsibilities.

3 Implement Supporting Toolchains

  • Specify which tools or tool types must or should be included in each tool chain to mitigate identified risks, as well as how the tool chain components are to be integrated with each other.
  • Follow recommended security practices to deploy, operate, and maintain tools and toolchains.
  • Configure tools to generate artifacts of their support of secure software development practices as defined by the organization.

4 Define and Use Criteria for Software Security Checks

  • Define criteria for software security checks and track throughout the SDLC.
  • Implement processes, mechanisms, etc. to gather and safeguard the necessary information in support of the criteria.

5 Implement and Maintain Secure Environments for Software Development

  • Separate and protect each environment involved in software development.
  • Secure and harden development endpoints (i.e., endpoints for software designers, developers, testers, builders, etc.) to perform development-related tasks using a risk-based approach.

10.2 Protect Software

1 Protect All Forms of Code from Unauthorized Access and Tampering

  • Store all forms of code – including source code, executable code, and configuration-as-code – based on the principle of least privilege so that only authorized personnel, tools, services, etc. have access.

2 Provide a Mechanism for Verifying Software Release Integrity

  • Make software integrity verification information available to software acquirers.
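
One simple form of integrity verification information, sketched here for illustration, is a SHA-256 checksum computed for each release artifact and published alongside the release (the file name below is a placeholder); in practice, a digital signature over the checksum file would typically accompany it.

    import hashlib

    def sha256_of(path: str) -> str:
        # Stream the release artifact to avoid loading it fully into memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Example: publish this value so acquirers can verify the download.
    print(sha256_of("release-1.4.2.tar.gz"))  # hypothetical artifact name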

3 Archive and Protect Each Software Release

  • Securely archive the necessary files and supporting data (e.g., integrity verification information, provenance data) to be retained for each software release.
  • Collect, safeguard, maintain, and share provenance data for all components of each software release (e.g., in a software bill of materials [SBOM]).

10.3 Produce Well-Secured Software

1. Design Software to Meet Security Requirements and Mitigate Security Risks

  • Use forms of risk modeling – such as threat modeling, attack modeling, or attack surface mapping – to help assess the security risk for the software.
  • Track and maintain the software’s security requirements, risks, and design decisions.
  • Where appropriate, build in support for using standardized security features and services (e.g., enabling software to integrate with existing log management, identity management, access control, and vulnerability management systems) instead of creating proprietary implementations of security features and services.

2. Review the Software Design to Verify Compliance with Security Requirements and Risk Information

  • Have 1) a qualified person (or people) who were not involved with the design and/or 2) automated processes instantiated in the tool chain review the software design to confirm and enforce that it meets all of the security requirements and satisfactorily addresses the identified risk information.

3. Reuse Existing, Well-Secured Software When Feasible Instead of Duplicating Functionality

  • Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from commercial, open-source, and other third-party developers for use by the organization’s software.
  • Create and maintain well-secured software components in-house following SDLC processes to meet common internal software development needs that cannot be better met by third-party software components.
  • Verify that acquired commercial, open-source, and all other third-party software components comply with the requirements, as defined by the organization, throughout their life cycles.

4. Create Source Code by Adhering to Secure Coding Practices

  • Follow all secure coding practices that are appropriate to the development languages and environment to meet the organization’s requirements.

5. Configure the Compilation, Interpreter, and Build Processes to Improve Executable Security

  • Use compiler, interpreter, and build tools that offer features to improve executable security.
  • Determine which compiler, interpreter, and build tool features should be used and how each should be configured, then implement and use the approved configurations.

6. Review and/or Analyze Human-Readable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements

  • Determine whether code review (a person looks directly at the code to find issues) and/or code analysis (tools are used to find issues in code, either in a fully automated way or in conjunction with a person) should be used, as defined by the organization.
  • Perform the code review and/or code analysis based on the organization’s secure coding policy, and record and triage all discovered issues and recommended remediations in the development team’s workflow or issue tracking system.

7. Test Executable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements

  • Determine whether executable code testing should be performed to find vulnerabilities not identified by previous reviews, analysis, or testing and, if so, which types of testing should be used.
  • Scope the testing, design the tests, perform the testing, and document the results, including recording and triaging all discovered issues and recommended remediation in the development team’s workflow or issue tracking system.

8. Configure Software to Have Secure Settings by Default

  • Define a secure baseline by determining how to configure each setting that has an effect on security or a security-related setting, so that the default settings are secure and do not weaken the security functions provided by the platform, network infrastructure, or services.
  • Implement the default settings (or groups of default settings, if applicable), and document each setting for software administrators.

10.4 Respond to Vulnerabilities

1. Identify and Confirm Vulnerabilities on an Ongoing Basis

  • Gather information from software acquirers, users, and public sources on potential vulnerabilities in the software and third-party components that the software uses, and investigate all credible reports.
  • Review, analyze, and/or test the software’s code to identify or confirm the presence of previously undetected vulnerabilities.
  • Have a policy that addresses vulnerability disclosure and remediation, and implement the roles, responsibilities, and processes needed to support that policy.

2. Assess, Prioritize, and Remediate Vulnerabilities

  • Analyze each vulnerability to gather sufficient information about risk to plan its remediation or other risk response.
  • Plan and implement risk responses for vulnerabilities.

3. Analyze Vulnerabilities to Identify Their Root Causes

  • Analyze identified vulnerabilities to determine their root causes.
  • Analyze the root causes over time to identify patterns, such as a particular secure coding practice not being followed consistently.
  • Review the software for similar vulnerabilities to eradicate a class of vulnerabilities, and proactively fix them rather than waiting for external reports.
  • Review the SDLC process, and update it if appropriate to prevent (or reduce the likelihood of) the root cause recurring in updates to the software or in new software that is created.
