ISO 27001:2013 A.10 Cryptography

Cryptography is a science that applies complex mathematics and logic to design strong encryption methods. Achieving strong encryption, the hiding of data’s meaning, also requires intuitive leaps that allow the creative application of known or new methods. So cryptography is also an art. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. It is the practice of hiding information so that unauthorized persons can’t read it. The literal meaning of cryptography is “hidden writing”: how to make what you write obscure, unintelligible to everyone except those you want to communicate with.

Cryptography can be used to achieve several goals for information security, including confidentiality, integrity, and authentication.

•  Confidentiality: First, cryptography protects the confidentiality (or secrecy) of information. Even when the transmission or storage medium has been compromised, the encrypted information is practically useless to unauthorized persons without the proper keys for decryption.
•  Integrity: Cryptography can also be used to ensure integrity (or accuracy) of information through the use of hashing algorithms and message digests.
• Authentication: Finally, cryptography can be used for authentication (and non-repudiation) services through digital signatures, digital certificates, or a Public Key Infrastructure (PKI).

Cryptography was already used in ancient times, essentially in three kinds of contexts: private communications, art and religion, and military and diplomatic use. The conceptual foundations of cryptography were laid around 3,000 years ago in India and China. Since the early days of writing, heads of state and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of written correspondence and to have some means of detecting tampering. The ancient cipher of the Greek historian Polybius used a table, with rows and columns, to associate each letter with a pair of numbers. Famous is the Caesar cipher, based on a shift of three positions, that is, mathematically (considering the 26-letter English alphabet): y = (x + 3) mod 26. Julius Caesar is credited with the invention of the Caesar cipher ca. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands.
World War II brought about much advancement in information security and marked the beginning of the professional field of information security. The Enigma machine was a field unit used in WWII by German field agents to encrypt and decrypt messages and communications, and it is one of the first mechanized methods of encrypting text using an iterative cipher. The Enigma machine was used by all branches of the German military as their main device for secure wireless communications until the end of World War II. Several types of the Enigma machine were developed before and during World War II, each more complex and harder to break than its predecessors. The most complex Enigma type was used by the German Navy. In addition to the complexity of the Enigma machine itself, its operating procedures became increasingly complex, as the German military wanted to make Enigma communications harder to break. The end of the 20th century and early years of the 21st century saw rapid advancements in telecommunications, computing hardware and software, and data encryption.

Ciphers

Cryptography is built on one overarching premise: the need for a cipher that can reliably and portably be used to encrypt text so that, through any means of cryptanalysis — differential, deductive, algebraic — the ciphertext cannot be recovered with any available technology. Most modern ciphers can be categorized in several ways:

•  By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
• By whether the same key is used for both encryption and decryption (symmetric key algorithms), or a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is asymmetric, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property, and one of the keys may be made public without loss of confidentiality.
• The Substitution Cipher
In this method, each letter of the message is replaced with a different character. Because some letters and certain words appear more often than others, some of these ciphers are extremely easy to decrypt, and some can be deciphered at a glance by more practiced cryptologists.
• The Shift Cipher
Also known as the Caesar cipher, the shift cipher is one that anyone can readily understand and remember for decoding. It is a form of the substitution cipher. By shifting the alphabet a few positions in either direction, a simple sentence can become unreadable to casual inspection.
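The shift described above maps directly to a few lines of code. A minimal sketch of the Caesar cipher in Python (the `caesar` helper name is ours, chosen for illustration):

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # ATTACK AT DAWN
```

Decryption is simply encryption with the opposite shift, which is why the cipher offers no real security: only 25 keys exist, and all can be tried by hand.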
• The Polyalphabetic Cipher
To make ciphers more difficult to crack, Blaise de Vigenère from the 16th-century court of Henry III of France proposed a polyalphabetic substitution. In this cipher, instead of a one-to-one relationship, there is a one-to-many. A single letter can have multiple substitutes. The Vigenère solution was the first known cipher to use a keyword.
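The one-to-many substitution can also be sketched in code. Here a repeating keyword drives a different shift for each letter, so the same plaintext letter encrypts differently at different positions (a toy illustration, not a secure cipher):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Vigenere cipher: each key letter selects the shift for one plaintext letter."""
    sign = -1 if decrypt else 1
    out, i = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[i % len(key)].upper()) - ord('A')
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + sign * shift) % 26 + base))
            i += 1  # advance the key only on letters
        else:
            out.append(ch)
    return "".join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```

Note how the three A’s in the plaintext become L, O, and E: the one-to-many property that defeats simple frequency analysis.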
• The Kasiski/Kerckhoff Method
In the 19th century, Auguste Kerckhoffs stated that, essentially, a system should be considered secure even when everyone knows everything about the system except the key.

Modern Cryptography

In the mid-1970s the U.S. government issued a public specification, through its National Bureau of Standards (NBS), called the Data Encryption Standard or, most commonly, DES. The Data Encryption Standard (DES) is a block cipher that uses shared secret encryption. It was selected by the National Bureau of Standards as an official Federal Information Processing Standard (FIPS) for the United States in 1976 and has subsequently enjoyed widespread use internationally. It is based on a symmetric-key algorithm that uses a 56-bit key. DES remained the encryption standard of choice until the late 1990s, when the Deep Crack machine broke a DES key in 22 hours and 15 minutes. Later that year a new form of DES, called Triple DES, which encrypts the plaintext in three iterations, was published. It remained in effect until 2002, when it was superseded by AES. The same era also saw the creation of Ron Rivest, Adi Shamir, and Leonard Adleman’s encryption algorithm (RSA). Rivest, Shamir, and Adleman publicly described the algorithm in 1977.
RSA is the first encryption standard to introduce (to public knowledge) the new concept of digital signing. In cryptography, RSA (which stands for Rivest, Shamir, and Adleman who first publicly described it) is an algorithm for public-key cryptography. It is the first algorithm known to be suitable for signing as well as encryption and was one of the first great advances in public-key cryptography. RSA is widely used in electronic commerce protocols and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.
AES represents one of the latest chapters in the history of cryptography. It is currently one of the most popular encryption standards. In cryptography, the Advanced Encryption Standard (AES) is a symmetric-key encryption standard adopted by the U.S. government. AES was announced by the National Institute of Standards and Technology (NIST) on November 26, 2001, after a 5-year standardization process in which fifteen competing designs were presented and evaluated before Rijndael was selected as the most suitable (see the Advanced Encryption Standard process for more details). It became effective as a Federal government standard on May 26, 2002, after approval by the Secretary of Commerce. It is available in many different encryption packages. AES is the first publicly accessible and open cipher approved by the NSA for top-secret information.

The core principles of information security are confidentiality, integrity, authentication, non-repudiation, and availability.

1. Authentication
Authentication is any process by which you verify that someone is who he claims to be. This usually involves a username and a password but can include any other method of demonstrating identity, such as a smart card, retina scan, voice recognition, or fingerprints.
2. Confidentiality
Confidentiality means that only people with the right permission can access and use information. It also means protecting it from unauthorized access at all stages of its life cycle. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds. Encryption is one way to make sure that information remains confidential while it’s stored and transmitted. Encryption converts information into code that makes it unreadable. Only people authorized to view the information can decode and use it.
3. Integrity
Integrity means that information systems and their data are accurate. Integrity ensures that changes can’t be made to data without appropriate permission. If a system has integrity, it means that the data in the system is moved and processed in predictable ways. The data doesn’t change when it’s processed.
4. Non-repudiation
Nonrepudiation means to ensure that a transferred message has been sent and received by the parties claiming to have sent and received the message. Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.
5. Availability
Availability is the security goal of making sure information systems are reliable. It makes sure data is accessible. It also helps to ensure that individuals with proper permission can use systems and retrieve data in a dependable and timely manner.
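The integrity and authentication goals above can be illustrated with an HMAC, which combines a hash function with a shared secret key so that only a key holder can produce a valid tag. A minimal sketch using Python’s standard library (the key and message are invented for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"                    # known only to sender and receiver
message = b"transfer $100 to account 42"

# Sender computes a keyed digest and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the digest; any change to the message changes the tag.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())

tampered = b"transfer $900 to account 42"
assert tag != hmac.new(key, tampered, hashlib.sha256).hexdigest()
```

Because the tag depends on the secret key, it provides both integrity (tampering is detected) and authentication (only a key holder could have produced it); it does not provide non-repudiation, since both parties share the same key.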

PUBLIC KEY ENCRYPTION

Public key encryption refers to a type of cipher architecture known as public-key cryptography that utilizes two keys, or a key pair, to encrypt and decrypt data. One of the two keys is a public key, which anyone can use to encrypt a message for the owner of that key. The encrypted message is sent and the recipient uses the private key to decrypt it. Public key cryptography was invented in 1976 by Whitfield Diffie and Martin Hellman. For this reason, it is sometimes called Diffie-Hellman encryption. It is also called asymmetric encryption because it uses two keys instead of one key (symmetric encryption). The latest research focus is on a cryptographic primitive named signcryption, which combines the digital signature and public key encryption in a single logical step. Its most important advantage is cost: the combined operation costs less than the sum of a separate digital signature and a separate encryption. This scheme was invented by Yuliang Zheng. Public key cryptography is used to solve various problems that symmetric key algorithms cannot. In particular, it can be used to provide privacy and non-repudiation. Privacy is usually provided through key distribution and a symmetric key cipher. This is known as hybrid encryption. Non-repudiation is usually provided through digital signatures and a hash function.
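The Diffie-Hellman key agreement mentioned above can be sketched with deliberately tiny parameters (p = 23, g = 5 are textbook toy values; real deployments use groups of 2048 bits or more). Each party keeps a private exponent and publishes only g raised to that exponent:

```python
import secrets

p, g = 23, 5  # toy public parameters, insecure at this size

a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key

A = pow(g, a, p)  # Alice's public value, sent in the clear
B = pow(g, b, p)  # Bob's public value, sent in the clear

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at the same secret
```

The shared secret is then typically fed into a symmetric cipher, which is exactly the hybrid-encryption pattern described above.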

The Rivest–Shamir–Adleman (RSA) cryptosystem is a public key system. Based on an underlying hard problem and named after its three inventors, this algorithm was introduced in 1978 and to date remains secure. RSA has been the subject of extensive cryptanalysis, and no serious flaws have yet been found. The encryption algorithm is based on the underlying problem of factoring large numbers. So far, nobody has found a shortcut or easy way to factor large numbers. In a highly technical but excellent paper, Boneh reviews all the known cryptanalytic attacks on RSA and concludes that none is significant. Because the factorization problem has been open for many years, most cryptographers consider this problem a solid basis for a secure cryptosystem.
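The factoring-based scheme can be illustrated with the classic textbook numbers p = 61, q = 53. Anyone who could factor n back into p and q would recover the private key, which is why real keys use primes hundreds of digits long:

```python
# Textbook RSA with tiny primes -- illustrative only, never secure at this size.
p, q = 61, 53
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi) -> 2753

m = 65                       # message encoded as a number < n
c = pow(m, e, n)             # encrypt with the public key: c = m^e mod n -> 2790
assert pow(c, d, n) == m     # decrypt with the private key: m = c^d mod n
```

Publishing (n, e) while keeping d secret is what gives RSA the public/private key property described earlier.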

B. Cryptographic Hash Functions
The most widely used cryptographic hash functions are MD4, MD5 (where MD stands for Message Digest), and SHA/SHS (Secure Hash Algorithm or Standard). The MD4/5 algorithms were invented by Ron Rivest and RSA Laboratories. MD5 is an improved version of MD4. Both condense a message of any size to a 128-bit digest. SHA/SHS is similar to both MD4 and MD5; it produces a 160-bit digest. Wang et al. [WAN05] have announced cryptanalysis attacks on SHA, MD4, and MD5. For SHA, the attack is able to find two plaintexts that produce the same hash digest in approximately 2^63 steps, far short of the 2^80 steps that would be expected of a 160-bit hash function, and very feasible for a moderately well-financed attacker.
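Python’s standard `hashlib` module exposes these digest algorithms directly, and the digest lengths match the sizes quoted above (MD5: 128 bits = 32 hex characters; SHA-1: 160 bits = 40 hex characters). Given the attacks described, MD5 and SHA-1 appear here for illustration only; SHA-256 is the modern choice:

```python
import hashlib

msg = b"attack at dawn"

print(hashlib.md5(msg).hexdigest())     # 128-bit digest, 32 hex chars
print(hashlib.sha1(msg).hexdigest())    # 160-bit digest, 40 hex chars
print(hashlib.sha256(msg).hexdigest())  # 256-bit digest, 64 hex chars

# Any change to the input, however small, produces an unrelated digest.
assert hashlib.sha256(msg).hexdigest() != hashlib.sha256(b"attack at dusk").hexdigest()
```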
C. Digital Signature
A digital signature is a protocol that produces the same effect as a real signature: It is a mark that only the sender can make, but other people can easily recognize as belonging to the sender. Just like a real signature, a digital signature is used to confirm agreement to a message. A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature assures the receiver that the message was created by a known sender and that it was not altered in transit. Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery and tampering.
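A hash-then-sign scheme can be sketched by combining the two previous ideas: hash the message, then apply an RSA private key to the digest. The primes here are tiny, chosen purely for illustration; real signature keys are 2048 bits or more:

```python
import hashlib

# Toy RSA key pair -- insecure at this size, for illustration only.
p, q = 10007, 10009
n, phi = p * q, (p - 1) * (q - 1)
e = 65537                # public exponent
d = pow(e, -1, phi)      # private exponent

def sign(message: bytes) -> int:
    # Reduce the digest mod n so it fits the toy modulus.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)  # only the private-key holder can compute this

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # anyone with the public key can check

sig = sign(b"I agree to these terms")
assert verify(b"I agree to these terms", sig)      # authentic and unaltered
assert not verify(b"I agree to other terms", sig)  # altered message fails
```

Because only the signer holds d, a valid signature also supports non-repudiation: the signer cannot plausibly deny having produced it.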
D. Certificates
A public-key certificate (also known as a digital certificate or identity certificate) is an electronic document that uses a digital signature to bind together a public key with identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual. The most common use of certificates is for HTTPS-based websites. A web browser validates that an SSL/TLS (Transport Layer Security) web server is authentic, so that the user can feel secure that their interaction with the website has no eavesdroppers and that the website is who it claims to be. This security is important for electronic commerce. In practice, a website operator obtains a certificate by applying to a certificate provider with a certificate signing request. Contents of a typical digital certificate:

• Serial Number – used to uniquely identify the certificate.
• Subject – the person or entity identified.
• Signature Algorithm – the algorithm used to create the signature.
• Issuer – The entity that verified the information and issued the certificate.
• Valid-From – The date the certificate is first valid from.
• Valid-To – The expiration date.
• Key-Usage – Purpose of the public key (e.g. encipherment, signature, certificate signing…).
• Public Key – the public key being certified. When used with HTTPS, this key is what lets a browser both encrypt the traffic and confirm that someone has been willing to invest time and money into proving the authenticity and ownership of their domain.
• Thumbprint Algorithm – The algorithm used to hash the certificate.
• Thumbprint – The hash itself to ensure that the certificate has not been tampered with.

INFORMATION SECURITY CONCERNS

• Shoulder Surfing
Shoulder surfing occurs when an attacker looks over the shoulder of another person at a computer to discover sensitive information.
• Social Engineering
Social engineering describes an attack that relies heavily on human relations. It’s not a technical attack. This type of attack involves tricking other people to break normal security procedures to gain sensitive information. These attackers take advantage of human nature.
• Phishing and Targeted Phishing Scams
Phishing is a form of Internet fraud where attackers attempt to steal valuable information from their victims. Phishing attacks take place in electronic communications. These attacks can take place via e-mail or instant messages. Phishing attacks also can take place in Internet chat rooms.
• Malware
Malware is a general term that refers to any type of software that performs some sort of harmful, unauthorized, or unknown activity. Malware includes computer viruses, worms, and Trojan horses. The term malware is a combination of the words malicious and software.
• Logic Bombs
A logic bomb is a harmful code intentionally left on a computer system. It lies dormant for a certain period. When specific conditions are met, it “explodes” and carries out its malicious function. Conditions that cause the logic bomb to explode vary. Programmers can create logic bombs that explode on a certain day or when a specific event occurs.
• Backdoors
A backdoor also called a “trap door,” is a way to access a computer program or system that bypasses normal mechanisms. Programmers sometimes install a backdoor to access a program quickly during development to troubleshoot problems.

INFORMATION SECURITY IN CLOUD COMPUTING

Design Principles for Information Security in Cloud Computing

• Least privilege
The principle of least privilege maintains that an individual, process, or other types of entity should be given the minimum privileges and resources for the minimum period of time required to complete a task. This approach reduces the opportunity for unauthorized access to sensitive information. Only the minimum necessary rights should be assigned to a subject that requests access to a resource and should be in effect for the shortest duration necessary. Granting permissions to a user beyond the scope of the necessary rights of action can allow that user to obtain or change the information in unwanted ways. Therefore, a careful delegation of access rights can limit attackers from damaging a system. This principle limits the damage that can result from an accident or error. It also reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to misuse of a privilege, the number of programs that must be audited is minimized.
• Separation of duties
Separation of duties requires that completion of a specified sensitive activity or access to sensitive objects is dependent on the satisfaction of a plurality of conditions. Separation of duties (SoD) is the concept of having more than one person required to complete a task. It is alternatively called segregation of duties or, in the political realm, separation of powers.
• Defence in depth
Defence in depth is the application of multiple layers of protection wherein a subsequent layer will provide protection if a previous layer is breached. Defence in depth is an information assurance (IA) strategy in which multiple layers of defence are placed throughout an information technology (IT) system. It addresses security vulnerabilities in personnel, technology, and operations for the duration of the system’s lifecycle. Defence in depth is originally a military strategy that seeks to delay, rather than prevent, the advance of an attacker by yielding space in order to buy time. The placement of protection mechanisms, procedures and policies is intended to increase the dependability of an IT system, where multiple layers of defence prevent espionage and direct attacks against critical systems. In terms of computer network defence, defence-in-depth measures should not only prevent security breaches but also buy an organization time to detect and respond to an attack, thereby reducing and mitigating the consequences of a breach.
• Fail-safe
Fail-safe means that if a cloud system fails it should fail to a state in which the security of the system and its data are not compromised. One implementation of this philosophy would be to make a system default to a state in which a user or process is denied access to the system. A fail-safe or fail-secure device is one that, in the event of failure, responds in a way that will cause no harm, or at least a minimum of harm, to other devices or danger to personnel.
• Economy of mechanism
The economy of mechanism promotes simple and comprehensible design and implementation of protection mechanisms so that unintended access paths do not exist or can be readily identified and eliminated. One factor in evaluating a system’s security is its complexity. If the design, implementation, or security mechanisms are highly complex, then the likelihood of security vulnerabilities increases. Subtle problems in complex systems may be difficult to find, especially in copious amounts of code. For instance, analyzing the source code that is responsible for the normal execution of functionality can be a difficult task, but checking for alternate behaviours in the remaining code that can achieve the same functionality can be even more difficult. One strategy for simplifying code is the use of choke points, where shared functionality reduces the amount of source code required for operation. Simplifying design or code is not always easy, but developers should strive for implementing simpler systems when possible.
• Complete mediation
Under complete mediation, every request by a subject to access an object in a computer system must undergo a valid and effective authorization procedure. A software system that requires access checks to an object each time a subject requests access, especially for security-critical objects, decreases the chances of mistakenly giving elevated permissions to that subject. A system that checks the subject’s permissions to an object only once can invite attackers to exploit that system. If the access control rights of a subject are decreased after the first time the rights are granted and the system does not check the next access to that object, then a permissions violation can occur.
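As a hypothetical sketch of this principle, the permission table below is consulted on every access rather than only the first, so a revocation takes effect immediately (all names and data here are invented for illustration):

```python
# Assumed in-memory permission table: user -> set of readable resources.
PERMISSIONS = {"alice": {"report.txt"}}

def read_file(user: str, name: str) -> str:
    # Complete mediation: the check runs on *every* call, never cached.
    if name not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not read {name}")
    return f"contents of {name}"

assert read_file("alice", "report.txt") == "contents of report.txt"

PERMISSIONS["alice"].discard("report.txt")  # revoke the right
try:
    read_file("alice", "report.txt")        # checked again -> now denied
except PermissionError:
    pass
```

A design that cached the first "allow" decision would keep honoring it after revocation, which is precisely the permissions violation the principle warns about.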
• Open design
An open-access cloud system design that has been evaluated and tested by a myriad of experts provides a more secure authentication method than one that has not been widely assessed. The security of such mechanisms depends on protecting passwords or keys. The principles of open design are derived from the Free Software and Open Source movements.
• Least common mechanism
This principle states that a minimum number of protection mechanisms should be common to multiple users, as shared access paths can be sources of unauthorized information exchange. Avoid having multiple subjects sharing mechanisms to grant access to a resource. For example, serving an application on the Internet allows both attackers and users to gain access to the application. Sensitive information can potentially be shared between the subjects via the mechanism. A different mechanism (or instantiation of a mechanism) for each subject or class of subjects can provide the flexibility of access control among various users and prevent potential security violations that would otherwise occur if only one mechanism was implemented.
• Psychological acceptability
Psychological acceptability refers to the ease of use and intuitiveness of the user interface that controls and interacts with the cloud access control mechanisms. Accessibility to resources should not be inhibited by security mechanisms. If security mechanisms hinder the usability or accessibility of resources, then users may opt to turn off those mechanisms. Where possible, security mechanisms should be transparent to the users of the system or at most introduce minimal obstruction. Security mechanisms should be user-friendly to facilitate their use and understanding of a software application.
The security of a cloud system is only as good as its weakest component. Thus, it is important to identify the weakest mechanisms in the security chain and layers of defence and improve them so that risks to the system are mitigated to an acceptable level.

Objective:

To ensure proper and effective use of cryptography to protect the confidentiality, authenticity and/or integrity of information.

Control:

A policy on the use of cryptographic controls for the protection of information should be developed and implemented.

Implementation guidance

When developing a cryptographic policy, the following should be considered:

1. the management approach towards the use of cryptographic controls across the organization, including the general principles under which business information should be protected;
2. based on a risk assessment, the required level of protection should be identified, taking into account the type, strength and quality of the encryption algorithm required;
3. the use of encryption for the protection of information transported by mobile or removable media devices or across communication lines;
4. the approach to key management, including methods to deal with the protection of cryptographic keys and the recovery of encrypted information in the case of lost, compromised or damaged keys;
5. roles and responsibilities, e.g. who is responsible for:
1. the implementation of the policy
2. the key management, including key generation
6. the standards to be adopted for effective implementation throughout the organization (which solution is used for which business processes);
7. the impact of using encrypted information on controls that rely upon content inspection (e.g. malware detection).

When implementing the organization’s cryptographic policy, consideration should be given to the regulations and national restrictions that might apply to the use of cryptographic techniques in different parts of the world and to the issues of the trans-border flow of encrypted information. Cryptographic controls can be used to achieve different information security objectives, e.g.:
a) confidentiality: using encryption of information to protect sensitive or critical information, either stored or transmitted;
b) integrity/authenticity: using digital signatures or message authentication codes to verify the authenticity or integrity of stored or transmitted sensitive or critical information;
c) non-repudiation: using cryptographic techniques to provide evidence of the occurrence or non-occurrence of an event or action;
d) authentication: using cryptographic techniques to authenticate users and other system entities requesting access to or transacting with system users, entities and resources.
Making a decision as to whether a cryptographic solution is appropriate should be seen as part of the wider process of risk assessment and selection of controls. This assessment can then be used to determine whether a cryptographic control is appropriate, what type of control should be applied and for what purpose and business processes.
A policy on the use of cryptographic controls is necessary to maximize the benefits and minimize the risks of using cryptographic techniques and to avoid inappropriate or incorrect use. Specialist advice should be sought in selecting appropriate cryptographic controls to meet the information security policy objectives.

In order to implement encryption effectively throughout the organization, start by developing a strategy that incorporates risk management, compliance requirements, data protection, policies, and standards.

1. Develop Requirement
1. Asset Management: discusses the need to identify and categorize/classify all your information assets. Understanding where confidential information resides (e.g., SSNs, PII) is a critical component in establishing an encryption strategy.
2. Access Control: addresses the need to ensure authorized access to information resources. Confidential information needs to be protected throughout its lifecycle (access, process, transmit, store).
3. Compliance: provides information in relation to various legal and information security requirements that stipulate the need to protect specific types of information. These types of requirements (e.g., PCI DSS, HIPAA) discuss the need to encrypt specific types of data (cardholder data, electronic protected health information).
4. Risk Management: emphasizes the importance of analyzing risks to information. Risk treatment activities may include deploying encryption solutions to protect confidential information.
5. Information Security Policies: stresses that policies provide the direction organizational leadership wants to take in regards to information security goals and objectives. In order to develop an organizational strategy for encryption that will be widely supported and adopted, it’s necessary to gain the support of organizational leadership.
2. Seek to protect data at rest and in motion using Full Disk Encryption (FDE) solutions and transport layer encryption protocols.
3. Ensure that your encryption keys are sufficiently strong and well protected using professional and open-source vetted encryption products.
4. Use encryption algorithms that are up-to-date and strong. AES 256-bit encryption is the gold standard for FDE. TLS 1.2 is the current gold standard for transport layer security.
5. Provide a means for organizational staff to process confidential data while it is encrypted. Ensure secure data transfer environments in internal and external communication channels.
6. Protect encryption keys by using long, complex passwords with proper access rights to the keys. Maintain audit logs of access to encryption keys.
7. Develop a key management process that automates the process of verifying identity and access rights. Active Directory ensures that only active organizational users can access and authenticate secure resources.
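The transport-layer recommendation in the steps above can be enforced in configuration as well as policy. For example, Python’s standard `ssl` module lets a client context refuse anything older than TLS 1.2 (a minimal sketch):

```python
import ssl

# Build a client context with certificate verification enabled by default,
# then refuse any protocol version older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Connections made through this context will now fail the handshake
# if the server offers only TLS 1.1 or older.
```

Equivalent minimum-version settings exist in most TLS stacks and web servers; applying them centrally is simpler than auditing every client.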

Note: Encryption is often a computationally intensive process and may degrade the performance of IT applications or infrastructure if not implemented in an optimal way. Be sure to calculate the performance requirements of enterprise services and end-users before implementing encryption methods. Develop an implementation strategy, gather requirements, complete test plans, deploy products according to the best practices above, and effectively manage ongoing encryption solutions.

When considering cryptographic controls it is often helpful to first consider your organization’s data. This data exists in one of three states: at rest, in transit, or undergoing processing. Data are particularly vulnerable to unauthorized access when in transit or at rest. Portable computers (storing data at rest) are a common target for physical theft, while attackers may intercept data in transit over a network through man-in-the-middle attacks or packet capturing and analysis. Unauthorized access may also occur while data is being processed, but here security systems may rely on the processing application to control and report on such access attempts. When used appropriately, encryption is a powerful tool to prevent unauthorized access to data.

The Data States and Encryption Methods

It is important for an organization to categorize information and conduct risk assessments to understand which data requires the most protection. Not all files need to be encrypted. Specific types of data, such as those covered by HIPAA, FERPA, and PCI DSS, require higher degrees of security. Understanding which members of your organization use this sensitive data will maximize the efficiency and effectiveness of implementing an encryption policy. Some organizations require all mobile devices to use encryption, while other organizations require only select members to use encryption. Your organization must determine the scope and scale of the encryption policy to ensure meeting security requirements. Full disk encryption (FDE) mitigates the risk of data-at-rest exposure, but the security is effective only when the computer is off and encryption keys are secure. FDE may be most effective when used on laptops, which, when stolen or lost, are often powered off. The protection of data in motion is accomplished with multiple encryption methods. Virtual private networks (VPN) encrypt and tunnel traffic at the network level across the internet from site to site. Another method of protection relies upon transport layer security (TLS), which encrypts communications between internet applications and web-browser transmissions. Data protected with TLS is encrypted and sent across the unsecured internet. Unlike a VPN, which creates an encrypted tunnel to protect data, TLS encrypts the traffic itself and sends it directly across unprotected internet space.
Encryption is an important part of an organization’s security apparatus; however, it is not a panacea for all security issues. It is one piece of the proverbial puzzle of securing your data. Encryption is useful for both enterprise and personal devices. The biggest differences between enterprise and personal implementations are that enterprise solutions allow auditing and tracking of encrypted devices, remote wiping capabilities, and key management. Personal device encryption is usually just as strong as the enterprise level but depends on implementation practices.

Most enterprise solutions recommend pre-boot authentication before unlocking a device protected by full disk encryption. Pre-boot authentication means using a PIN, password, or security token to authorize the unlocking of the encrypted drive, which then loads the operating system. Encryption keys are not released to memory until pre-boot authentication completes. Personal devices may skip pre-boot authentication for ease of use, or the administrator may implement TPM key storage and deployment without it. The lack of pre-boot authentication leaves a device susceptible to side-channel attacks, in which an attacker focuses on defeating encryption by stealing encryption keys from memory or by other means rather than by breaking the algorithm used to generate the ciphertext.

Each organization must calculate the level of risk of losing data and then implement a solution based on this assessment. Some organizations will decide not to use pre-boot authentication on this basis. It is important to consider the impact of encryption solutions on the business or organization in order to gain proper acceptance of your encryption policy. Some organizations allow greater flexibility with user requests, while others mandate policies.

It is up to your organization to educate users on the risks of data loss and to find a balance between maximum ease of use and total security while meeting the mandated data protection requirements of your organization.

Control:

A policy on the use, protection and lifetime of cryptographic keys should be developed and implemented through their whole lifecycle.

Implementation guidance

The policy should include requirements for managing cryptographic keys through their whole lifecycle, including generating, storing, archiving, retrieving, distributing, retiring and destroying keys. Cryptographic algorithms, key lengths, and usage practices should be selected according to best practice. Appropriate key management requires secure processes for generating, storing, archiving, retrieving, distributing, retiring and destroying cryptographic keys.
All cryptographic keys should be protected against modification and loss. In addition, secret and private keys need protection against unauthorized use as well as disclosure. Equipment used to generate, store and archive keys should be physically protected. A key management system should be based on an agreed set of standards, procedures and secure methods for:

1. generating keys for different cryptographic systems and different applications;
2. issuing and obtaining public key certificates;
3. distributing keys to intended entities, including how keys should be activated when received;
4. changing or updating keys, including rules on when keys should be changed and how this will be done;
5. dealing with compromised keys;
6. revoking keys, including how keys should be withdrawn or deactivated, e.g. when keys have been compromised or when a user leaves an organization (in which case keys should also be archived);
7. recovering keys that are lost or corrupted;
8. backing up or archiving keys;
9. destroying keys;
10. logging and auditing of key management related activities.

In order to reduce the likelihood of improper use, activation and deactivation dates for keys should be defined so that the keys can only be used for the period of time defined in the associated key management policy. In addition to securely managing secret and private keys, the authenticity of public keys should also be considered. This authentication process can be done using public key certificates, which are normally issued by a certification authority; this should be a recognized organization with suitable controls and procedures in place to provide the required degree of trust. The contents of service level agreements or contracts with external suppliers of cryptographic services, e.g. with a certification authority, should cover issues of liability, the reliability of services and response times for the provision of services.
The management of cryptographic keys is essential to the effective use of cryptographic techniques. Cryptographic techniques can also be used to protect cryptographic keys. Procedures may need to be considered for handling legal requests for access to cryptographic keys, e.g. encrypted information can be required to be made available in an unencrypted form as evidence in a court case.
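As a sketch of how the lifecycle stages and activation/deactivation rules described above might be modeled, the following Python fragment tracks a key's state, usability window, and audit trail. All class, field, and function names here are invented for illustration; they are not drawn from any product or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum
import secrets

class KeyState(Enum):
    ACTIVE = "active"
    COMPROMISED = "compromised"
    DESTROYED = "destroyed"

@dataclass
class ManagedKey:
    """Hypothetical record for one key in a key management system."""
    key_id: str
    material: bytes
    activation: datetime
    deactivation: datetime
    state: KeyState = KeyState.ACTIVE
    audit_log: list = field(default_factory=list)

    def log(self, event):
        # Logging and auditing of key management related activities.
        self.audit_log.append((datetime.now(timezone.utc), event))

    def usable(self, at=None):
        # Keys may be used only within their defined activation window.
        at = at or datetime.now(timezone.utc)
        return self.state is KeyState.ACTIVE and self.activation <= at < self.deactivation

    def revoke(self, reason):
        # Dealing with compromised keys: withdraw them from use.
        self.state = KeyState.COMPROMISED
        self.log(f"revoked: {reason}")

    def destroy(self):
        # Destroying keys: wipe the material but keep the audit record.
        self.material = b""
        self.state = KeyState.DESTROYED
        self.log("destroyed")

def generate_key(key_id, lifetime_days=90):
    # Generating keys: draw material from a CSPRNG, set the usable window.
    now = datetime.now(timezone.utc)
    key = ManagedKey(key_id, secrets.token_bytes(32), now,
                     now + timedelta(days=lifetime_days))
    key.log("generated")
    return key
```

A real key management system would add certificate handling, escrow, and access control on top of a record like this; the point of the sketch is only that every lifecycle event leaves a state change and an audit entry.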

Key Management

The processes underlying all widely accepted ciphers are, and should be, known, allowing extensive testing by all interested parties, not just the originating cryptographer. We tend to test our expectations of how our software creations should work instead of looking for ways they deviate from expected behaviour. Our peers do not usually approach our work in that way. Consequently, allowing a large number of people to try to break an encryption algorithm is always a good idea. Secret, proprietary ciphers are suspect. A good encryption solution follows Auguste Kerckhoffs’ principle:

The security of the encryption scheme must depend only on the secrecy of the keys and not on the secrecy of the algorithm.

If a vendor, or one of your peers, informs you that he or she has come up with a proprietary, secret, unbreakable cipher, that person is either the foremost cryptographer of all time or deluded. In either case, only relentless pounding on the cipher by cryptanalysts can determine its actual strength. Now that we have established the key as the secret component of any well-tested cipher, how do we keep our keys safe from loss or theft? If we lose a key, the data it protects is effectively lost to us. If a key is stolen, the encrypted data is at higher risk of discovery. And how do we share encrypted information with other organizations or individuals if they do not have our key? AES is a symmetric cipher: it uses the same key for both encryption and decryption. So, if I want to send AES-encrypted information to a business partner, how do I safely send the key to the receiver?

Principles of Key Management

Managing keys requires three considerations:

1. Where will you store them?
2. How will you ensure they are protected but available when needed?
3. What key strength is adequate for the data protected?

Key Storage

Many organizations store key files on the same system, and often the same drive, as the encrypted database or files. While this might seem acceptable if the key is itself encrypted, it is bad security. What happens if the system fails and the key is not recoverable? Having usable backups helps, but backup restores do not always work as planned. Regardless of where you keep your key, encrypt it. Of course, now you have to decide where to store the encryption key for the encrypted encryption key. None of this confusion is necessary if you store all keys in a secure, central location. Further, do not rely solely on backups. Consider storing keys in escrow, allowing access by a limited number of employees (“key escrow”). Escrow storage can be a safe deposit box, a trusted third party, etc. Under no circumstances allow any one employee to privately encrypt your keys.

Key Protection

Keys protecting encrypted production data cannot be locked away and brought out only by trusted employees as needed. Rather, keep the keys available but safe. Key access security is, at its most basic level, a function of the strength of your authentication methods. Regardless of how well protected your keys are when not in use, authenticated users (including applications) must gain access. Ensure identity verification is strong and aggressively enforce separation of duties, least privilege, and need-to-know.

Key Strength

Most, if not all, attacks against your encryption will try to acquire one or more of your keys. Use of weak keys or untested/questionable ciphers might achieve compliance, but it provides your organization, its customers, and its investors with a false sense of security.

So what is considered a strong key for a cipher like AES? AES can use 128-, 192-, or 256-bit keys. 128-bit keys are strong enough for most business data, provided you make them as random as possible. Key strength is measured by key size and by an attacker’s ability to step through possible combinations until the right key is found. However you choose your keys, ensure you get as close as possible to a key selection process in which all bit combinations are equally likely to appear in the keyspace (the set of all possible keys).
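As an illustration of drawing keys uniformly from the keyspace, Python's standard `secrets` module reads from the operating system's cryptographically secure random generator. The sketch below generates 128- and 256-bit AES key material; the variable names are ours, not any library's.

```python
import secrets

# secrets draws from the OS CSPRNG, so every bit pattern in the
# keyspace is (practically) equally likely -- unlike the default
# random module, whose output is predictable and unsuitable for keys.
aes128_key = secrets.token_bytes(16)   # 16 bytes = 128-bit key
aes256_key = secrets.token_bytes(32)   # 32 bytes = 256-bit key

print(len(aes128_key) * 8)   # 128
print(len(aes256_key) * 8)   # 256
```

The same module's `secrets.token_hex()` is convenient when a key must be stored or transmitted as text.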

Key Sharing and Digital Signatures

It is obvious from the sections on keys and algorithms that secrecy of the key is critical to the success of any encryption solution. However, it is often necessary to share encrypted information with outside organizations or individuals. For them to decrypt the ciphertext, they need our key. Transferring a symmetric cipher key is problematic. We have to make sure all recipients have the key and properly secure it. Further, if the key is compromised in some way, it must be quickly retired from use by anyone who has it. Finally, the distribution of the key must be secure. Luckily, some very smart cryptographers came up with the answer.

Asymmetric Cryptography

In 1978, Ron Rivest, Adi Shamir, and Leonard Adleman (RSA) publicly described a method of using two keys to protect and share data; one key is public and the other private. The organization or person to whom the public key belongs distributes it freely. However, the private key is kept safe and is never shared. This enables a process known as asymmetric encryption and decryption. As shown in the figure below, the sender uses the recipient’s public key to convert plaintext to ciphertext. The ciphertext is sent, and the recipient uses her private key to recover the plaintext. Only the person with the private key corresponding to the public key can decrypt the message, document, etc. This works because the two keys, although separate, are mathematically entwined.

At a very high level, the RSA model uses prime numbers to create a public/private key set:

1. The creation begins by selecting two extremely large prime numbers. They should be chosen at random and of similar length.
2. The two prime numbers are multiplied together.
3. The product becomes the public key.
4. The two factors become the private key.

There is more to asymmetric key creation, but this is close enough for our purposes. When someone uses the public key, i.e. the product of the two primes, to encrypt a message, the recipient of the ciphertext must know the two prime numbers that created it. If the primes were small, a brute-force attack could find them. However, the use of extremely large primes and today’s computing power makes finding the private key through brute force impractical. Consequently, we can use asymmetric keys to share symmetric keys, encrypt email, and support various other processes where key sharing is necessary. The Diffie-Hellman key exchange method is similar in spirit to the RSA model and was made public first. It allows two parties who know nothing about each other to establish a shared key, and this is the basis of SSL and TLS security: an encrypted session key exchange occurs over an open connection. Once both parties to the session have the session key (also known as a shared secret), they establish a virtual and secure tunnel using symmetric encryption. So why not throw out symmetric encryption and use only asymmetric ciphers? First, symmetric ciphers are typically much stronger. Further, asymmetric encryption is far slower. So we have settled on symmetric ciphers for data center and other mass storage encryption, and asymmetric ciphers for just about everything else. And it works… for now.
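To make the four steps above concrete, here is a deliberately toy RSA sketch in Python (3.8+ for the modular inverse via `pow`). The primes are tiny textbook values chosen only so the arithmetic is visible; real keys use randomly chosen primes hundreds of digits long.

```python
# Toy RSA with deliberately tiny primes -- for illustration only.
p, q = 61, 53                 # step 1: two primes (tiny here, huge in practice)
n = p * q                     # step 2: their product, n = 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: inverse of e modulo phi

public_key = (n, e)           # step 3: distributed freely
private_key = (n, d)          # step 4: derived from the prime factors; kept secret

plaintext = 65                          # a message encoded as a number < n
ciphertext = pow(plaintext, e, n)       # encrypt with the public key
recovered = pow(ciphertext, d, n)       # decrypt with the private key
print(recovered == plaintext)           # True
```

Note that anyone who can factor n = 3233 back into 61 and 53 can recompute d; the entire security argument rests on factoring being infeasible for primes of realistic size.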

Digital Signatures

Although not really encryption, the use of asymmetric keys has another application: digital signatures. If Sam, for example, wants to enable verification that he actually sent a message, he can sign it. Refer to the figure below. The signature process uses Sam’s private key, since he is the only person who has it. The message text is processed through a hash function, and the resulting value is then encrypted with the private key to form the signature. A hash is a fixed-length value that represents the message content. If the content changes, the hash value changes. Further, an attacker cannot use the hash value to recover the plaintext.
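The two hash properties just described, fixed-length output and sensitivity to any change in the content, can be seen with Python's standard `hashlib`. SHA-256 is used here purely as an example; the message strings are made up.

```python
import hashlib

message = b"Transfer $100 to account 12345"
tampered = b"Transfer $900 to account 12345"

digest = hashlib.sha256(message).hexdigest()
bad_digest = hashlib.sha256(tampered).hexdigest()

# Fixed length: SHA-256 always yields 256 bits (64 hex characters),
# regardless of how large the input is.
print(len(digest))            # 64

# Changing even one character of the content produces a completely
# different digest, while re-hashing identical content reproduces it
# exactly -- which is what signature verification relies on.
print(digest == bad_digest)                            # False
print(digest == hashlib.sha256(message).hexdigest())   # True
```

It is this repeatability that lets a recipient recompute the hash and compare it with the one carried in the signature.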

When Alice receives Sam’s message, she can verify that the message came from Sam and is unchanged, provided she has Sam’s public key. With Sam’s public key she recovers the hash value from the signature, then hashes the message text herself. If the two hash values are the same, the signature is valid and the data reached Alice unchanged. If the hash values do not match, either the message text changed or the key used to create the signature is not Sam’s. In some cases, the public key might not be Sam’s at all. If the attacker, Eve, is able to convince Alice that a forged certificate she sends is Sam’s key, Eve can send signed messages using the forged “Sam” key that Alice will verify. It is therefore important for a recipient to be sure the public key used in this process is valid.

Public Key Infrastructure (PKI)

Verifying the authenticity of keys is critical to asymmetric cryptography. We have to be sure that the person who says he is Bob is actually Bob, or that the bank web server we access is actually managed by our bank. There are two ways this can happen: through hierarchical trust or a web of trust.

Hierarchical trust

Private industry usually relies on the hierarchical chain-of-trust model that minimally uses three components:

1. The certificate authority (CA)
2. Registration authority (RA)
3. Central directory/distribution management mechanism

The CA issues certificates binding a public key to a specific distinguished name provided by the certificate applicant (subject). Before issuing a certificate, however, it validates the subject’s identity. One verification method is domain validation. The CA sends an email containing a token or link to the administrator responsible for the subject’s domain. The recipient address might take the form of postmaster@domainname or root@domainname. The recipient (hopefully the subject or the subject’s authorized representative) then follows the verification instructions. Another method, usually one with a much higher cost for the requestor, is extended validation (EV). Instead of a simple administrator email exchange, a CA issuing an EV certificate steps through a rigorous identity verification process. The resulting certificates are structurally the same as other certificates; they simply carry the weight of a higher probability that the certificate holder is who they say they are, by

• Establishing the legal identity and physical existence/presence of the website owner
• Confirming that the requestor is the domain name owner or has exclusive control over it
• Using appropriate documents, confirming the identity and authority of the requestor or its representatives

A simple certificate issuance process is depicted in the Figure below. It is the same whether you host your own CA server or use a third party. The subject (end-entity) submits an application for a signed certificate. If verification passes, the CA issues a certificate and the public/private key pair.

The certificate with the public key can be stored in a publicly accessible directory. If a directory is not used, some other method is necessary to distribute public keys. For example, I can email or snail-mail my certificate to everyone who needs it. For enterprise PKI solutions, an internal directory holds all public keys for all participating employees. The figure below depicts the contents of my personal VeriSign certificate. It contains the identification of the CA, information about my identity, the type of certificate and how it can be used, and the CA’s signature (SHA1 and MD5 formats).

When an application/system first receives a subject’s public certificate, it must verify its authenticity. Because the certificate includes the issuer’s information, the verification process checks to see if it already has the issuer’s public certificate. If not, it must retrieve it. In this example, the CA is a root CA and its public key is included in its root certificate. A root CA is at the top of the certificate signing hierarchy. VeriSign, Comodo, and Entrust are examples of root CAs.

Using the root certificate, the application verifies the issuer signature (fingerprint) and ensures the subject certificate is neither expired nor revoked. If verification is successful, the system/application accepts the subject certificate as valid. Root CAs can delegate signing authority to other entities, known as intermediate CAs. Intermediate CAs are trusted only if the signature on their public key certificate is from a root CA or can be traced directly back to a root. See below. In this example, the root CA issued CA1 a certificate. CA1 used that certificate’s private key to sign the certificates it issues, including the certificate issued to CA2. Likewise, CA2 used its private key to sign the certificate it issued to the subject. This can create a lengthy chain of trust. When you receive the subject’s certificate and public key for the first time, all you can tell is that it was issued by CA2. However, you do not implicitly trust CA2. Consequently, you use CA2’s public key to verify its signature and use the issuing-organization information in its certificate to step up the chain. When you step up, you encounter another intermediate CA whose certificate and public key you need to verify. In our example, the root CA issued the CA1 certificate. Once you use the root certificate to verify the authenticity of the CA1 certificate, you establish a chain of trust from the root to the subject’s certificate. Because you trust the root, you trust the subject.
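The same chain-walking can be exercised with the OpenSSL command line. The sketch below builds a throwaway root CA, has it sign a subject certificate, and then verifies the subject against the root; all file names and subject names are made up for the illustration, and the one-day validity keeps the artifacts disposable.

```shell
# Create a throwaway root CA (private key + self-signed certificate).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=Example Root CA" \
    -days 1 -out ca.pem

# Create the subject's key and a certificate signing request (CSR).
openssl genrsa -out subject.key 2048
openssl req -new -key subject.key -subj "/CN=subject.example" \
    -out subject.csr

# The CA signs the CSR, binding the subject's name to its public key.
openssl x509 -req -in subject.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -days 1 -out subject.pem

# Walk the chain: trust the root, verify the subject certificate.
openssl verify -CAfile ca.pem subject.pem    # prints: subject.pem: OK
```

With an intermediate CA in between, the extra certificates would be supplied to `openssl verify` via `-untrusted`, and the tool performs exactly the step-up-the-chain process described above.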

This might seem like a lot of unnecessary complexity, and it often is. However, using intermediate CAs allows organizations to issue their own certificates that customers and business associates can trust. For example, a publicly known and recognized root CA (e.g., Aptech) delegates certificate-issuing authority to Trace to facilitate Erudio’s in-house PKI implementation. Using the intermediate certificate, Trace issues certificates to individuals, systems, and applications. Anyone receiving a subject certificate from Trace can verify its authenticity by stepping up the chain of trust to the root. If they trust the root, they will trust the Trace subject.

Revocation

Certificates are sometimes revoked for cause. When this happens, it is important to notify everyone that the revoked certificates are no longer trusted. This is done using a certificate revocation list (CRL). A CRL contains a list of serial numbers for revoked certificates. Each CA issues its own CRL, usually on an established schedule.

Web of Trust (WoT)

Although most medium and large organizations use the hierarchy of trust model, it is important to spend a little time looking at the alternative. The WoT also binds a subject with a public key. Instead of using a chain of trust with a final root, it relies on other subjects to verify a new subject’s identity.  As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of the certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

Enterprise Key Management

Managing key sharing is easy; managing keys for enterprise encryption implementations is often difficult and ineffective. Good keys are difficult or impossible to remember, must be kept secure, and must be available when production processes require access to encrypted data. Further, all this provides an additional point of attack that, if successful, completely invalidates encryption as a security control. Distributed management of keys is not the answer, as it introduces even more opportunities for key compromise.

Centralized Key Management Example

Central key management governed by organization-specific encryption policies is likely the best key management option. In addition, only two or three people should have administrative access to key management controls. Vormetric Data Security is an example of a product providing these capabilities. In addition to ensuring key security, this type of solution also allows auditing of key creation, use, and retirement. Further, the Figure below depicts the key administrator’s ability to grant custodial access to sets of keys based on job role, enforcing separation of duties.

Centralized key management system

Centralized encryption helps ensure keys are always available and that data is not encrypted when it is not necessary, appropriate, or wanted. Keys are encrypted and easy to back up or export for escrow storage.

Cryptography’s Role in the Enterprise

Encrypting every piece of data in your organization does not guarantee it is protected from unauthorized access. The only thing guaranteed with this approach is unnecessary costs and potentially unhappy production managers. Before looking at when and what to encrypt, it is important to understand where encryption fits in overall security controls architecture.

Just Another Layer

Encryption is just another security control; it adds an additional prevention layer, nothing more. The biggest mistake many organizations make is relying on it as a panacea for all security risks. For example, data is decrypted on servers and end-user devices when processed. What good is encryption in transit when the system attack surfaces are ignored? An attacker who compromises an online payment server couldn’t care less whether or not you use encryption; everything he needs is in plaintext. In other cases, a key might be compromised. Assuming encryption provides 100% protection might cause a security team to ignore the importance of detection (e.g., SIEM) or response policies and controls. Again, inspect what you expect. Never assume anything, including encryption, is achieving expected outcomes. Before or while deploying encryption, implement the following controls or processes:

• Reduce attack surfaces for applications, systems, and the network
• Implement strong application access controls
• Implement log management and monitoring solutions to detect anomalous activity across the network, on systems, and in/around databases
• Separate data access from data management; business users of sensitive data should access it through applications with strong, granular user access controls
• Ensure reasonable and appropriate physical security for the network, storage, and system components
• Implement strong authentication and authorization controls between applications and the databases they access
• Follow best practices for securing databases and the servers that house or manage them

When to Encrypt

1. Encrypt data that moves

Data moving from one trust zone to another, whether within your organization or between you and an external network, is at high risk of interception. Encrypt it. Data moving from trusted portions of the network to end-user devices over wireless LANs is almost always at high risk. Encrypt it.
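For application traffic, the usual tool for encrypting data that moves is TLS. A minimal Python sketch, assuming an outbound HTTPS-style client connection is the goal, shows the secure-by-default settings the standard library provides (the host name in the comment is a placeholder):

```python
import ssl

# create_default_context() enables certificate verification and
# hostname checking -- both are required for TLS to actually protect
# data in transit rather than merely obscure it.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                      # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)    # True

# To use it, wrap a TCP socket before sending anything sensitive, e.g.:
#   with socket.create_connection(("example.org", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
#           tls.sendall(b"...")
```

Disabling either check (a common shortcut during development) re-opens the man-in-the-middle exposure that encrypting data in motion is meant to close.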

2. Encrypt for separation of duties when access controls are not granular enough

For flat file storage, encrypting a spreadsheet file in a department share provides an additional layer of separation; only employees with the right authorization have access. Application access controls protecting databases often do not provide granular enough control to strictly enforce need-to-know or least privilege. Using database solutions with field- and row-level encryption capabilities can help strengthen weak application-level access controls.

3. Encrypt when someone tells you to

And then there are local, state, and federal regulations. Couple regulatory constraints with auditor insistence and you often find yourself encrypting because you have to. This type of encryption is often based on generalizations instead of the existing security context. For example, just because you encrypt protected health information does not mean it is secure enough… but it satisfies HIPAA requirements.

4. Encrypt when it is a reasonable and appropriate method of reducing risk in a given situation

This rule is actually a summary of the previous three. After performing a risk assessment, if you believe the risk is too high because existing controls do not adequately protect sensitive information, encrypt. This applies to risk from attacks or from non-compliance.