Sunday, May 16, 2010

Chapter 4. Protection of Information Assets

- Defense-in-depth strategies provide layered protection for the organization’s information systems and data. By using multiple layers of controls to protect an asset, this strategy reduces the overall risk of a successful attack in the event of a single control failure. These controls ensure the confidentiality, integrity, and availability of the systems and data, and help prevent financial losses to the organization.

- The organization should have a formalized security function that is responsible for classifying assets and the risks associated with those assets, and mitigating risk through the implementation of security controls. The combination of security controls ensures that the organization’s information technology assets and data are protected against both internal and external
threats.

- The security function protects the IT infrastructure through the use of physical, logical, environmental, and administrative (that is, policies, guidelines, standards, and procedures) controls.

- Three main components of access control exist:
➤ Access is the flow of information between a subject and an object.
➤ A subject is the requestor of access to a data object.
➤ An object is an entity that contains information.

- The access-control model is a framework that dictates how subjects can access objects and defines three types of access:
➤ Discretionary—Access to data objects is granted to the subjects at the data owner’s discretion.
➤ Mandatory—Access to an object is dependent upon security labels.
➤ Nondiscretionary—A central authority decides on access to certain objects based upon the organization’s security policy.

- In implementing mandatory access control (MAC), every subject and object has a sensitivity label (security label). A mandatory access system is commonly used within the federal government to define access to objects. If a document is assigned a label of top secret, all subjects requesting access to the document must hold a clearance of top secret or above to view it. Subjects holding a lower clearance (such as secret or confidential) are denied access to the object. In mandatory access control, all subjects and objects have security labels, and the access decision is made by the operating or security system. Mandatory access control is used in organizations where confidentiality is of the utmost concern.

- Nondiscretionary access control can use different mechanisms based on the needs of the organization. The first is role-based access, in which access to objects is based on the user’s role in the company. For example, a data entry operator should have create access to a particular database, and all data entry operators should have that access based on their role. This type of access is commonly used in environments with high turnover because the access rights apply to a subject’s role, not to the individual subject.
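
As a rough illustration, here is a minimal Python sketch of a role-based check; the role names, objects, and permissions are hypothetical:

```python
# Minimal role-based access check; roles, objects, and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "data_entry_operator": {"customer_db": {"create"}},
    "auditor": {"customer_db": {"read"}},
}

def is_allowed(role: str, obj: str, action: str) -> bool:
    # Access is decided by the subject's role, not by the individual subject.
    return action in ROLE_PERMISSIONS.get(role, {}).get(obj, set())

print(is_allowed("data_entry_operator", "customer_db", "create"))  # True
print(is_allowed("data_entry_operator", "customer_db", "delete"))  # False
```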

- Task-based access control is determined by which tasks are assigned to a user. In this scenario, a user is assigned a task and given access to the information system to perform that task. When the task is complete, the access is revoked; if a new task is assigned, the access is granted for the new task.

- Lattice-based access is determined by the sensitivity or security label assigned to the user’s role. This scenario provides for an upper and lower bound of access capabilities for every subject and object relationship. Consider, for example, that the role of our user is assigned an access level of secret. That user may view all objects that are public (lower bound) and secret (upper bound), as well as those that are confidential (which falls between public and
secret). This user’s role would not be able to view top-secret documents because they exceed the upper bound of the lattice.
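
A minimal sketch of the lattice idea, assuming an illustrative ordering of labels:

```python
# Illustrative lattice of sensitivity labels, ordered from lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_view(role_clearance: str, object_label: str) -> bool:
    # The role's clearance is the upper bound; anything at or below it may be viewed.
    return LEVELS[object_label] <= LEVELS[role_clearance]

print(can_view("secret", "confidential"))  # True: within the upper bound
print(can_view("secret", "top secret"))    # False: exceeds the upper bound
```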

- Another method of access control is rule-based access. The previous discussion of firewalls in Chapter 3, “Technical Infrastructure and Operational Practices,” demonstrated the use of rule-based access implemented through access control lists (ACLs). Rule-based access is generally used between networks or applications. It involves a set of rules against which incoming requests are matched and either accepted or rejected. Rule-based controls are considered nondiscretionary access controls because the administrator of the system sets the controls rather than the information users.
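
A rough sketch of rule-based matching, loosely modeled on a packet-filter ACL (the rules and addresses below are made up):

```python
import ipaddress

# Hypothetical rule set: first match wins, and anything unmatched is rejected.
RULES = [
    {"src": "10.0.0.0/8", "port": 443, "action": "accept"},
    {"src": "0.0.0.0/0",  "port": 23,  "action": "reject"},  # block telnet from anywhere
]

def evaluate(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["port"]):
            return rule["action"]
    return "reject"  # implicit deny, set by the administrator rather than the users

print(evaluate("10.1.2.3", 443))    # accept
print(evaluate("203.0.113.9", 23))  # reject
```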

- IS auditors should review access control lists (ACLs) to determine what user permissions have been granted for a particular resource.

- Restricted interfaces control access to both functions and data within applications through the use of restricted menus, shells, or database views. A database view should be configured so that only the data the user is authorized to see is presented on the screen. A good example of a restricted interface is an automated teller machine (ATM).
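
A small sketch of the database-view approach using SQLite; the schema, data, and column names are illustrative:

```python
import sqlite3

# In-memory database with an illustrative schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL, ssn TEXT);
    INSERT INTO accounts VALUES (1, 'alice', 120.50, '123-45-6789');
    -- The application queries the view, never the base table, so the
    -- sensitive ssn column is never presented on the screen.
    CREATE VIEW teller_view AS SELECT id, owner, balance FROM accounts;
""")

for row in conn.execute("SELECT * FROM teller_view"):
    print(row)  # (1, 'alice', 120.5)
```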

- The administration of access control can be either centralized or decentralized and should support the policy, procedures, and standards for the organization. In a centralized access control administration system, a single entity or system is responsible for granting access to all users. In decentralized or distributed administration, the access is given by individuals who are closer to the resources.

- The IT organization also should have a method of logging user actions while accessing objects within the information system, to establish accountability (linking individuals to their activities).
Access control involves these components:
1. Identification
2. Authentication
3. Authorization

- The most common form of authentication is the password, but authentication can take three forms:
➤ Something you know—A password.
➤ Something you have—A token, ATM bank card, or smart card.
➤ Something you are—Unique personal physical characteristic(s) (biometrics). These include fingerprints, retina scans, iris scans, hand geometry, and palm scans.
These forms of authentication can be used together. Using two or more factors together is known as strong authentication (two-factor or multifactor authentication).

- Different types of passwords exist, depending on the implementation. In some systems, the passwords are user created; others use cognitive passwords. A cognitive password uses fact-based or opinion-based information to verify an individual’s identity. What is your mother’s maiden name? What is the name of your favorite pet? What elementary school did you attend? The user chooses a question and provides the answer, which is stored in the system. If the user forgets the password, the system asks the security question. If it is answered correctly, the system resets the password or sends the existing password via email.

- The token can be either synchronous or asynchronous. With a synchronous token, the generation of the password can be time based (the password changes every n seconds or minutes) or event driven (the password is generated on demand with a button). Token-based authentication generally combines something you know (a password) with something you have (the token). A token device that uses asynchronous authentication relies on a challenge-response mechanism: the system displays a challenge to the user, the user enters the challenge into the token device, and the token device computes and displays a response value. That value is then entered into the system as the response to be authenticated.
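
A simplified sketch of a synchronous, time-based token in the spirit of TOTP; the shared secret, 30-second step, and 6-digit length are illustrative assumptions:

```python
import hashlib
import hmac
import struct
import time

SECRET = b"secret-provisioned-to-this-token"  # shared between token and server

def one_time_password(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step                  # both sides derive the same counter
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation, RFC 4226 style
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The token displays this value; the server computes the same value and compares.
print(one_time_password(SECRET))
```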

- In a database, system integrity is most often ensured through table link verification
and reference checks.

- IS auditors should first determine points of entry when performing a detailed network
assessment and access control review.

- The longer the key is, the more difficult it is to decrypt a message, because of the amount of computation required to try all possible key combinations (the work factor). Cryptanalysis is the science of studying and breaking the secrecy of encryption algorithms and their necessary pieces. The work factor involved in brute-forcing encrypted messages depends significantly on the computing power available to the attacker.
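
A back-of-the-envelope illustration of the work factor; the key length and guess rate are assumptions chosen only to show the arithmetic:

```python
# All figures are assumptions chosen only to illustrate the arithmetic.
key_bits = 128
keyspace = 2 ** key_bits                   # number of possible keys to try
guesses_per_second = 1e12                  # assumed attacker capability
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace / guesses_per_second / seconds_per_year
print(f"Worst case: {years_to_exhaust:.2e} years to try every {key_bits}-bit key")
```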

- The strength of a cryptosystem is determined by a combination of key length, initial input vectors, and the complexity of the data-encryption algorithm that uses the key.

- An elliptic curve cryptosystem has a much higher computation speed than RSA encryption.

- ECC relies on rich mathematical structures for efficiency. It can provide the same level of protection as RSA, but with a key size that is smaller than what RSA requires.

- The Digital Signature Algorithm (DSA) can be used only for digital signatures. Its security comes from the difficulty of computing discrete logarithms in a finite field.

- A long asymmetric encryption key increases encryption overhead and cost.

- A public key infrastructure (PKI) incorporates public key cryptography, security policies, and standards that enable key maintenance (including user identification, distribution, and revocation) through the use of certificates.

- The certificates issued by certificate authorities (CAs) incorporate identity information, certificate serial numbers, certificate version numbers, algorithm information, lifetime dates, and the signature of the issuing authority (CA). The most widely used certificates are Version 3 X.509 certificates. X.509 certificates are commonly used in secure web transactions via Secure Sockets Layer (SSL).
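
A sketch of reading these certificate fields with the third-party Python cryptography package; the file name cert.pem is an assumed input:

```python
# Assumes the third-party "cryptography" package and a PEM file named cert.pem.
from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                                 # certificate version (e.g., v3)
print(cert.serial_number)                           # certificate serial number
print(cert.subject.rfc4514_string())                # identity information
print(cert.issuer.rfc4514_string())                 # issuing authority (CA)
print(cert.not_valid_before, cert.not_valid_after)  # lifetime dates
print(cert.signature_algorithm_oid)                 # algorithm information
```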

- A certificate authority (CA) can delegate the process of establishing a link between the requesting entity and its public key to a registration authority (RA). An RA performs certification and registration duties to offload some of the work from the CAs. The RA can confirm individual identities, distribute keys, and perform maintenance functions, but it cannot issue certificates. The CA still manages the digital certificate life cycle, to ensure that adequate security and controls exist.

- Per ISACA, power failures can be grouped into four distinct categories, based on the duration and relative severity of the failure:
➤ Total failure—A complete loss of electrical power, which might affect anything from a single building up to an entire geographic area. This is often caused by weather conditions (such as a storm or earthquake) or the incapability of an electrical utility company to meet user demands (such as during summer months).
➤ Severely reduced voltage (brownout)—The failure of an electrical utility company to supply power within an acceptable range (108–125 volts AC in the United States). Such failure places a strain on electrical equipment and could limit its operational life or even cause permanent
damage.
➤ Sags, spikes, and surges—Temporary and rapid decreases (sags) or increases (spikes and surges) in voltage levels. These anomalies can cause loss of data, data corruption, network transmission errors, or even physical damage to hardware devices such as hard disks or memory chips.
➤ Electromagnetic interference (EMI)—Interference caused by electrical storms or noisy electrical equipment (such as motors, fluorescent lighting, or radio transmitters). This interference could cause computer systems to hang or crash, and could result in damage similar to that caused by sags, spikes, and surges.

- The organization can provide a complete power system, which would include the UPS, a power conditioning system (PCS), and a generator. The PCS is used to prevent sags, spikes, and surges from reaching the electrical equipment by conditioning the incoming power to reduce voltage deviations and provide steady-state voltage regulation.

- Electrical equipment must operate in climate-controlled facilities that ensure proper temperature and humidity levels. Relative humidity should be between 40% and 60%, and the temperature should be between 70°F and 74°F.

- A number of fire-detection systems are activated by heat, smoke, or flame. These systems should provide an audible signal and should be linked to a monitoring system that can contact the fire department.
➤ Smoke detectors—Placed both above and below the ceiling tiles. They use optical detectors that detect the change in light intensity when there is a presence of smoke.
➤ Heat-activated detectors—Detect a rise in temperature. They can be configured to sound an alarm when the temperature exceeds a certain level.
➤ Flame-activated detectors—Sense infrared energy or the light patterns associated with the pulsating flames of a fire.

- Fire-suppression agents, the types of fire they address, and how they suppress the fire:
➤ Water: common combustibles; suppresses by reducing temperature.
➤ CO2: liquid and electrical fires; suppresses by removing fuel and oxygen.
➤ Soda acid: liquid and electrical fires; suppresses by removing fuel and oxygen.
➤ Gas: chemical fires; suppresses by interfering with the chemical reaction necessary for fire.

- The following are automatic fire suppression systems:
➤ Water sprinklers—These are effective in fire suppression, but they will damage electrical equipment.
➤ Water dry pipe—In a dry-pipe sprinkler system, water is held back by a main valve and the pipes remain dry; they fill with water only when the fire alarm activates the water pumps. Because the pipes are normally empty, a dry-pipe system reduces the risk of accidental leakage. Water-based suppression systems are an acceptable means of fire suppression, but they should be combined with an automatic power shut-off system.

- Although many methods of fire suppression exist, dry-pipe sprinklers are considered to be the most environmentally friendly because they are water based as opposed to chemical based in the case of halon or CO2.

- ➤ Halon—Pressurized halon gas is released, which interferes with the chemical reaction of a fire. Halon damages the ozone layer and, therefore, is banned; replacement chemicals include FM-200, NAF SIII, and NAF PIII.
➤ CO2—Carbon dioxide replaces oxygen. Although it is environmentally acceptable, it cannot be used in sites that are staffed because it is a threat to human life.

- Personally escorting visitors is a preferred form of physical access control for guests.

- A biometric system by itself is advanced and very sensitive. This sensitivity can make biometrics prone to error. These errors fall into two categories:
➤ False Rejection Rate (FRR) Type I error—The biometric system rejects an individual who is authorized to access the system.
➤ False Acceptance Rate (FAR) Type II error—The biometric system accepts unauthorized individuals who should be rejected.

- Most biometric systems have sensitivity levels associated with them. When the sensitivity level is increased, the rate of rejection errors increases (authorized users are rejected). When the sensitivity level is decreased, the rate of acceptance (unauthorized users are accepted) increases. Biometric devices use a comparison metric called the Equal Error Rate (EER), which is the rate at which the FAR and FRR are equal or cross over. In general, the lower the EER is, the more accurate and reliable the biometric device is.

- When evaluating biometric access controls, a low Equal Error Rate (EER) is preferred because Equal Error Rates (EERs) are used as the best overall measure of a biometric system’s effectiveness.
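
An illustrative calculation of the EER from assumed FAR/FRR measurements (all numbers are made up for the example):

```python
# Assumed FAR/FRR measurements at several sensitivity settings (made-up numbers).
measurements = [
    # (sensitivity, FAR %, FRR %)
    (1, 8.0, 1.0),
    (2, 4.0, 2.0),
    (3, 2.5, 2.5),   # FAR and FRR cross over here
    (4, 1.0, 6.0),
]

# The EER is read off where FAR and FRR are (closest to) equal.
sensitivity, far, frr = min(measurements, key=lambda m: abs(m[1] - m[2]))
print(f"EER is about {far}% at sensitivity setting {sensitivity}")
```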

- Traffic analysis is a passive attack method intruders use to determine potential network
vulnerabilities.

- ➤ Eavesdropping—In this attack, also known as sniffing or packet analysis, the intruder uses automated tools to collect packets on the network. These packets can be reassembled into messages and can include email, names and passwords, and system information.

- A virus is a computer program that infects systems by inserting copies of itself into executable code on a computer system.

- A worm is another type of computer program that is often incorrectly called a virus. The difference between a virus and a worm is that the virus relies on the host (infected) system for further propagation because it inserts itself into applications or programs so that it can replicate and perform its functions. Worms are malicious programs that can run independently
and can propagate without the aid of a carrier program such as email. Worms can delete files, fill up the hard drive and memory, or consume valuable network bandwidth.

- A polymorphic virus has the capability of changing its own code, enabling it to have many different variants. The capability of a polymorphic virus to change its signature pattern as it replicates makes it more difficult for antivirus systems to detect.

- Another type of malicious code is a logic bomb, which is a program or string of code that executes when a sequence of events or a prespecified time or date occurs. A stealth virus is a virus that hides itself by intercepting disk access requests.

- Integrity checkers are programs that detect changes to systems, applications, and data. For each selected program, an integrity checker computes a binary value, called a cyclic redundancy check (CRC), and compares it against the previously recorded value.
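
A minimal integrity-checker sketch using the standard library's CRC-32; the temporary file stands in for a monitored program:

```python
import os
import tempfile
import zlib

def crc_of(path: str) -> int:
    with open(path, "rb") as f:
        return zlib.crc32(f.read()) & 0xFFFFFFFF

# Create a stand-in "program" file for the example.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"original executable contents")

baseline = crc_of(path)          # CRC recorded while the system is known-good

with open(path, "ab") as f:      # simulate tampering with the program
    f.write(b" plus injected code")

print("MODIFIED" if crc_of(path) != baseline else "unchanged")  # MODIFIED
os.remove(path)
```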

- A vulnerability assessment is used to determine potential risks to the organization’s systems and data. Penetration testing is used to test controls implemented as countermeasures to vulnerabilities.

- To ensure that the organization’s security controls are effective, a comprehensive security program should be implemented. The security program should include these components:

➤ Continuous user awareness training
➤ Continuous monitoring and auditing of IT processes and management
➤ Enforcement of acceptable use policies and information security controls

- The incident response team should ensure the following:
➤ Systems involved in the incident are segregated from the network so they do not cause further damage.
➤ Appropriate procedures for notification and escalation are followed.
➤ Evidence associated with the incident is preserved.
➤ Documented procedures to recover systems, applications, and data are followed.

- An IDS can be signature based, statistical based, or a neural network. A signature-based IDS monitors for and detects known intrusion patterns. A statistical-based IDS compares data from sensors against an established baseline (created by the administrator). Neural networks monitor patterns of activity or traffic on a network; this self-learning process enables the IDS to create a database (baseline) of activity for comparison to future activity.
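
A sketch of the statistical-based approach, comparing an observed metric against an assumed baseline (the traffic figures and threshold are illustrative):

```python
import statistics

# Assumed baseline of requests per minute gathered during normal operation.
baseline = [95, 102, 99, 101, 98, 97, 103, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    # Alert when the observation deviates more than `threshold` standard
    # deviations from the established baseline.
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(101))  # False: consistent with the baseline
print(is_anomalous(540))  # True: flagged for investigation
```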

- Data owners are ultimately responsible and accountable for reviewing user access to
systems.

- Per ISACA, the IS auditor should review the following when auditing security management, logical access issues, and exposures:

* Review Written Policies, Procedures, and Standards
* Logical Access Security Policy
These policies should encourage limiting logical access on a need-to-know
basis and should reasonably assess the exposure to the identified concerns.
* Formal Security Awareness and Training
Promoting security awareness is a preventive control. Through this process,
employees become aware of their responsibility for maintaining good physical
and logical security.
* Data Ownership
Data ownership refers to the classification of data elements and the allocation
of responsibility for ensuring that they are kept confidential, complete, and
accurate.
* Security Administrators
Security administrators are responsible for providing adequate physical and
logical security for the IS programs, data, and equipment.
* Access Standards

- When evaluating logical access controls, the IS auditor should proceed in the following order:
➤ Obtain a general understanding of the security risks facing information processing, through a review of relevant documentation, inquiry, observation, risk assessment, and evaluation techniques
➤ Document and evaluate controls over potential access paths to the system, to assess their adequacy, efficiency and effectiveness, by reviewing appropriate hardware and software security features in identifying any deficiencies
➤ Test controls over access paths, to determine whether they are functioning and effective, by applying appropriate audit techniques
➤ Evaluate the access control environment, to determine whether the control objectives are achieved, by analyzing test results and other audit evidence
➤ Evaluate the security environment, to assess its adequacy, by reviewing written policies, observing practices and procedures, and comparing them to appropriate security standards or practices and procedures used by other organizations

- Data owners, such as corporate officers, are ultimately responsible and accountable for access control of data. Although security administrators are indeed responsible for securing data, they do so at the direction of the data owners. A security administrator is an example of a data custodian. Data users access and utilize the data for authorized tasks.

- Data classification is a process that allows an organization to implement appropriate controls according to data sensitivity. Before data sensitivity can be determined by the data owners, data ownership must be established.
