Saturday, May 22, 2010

Chapter 7. Business Process Evaluation and Risk Management

- The evaluation of the efficiency and effectiveness of an organization’s IT program involves reviewing the IT governance structure as well as its alignment with the organization’s strategy. The IT organization must also manage the risks associated with ongoing development and operations. The IT organization should have a risk-management program that utilizes internal controls and best practices to mitigate risks to an acceptable level.

- The standard approach to improving business processes is to identify specific areas to be reviewed, document the existing baseline process(es), and identify areas for improvement. After improvement areas have been identified, they should be presented to senior management for prioritization and implementation. Upon implementation of the improved business processes, the organization
should monitor the new processes against the baseline and establish a continuous improvement process. This approach, known as business process re-engineering (BPR), usually succeeds in reducing manual interventions and controls within the organization.

- ISACA defines benchmarking as the continuous, systematic process of evaluating the products, services, and work processes of organizations recognized as representing best practices, for the purpose of organizational improvement.

- ISACA outlines the following steps in a benchmarking exercise:
1. Plan. In the planning stage, critical processes are identified for the benchmarking exercise. The benchmarking team should identify the critical processes and understand how they are measured, what kind of data is needed, and how that data needs to be collected.

2. Research. The team should collect baseline data about its own processes before collecting this data about others. The next step is to identify the benchmarking partners through sources such as business newspapers and magazines, quality award winners, and trade journals.

3. Observe. The next step is to collect data and visit the benchmarking partner. There should be an agreement with the partner organization, a data-collection plan, and a method to facilitate proper observation.

4. Analyze. This step involves summarizing and interpreting the data collected, analyzing the gaps between an organization’s process and its partner’s process, and converting key findings into new operational goals.

5. Adapt. Adapting the results of benchmarking can be the most difficult step. In this step, the team needs to translate the findings into a few core principles and work down from the principles to strategies and action plans.

6. Improve. Continuous improvement is the key focus in a benchmarking exercise. Benchmarking links each process in an organization with an improvement strategy and organizational goals.
Benchmarking partners are identified in the research stage of the benchmarking process.

- This benchmarking methodology assumes that organizations will be able to find partner organizations that will agree to review and observation. In today’s competitive market, most organizations turn to professional consulting companies that have performed business process re-engineering across industries and use the information gathered during those engagements for comparison against their own organization.

- Business process re-engineering (BPR) provides an accelerated means of process improvement by assuming that existing business processes do not work; therefore, the re-engineering effort can
focus on new processes by defining a future state (the “to be” state).

- After the future state has been defined, the re-engineering team can create an action plan based on the gap between current processes and the future state. The re-engineering team and management then can create the transition plan and begin to implement the changes. To help ensure the success of the re-engineering effort, determining the scope of areas to be reviewed
should be the first step in the business process re-engineering project.

- An IS auditor should always make sure that a re-engineered business process has not inadvertently removed key controls from the previous control environment.

- Whenever business processes have been re-engineered, the IS auditor should attempt to identify and quantify the impact of any controls that have been removed, or controls that might not work as effectively after a business process changes.

- Generally, the largest impact of re-engineering is on the staff.

- Business process re-engineering often results in increased automation, which results
in a greater number of people using technology.

- A couple of emerging business and technology trends illustrate these improvements. The first is customer relationship management (CRM), which focuses on managing detailed customer information. This might include previous transactions and customer requirements, allowing organizations to match customer needs to products and services.

- The second, supply chain management (SCM), is the improvement of an organization’s product and service design, purchasing, invoicing, distribution, and customer service.

- One of the technologies associated with SCM is the process of electronic funds transfer (EFT). EFT is an electronic payment process between buyers and sellers that is very efficient because it reduces paper transactions and manual intervention.

- EFT systems are more efficient than traditional paper checks for accounts payable disbursements.

- After an organization has developed a strategic plan and defined its goals, it must measure its progress toward these goals. Key performance indicators (KPI) are quantifiable measurements that are developed and accepted by senior management. Key performance indicators vary by organization but are created as long-term measurements of an organization’s operational activities against its goals.

- As an example of a goal, the IT organization would expect to deliver services in accordance with service-level agreements (SLA). The IT organization would measure actual service levels against the SLA, identify gaps, and define controls to proactively reduce the service-level failures to meet the SLA.
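
- A minimal Python sketch of this kind of KPI measurement, comparing actual monthly availability against an SLA target and flagging the gaps (the 99.5% target and the monthly figures are illustrative assumptions, not from the text):

sla_target = 99.5  # percent availability promised in the SLA (assumed)

monthly_availability = {"Jan": 99.7, "Feb": 99.2, "Mar": 99.6, "Apr": 98.9}

# Months that missed the target, and by how much; these gaps drive the
# definition of controls to proactively reduce service-level failures.
gaps = {month: round(sla_target - actual, 2)
        for month, actual in monthly_availability.items()
        if actual < sla_target}

print(gaps)  # {'Feb': 0.3, 'Apr': 0.6}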

- To ensure that KPIs are understandable and do not detract from the organization’s mission, they should be kept to a small number, typically three to five. The use of KPIs provides management with a compass that allows for course corrections in meeting organizational goals, and with a communication tool that conveys to the entire organization the importance of achieving these goals.

- Another way to measure organizational performance is the balanced scorecard. The balanced scorecard is a management tool that clarifies an organization’s goals, and defines actions and the measurement of those actions to meet goals. The balanced scorecard differs from previous methodologies, in that it combines measurement of all business processes. This allows managers
to see the organization from many different perspectives and identify areas for improvement.

- ISACA defines the application of the balanced scorecard to IT as a three-layered structure that addresses the four perspectives through the following.
Mission:
➤ To be a preferred supplier of information systems
➤ To deliver effective and efficient applications and services
➤ To obtain reasonable business contribution of IT investments
➤ To develop opportunities to answer future challenges

- Application controls are used to ensure that only accurate, complete, and authorized data
is entered into a system. These controls can be either manual or automated and ensure the following:
➤ Valid, accurate, and complete data is entered into the system.
➤ Processing of data is accurate and performs the function(s) it was created for.
➤ The processing of data and results meet expectations.
➤ The data is maintained to ensure validity, accuracy, and completeness.

- Manual controls include checks performed by IT staff and IS auditors such as the review of error logs, reconciliations, and exception reports. Automated controls include programming logic, validation and edit checks, and programmed control functions.

- The IS auditor should use a combination of manual review (system documentation and logs), observations, integrated test facilities, and embedded audit modules. The IS auditor must review application controls, data integrity controls, and controls associated with business systems and components. These components might include electronic data interchange (EDI) and electronic funds transfers (EFT).

- In reviewing application controls, the IS auditor should review the following
areas:
➤ Input/output controls
➤ Input authorization
➤ Batch controls
➤ Processing control procedures
➤ Processing
➤ Validation
➤ Editing
➤ Output controls
➤ Critical forms logging and security
➤ Negotiable instruments logging and security (signatures)
➤ Report distribution
➤ Balancing and reconciliation

- An IS auditor must first understand relative business processes before performing an application audit. This can be accomplished by reviewing the business plan, the IT strategic plan (long and short term), and organizational goals.

- In auditing input and output controls, the auditor must ensure that all transactions have been received, processed, and recorded accurately, and that the transactions are valid and authorized. The auditor should review access controls and validation and edit checks. It is important to remember that in an integrated environment, the output of one system could be the input to another system. Input/output controls should be implemented for both the sending and receiving applications.

- Some systems employ an automated control to provide authorization for data exceptions.
An example is a sales transaction in which the price of the product is being reduced. The salesperson might not be authorized to reduce the price, but an automated request could be sent to a supervisor. The supervisor would then log in with a second-level password to authorize the price change.

- A second-level password is an automated process to facilitate the approval of transaction
data exceptions.
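
- A minimal Python sketch of this exception-authorization control; the 5% discount threshold, the names, and the in-memory credential check are illustrative assumptions, not a real implementation:

MAX_UNAPPROVED_DISCOUNT = 0.05   # salesperson may reduce price by up to 5% (assumed)
SUPERVISOR_PASSWORDS = {"supervisor1": "2nd-level-secret"}  # stand-in for a real credential store

def apply_discount(list_price, discount, supervisor=None, second_level_password=None):
    """Apply a price reduction; exceptions require a supervisor's second-level password."""
    if discount <= MAX_UNAPPROVED_DISCOUNT:
        return list_price * (1 - discount)
    if SUPERVISOR_PASSWORDS.get(supervisor) == second_level_password:
        return list_price * (1 - discount)          # approved data exception
    raise PermissionError("Price reduction requires supervisor authorization")

print(apply_discount(100.0, 0.05))                                     # 95.0, within authority
print(apply_discount(100.0, 0.20, "supervisor1", "2nd-level-secret"))  # 80.0, approved exception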

- Automated access controls include the following:
➤ Online controls—Authorized individuals or systems are authenticated before performing sensitive functions
➤ Client identification—Specific workstations and individuals are authenticated before performing sensitive functions

- A batch control transaction summarizes the totals of transactions within a batch. This transaction can be based on monetary amount, total items, total documents, or hash totals. These totals can be compared to the source documents to ensure that all items have been input accurately. In addition, control totals ensure that the data input is complete and should be implemented as early as data preparation to support data integrity. Hash totals are generated by summing a selected field (or fields) across a series of transactions or records. If a later summation does not produce the same total, records have been lost, entered or transmitted incorrectly, or duplicated.

- Hash totals are used as a control to detect loss, corruption, or duplication of data.
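
- A minimal Python sketch of batch control totals, including a hash total over a non-monetary field (the record layout is an illustrative assumption):

batch = [
    {"doc_no": 101, "account": 40012, "amount": 250.00},
    {"doc_no": 102, "account": 40547, "amount": 75.50},
    {"doc_no": 103, "account": 41200, "amount": 1020.25},
]

control_totals = {
    "total_documents": len(batch),                              # document count
    "total_amount": round(sum(r["amount"] for r in batch), 2),  # monetary total
    "hash_total": sum(r["account"] for r in batch),             # hash total of account numbers
}

def batch_is_intact(received_batch, expected):
    """Recompute the totals on the receiving side and compare them to the control record."""
    recomputed = {
        "total_documents": len(received_batch),
        "total_amount": round(sum(r["amount"] for r in received_batch), 2),
        "hash_total": sum(r["account"] for r in received_batch),
    }
    return recomputed == expected

print(batch_is_intact(batch, control_totals))  # True; a lost or duplicated record would change the totals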

- Data validation is used to identify errors in data regarding completeness, inconsistencies, duplicates, and reasonableness. Edit controls perform the same function as data-validation controls but are generally used after data has been entered but before it is processed.

- Data-Validation Edits and Controls
A sequence check ensures that data falls within a range sequence and that no values are missing or outside the sequence range. An example would be to ensure that all check numbers in a system fall within an acceptable range (such as 1–100) and that all checks fall within that range, with no missing checks.

- A limit check verifies that the data in the transaction does not exceed a predetermined limit.

- A range check verifies that data is within a predetermined range of values. An example would be a check to ensure that the data falls between two dates (such as 1/1/2005 and 6/1/2005).

- Key verification is an edit check ensuring input integrity by having initial input re-entered by a second employee before the transaction can occur.

- Data edits are implemented before processing and are considered preventative integrity controls.
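
- A minimal Python sketch of the validation edits described above; the check range of 1–100 and the 2005 date range come from the examples in the text, while the field values and the 10,000 limit are assumptions:

from datetime import date

def sequence_check(check_numbers, low=1, high=100):
    """All check numbers fall within the range and none are missing."""
    return sorted(check_numbers) == list(range(low, high + 1))

def limit_check(amount, limit=10_000):
    """The transaction does not exceed a predetermined limit (limit amount assumed)."""
    return amount <= limit

def range_check(tx_date, start=date(2005, 1, 1), end=date(2005, 6, 1)):
    """The date falls within a predetermined range."""
    return start <= tx_date <= end

def key_verification(first_entry, second_entry):
    """Input re-entered by a second employee must match the original entry."""
    return first_entry == second_entry

print(limit_check(9_500))                            # True
print(range_check(date(2005, 7, 4)))                 # False, outside the range
print(key_verification("ACCT-40012", "ACCT-40012"))  # True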

- During the review of input processing, the IS auditor can compare the transaction journal to authorized source documents. The transaction journal records all transaction activity and provides the information necessary for detecting unauthorized input from a terminal and completeness of transactions.

- Processing controls ensure that data is accurate and complete, and is processed only through authorized routines. The processing controls can be programmed controls that detect and initiate corrective action, or edit checks that ensure completeness, accuracy, and validity. Processing controls also include manual controls, such as these:
➤ Manual recalculation—Periodic sample transaction groups can be recalculated to ensure that processing is performing as expected.
➤ Run-to-run totals—These verify data values throughout the various stages of application processing. They are an effective control to detect accidental record deletion in transaction-based applications.
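
- A minimal Python sketch of a run-to-run total check between two processing stages (the record layout is an illustrative assumption); a record accidentally dropped between the runs changes the totals and is detected:

def stage_totals(records):
    return {"count": len(records), "amount": round(sum(r["amount"] for r in records), 2)}

def run_to_run_check(totals_from_previous_run, records_received):
    """Compare totals carried out of the previous run with the data read by this run."""
    totals_in = stage_totals(records_received)
    if totals_in != totals_from_previous_run:
        raise ValueError(f"Run-to-run mismatch: sent {totals_from_previous_run}, received {totals_in}")
    return True

extract = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.0}]
handoff_totals = stage_totals(extract)      # produced at the end of the extract run
run_to_run_check(handoff_totals, extract)   # verified at the start of the posting run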

- Data is stored in the form of files and databases. Data integrity testing ensures the completeness, accuracy, consistency, and authorization of data.

- Two types of tests are associated with data integrity:
➤ Referential integrity tests—Referential integrity works within a relational data model within a database and ensures that the relationships between two or more references are consistent. If the data in one reference is inserted, deleted, or updated, the integrity to the second reference is maintained through the use of primary and foreign keys.
➤ Relational integrity tests—These tests ensure that validation (either application or database) routines check data before entry into the database.
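
- A minimal Python/SQLite sketch of a referential integrity test: an order that references a nonexistent customer is rejected through the primary/foreign key relationship (the table and column names are illustrative assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id),
                    amount REAL NOT NULL)""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.95)")        # valid: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (11, 999, 10.00)")  # orphan: no customer 999
except sqlite3.IntegrityError as err:
    print("Rejected:", err)                                     # FOREIGN KEY constraint failed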

- The purpose of EDI is to promote a more efficient and effective data-exchange process by reducing paper, errors, and delays. In using EDI, organizations with dissimilar computer systems facilitate the exchange and transmittal of information such as product orders, invoices, and business documents.

- A communications handler is an EDI component that transmits and receives documents.

- Functional acknowledgments can be implemented in the EDI interface to provide efficient data mapping. A functional acknowledgment is a message transmitted from the receiver of an electronic submission to the sender; it notifies the sender that the document was received and processed, or was not processed. Functional acknowledgments provide an audit trail for EDI transactions.

- IT governance encompasses the information systems, strategy, and people. This control helps ensure that IT is aligned with the organization’s strategy and goals. The board of directors and executive officers are ultimately accountable for functionality, reliability, and security within IT governance.

- In the development of a risk-management plan, ISACA states that the organization
must do the following:
➤ Establish the purpose of the risk-management program. In establishing the purpose for the program, the organization will be better prepared to evaluate the results and determine its effectiveness.
➤ Assign responsibility for the risk-management plan. To ensure the success of the risk-management plan, the organization should designate an individual or team responsible for developing and implementing the risk-management plan. The team should coordinate efforts across the organization in identifying risks and defining strategies to mitigate the risk.

- As stated in Chapter 1, “The Information Systems (IS) Audit Process,” risk can be defined as the possibility of something adverse happening. Risk management is the process of assessing risk, taking steps to reduce risk to an acceptable level (mitigation), and maintaining that level of risk.

- In developing the risk-management plan, the organization should identify organizational
assets as well as the threats and vulnerabilities associated with these assets. After identifying potential vulnerabilities, the IS auditor should perform a business impact analysis (BIA) of the threats that would exploit the vulnerabilities.

- The IS auditor can use qualitative or quantitative analysis during the BIA to assess the potential impacts, or degree of loss, associated with the assets. Quantitative impacts are easily measured because they can result in a direct loss of money, opportunity, or disruption. Qualitative impacts are harder to measure because they result in losses associated with damage to reputation, endangerment of staff, or breach of confidence.

- The controls, called countermeasures, can be actions, devices, procedures, or techniques. After the organization has applied controls to the asset, the remaining risk is called residual risk.

- The organization’s management sets acceptable risk levels; if the residual risk falls below that level, further controls are not required. The IS auditor can evaluate this control to see
whether an excessive level of control is being used. The removal of excessive controls can result in cost savings to the organization.

- In most organizations, the executive director works with the board of directors to define the purpose for the risk-management program. In clearly defining the risk-management program goals, senior management can evaluate the results of risk management and determine its effectiveness. The risk-management team should be utilized at all levels within the organization and needs the help of the operations staff and board members to identify areas of risk and to develop suitable mitigation strategies.

- By comparing and cross-indexing transaction data from multiple databases, data mining can be used to determine suspicious transactions that fall outside the norm.

- When storing data archives offsite, data must be synchronized to ensure backup data completeness.

Wednesday, May 19, 2010

Chapter 6. Business Application System Development, Acquisition, Implementation, and Maintenance

- The IT department should have clearly defined processes to control the resources associated with the development, acquisition, and implementation of applications. This process, called the systems-development life cycle (SDLC), encompasses a structured approach to do the following:
➤ Minimize risk and maximize return on investment
➤ Reduce software business risk, the likelihood that the new system will not meet the application user’s business expectations

- All significant IT projects should have a project sponsor and a project steering committee. The project sponsor is ultimately responsible for providing requirement specifications to the software-development team. The project steering committee is responsible for the overall direction, costs, and timetables for systems-development projects.

- A primary high-level goal for an auditor who is reviewing a systems-development project is to ensure that business objectives are achieved. This objective guides all other systems-development objectives. In addition to auditing projects, the IS auditor should be included within a systems-development project in an advisory capacity to ensure that adequate
controls are incorporated into the system during development and to ensure that adequate and complete documentation exists for all projects.

- A Software Development Life Cycle is a logical process that systems analysts and systems developers use to develop software (applications).

- The most common SDLC is the Classic Life Cycle Model, also known as the Linear Sequential Model or the Waterfall Method.

- The waterfall methodology is the oldest and most commonly used approach. It begins with the feasibility study and progresses through requirements, design, development, implementation, and post-implementation. It is important to remember that, in using this approach, the subsequent step does not begin until all tasks in the previous step are completed. When the process has moved to the next step, it does not go back to the previous step.

- The waterfall approach is best used in environments where the organization’s requirements will remain stable and the system architecture is known early in the development process.

- Phase 1: Feasibility Determine the strategic benefits of implementing the system either in productivity gains or in future cost avoidance, identify and quantify the cost savings of a new system, and estimate a payback schedule for costs incurred in implementing the system. This business case provides the justification for proceeding to the next phase.

Phase 2: Requirements Define the problem or need that requires resolution, and define the functional and quality requirements of the solution system. This can be either a customized approach or a vendor-supplied software package, which would entail following a defined and documented acquisition process. In either case, the user needs to be actively involved.

Phase 3: Design Based on the requirements defined, establish a baseline of system and subsystem specifications that describe the parts of the system, how they interact, and how the system will be implemented using the chosen hardware, software, and network facilities. (During the design phase of an application-development project, the IS auditor should strive to ensure that all necessary controls are included in the initial design.) Generally, the design also includes both program and database specifications, and a security plan. (Application controls should be considered as early as possible in the system-development process, even in the development of
the project’s functional specifications.) Additionally, a formal change-control process is established to prevent uncontrolled entry of new requirements into the development process.

Phase 4: Development Use the design specifications to begin programming and formalizing supporting operational processes of the system. Various levels of testing also occur in this phase to verify and validate what has been developed.

Phase 5: Implementation The actual operation of the new information system is established, with final user acceptance testing conducted in this environment. (Acceptance testing is used to ensure that the system meets user and business needs.) The system also may go through a certification and accreditation process to assess the effectiveness of the business application in mitigating risks to an appropriate level, and providing management accountability over the effectiveness of the system in meeting its intended objectives and in establishing an appropriate level of internal control.

- In addition to following a structured approach to systems development, the IT organization must have a sound IT management methodology that includes the following:
➤ Project management—Utilizes knowledge, tools, and techniques to reach the goals of the project
➤ IT organizational policies—Ensures that organizational goals are fulfilled and business risks are reduced
➤ Steering committee—Ensures that the IS department closely supports the corporate mission and objectives
➤ Change-management process—Ensures that there is not uncontrolled entry of new requirements into the development process or existing systems

- To improve software life-cycle processes and software-processing capability, the organization can implement the Software Capability Maturity Model (CMM), developed by Carnegie Mellon’s Software Engineering Institute. Software process maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. The more mature
an organization’s software process is, the higher the productivity and quality are of the software products produced.

CMM levels:

* Initial (1) The software process is characterized as ad hoc, and occasionally even
chaotic. Few processes are defined, and success depends on individual effort.

* Repeatable (2) Basic project-management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications

* Defined (3) The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization’s standard software process for developing and maintaining software

* Managed (4) Detailed measures of the software process and product quality are collected.
Both the software process and products are quantitatively understood and controlled

* Optimizing (5) Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies

- A standard software-development process is included within Level 3 (defined) of the Software Capability Maturity Model (CMM).

- In addition to the life-cycle phases, the organization must utilize formal programming
methods, techniques, languages, and library control software. The utilization of formal coding standards ensures the quality of programming activities and enhances future maintenance capabilities. These standards should include methods of source code documentation, methods of data declaration, and naming standards.

- Coding standards promote compliance with accepted field-naming conventions.

- These are the commonly used programming languages and their attributes:
➤ Common Business Oriented Language (COBOL) and the C programming language—High-level general-purpose languages.
➤ C++ and Java—Object-oriented languages.
➤ SH (Shell), Perl, JavaScript, VBScript—Scripting languages; primarily used in web development.
➤ 4GL—Fourth-generation high-level programming languages; object-oriented, but lacking lower-level detail commands

- Fourth-generation languages (4GLs) are most appropriate for designing the application’s graphical user interface (GUI). They are inappropriate for designing any intensive data-calculation procedures.

- Throughout the SDLC, it is important to protect the integrity of source code and executables. This integrity is maintained through the use of production source code and production libraries. The library control software provides access control to ensure, as an example, that source code is accessible only in a read-only state. The organization should have procedures in place to ensure proper access levels and segregation of duties. As an example, users and application programmers should not have access to the production source code.

- Prototyping is the process of developing a system through the rapid development and testing of code. This process uses trial and error to reduce the level of risks in developing the system.
The developers create high-level code (mostly 4G languages) based on the design requirements and then provide them to the end users for review and testing. The end users can then see a high-level view of the system (generally screens and reports) and provide input on changes or gaps between the code and requirements.

- Rapid application development (RAD) is used to develop strategically important systems faster, reduce development costs, and still maintain high quality. The organization should use a prototype that can be updated continually to meet changing user or business requirements. According to ISACA, this is achieved by using a series of proven application-development techniques within a well-defined methodology:
➤ Small, well-trained development teams
➤ Evolutionary prototypes
➤ Integrated power tools that support modeling, prototyping, and component reusability
➤ A central repository
➤ Interactive requirements and design workshops
➤ Rigid limits on development time frames

- Procedures to prevent scope creep are baselined in the design phase of the system’s SDLC model.

- Phase 4: Development Programming and testing of the new system occurs in this phase. The tests
verify and validate what has been developed.

- ➤ More cohesion (dedication to a single function) and less coupling (interaction with other functions) result in less troubleshooting and software-maintenance effort.

- ➤ Online programming facilities can increase programming productivity but can also increase the risk of inappropriate access. An online programming facility stores the program library on a server, and developers use individual PC workstations to download code to develop, modify, and test.

- Online programming can lower development costs, reduce response time, and expand the programming resources available. Its disadvantages, however, include reduced integrity of programming and processing, weaker version control, and the risk that valid changes will be overwritten by invalid changes.

- Test plans identify specific portions of the application that will be tested, as well as the approach to testing:
➤ Bottom-up approach
➤ Start testing with programs or modules, and progress toward testing the entire system.
➤ Testing can be started before the entire system is complete.
➤ Errors in critical modules are found early.

- ➤ Top-down approach
➤ Tests of major functions or processes are conducted early.
➤ Interface errors can be detected sooner

- Testing levels identify the specific level of testing that will occur and are usually
based on the size and complexity of the application:
➤ Unit testing
➤ Interface or integration testing
➤ System testing
➤ Recovery testing
➤ Security testing
➤ Stress/volume testing
➤ Performance testing
➤ Final acceptance testing

- When the size of the software-development project is determined, the project team should identify the resources required to complete each task. The project team then should develop a work breakdown structure that identifies specific tasks and the resources assigned those tasks, as well as project milestones and dependencies. The team should create Gantt charts to show
timelines, milestones, and dependencies. A Gantt chart is a graphic representation of the timing and duration of the project phases; it typically includes start date, end date, and task duration.

- Determining time and resource requirements for an application-development project is often the most difficult part of initial efforts in application development.

- PERT is the preferred tool for formulating an estimate of development project duration. A PERT chart depicts task, duration, and dependency information. The beginning of each chart starts with the first task, which branches out via a connecting line that contains three estimates:
➤ The first is the most optimistic time for completing the task.
➤ The second is the most likely scenario.
➤The third is the most pessimistic, or “worst case,” scenario.

- The calculation of PERT time uses the following formula:
(Optimistic + Pessimistic + (4 × Most likely)) / 6
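
- A minimal Python sketch of the PERT estimate for a single task, using the formula above (the example durations, in days, are assumptions):

def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted three-point estimate: (O + P + 4M) / 6."""
    return (optimistic + pessimistic + 4 * most_likely) / 6

print(pert_estimate(optimistic=4, most_likely=6, pessimistic=14))  # 7.0 days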

- The Critical Path Methodology (CPM) is a project-management technique that analyzes successive activities within a project plan to determine the time required to complete a project and which activities are critical in maintaining the schedule. Critical activities have the least amount of flexibility (slack), meaning that their completion time cannot slip without delaying the
completion of the overall project.
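
- A minimal Python sketch of the idea behind CPM on a small, assumed task network: the project duration is driven by the longest chain of dependent tasks, and those tasks have no scheduling flexibility:

from functools import lru_cache

tasks = {          # task: (duration in days, predecessors); values are assumptions
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    duration, preds = tasks[task]
    return duration + max((earliest_finish(p) for p in preds), default=0)

project_duration = max(earliest_finish(t) for t in tasks)
print(project_duration)   # 12 days, driven by the chain A -> B -> D (the critical path)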

- In conjunction with a formal SDLC and project-management activities, the organization must implement change-management processes, which include change-control procedures both for software-development projects and for the production environment. The change-management process, usually facilitated by the change-control board (CCB), reviews all changes associated
with the software-development project. The CCB has the authority to accept, deny, or postpone a requested change.

- The change-management process ensures that any deviations from the original requirements are
approved before being added to the project.

- Programmers should perform unit, module, and full regression testing following any
changes to an application or system.

- The organization should implement quality control (QC) procedures to ensure that proper testing is performed through the development life cycle. The QC team is responsible for conducting code reviews and tests to ensure that software is free of defects and meets user expectations. Unit, module, and regression testing ensure that the specific unit or module is complete, performs as expected, and meets requirements. Regression testing should be
required for all changes introduced into the system, whether in development or in production. The purpose of regression testing is to ensure that the change introduced does not negatively impact the system as a whole.

- After development, testing, and implementation have been completed and the new system is part of the production environment, a formal post-implementation review should be performed.

- Overall, the post-implementation review should determine whether the development project achieved stated objectives and whether the process of development was performed in an efficient and effective manner. In addition, the post-implementation review should allow the organization
to identify areas of improvement through lessons learned.

- The primary concern for an organization is to ensure that projects are consistently managed through a formal documented process. This process includes the SDLC, project management, change control, and policies/ procedures aligned with the strategic plan.

- Although software acquisition is not part of the SDLC, it should have a formal documented process. According to ISACA, the project team, technical support staff, and key users should be asked to write a request for proposal (RFP). This RFP should be sent to a variety of vendors,
and their responses (proposals) should be reviewed to determine which vendor’s products offer the best solution.

- “Make vs. buy” decisions are typically made during the feasibility study phase of the
software- or systems-development project.

- ➤ Allowance for a software escrow agreement, if the deliverables do not include source code. (A clause for requiring source code escrow in an application vendor agreement is important to ensure that the source code remains available even if the application vendor goes out of business.)

- The difference between emergency changes and regular change requests is that the emergency change is corrected immediately and the change request (with supporting documentation) is completed after the fact.

- In developing control objectives, the IS auditor should keep in mind the following control categories:
➤ Security
➤ Input
➤ Processing
➤ Output
➤ Databases
➤ Backup and recovery

- Section AI5 of COBIT (“Install and Accredit Systems”) provides specific activities that an IS auditor should perform to ensure the effectiveness, efficiency, confidentiality, integrity, availability, compliance, and reliability of the IT system. Accreditation is a process by which an organization, through internal or third-party evaluation of its IT services or systems, ensures that adequate security and controls exist.

- Test and development environments should be separated to control the stability of
the test environment.

- Top-down testing is usually used in RAD or prototype development and provides the capability to test complete functions within the system. It also allows for the early correction of interface errors.

- The approach for testing should include the following:
➤ Development of a test plan—Should include specific information on testing (I/O tests, length of test, expected results).
➤ Testing—Utilizes personnel and testing software, and then provides testing reports that compare actual results against expected results. Testing results remain part of the system documentation throughout the SDLC.
➤ Defect management—Defects are logged and corrected. Test plans are revised, if required, and testing continues until the tests produce acceptable results.

- In addition to testing, the quality-assurance activities include ensuring that the processes associated with the SDLC meet prescribed standards. These standards can include documentation, coding, and management standards. The IS auditor should ensure that all activities associated with the SDLC meet the quality-assurance standards of the organization.

- Using a bottom-up approach to software testing often allows earlier detection of errors in critical modules.

- ➤ Unit testing—Used for testing individual modules, and tests the control structure and design of the module. Unit testing pertains to components within a system; system testing pertains to interfaces between application programs.

➤ Interface/integration testing—Used for testing modules that pass data between them. These tests are used to validate the interchange of data and the connection among multiple system components.

➤ System testing—Used for testing all components of the system, and usually comprised of a series of tests. System testing is typically performed in a nonproduction environment by a test team.
➤ Final acceptance testing—Used to test two areas of quality assurance. Quality assurance testing (QAT) tests the technical functions of the system, and user acceptance testing (UAT) tests the functional areas of the system. These tests are generally performed independently from one another because they have different objectives.

- Above almost all other concerns, failing to perform user acceptance testing often results in the greatest negative impact on the implementation of new application software.

- ➤ Whitebox testing—Logical paths through the software are tested using test cases that exercise specific sets of conditions and loops. Whitebox testing is used to examine the internal structure of an application module during normal unit testing.

➤ Blackbox testing—This testing examines an aspect of the system without regard to the internal logical structure of the software. As an example of blackbox testing, the tester might know the inputs and expected outputs, but not the system logic that derives the outputs. Whereas a whitebox test is appropriate for application unit testing, blackbox testing is used for
dynamically testing software modules.

- ➤ Regression testing—A portion of the test scenario is rerun to ensure that changes or corrections have not introduced new errors, that bugs have been fixed, and that the changes do not adversely affect existing system modules. Regression testing should use data from previous tests to obtain accurate conclusions regarding the effects of changes or corrections to a program, and to ensure that those changes and corrections have not introduced new errors.

- Regression testing is used in program development and change management to determine whether new changes have introduced any errors in the remaining unchanged code.
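
- A minimal Python sketch of a regression test: previously accepted results are rerun after every change to confirm that existing behavior still holds (the function under test and its cases are illustrative assumptions):

def calculate_discount(amount, customer_tier):
    rates = {"standard": 0.00, "silver": 0.05, "gold": 0.10}
    return round(amount * (1 - rates[customer_tier]), 2)

# Regression suite: inputs and expected results captured from earlier, accepted runs.
REGRESSION_CASES = [
    ((100.00, "standard"), 100.00),
    ((100.00, "silver"), 95.00),
    ((199.99, "gold"), 179.99),
]

def run_regression():
    for args, expected in REGRESSION_CASES:
        actual = calculate_discount(*args)
        assert actual == expected, f"Regression failure: {args} -> {actual}, expected {expected}"
    print(f"{len(REGRESSION_CASES)} regression cases passed")

run_regression()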

- Decision trees use questionnaires to lead the user through a series of choices to reach a conclusion. Artificial neural networks attempt to emulate human thinking by analyzing many attributes of knowledge to reach a conclusion. Critical path analysis is used in project
management. Function point analysis is used in determining a proposed software application’s size for development planning purposes.

- Application controls should be considered as early as possible in the system-development process, even in the development of the project’s functional specifications. Success of all other phases relies upon proactive security controls planning.

- Function point analysis (FPA) provides an estimate of the size of an information system based on the number and complexity of a system’s inputs, outputs, and files.
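
- A minimal Python sketch of an unadjusted function point count; the component counts are illustrative assumptions, and the value adjustment factor used in full FPA is omitted:

WEIGHTS = {                     # average-complexity weights commonly used in FPA
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

counts = {"external_inputs": 12, "external_outputs": 8, "external_inquiries": 5,
          "internal_files": 3, "external_interfaces": 2}   # assumed counts for a proposed system

unadjusted_fp = sum(WEIGHTS[component] * counts[component] for component in counts)
print(unadjusted_fp)   # 152 unadjusted function points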

- The IS auditor is primarily concerned with having the change properly evaluated and approved by business process users before implementation.

Tuesday, May 18, 2010

Chapter 5. Disaster Recovery and Business Continuity

- Disaster recovery for systems typically focuses on making alternative processes and resources available for transaction processing. A disaster recovery plan (DRP) should reduce the length of recovery time necessary and also the costs associated with recovery.

- A disaster can be classified as a disruption that causes critical information resources to be inoperative for a period of time, adversely affecting business operations.

- Business continuity plans (BCP) are the result of a process of plan creation to ensure that critical business functions can withstand a variety of emergencies.

- Disaster-recovery plans deal with the immediate restoration of the organization’s business systems while the business continuity plan also deals with the long-term issues before, during, and after the disaster. The BCP should include getting employees to the appropriate facilities; communicating with the public, partners, and customers; and making the transition from emergency recovery back to normal operations. The DRP is a part of the BCP and is the responsibility
of senior management.

- These are the attributes of a disaster:
➤ Unplanned and unanticipated
➤ Impacts critical business functions
➤ Has the capacity for significant loss

- During the initiation of the business continuity planning process, the BCP team should prepare for a meeting with senior management to define the project goals and objectives, present the project schedule, and review the proposed interview schedule.

In preparation for this meeting, the BCP team should do the following:
➤ Review the organizational structure to determine what resources will be assigned to the team
➤ Review existing disaster-planning policies, strategies, and procedures
➤ Review existing continuity plans
➤ Research any events that have occurred previously (severe weather, fires, equipment or facility failures, and so on) and that had or could have a negative effect on the organization
➤ Create a draft project schedule and associated documents (timing, resources, interview questionnaires, roles and responsibilities, and so on)

- Per ISACA, the business continuity planning process can be divided into the following phases:
➤ Analyze the business impact
➤ Develop business-recovery strategies
➤ Develop a detailed plan
➤ Implement the plan
➤ Test and maintain the plan

- A business impact analysis (BIA) is used to identify threats that can impact continuity
of operations.

- The results of the BIA should provide a clear picture of the continuity impact in terms of the impact to human and financial resources, as well as the reputation of the organization.

- The BIA team should work with senior management, IT personnel, and end users to identify all resources used during normal operations. Although BCP and DRP are often implemented and tested by middle management and end users, the ultimate responsibility and accountability for the plans
remains with executive management, such as the board of directors.

- The following steps can be used for the framework of business impact assessment:
➤ Gather business impact analysis data
➤ Questionnaires or interviews
➤ Review the BIA results
➤ Check for completeness and consistency
➤ Follow up with interviews for areas of ambiguity or missing information
➤ Establish the recovery time for operations, processes, and systems
➤ Define recovery alternatives and costs

- End-user involvement is critical during the business impact assessment phase of business continuity planning.

- The BIA questionnaire and interviews should gather the following information from the business units:
➤ Financial impacts resulting from the inability to operate for prolonged periods of time
➤ Operational impacts within each business unit
➤ Expenses associated with continuing operations after a disruption
➤ Current policies and procedures to resume operations in the event of a disruption
➤ Technical requirements for recovery

- The BIA should include both quantitative and qualitative questions. Quantitative questions generally describe the economic or financial impacts of a potential disruption. Qualitative impacts are impacts that cannot be quantified in monetary terms. These types of impacts are generally associated with the business impact of a disaster and include damage to reputation and loss of confidence in customer services or products.

- Before the development of a BCP/DRP, the BIA team should develop a recommendation or findings
report for senior management. The purpose of this report is to provide senior management with a draft priority list of the business unit service and support recovery, as well as the financial and operational impacts that drive the prioritization.

- In reviewing the information gathered during the BIA, the team should determine the critical information resources related to the organization’s critical business processes. This relationship is important because the disruption of an information resource is not a disaster unless that resource is critical to a business process. Per ISACA, each resource should be assessed to determine criticality. Indications of criticality might include
these:
➤ The process supports lives or people’s health and safety.
➤ Disruption of the process would cause a loss of income to the organization or exceptional costs that are unacceptable.
➤ The process must meet legal or statutory requirements.

- In making this determination, the BIA team should consider two cost factors. The first is the cost associated with downtime; this cost grows the longer the disruption lasts, up to the point in time at which the business can no longer function. The second cost factor is the cost associated with recovery or resumption
of services by implementing the business continuity plan. As stated earlier, an optimal BCP and associated strategies should be based on the point in time at which the combined cost of downtime and recovery is at a minimum.
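
- A minimal Python sketch of this trade-off: slower recovery strategies cost less to maintain but accumulate more downtime cost, and the preferred strategy minimizes the combined cost (all figures are illustrative assumptions):

strategies = {      # strategy: (recovery time in hours, annualized recovery cost)
    "hot site": (4, 500_000),
    "warm site": (48, 150_000),
    "cold site": (336, 40_000),
}

DOWNTIME_COST_PER_HOUR = 2_000    # assumed business impact figure from the BIA

def combined_cost(recovery_hours, recovery_cost):
    return recovery_hours * DOWNTIME_COST_PER_HOUR + recovery_cost

for name, (hours, cost) in strategies.items():
    print(f"{name}: combined cost {combined_cost(hours, cost):,}")

best = min(strategies, key=lambda s: combined_cost(*strategies[s]))
print("Lowest combined cost:", best)   # warm site, with these assumed figures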

- The next step in developing the business continuity plan is to identify recovery strategies and select the strategy or strategies that best meet the organization’s needs. It is important to remember that the strategy should include the technologies required for recovery and that the policies and procedures should include specific sequencing. The sequence in which systems are
recovered is important for ensuring that the organization can function effectively following a disaster.

- The selection of the recovery strategy is based on the following:
➤ The criticality of the business process and the applications supporting the process
➤ The cost of the downtime and recovery
➤ The time required for recovery
➤ Security

- Critical : These functions cannot be performed unless they are replaced by identical capabilities. Critical applications cannot be replaced by manual methods. Tolerance to interruption is very low; therefore, cost of interruption is very high.

- Vital : These functions can be performed manually, but only for a brief period of time. There is a higher tolerance of interruption than with critical systems and, therefore, somewhat lower costs of interruption, provided that functions are restored within a certain time frame (usually five days or less).

- Sensitive : These functions can be performed manually, at a tolerable cost and for an extended period of time. Although they can be performed manually, it usually is a difficult process and requires additional staff to perform.

- Noncritical : These functions can be interrupted for an extended period of time, at little or no cost to the company, and require little or no catching up when restored.

- The best strategy is one that takes into account the cost of downtime and recovery, the criticality of the system, and the likelihood of occurrence determined during the BIA.

- In addition to actual recovery procedures, the organization should implement different levels of
redundancy so that a relatively small event does not escalate to a full-blown disaster. An example of this type of control is to use redundant routing or fully meshed wide area networks.

- A hot site is a facility that is basically a mirror image of the organization’s current processing facility. It can be ready for use within a short period of time and contains the equipment, network, operating systems, and applications that are compatible with the primary facility being backed up. When hot sites are used, the staff, data files, and documentation are the only additional items needed in the facility.

- A hot site is generally the highest cost among recovery options, but it can be justified when critical applications and data need to resume operations in a short period of time. The costs associated include subscription costs, monthly fees, testing costs, activation costs, and
hourly or daily charges (when activated). The physical facility should incorporate the same level of security as the primary facility and should not be easily identifiable externally (with
signs or company logos, for example).

- Although hot sites are the most expensive type of alternate processing redundancy, they are very appropriate for operations that require immediate or very short recovery times.

- Warm sites are sites that contain only a portion of the equipment and applications required for recovery. In a warm site recovery, it is assumed that computer equipment and operating software can be procured quickly in the event of a disaster. The warm site might contain some computing equipment that is generally of a lower capacity than the equipment at the primary facility.
The contracting and use of a warm site are generally lower cost than a hot site but take longer to get critical business functions back online. Because of the requirement of ordering, receiving, and installing equipment and operating systems, a warm site might be operational in days or weeks, as opposed to hours with a hot site.

- The costs associated with a warm site are similar to but lower than those of a hot site and include subscription costs, monthly fees, testing costs, activation costs, and hourly or daily charges (when activated).

- A cold site can be considered a basic recovery site, in that it has the required space for equipment and environmental controls (air conditioning, heating, power, and so on) but does not contain any equipment or connectivity. A cold site is ready to receive the equipment necessary for a recovery but will take several weeks to activate. Of the three major types of off-site processing facilities (hot, warm, and cold), a cold site is characterized by at least providing
for electricity and HVAC. A warm site improves upon this by providing for redundant equipment and software that can be made operational within a short time.

- A cold site is often an acceptable solution for preparing for recovery of noncritical systems and data.

- Duplicate processing facilities are similar to hot site facilities, with the exception that they are completely dedicated, self-developed recovery facilities. The organization might have a primary site in Washington, D.C., and might designate a duplicate site at one of its own
facilities in Utah. The duplicate facility would have the same equipment, operating systems, and applications and might have regularly synchronized data. In this example, the facility can be activated in a relatively short period of time and does not require the organization to notify a third party for activation.

- Reciprocal agreements are arrangements between two or more organizations with similar equipment and applications. In this type of agreement, the organizations agree to provide computer time (and sometimes facility space) to one another in the event of an emergency. These types of agreements are generally low cost and can be used between organizations that have unique hardware or software that cannot be maintained at a hot or warm site. The disadvantages of reciprocal agreements are that they are not enforceable, that hardware and software changes are generally not communicated over time
(requiring significant reconfiguration in the event of an emergency), and that the sites generally do not employ capacity planning, which may render them useless in an actual emergency.

- A reciprocal agreement is not usually appropriate as an alternate processing solution for organizations with large databases or live transaction processing.

- The BCP team should develop a detailed plan for recovery.
The following factors should be considered when developing
the detailed plan:
➤ Predisaster readiness: Contracts, maintenance and testing, policies, and procedures
➤ Evacuation procedures: Personnel, required company information
➤ Disaster declaration: What defines a disaster? Who is responsible for declaring?
➤ Identification of critical business processes and key personnel (business and IT)
➤ Plan responsibilities: Plan objectives
➤ Roles and responsibilities: Who is responsible for what?
➤ Contract information: Who maintains it, and where is it?
➤ Procedures for recovery: Step-by-step procedures with defined responsibilities
➤ Resource identification: Hardware, software, and personnel required for recovery

- The BCP should be written in clear, simple language and should be understandable to all in the organization. When the plan is complete, a copy should be maintained off-site and should be easily accessible.

- The business continuity plan should be created to minimize the effect of disruptions. The process associated with the development of the plan should include the following steps:
➤ Perform a business impact analysis to determine the effect of disruptions on critical business processes
➤ Identify, prioritize, and sequence resources (systems and personnel) required to support critical business processes in the event of a disruption
➤ Identify recovery strategies that meet the needs of the organization in resumption of critical business functions until permanent facilities are available
➤ Develop the detailed disaster-recovery plan for the IT systems and data that support the critical business functions
➤ Test both the business continuity and disaster recovery plans
➤ Maintain the plan and ensure that changes in business process, critical business functions, and systems assets, such as replacement of hardware, are immediately recorded within the business continuity plan

- As an IS auditor, you should review the plan to ensure that it will allow the organization to resume its critical business functions in the event of a disaster. ISACA states that the IS auditor’s tasks include the following:
➤ Evaluating the business continuity plans to determine their adequacy and currency, by reviewing the plans and comparing them to appropriate standards or government regulations
➤ Verifying that the business continuity plans are effective, by reviewing the results from previous tests performed by both IT and end-user personnel
➤ Evaluating off-site storage to ensure its adequacy, by inspecting the facility and reviewing its contents, security, and environmental controls
➤ Evaluating the ability of IT and user personnel to respond effectively in emergency situations, by reviewing emergency procedures, employee training, and results of their tests and drills

- The organization’s critical data should be stored both onsite, for quick recovery in nondisaster situations, and off-site, in case of a disaster. The Storage Networking Industry Association defines a backup as follows:
A collection of data stored on (usually removable) nonvolatile storage media for purposes of recovery in case the original copy of data is lost or becomes inaccessible.

- Three backup methods are used:
➤ Full backup—In a full backup, all the files (in some cases, applications) are backed up by copying them to a tape or other storage medium. This type of backup is the easiest backup to perform but requires the most time and space on the backup media.
➤ Differential backup—A differential backup is a procedure that backs up only the files that have been changed or added since the last full backup. This type of backup reduces the time and media required.
➤ Incremental backup—An incremental backup is a procedure that backs up only the files that have been added or changed since the last backup of any kind (full or incremental).

- For instance, the organization might choose to perform a single full weekly backup combined
with daily incremental backups. This method decreases the time and media required for the daily backups but increases restoration time. This type of restoration requires more steps and, therefore, more time because the administrator will have to restore the full backup first and then apply the incremental backups sequentially until all the data is restored.
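
- A minimal Python sketch of that restoration sequence: the weekly full backup is restored first, then each daily incremental is applied in order (the file contents are illustrative assumptions):

full_backup = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
incrementals = [                      # only files added or changed since the prior backup
    {"b.txt": "v2"},                  # Monday
    {"d.txt": "v1"},                  # Tuesday
    {"a.txt": "v3", "b.txt": "v3"},   # Wednesday
]

def restore(full, increments):
    restored = dict(full)             # step 1: restore the last full backup
    for inc in increments:            # step 2: apply the incrementals in sequence
        restored.update(inc)
    return restored

print(restore(full_backup, incrementals))
# {'a.txt': 'v3', 'b.txt': 'v3', 'c.txt': 'v1', 'd.txt': 'v1'}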

- Tape backup media is a magnetic medium and, as such, is susceptible to damage from both the environment in which it is stored (temperature, humidity, and so on) and physical damage to the tape through excessive use. For this reason, administrators use backup schemes that allow tapes to be regularly rotated and eventually retired from backup service.

- One popular scheme is the grandfather, father, and son scheme (GFS), in which the central server
writes to a single tape or tape set per backup. When using the GFS scheme, the backup sets are daily (son), weekly (father), and monthly (grandfather).

- Daily backups come first. The four daily backup tapes are usually labeled Monday through Thursday and used on their corresponding day. The tape rotation is based on how long the organization wants to maintain file history: if a one-week file history is required, tapes are overwritten each week; if a three-week history is required, each tape is overwritten every three weeks (requiring 12 tapes). The five father tapes (some months have five weeks) are used for full weekly backups (Friday tapes).
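- A minimal Python sketch of GFS tape selection follows, assuming Friday weekly (father) backups and a month-end monthly (grandfather) backup; the labeling convention is illustrative only:

```python
import calendar
from datetime import date

def gfs_tape_label(d: date) -> str:
    """Return the GFS tape set to use on a given date.

    Assumptions (illustrative, not prescriptive):
    - Mon-Thu: daily backup to a 'son' tape labeled by weekday.
    - Friday:  weekly full backup to a 'father' tape (week number in month).
    - Last calendar day of the month: monthly full to a 'grandfather' tape.
    """
    last_day = calendar.monthrange(d.year, d.month)[1]
    if d.day == last_day:
        return f"grandfather-{d.strftime('%Y-%m')}"
    if d.weekday() == 4:  # Friday
        week_of_month = (d.day - 1) // 7 + 1
        return f"father-week{week_of_month}"
    if d.weekday() < 4:   # Monday through Thursday
        return f"son-{d.strftime('%a')}"
    return "no scheduled backup"  # weekend days outside this schedule

print(gfs_tape_label(date(2010, 5, 17)))  # son-Mon
print(gfs_tape_label(date(2010, 5, 21)))  # father-week3
print(gfs_tape_label(date(2010, 5, 31)))  # grandfather-2010-05
```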

- Two types of tape storage are used:
➤ Onsite storage—One copy of the backup tapes should be stored onsite to effect quick recovery of critical files. Another copy should be moved to an off-site location as redundant storage. Onsite tapes should be stored in a secure fireproof vault, and all access to tapes should be logged.
➤ Off-site storage—The organization could contract with a reputable records-storage company for off-site tape storage or could maintain the facility itself. The physical and environmental controls for the off-site facility should be equal to those of the organization. The contract should stipulate who from the organization will have the authority to provide and pick up tapes, as well as the time frame in which tapes can be delivered in the event of a disaster.

- A storage area network (SAN) is a special-purpose network in which different types of data storage are associated with servers and users. A SAN can either interconnect attached storage on servers into a storage array or connect the servers and users to a storage device that contains disk arrays.

- If the organization cannot implement an off-site SAN, it might opt for an electronic vaulting option. With this option, the organization contracts with a vaulting provider that provides disk arrays for the backup and storage of the organization’s applications and data. Generally, the organization installs an agent on all the servers and workstations that require a backup and identifies the files to be included in the backup. The agent then performs full and
incremental backups, and moves that data via a broadband connection to the electronic vault. Organizations that have a significant amount of data or high levels of change might incur issues in moving large amounts of data across a broadband connection.

- As a part of regular testing and maintenance, organizations can opt to perform either full or partial testing of recovery and continuity plans, though most organizations do not perform full-scale tests because of resource constraints. To continue to improve recovery and continuity plans, organizations can perform a paper test, a walk-through test, a preparedness test, or a full operational test.

- A paper test is the least complex test that can be performed. This test helps ensure that the plan is complete and that all team members are familiar with their responsibilities within the plan. With this type of test, the BCP/DRP plan documents are simply distributed to appropriate managers and BCP/DRP team members for review, markup, and comment.

- A walk-through test is an extension of the paper testing, in that the appropriate managers and BCP/DRP team members actually meet to discuss and walk through procedures of the plan, individual training needs, and clarification of critical plan elements.

- Of the three major types of BCP tests (paper, walk-through, and preparedness), a walk-through test requires only that representatives from each operational area meet to review the plan.

- A preparedness test is a localized version of the full test in which the team members and participants simulate an actual outage or disaster and perform the steps necessary to effect recovery and continuity. This test can be performed against specific areas of the plan instead of the entire plan. This test validates response capability, demonstrates skills and training, and exercises decision-making capabilities. Only the preparedness test actually takes the primary resources offline to test the capabilities of the backup resources and processing.

- Of the three major types of BCP tests (paper, walkthrough, and preparedness), only the preparedness test uses actual resources to simulate a system crash and validate the plan’s effectiveness.

- A full operational test is the most comprehensive test and includes all team members and participants in the plan. The BCP team and participants should have multiple paper and preparedness tests completed before performing a full operational test. This test involves the mobilization of personnel, and disrupts and restores operations just as an outage or disaster
would. This test extends the preparedness test by including actual notification, mobilization of resources, processing of data, and utilization of backup media for restoration.

- During the test, detailed documentation and observations should be maintained.
Per ISACA, these measurements might include the following:
➤ Time—The elapsed time for completion of prescribed tasks, delivery of equipment, assembly of personnel, and arrival at a predetermined site.
➤ Amount—Amount of work performed at the backup site by clerical personnel and information systems processing operations.
➤ Count—The number of vital records successfully carried to the backup site versus the required number, and the number of supplies and equipment requested versus those actually received. Also, the number of critical systems successfully recovered can be measured with the number of
transactions processed.
➤ Accuracy—Accuracy of the data entry at the recovery site versus normal accuracy. Also, the accuracy of actual processing cycles can be determined by comparing output results with those for the same period processed under normal conditions.

- It is important for organizations to remember that a BCP is a living document and will change according to the needs of the organization. The organization should appoint a business continuity coordinator to ensure that periodic testing and maintenance of the plan are implemented. The coordinator should also ensure that team members and participants receive regular training associated with their duties in the BCP and maintain records and results of testing.

- Business disruptions, as opposed to disasters, can be caused by a variety of internal and external factors, including these:
➤ Equipment failure (processors, hard drives, memory, and so on)
➤ Service failures (telecommunications outages, power outages, external application failure, and so on)
➤ Application or data corruption

- In addition to the disaster-recovery plan, the IT department should have policies and procedures for backup, storage of backup media (onsite and offsite), defined roles and responsibilities, and recovery. The IS auditor should review the following to ensure that the organization can recover data and applications in the event of a short-term disruption:
➤ Backup procedures—The procedures identify the backup scheme and define responsibilities for implementing backups.
➤ Onsite storage—All storage media should be stored in environmentally controlled facilities and should be secured in a fire-rated safe.
➤ Off-site storage—The off-site storage facility should have environmental and security controls that equal those of the onsite storage facility. The contract with the off-site facility should contain the points of contact within the organization that have the authority to check storage
media in and out of the facility, as well as clearly defined response times for the delivery of storage media in the event of a disaster.

- The organization’s insurance coverage should take into account the actual cost of recovery and should include coverage for media damage, business interruption, and business continuity processing.

- There are two general types of insurance: property and liability.
Property insurance can protect the organization from a wide variety of losses,
including these:
➤ Buildings
➤ Personal property owned by the organization (tables, desks, chairs, and equipment)
➤ Loss of income
➤ Earthquake
➤ Flood (usually an additional rider on the policy)

- A general liability policy is designed to provide coverage for the following:
➤ Personal injury
➤ Fire liability
➤ Medical expenses
➤ General liability for accidents occurring on the organization premises

- The organization must ensure that all costs associated with a disaster and the recoveries are included in its insurance policies. It might be necessary to purchase additional insurance policies to extend coverage (sometimes called umbrella policies) or purchase specific insurance coverage (flood or terrorism, for example) based on the needs of the organization.

- The BCP team should define key personnel within the business units and IT to implement the plan. These personnel should be a part of the planning, testing, and maintenance of the BCP. Key personnel should have alternates to function in their place, where necessary.

- ➤ Salvage team—This team manages the relocation project. It also makes a more detailed assessment of the damage to the facilities and equipment than was performed initially, provides the emergency-management team with the information required to determine whether planning should be directed toward reconstruction or relocation, provides information necessary for filling out insurance claims, and coordinates the efforts necessary for immediate records salvage, such as restoring paper documents and electronic media.
➤ Relocation team—This team coordinates the process of moving from the hot site to a new location or to the restored original location.

- The MOST important control aspect of maintaining data backups at off-site storage facilities is that critical and time-sensitive data is kept current at the off-site storage facility.

- Duplicate logging of transactions, use of before-and-after images of master records, and time stamping of transactions and communications data are all recommended best practices for establishing effective redundancy of transaction databases.

- Electronic vaulting and remote journaling are both considered effective redundancy controls for backing up real-time transaction databases.

Sunday, May 16, 2010

Chapter 4. Protection of Information Assets

- Defense-in-depth strategies provide layered protection for the organization’s information systems and data. By using multiple layers of controls to protect an asset, this strategy reduces the overall risk of a successful attack in the event of a single control failure. These controls ensure the confidentiality, integrity, and availability of the systems and data, as well as prevent financial losses to the organization.

- The organization should have a formalized security function that is responsible for classifying assets and the risks associated with those assets, and mitigating risk through the implementation of security controls. The combination of security controls ensures that the organization’s information technology assets and data are protected against both internal and external
threats.

- The security function protects the IT infrastructure through the use of physical, logical, environmental and administrative (that is, policies, guidelines, standards, and procedures) controls.

- Three main components of access control exist:
➤ Access is the flow of information between a subject and an object.
➤ A subject is the requestor of access to a data object.
➤ An object is an entity that contains information.

- The access-control model is a framework that dictates how subjects can access objects and defines three types of access:
➤ Discretionary—Access to data objects is granted to the subjects at the data owner’s discretion.
➤ Mandatory—Access to an object is dependent upon security labels.
➤ Nondiscretionary—A central authority decides on access to certain objects based upon the organization’s security policy.

- In implementing mandatory access control (MAC), every subject and object has a sensitivity label (security label). A mandatory access system is commonly used within the federal government to define access to objects. If a document is assigned a label of top secret, all subjects requesting access to the document must hold a clearance of top secret or above to view the document. Those holding a lower clearance (such as secret or confidential) are denied access to the object. In mandatory access control, all subjects and objects have security labels, and the decision for access is determined by the operating or security system. Mandatory access control is used in organizations where confidentiality is of the utmost concern.
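- A minimal sketch of a mandatory access decision in Python, assuming a simple ordered list of sensitivity labels (real MAC implementations in trusted operating systems are far more involved):

```python
# Ordered sensitivity labels, lowest to highest (illustrative hierarchy).
LABELS = ["public", "confidential", "secret", "top-secret"]

def mac_access_allowed(subject_clearance: str, object_label: str) -> bool:
    """Mandatory access control: the subject may read the object only if
    its clearance dominates (is at least as high as) the object's label."""
    return LABELS.index(subject_clearance) >= LABELS.index(object_label)

print(mac_access_allowed("secret", "confidential"))  # True
print(mac_access_allowed("secret", "top-secret"))    # False
```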

- Nondiscretionary access control can use different mechanisms based on the needs of the organization. The first is role-based access, in which access to an object(s) is based on the role of the user in the company. In other words, a data entry operator should have create access to a particular database. All data entry operators should have create access based on their role (data entry operator). This type of access is commonly used in environments with high turnover because the access rights apply to a subject’s role, not the subject.
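- A minimal role-based sketch in Python, using a hypothetical role-to-permission mapping; because rights attach to the role rather than to the individual, turnover only changes the user-to-role assignment:

```python
# Hypothetical role definitions; permissions attach to roles, not users.
ROLE_PERMISSIONS = {
    "data_entry_operator": {"create"},
    "supervisor": {"create", "read", "update"},
}
USER_ROLES = {"alice": "data_entry_operator", "bob": "supervisor"}

def rbac_allowed(user: str, action: str) -> bool:
    """Grant access based on the user's role, not the user's identity."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(rbac_allowed("alice", "create"))  # True
print(rbac_allowed("alice", "update"))  # False
```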

- Task-based access control is determined by which tasks are assigned to a user. In this scenario, a user is assigned a task and given access to the information system to perform that task. When the task is complete, the access is revoked; if a new task is assigned, the access is granted for the new task.

- Lattice-based access is determined by the sensitivity or security label assigned to the user’s role. This scenario provides for an upper and lower bound of access capabilities for every subject and object relationship. Consider, for example, that the role of our user is assigned an access level of secret. That user may view all objects that are public (lower bound) and secret (upper bound), as well as those that are confidential (which falls between public and
secret). This user’s role would not be able to view top-secret documents because they exceed the upper bound of the lattice.

- Another method of access control is rule-based access. The previous discussion of firewalls in Chapter 3, “Technical Infrastructure and Operational Practices and Infrastructure,” demonstrated the use of rule-based access implemented through access control lists (ACLs). Rule-based access is generally used between networks or applications. It involves a set of rules from which incoming requests can be matched and either accepted or rejected. Rule-based controls are considered nondiscretionary access controls because the administrator of the system sets the controls rather than the information users.
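- A minimal Python sketch of rule-based access as a firewall-style ACL; the rule format is illustrative, not any vendor's syntax, and unmatched requests fall through to an implicit deny:

```python
import ipaddress

# Ordered ACL rules, evaluated top-down; first match wins (illustrative format).
ACL_RULES = [
    {"src": "10.0.0.0/8", "port": 443, "action": "permit"},
    {"src": "10.0.0.0/8", "port": 23,  "action": "deny"},
]

def acl_decision(src_ip: str, dst_port: int) -> str:
    """Match an incoming request against the rule list set by the administrator."""
    for rule in ACL_RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["port"]):
            return rule["action"]
    return "deny"  # implicit deny when no rule matches

print(acl_decision("10.1.2.3", 443))     # permit
print(acl_decision("192.168.1.5", 443))  # deny (no matching rule)
```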

- IS auditors should review access control lists (ACL) to determine user permissions that have been granted for a particular resource.

- Restricted interfaces are used to control access to both functions and data within applications through the use of restricted menus or shells. They are also commonly implemented as database views: the view should be configured so that only the data for which the user is authorized is presented on the screen. A good example of a restricted interface is an Automatic Teller Machine (ATM).

- The administration of access control can be either centralized or decentralized and should support the policy, procedures, and standards for the organization. In a centralized access control administration system, a single entity or system is responsible for granting access to all users. In decentralized or distributed administration, the access is given by individuals who are closer to the resources.

- The IT organization also should have a method of logging user actions while accessing objects within the information system, to establish accountability (linking individuals to their activities).
Access control involves these components:
1. Identification
2. Authentication
3. Authorization

- The most common form of authentication includes the use of passwords, but authentication can take three forms:
➤ Something you know—A password.
➤ Something you have—A token, ATM bank card, or smart card.
➤ Something you are—Unique personal physical characteristic(s) (biometrics). These include fingerprints, retina scans, iris scans, hand geometry, and palm scans.
These forms of authentication can be used together. If two or more factors are used together, this is known as strong authentication (two-factor or multifactor authentication).

- Different types of passwords exist, depending on the implementation. In some systems, the passwords are user created; others use cognitive passwords. A cognitive password uses fact-based or opinion-based information to verify an individual’s identity: What is your mother’s maiden name? What is the name of your favorite pet? What elementary school did you attend? The user chooses a question and provides the answer, which is stored in the system. If the user forgets the password, the system asks the security question. If it is answered correctly, the system resets the password or sends the existing password via email.

- The token can be either synchronous or asynchronous. When using a synchronous token, the generation of the password can be timed (the password changes every n seconds or minutes) or
event driven (the password is generated on demand with a button). The use of token-based authentication generally incorporates something you know (password) combined with something you have (token) to authenticate. A token device that uses asynchronous authentication uses a challenge response mechanism to authenticate. In this scenario, the system displays a
challenge to the user, which the user then enters into the token device. The token device returns a different value. This value then is entered into the system as the response to be authenticated.
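- A minimal Python sketch of the asynchronous (challenge-response) exchange, using an HMAC over the challenge as the response; real tokens use vendor-specific algorithms and keys protected in hardware, so this is illustrative only:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"per-token secret provisioned at enrollment"  # illustrative

def issue_challenge() -> str:
    """The authentication system displays a random challenge to the user."""
    return secrets.token_hex(4)

def token_response(challenge: str) -> str:
    """The token device computes a response from the challenge and its key."""
    return hmac.new(SHARED_KEY, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str) -> bool:
    """The system recomputes the expected response and compares."""
    expected = token_response(challenge)
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = token_response(challenge)   # user keys the challenge into the token
print(verify(challenge, response))     # True
```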

- In a database, system integrity is most often ensured through table link verification
and reference checks.

- IS auditors should first determine points of entry when performing a detailed network
assessment and access control review.

- The longer the key is, the more difficult it is to decrypt a message because of the amount of computation required to try all possible key combinations (work factor). Cryptanalysis is the science of studying and breaking the secrecy of encryption algorithms and their necessary pieces. The work factor involved in brute-forcing encrypted messages relies significantly
on the computing power of the machines that are brute-forcing the message.
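- A back-of-the-envelope Python sketch of how the work factor grows with key length; the assumed guess rate is arbitrary and purely for illustration:

```python
# Rough brute-force work-factor estimate: trying every possible key.
GUESSES_PER_SECOND = 1e12  # assumed attacker capability, purely illustrative

def years_to_exhaust(key_bits: int) -> float:
    """Time to try the entire keyspace at the assumed guess rate."""
    keyspace = 2 ** key_bits
    seconds = keyspace / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{years_to_exhaust(bits):.3g} years to exhaust")
```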

- The strength of a cryptosystem is determined by a combination of key length, initial input vectors, and the complexity of the data-encryption algorithm that uses the key.

- An elliptic curve cryptosystem has a much higher computation speed than RSA encryption.

- Rich mathematical structures are used for efficiency. ECC can provide the same level of protection as RSA, but with a key size that is smaller than what RSA requires.

- The Digital Signature Algorithm (DSA) can be used only for digital signatures. Its security comes from the difficulty of computing discrete logarithms in a finite field.

- A long asymmetric encryption key increases encryption overhead and cost.

- A public key infrastructure (PKI) incorporates public key cryptography, security policies, and standards that enable key maintenance (including user identification, distribution, and revocation) through the use of certificates.

- The certificates used by the CAs incorporate identity information, certificate serial numbers, certificate version numbers, algorithm information, lifetime dates, and the signature of the issuing authority (CA). The most widely used certificate types are the Version 3 X.509 certificates. The X.509 certificates are commonly used in secure web transactions via Secure Sockets Layer (SSL).
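- As an illustration, the certificate fields listed above can be read with the widely used Python cryptography package; this sketch assumes a PEM-encoded certificate saved as server.pem and that the package is installed:

```python
# pip install cryptography
from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:      ", cert.subject.rfc4514_string())
print("Issuer (CA):  ", cert.issuer.rfc4514_string())
print("Serial number:", cert.serial_number)
print("Version:      ", cert.version)
print("Valid from/to:", cert.not_valid_before, "/", cert.not_valid_after)
print("Signature alg:", cert.signature_hash_algorithm.name)
```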

- A Certifying Authority (CA) can delegate the processes of establishing a link between the requesting entity and its public key to a Registration Authority (RA). An RA performs certification and registration duties to offload some of the work from the CAs. The RA can confirm individual identities, distribute keys, and perform maintenance functions, but it cannot issue certificates. The CA still manages the digital certificate life cycle, to ensure that adequate security and controls exist.

- Per ISACA, power failures can be grouped into four distinct categories, based on the duration and relative severity of the failure:
➤ Total failure—A complete loss of electrical power, which might affect anything from a single building up to an entire geographic area. This is often caused by weather conditions (such as a storm or earthquake) or the incapability of an electrical utility company to meet user demands (such as during summer months).
➤ Severely reduced voltage (brownout)—The failure of an electrical utility company to supply power within an acceptable range (108–125 volts AC in the United States). Such failure places a strain on electrical equipment and could limit its operational life or even cause permanent
damage.
➤ Sags, spikes, and surges—Temporary and rapid decreases (sags) or increases (spikes and surges) in voltage levels. These anomalies can cause loss of data, data corruption, network transmission errors, or even physical damage to hardware devices such as hard disks or memory chips.
➤ Electromagnetic interference (EMI)—Interference caused by electrical storms or noisy electrical equipment (such as motors, fluorescent lighting, or radio transmitters). This interference could cause computer systems to hang or crash, and could result in damages similar to those
caused by sags, spikes, and surges.

- The organization can provide a complete power system, which would include the UPS, a power conditioning system (PCS), and a generator. The PCS is used to prevent sags, spikes, and surges from reaching the electrical equipment by conditioning the incoming power to reduce voltage deviations and provide steady-state voltage regulation.

- Electrical equipment must operate in climate-controlled facilities that ensure proper temperature and humidity levels. Relative humidity should be between 40% and 60%, and the temperature should be between 70°F and 74°F.

- A number of fire-detection systems are activated by heat, smoke, or flame. These systems should provide an audible signal and should be linked to a monitoring system that can contact the fire department.
➤ Smoke detectors—Placed both above and below the ceiling tiles. They use optical detectors that detect the change in light intensity when there is a presence of smoke.
➤ Heat-activated detectors—Detect a rise in temperature. They can be configured to sound an alarm when the temperature exceeds a certain level.
➤ Flame-activated detectors—Sense infrared energy or the light patterns associated with the pulsating flames of a fire.

- Fire-suppression agents, the types of fire they address, and how they suppress fire:
➤ Water - Common combustibles - Reduces temperatures
➤ CO2 - Liquid and electrical fires - Removes fuel and oxygen
➤ Soda acid - Liquid and electrical fires - Removes fuel and oxygen
➤ Gas - Chemical fires - Interferes with the chemical reaction necessary for fire

- The following are automatic fire suppression systems:
➤ Water sprinklers—These are effective in fire suppression, but they will damage electrical equipment.
➤ Water dry pipe—A dry-pipe sprinkler system suppresses fire via water that is released from a main valve and delivered through a system of dry pipes that fill with water when the fire alarm activates the water pumps. Because the pipes hold no water until activation, a dry-pipe system reduces the risk of accidental leakage. Water-based suppression systems are an acceptable means of fire suppression, but they should be combined with an automatic power shut-off system.

- Although many methods of fire suppression exist, dry-pipe sprinklers are considered to be the most environmentally friendly because they are water based as opposed to chemical based in the case of halon or CO2.

- ➤ Halon—Pressurized halon gas is released, which interferes with the chemical reaction of a fire. Halon damages the ozone layer and, therefore, has been banned; replacement chemicals include FM-200, NAF SIII, and NAF PIII.
➤ CO2—Carbon dioxide replaces oxygen. Although it is environmentally acceptable, it cannot be used in sites that are staffed because it is a threat to human life.

- Personally escorting visitors is a preferred form of physical access control for guests.

- A biometric system by itself is advanced and very sensitive. This sensitivity can make biometrics prone to error. These errors fall into two categories:
➤ False Rejection Rate (FRR) Type I error—The biometric system rejects an individual who is authorized to access the system.
➤ False Acceptance Rate (FAR) Type II error—The biometric system accepts unauthorized individuals who should be rejected.

- Most biometric systems have sensitivity levels associated with them. When the sensitivity level is increased, the rate of rejection errors increases (authorized users are rejected). When the sensitivity level is decreased, the rate of acceptance (unauthorized users are accepted) increases. Biometric devices use a comparison metric called the Equal Error Rate (EER), which is the rate at which the FAR and FRR are equal or cross over. In general, the lower the EER is, the more accurate and reliable the biometric device is.

- When evaluating biometric access controls, a low Equal Error Rate (EER) is preferred because Equal Error Rates (EERs) are used as the best overall measure of a biometric system’s effectiveness.
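- A minimal Python sketch of locating an approximate EER (crossover) point from FAR/FRR values measured at different sensitivity settings; the sample numbers are invented for illustration:

```python
# Hypothetical measurements: (sensitivity setting, FAR %, FRR %).
measurements = [
    (1, 8.0, 0.5),
    (2, 4.0, 1.5),
    (3, 2.0, 2.1),
    (4, 0.8, 4.0),
    (5, 0.2, 7.5),
]

# The EER is approximated by the setting where FAR and FRR are closest.
setting, far, frr = min(measurements, key=lambda m: abs(m[1] - m[2]))
print(f"Approximate EER near setting {setting}: FAR={far}%, FRR={frr}%")
```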

- Traffic analysis is a passive attack method intruders use to determine potential network
vulnerabilities.

- ➤ Eavesdropping—In this attack, also known as sniffing or packet analysis, the intruder uses automated tools to collect packets on the network. These packets can be reassembled into messages and can include email, names and passwords, and system information.

- A virus is a computer program that infects systems by inserting copies of itself into executable code on a computer system.

- A worm is another type of computer program that is often incorrectly called a virus. The difference between a virus and a worm is that the virus relies on the host (infected) system for further propagation because it inserts itself into applications or programs so that it can replicate and perform its functions. Worms are malicious programs that can run independently
and can propagate without the aid of a carrier program such as email. Worms can delete files, fill up the hard drive and memory, or consume valuable network bandwidth.

- A polymorphic virus has the capability of changing its own code, enabling it to have many different variants. The capability of a polymorphic virus to change its signature pattern enables it to replicate and makes it more difficult for antivirus systems to detect it.

- Another type of malicious code is a logic bomb, which is a program or string of code that executes when a sequence of events or a prespecified time or date occurs. A stealth virus is a virus that hides itself by intercepting disk access requests.

- Integrity checkers are programs that detect changes to systems, applications, and data. Integrity checkers compute a binary number, called a cyclic redundancy check (CRC), for each selected program.
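- A minimal Python sketch of an integrity checker using CRC-32 from the standard library; a production integrity checker would more likely use a cryptographic hash and a protected baseline store:

```python
import json
import zlib

def crc32_of(path: str) -> int:
    """Compute the CRC-32 checksum of a file's contents."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def build_baseline(paths, baseline_file="baseline.json"):
    """Record the current checksum of each monitored file."""
    with open(baseline_file, "w") as f:
        json.dump({p: crc32_of(p) for p in paths}, f)

def check_against_baseline(baseline_file="baseline.json"):
    """Report any file whose checksum no longer matches the baseline."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        status = "OK" if crc32_of(path) == expected else "MODIFIED"
        print(f"{status}: {path}")
```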

- A vulnerability assessment is used to determine potential risks to the organization’s systems and data. Penetration testing is used to test controls implemented as countermeasures to vulnerabilities.

- To ensure that the organization’s security controls are effective, a comprehensive security program should be implemented. The security program should include these components:

➤ Continuous user awareness training
➤ Continuous monitoring and auditing of IT processes and management
➤ Enforcement of acceptable use policies and information security controls

- The incident response team should ensure the following:
➤ Systems involved in the incident are segregated from the network so they do not cause further damage.
➤ Appropriate procedures for notification and escalation are followed.
➤ Evidence associated with the incident is preserved.
➤ Documented procedures to recover systems, applications, and data are followed.

- An IDS can be signature based, statistical based, or a neural network. A signature-based IDS monitors and detects known intrusion patterns. A statistical-based IDS compares data from sensors against an established baseline (created by the administrator). Neural networks monitor patterns of activity or traffic on a network. This self-learning process enables the IDS to create a database (baseline) of activity for comparison to future activity.
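- A minimal Python sketch of the statistical approach: activity that deviates too far from the administrator-established baseline is flagged; the threshold and sample data are arbitrary:

```python
import statistics

# Baseline samples collected during normal operation (e.g., connections/minute).
baseline = [42, 39, 45, 41, 44, 40, 43, 38, 46, 42]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from baseline."""
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(44))    # False: within normal variation
print(is_anomalous(250))   # True: likely worth an alert
```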

- Data owners are ultimately responsible and accountable for reviewing user access to
systems.

- Per ISACA, the IS auditor should review the following when auditing security management, logical access issues, and exposures.

* Review Written Policies, Procedures, and Standards
* Logical Access Security Policy
These policies should encourage limiting logical access on a need-to-know
basis and should reasonably assess the exposure to the identified concerns.
* Formal Security Awareness and Training
Promoting security awareness is a preventive control. Through this process,
employees become aware of their responsibility for maintaining good physical
and logical security.
* Data Ownership
Data ownership refers to the classification of data elements and the allocation
of responsibility for ensuring that they are kept confidential, complete, and
accurate.
* Security Administrators
Security administrators are responsible for providing adequate physical and
logical security for the IS programs, data, and equipment.
* Access Standards

- When evaluating logical access controls, the IS auditor should proceed in the following order:
➤ Obtain a general understanding of the security risks facing information processing, through a review of relevant documentation, inquiry, observation, risk assessment, and evaluation techniques
➤ Document and evaluate controls over potential access paths to the system, to assess their adequacy, efficiency, and effectiveness, by reviewing appropriate hardware and software security features and identifying any deficiencies
➤ Test controls over access paths, to determine whether they are functioning and effective, by applying appropriate audit techniques
➤ Evaluate the access control environment, to determine whether the control objectives are achieved, by analyzing test results and other audit evidence
➤ Evaluate the security environment, to assess its adequacy, by reviewing written policies, observing practices and procedures, and comparing them to appropriate security standards or practices and procedures used by other organizations

- Data owners, such as corporate officers, are ultimately responsible and accountable for access control of data. Although security administrators are indeed responsible for securing data, they do so at the direction of the data owners. A security administrator is an example of a data custodian. Data users access and utilize the data for authorized tasks.

- Data classification is a process that allows an organization to implement appropriate controls according to data sensitivity. Before data sensitivity can be determined by the data owners, data ownership must be established.


Tuesday, May 4, 2010

Chapter 3. Technical Infrastructure and Operational Practices and Infrastructure

- IT managers must define the role and articulate the value of the IT function. This includes the IT organizational structure as well as operational practices. The IT management functions are generally divided into two functional areas:
➤ Line management—Line managers are concerned with the routine operational decisions on a day-to-day basis.
➤ Project management—Project managers work on specific projects related to the information architecture. Projects are normally a one-time effort with a fixed start, duration, and end that reach a specific deliverable or objective.

- Earlier in this section, we discussed some of the attributes of computing systems, including multiprocessing, multitasking, and multithreading. These attributes are defined as follows:
➤ Multitasking—Multitasking allows computing systems to run two or more applications concurrently. This process enables the systems to allocate a certain amount of processing power to each application. In this instance, the tasks of each application are completed so quickly that it appears to multiple users that there are no disruptions in the process.
➤ Multiprocessing—Multiprocessing links more than one processor (CPU) sharing the same memory, to execute programs simultaneously. In today’s environment, many servers (mail, web, and so on) contain multiple processors, allowing the operating system to speed the time for
instruction execution. The operating system can break up a series of instructions and distribute them among the available processors, effecting quicker instruction execution and response.
➤ Multithreading—Multithreading enables operating systems to run several processes in rapid sequence within a single program or to execute (run) different parts, or threads, of a program simultaneously. When a process is run on a computer, that process creates a number of additional
tasks and subtasks. All the threads (tasks and subtasks) can run at one time and combine as a rope (entire process). Multithreading can be defined as multitasking within a single program.
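- A minimal Python sketch of multithreading: several threads of one program run concurrently; multiprocessing (multiple CPUs) would instead use a mechanism such as the multiprocessing module and is not shown here:

```python
import threading
import time

def subtask(name: str, delay: float) -> None:
    """One thread of the program: simulates a task running concurrently."""
    time.sleep(delay)
    print(f"{name} finished after {delay}s")

threads = [threading.Thread(target=subtask, args=(f"thread-{i}", 0.1 * i))
           for i in range(1, 4)]
for t in threads:
    t.start()          # all threads run within the same program
for t in threads:
    t.join()           # wait for every thread ("strand of the rope") to finish
print("all threads complete")
```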

- Risk management is the process of assessing risk, taking steps to reduce risk to an acceptable level (mitigation) and maintaining that acceptable level of risk. Risk identification and management works across all areas of the organizational and IT processes.

- Reviewing a diagram of the network topology is often the best first step when auditing
IT systems.
- The change-control board provides critical oversight for any production IT infrastructure. This board ensures that all affected parties and senior management are aware of both major and minor changes within the IT infrastructure. The change-management process establishes an open line of communication among all affected parties and allows those parties and subject matter experts (SMEs) to provide input that is instrumental in the change process.

- Client/server architectures differ depending on the needs of organization. An additional component of client/server computing is middleware. Middleware provides integration between otherwise distinct applications. As an example of the application of middleware, IT organizations that have legacy applications (mainframe, non–client/server, and so on) can implement web-based front ends that incorporate the application and business logic in a
central access point. The web server and its applications (Java servlets, VBScript, and so on) incorporate the business logic and create requests to the legacy systems to provide requested data. In this scenario, the web “front end” acts as middleware between the users and the legacy systems. This type of implementation is useful when multiple legacy systems contain data that
is not integrated. The middleware can then respond to requests, correlate the data from multiple legacy applications (accounting, sales, and so on), and present it to the client.
Middleware is commonly used to provide the following functionality:
➤ Transaction-processing (TP) monitors—These applications or programs monitor and process database transactions.
➤ Remote procedure calls (RPC)—An RPC is a function call in client/server computing that enables clients to request that a particular function or set of functions be performed on a remote computer (see the sketch after this list).
➤ Messaging services—User requests (messages) can be prioritized, queued, and processed on remote servers.
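- A minimal sketch of the RPC idea using Python's standard-library xmlrpc modules; the service address, port, and function are hypothetical:

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def get_customer_balance(customer_id: str) -> float:
    """Pretend this queries a legacy accounting system."""
    return {"C001": 125.50}.get(customer_id, 0.0)

# "Legacy" server exposing a function for remote calls (illustrative address).
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(get_customer_balance)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client (e.g., a web front end) invokes the function as if it were local.
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
print(proxy.get_customer_balance("C001"))  # 125.5
```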

- Three basic database models exist: hierarchical, network, and relational. A hierarchical database model establishes a parent-child relationship between tables (entities). It is difficult to manage relationships in this model when children need to relate to more than one parent; this can lead to data redundancy. In the network database model, children can relate to more than one parent. This can lead to complexity in relationships, making the database difficult to understand, modify, and recover in the event of a failure. The relational database model separates the data from the database structure, allowing for flexibility in implementing, understanding, and modifying. The relational structure enables new relationships to be built based on business needs. The key feature of relational databases is normalization, which structures data to minimize duplication and inconsistencies. Normalization rules include these:
➤ Each field in a table should represent unique information.
➤ Each table should have a primary key.
➤ You must be able to make changes to the data (other than the primary key) without affecting other fields.

- Users access databases through a directory system that describes the location of data and the access method. This system uses a data dictionary, which contains an index and description of all the items stored in the database. The directory system of a database-management system describes the location of data and the access method.

- In a transaction-processing database, all data transactions (updates, creates, and deletes) are logged to a transaction log. When users update the database, the data contained in the update is written first to the transaction log and then to the database. The purpose of the transaction log is to hold transactions for a short period of time until the database software is ready to commit them; this ensures that the records associated with the change are ready to accept the entire transaction. The database software checks the log periodically and then commits all transactions contained in the log since the last commit. In environments with high volumes of transactions, records are locked while transactions are committed (concurrency control), to enable the completion of the transactions. Concurrency controls prevent integrity problems when two processes attempt to update the same data at the same time. Atomicity is the process by which data integrity is ensured through the completion of an entire transaction or not at all.

- Atomicity enforces data integrity by ensuring that a transaction is completed either in its entirety or not at all. Concurrency controls are used as a countermeasure for potential database corruption when two processes attempt to simultaneously edit or update the same information.
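- A minimal Python sketch of atomicity using the standard-library sqlite3 module: either the whole transaction commits, or a failure rolls back every change in it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("checking", 100), ("savings", 50)])
conn.commit()

try:
    with conn:  # one transaction: commit on success, roll back on error
        conn.execute("UPDATE accounts SET balance = balance - 80 "
                     "WHERE name = 'checking'")
        raise RuntimeError("simulated failure before the second update")
        # (never reached; the rollback undoes the first update)
        conn.execute("UPDATE accounts SET balance = balance + 80 "
                     "WHERE name = 'savings'")
except RuntimeError:
    pass

# Neither update was applied: the transaction was atomic.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'checking': 100, 'savings': 50}
```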

- By doing so, the browser makes a special request to have the HTTP request encrypted with the Secure Sockets Layer (SSL) encryption protocol. In conjunction with HTTP, SSL operates at OSI Layer 5, or the session layer.

- At this point, the top three layers—application, presentation, and session— have been used in managing the request for data. This all occurs before consideration of how to transport or logically address the actual packets that need to be transmitted. Looking at all the activity just described, it would make sense that the data itself (data payload) and all the ancillary HTTP, HTTPS, JPG, GIF, and SSL communication could be considered the “thought” we desire to transmit. The technical term for this networking “thought” is the protocol data unit (PDU), known as data. You now understand how a computer needs to think before it speaks, just as you do.

- The data PDU can now be encapsulated into a segment for transport using an OSI Layer 4 protocol such as the Transmission Control Protocol (TCP).

- TCP is a nifty OSI Layer 4 (transport-layer) networking protocol that is especially adept at this task. Not only does it segment the communication, but it does so in a methodical way that allows the receiving host to rebuild the data easily by attaching sequence numbers to its TCP segments.

- Without going into technical specifics, TCP can implement a similar system to ensure reliable transport. However, if you do not have the money or time to arrange for a return receipt for your letter, you might opt to forgo the assurance that the return receipt provides and send it via regular post, which guarantees only best effort, or unreliable delivery.

- The technical parallel is to encapsulate the data PDU using the User Datagram Protocol (UDP) at the OSI transport layer instead of TCP. UDP does not implement a system of successful transmission confirmation, and is known as unreliable transport, providing best-effort delivery. The data PDU itself is unchanged either way because it is merely encapsulated by a transport
protocol for transmission.
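- A minimal Python sketch of the transport choice using the standard socket module: TCP connects and provides acknowledged, ordered delivery, while UDP sends a datagram on a best-effort basis; the address and port are illustrative:

```python
import socket

PAYLOAD = b"the data PDU"          # the payload is the same either way
SERVER = ("127.0.0.1", 9999)       # illustrative address and port

def send_tcp():
    """Reliable transport: TCP establishes a connection before sending."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(SERVER)          # handshake; delivery is acknowledged and ordered
        s.sendall(PAYLOAD)

def send_udp():
    """Best-effort transport: UDP sends the datagram with no confirmation."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(PAYLOAD, SERVER)  # fire and forget; no acknowledgment
```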

- By using an OSI Layer 3, or network-layer, protocol such as Internet Protocol (IP), we can
encapsulate the segment into a packet with a logical destination address.

- If Ethernet is being used for the local area network, the IP packet is encapsulated within an Ethernet frame. If the IP packet needs to traverse a point-to-point link to the Internet service provider (via your point-to-point dial-up connection), the packet is encapsulated using PPP. These protocols are used to link the logical data processes with the physical transmission
medium. Appropriately, this occurs at OSI Layer 2, or the data link layer.

- Protocols at this layer provide access to media (network interface cards, for example) using MAC addresses. They can sometimes also provide transmission error detection, but they cannot provide error correction.

- The last step before transmission is to break the frame into electromagnetic digital signals
at the OSI physical layer, which communicates bits over the connected physical medium. These bits are received at the destination host, which can reconstruct the bits into Ethernet frames and decapsulate the frames back to IP packets.

- Degradation of the communication signal as it meets the resistance of a length of network cabling (signal attenuation) is a risk that results from using cables that are longer than permitted by the physical media and network topology type.

- CSMA/CD is a method by which devices on the network can detect collisions and retransmit. When the collision is detected, the source station stops sending the original transmission and sends a signal to all stations that a collision has occurred on the network. All stations then execute what is known as a random collision back-off timer, which delays all transmission on the
network, allowing the original sending station to retransmit.

- CSMA/CA is a method by which a sending station lets all the stations on the network know that it intends to transmit data. This intent signal lets all other devices know that they should not transmit because a collision could occur, thereby achieving collision avoidance.

- If two networks are separated by a bridge, broadcast traffic, but not collision traffic, is allowed to pass. This reduces the size of the collision domain. Routers are used to segment both collision and broadcast domains by directing traffic and working at Layer 3 of the OSI model.

- Token-passing networks do not have collisions because only one station at a time can transmit data.

- The bus topology is primarily used in smaller networks where all devices are connected to a single communication line and all transmissions are received by all devices. Cable breaks can cause the entire network to stop functioning.

- In a star topology, each device (node) is linked to a hub or switch, which provides
the link between communicating stations.

- In contrast to a bus topology, a star topology enables devices to communicate even if a device is not working or is no longer connected to the network. Generally, star networks
are more costly because they use significantly more cable and hubs/switches. If the IT organization has not planned correctly, a single failure of a hub/switch can render all stations connected incapable of communicating with the network. To overcome this risk, IT organizations should create a complete or partial mesh configuration, which creates redundant interconnections between network nodes.

- Providing network path redundancy is the best countermeasure or control for potential network device failures. A mesh network topology provides a point-to-point link with every network host. If each host is configured to route and forward communication, this topology provides the greatest redundancy of routes and the greatest network fault tolerance.

- A simple ring topology is vulnerable to failure if even one device on the ring fails. IBM’s Token Ring topology uses dual concentric rings as a more robust ring topology solution.

- Communication on WAN links can be either simplex (one-way), half-duplex (one way at a time), or full duplex (separate circuits for communicating both ways at the same time).

- The first generation of firewalls is known as packet-filtering firewalls, or circuit-level gateways. This type of firewall uses an access control list (ACL) applied at OSI layer 3.

- Improper configuration of traffic rules or access lists is the most common and critical
error in firewall implementations.

- Stateful packet-inspection firewalls are considered the third generation of firewall gateways. They provide additional features, in that they keep track of all packets through all 7 OSI layers until that communication session is closed.

- Proxies are application-level gateways. They differ from packet filtering in that they can look at all the information in the packet (not just header) all the way to the application layer. An application-layer gateway, or proxy firewall, provides the greatest degree of protection and control because it inspects all seven OSI layers of network traffic.

- In general, there are three basic types of firewall configurations:
➤ Bastion host—A basic firewall architecture in which all internal and external communications must pass through the bastion host. The bastion host is exposed to the external network. Therefore, it must be locked down, removing any unnecessary applications or services. A bastion
host can use packet filtering, proxy, or a combination; it is not a specific type of hardware, software, or device.
➤ Screened host—A screened host configuration generally consists of a screening router (border router) configured with access control lists. The router employs packet filtering to screen packets, which are then typically passed to the bastion host, and then on to the internal network. The screened host (the bastion host in this example) is the only device that receives traffic from the border router. This configuration provides an additional layer of protection for the screened host.
➤ Screened subnet—A screened subnet is similar to a screened host, with two key differences: the subnet generally contains multiple devices, and the bastion host is sandwiched between two routers (the exterior router and the interior router). In this configuration, the exterior router provides packet filtering and passes the traffic to the bastion. After the traffic is
processed, the bastion passes the traffic to the interior router for additional filtering. The screened subnet, sometimes called a DMZ, provides a buffer zone between the internal and external networks. This configuration is used when an external population needs access to services (web, FTP, email) that can be allowed through the exterior router, but the interior router will not allow those requests to the internal network.

- Layering perimeter network protection by configuring the firewall as a screened host
in a screened subnet behind the bastion host provides a higher level of protection from external attack.

- In the case of software-based firewalls, it is important to remember that they will be
installed on top of commercial operating systems, which may have their own vulnerabilities. This type of implementation requires the IT organization to ensure that the operating system is properly locked down and that there is a process in place to ensure continued installation of security patches.

- Modems convert analog transmissions to digital, and digital transmission to analog. They are required for analog transmissions to enter a digital network.

- A switch combines the functionality of a multi-port bridge and the signal amplification of a repeater.

- An IS auditor usually places more reliance on evidence directly collected, such as
through personal observation.

- The COBIT framework provides 11 processes in the management and deployment of IT systems:
1. Develop a strategic plan
2. Articulate the information architecture
3. Find an optimal fit between the IT and the organization’s strategy
4. Design the IT function to match the organization’s needs
5. Maximize the return on the IT investment
6. Communicate IT policies to the user community
7. Manage the IT workforce
8. Comply with external regulations, laws, and contracts
9. Conduct IT risk assessments
10. Maintain a high-quality systems-development process
11. Incorporate sound project-management techniques

- Computer resources should be carefully monitored to match utilization needs with proper resource capacity levels. Capacity planning and management relies upon network, systems, and staffing monitoring to ensure that organizational goals and objectives regarding information confidentiality, integrity, and availability are met.

- A configuration-management audit should always verify software licensing for authorized use.

- It is important that database referential integrity be enforced, to avoid orphaned references, or “dangling tuples.” Relational integrity is enforced more at the record level.

- A switch is most appropriate for segmenting the network into multiple collision domains to achieve the result of fewer network communications errors because of congestion-related collisions.
