SERIES 5:
As engineers, we’re often wired to think in systems, processes, and precision, but when unexpected disruptions hit, even the most efficient operations can unravel without a solid continuity plan. That’s where Business Impact Assessment (BIA) steps in. It’s not just corporate jargon; it’s a practical tool we can use to map out which parts of our work are most critical, what happens if they fail, and how long we can afford to be offline. Too often, engineers are left reacting to failures rather than planning around them, simply because no one put the technical impact into a business context. This guide bridges that gap, giving you a clear way to protect your projects, your team’s time, and your organization’s bottom line. Workability and operational stability also matter, as long as the other concerns are taken into account, and this kind of data often results in categories of prioritization.
Now, business impact assessment involves the following steps. The first one is identifying priorities. There will be certain activities that are most essential to our day-to-day operations when disaster strikes, and the priority identification task involves creating a comprehensive list of business processes and ranking them in order of importance. Asset value can be used here: the BCP team should draw up a list of the organization’s assets and the value each asset has in monetary terms. Then there is maximum tolerable downtime (MTD), which defines the maximum length of time a business function can be inoperable without causing irreparable harm to the business. Then there is the recovery time objective (RTO), which is the amount of time in which the function can be recovered after a disruption. The goal of the BCP process is to ensure that your RTOs are less than your MTDs. Next comes risk identification. Risk comes in two forms: natural risks, such as hurricanes and earthquakes, and man-made risks, such as fires, theft, and terrorism. The risk identification portion of the process is purely qualitative in nature; at this point, the BCP team should not be concerned with the likelihood that these types of risks will actually materialize or the amount of damage such an occurrence would inflict on the continued operation of the business. The likelihood assessment that follows is expressed in terms of an annualized rate of occurrence (ARO). These numbers should be based on corporate history, the professional experience of team members, and advice from experts such as meteorologists, seismologists, fire prevention professionals, and other consultants.
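To make the priority numbers concrete, here is a minimal sketch, with hypothetical figures, of how the BCP team might check the stated goal that each function’s RTO stays below its MTD.

```python
# Minimal worked example of the BIA numbers discussed above, with hypothetical
# figures; the check reflects the stated goal that each function's RTO stays
# below its MTD.
functions = [
    # name, MTD (hours), RTO (hours)
    ("order processing", 4, 2),
    ("payroll", 72, 96),
]

for name, mtd, rto in functions:
    status = "OK" if rto < mtd else "GAP: recovery plan too slow"
    print(f"{name}: RTO {rto}h vs MTD {mtd}h -> {status}")
```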
Risk and Impact Assessment Strategy
Then there is impact assessment. Here we analyze the data gathered during risk identification and likelihood assessment and attempt to determine what impact each of the identified risks would have on the business if it were to occur. In the quantitative approach we calculate metrics like the exposure factor (EF), single loss expectancy (SLE), and so on, and in the qualitative approach we consider things like reputation loss, customer loss, and so on. Here we also do resource prioritization, which means prioritizing the allocation of business continuity resources to the various risks that were identified and assessed in the preceding tasks of the business impact assessment. Now let’s talk about continuity planning, which means developing and implementing a continuity strategy to minimize the impact that realized risks might have on protected assets. There are multiple subtasks involved in continuity planning. The first one is strategy development, which bridges the gap between the business impact assessment and the continuity planning phases of BCP development. The BCP team must now take the prioritized list of concerns raised by the quantitative and qualitative resource prioritization exercises and determine which risks will be addressed by the BCP. Then there are provisions and processes, where the BCP team designs the specific procedures and mechanisms that will mitigate the risks deemed unacceptable during the strategy development stage. Three categories of assets must be protected through BCP provisions and processes: people, buildings and facilities, and infrastructure. Then there is plan approval, which requires getting top-level management’s endorsement of the plan. This move demonstrates the importance of the plan to the entire organization and showcases the business leaders’ commitment to business continuity. Then comes plan implementation, where the BCP team should get together and develop an implementation schedule that utilizes the resources dedicated to the program to achieve the stated process and provision goals. Then there is training and education. Everyone in the organization should receive at least a plan overview briefing, people with direct BCP responsibilities should be trained and evaluated on their specific BCP tasks, and at least one backup person should be trained for every BCP task.
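As a rough illustration of the quantitative metrics mentioned above, here is a minimal sketch with hypothetical asset values; it assumes the standard formulas SLE = asset value × EF and ALE = SLE × ARO, and sorts risks by ALE as a simple resource prioritization order.

```python
# Minimal sketch of the quantitative impact metrics, using hypothetical
# figures; SLE = asset value x exposure factor, ALE = SLE x annualized rate
# of occurrence (ARO). Sorting by ALE gives a simple prioritization order.
risks = [
    # name, asset value ($), exposure factor (0-1), ARO (events/year)
    ("data center flood", 2_000_000, 0.50, 0.1),
    ("ransomware outbreak", 500_000, 0.80, 0.5),
]

def ale(asset_value: float, ef: float, aro: float) -> float:
    sle = asset_value * ef          # single loss expectancy
    return sle * aro                # annualized loss expectancy

for name, value, ef, aro in sorted(risks, key=lambda r: -ale(r[1], r[2], r[3])):
    print(f"{name}: ALE = ${ale(value, ef, aro):,.0f}")
```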
Documentation and Strategic Communication
Now documentation. Committing our BCP methodology to paper provides several important benefits: it ensures that BCP personnel have a written continuity document to reference in the event of an emergency, even if senior BCP team members are not present to guide the effort; it provides a historical record of the BCP process that will be useful to future personnel seeking both to understand the reasoning behind the various procedures and to implement necessary changes in the plan; and it forces the team members to commit their thoughts to paper, a process that often facilitates the identification of flaws in the plan. Having the plan on paper also allows draft documents to be distributed to individuals not on the BCP team for a sanity check. Now the statement of importance. This reflects the criticality of the BCP to the organization’s continued viability. The document commonly takes the form of a letter to the organization’s employees stating the reason the organization devoted significant resources to the BCP development process and requesting the cooperation of all personnel in the BCP implementation phase. The statement of priorities flows directly from the identify-priorities phase of the business impact assessment and simply involves listing the functions considered critical to continued business operations in prioritized order. The statement of organizational responsibility comes from a senior-level executive and can be incorporated into the same letter as the statement of importance. It basically echoes the sentiment that business continuity is everyone’s responsibility.
Comprehensive Guide to Business Continuity, Cybersecurity, and Data Protection Best Practices
Then there is the vital records program. The BCP documentation should outline a vital records program for the organization. This document states where critical business records are to be stored and the procedures for making and storing backup copies of those records. One of the biggest challenges in implementing a vital records program is often identifying the vital records in the first place. Then there are the emergency response guidelines. These outline the organizational and individual responsibilities for immediate response to an emergency. This document provides the first employees to detect an emergency with the steps they should take to activate provisions of the BCP, along with immediate response procedures such as security and safety procedures, fire suppression procedures, notification of the appropriate emergency-response agencies, and so on, and it also includes a list of individuals who should be notified of the incident. Then there is maintenance. The BCP documentation, and the plan itself, must be living documents. Every organization encounters nearly constant change, and this dynamic nature ensures that the business continuity requirements will also evolve. Now, what is an advanced persistent threat (APT)? An APT is very focused and motivated to aggressively and successfully penetrate a network using a variety of attack methods and then hide its presence while achieving a well-developed, multi-level foothold in the environment. The advanced aspect of the term pertains to the expansive knowledge, capabilities, and skill base of the APT, and the persistent component has to do with the fact that the group of attackers is not in a hurry to launch an attack quickly but will wait for the correct opportunity. This is also referred to as a low-and-slow attack.
Now what is intellectual property? Intellectual property refers to creations of the mind, such as inventions, literary and artistic works, designs and symbols, and names and images used in commerce. The major types of intellectual property are copyrights, trademarks, patents, and trade secrets. Now what is privacy? Personally identifiable information (PII) is data that can be used to uniquely identify, contact, or locate a single person, or that can be used with other sources to uniquely identify a single individual. PII needs to be highly protected because it is commonly used in identity theft, financial crimes, and various other criminal activities. The typical components are full name, national identification number, IP address, vehicle registration plate number, driver’s license number, face, fingerprints or handwriting, and credit card numbers.
Now let’s talk about employee rights. Within a corporation, several employee privacy issues must be thought through and addressed. Monitoring must be work-related, meaning that a manager may have the right to listen in on an employee’s conversations with customers, but he does not have the right to listen in on personal conversations that are not work-related. Monitoring must also happen in a consistent way, such that all employees are subject to monitoring, not just one or two people. We also have to give employees a document describing what type of monitoring they may be subjected to, what is considered acceptable behavior, and what the consequences of not meeting those expectations are. Employees should be asked to sign this document, referred to as a waiver of reasonable expectation of privacy.
Now let’s talk about international issues. When computer crime crosses international boundaries, the complexity of the issues shoots up considerably and the chances of the criminal being brought before any court decrease. The Organization for Economic Cooperation and Development (OECD) comes in here: global organizations must follow the OECD guidelines on the protection of privacy and transborder flows of personal data, which cover collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. Now, export and import controls. Governments recognize that the various computer and software technologies that drive the internet and e-commerce can be extremely powerful tools in the hands of a military force. For this reason, a complex set of regulations was developed governing the export of sensitive hardware and software products to other nations. The regulations cover the transborder flow of new technologies, intellectual property, and personally identifying information. Controls on exporting encryption software were even more severe, at one point rendering it virtually impossible to export any encryption technology outside the country.
Now what is PCI DSS? This is the Payment Card Industry Data Security Standard, a set of security standards formed in 2004 by Visa, Mastercard, Discover Financial Services, JCB International, and American Express. It is governed by the Payment Card Industry Security Standards Council, and the compliance scheme aims to secure credit and debit card transactions against data theft and fraud. PCI DSS has no legal authority to compel compliance, but it is a requirement for any business that processes credit or debit card transactions, and PCI certification is also considered the best way to safeguard sensitive data and information, thereby helping businesses build long-lasting and trusting relationships with their customers.
Asset security. Now what is information classification? Information classification is a process in which organizations assess the data they hold and the level of protection it should be given. Data classification helps ensure that data is protected in the most cost-effective manner. The classification of an asset used to store or process information (media, laptops, phones, paper printouts, and so on) should be as high as the classification of the most valuable data it holds. Now what are the classification criteria? Once the scheme is designed, the organization must develop the criteria it will use to decide what information goes into which classification. The parameters are the usefulness of the data, the value of the data, the age of the data, the level of damage that could be caused if the data were disclosed, the level of damage that could be caused if the data were modified or corrupted, any legal, regulatory, or contractual responsibility to protect the data, and the lost opportunity cost that could be incurred if the data were not available.
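As a rough illustration only, here is a minimal sketch of turning classification criteria into a classification decision; the numeric scoring scheme, thresholds, and labels are hypothetical and would be defined by organizational policy in practice.

```python
# Minimal sketch of a classification decision, assuming a hypothetical
# numeric scoring scheme; real schemes are defined by organizational policy.
CRITERIA = [
    "usefulness", "value", "age", "disclosure_damage",
    "modification_damage", "legal_obligation", "lost_opportunity_cost",
]

def classify(scores: dict[str, int]) -> str:
    """Map per-criterion scores (0-3 each) to a classification label."""
    total = sum(scores.get(c, 0) for c in CRITERIA)
    if total >= 15:
        return "confidential"
    if total >= 8:
        return "internal"
    return "public"

print(classify({"value": 3, "disclosure_damage": 3, "legal_obligation": 3,
                "usefulness": 2, "age": 1}))  # -> "internal"
```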
Data Disposal and Security Concerns
Labels are used for data stored on systems or media, and digital marking is applied to pages using watermarks, headers, and so on. Now let’s see what data disposal is. Data disposal is the last and often the most ignored phase of the information life cycle. Most of the time everyone is concerned about acquiring the data, using it, and processing it, but no one really cares about disposing of it. What happens is that when data is not properly disposed of, some data can remain in the system; this is known as data remanence. Data remanence refers to data that remains on media as residual magnetic flux. Using system tools to delete data generally leaves much of the data remaining on the media, and this information can be retrieved using many tools. The data policy should therefore define acceptable methods of destroying data based on the data classification, because conventional deletion methods can leave data remaining on the disk.
Methods of Data Disposal and Data Security Controls
Now let’s see what the methods of data disposal are. The first one is erasing. Erasing means simply performing a delete operation against a file, a selection of files, or the entire media. In most cases, the deletion or removal process removes only the directory or catalog link to the data, which means the actual data remains on the drive. Then there is clearing. Clearing, or overwriting, is a process of preparing media for reuse and assuring that the cleared data cannot be recovered using traditional recovery tools. When media is cleared, unclassified data is written over all the addressable locations on the media. Then there is degaussing. A degausser creates a strong magnetic field that erases data on some media in a process called degaussing. This is done so that data remaining on magnetic drives can be removed. Technicians commonly use degaussing methods to remove data from magnetic tapes with the goal of returning the tape to its original state. Then there is encryption. Data stored on the medium is encrypted using a strong key to render the data unrecoverable; the system then simply needs to securely delete the encryption key, which is many times faster than deleting the encrypted data, and recovering the data afterwards is computationally infeasible. Encryption is a really good method of disposing of data. Then there is purging. Purging is a more intense form of clearing that prepares media for reuse in a less secure environment. A purging process repeats the clearing process multiple times and may combine it with other methods, such as degaussing, to completely remove the data. Then there is sanitization. Sanitization is a combination of processes that removes data from a system or from media. It ensures that the data cannot be recovered by any means when a computer is disposed of. Sanitization includes ensuring that all non-volatile memory has been removed or destroyed. Finally, there is destruction. Destruction is the final stage in the life cycle of media and is the most secure method of sanitizing media. When destroying media, it’s important to ensure that the media cannot be reused or repaired and that the data cannot be extracted from the destroyed media.
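To ground the clearing and overwriting idea, here is a minimal sketch in Python that overwrites a file with random bytes before deleting it; the file path is hypothetical, and on SSDs or journaling filesystems an in-place overwrite is not guaranteed to reach every physical copy, which is why purging, degaussing, or destruction may still be required.

```python
# Minimal sketch of "clearing" by overwriting a file in place; the path is
# hypothetical, and wear-leveling or filesystem journaling can leave copies
# behind that this approach does not touch.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite file contents with random bytes several times, then unlink."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite down to the device
    os.remove(path)

# overwrite_and_delete("/tmp/old_report.docx")  # hypothetical file
```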
Data Disposal, Data Security Controls, and CPU Architecture in Information Security
Now let’s see what the data security controls are. The controls that we use for securing our data do not depend only on the value of the data; they also depend on the dynamic state in which the data exists. Data can exist in one of three states: it can be at rest, in motion, or in use. Let’s first see the security controls used for data at rest. What is data at rest? Data that resides on external or auxiliary storage devices such as hard disks, solid state drives, optical discs, or even magnetic tapes is called data at rest. Data in this state is very vulnerable, not only to threat actors attempting to access it over the network but also to anyone who can gain physical access to the device. The solution to protecting data in such scenarios is encryption. Every major operating system now provides a means to encrypt individual files or entire volumes in a way that is mostly transparent to the user; examples of encryption services are BitLocker on Windows and dm-crypt on Linux. Then let’s see the security controls for data in motion. What is data in motion? Data that is moving between computing nodes over a data network such as the internet is called data in motion. This is perhaps the riskiest time for our data, as it passes through many intermediate systems. The best way to protect data in motion is to use a VPN, or virtual private network; TLS or IPsec, which stands for Internet Protocol Security, can be used for encrypting the data in motion in a virtual private network. Now let’s see the controls for data in use. Data in use refers to data residing in primary storage such as volatile memory (RAM), memory caches, or CPU registers. When a process is using data, it is typically stored in these primary storage devices for a short period of time. The common issues that can arise in this period are buffer overflow attacks, which can happen when a program writing data to a buffer overruns the buffer and overwrites adjacent memory locations, memory leaks, and side-channel attacks. The best protection for data in use is secure software development practices.
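As a small illustration of encryption for data at rest, here is a sketch that encrypts a file with the third-party Python cryptography package (Fernet); the file name is hypothetical, and full-volume tools such as BitLocker or dm-crypt operate at a lower level than this per-file approach.

```python
# Minimal sketch of encrypting a file at rest, assuming the third-party
# "cryptography" package is installed (pip install cryptography); the file
# path below is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store this key in a key vault, not beside the data
fernet = Fernet(key)

with open("customer_records.csv", "rb") as f:        # hypothetical file
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decryption later: fernet.decrypt(ciphertext)
```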
Now let’s see what security baselining is. Baselining provides a starting point and ensures a minimum security standard. The practices used in attaining the security baseline are as follows. The first one is imaging: we configure a single system with the desired settings, capture its image, and then deploy that image to other systems. After deploying the systems in a secure state, auditing processes can be run periodically to ensure the systems remain in a secure state. We should always remember that a single set of security controls does not apply to all situations, and an organization can select the set of baseline security controls based on its needs. Then there is scoping, where we review the baseline security controls and select only those controls that apply to the scope of our organization. Then there is tailoring, which is modifying the baseline security controls so that they align with our needs or with the mission of our organization.
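Here is a minimal sketch of scoping and tailoring applied to a baseline; the control IDs and settings are hypothetical stand-ins for a real control catalog.

```python
# Minimal sketch of scoping and tailoring a baseline, assuming hypothetical
# control IDs and values; real baselines are far larger and are selected per
# organizational policy.
BASELINE = {
    "AC-01": {"desc": "password min length", "value": 12},
    "AC-02": {"desc": "wireless client isolation", "value": True},
    "AU-01": {"desc": "log retention days", "value": 90},
}

def scope(baseline: dict, not_applicable: set) -> dict:
    """Scoping: drop controls that do not apply to this environment."""
    return {k: v for k, v in baseline.items() if k not in not_applicable}

def tailor(baseline: dict, overrides: dict) -> dict:
    """Tailoring: adjust control values to fit the organization's mission."""
    tailored = {k: dict(v) for k, v in baseline.items()}
    for control_id, value in overrides.items():
        if control_id in tailored:
            tailored[control_id]["value"] = value
    return tailored

controls = scope(BASELINE, not_applicable={"AC-02"})   # no wireless in scope
controls = tailor(controls, {"AU-01": 365})            # stricter retention
print(controls)
```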
Now let’s talk about data loss prevention. Data loss prevention (DLP) comprises the actions that organizations take to prevent unauthorized external parties from gaining access to sensitive data. The real challenge here is taking a holistic view of our organization, and the perspective must incorporate our people, our processes, and our information. DLP technology features sensitive data awareness, policy engines, interoperability, and accuracy. Now, the different types of DLP: the first one is network DLP, where we apply DLP to data in motion. Normally, this is implemented as dedicated appliances at the network perimeter. It will not protect data on devices that are not on the organization’s network, and its high cost often forces the organization to deploy it only at network checkpoints. Let’s have a quick quiz question: through which system can we detect SQL injection attacks? The options are attack detection system, injection detection system, intrusion prevention system, and intrusion detection system. Think it through and decide which of these four options is the right answer.
Then there is endpoint DLP, where DLP is applied to data in use and data at rest. Here, a DLP agent is installed on the endpoint systems. This approach is more complex because we have to manage the agents, and endpoint DLP is unaware of violations happening to data in motion. Finally, there is hybrid DLP, where we deploy both endpoint DLP and network DLP. This is the costliest and most complex approach, but it offers the best coverage and protection because it combines the features of both network DLP and endpoint DLP.
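To show the policy-engine idea in miniature, here is a sketch of the content-inspection step a DLP tool might run on outbound data; the regex patterns are illustrative only, and real products combine many detection techniques and enforcement actions.

```python
# Minimal sketch of a DLP policy engine's content inspection step, with a
# couple of illustrative regex patterns; production DLP combines exact data
# matching, fingerprinting, and more, and enforces block/quarantine/alert.
import re

POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect(payload: str) -> list[str]:
    """Return the names of policies the outbound payload violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(payload)]

violations = inspect("Invoice for card 4111 1111 1111 1111, thanks!")
if violations:
    print(f"Blocking transfer, matched policies: {violations}")
```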
Bell-LaPadula and Biba Integrity Models
Unlike the Bell-LaPadula model, the Biba model is not concerned with security levels and confidentiality. The Biba model uses integrity levels to prevent data at a lower integrity level from flowing to a higher integrity level. The rules and properties are the * (star) integrity axiom, where a subject cannot write data to an object at a higher integrity level; the simple integrity axiom, where a subject cannot read data from an object at a lower integrity level; and the invocation property, where a subject cannot request service at a higher integrity level. The drawbacks are that it addresses only integrity, not confidentiality or availability, and it does not address covert channels.
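A minimal sketch of the Biba rules, assuming a hypothetical numeric integrity scale where a higher number means more trusted:

```python
# Minimal sketch of Biba's simple integrity and * (star) integrity axioms,
# using a hypothetical integrity scale; an illustration, not a reference
# monitor implementation.
INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

def can_read(subject_level: int, object_level: int) -> bool:
    """Simple integrity axiom: no read down (reading less trusted data)."""
    return object_level >= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    """* integrity axiom: no write up (contaminating more trusted data)."""
    return object_level <= subject_level

print(can_read(INTEGRITY["system"], INTEGRITY["untrusted"]))  # False: no read down
print(can_write(INTEGRITY["user"], INTEGRITY["system"]))      # False: no write up
```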
Then there is the Clark-Wilson model. This is a commercially used model that preserves integrity. It focuses on well-formed transactions and separation of duties, and it addresses all the goals of an integrity model: it prevents unauthorized users from making modifications, prevents authorized users from making improper modifications, and maintains internal and external consistency. The model uses the following elements: users, which are active agents; transformation procedures (TPs), which are programmed abstract operations such as read, write, and modify; constrained data items (CDIs), which can be manipulated only by TPs; unconstrained data items (UDIs), which can be manipulated by users via primitive read and write operations; and integrity verification procedures (IVPs), which check the consistency of CDIs with external reality.
Now let’s talk about the Brewer and Nash model, also known as the Chinese Wall model. It states that a subject can write to an object if and only if the subject cannot read another object that is in a different data set. It was created to provide access control that can change dynamically depending on a user’s previous actions. The main goal of the model is to protect against conflicts of interest arising from users’ access attempts.
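Here is a minimal sketch of the Chinese Wall idea, with hypothetical conflict-of-interest classes; the key point is that the access decision depends on what the subject has already read.

```python
# Minimal sketch of the Brewer and Nash (Chinese Wall) idea, assuming
# hypothetical conflict-of-interest classes; access decisions depend on the
# history of what the subject has already read.
CONFLICT_CLASSES = {
    "bank_a": "banking", "bank_b": "banking",
    "oil_x": "energy",
}

history: dict[str, set[str]] = {}   # subject -> datasets already read

def can_access(subject: str, dataset: str) -> bool:
    """Deny access if the subject already read a competitor in the same class."""
    wanted_class = CONFLICT_CLASSES[dataset]
    for seen in history.get(subject, set()):
        if seen != dataset and CONFLICT_CLASSES[seen] == wanted_class:
            return False
    return True

def read(subject: str, dataset: str) -> None:
    if can_access(subject, dataset):
        history.setdefault(subject, set()).add(dataset)
        print(f"{subject} reads {dataset}")
    else:
        print(f"{subject} denied {dataset} (conflict of interest)")

read("alice", "bank_a")   # allowed
read("alice", "bank_b")   # denied: competing bank in the same class
read("alice", "oil_x")    # allowed: different conflict class
```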
Access Control Models and Graph-Based Rights Delegation
Now what is the Graham-Denning model? This model defines basic rights in terms of commands that a subject can execute on an object. It has eight protection rights that detail how these functions should take place securely: how to securely create an object, how to securely create a subject, how to securely delete an object, how to securely delete a subject, how to securely provide the read access right, how to securely provide the grant access right, how to securely provide the delete access right, and how to securely provide the transfer access right. The model is based on the access control matrix model.
Now let’s talk about the Sutherland model. The Sutherland model is an integrity model that focuses on preventing interference in support of integrity. It is formally based on the state machine model and the information flow model; however, it does not directly indicate specific mechanisms for the protection of integrity. Instead, the model is based on the idea of defining a set of system states, initial states, and state transitions. By using only these predetermined secure states, integrity is maintained and interference is prohibited. A common use of the Sutherland model is to prevent a covert channel from being used to influence the outcome of a process or activity.
Then there is the Take-Grant model. This employs a directed graph to depict how rights can be passed from one subject to another subject or to an object. It defines the following rules: the take rule, which allows a subject to take rights over an object; the grant rule, which allows a subject to grant rights to an object; the create rule, which allows a subject to create new rights; and the remove rule, which allows a subject to remove rights it has.
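A minimal sketch of how the take and grant rules might be applied over a rights map, with hypothetical subjects and objects; the formal model reasons over a directed graph, which the dictionary of edges below only approximates.

```python
# Minimal sketch of Take-Grant style rule application over a rights map,
# using hypothetical subjects and objects.
rights: dict[tuple[str, str], set[str]] = {
    ("alice", "bob"): {"take"},          # alice holds "take" over bob
    ("bob", "payroll.db"): {"read"},     # bob holds "read" over the object
}

def take(taker: str, source: str, target: str, right: str) -> None:
    """Take rule: taker copies a right that source holds over target."""
    if "take" in rights.get((taker, source), set()) and \
       right in rights.get((source, target), set()):
        rights.setdefault((taker, target), set()).add(right)

def grant(granter: str, receiver: str, target: str, right: str) -> None:
    """Grant rule: granter passes a right it holds over target to receiver."""
    if "grant" in rights.get((granter, receiver), set()) and \
       right in rights.get((granter, target), set()):
        rights.setdefault((receiver, target), set()).add(right)

take("alice", "bob", "payroll.db", "read")
print(rights[("alice", "payroll.db")])   # {'read'}
```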
Composition Theories and Covert Channel Classifications
Now let’s talk about composition theories. Systems are usually built by combining smaller systems, and the security of the components must be considered when they are combined into larger ones. Composition theories are based on the notion of how inputs and outputs between multiple systems relate to one another, which follows how information flows between systems rather than within an individual system. The types of composition theories are cascading, where the input for one system comes from the output of another system; feedback, where one system provides input to another system, which reciprocates by reversing those roles, so that system A provides input for system B and then system B provides input to system A; and hookup, where one system sends input to another system but also sends input to external entities.
Now what is a covert channel? This is a type of attack that creates the capability to transfer information between processes that are not supposed to be allowed to communicate under the computer security policy or through the usual communication channels. In a covert storage channel, the sender communicates by modifying a storage location, such as a location on a hard drive; this occurs when out-of-band data is stored in messages for the purpose of memory reuse, and an example is steganography. Then there is the covert timing channel, which performs operations that affect the real response time observed by the receiver. The methods here are knowing when data is transmitted between parties and monitoring the timing of operations.
Security Evaluation, Common Criteria, and Cloud Shared Responsibility Models
Security evaluation examines the security-relevant parts of a system, that is, the TCB, access control mechanisms, reference monitor, kernel, and protection mechanisms. The relationships and interactions between these components are also evaluated in order to determine the level of protection required and provided by the system. The purposes of security evaluation are to measure the security of products and systems, provide a common mechanism for evaluating vendor products, publish the findings and assign a level of security assurance to the product, and allow customers to select products based on the evaluation rating. The techniques used are TCSEC, ITSEC, and the Common Criteria.
Now what is TCSEC? The Trusted Computer System Evaluation Criteria (TCSEC) was developed by the US DoD to impose security standards on the systems it used. TCSEC established guidelines to be used when evaluating a standalone computer from a security perspective. These guidelines address basic security functionality and allow evaluators to measure and rate a system’s functionality and assurance. It combines the functionality and assurance ratings of the confidentiality protection offered by a system into the categories A1, B3, B2, B1, C2, C1, and D, which correspond to verified design, security domains, structured protection, labeled security protection, controlled access protection, discretionary protection, and minimal protection, respectively.
Now what is ITSEC? The Information Technology Security Evaluation Criteria (ITSEC) represents an initial attempt to create security evaluation criteria in Europe. It was developed as an alternative to the TCSEC guidelines, and the ITSEC guidelines evaluate the functionality and assurance of a system using separate ratings for each category. The differences between the two are that TCSEC concentrates exclusively on confidentiality, while ITSEC also addresses concerns about the loss of integrity and availability. In addition, ITSEC does not rely on the notion of a TCB and does not require that a system’s security components be isolated within a TCB, unlike TCSEC, which required any changed system to be re-evaluated anew after operating system upgrades, patches or fixes, application upgrades or changes, and so forth. ITSEC includes coverage for maintaining targets of evaluation after such changes without requiring a new formal evaluation.
Now what is the Common Criteria? The Common Criteria for Information Technology Security Evaluation is the official name for the international standard ISO/IEC 15408. It is an international set of specifications and guidelines developed for the evaluation of information security products. Thorough evaluation of computer security products is assured by rigorous evaluation of the implementation, specification, and testing of the product. Evaluated products are assigned Evaluation Assurance Levels (EALs), and these address both functionality and assurance. ISO/IEC 15408 is used as the basis for the evaluation of security properties: Part 1 is the introduction and general evaluation model, Part 2 covers the security functional components, and Part 3 covers the security assurance components.
Now what is the process involved in the Common Criteria? The Common Criteria uses the following specific terms. The first one is the protection profile, which is an implementation-independent set of security requirements and objectives for a particular category of systems or products, such as firewalls or IDSs. Then there is the target of evaluation (TOE), which is the product or system to be evaluated. The security target (ST) is the document that describes the target of evaluation, including its security requirements and operational environment. The evaluation assurance level (EAL) is the degree or score resulting from the evaluation of the tested system or product.
Now what are the different levels in the Common Criteria? The Common Criteria has seven assurance levels, ranging from EAL1, where functional testing takes place, to EAL7, where thorough testing is performed and the system design is formally verified. The levels in between are EAL2, structurally tested; EAL3, methodically tested and checked; EAL4, methodically designed, tested, and reviewed; EAL5, semi-formally designed and tested; and EAL6, semi-formally verified design and tested.
Now what is shared security in cloud computing? The shared responsibility model for security in the cloud defines what is the user’s responsibility and what is the cloud provider’s responsibility. Taking an on-premises system as the reference point, everything is the user’s or customer’s responsibility: user access, data, applications, operating system, network traffic, hypervisor, infrastructure, and physical facilities all come under the customer’s responsibility. In the cloud, however, for IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service), different layers fall under the customer’s responsibility and the provider’s responsibility.
In infrastructure as a service, user access, data, applications, operating system, and network traffic come under customer responsibility, while hypervisor, infrastructure, and physical come under cloud provider responsibility. In platform as a service, user access, data, and applications come under customer responsibility, while operating system, network traffic, hypervisor, infrastructure, and physical come under cloud provider responsibility. In software as a service, user access and data come under customer responsibility, whereas applications, operating system, network traffic, hypervisor, infrastructure, and physical come under cloud provider responsibility.
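The same split can be captured as a small lookup, shown in the sketch below; the layer names follow the text and the structure is purely illustrative.

```python
# Minimal sketch encoding the shared responsibility split described above as
# a lookup table; purely illustrative.
LAYERS = ["user access", "data", "applications", "operating system",
          "network traffic", "hypervisor", "infrastructure", "physical"]

# Index of the first layer owned by the provider for each service model.
FIRST_PROVIDER_LAYER = {"on-premises": 8, "IaaS": 5, "PaaS": 3, "SaaS": 2}

def responsibility(model: str, layer: str) -> str:
    idx = LAYERS.index(layer)
    return "customer" if idx < FIRST_PROVIDER_LAYER[model] else "provider"

print(responsibility("IaaS", "operating system"))  # customer
print(responsibility("SaaS", "applications"))      # provider
```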
Now let’s talk about industrial control systems (ICS). The first one is the programmable logic controller (PLC). These are small industrial computers originally designed for factory automation and industrial process control. They can be programmed for the process that needs to be controlled, they are used as the primary controllers in smaller system configurations, and they are extensively used in almost all industrial processes. Then there are distributed control systems (DCS). These monitor and control distributed equipment in process plants and industrial processes. Unlike the standalone PLC, a DCS divides plant or process control into several areas, each managed by its own controllers, with the whole system connected to form a single entity. Then there is supervisory control and data acquisition (SCADA). This monitors and controls plants or equipment in industries such as telecommunications, water and waste control, energy, and oil and gas refining. SCADA systems consist of remote terminal units spread geographically for the collection of data, connected to master stations for centralized control.
SCADA Systems and Data Acquisition
SCADA performs supervisory control and data acquisition over virtually any communication system. A SCADA system has remote terminal units (RTUs), which are endpoints that connect directly to sensors or actuators; data acquisition servers, which are backends that receive all the data from the endpoints; and a Human Machine Interface (HMI) that the users in charge of controlling the system interact with.
Kerckhoffs’s Principle in Cryptography
What is Kerckhoffs’s principle? Auguste Kerckhoffs published a paper in 1883 stating that the only secret involved in a cryptographic system should be the key. He claimed that the algorithm should be publicly known. He asserted that if security were based on too many secrets, there would be more vulnerabilities to possibly exploit. Making an algorithm publicly available means that many more people can view the source code, test it, and uncover any flaws or weaknesses. But not everyone agrees with this. Governments around the world create their own algorithms that are not released to the public. Their stance is that if a smaller number of people know how the algorithm works, then a smaller number of people will know how to possibly break it. It is basically the same as the open-source versus compiled software debate.
Strength of a Crypto System and One-Time Pads
What is the strength of a cryptosystem? The strength of a cryptosystem depends on its algorithm, the secrecy of the key, the length of the key, and the initialization vector. A term that is used here is work factor, which defines cryptographic strength as an estimate of the effort and resources it would take an attacker to penetrate the cryptosystem. What is a one-time pad? A one-time pad is a perfect encryption scheme because it is considered unbreakable if implemented properly. It was invented by Gilbert Vernam in 1917, so it is sometimes referred to as the Vernam cipher. A plain text message that needs to be encrypted is converted into bits, and the one-time pad is made up of random bits. The encryption process uses a binary mathematical function called exclusive OR, usually abbreviated as XOR. The scheme is deemed unbreakable only if the following conditions are met: the pad must be used only one time, the pad must be as long as the message, the pad must be securely distributed and protected at all destinations, and the pad must be made up of truly random values.
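A minimal sketch of the one-time pad using XOR, assuming the pad is drawn from a cryptographically secure random source and used exactly once:

```python
# Minimal sketch of a one-time pad: XOR the message with a random pad of the
# same length; XOR with the same pad again recovers the message.
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))          # pad as long as the message

ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))  # decrypt

assert recovered == message
print(ciphertext.hex())
```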
Key derivation functions. For complex keys to be generated, a master key is commonly created first, and symmetric keys are then generated from it. For example, if an application is responsible for creating a session key for each subject that requests one, it should not give out the same instance of that one key. Key derivation functions (KDFs) are used to generate keys that are made up of random values. Different values can be used independently or together as random keying material. It is important to remember that the algorithms stay static; the randomness provided by cryptography comes mainly from the keying material.
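As an illustration, here is a sketch that derives per-subject session keys from a master key using HKDF from the third-party Python cryptography package; the subject labels are hypothetical.

```python
# Minimal sketch of deriving distinct session keys from one master key with
# HKDF, assuming the third-party "cryptography" package is installed; the
# subject labels are hypothetical.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)

def derive_session_key(context: bytes) -> bytes:
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=context,          # binds the derived key to a subject/session
    ).derive(master_key)

key_alice = derive_session_key(b"session:alice")
key_bob   = derive_session_key(b"session:bob")
assert key_alice != key_bob    # each subject gets its own key
```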
What are confusion and diffusion? A strong cipher contains the right level of two main attributes: confusion and diffusion. Confusion is commonly carried out through substitution. It uses complex substitution functions so that attackers cannot figure out how to substitute the right values; this makes the relationship between the key and the resulting cipher text as complex as possible, so the key cannot be uncovered from the cipher text, and each cipher text value should depend on several parts of the key. Diffusion, on the other hand, is carried out using transposition. Diffusion takes place as the individual bits of a block are scrambled throughout the block; it means that a single plain text bit has influence over several of the cipher text values. Changing a single plain text value should change many cipher text values, not just one. This is also called the avalanche effect.
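A quick way to see the avalanche effect is the sketch below, which uses SHA-256 as a stand-in for a cipher’s round function and counts how many output bits change when a single input bit is flipped.

```python
# Minimal sketch of the avalanche effect, using SHA-256 as a stand-in for a
# block cipher: flipping one input bit changes roughly half the output bits.
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"transfer 100 to account 42")
h1 = hashlib.sha256(bytes(msg)).digest()

msg[0] ^= 0x01                       # flip a single bit of the input
h2 = hashlib.sha256(bytes(msg)).digest()

print(f"{bit_diff(h1, h2)} of 256 output bits changed")  # typically around 128
```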
What are block ciphers? When a block cipher is used for encryption and decryption, the message is divided into blocks of bits. These blocks are then put through mathematical functions one block at a time. For example, an encryption cipher that uses 64-bit blocks would take a message of 640 bits and chop it into 10 individual blocks of 64 bits each. Each block is then put through a succession of mathematical formulas, resulting in 10 blocks of encrypted text. Block ciphers generally use symmetric keys and are widely used in commercial encryption algorithms; examples are DES, 3DES, and AES.
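The sketch below mirrors the example above by splitting a message into 64-bit blocks and running each through a keyed function; the XOR step is only a placeholder for a real block cipher such as DES, 3DES, or AES.

```python
# Minimal sketch of block-cipher style processing: split a message into
# fixed-size blocks and run each through a keyed function. The XOR "round"
# is only a placeholder, not a real cipher.
BLOCK_SIZE = 8   # 64 bits, as in the example above

def pad(data: bytes) -> bytes:
    """Pad with zero bytes so the length is a multiple of the block size."""
    return data + b"\x00" * (-len(data) % BLOCK_SIZE)

def process_blocks(message: bytes, key: bytes) -> bytes:
    message = pad(message)
    blocks = [message[i:i + BLOCK_SIZE] for i in range(0, len(message), BLOCK_SIZE)]
    return b"".join(bytes(b ^ k for b, k in zip(block, key)) for block in blocks)

key = b"\x13\x37\x42\x99\xaa\x05\x5a\xc3"   # illustrative 64-bit key
ciphertext = process_blocks(b"a 640-bit message would become ten 64-bit blocks", key)
print(len(ciphertext) // BLOCK_SIZE, "blocks processed")
```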