DBMS Unit 5 Questions
1) Two-phase locking to ensure serializability
2) ARIES algorithm
3) Differentiate a primary index from a secondary index
4) Dirty read
5) Transaction properties
6) Explain triggers and differentiate them from integrity constraints
Answers
1. Two-Phase Locking to Ensure Serializability
Two-phase locking defines how transactions acquire and relinquish locks.
Two-phase locking guarantees serializability, but it does not prevent deadlocks.
The two phases are:
A growing phase, in which a transaction acquires all required locks without unlocking any data. Once all locks have been acquired, the transaction is in its locked point.
A shrinking phase, in which a transaction releases all locks and cannot obtain any new lock.
The two-phase locking protocol is governed by the following rules:
Two transactions cannot have conflicting locks.
No unlock operation can precede a lock operation in the same transaction.
No data are affected until all locks are obtained—that is, until the transaction is in its locked point.
For example, a transaction acquires all of the locks it needs until it reaches its locked point.
When the locked point is reached, the data are modified to conform to the transaction's requirements.
Finally, the transaction is completed as it releases all of the locks it acquired in the first phase.
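The two-phase discipline can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration (not from the source text) of a transaction object that enforces the rule that no lock may be acquired after the first unlock:

```python
import threading

class TwoPhaseTransaction:
    """Toy transaction enforcing the two-phase rule: once any lock is
    released (the shrinking phase begins), no new lock may be acquired."""

    def __init__(self):
        self.shrinking = False      # flips to True at the first unlock

    def lock(self, item: threading.Lock):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock requested after an unlock")
        item.acquire()              # growing phase: acquire without releasing

    def unlock(self, item: threading.Lock):
        self.shrinking = True       # locked point has passed; shrinking phase
        item.release()

x, y = threading.Lock(), threading.Lock()
t = TwoPhaseTransaction()
t.lock(x); t.lock(y)                # growing phase; locked point reached here
# ... read and modify the data protected by x and y ...
t.unlock(x); t.unlock(y)            # shrinking phase
try:
    t.lock(x)                       # breaks the rule: an unlock preceded this lock
except RuntimeError as e:
    print(e)
```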
Two-phase locking increases the transaction processing cost and might cause additional undesirable effects.
One undesirable effect is the possibility of creating deadlocks.
2. ARIES Recovery Algorithm (Algorithms for Recovery and Isolation Exploiting Semantics)
ARIES is a state-of-the-art recovery method that incorporates numerous optimizations to reduce overhead during normal processing and to speed up recovery.
The recovery algorithm studied earlier is modeled after ARIES but greatly simplified by removing those optimizations. Unlike that algorithm, ARIES:
Uses log sequence numbers (LSNs) to identify log records, and stores LSNs in pages to identify which updates have already been applied to a database page.
Uses physiological redo.
Uses a dirty page table to avoid unnecessary redos during recovery.
Uses fuzzy checkpointing, which records only information about dirty pages and does not require dirty pages to be written out at checkpoint time.
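Two of these ideas, the pageLSN check and the dirty page table, together decide whether a logged update must be redone. The following is a simplified, hypothetical Python sketch of that decision (names such as redo_pass and the RecLSN values stored in dirty_page_table are illustrative, not ARIES source code):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Page:
    lsn: int = 0                      # pageLSN: LSN of the last update applied
    data: dict = field(default_factory=dict)

@dataclass
class LogRecord:
    lsn: int                          # the LSN identifying this log record
    page_id: int
    apply: Callable[[Page], None]     # physiological redo action for the page

def redo_pass(log: List[LogRecord], pages: Dict[int, Page],
              dirty_page_table: Dict[int, int]) -> None:
    """Forward scan of the log. A record is redone only if (a) its page was
    dirty, (b) it is not older than the page's RecLSN recorded in the dirty
    page table, and (c) the pageLSN shows the update was never applied."""
    for rec in log:
        if rec.page_id not in dirty_page_table:
            continue                  # page known to be clean: no redo needed
        if rec.lsn < dirty_page_table[rec.page_id]:
            continue                  # older than RecLSN: already on disk
        page = pages[rec.page_id]
        if page.lsn < rec.lsn:        # pageLSN comparison avoids re-applying
            rec.apply(page)
            page.lsn = rec.lsn        # the page now reflects this update
```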
ARIES Optimizations
Physiological redo: the affected page is identified physically, but the action within the page can be logical. This is used to reduce logging overhead. For example, when a record is deleted and all other records on the page have to be moved to fill the hole, physiological redo can log just the record deletion, whereas physical redo would require logging the old and new values for much of the page.
Physiological redo requires that a page be written to disk atomically. This is easy to achieve with hardware RAID and is also supported by some disk systems. An incomplete page write can be detected by checksum techniques, but extra actions are then required for recovery; such an incomplete write is treated as a media failure.
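To make the saving concrete, here is a small hypothetical sketch (illustrative only, not ARIES code) of a physiological redo action for a record deletion. The page is named physically in the log record, but re-compacting the remaining records happens logically inside the page, so the moved bytes are never logged:

```python
def redo_delete_slot(page: list, slot: int) -> None:
    """Physiological redo of a record deletion: the log record only needs
    (page_id, 'delete', slot). Shifting the remaining records to fill the
    hole happens here, inside the page, and is never written to the log."""
    del page[slot]                    # remaining records move up to fill the hole

page = ["rec0", "rec1", "rec2", "rec3"]
redo_delete_slot(page, 1)             # physical redo would log most of the page
print(page)                           # ['rec0', 'rec2', 'rec3']
```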
3. Primary Index vs. Secondary Index
A primary index is an ordered index built on the ordering key field of a sequentially ordered file. There can be at most one primary index per file, and it can be sparse, holding one entry per disk block. A secondary index is built on a field that is not the ordering field of the file. A file may have several secondary indexes, and each must be dense, holding one entry per record, because the records are not physically ordered by the indexed field.
4. Dirty Read
When many transactions execute simultaneously, they are called concurrent transactions. Concurrency is needed to improve time efficiency, but when many transactions try to access the same data, inconsistency can arise, so concurrency control is required to maintain the consistency of the data.
A dirty read is a read of uncommitted data. If a row has been modified by another running transaction that has not yet committed, and our application reads that same row with its uncommitted values, the read is dirty. The central problem is that the reader may act on data that never becomes permanent. Shared locks can be used to prevent other transactions from modifying a row while it is being read.
Example of the dirty read problem
Step 1 − Consider an online shopping system where users can buy products and view the available stock at the same time.
Step 2 − Suppose a user tries to buy a product; as soon as the purchase begins, the transaction immediately updates the stock count.
Step 3 − Say there were 10 items in stock; the count now shows 9.
Step 4 − As part of the same transaction, there is also communication with the billing gateway.
Step 5 − Meanwhile, if another user browses the store at the same time, that user will see 9 items in stock.
Step 6 − Now suppose the first user is unable to complete the transaction due to an error or insufficient funds.
Step 7 − The first user's transaction is rolled back, and the stock value becomes 10 again.
Step 8 − But while the second user was browsing, the number of items in stock appeared to be 9.
Step 9 − The value the second user saw is dirty data, and this whole situation is called the dirty read problem.
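The scenario can be reproduced in a few lines. The following is a hypothetical Python sketch that uses a shared variable with no isolation at all, so the second thread observes the first thread's uncommitted, later rolled-back, write (the sleeps only force the interleaving):

```python
import threading
import time

stock = {"widgets": 10}       # shared state with no concurrency control

def buyer():
    stock["widgets"] -= 1     # uncommitted write: stock drops to 9
    time.sleep(0.2)           # talking to the billing gateway...
    stock["widgets"] += 1     # payment fails, so the write is rolled back

def viewer():
    time.sleep(0.1)           # runs while the buyer is mid-transaction
    print("viewer sees", stock["widgets"], "items")   # dirty read: prints 9

b, v = threading.Thread(target=buyer), threading.Thread(target=viewer)
b.start(); v.start()
b.join(); v.join()
print("after rollback:", stock["widgets"])            # 10 again
```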
5. Transaction Properties
A transaction has four properties, known as the ACID properties: atomicity (the transaction executes in its entirety or not at all), consistency (it takes the database from one consistent state to another), isolation (its intermediate results are hidden from concurrently executing transactions), and durability (once committed, its effects survive subsequent failures).

6. Triggers vs. Integrity Constraints
A trigger is an action that the DBMS executes automatically when a specific event, such as an INSERT, UPDATE, or DELETE, occurs on a table. An integrity constraint is a declarative rule about the data itself. They differ as follows:

S.no | Constraints | Triggers |
---|---|---|
1. | A constraint is a declarative rule defined on the relationship between data elements. | A trigger is an action that is executed when a specific event occurs. |
2. | A constraint is defined for a column. | A trigger is defined for a table. |
3. | Constraints are used to preserve database consistency. | Triggers are typically used for logging and auditing. |
4. | Constraints are checked first when a statement fires. | Triggers fire only after the constraints have been satisfied. |
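The distinction is easy to see in running code. Below is a minimal sketch using Python's built-in sqlite3 module (the table, column, and trigger names are made up for illustration): the CHECK constraint is a declarative rule verified on every write, while the trigger is a stored action run after the event it names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Integrity constraint: declared with the column, checked on every write.
    CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER CHECK (qty >= 0));

    -- Audit table filled by the trigger, not by application code.
    CREATE TABLE audit_log (item TEXT, old_qty INTEGER, new_qty INTEGER);

    -- Trigger: an action the DBMS runs automatically after a specific event.
    CREATE TRIGGER log_stock_change AFTER UPDATE ON stock
    BEGIN
        INSERT INTO audit_log VALUES (OLD.item, OLD.qty, NEW.qty);
    END;
""")

conn.execute("INSERT INTO stock VALUES ('widget', 10)")
conn.execute("UPDATE stock SET qty = 9 WHERE item = 'widget'")
print(conn.execute("SELECT * FROM audit_log").fetchall())  # [('widget', 10, 9)]

try:
    conn.execute("UPDATE stock SET qty = -1 WHERE item = 'widget'")
except sqlite3.IntegrityError as e:
    print("constraint rejected the write:", e)  # trigger never runs here
```

The failed UPDATE never reaches the trigger, which matches row 4 of the table: constraints are checked before triggers fire.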