
Concurrency Control

• Problem – in a multi-user environment, simultaneous access to data can result in interference and data loss
• Solution – Concurrency Control
– The process of managing simultaneous operations against a database so that data integrity is maintained and the operations do not interfere with each other in a multi-user environment.

Concurrency Control Problems

• 1. Lost Updates Problem
• 2. Inconsistent Read

Lost Updates Problem
• The most common problem encountered when multiple users attempt to update a database without adequate concurrency control is that of lost updates.
• The figure below shows the lost update problem.

Figure: Lost Update – simultaneous access causes updates to cancel each other
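As an illustration only (the account record, the amounts, and the use of Python threads to stand in for two database users are all hypothetical), a minimal sketch of how the lost update arises: both users read the same starting balance, so the later write silently overwrites the earlier one.

import threading
import time

balance = 100  # shared "record" both users update (hypothetical starting value)

def update_without_lock(amount):
    global balance
    local = balance           # 1. read the current balance
    time.sleep(0.1)           # 2. pause so the other user reads the same value
    balance = local + amount  # 3. write back, overwriting any intervening update

# Two users update the same record with no concurrency control.
user_a = threading.Thread(target=update_without_lock, args=(+100,))
user_b = threading.Thread(target=update_without_lock, args=(-30,))
user_a.start(); user_b.start()
user_a.join(); user_b.join()

# Both users read 100, so one update is lost: the result is 200 or 70,
# never the correct 170.
print(balance)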


A similar problem is the inconsistent read problem.
Inconsistent Read Problem

• This problem occurs when one user reads data that have been partially updated by another user.
• The read will be incorrect and is sometimes referred to as a dirty read or an unrepeatable read.
• The following figure represents the inconsistent read problem.

Time   T3                  T4                  balx
t1                         begin_transaction   100
t2                         read(balx)          100
t3                         balx = balx + 100   100
t4     begin_transaction   write(balx)         200
t5     read(balx)                              200
t6     balx = balx - 10    rollback            100
t7     write(balx)                             190
t8     commit                                  190

• Occurs when one transaction can see the intermediate results of another transaction before it has committed.
• T4 updates balx to £200 but then aborts, so balx should be back at its original value of £100.
• T3 has read the new value of balx (£200) and uses it as the basis of a £10 reduction, giving a new balance of £190 instead of £90.
• The problem is avoided by preventing T3 from reading balx until after T4 commits or aborts.
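A step-by-step replay of the timeline above as a minimal sketch (a hypothetical in-memory dictionary stands in for the database; no real DBMS is assumed):

# Hypothetical in-memory "database" holding the single balance balx.
db = {"balx": 100}

# T4 begins, adds 100, and writes the value before committing.
t4_before_image = db["balx"]     # kept so T4 can roll back
db["balx"] = db["balx"] + 100    # balx is now 200, but still uncommitted

# T3 begins and reads the uncommitted value - the dirty read.
t3_balx = db["balx"]             # 200 instead of the committed 100

# T4 aborts: its before-image restores balx to 100.
db["balx"] = t4_before_image

# T3 carries on with the stale value, reduces it by 10, and commits.
db["balx"] = t3_balx - 10

print(db["balx"])                # 190, instead of the correct 90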

Concurrency Control Techniques

• Serializability
– Finish one transaction before starting another
• Locking Mechanisms
– The most common way of achieving serializability
– Data that is retrieved for the purpose of updating is locked for the updater
– No other user can perform an update until the data is unlocked

Figure: Updates with locking for concurrency control
This prevents the lost update problem.
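Continuing the earlier lost-update sketch under the same hypothetical assumptions, a minimal fix: the read and the write-back are done while holding a lock on the record, so the second user sees the first user's result and both updates survive.

import threading

balance = 100                    # the same hypothetical shared record
record_lock = threading.Lock()   # lock covering the record being updated

def update_with_lock(amount):
    global balance
    with record_lock:            # retrieved for update -> locked for the updater
        local = balance          # read
        balance = local + amount # write back before releasing the lock

user_a = threading.Thread(target=update_with_lock, args=(+100,))
user_b = threading.Thread(target=update_with_lock, args=(-30,))
user_a.start(); user_b.start()
user_a.join(); user_b.join()

print(balance)                   # always 170: the lost update cannot occur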


Locking Mechanisms

• Locking level:
– Database – used during database updates
– Table – used for bulk updates
– Block or page – very commonly used
– Record – only the requested row; fairly commonly used
– Field – requires significant overhead; impractical
• Types of locks:
– Shared lock – read but no update permitted; used when just reading, to prevent another user from placing an exclusive lock on the record
– Exclusive lock – no access permitted; used when preparing to update
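A minimal sketch of a shared/exclusive lock for one record, assuming a single process and Python's threading primitives rather than any particular DBMS lock manager: any number of readers may hold the shared lock together, while an exclusive lock waits until it is the sole holder.

import threading

class RecordLock:
    """Illustrative shared/exclusive lock for a single record."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0       # current holders of the shared lock
        self._writer = False    # True while the exclusive lock is held

    def acquire_shared(self):
        with self._cond:
            while self._writer:                       # readers only wait for a writer
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers > 0:  # updater needs sole access
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

# Usage: readers share the lock; an updater takes the exclusive lock first.
lock = RecordLock()
lock.acquire_shared()       # just reading - others may still read
lock.release_shared()
lock.acquire_exclusive()    # preparing to update - no other access permitted
lock.release_exclusive()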

Deadlock
• An impasse that results when two or more transactions
have locked common resources, and each waits for the
other to unlock their resources

Figure: A deadlock situation
UserA and UserB will wait forever for each other to release their locked resources!
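A minimal sketch of how the deadlock in the figure arises (the two record locks and the timing are hypothetical): each user locks one record, then blocks waiting for the record the other user holds. The timeouts are there only so the example terminates instead of waiting forever.

import threading
import time

record_x = threading.Lock()   # locked first by UserA
record_y = threading.Lock()   # locked first by UserB

def user_a():
    with record_x:                              # UserA locks record X
        time.sleep(0.1)                         # ...meanwhile UserB locks Y
        if not record_y.acquire(timeout=1):     # now waits for Y, held by UserB
            print("UserA waits for Y held by UserB - deadlock")
        else:
            record_y.release()

def user_b():
    with record_y:                              # UserB locks record Y
        time.sleep(0.1)                         # ...meanwhile UserA locks X
        if not record_x.acquire(timeout=1):     # now waits for X, held by UserA
            print("UserB waits for X held by UserA - deadlock")
        else:
            record_x.release()

a = threading.Thread(target=user_a)
b = threading.Thread(target=user_b)
a.start(); b.start()
a.join(); b.join()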

Managing Deadlock

• Deadlock prevention:
– Lock all records required at the beginning of a transaction
– Two-phase locking protocol (see the sketch after this list)
• Growing phase
• Shrinking phase
– May be difficult to determine all needed resources in advance
• Deadlock resolution:
– Allow deadlocks to occur
– Mechanisms for detecting and breaking them
• Resource usage matrix
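A minimal sketch of the prevention approach named above, using hypothetical record locks: every lock the transaction needs is acquired up front in one fixed global order (the growing phase), the work runs, and then all locks are released (the shrinking phase), so no transaction ever waits while holding something another transaction needs.

import threading

def run_transaction(needed_locks, work):
    """Conservative two-phase locking sketch: lock everything first, then work."""
    # Growing phase: acquire every required lock up front, in one fixed global
    # order (here: sorted by object id) so two transactions cannot wait on each
    # other in a cycle.
    ordered = sorted(needed_locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        return work()            # all reads and updates run with the locks held
    finally:
        # Shrinking phase: release every lock; no new locks are taken after this.
        for lock in reversed(ordered):
            lock.release()

# Usage with two hypothetical record locks.
rec_x, rec_y = threading.Lock(), threading.Lock()
run_transaction([rec_x, rec_y], lambda: print("update records X and Y together"))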

Versioning

• Optimistic approach to concurrency control, used instead of locking
• Assumption is that simultaneous updates will be infrequent
• Each transaction can attempt an update as it wishes
• The system will reject an update when it senses a conflict
• Rollback and commit are used for this
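A minimal sketch of version-number-based (optimistic) control, using a hypothetical in-memory record rather than any particular DBMS: each transaction notes the version it read, and the system rejects the update at commit time if that version has changed.

# Hypothetical record carrying a version number alongside its value.
record = {"value": 100, "version": 1}

def commit_update(record, new_value, version_read):
    """Accept the update only if no one else committed since the read."""
    if record["version"] != version_read:
        return False                 # conflict sensed: reject; caller rolls back or retries
    record["value"] = new_value      # commit the change
    record["version"] += 1
    return True

# Two users read the record at the same version...
version_seen_by_a = record["version"]
version_seen_by_b = record["version"]

# ...user A commits first, so user B's update is rejected and must be retried.
print(commit_update(record, 150, version_seen_by_a))   # True
print(commit_update(record, 90, version_seen_by_b))    # False: version moved on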

Figure: The use of versioning
Better performance than locking.

Database Recovery & Its Techniques

• Mechanism for restoring a database quickly and accurately after loss or damage.
• In other words, the process of restoring the database to the correct state after loss or damage.

Recovery techniques/facilities:
• Backup Facilities
• Journalizing Facilities
• Checkpoint Facility
• Recovery Manager

Database Recovery & Its Techniques

• Back-up facilities: the DBMS should provide back-up facilities that produce a back-up copy (or save) of the entire database plus files and journals.
• Each DBMS normally provides a COPY utility for this purpose.
• In addition to the database files, the back-up facility should create copies of related database objects, including the repository (or system catalog), database indexes, source libraries, and so on.
• A back-up copy should be produced at least once per day.
• The copy should be stored in a secured location where it is protected from loss or damage.
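As a very small illustration (the file names and paths are hypothetical, and the database is assumed to be quiet while the copy runs), a daily file-level backup of the database file and its journal to a protected location might look like this:

import os
import shutil
from datetime import date

# Hypothetical locations: the live database files and a dated backup directory
# on separate, protected storage.
source_dir = "/var/dbms/data"
backup_dir = f"/backup/{date.today().isoformat()}"
os.makedirs(backup_dir, exist_ok=True)

# Copy the database file plus its journal into the backup location.
for name in ("accounts.db", "accounts.journal"):
    shutil.copy2(os.path.join(source_dir, name), os.path.join(backup_dir, name))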

Database Recovery & Its Techniques
• Journalizing facilities: a DBMS must provide journalizing facilities to produce an audit trail of transactions and database changes. In the event of a failure, a consistent database state can be re-established using the information in the journals together with the most recent backup copy.

• Transaction log: contains a record of the essential data for each transaction, including the transaction code or identification, the action or type of transaction, the time of the transaction, the user ID, input data values, the tables and records accessed, the records modified, and possibly the old and new field values.
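As an illustration only (the field names and values below are hypothetical, not a real DBMS log format), one transaction log entry holding the essential data just listed might look like this:

from datetime import datetime

# One illustrative transaction log record with the essential data listed above.
log_record = {
    "transaction_id": "T1001",            # transaction code or identification
    "action": "UPDATE",                   # action or type of transaction
    "timestamp": datetime.now(),          # time of the transaction
    "user_id": "clerk_07",                # user who ran the transaction
    "input_values": {"amount": 50},       # input data values
    "table": "ACCOUNT",                   # table accessed
    "record_key": 4711,                   # record accessed and modified
    "old_values": {"balance": 100},       # old field values
    "new_values": {"balance": 150},       # new field values
}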

Database Recovery & Its Techniques

• Database change log: contains before- and after-images of records that have been modified by transactions.
• Before-image: a copy of a record before it has been modified.
• After-image: a copy of the same record after it has been modified.
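A minimal sketch of how the two images are used, with a hypothetical change-log entry and an in-memory table: the before-image undoes (rolls back) a change, and the after-image redoes (rolls forward) it.

def undo(table, change):
    """Restore the before-image: used when a transaction is rolled back."""
    table[change["record_key"]] = change["before_image"]

def redo(table, change):
    """Reapply the after-image: used to roll forward, e.g. from a backup copy."""
    table[change["record_key"]] = change["after_image"]

accounts = {4711: {"balance": 100}}
change = {
    "record_key": 4711,
    "before_image": {"balance": 100},   # copy of the record before modification
    "after_image": {"balance": 150},    # copy of the same record after modification
}

redo(accounts, change)   # accounts[4711] is now the after-image: {"balance": 150}
undo(accounts, change)   # accounts[4711] is back to the before-image: {"balance": 100}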

Database Audit Trail

From the backup and logs, databases can be restored in case of damage or loss.

Database Recovery & Its Techniques
• Checkpoint Facility:
– The DBMS periodically refuses to accept new transactions, so the system is in a quiet state.
– The database and transaction logs are synchronized.
– The DBMS writes a special record (called a checkpoint record) to the log file, which is like a snapshot of the state of the database.
– Checkpoints should be taken frequently. When failures do occur, it is often possible to resume processing from the most recent checkpoint, so only a few minutes of processing work must be repeated, compared with several hours for a complete restart of the day's processing.
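A minimal sketch of the checkpoint idea, with hypothetical in-memory structures standing in for the buffers, the log, and the database on disk: at a quiet point the buffered changes are flushed and a checkpoint record is appended to the log, so recovery only needs to replay log entries written after the most recent checkpoint.

log = []                  # append-only transaction/change log
database_on_disk = {}     # the synchronized database state
buffered_changes = {}     # updates held in memory, not yet written to disk

def apply_change(key, after_image):
    """A normal update: buffer the change and record it in the log."""
    buffered_changes[key] = after_image
    log.append({"type": "CHANGE", "key": key, "after_image": after_image})

def take_checkpoint():
    """Quiet point: flush buffered changes, then write a checkpoint record."""
    database_on_disk.update(buffered_changes)   # database and log now agree
    buffered_changes.clear()
    log.append({"type": "CHECKPOINT"})

def recover():
    """Redo only the changes logged after the most recent checkpoint."""
    last_cp = max((i for i, rec in enumerate(log) if rec["type"] == "CHECKPOINT"),
                  default=-1)
    for rec in log[last_cp + 1:]:
        if rec["type"] == "CHANGE":
            database_on_disk[rec["key"]] = rec["after_image"]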

Database Recovery & Its Techniques

• Recovery manager: a module of the DBMS that restores the database to a correct condition when a failure occurs and then resumes processing user requests.
