[vox-tech] Unnecessary RDBMS features [was Need your help in recovering database.]
Matt Roper
vox-tech@lists.lugod.org
Thu, 24 Apr 2003 20:44:32 -0700
On Thu, Apr 24, 2003 at 07:23:52PM -0700, Tim Riley wrote:
...
> My observations have shown a significant performance hit. Consider what happens:
> for every insert, update, or delete, the database's status quo is copied to
> another file on the disk.
Who says the database has to be copied to another file on disk for every
insert/update/delete? I think usually the disk would just have the
"global version" of the database and all working copies would be stored
in RAM.
One way to implement transactions would be to store a copy of
modifications to the database in memory (not a copy of the whole
database, just a description of the changes) until the transaction is
committed. When a commit comes along you'd perform the following steps:
* grab a "validation lock"
* see if the updates in the transaction conflict with the updates
from any other transaction that has committed since the current
transaction started; if they do, restart the transaction
* if there are no conflicts, write all the updated data to disk
atomically (this would be journaled somehow to ensure that a crash
during this phase would be easy to recover from; a rough journaling
sketch follows this list)
* release the "validation lock"
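For that third step, a write-ahead journal is one common way to get the
atomicity. Here's a rough sketch in Python; everything in it (the file
name, the two-line record format, the COMMIT marker) is my own
invention for illustration, not any real database's on-disk format, and
it assumes that replaying an update record is idempotent:

import json
import os

JOURNAL = "db.journal"       # hypothetical journal file name

def journaled_commit(updates, apply_to_datafile):
    """Redo-log style atomic write: the updates reach the journal
    (and the platter) before the data file does, so a crash in the
    middle of the data-file write is recoverable."""
    with open(JOURNAL, "a") as j:
        j.write(json.dumps(updates) + "\n")   # what we intend to do
        j.write("COMMIT\n")                   # mark the record complete
        j.flush()
        os.fsync(j.fileno())                  # force it to disk first
    apply_to_datafile(updates)                # now touch the data file

def recover(apply_to_datafile):
    """On startup, replay every record that made it to COMMIT; a
    record without the marker belongs to a crashed commit and is
    simply dropped (safe only because replay is idempotent)."""
    if not os.path.exists(JOURNAL):
        return
    lines = open(JOURNAL).read().splitlines()
    for i in range(0, len(lines) - 1, 2):
        if lines[i + 1] != "COMMIT":
            break                             # torn record from a crash
        apply_to_datafile(json.loads(lines[i]))
    os.remove(JOURNAL)                        # journal can be cleared now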
On the other hand, if the transaction is rolled back, you simply dump
your memory copy and there's no disk activity at all. This is a
well-known scheme called "optimistic concurrency control," but there are
certainly other transaction implementations that work just as well.
Note that there's no unnecessary disk writing -- all the working copies
are stored in memory.
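To tie the whole thing together, here's a minimal sketch of that
optimistic scheme in Python. Everything in it is made up for
illustration -- a dict stands in for the on-disk global version, the
conflict check only compares write sets, and the "journaled atomic
write" is a plain dict update -- so treat it as a cartoon of the steps
above, not how any real engine does it:

import threading

validation_lock = threading.Lock()
committed = []     # (commit time, keys written) per finished transaction
clock = 0          # logical timestamp

class Transaction:
    def __init__(self, db):
        self.db = db              # dict standing in for the global version
        self.start = clock        # when we began, for the conflict check
        self.writes = {}          # key -> new value, held only in RAM

    def read(self, key):
        return self.writes.get(key, self.db.get(key))

    def update(self, key, value):
        self.writes[key] = value  # note: no disk activity here

    def rollback(self):
        self.writes.clear()       # dump the memory copy; zero disk I/O

    def commit(self):
        global clock
        with validation_lock:                       # grab the validation lock
            for when, keys in committed:
                if when > self.start and keys & set(self.writes):
                    return False                    # conflict: caller restarts
            self.db.update(self.writes)             # stands in for the
                                                    # journaled atomic write
            clock += 1
            committed.append((clock, set(self.writes)))
        return True                                 # lock released by 'with'

# A caller retries from scratch when validation fails:
def increment_x(db):
    while True:
        t = Transaction(db)
        t.update("x", (t.read("x") or 0) + 1)
        if t.commit():
            break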
> I think this is why benchmarks have shown Mysql to be ten times faster
> than Oracle.
Do you have a link to these benchmarks? I doubt that transactions are
the source of that difference. I've seen plenty of benchmarks showing
Postgres (which uses transactions) performing about the same as, or
even a little faster than, MySQL (without transactions). If
transactions were such a performance hit, the two wouldn't benchmark
so closely.
> > The database system that we use in our shop
> > (InterSystems Cache) has no problem with gigabytes of data and thousands of
> > updates daily, with full rollback functionality.
>
> Oh, they function--they don't break the database--they just slow things down.
Probably slightly, but I doubt that the slowdown is as great as you
imply.
> > There's no good reason why
> > the SIZE of a database has anything to do with its ability to audit db
> > transactions (commit/rollback).
> >
> > We have been doing testing of DB systems, when we encounter strange errors.
> > So, we take a current snapshot of the db, rollback to just before the time that
> > the error occurred, and step one by one through transactions until the error
> > occurs.
>
> I don't think the commit/rollback machinery is what provides this
> functionality. Once a transaction is committed, it can't be rolled
> back, and how can you take a snapshot of uncommitted transactions in
> a database? RDBMSes have a snapshot feature, but its purpose is to
> synchronize multiple databases, which sounds like what you're doing
> here.
I agree here; transactions aren't the mechanism to use for taking DB
snapshots.
Matt
--
*************************************************
* Matt Roper <matt@mattrope.com> *
* http://www.mattrope.com *
* PGP Key: http://www.mattrope.com/mattrope.asc *
*************************************************