[vox-tech] Need your help in recovering database.
Soumyadeep nandi
vox-tech@lists.lugod.org
Sat, 26 Apr 2003 06:08:20 -0700 (PDT)
I couldn't retrieve the data from that database, so now I
am planning to set up a new one. Could you guide me in
writing the my.cnf file for better stability?
The system has 256M of memory and a 32-bit processor, and
the space allotted for the database is 20GB. The server
version is mysql-4.0.12 (max) on Linux (Red Hat 9).
Considering the above parameters, I wrote the following
my.cnf file:
[mysqld]
port = 3306
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
skip-locking
set-variable = key_buffer = 256M
set-variable = max_allowed_packet = 40M
set-variable = table_cache = 256
set-variable = sort_buffer_size = 1M
set-variable = sort_buffer = 1M
set-variable = record_buffer = 1M
set-variable = read_buffer_size = 1M
set-variable = max_connections = 200
innodb_data_home_dir =
# Data files must be able to
# hold your data and indexes
innodb_data_file_path = /ibdata/ibdata1:100M:max:1990M;/ibdata/ibdata2:100M:max:1990M;/ibdata/ibdata3:100M:max:1990M
# I want to use multiple files rather than autoextending
# a single file, and I also want to give an upper limit
# here. Please advise: am I right here? (See the sketch
# after this file.)
# Set buffer pool size to
# 50 - 80 % of your computer's
# memory
set-variable = innodb_buffer_pool_size=140M
set-variable = innodb_additional_mem_pool_size=20M
# Set the log file size to about
# 25 % of the buffer pool size
set-variable = innodb_log_file_size=40M
set-variable = innodb_log_buffer_size=16M
# Set ..flush_log_at_trx_commit
# to 0 if you can afford losing
# some last transactions
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
[mysql.server]
user=mysql
basedir=/var/lib
[mysqldump]
quick
set-variable = max_allowed_packet = 200M
[mysql]
no-auto-rehash
safe-updates
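
Or, for that data file line, should it look more like the
following? This is just my guess from the syntax description in
the manual: "max:" seems to apply only together with "autoextend",
so each fixed-size file would simply get a name and a size, and
the sizes below are only placeholders for my 20GB budget.

innodb_data_file_path = /ibdata/ibdata1:1990M;/ibdata/ibdata2:1990M;/ibdata/ibdata3:1990M
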
with regards,
Soumyadeep
--- Jeff Newmiller <jdnewmil@dcn.davis.ca.us> wrote:
> On Wed, 23 Apr 2003, Soumyadeep nandi wrote:
>
> > Thank you All,
> >
> > Thank you very much for your cooperation. I doubt if
> > I am able to present you properly.
>
> Your "present" is getting better, but you are
> talking to the wrong people.
>
> I have a few comments below, but you need to go ask
> the InnoDB experts at
> http://www.innodb.com or http://www.mysql.com if you
> want authoritative
> responses.
>
> > Replying to Tim:-
> > > > Didn't find any fields in table 'blast_genescan'
> >
> > > Does "describe blast_genescan;" return the correct
> > > columns? If so, maybe the perl script needs a
> > > closer look.
> >
> > > > This for all the tables.
> >
> > > Do you mean that "describe table2;", "describe
> > > table3;", and "describe table4;" all return no
> > > columns?
> >
> > This is after the command "use Entamoeba".
> >
> > After searching for a solution in
> > "http://www.innodb.com/ibman.html"
> > I gathered that I should have specified a "max" size.
> > I've specified the max size along with some server
> > variables, viz.
> > set-variable = key_buffer = 256M
> > set-variable = max_allowed_packet = 40M
> > ...
> > set-variable = myisam_sort_buffer_size = 64M
> > set-variable = thread_cache = 8
> > set-variable = thread_concurrency = 4
> > ...
> > innodb_data_file_path =
> > /ibdata/ibdata1:100M:autoextend:max:2000M
>
> Once the database is corrupt, recovery is unlikely.
> (Let this be a
> reminder that mysqldump should be used periodically
> to create a
> recoverable backup.)
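>
> For example, a periodic dump to a file, using the usual pattern
> (adjust the user and database names; I haven't tested this against
> your particular setup):
>
>   mysqldump --opt -u root -p Entamoeba > Entamoeba-backup.sql
>
> and the dump can later be reloaded with something like
> "mysql -u root -p Entamoeba < Entamoeba-backup.sql".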
>
> You will probably need to eliminate the "autoextend"
> option when you build
> a new database configuration to prevent the same
> problem from
> re-occurring. Review the documentation... you can
> specify multiple files
> in the innodb_data_file_path.
>
> > But then I could get into the database once,
> > i.e. with the command "use Entamoeba".
> > It printed nothing but "Database changed".
> > When I tried to select rows from one of the four
> > tables it printed
> > "Didn't find any fields in table 'blast_genescan'",
> > and it was the same for all the other tables.
> >
> > But after that I can't start the server anymore; it
> > shows the error:
> > ERROR 2002: Can't connect to local MySQL server
> > through socket '/var/lib/mysql/mysql.sock' (2)
>
> I don't know much about the specific messages that
> may arise due to database corruption, but the
> behavior of most software when presented with
> corrupt data may be unpredictable.
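>
> When that socket error appears, one quick check (just a suggestion;
> the path follows the datadir in your my.cnf, and the error-log name
> is only the usual convention of <hostname>.err in the datadir) is
> whether mysqld is running at all and what its error log says:
>
>   ps aux | grep mysqld
>   tail -n 50 /var/lib/mysql/`hostname`.err
>
> Error 2002 with errno 2 usually just means no server is listening
> on that socket, so the log is the more interesting part.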
>
> > Replying to Jeff:-
> > > The fact that this exceeds 2GB is certainly
> > > suspicious. There is a warning in the InnoDB
> > > documentation [1] to beware of OS filesize
> > > limitations. They don't seem to be specific about
> > > the consequences, but I would not be surprised if
> > > the file was corrupt by now. I suspect that using
> > > multiple data files rather than the autoextend
> > > option will be necessary if you are dealing with
> > > large databases.
> >
> > How would I do this if I have to rebuild the
> > database?
>
> Read the manual... you just need to specify multiple
> innodb files in the
> configuration. I don't know if you can add files
> whenever you want to,
> but I think you can do it whenever you restart the
> server.
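>
> As a sketch only (untested, and the sizes are placeholders): the
> InnoDB manual describes adding a file by shutting the server down,
> editing innodb_data_file_path, and restarting, for example going
> from
>
>   innodb_data_file_path = /ibdata/ibdata1:1990M
>
> to
>
>   innodb_data_file_path = /ibdata/ibdata1:1990M;/ibdata/ibdata2:1990M
>
> (apparently, if the last file had been autoextending, its size has
> to be changed to its actual size first). Check the docs for the
> exact procedure before relying on this.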
>
> > > There are kernel patches (options by now?) that
> > > support large file sizes, but they require
> >
> > But mine is a 32-bit processor; could I reconfigure
> > the kernel for 64-bit?
>
> No... the Large File Size kernel patch supports an
> alternative set of system calls that the calling
> software may be configured to use if the programmers
> have set it up to support that option. [1]
>
> > > some cooperation on the part of the application
> > > (mysql compile options) before they will take
> > > effect.
> >
> > How would I do this?
>
> I don't know... details vary for different software.
> Download a
> precompiled version set up to support the option?
> Recompile from source,
> following instructions provided with the source?
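>
> Purely as a hypothetical sketch (these are the generic glibc
> large-file macros, not anything I've verified against the MySQL
> build system), a source build might pass something like
>
>   CFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
>
> but the INSTALL notes shipped with the MySQL source are the place
> to confirm whether and how it supports large files.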
>
> > Is there any way I can recover or take back up of
> the
> > database "Entamoeba".
>
> I already said I am not an expert at this... go ask
> the InnoDB people.
> I think you will most likely have to rebuild the
> database from external
> data.
>
> [...]
>
> [1] http://www.linuxgazette.com/issue67/tag/13.html
>
>
> ---------------------------------------------------------------------------
> Jeff Newmiller <jdnewmil@dcn.davis.ca.us>
> Research Engineer (Solar/Batteries/Software/Embedded Controllers)
> ---------------------------------------------------------------------------
>
> _______________________________________________
> vox-tech mailing list
> vox-tech@lists.lugod.org
> http://lists.lugod.org/mailman/listinfo/vox-tech