mysql - MYSQLDUMP failing. Couldn't execute 'SHOW TRIGGERS LIKE' errors like (Errcode: 13) (6) and (1036)



Does anyone know why mysqldump performs only a partial backup of the database when run with the following instruction:

"c:\program files\mysql\mysql server 5.5\bin\mysqldump" databaseschema -u root --password=rootpassword > c:\backups\daily\myschema.dump 

Sometimes a full backup is performed, while at other times the backup stops after including only a fraction of the database. That fraction is variable.

The database has several thousand tables totalling 11 GB. Most of these tables are quite small, with around 1500 records; many have only 150 - 200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data that is stored.

However, I have been informed that the number of tables in a MySQL schema is not an issue, and there are no performance issues during normal operation.

The alternative of using a single table is not viable, because these tables have different column-name signatures.

I should add that the database is in use during the backup.

Well, after running the backup with the instruction set to:

"c:\program files\mysql\mysql server 5.5\bin\mysqldump" myschema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\myschema_error.log > c:\backups\daily\myschema.dump 

I get this:

mysqldump: Couldn't execute 'show triggers like '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'c:\windows\temp\#sql67c_10_8c5.MYI' (Errcode: 13) (6)

which I think is a permissions problem.

I doubt that any one table in the schema is in the 2 GB range.

I am using MySQL Server 5.5 on a Windows 7 64-bit server with 8 GB of memory.

Any ideas?

I am aware that changing the number of files MySQL can open, via the open_files_limit parameter, may cure the matter.
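For what it's worth, a minimal sketch of where that parameter lives, assuming a default my.ini location and an example value only (the limit actually needed depends on the table count):

    # my.ini, [mysqld] section; restart the MySQL service afterwards
    [mysqld]
    open_files_limit=8192

The value that took effect can be confirmed with SHOW VARIABLES LIKE 'open_files_limit' from the mysql client.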

Another possibility is interference from anti-virus products, as described here:

How to fix intermittent MySQL Errcode 13 errors on Windows

There are a few possibilities for this issue that I have run into, and here is my workup:

First: enable error/debug logging and/or verbose output, otherwise you won't know what error is creating the issue:

    "c:\path\to\mysqldump" -b yourdb -u root -prootpasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualrss.dump 

As long as debug is enabled in your distribution, you should be able both to log errors to a file and to view output on the console. Even if the issue is not clear from this alone, it is a great first step.

Have you reviewed the MySQL error or general logs? There may not be useful information about this issue in them, but there may be, and every little bit helps in tracking these problems down.
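If you are not sure where those logs live, a quick way to check from the mysql client (credentials and paths here are placeholders):

    "c:\program files\mysql\mysql server 5.5\bin\mysql" -u root --password=rootpassword -e "SHOW VARIABLES LIKE 'log_error'"
    "c:\program files\mysql\mysql server 5.5\bin\mysql" -u root --password=rootpassword -e "SHOW VARIABLES LIKE 'general_log%'"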

Also, watch SHOW PROCESSLIST while running this. See if you are seeing status columns like 'Waiting for ... lock' or 'Waiting for table metadata lock', which would indicate the operation is unable to acquire a lock because of another operation.
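A rough way to watch it from a second window while the dump runs (same placeholder credentials as above); re-run it every few seconds and look at the State column:

    "c:\program files\mysql\mysql server 5.5\bin\mysql" -u root --password=rootpassword -e "SHOW FULL PROCESSLIST"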

Depending on the info gathered above (assuming you found nothing and have to shoot blind), here are the next most common cases I have experienced:

  • Max packet size errors: if you receive an error regarding max-allowed-packet-size, you can add --max_allowed_packet=160M to the parameters to see whether that is large enough:

"c:\path\to\mysqldump" -b yourdb -u root -prootpasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160m > c:\backup\visualrss.dump

  • Try to reduce the run time/size using the --compact flag. By default mysqldump adds everything you need to create the schema and insert the data, along with other information: you can reduce the run-time and file size by requiring the dump to contain only the inserts to your schema, avoiding the statements that create the schema and other non-critical info within each insert. This can mitigate a lot of problems with appropriate use, but you will want a separate dump with --no-data to export the schema on each run, so that you can create the empty tables, etc.:

    /* Create the raw data, excluding the add-drop table, comment, lock and key check statements */
    "c:\path\to\mysqldump" -B yourdb -u root -prootpasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualrss.dump

    /* Create a schema-only dump, with no data */
    "c:\path\to\mysqldump" -B yourdb -u root -prootpasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualrss.dump

  • Locking issues: by default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while dumping, and it wants to acquire a read-lock on the table; DDL operations and the global lock type may create this case. Without seeing the hung query you would typically see a small backup file size, as you described, and the mysqldump operation would sit until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set a REPEATABLE READ type of transaction and take a snapshot of the table without blocking operations or being blocked (caveat: older server versions have issues with ALTER/TRUNCATE TABLE while in this mode); see the sketch below:
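A sketch using the same placeholder paths and credentials as the commands above; note that --single-transaction only gives a consistent snapshot for transactional (InnoDB) tables:

    "c:\path\to\mysqldump" -B yourdb -u root -prootpasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --single-transaction > c:\backup\visualrss.dump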

  • Filesize issues: unless I have read it incorrectly, this backup has not run reliably before, and the 2 GB filesize limit is a potential indication of the issue; you can try piping the mysqldump output straight into 7-Zip on the fly:

    mysqldump [options] | 7z.exe a -siname_in_outfile output_path_and_filename

If you continue to have issues, or there is an unavoidable problem prohibiting mysqldump from being used: Percona XtraBackup is what I prefer, or there is Enterprise Backup for MySQL from Oracle. XtraBackup is open source, far more versatile than mysqldump, has a reliable group of developers working on it, and has many great features that mysqldump does not have, such as streaming/hot backups, etc. Unfortunately the Windows build is old, unless you can compile the binary yourself or run a local Linux VM to handle it for you.

Very important: I noticed that you are not backing up the information_schema table. It needs to be named explicitly if it is of any significance to your backup scheme.
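If it is, a hedged sketch of dumping it separately; on 5.5 it has to be named explicitly on the command line and, as far as I recall, needs --skip-lock-tables because its tables cannot be locked:

    "c:\program files\mysql\mysql server 5.5\bin\mysqldump" -u root --password=rootpassword --skip-lock-tables information_schema > c:\backups\daily\information_schema.dump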

