MySQL is a widely used and fast SQL database server. It is a client/server implementation that consists of a server daemon (mysqld) and many different client programs and libraries.

Here is a very useful set of tips for all MySQL DBAs and developers. These tips were noted at MySQL Camp 2006 and suggested by MySQL community experts.
- Kaj (Most Excellent Obvious Facilitator): Index stuff.
- Ronald: Don't index everything.
- Use benchmarking.
- Minimize traffic by fetching only what you need:
  - Paging/chunked data retrieval to limit result sizes
  - Don't use SELECT *
  - Be wary of lots of small quick queries if a longer query can be more efficient
- Use EXPLAIN to profile the query execution plan (see the sketch after this list).
- Use the Slow Query Log (always have it on!).
- Don't use DISTINCT when you have or could use GROUP BY.
- Use proper data partitions:
  - For Cluster: start thinking about Cluster *before* you need it.
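As a minimal sketch of the EXPLAIN tip above (the users table, its columns, and its email index are hypothetical):

    -- Check whether the query can use the index on email.
    EXPLAIN SELECT id, name
    FROM users
    WHERE email = 'user@example.com';
    -- In the output, "type: ALL" means a full table scan; "key" shows which
    -- index (if any) is chosen, and "rows" estimates how many rows are read.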
- Insert performance (see the sketch after this list):
  - Batch INSERT and REPLACE
  - Use LOAD DATA instead of INSERT
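A minimal sketch of batched inserts versus LOAD DATA, assuming a hypothetical access_log table (ip INT UNSIGNED, url VARCHAR, hit_at DATETIME):

    -- One multi-row INSERT is much cheaper than many single-row INSERTs:
    INSERT INTO access_log (ip, url, hit_at) VALUES
      (3232235777, '/index.html', NOW()),
      (3232235778, '/about.html', NOW()),
      (3232235779, '/index.html', NOW());

    -- For bulk loads from a file, LOAD DATA is faster still:
    LOAD DATA INFILE '/tmp/access_log.csv'
    INTO TABLE access_log
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (ip, url, hit_at);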
- LIMIT m,n may not be as fast as it sounds.
- Don't use ORDER BY RAND() if you have more than ~2,000 records.
- Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data.
- Avoid wildcards at the start of LIKE queries.
- Avoid correlated subqueries in SELECT and WHERE clauses (and try to avoid IN).
- Config params matter (see the "Config variables & tips" list below).
- No calculated comparisons: isolate indexed columns (see the sketch after this list).
- innodb_flush_log_at_trx_commit=0 can help slave lag.
- ORDER BY and LIMIT work best with equalities and covered indexes.
- Isolate workloads; don't let administrative work (e.g. backups) interfere with customer performance.
- Use optimistic locking, not pessimistic locking; try to use a shared lock, not an exclusive lock (LOCK IN SHARE MODE vs. FOR UPDATE).
- Use row-level instead of table-level locking for OLTP workloads.
- Know your storage engines and what performs best for your needs; know that different ones exist.
  - Use MERGE tables and ARCHIVE tables for logs.
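A minimal sketch of the "isolate indexed columns" tip, assuming a hypothetical orders table with an index on created_at:

    -- Bad: the function call on the indexed column prevents index use.
    SELECT id FROM orders WHERE YEAR(created_at) = 2006;

    -- Good: compare the bare column against a constant range instead.
    SELECT id FROM orders
    WHERE created_at >= '2006-01-01' AND created_at < '2007-01-01';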
- Optimize data types and use consistent data types. Use PROCEDURE ANALYSE() to help determine whether you need smaller ones.
- Separate text/blobs from metadata; don't put text/blobs in results if you don't need them.
- If you can, compress text/blobs.
- Compress static data.
- Don't back up static data as often.
- Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest.)
- Enable and increase the query and buffer caches if appropriate.
- ALTER TABLE ... ORDER BY can take chronological data and re-order it by a different field.
- InnoDB ALWAYS keeps the primary key as part of each secondary index, so do not make the primary key very large, and be careful of redundant columns in an index; this can make queries faster.
- Do not duplicate indexes.
- Utilize different storage engines on master/slave, e.g. if you need fulltext indexing on a table.
- The BLACKHOLE engine with replication is much faster than FEDERATED tables for things like logs.
- Design sane query schemas. Don't be afraid of table joins; often they are faster than denormalization.
- Don't use boolean flags.
- Use a clever key and ORDER BY instead of MAX (see the sketch after this list).
- Keep the database host as clean as possible. Do you really need a windowing system on that server?
- Utilize the strengths of the OS.
- Hire a MySQL™ Certified DBA.
- Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.
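A minimal sketch of the clever-key-plus-ORDER-BY tip, assuming a hypothetical offers table with an index on (item_id, price); unlike MAX(), this returns the whole top row in one short index read:

    -- Fetch the most expensive offer for an item without MAX():
    SELECT id, supplier, price
    FROM offers
    WHERE item_id = 42
    ORDER BY price DESC
    LIMIT 1;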
- Config variables & tips (see the sample config after this list):
  - Use one of the supplied config files.
  - key_buffer, the unix cache (leave some RAM free), per-connection variables, InnoDB memory variables.
  - Be aware of global vs. per-connection variables.
  - Check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up).
  - Be aware of swapping, especially with Linux and its "swappiness" (bypass the OS file cache for InnoDB data files with innodb_flush_method=O_DIRECT if possible; this is also OS specific).
  - Defragment tables, rebuild indexes, do table maintenance.
  - If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware write cache controller.
  - More RAM is good, and so is faster disk speed.
  - Use 64-bit architectures.
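A minimal my.cnf sketch tying some of these variables together; the values are illustrative assumptions, not recommendations:

    [mysqld]
    # MyISAM index cache; leave RAM free for the OS file cache.
    key_buffer_size = 256M
    # Bypass the OS file cache for InnoDB data files (OS specific).
    innodb_flush_method = O_DIRECT
    # 1 = flush the log at every commit (pair with a battery-backed write
    # cache); 0 can help slave lag at the risk of losing ~1s of commits.
    innodb_flush_log_at_trx_commit = 1
    # Per-connection variable: raise it only for bulk-load sessions.
    myisam_sort_buffer_size = 64M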
- Know when to split a complex query and join smaller ones.
- Debugging sucks, testing rocks!
- Delete small amounts at a time if you can.
- Archive old data; don't be a pack-rat! Two common engines for this are ARCHIVE tables and MERGE tables.
- Use INET_ATON and INET_NTOA for IP addresses, not CHAR or VARCHAR (see the sketch after this list).
- Make it a habit to REVERSE() email addresses, so you can easily search domains.
- Use --skip-name-resolve.
- Increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable).
- Look up the memory tuning parameters for on-insert caching.
- Increase the temp table size in a data warehousing environment (default is 32MB) so it doesn't write to disk (also constrained by max_heap_table_size, default 16MB).
- Normalize first, and denormalize where appropriate.
- Databases are not spreadsheets, even though Access really, really looks like one. Then again, Access isn't a real database.
- In 5.1 the BOOL/BIT NOT NULL type is 1 bit; in previous versions it's 1 byte.
- A NULL column can take more room to store than a NOT NULL one.
- Choose appropriate character sets and collations: UTF16 will store each character in 2 bytes whether it needs it or not, and latin1 is faster than UTF8.
- Make similar queries consistent so the cache is used.
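A minimal sketch of the INET_ATON/INET_NTOA tip, using the same hypothetical access_log table as earlier:

    CREATE TABLE access_log (
      ip INT UNSIGNED NOT NULL,   -- 4 bytes instead of up to 15 chars
      url VARCHAR(255) NOT NULL,
      hit_at DATETIME NOT NULL
    );

    INSERT INTO access_log (ip, url, hit_at)
    VALUES (INET_ATON('192.168.1.1'), '/index.html', NOW());

    -- Convert back to dotted-quad form only when displaying:
    SELECT INET_NTOA(ip) AS ip, url FROM access_log;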
[color="#3333ff"]SQL query
standardsDon’t use deprecated featuresUse Triggers wiselyRun in SQL_MODE=STRICT to help identify warningsTurning OR on multiple index
fields (/tmp dir on battery-backed write cacheconsider battery-backed RAM for innodb logfilesuse min_rows and max_rows to
specify approximate data size so space can be pre-allocated and
reference points can be calculated.as your data grows, indexing
may change (cardinality and selectivity change). Structuring may want
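A minimal sketch of the OR-to-UNION rewrite, assuming a hypothetical users table with separate indexes on email and username:

    -- Before 5.0, this OR could not use both indexes at once:
    SELECT id FROM users
    WHERE email = 'user@example.com' OR username = 'user';

    -- Rewritten so each branch can use its own index:
    SELECT id FROM users WHERE email = 'user@example.com'
    UNION
    SELECT id FROM users WHERE username = 'user';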
- As your data grows, indexing may change (cardinality and selectivity change), and structure may want to change. Make your schema as modular as your code. Make your code able to scale. Plan for and embrace change, and get developers to do the same.
- Pare down cron scripts.
- Create a test environment.
- Try out a few schemas and storage engines in your test environment before picking one.
- Use HASH indexing for indexing across columns with similar data prefixes.
- Use the MyISAM PACK_KEYS option for int data.
- Don't use COUNT(*) on InnoDB tables for every search; run it occasionally and/or use summary tables, or, if you need it for the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS().
- Use --safe-updates for the client.
- Redundant data is redundant.
- Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first (see the sketch after this list).
- Use a groupwise maximum instead of subqueries.
- Be able to change your schema without ruining the functionality of your code.
- Source-control your schema and config files.
- For LVM InnoDB backups, restore to a different instance of MySQL so InnoDB can roll forward.
- Use multi_query if appropriate to reduce round-trips.
- Partition appropriately.
- Partition your database when you have real data.
- Segregate tables/databases that benefit from different configuration variables.
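A minimal sketch of INSERT ... ON DUPLICATE KEY UPDATE, assuming a hypothetical page_hits counter table keyed on page:

    CREATE TABLE page_hits (
      page VARCHAR(255) NOT NULL PRIMARY KEY,
      hits INT UNSIGNED NOT NULL DEFAULT 0
    );

    -- No prior SELECT needed: insert the row, or bump the counter if it
    -- already exists.
    INSERT INTO page_hits (page, hits)
    VALUES ('/index.html', 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;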
This article comes from the ChinaUnix blog; for the original post, see: http://blog.chinaunix.net/u/9501/showart_296729.html