Tips and Tricks - AppDynamics Documentation

Tips and Tricks
AppDynamics for Databases
Version 2.9.5
Tips and Tricks
  Troubleshooting Tips
  SQL Server Tips and Tricks
    Checking for Available Free Space on SQL Server
    SQL Server Lock Wait Types
    CXPACKET Wait in SQL Server
    PAGEIOLATCH_EX Waits
    RESOURCE_SEMAPHORE Waits
    sp_who and sp_who2
    SQL Server Execution Plan
    WMI Permissions and Security
    Collector Fails to Connect to SQL Server
  Oracle Tips and Tricks
    'db file sequential' Read Event
    'read by other session' Wait Event
    Direct Path Reads
    Enqueues
    pga_aggregate_target Parameter for Performance Tuning
  MySQL Tips and Tricks
    Check Database and Table Space in MySQL
    Connect to MySQL Using SSH
    Copying to tmp Table
    Monitor MySQL Connections
    Monitor the MySQL Log File
    MySQL Connections
    MySQL Explain Plan
    MySQL Optimize Table Command
    MySQL query_cache_size
    MySQL Query Cache Performance
    Repair with keycache Performance Issues
  IBM DB2 Tips and Tricks
    DB2 UDB/WebSphere Performance Tuning Guide
  mongoDB Tips and Tricks
    mongoDB Performance Tuning
  Oracle RAC Tips and Tricks
    Monitoring and Diagnosing Oracle RAC Performance with Oracle Enterprise Manager
  PostgreSQL Tips and Tricks
    Performance Optimization
  Sybase Tips and Tricks
    Performance and Tuning: Basics, Locking, Monitoring and Analyzing, Optimizer and Abstract Plans
Copyright © AppDynamics 2012-2015
Troubleshooting Tips
Scenario: The collector is not showing any data for the database after restarting the AppDynamics for Databases server.
Resolution: Ensure the collector service is running. On Windows, check the services in Control Panel -> Services. On Linux, check the process list (for example, with ps). The service name will resemble DBTuna Agent - "CollectorName", where CollectorName is the name you assigned to the collector during setup. For information on how to set AppDynamics for Databases to start automatically when you start your machine, see Start AppDynamics for Databases.
SQL Server Tips and Tricks
This section provides information about using SQL Server with AppDynamics for Databases.
SQL Server provides access to detailed performance information. Service Pack 3 of SQL Server 2000 introduced a new function, fn_get_sql, which exposes the SQL text executing at any one time on the database. This information, along with information gathered from sysprocesses and other internal SQL Server performance tables, allows the DBA to gather detailed information about database performance bottlenecks. SQL Server 2005 and 2008 expose further information, allowing even more detailed performance data to be monitored.
AppDynamics for Databases takes advantage of these database features to monitor SQL Server 24x7 and give the DBA visibility into the slowest and most resource-intensive queries and, fundamentally, into the root cause of any database slowdowns.
Checking for Available Free Space on SQL Server
There is a T-SQL extended stored procedure, xp_fixeddrives, that gives you visibility into the
Windows drives. Calling xp_fixeddrives on its own will list each drive letter for the Windows host
followed by its available disk space in MB.
Here is an example of how you can check for free space on any drive with a simple piece of SQL,
compared to a threshold. Note the "where clause" on the select statement which returns just those
drives containing less than 100MB of free space.
create table #appd4db_drive_check (drive varchar(100), mb_free int);
insert into #appd4db_drive_check exec master.dbo.xp_fixeddrives;
select * from #appd4db_drive_check where mb_free < 100;
drop table #appd4db_drive_check;
SQL Server Lock Wait Types
There are many different types of lock within SQL Server.
AppDynamics for Databases will monitor time spent in any type of lock, so if you see excessive wait times on any of these lock types for a particular stored procedure or statement, you need to investigate the blocking session.
The following table gives a short description of each lock wait type.
Wait Type      Description
LCK_M_SCH_S    Schema stability
LCK_M_SCH_M    Schema modification
LCK_M_S        Share
LCK_M_U        Update
LCK_M_X        Exclusive
LCK_M_IS       Intent-Share
LCK_M_IU       Intent-Update
LCK_M_IX       Intent-Exclusive
LCK_M_SIU      Shared intent to update
LCK_M_SIX      Share-Intent-Exclusive
LCK_M_UIX      Update-Intent-Exclusive
LCK_M_BU       Bulk Update
LCK_M_RS_S     Range-Share-Share
LCK_M_RS_U     Range-Share-Update
LCK_M_RI_NL    Range-Insert-NULL
LCK_M_RI_S     Range-Insert-Shared
LCK_M_RI_U     Range-Insert-Update
LCK_M_RI_X     Range-Insert-Exclusive
LCK_M_RX_S     Range-Exclusive-Shared
LCK_M_RX_U     Range-Exclusive-Update
LCK_M_RX_X     Range-Exclusive-Exclusive
CXPACKET Wait in SQL Server
A CXPACKET wait in SQL Server occurs during parallel query execution, when a session is waiting on a parallel process to complete.
MSDN says that CXPACKET "Occurs when trying to synchronize the query processor exchange
iterator. You may consider lowering the degree of parallelism if contention on this wait type
becomes a problem". Whilst this is true, it's also often the case that CXPACKET waits are the
result of inefficient queries or stored procedures.
The following is an example of a SQL Server monitored instance with high CXPACKET wait:
In this example, several stored procedures had high CXPACKET waits because they included inefficient SQL statements which were not correctly indexed. A typical well-tuned database instance will not parallelize a query unless there is a reason to, such as a missing index or an incomplete WHERE clause.
Potential Solutions to CXPACKET Wait
To resolve long CXPACKET waits you first of all need to establish:
1. Is the problem related to inefficient SQL which can be tuned?
Use AppDynamics for Databases to quickly find out which stored procedures or batches are taking the time, and which have high CXPACKET wait. Once these have been identified, drill down to establish which individual SQL statements the wait is on. Once isolated, use a tool such as the SQL Server Index Tuning Wizard to check for missing indexes or out-of-date statistics, and fix where possible. This was the process used to solve the real-life example above: the top stored procedure included multiple select statements, but just one was the bottleneck, and it included an unindexed sub-query.
2. If the problem cannot be tuned with indexing
If the statement cannot be tuned using normal mechanisms (indexing, re-writing, and so on), then the solution may be to turn off parallelism, either for an individual query or for the whole server.
To find out the current configuration of parallelism you can run the following command:
sp_configure 'max degree of parallelism'
If max degree of parallelism = 0, you might want to turn off parallelism completely for the instance by setting max degree of parallelism to 1. You could also limit parallelism by setting max degree of parallelism to some number less than the total number of CPUs. For example, if you have 4 processors, set max degree of parallelism to 2.
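The check and the instance-wide change can be sketched in T-SQL; the MAXDOP value of 2 is illustrative, and because 'max degree of parallelism' is an advanced option it must first be made visible:

```sql
-- Make advanced options visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current setting (see the run_value column)
EXEC sp_configure 'max degree of parallelism';

-- Illustrative only: cap parallel queries at 2 CPUs instance-wide
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
```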
See the MSDN page for more information on CXPACKET and the Max Degree of Parallelism.
PAGEIOLATCH_EX Waits
PAGEIOLATCH_EX is a wait event often seen in SQL Server related to I/O. It corresponds to an exclusive I/O page latch.
This wait event occurs when a user needs a page that is not in the buffer cache. SQL Server has to first allocate a buffer page, and it then puts an exclusive PAGEIOLATCH_EX latch on the buffer while the page is transferred from disk to cache. After the write to cache finishes, the PAGEIOLATCH_EX latch is released.
Waits of this type are to be expected whenever SQL Server carries out I/O operations, but excessive waits of this type may indicate problems with the disk subsystem.
The following AppDynamics for Databases screenshot displays an instance with PAGEIOLATCH_EX waits:
RESOURCE_SEMAPHORE Waits
RESOURCE_SEMAPHORE waits occur when a query's memory request cannot be granted immediately due to other concurrent queries. High wait counts and wait times may indicate an excessive number of concurrent queries, or excessive memory request amounts.
High waits on RESOURCE_SEMAPHORE usually result in poor response times for all database users, and need to be addressed.
The AppDynamics for Databases Waits Report displays a time-series profile of wait events for
your monitored instance. The example below is taken from a SQL Server 2000 instance which
suffered from sporadic problems which resulted in a spike in RESOURCE_SEMAPHORE waits.
It is also useful to correlate high waits on RESOURCE_SEMAPHORE with the Memory Grants Pending and Memory Grants Outstanding counters of the SQL Memory Manager performance object. Higher values for these counters indicate a memory problem, especially a non-zero value for Memory Grants Pending.
The root cause of this type of memory problem is when memory-intensive queries, such as those
involving sorting and hashing, are queued and are unable to obtain the requested memory. The
solution would be to tune the offending queries, or manage their workload so that they are
executed at less busy times.
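On SQL Server 2005 and later, these counters can also be read with a query; this is a sketch using the sys.dm_os_performance_counters view (not available on SQL Server 2000):

```sql
-- Current values of the memory grant counters
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Memory Grants Pending', 'Memory Grants Outstanding');
```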
sp_who and sp_who2
Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers
more details than sp_who) to provide locking and performance-related information about current
connections to SQL Server. Sometimes, when SQL Server is very busy, you can't use Enterprise
Manager or Management Studio to view current connection activity via the GUI, but you can
always use these two commands from Query Analyzer or Management Studio, even when SQL
Server is very busy.
AppDynamics for Databases offers a detailed current activity screen which uses information from
the sp_who2 command. It also displays a graphical view of SQL Server CPU consumption and
Wait Events for the last 5 minutes and a snapshot of current memory usage broken down by
category.
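For example, both procedures can be run directly from a query window; sp_who2 also accepts an 'active' argument to filter out idle connections:

```sql
-- All current connections
EXEC sp_who2;

-- Only connections that are doing work right now
EXEC sp_who2 'active';
```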
SQL Server Execution Plan
If you want to analyze why your queries or stored procedures are performing poorly, the first place
to look is the execution plan. You can visualize the plan in SQL Server query analyzer, or other
products, and of course in AppDynamics for Databases.
In Query Analyzer you can see the Estimated Execution Plan or the Actual Execution Plan. If you
want to see an estimation of how SQL Server will execute your query select the "Display
Estimated Execution Plan" button on the toolbar; this shows you the plan without actually
executing the query (which should be very similar if not identical to the real plan anyway).
For long-running queries, or those involving DML (inserts, updates, deletes), this is the best option. The downside to the estimated plan is that if you have a stored procedure, or other batch T-SQL code, that uses temp tables, you cannot use "Display Estimated Execution Plan". The reason is that the Optimizer, which generates the estimated plan, does not execute the T-SQL, so the temporary table is never created. An error message of this type will be displayed if you try to display an estimated plan for a query containing temp tables:
Msg 208, Level 16, State 1, Line 107
Invalid object name '#temp_table'
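As a sketch, an estimated plan can also be captured from a query window with SET SHOWPLAN_TEXT, which must be the only statement in its batch (the query shown is illustrative):

```sql
-- Return estimated plans instead of executing the statements that follow
SET SHOWPLAN_TEXT ON;
GO

SELECT name FROM sys.objects WHERE type = 'U';
GO

SET SHOWPLAN_TEXT OFF;
GO
```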
What to Look for in the Execution Plan
If you have a poorly performing piece of T-SQL that you are trying to tune, the obvious place to start is the most costly step of the plan. The screenshot displays the plan for a stored procedure in the MSPetShop database. You can see the step of the plan with the greatest cost, and therefore the step which you can optimize.
Tuning T-SQL is of course a vast topic, but a couple of things to look out for include:
- Index or table scans: may indicate a need for better or additional indexes.
- Bookmark Lookups: consider changing the current clustered index, using a covering index, and limiting the number of columns in the SELECT statement.
- Filter: remove any functions in the WHERE clause, don't include views in your Transact-SQL code; additional indexes may be needed.
- Sort: does the data really need to be sorted? Can an index be used to avoid sorting? Can sorting be done at the client more efficiently?
(*taken from sql-server-performance.com)
WMI Permissions and Security
When monitoring CPU information on a Windows-based machine with AppDynamics for Databases, either with the OS collector part of the database collector or with the Server collector, Windows Management Instrumentation (WMI) is used to remotely gather the metrics. WMI is often frustrating to troubleshoot. This article explains some potential problems and pitfalls.
Minimum Security Requirements When Using WMI
WMI makes use of RPC. RPC listens on a well-known port (135) but then allocates a dynamic port for subsequent communication. Therefore, you need to configure your firewall to allow port 135 (always) and follow the dynamic RPC ports. This can be done without restricting the port range, and there are several articles on the internet that explain how.
To delegate permissions for WMI Control, run wmimgmt.msc. If you don't have a shortcut to the program, click the Start Menu and search for the executable.
Now step through the following instructions to confirm you have the correct permissions:
1. Right-click the WMI Control icon on the left and click Properties.
2. Click the Security tab.
3. Click the Root node of the tree, and click Properties.
4. Ensure that the named user account you would like to run AppDynamics for Databases as has the relevant permissions. The minimum permissions that your remote Windows account needs for AppDynamics for Databases are:
   Execute Methods
   Enable Account
   Remote Enable
If you miss any one of the three then you end up with one of the following errors:
Error=800706BA The RPC server is unavailable.
SWbemLocator
or
Error=80070005 Access is denied.
SWbemLocator
Preventing Unauthorized Remote Access to WMI
For Windows 2003 R2 SP2
You can also set up extra security in the Windows Distributed Component Object Model (DCOM) to prevent unauthorized users from accessing WMI remotely. The following steps restrict remote WMI access to the named user account under which you would like to run AppDynamics for Databases:
1. Add the user to the Performance Monitor Users group.
2. In Services and Applications, bring up the properties dialog of WMI Control. On the Security tab, highlight Root/CIMV2, click Security -> Add, add Performance Monitor Users, and enable the options Enable Account and Remote Enable.
3. Run dcomcnfg. Click Component Services -> Computers -> My Computer -> Properties -> COM Security, and then click Edit Limits for both Access Permissions and Launch and Activation Permissions. Add Performance Monitor Users and allow remote access, remote launch, and remote activation permissions.
4. In Component Services -> Computers -> My Computer -> DCOM Config -> Windows Management Instrumentation, give Remote Launch and Remote Activation privileges to the Performance Monitor Users group.
Collector Fails to Connect to SQL Server
The AppDynamics for Databases collector connects to SQL Server via JDBC (Java Database
Connectivity), which works over a TCP/IP protocol. The collector requires the following:
SQL Server must have TCP/IP protocol enabled.
You must enter the port number of the SQL Server listener port (the default is 1433)
Network connectivity must exist between your AppDynamics for Databases machine and the
SQL Server machine (for example make sure there is no blocking firewall).
The following error usually points to an incorrectly entered listener port number, or TCP/IP
disabled on the monitored SQL Server:
Unable to establish a connection.
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the
host has failed. java.net.ConnectException: Connection refused: connect
Failed Connection to SQL Server
If you have named or multiple SQL Server instances and have dynamically assigned ports for those instances, then:
- When configuring the collector for a specific instance, use only the host name for the "Hostname or IP Address:" field.
- For the "Database Listener Port:" field, use the dynamic port number that is in use by the instance you want to connect to.
The collector connection does not currently support using the browser service, so you must specify the dynamic port number associated with the instance you want to monitor.
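If you are unsure which dynamic port a named instance is listening on, one way to check from an existing connection (a sketch; requires SQL Server 2005 or later) is:

```sql
-- The TCP port used by the current connection to this instance
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```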
Oracle Tips and Tricks
With hundreds of configuration parameters and thousands of metrics to monitor, it's no small task
for Oracle DBAs to monitor the overall health of their Oracle databases.
AppDynamics for Databases takes a time-based approach to monitoring, allowing you to examine the performance of SQL statements over time and see where that time was spent, for example fetching, sorting, or waiting on a lock. The statements are ranked with the worst performing at the top. Data is also broken down by client, session, user, database, program, module, and host, allowing you to quickly drill up and down. Once you have identified a problematic statement, you can click its text to go to the SQL screen, where you can examine the execution plan.
Other screens within AppDynamics for Databases allow you to see the performance of sessions
currently connected, view database statistics and browse database objects.
Using the AppDynamics for Databases approach to monitoring, you only need to address problems when you see they are affecting your application SQL; if your SQL is running fast, why bother focusing on configuration parameters and metrics?
The AppDynamics for Databases performance management solution covers Oracle 8i, 9i, 10g, and 11g databases running on any hardware or OS platform, giving great coverage of the most popular and widely used database technologies of today.
Tiny sub-second snapshots build up a complete picture of what is happening on your Oracle instance. Within minutes, AppDynamics for Databases can capture and display which users, programs, machines, modules, and sessions are active within the instance, which schemas they are active in, and most importantly what SQL they are executing.
AppDynamics for Databases is designed to monitor busy production databases 24x7 without impacting the performance of the instance. It uses agentless technology, which allows it to monitor remotely, meaning that there is no agent consuming resources on the database machine. Being agentless also means that installation is rapid; all you need to provide is an Oracle user and network connectivity to the instance, nothing else.
The depth of data collected by AppDynamics for Databases is comprehensive, and allows detailed drill-down. An expert DBA can view the resource consumption profile of an Oracle instance, drill into a performance spike, and then find the underlying root cause in seconds.
AppDynamics for Databases maintains a repository of historical performance data. This gives the DBA the ability to see not only what is happening now, but also what has happened over the last day, the last week, or the last month. Historical performance data greatly facilitates problem resolution and allows the DBA to answer important questions such as: what happened to the online application yesterday to make it slow down, and why is the overnight batch job still running this morning at 8:55?
'db file sequential' Read Event
The db file sequential read event signifies that the user process is reading data into the SGA buffer
cache and is waiting for a physical I/O call to return. It corresponds to a single-block read.
Single block I/Os are usually the result of using indexes.
On rare occasions, full table scan calls can get truncated to a single-block call due to extent boundaries or buffers already present in the buffer cache; if so, these waits also show up as 'db file sequential read'. Usually, however, when Oracle carries out a full table scan it reads multiple blocks of data in sequence, and the wait state assigned to this is 'db file scattered read'... a somewhat confusing naming convention, given that many people think of a multi-block read as sequential I/O!
For further information on db file sequential read you can check columns in V$SESSION_WAIT for
the following information:
P1 - The absolute file number
P2 - The block being read
P3 - The number of blocks (should be 1)
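For example, the sessions currently waiting on this event, along with the file and block detail, can be listed with a query along these lines:

```sql
-- Sessions currently waiting on single-block reads
SELECT sid,
       p1 AS file#,   -- absolute file number
       p2 AS block#,  -- block being read
       p3 AS blocks   -- number of blocks (should be 1)
FROM   v$session_wait
WHERE  event = 'db file sequential read';
```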
The Wait State report will display a profile of Oracle wait events over time, and will allow you to
quickly spot excessive waits. See Create and Interpret Reports#SQL Wait Report.
AppDynamics for Databases has detailed monitoring and reporting related to Oracle I/O. Complete correlation between SQL and Oracle data file waits is possible, as well as a comprehensive report on physical I/O.
'read by other session' Wait Event
Read by other session is a new wait event in Oracle 10.1.
Definition
When information is requested from the database, Oracle will first read the data from disk into the
database buffer cache. If two or more sessions request the same information, the first session will
read the data into the buffer cache while other sessions wait. In previous versions this wait was
classified under the "buffer busy waits" event. However, in Oracle 10.1 and higher this wait time is
now broken out into the "read by other session" wait event. Excessive waits for this event are
typically due to several processes repeatedly reading the same blocks, e.g. many sessions
scanning the same index or performing full table scans on the same table. Tuning this issue is a
matter of finding and eliminating this contention.
Direct Path Reads
Direct path reads occur when a session reads buffers from disk directly into the PGA (as opposed
to the buffer cache in SGA). If the I/O subsystem does not support asynchronous I/Os, then each
wait corresponds to a physical read request.
If the I/O subsystem supports asynchronous I/O, then the process is able to overlap issuing read
requests with processing the blocks already existing in the PGA. When the process attempts to
access a block in the PGA that has not yet been read from disk, it then issues a wait call and
updates the statistics for this event. Hence, the number of waits is not necessarily the same as the
number of read requests (unlike db file scattered read and db file sequential read).
Below is a screenshot of a system with a large amount of direct path read waits:
Direct path reads can happen in the following situations:
Parallel slaves are used for scanning data.
Sorts are too large to fit in memory and therefore data is sorted on disk. This data is later
read back using direct reads.
The server process is processing buffers faster than the I/O system can return the buffers.
This can indicate an overloaded I/O system.
Enqueues
Enqueues are locks that coordinate access to database resources. This event indicates that the
session is waiting for a lock that is held by another session.
The name of the enqueue is included as part of the wait event name, in the form enq:
enqueue_type - related_details. In some cases, the same enqueue type can be held for different
purposes, such as the following related TX types:
enq: TX - allocate ITL entry
Waits for TX in mode 4 can occur if the session is waiting for an ITL (interested transaction list) slot in a block. This happens when the session wants to lock a row in the block, but one or more other sessions have rows locked in the same block, and there is no free ITL slot in the block. Usually, Oracle dynamically adds another ITL slot. This may not be possible if there is insufficient free space in the block to add an ITL. If so, the session waits for a slot with a TX enqueue in mode 4. This type of TX enqueue wait corresponds to the wait event enq: TX - allocate ITL entry.
The solution is to increase the number of ITLs available, either by changing the INITRANS or
MAXTRANS for the table (either by using an ALTER statement, or by re-creating the table with the
higher values).
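As a sketch, for a hypothetical table named orders (the table, index, and INITRANS values are illustrative; INITRANS only affects newly formatted blocks, so existing blocks need the table to be rebuilt, and indexes must be rebuilt after a MOVE):

```sql
-- Raise the initial ITL slot count used for new blocks of the table
ALTER TABLE orders INITRANS 10;

-- Rebuild existing blocks so they pick up the new setting
ALTER TABLE orders MOVE;

-- Indexes are left unusable by the MOVE and must be rebuilt, for example:
ALTER INDEX orders_pk REBUILD;
```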
Here is an example that visualizes a serious problem:
pga_aggregate_target Parameter for Performance Tuning
The parameter pga_aggregate_target specifies the PGA memory which is made available to the
Oracle server processes.
If pga_aggregate_target is set to a non-zero value, it automatically enables the workarea size policy, meaning that memory-intensive operations will be automatically sized. These operations include:
- SQL involving sorts, e.g. ORDER BY or GROUP BY clauses
- SQL involving hash joins
If pga_aggregate_target is non-zero, the minimum value is 10M, but it can be much greater if required by your Oracle workload.
An Oracle instance with an inadequate pga_aggregate_target, that is, a setting that is too low, will often suffer from I/O-related bottlenecks, and Oracle will often spend time in the direct path read temp and direct path write temp wait states.
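One way to gauge whether the current setting is adequate is to query V$PGASTAT; this is a sketch, and statistic names can vary slightly between Oracle versions:

```sql
-- PGA target, actual allocation, and how often work areas ran in memory
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');
```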
The screenshot below displays this problem in a pre-production environment. The top queries in this case took in excess of 5 seconds on average to complete, while the instance suffered from an I/O bottleneck related to direct path I/O caused by disk sorts. Once pga_aggregate_target had been re-sized appropriately, these queries improved dramatically as sorts were performed in memory.
To dynamically re-size pga_aggregate_target run the following command (this example changes
the value to 100 megabytes):
SQL> alter system set pga_aggregate_target=100M;
System altered.
The bottom line is that you need to monitor in order to understand exactly what's going on in your
Oracle instance, and tune accordingly!
MySQL Tips and Tricks
AppDynamics for Databases is a MySQL monitoring tool which gives a database administrator
(DBA) the ability to monitor what is happening, in real time and historically, within the MySQL
server. It was one of the first deep-dive database monitoring solutions available for MySQL,
pre-dating MySQL's own Query Analyzer.
AppDynamics for Databases is an essential tool for any IT organization using MySQL; it allows the DBA to work reactively to resolve current and past MySQL performance issues and, potentially more importantly, to work proactively, since visibility into the user activity of the database allows the team to prioritize and focus on the needs of the business.
Today's web-based applications are getting more and more complex, and that multi-tiered complexity often leads to performance challenges. Research shows that when performance issues occur in one of today's typical multi-tier web applications, it can often take many man-hours of effort to locate the root cause. AppDynamics for Databases can dramatically reduce the time needed to find the issue by correlating historical database activity and focusing on the problem. Reducing the mean time to resolution (MTTR) is the key value proposition of AppDynamics for Databases.
Check Database and Table Space in MySQL
This topic applies to MySQL Version 5 and newer.
If you've ever wondered how much space your MySQL databases are consuming, or even the
tables within those databases, then you may like one of these small scripts issued against the
INFORMATION_SCHEMA tables.
Database Space Usage Report
SELECT s.schema_name,
       CONCAT(IFNULL(ROUND((SUM(t.data_length)+SUM(t.index_length))/1024/1024,2),0.00),"Mb") total_size,
       CONCAT(IFNULL(ROUND(((SUM(t.data_length)+SUM(t.index_length))-SUM(t.data_free))/1024/1024,2),0.00),"Mb") data_used,
       CONCAT(IFNULL(ROUND(SUM(data_free)/1024/1024,2),0.00),"Mb") data_free,
       IFNULL(ROUND((((SUM(t.data_length)+SUM(t.index_length))-SUM(t.data_free))/((SUM(t.data_length)+SUM(t.index_length)))*100),2),0) pct_used
FROM   INFORMATION_SCHEMA.SCHEMATA s, INFORMATION_SCHEMA.TABLES t
WHERE  s.schema_name = t.table_schema
GROUP BY s.schema_name
ORDER BY total_size DESC;
Table Space Usage Report
SELECT s.schema_name, t.table_name,
       CONCAT(IFNULL(ROUND((SUM(t.data_length)+SUM(t.index_length))/1024/1024,2),0.00),"MB") total_size,
       CONCAT(IFNULL(ROUND(((SUM(t.data_length)+SUM(t.index_length))-SUM(t.data_free))/1024/1024,2),0.00),"MB") data_used,
       CONCAT(IFNULL(ROUND(SUM(t.data_free)/1024/1024,2),0.00),"MB") data_free,
       IFNULL(ROUND((((SUM(t.data_length)+SUM(t.index_length))-SUM(t.data_free))/(SUM(t.data_length)+SUM(t.index_length)))*100,2),0) pct_used
FROM INFORMATION_SCHEMA.SCHEMATA s
JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema
GROUP BY s.schema_name, t.table_name
ORDER BY SUM(t.data_length)+SUM(t.index_length) DESC;
Check for Tables that have Free Space
SELECT s.schema_name, t.table_name,
       IFNULL(ROUND(SUM(t.data_free)/1024,2),0.00) data_free
FROM INFORMATION_SCHEMA.SCHEMATA s
JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema
GROUP BY s.schema_name, t.table_name
HAVING data_free > 100
ORDER BY data_free DESC;
This SQL checks all tables for free space. It displays tables with more than 100 KB
of free space, so you may want to tweak the HAVING clause, but the idea is that it can spot tables
which may benefit from an OPTIMIZE TABLE command.
Connect to MySQL Using SSH
On this page:
Set up the Tunnel
If you do not have direct network connectivity from your client to your MySQL database, e.g. if a
firewall is blocking port communication, then an SSH tunnel is often the best and easiest solution.
An SSH tunnel can be set up using PuTTY (freely downloadable from the official PuTTY site) with
the MySQL port (3306 by default) forwarded. Once a tunnel has been set up, a port on your
local machine will be listening and forwarding to your remote server's port, which means you can
effectively connect to the remote server's MySQL database as though it were running on your local
box.
Set up the Tunnel
1. Create a session in PuTTY - Insert the hostname or IP address of the machine running
MySQL.
2. Select the Tunnels tab in the SSH section.
In the Source port text box enter 3306 (or any other unused port that you would like to
connect your client to). This is the port PuTTY will listen on, on your local machine; it can be
any standard Windows-permitted port. In the Destination field below Source port enter
127.0.0.1:3306. This means: from the server, forward the connection to IP 127.0.0.1, port
3306. If your MySQL Server listens on a non-standard port, or on a specific network
interface, insert those details instead, in the form IP:Port.
3. Click Add and then Open the connection.
Note: When you want to set up your collector to connect to this remote MySQL Server you
need to specify the hostname/IP address as localhost, and the port number as the one you
configured as the Source Port in the tunnel.
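If you prefer a command-line client to PuTTY, the equivalent tunnel can be sketched with OpenSSH; the username and hostname below are placeholders for your own server details:

```shell
# Listen on local port 3306 and forward connections, via the SSH server,
# to 127.0.0.1:3306 on the remote machine ("user" and "mysql-host.example.com"
# are placeholders). -N opens the tunnel without running a remote command.
ssh -N -L 3306:127.0.0.1:3306 user@mysql-host.example.com
```

As with the PuTTY setup, the collector then connects to localhost on the chosen source port.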
Copying to tmp Table
Copying to tmp table is a common thread state within MySQL which occurs when the MySQL
server is copying to a temporary table in memory.
MySQL needs to create temporary tables under certain circumstances, for example:
Sorting operations, e.g. if a query contains an ORDER BY or GROUP BY clause
If the query contains DISTINCT
To determine whether a query requires a temporary table, use EXPLAIN and check the Extra
column to see whether it says "Using temporary".
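As a sketch of that check, using the statistics table that appears later on this page (any table with a column you can GROUP BY will do):

```sql
-- If the Extra column of the output contains "Using temporary",
-- this query requires a temporary table.
EXPLAIN SELECT statistic_id, SUM(statistic_value)
FROM statistics
GROUP BY statistic_id;
```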
According to the MySQL site:
Some conditions prevent the use of an in-memory temporary table, in which case the server uses
an on-disk table instead:
Presence of a BLOB or TEXT column in the table
Presence of any column in a GROUP BY or DISTINCT clause larger than 512 bytes
Presence of any column larger than 512 bytes in the SELECT list, if UNION or UNION ALL
is used
Queries which involve disk-based temporary tables are often slow, and a large number of queries
of this type can result in a serious performance bottleneck.
The example below shows how the same query performs when it uses a memory-based temporary
table, and then how it performs using a disk-based temporary table.
This is for indicative purposes only; you probably don't want to set such a low value for
tmp_table_size on your MySQL Server! The query cache has been disabled for the purposes of
the test.
The following query will produce a list of statistic_id and the summation of their values. The
GROUP BY directs MySQL to use a temporary table to sort and process the results.
SELECT statistic_id, sum(statistic_value)
FROM statistics
WHERE statistic_type = 'mysql'
GROUP BY statistic_id;
Average response time for the query is 0.23 seconds
Now for the second test we will reduce the size of tmp_table_size to just 1k, which will mean that
our MySQL server does not have adequate space in memory to process the query.
mysql> set tmp_table_size = 1024;
Query OK, 0 rows affected (0.00 sec)
This time the query takes 1.5 seconds to execute, roughly 6.5 times longer than
before, because MySQL now needs to create a temporary table on disk.
So... the bottom line is:
Temporary tables are used frequently in MySQL, and can be a source of bottlenecks
particularly if the tables need to be created on disk.
Although the above example shows you how you can quickly degrade performance by
reducing the size of tmp_table_size, the inverse could be true if your MySQL Server
tmp_table_size value is configured too low i.e. you could dramatically improve performance
and throughput by increasing the value.
You need to monitor to get visibility into exactly what is occurring.
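One way to get that visibility, sketched here for MySQL 5.x, is to compare the Created_tmp_disk_tables and Created_tmp_tables status counters; a high disk-to-total ratio suggests tmp_table_size (or max_heap_table_size) may be too low:

```sql
-- Raw counters since server start
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- Percentage of temporary tables that were created on disk
SELECT (disk.VARIABLE_VALUE / total.VARIABLE_VALUE) * 100 AS pct_tmp_tables_on_disk
FROM information_schema.global_status disk,
     information_schema.global_status total
WHERE disk.VARIABLE_NAME  = 'CREATED_TMP_DISK_TABLES'
  AND total.VARIABLE_NAME = 'CREATED_TMP_TABLES';
```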
Monitor MySQL Connections
Many people will have experienced a "Too many connections" error when trying to connect to the
mysqld server. This means that all available connections are in use by other clients, and it has no
doubt impacted the availability of your application. As this is one of the main causes of MySQL
availability issues, it is essential to monitor the number of connections in use, and receive a timely
warning if you are approaching the limit of your MySQL Server.
The maximum number of available connections is defined by the max_connections server variable.
It defaults to 151 in later versions of MySQL, and people often increase it by hundreds to reduce
the chances of a too many connections error occurring.
A nice easy way to monitor connections in use compared with the maximum available connections
is with the SQL provided below:
select ( pl.connections / gv.max_connections ) * 100 as percentage_used_connections
from ( select count(*) as connections from information_schema.processlist ) as pl,
     ( select VARIABLE_VALUE as max_connections
       from information_schema.global_variables
       where variable_name = 'MAX_CONNECTIONS' ) as gv
This will output a percentage figure, e.g. X% of your connections are in use. Why not
plug it into your alerting framework (Alerts), set a threshold, e.g. 80%, and receive a
warning as soon as your connection count creeps up into the danger zone?
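A complementary check is the Max_used_connections status variable, which records the high-water mark of simultaneous connections since the server started; comparing it with max_connections shows how close you have ever come to the limit:

```sql
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW VARIABLES LIKE 'max_connections';
```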
Monitor the MySQL Log File
MySQL provides a built-in function called load_file which is capable of reading a file from the file
system and returning the results as a single column single row result set via SQL. This function
can be used to read the error log using the following technique:
1. Locate the MySQL error log. From a command shell, enter the following:
C:\> C:\AppD4DBInstallDir\mysql\bin\startClient.bat
mysql> show variables like 'log_error';
The system should return something like the following:
+---------------+----------------------------------------------+
| Variable_name | Value                                        |
+---------------+----------------------------------------------+
| log_error     | C:\AppD4DBInstallDir\mysql\data\hostname.err |
+---------------+----------------------------------------------+
2. Check that you can read the file from the command line:
mysql> select load_file('C:\\AppD4DBInstallDir\\mysql\\data\\hostname.err');
(Note the doubled backslashes: MySQL treats a single backslash in a string literal as an escape character.)
You can then use a custom SQL alert to monitor the log file by creating a diff alert, which will
notify you of changes to the log file:
3. Click Alerts->New Alert and then click Custom SQL Alert.
4. Paste the "select load_file( 'file name...' )" command into the SQL Text box.
5. Set Threshold to "0".
6. For the Return Type specify "Diff Results".
7. Specify a name in the Alert Name field.
AppDynamics for Databases will report on differences in the new log messages.
MySQL Connections
On this page:
"Too many connections" Error
"Too many connections" Error
A "Too many connections" error when trying to connect to the mysqld server means that all
available connections are in use by database clients.
The number of connections allowed is controlled by the max_connections system variable. Its
default value is 100 in older versions of MySQL and 151 in later versions. If you need to support
more connections, you should set a larger value for this variable.
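For example, a sketch of raising the limit at runtime (the value 300 is illustrative only; size it for your workload and available RAM):

```sql
-- Takes effect immediately for new connections; requires the SUPER privilege
SET GLOBAL max_connections = 300;
```

To make the change survive a restart, also set max_connections under the [mysqld] section of my.cnf (or my.ini on Windows).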
MySQL actually allows max_connections+1 clients to connect. The extra connection is reserved
for use by accounts that have the SUPER privilege. The AppDynamics user should have the SUPER
privilege, which means that AppDynamics for Databases can still connect and highlight the issue
when the "Too many connections" error occurs.
According to MySQL, the maximum number of connections MySQL can support depends on the
quality of the thread library on a given platform. Linux should be able to support 500-1000
simultaneous connections, depending on how much RAM you have and what your clients are
doing. Static Linux binaries provided by MySQL AB can support up to 4000 connections.
You should also research the best approach to MySQL connection handling supported by your
language (such as PHP or Java) or a third-party DB library (such as PEAR). In MySQL it is
relatively easy to create and destroy connections; therefore, connection pooling is not always the
best option. It may be best to open a new connection for a client request and then explicitly close
the connection at the end of the request.
MySQL Explain Plan
The MySQL explain plan shows how the MySQL optimizer has decided to run a SELECT
statement and access the data. MySQL can only explain SELECT statements.
The command syntax for viewing the EXPLAIN output is:
explain [select statement]
For example:
explain SELECT t0.id AS id1, t0.name AS name2, t0.price AS price3,
t0.description AS description4
FROM product t0
WHERE t0.id = 1
The AppDynamics for Databases window displays the explain output for this command:
The columns in the Explain plan table are:
id: The SELECT identifier. This is the sequential number of the SELECT within the query.
select_type: The type of SELECT, which can be any of the following:
SIMPLE: Simple SELECT (not using UNION or subqueries)
PRIMARY: Outermost SELECT
UNION: Second or later SELECT statement in a UNION
DEPENDENT UNION: Second or later SELECT statement in a UNION, dependent on outer query
UNION RESULT: Result of a UNION
SUBQUERY: First SELECT in subquery
DEPENDENT SUBQUERY: First SELECT in subquery, dependent on outer query
DERIVED: Derived table SELECT (subquery in FROM clause)
table: The table to which the row of output refers.
type: The join type.
possible_keys: The possible_keys column indicates which indexes MySQL can choose from
to find the rows in this table.
key: The key column indicates the key (index) that MySQL actually decided to use. The key
is NULL if no index was chosen. To force MySQL to use or ignore an index listed in the
possible_keys column, use FORCE INDEX, USE INDEX, or IGNORE INDEX in your query.
key_len: The key_len column indicates the length of the key that MySQL decided to use.
The length is NULL if the key column says NULL.
ref: The ref column shows which columns or constants are compared to the index named in
the key column to select rows from the table.
rows: The rows column indicates the number of rows MySQL believes it must examine to
execute the query.
filtered: An estimated percentage of table rows that will be filtered by the table condition.
Extra: This column contains additional information about how MySQL resolves the query.
MySQL Optimize Table Command
The MySQL OPTIMIZE TABLE command effectively defragments a MySQL table and is very useful
for tables which are frequently updated and/or deleted.
Consider a table called articles which has many thousands of rows that are often inserted,
updated and deleted. The table contains variable-length column data types:
Copyright © AppDynamics 2012-2015
Page 33
mysql> desc articles;
+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| id            | int(11)      | NO   | PRI | NULL    | auto_increment |
| content       | text         | NO   |     | NULL    |                |
| author_id     | int(11)      | YES  |     | NULL    |                |
| article_title | varchar(120) | YES  |     | NULL    |                |
| article_hash  | int(11)      | YES  |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)
The size of the table on disk is about 190MB. Query the table on a column that is indexed to see
the average query response time:
mysql> select count(*) from articles where article_title like 'The%';
+----------+
| count(*) |
+----------+
|    15830 |
+----------+
1 row in set (0.63 sec)
Optimize the table with the following command:
mysql> optimize table articles;
+----------------+----------+----------+----------+
| Table          | Op       | Msg_type | Msg_text |
+----------------+----------+----------+----------+
| books.articles | optimize | status   | OK       |
+----------------+----------+----------+----------+
1 row in set (6.27 sec)
The optimization has the effect of defragmenting the table and reducing the size of the table on
disk down to 105 MB. It also has a very positive effect on query performance, reducing the select
query response time from 0.63 to 0.39 seconds.
Note: The MySQL query cache was turned off for this demonstration.
Copyright © AppDynamics 2012-2015
Page 34
mysql> select count(*) from articles where article_title like 'The%';
+----------+
| count(*) |
+----------+
|    15830 |
+----------+
1 row in set (0.39 sec)
MySQL query_cache_size
The MySQL Query Cache can have a huge positive impact on your database performance if you
have a database which processes lots of reusable SELECT statements. By reusable I am referring
to statements which are repeated by multiple users: if user A executes a SELECT, and user B then
issues the exact same statement, the MySQL Query Cache will just return the results of the
first execution without needing to re-execute the SQL.
What are the Query Cache Key Performance Indicators?
This article addresses what metrics to look for when assessing the benefits of your query cache.
1. Current Size compared with maximum available size. To calculate the percentage used
value for the query cache you can use the following formula:
((query_cache_size-Qcache_free_memory)/query_cache_size)*100
query_cache_size is a variable, which can be found from a show variables like
'query_cache_size'; command.
Qcache_free_memory is a status variable which can be retrieved from show status like
'Qcache_free_memory';
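On MySQL 5.x, where GLOBAL_STATUS is exposed through INFORMATION_SCHEMA, the same calculation can be sketched as a single statement:

```sql
SELECT ((@@query_cache_size - gs.VARIABLE_VALUE) / @@query_cache_size) * 100
       AS pct_query_cache_used
FROM information_schema.global_status gs
WHERE gs.VARIABLE_NAME = 'QCACHE_FREE_MEMORY';
```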
2. The Query Cache Hit Rate
The percentage hit rate on the cache can be calculated as follows:
((Qcache_hits/(Qcache_hits+Qcache_inserts+Qcache_not_cached))*100)
This percentage figure shows how much the query cache is used, e.g. a figure of 33% says
that of all SELECT statements executed, 33% of them can be satisfied by the cache and
hence do not have to be re-executed.
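The hit-rate formula can likewise be sketched as one query against INFORMATION_SCHEMA (MySQL 5.x):

```sql
SELECT (h.VARIABLE_VALUE /
        (h.VARIABLE_VALUE + i.VARIABLE_VALUE + n.VARIABLE_VALUE)) * 100
       AS query_cache_hit_rate
FROM information_schema.global_status h,
     information_schema.global_status i,
     information_schema.global_status n
WHERE h.VARIABLE_NAME = 'QCACHE_HITS'
  AND i.VARIABLE_NAME = 'QCACHE_INSERTS'
  AND n.VARIABLE_NAME = 'QCACHE_NOT_CACHED';
```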
3. Hits to Insert Ratio and Insert to Prune Ratio
These two ratios are calculated by the following two formulae:
Qcache_hits/Qcache_inserts
Qcache_inserts/Qcache_lowmem_prunes
A ratio of Hits to Inserts is displayed in order to show the Query Cache effectiveness. A high
ratio of hits to inserts tells us that there are lots of identical SQL statements being run on the
database which are therefore being serviced directly from the cache. A low ratio shows that
the cache is not much utilized.
The ratio of Inserts to Prunes represents how many times SQL queries are being inserted
into the cache compared with how many times a query is being removed from the cache
(pruned). This is also a good indicator of SQL reuse on the database and hence query
cache effectiveness.
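Both ratios can be computed in one query; note that MySQL's actual prune counter is named Qcache_lowmem_prunes, and the division returns NULL if no prunes have occurred yet:

```sql
SELECT hits.VARIABLE_VALUE    / inserts.VARIABLE_VALUE AS hits_to_inserts,
       inserts.VARIABLE_VALUE / prunes.VARIABLE_VALUE  AS inserts_to_prunes
FROM information_schema.global_status hits,
     information_schema.global_status inserts,
     information_schema.global_status prunes
WHERE hits.VARIABLE_NAME    = 'QCACHE_HITS'
  AND inserts.VARIABLE_NAME = 'QCACHE_INSERTS'
  AND prunes.VARIABLE_NAME  = 'QCACHE_LOWMEM_PRUNES';
```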
MySQL Query Cache Performance
The MySQL Query Cache is a powerful feature which, when used correctly, can give big
performance gains on your MySQL instance. The Query Cache works by storing in memory the
SQL query (or stored procedure call) along with the retrieved results. While the SQL query
is in the cache, if it is called again MySQL does not have to rerun it; it can quickly retrieve the
results from the cache and send them back. Of course, if there has been an insert, update or
delete on a table which is referenced by the cached query, then the statement will be forced to
rerun. While the Query Cache is great for applications with a high number of reads versus writes,
there are also a couple of things which will make your queries uncacheable:
Use of functions, such as CURRENT_DATE, RAND and user defined functions
Queries that use bind variables
AppDynamics for Databases shows the Key Performance Indicators of the MySQL Query Cache
in graphical format.
Graphical View of current allocation compared to maximum
Percentage Utilization figures
A ratio of Hits to Inserts is displayed in order to show the Query Cache effectiveness. A high
ratio of hits to inserts tells us that there are lots of identical SQL statements being run on the
database and are therefore being serviced directly from cache. A low ratio shows that the
cache is not much utilized.
The ratio of Inserts to Prunes represents how many times SQL queries are being inserted
into the cache compared with how many times a query is being removed from the cache
(pruned). This is also a good indicator of SQL reuse on the database and hence query
cache effectiveness.
The maximum size (in bytes) of the Query Cache is set in the database variable query_cache_size.
A setting of 0 will automatically disable the cache.
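A sketch of inspecting and resizing the cache at runtime (the 64 MB figure is illustrative only):

```sql
-- Current configuration and runtime counters
SHOW VARIABLES LIKE 'query_cache%';
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- Resize at runtime; requires the SUPER privilege. The value is in bytes.
SET GLOBAL query_cache_size = 64 * 1024 * 1024;
```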
Repair with keycache Performance Issues
If you've ever had to alter a large table, or even rebuild a table, and encountered an extremely long
load process, it could be because MySQL is creating the table indexes in a sub-optimal way.
If the total size of the indexes required by the new table exceeds the myisam_max_sort_file_size
parameter, or if there is insufficient temporary space available in the tmp directory, then MySQL
will elect to "Repair with keycache" rather than the more performant "Repair by sorting".
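Before a large rebuild you can check the relevant settings, as sketched below; "Repair by sorting" is only chosen when the indexes being rebuilt fit within myisam_max_sort_file_size and there is room in tmpdir:

```sql
SHOW VARIABLES LIKE 'myisam_max_sort_file_size';
SHOW VARIABLES LIKE 'tmpdir';
```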
For example:
I have a table called statistics which currently has 94 MB of data (i.e. the size of the MyISAM
statistics.MYD file) and 129 MB of indexes (i.e. the size of the statistics.MYI file).
If I want to populate a new table (called stat_test in the below example) with the contents of this
table, MySQL will first create the data file, and then create the necessary indexes. If the
myisam_max_sort_file_size is configured too low, then MySQL will opt for the Repair with
keycache method.
Note: In this case we will force myisam_max_sort_file_size to be just 10MB. In reality your default
setting could be 2GB or more, and this issue would only affect very large tables.
mysql> truncate table stat_test;
mysql> set global myisam_max_sort_file_size=10485760;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into stat_test select * from statistics;
Query OK, 3258166 rows affected (3 min 0.71 sec)
Records: 3258166 Duplicates: 0 Warnings: 0
mysql> set global myisam_max_sort_file_size=2147483648;
Query OK, 0 rows affected (0.00 sec)
mysql> truncate table stat_test;
mysql> insert into stat_test select * from statistics;
Query OK, 3258166 rows affected (1 min 50.43 sec)
Records: 3258166 Duplicates: 0 Warnings: 0
The Repair by sorting method is over a minute faster in this case, or about 64% faster.
AppDynamics for Databases can display both executions, and also the time spent in each wait
state.