Hi,
What does this deadlock graph indicate, and how can I resolve the issue?
Hi,
What is the use of AlwaysOn Availability Groups in SQL Server 2012? In which cases would we use them?
Please provide information on the above.
Thanks,
Sreekanth Kancharla
Hi,
Why are SQL database files called "virtual files"? Take sys.dm_io_virtual_file_stats - the definition of this DMV is "Returns I/O statistics for data and log files. This dynamic management view replaces the fn_virtualfilestats function."
This DMV returns I/O statistics for data and log files, so why isn't it called dm_io_physical_file_stats or dm_io_database_file_stats? Why did the SQL team choose the name "virtual_file"?
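For context, a minimal query against the DMV in question looks like this; passing NULL, NULL asks for all databases and all files:

```sql
-- Per-file I/O statistics for every database on the instance.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.[file_id],
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```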
Ramdas Singh
We have a table which has close to 400 columns, about 950 million rows, and 3400+ partitions. There are 4 statistics created with indexes and another 40+ column statistics. When I tried updating all statistics on the table, I noticed SQL Server updates a single statistic at a time; if I send multiple UPDATE STATISTICS commands, they block each other. The server is running SQL Server 2012 Enterprise.
Is there any way I can update all statistics on the table in a single table scan, using as many threads as needed to update all statistics at once, instead of scanning the table for each statistic?
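For reference, the commands involved look like the sketch below (dbo.MyBigTable and the statistics name are placeholders). As far as I know, even the single-statement ALL form may still issue its own scan per statistics object, so this does not by itself achieve the single-scan behaviour being asked about:

```sql
-- Update every statistics object on the table with a full scan
-- in one statement (SQL Server may still scan per statistic):
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN, ALL;

-- Or target a single statistics object by name:
UPDATE STATISTICS dbo.MyBigTable MyColumnStat WITH FULLSCAN;
```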
Thank you
Gokhan Varol
It seems that MSDN only says that system_internals_partition_columns, system_internals_partitions, etc. are internal views, but does not provide any further information about their columns. Where can I find out more about them?
Thanks
Hi All,
Is SQL Server 2008 R2 Express with Advanced Services free for its lifetime?
And can I use it as a witness server in mirroring?
Hi Friends,
I am facing blocking issues a number of times. How can I resolve the blocking without killing sessions? Please give me the solution step by step.
Thanks,
Purna
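A common first step when diagnosing blocking, rather than killing sessions, is to see who is waiting on whom; a minimal sketch:

```sql
-- Sessions currently blocked, the session blocking them,
-- and what they are waiting on:
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```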
Hi All. Hope this makes sense...
When you query sys.dm_os_performance_counters, you get perf counter data normalized something like:
COUNTER_1, VALUE
COUNTER_2, VALUE
If you get the same counter data out of Performance Monitor, it's denormalized something like:
COUNTER_1, VALUE, COUNTER_2, VALUE
My question is: in what format do you store counter data for your baselining, and do you modify it for display/analysis purposes? I'm thinking that normalized is better for display in, for example, an Excel pivot table, whereas denormalized is better for display in Performance Monitor.
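For illustration, the normalized shape is what the DMV returns directly; a minimal sketch (the counter names are just examples):

```sql
-- Counter data in the normalized (one row per counter) shape:
SELECT [object_name],
       counter_name,
       instance_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec', N'User Connections');
```

Denormalizing this into one column per counter for PerfMon-style display could then be done with a PIVOT at query time, which would let the stored baseline stay normalized.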
Thanks!
Hi,
I am trying to implement auditing on SQL Server 2012. Everything went fine as per the documentation, except that the events I monitor (a SELECT statement on some tables) are not recorded.
What am I missing ?
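For comparison, a minimal database-level audit of SELECTs looks roughly like this (the audit names, file path, database, and table are all placeholders). A common gotcha is that the server audit, not just the audit specification, must be set to STATE = ON before anything is recorded:

```sql
USE [master];
GO
CREATE SERVER AUDIT [Audit_Selects]
TO FILE (FILEPATH = N'C:\AuditLogs\');      -- placeholder path
GO
ALTER SERVER AUDIT [Audit_Selects] WITH (STATE = ON);
GO
USE [MyDatabase];                            -- placeholder database
GO
CREATE DATABASE AUDIT SPECIFICATION [Audit_Select_Spec]
FOR SERVER AUDIT [Audit_Selects]
ADD (SELECT ON dbo.MyTable BY [public])      -- placeholder table
WITH (STATE = ON);
GO
```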
Hi All,
I have a table with 1.4 billion rows in it - it is a many-to-many table - and I have been carrying out some work before Christmas towards partitioning it.
The length of the row is only 33 bytes; however, there is a non-unique clustered index, so the row size is between 33 and 41 bytes including the uniquifier column that gets created.
When I query the index physical stats, based on what it reports the table should take approx 55-60 GB. However, when I then check the free space (it is on its own 100 GB disk), it shows 85 GB used, which calculates out at 62 bytes per row.
Other systems have a similar setup and their numbers calculate out correctly.
Has anybody got any pointers as to what the problem is?
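In case it helps others reproduce the numbers, the per-level page and record-size figures can be pulled with a sketch like this (the database and table names are placeholders); DETAILED mode reports avg_record_size_in_bytes and page counts for every index level:

```sql
SELECT ips.index_level,
       ips.page_count,
       ips.avg_record_size_in_bytes,
       ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'MyDatabase'),           -- placeholder database
         OBJECT_ID(N'dbo.MyBigTable'),   -- placeholder table
         NULL, NULL, 'DETAILED') AS ips;
```

A low avg_page_space_used_in_percent at leaf level (internal fragmentation) is one thing that can make on-disk size much larger than rows-times-row-size arithmetic suggests.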
Cheers
Steve
Hello All,
We have a SQL Server 2008 R2 SP1 active/passive cluster running on a VM.
SQL Server Agent jobs are not running. If I manually try to run a job, it gives the error below:
'SQL Server Agent is not currently running so it cannot be notified of this action. (Microsoft SQL Server, Error: 22022)'
I checked that the SQL Server Agent service is running, and the service account it runs under has 'sa' access to SQL Server.
'Agent XPs' has been set to 1.
If I browse to the SQLAGENT.OUT file, it is completely blank.
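For reference, the configuration setting mentioned above can be checked and re-applied like this:

```sql
-- Show and (re)enable the Agent XPs setting; the run_value column
-- of sp_configure output confirms what is actually in effect.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE;
```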
Any help on this will be appreciated. Thank You.
-Kranp.
Hi All
I created an update trigger for a table: if a specific column is updated, my trigger saves the old value, the new value, and the SQL statement to another table. The old and new values are in the "inserted" and "deleted" virtual tables, but how do I get the SQL statement that caused the update?
I use this code to retrieve the sql statement:
select command, text FROM sys.dm_exec_requests er cross apply sys.dm_exec_sql_text(er.sql_handle) AS st WHERE er.session_id = @@SPID
The returned result is not the SQL statement that caused the update but the "CREATE TRIGGER" statement - I tested the trigger just after creating it.
What am I missing?
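One hedged workaround: inside a trigger, sys.dm_exec_sql_text on the current request resolves to the trigger's own definition, so a sketch like the one below captures the client's input buffer instead. Note that EventInfo is limited to 4000 characters, so very long batches get truncated:

```sql
-- Inside the trigger body: capture the batch the client sent,
-- rather than the trigger's own definition.
DECLARE @buffer TABLE (EventType nvarchar(30),
                       Parameters int,
                       EventInfo nvarchar(4000));

DECLARE @cmd nvarchar(100) =
    N'DBCC INPUTBUFFER(' + CAST(@@SPID AS nvarchar(10)) + N') WITH NO_INFOMSGS';

INSERT INTO @buffer (EventType, Parameters, EventInfo)
EXEC (@cmd);

-- @buffer.EventInfo now holds the submitted batch text, which can be
-- written to the audit table alongside the inserted/deleted values.
```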
Location: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA
USE [master]
GO
ALTER DATABASE [RdmStoreInformation] SET EMERGENCY
GO
ALTER DATABASE [RdmStoreInformation] SET SINGLE_USER
GO
DBCC CHECKDB ([RdmStoreInformation], REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE [RdmStoreInformation] SET MULTI_USER
GO
ALTER DATABASE [RdmStoreInformation] SET ONLINE
GO
This is an issue I had to fix, and its solution is listed here for your reference (I gathered all the information by surfing the web).
Hi,
I am working on an Oracle to SQL Server migration project and need help with the estimation of stored procedures. What criteria need to be used for estimation? Currently I need to provide estimates for around 350 stored procedures which are inside Oracle packages. Is there any way, apart from digging into each procedure, to come up with the estimates?
Regards
Manoj
SQL Server 2008 Fully Patched Enterprise Edition
Database in Full Recovery Model
Logs backed up every 15 minutes
All indexes fully optimized
=================================
Thanks for looking at my question.
So here we go:
I have a single table for holding transactions for an inventory system.
There are 2 applications that INSERT/UPDATE records into this table in near real time (sometimes running concurrently) and flag the records as "Ready to Process"
There is 1 application that processes those "Ready to Process" records and it is primarily running UPDATES.
All applications wrap their separate interactions with the table in transactions with commits.
So everything is working, or I should say trying to work, on this small set of records. As such, we get one or two timeouts a day trying to process the records, and one or two deadlocks a week.
Is there anything I can do AT THE TABLE OR DATABASE level to help handle this data orgy!! Keep in mind we are not talking about processing thousands of records in a short period/batch - more like a few hundred.
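One database-level option often suggested for this reader/writer mix is read committed snapshot isolation, which lets readers see the last committed row version instead of blocking behind writers. A hedged sketch (the database name is a placeholder); note this helps reader/writer blocking but does not remove writer/writer deadlocks between the two inserting applications:

```sql
-- Enable read committed snapshot isolation; this is a database-level
-- change, and WITH ROLLBACK IMMEDIATE disconnects other sessions
-- so the option can be applied.
ALTER DATABASE [InventoryDb]        -- placeholder name
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```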
Thanks!
OS: Windows 2008 R2
SQL Server: 2008 R2 SP2
OS Memory: 16 GB
SQL Server Max Memory: 12 GB
Database is in SIMPLE recovery mode.
1. I saw some blocking when executing sp_who2, and ran Paul Randal's wait-stats ("tell me where it hurts") script.
The results show the LCK_M_U and LCK_M_IX wait stats at 90%.
2. I ran Glenn Berry's Memory DMV:
-- Good basic information about memory amounts and state
SELECT total_physical_memory_kb, available_physical_memory_kb,
total_page_file_kb, available_page_file_kb,
system_memory_state_desc
FROM sys.dm_os_sys_memory OPTION (RECOMPILE);
-- You want to see "Available physical memory is high"
The result was Available physical memory is high.
3. I ran Pinal Dave's DMV:
SELECT dm_ws.wait_duration_ms,
dm_ws.wait_type,
dm_es.status,
dm_t.TEXT,
--dm_qp.query_plan,
--dm_ws.session_ID,
--dm_es.cpu_time,
--dm_es.memory_usage,
--dm_es.logical_reads,
--dm_es.total_elapsed_time,
dm_es.program_name,
DB_NAME(dm_r.database_id) DatabaseName,
-- Optional columns
dm_ws.blocking_session_id--,
--dm_r.wait_resource,
--dm_es.login_name,
--dm_r.command,
--dm_r.last_wait_type
FROM sys.dm_os_waiting_tasks dm_ws
INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
WHERE dm_es.is_user_process = 1
order by wait_duration_ms desc
GO
The results showed an update/insert trigger on a 34-million-row table causing the LCK_M_U and LCK_M_IX waits.
4. I ran Glenn Berry's DMV for Signal Waits:
-- Signal Waits for instance
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2))
AS [%signal (cpu) waits],
CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2))
AS [%resource waits]
FROM sys.dm_os_wait_stats
The result was 2% signal waits
5. DBCC LOGINFO returned 283 rows (VLFs).
On analyzing the logs, it was determined that 10% autogrowth was set for the log.
I changed the log growth from a percentage to a fixed MB value and brought the VLF count down to close to 50.
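For reference, the growth change described above is made per log file, something like this (the database and logical file names are placeholders):

```sql
-- Switch the log file from 10% growth to a fixed 512 MB increment:
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 512MB);
```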
6. I ran Glenn Berry's script for IO bottleneck:
-- Calculates average stalls per read, per write, and per total input/output for each database file.
SELECT DB_NAME(fs.database_id) AS [Database Name], mf.physical_name, io_stall_read_ms, num_of_reads,
CAST(io_stall_read_ms/(1.0 + num_of_reads) AS NUMERIC(10,1)) AS [avg_read_stall_ms],io_stall_write_ms,
num_of_writes,CAST(io_stall_write_ms/(1.0+num_of_writes) AS NUMERIC(10,1)) AS [avg_write_stall_ms],
io_stall_read_ms + io_stall_write_ms AS [io_stalls], num_of_reads + num_of_writes AS [total_io],
CAST((io_stall_read_ms + io_stall_write_ms)/(1.0 + num_of_reads + num_of_writes) AS NUMERIC(10,1))
AS [avg_io_stall_ms]
FROM sys.dm_io_virtual_file_stats(null,null) AS fs
INNER JOIN sys.master_files AS mf
ON fs.database_id = mf.database_id
AND fs.[file_id] = mf.[file_id]
ORDER BY avg_io_stall_ms DESC OPTION (RECOMPILE);
-- Helps you determine which database files on the entire instance have the most I/O bottlenecks
-- This can help you decide whether certain LUNs are overloaded and whether you might
-- want to move some files to a different location
The results are listed in the spreadsheet image below. Database1 is the database causing blocking. I don't know what the numbers in the individual columns mean with regard to an accepted scale of good to bad I/O, but since Database1 is first in the list, I am assuming that Database1 is causing the biggest I/O bottleneck.
Can I infer from the spreadsheet below that the LUN assigned to drive E:\ is overloaded and moving some files off the drive E:\ assigned LUN could potentially ease I/O bottlenecks?
This is how far my knowledge will take me.
I am starting to look into purging rows from the 34-million-row table. However, the purge will be a "future" fix, as the vendor for the app needs to get in the game. Moving the drive with the .mdf files to faster storage (like solid state drives) is not an option, but moving some files off the LUN assigned to E:\ could be an option.
Any guidance is appreciated.
Thanks in advance.
-Jeelani
Hello,
I am using SQL Server 2008 R2 SE installed on Windows Server 2008 R2 SE. I recently installed a third-party tool to monitor the SQL Server, and I am seeing O/S memory utilization in a critical state. The server has 56 GB of installed memory, of which 32 GB is usable. Hence I assigned 28 GB as the maximum memory setting for SQL Server. Now, to get rid of this issue, should I lower the max memory setting assigned to SQL Server to a lower value?
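If lowering it turns out to be the right call, the change itself is just the following (the value is in MB; 24576 here is an illustrative number, not a recommendation):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;  -- illustrative value
RECONFIGURE;
```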
Thanks.
Hi All
If 250 blockings occur on the database, performance generally goes down. I know how to resolve things when fewer blockings occur on the database, so can anyone explain this type of scenario and how we can find the lead blocker?
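A common way to find the lead (head) blocker is to look for sessions that are blocking others but are not themselves blocked; a minimal sketch:

```sql
-- Head blockers: sessions that block someone but are not
-- themselves waiting on another session.
SELECT DISTINCT r.blocking_session_id AS head_blocker
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0
  AND r.blocking_session_id NOT IN (
        SELECT r2.session_id
        FROM sys.dm_exec_requests AS r2
        WHERE r2.blocking_session_id <> 0);
```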
Raveendra
Hi Experts,
I am looking for how to get the maximum number of sessions/connections ever on a SQL Server instance in the last 5 days.
Oracle has a report called AWR to get such information. Is SQL Server equipped with a similar built-in report or method? Please share.
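As far as I know, SQL Server does not keep a built-in AWR-style history of connection counts; the current value can be read from the counters DMV, and a scheduled job would have to snapshot it into a table to build a 5-day history. A sketch of the current-value query:

```sql
-- Current number of user connections on the instance:
SELECT cntr_value AS current_user_connections
FROM sys.dm_os_performance_counters
WHERE counter_name = N'User Connections';
```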
Best Regards
Khalil
Hello,
For the past few days I have been getting several messages like the one below. Any idea what I should do?
FlushCache: cleaned up 6611 bufs with 6381 writes in 90940 ms (avoided 462 new dirty bufs) for db 302:0
average throughput: 0.57 MB/sec, I/O saturation: 6372, context switches 13037
last target outstanding: 260, avgWriteLatency 25
Javier Villegas | @javier_vill | http://sql-javier-villegas.blogspot.com/