Tag: SQL

Estimate SQL db Restore Time

If you’ve ever had to restore a large SharePoint or other SQL Server database, here’s a handy little bit of T-SQL for guesstimating how long it’s going to take. Run it while one or more restores are in progress. Note that TimeRemainingMin won’t report a meaningful number until the restore actually picks up speed and gets going (past 0% in the GUI dialog):

    SELECT  d.PERCENT_COMPLETE AS [%Complete],
        d.TOTAL_ELAPSED_TIME/60000 AS ElapsedTimeMin,
        d.ESTIMATED_COMPLETION_TIME/60000 AS TimeRemainingMin,
        d.TOTAL_ELAPSED_TIME/3600000.0 AS ElapsedTimeHours,          -- 3,600,000 ms per hour
        d.ESTIMATED_COMPLETION_TIME/3600000.0 AS TimeRemainingHours,
        s.text AS Command
    FROM    sys.dm_exec_requests d
    CROSS APPLY sys.dm_exec_sql_text(d.sql_handle) AS s
    WHERE   d.COMMAND LIKE '%RESTORE%'  -- remove this filter to see all active requests
    ORDER   BY 2 DESC, 3 DESC
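As a sanity check on those unit conversions: sys.dm_exec_requests reports elapsed and estimated times in milliseconds, so minutes are ms/60,000 and hours are ms/3,600,000. The same arithmetic in a quick Python sketch:

```python
def ms_breakdown(ms):
    """Convert a millisecond duration, as reported by
    sys.dm_exec_requests, into minutes and hours."""
    return {"minutes": ms / 60_000, "hours": ms / 3_600_000}

# A restore reporting 5,400,000 ms remaining is 90 minutes, i.e. 1.5 hours.
print(ms_breakdown(5_400_000))  # -> {'minutes': 90.0, 'hours': 1.5}
```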



Disable and Cap SharePoint 2013 Usage and Health Data Collection

Our sysadmin was having his mind blown by the size of our SharePoint Usage Data database (17 GB). The on-page guidance even recommends against keeping all that data logging unless you really need it (i.e. you are actively monitoring it):

Usage Data Collection

Usage data collection will log events whenever various events occur in your SharePoint deployment. Usage Logging enables analysis and reporting, but also uses system resources and can impact performance and disk usage.

Event Selection

Logging enables analysis and reporting, but also uses system resources and can impact performance and disk usage. Only log those events for which you want regular reports.
For sporadic reports or investigations, consider turning on logging for specific events and then disabling logging for these events after the report or investigation is complete.

You can find out about Usage and Health Data Collection here: http://technet.microsoft.com/en-us/library/ee663480.aspx
So, we disable this service until such time as we need to monitor this stuff:

To configure usage and health data collection by using Central Administration:

  1. Verify that you have the following administrative credentials:

    The user account that performs this procedure has to be a member of the Farm Administrators group.

  2. In Central Administration, on the home page, click Monitoring.
  3. On the Monitoring page, in the Reporting section, click Configure usage and health data collection.
  4. On the Configure usage and health data collection page, in the Usage Data Collection section, select the Enable usage data collection check box.
  5. In the Event Selection section, select the check boxes of the events that you want to log.

    Logging uses system resources and can affect performance and disk usage. Only log those events for which you want regular reports.

    For impromptu reports or investigations, enable logging for events, and then disable logging for the events after the report or investigation is complete. For more information, see Configure usage data collection for events by using Windows PowerShell.

  6. In the Usage Data Collection Settings section, type the path of the folder to which you want usage and health information to be written in the Log file location box. The path that you specify must exist on each server in the farm.

    These settings are applied to all events.

  7. In the Health Data Collection section, select the Enable health data collection check box. To change the collection schedules, click Health Logging Schedule. You can see a list of timer jobs that collect health data. Click any of the timer jobs to change its schedule, or disable that timer job. If you disable a timer job, it stops collecting corresponding health data. For more information, see Timer job reference (SharePoint 2013).
  8. To change log collection schedules, click Log Collection Schedule, and then click any of the timer jobs to change its schedule, or disable that timer job. If you disable a timer job, it stops collecting corresponding log data.
  9. In the Logging Database Server section, to change the authentication method, select either the Windows authentication or SQL authentication option.

    To change the Database Server and Database Name values, you must use Windows PowerShell. For more information, see Log usage data in a different logging database by using Windows PowerShell.


And, just in case we decide to turn this type of logging back on in the future, we’re going to use SharePoint PowerShell to cap the overall log size at a leaner 5 GB max:
Set-SPUsageService -UsageLogMaxSpaceGB 5
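On the disabling side, the same service can be toggled from PowerShell as well. A quick sketch using the standard SharePoint 2013 usage service cmdlets (verify against your own farm before relying on it):

```powershell
# Switch usage data logging off entirely until it's needed again
Set-SPUsageService -LoggingEnabled:$false

# Inspect the current usage service settings (including the max space cap)
Get-SPUsageService
```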

Version History in SharePoint via SQL

Recently I posted about how to get check-in comments with Nintex via MS SQL – it turns out there was a bit more complexity to the structure of the version history than first thought (surprise, surprise). Below is the stored procedure I created to reliably extract the highest MAJOR version of a SharePoint document. So, if a document is currently at v5.4 in your SharePoint library, this will grab the 5.0 version:

USE [MySharePoint_Content_DB]
GO
/****** Object:  StoredProcedure [dbo].[proc_GetDocVersion]    Script Date: 02/17/2012 13:37:36 ******/
ALTER PROCEDURE [dbo].[proc_GetDocVersion](
    @LeafName nvarchar(260)
)
AS
SELECT TOP 1 x.UIVersion
FROM (
    -- Historical versions live in AllDocVersions...
    SELECT ((CONVERT([nvarchar],AllDocVersions.UIVersion/(512),0)+'.')
          + CONVERT([nvarchar],AllDocVersions.UIVersion%(512),0)) AS UIVersion
    FROM AllDocVersions
    JOIN AllDocs ON AllDocs.[ID] = AllDocVersions.[ID]
    WHERE AllDocs.LeafName = @LeafName
    AND ((CONVERT([nvarchar],AllDocVersions.UIVersion/(512),0)+'.')+ CONVERT([nvarchar],AllDocVersions.UIVersion%(512),0)) LIKE '%.0'
    UNION
    -- ...while the current version lives in AllDocs itself
    SELECT ((CONVERT([nvarchar],AllDocs.UIVersion/(512),0)+'.')
          + CONVERT([nvarchar],AllDocs.UIVersion%(512),0)) AS UIVersion
    FROM AllDocs
    WHERE AllDocs.LeafName = @LeafName
    AND ((CONVERT([nvarchar],AllDocs.UIVersion/(512),0)+'.')+ CONVERT([nvarchar],AllDocs.UIVersion%(512),0)) LIKE '%.0'
) x
-- Note: this orders the version labels as strings, which is fine up to v9.x;
-- libraries with 10+ major versions would need a numeric sort instead.
ORDER BY UIVersion DESC;
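The /512 and %512 arithmetic works because SharePoint packs the whole version label into the single UIVersion integer as major × 512 + minor. A quick Python sketch of the encoding, using a hypothetical v5.4 document:

```python
def encode_ui_version(major, minor):
    """Pack a version label into the integer SharePoint stores in UIVersion."""
    return major * 512 + minor

def decode_ui_version(ui_version):
    """Unpack a UIVersion integer back into a 'major.minor' label."""
    return f"{ui_version // 512}.{ui_version % 512}"

raw = encode_ui_version(5, 4)         # v5.4 is stored as 2564
print(raw, decode_ui_version(raw))    # -> 2564 5.4
print(decode_ui_version(encode_ui_version(5, 0)))  # the 5.0 major version -> 5.0
```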

Accessing the SQL DB in SharePoint 2010 directly (as opposed to using the SharePoint APIs, etc.) is generally considered a cowboy maneuver and can get you into lots of trouble with inconsistent results as well as performance hits. Use this SQL at your own risk – if nothing else, as a means to better understand the plumbing that goes on in the basement of SharePoint.

Additionally, note that if you are accessing version history via the /vti_history// method, there are some major caveats as described in the following (note it’s referring to SP 2007 which uses single-digit version numbers but the description of the potential run-on situation still applies): http://blogs.msdn.com/b/roberthorvick/archive/2007/01/04/wss-rant-linking-to-the-latest-version-of-a-sharepoint-document-considered-harmful-lessons-6-and-7.aspx

Get Check-in Comments with Nintex

NOTE: I’ve posted a revised version of this type of method for getting version history. The SQL in this post is not to be trusted! (but the rest is still good so I’ll leave the post as-is for posterity).

I needed to grab the document check-in comment inside of a Nintex 2010 / SharePoint 2010 list workflow. I couldn’t seem to locate the field anywhere, and was told the following by Nintex support:

“It is not possible to reference the version history of a document within a workflow similar to how you can reference standard SharePoint fields. Even the web service method GetVersionCollection: http://msdn.microsoft.com/en-us/library/lists.lists.getversioncollection(v=office.12).aspx only returns the version history for a specific field. There may be another web service Microsoft exposes to enable this however this would be a question directly for MS.”

Oh well, right? Never say die! Use the Execute SQL Query widget in Nintex and let’s do some data mining:

1. Add an Execute SQL Query action to wherever in your Nintex workflow you need to access the Check-in comment.

2. Create a variable to store the check-in comment, in this case I call it vCheckInComment.

3. Add your SQL connection string. Since Windows Authentication is most likely what you’ll be dealing with when using SharePoint, you will need the DB service account credentials.

4. Create your SQL query. For getting the check-in comments it is simple: in your SharePoint content database there is a table called AllDocs which contains two columns of interest: LeafName and CheckInComment. You can also grab the ListId GUID there if needed – in this example I am already inside a Query List / For Each loop which has filtered the records down to the current list I am working with. The LeafName column stores the document name, e.g. MyDocument.doc, and the CheckInComment column stores the comment for that file. In this example the {WorkflowVariable:vDocumentName} Nintex variable holds my document name:
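The query described in step 4 boils down to a single-table lookup. Here’s a minimal sketch of its shape, using an in-memory SQLite table as a stand-in for the content database’s AllDocs table (the real thing is T-SQL against SQL Server, with the Nintex variable substituted in place of the parameter; the sample row is hypothetical):

```python
import sqlite3

# Stand-in for the SharePoint content DB's AllDocs table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AllDocs (LeafName TEXT, CheckInComment TEXT)")
conn.execute("INSERT INTO AllDocs VALUES ('MyDocument.doc', 'Fixed the TPS report')")

# Same shape as the query in the Nintex Execute SQL action;
# {WorkflowVariable:vDocumentName} takes the place of the parameter there.
doc_name = "MyDocument.doc"
comment = conn.execute(
    "SELECT CheckInComment FROM AllDocs WHERE LeafName = ?", (doc_name,)
).fetchone()[0]
print(comment)  # -> Fixed the TPS report
```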

And that’s that – the check-in comment associated with that document’s record ends up in the vCheckInComment variable.

Disclaimer: there’s much more to the DB structure than just the document name and check-in comment – versioning isn’t this simple. Also, running ad-hoc SQL queries against SharePoint is generally not best practice and you will get chewed out by MVPs. Reasons why ad-hoc SQL is bad news (see http://www.sharepoint4arabs.com/AymanElHattab/Lists/Posts/Post.aspx?ID=99 for a good summary):

  1. This is completely unsupported by the EULA you agreed to when you installed SharePoint.
  2. Your queries are not guaranteed to work after applying any patches or service packs to SharePoint since Microsoft could change the database schema anytime.
  3. Directly querying the database can place extra load on a server and hence cause performance issues.
  4. Direct SELECT statements against the database take shared read locks at the default transaction level so your custom queries might cause deadlocks and hence stability issues.
  5. Your custom queries might lead to incorrect data being retrieved.

Definitely run your SQL queries with NOLOCK and keep to simple, read-only queries if you’re going to pull stunts like running SQL from Nintex as described in this post. Also, carefully consider the versioning and data-commit issues inherent in the SharePoint architecture before treating any of the results you pull out as gospel.
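For example, the AllDocs lookup from above with the hint applied (a sketch only – same caveats as everything else in this post):

```sql
SELECT LeafName, CheckInComment
FROM AllDocs WITH (NOLOCK)   -- read without taking shared locks
WHERE LeafName = 'MyDocument.doc';
```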