SharePoint Code Analysis Framework 5 released

I’m not one to make a blatant product plug, but I really like the SharePoint Code Analysis Framework (SPCAF) tool, and version 5 ups the ante on how effective it is for running QA on SharePoint code, including tests for SharePoint Apps.

What is the SPCAF tool?
A desktop program that evaluates SharePoint code, solutions, features, Apps, etc. It checks all XML, HTML, ASPX, CSS, JavaScript and assembly code against quality policies, calculates metrics, generates dependency graphs and builds an inventory report.

Features list here:

Grab it from:

WARNING: This tool can reveal flaws big and small in your SharePoint Solutions. Prepare for some soul-crushing issues to be discovered in code that you previously considered beautiful. Hearts may be broken, but remember the phoenix always rises out of the ashes. SharePoint is hard, Dev is hard, SharePoint + Dev = well.. a grey hair or two should be expected. Tools like SPCAF help keep us in line with best practices.

You may find that some of the problems reported by SPCAF don’t meet your operational engineering minimum standards for rectifying. In particular, HTML, CSS & JS validation is such a patchwork of standards and recommendations that automated validation reports need to be taken with a grain of salt.

Overall, I’ve found SPCAF to be a worthwhile exercise to run on any major chunk of new or updated SharePoint code.

New Features in v5

  • Analyzers
    • New analyzer for JavaScript code applies ca. 150 rules to .JS files in WSPs and Apps
    • New analyzer for SharePoint Apps with lots of rules, metrics, dependencies and inventory checks
  • Client application
    • Completely new client application to improve usability and functionality
    • New client application “Result Viewer” (separate download) to view analysis results without license
    • New settings editor application
  • Migration Assessment
    • New analyzers and reports to analyze WSPs and give recommendations for a transition to the App model
    • Free limited version available
  • Reporting
    • New format of HTML reports with filters, charts, sorting, grouping and more
    • Extensibility with custom reports and report sections
    • New reporting engine based on Razor to ease the creation of custom reports
    • New report type PDF

Can’t wait to try SPCAF? Get your trial now or update the SharePoint Code Analysis Framework already installed on your machine.

New Client Application

The new SPCAF client makes your code analysis even easier.

Just drop your WSP or App Packages in the center of the application and start the analysis or access your recent analysis results directly from the start screen.

Learn more

SPCAF Client

Better Analysis Dashboard

The new analysis dashboard shows you a 360° overview of Code Quality, Metrics, Dependencies and Inventory.

From there you can access the individual reports and download Word, PDF, XML or CSV reports to share them with team members.

Learn more

SPCAF Analysis Dashboard

New dynamic reports

The new reports have a simple overview dashboard showing the key findings in a graphical presentation.

You can filter, sort and re-arrange the results and dig down deep into source code to find out what is inside your solution or app.

Learn more

SharePoint Code Quality Report


  • Code Quality Analysis: HTML, DOCX, PDF, XML, CSV
  • Code Dependency Analysis: HTML, DOCX, PDF, XML, CSV, DGML
  • Code Migration Assessment: HTML, DOCX, PDF, XML

New SharePoint Code Migration Assessment Report

Full trust customizations are the main risk and cost driver for migrations to a newer SharePoint version or to Office 365. Without knowing what has been customized you cannot manage the transformation or elimination of custom code.

The new SharePoint Code Migration Assessment provides deep insight into your customizations and allows better effort estimations and risk mitigation.

Learn more

SharePoint Migration Assessment Report

New Analyzers for Apps and JavaScript

With JavaScript and Apps becoming the only future-proof way of customizing SharePoint both on-premises and in the cloud, many seasoned SharePoint developers now face a paradigm shift that requires them to adjust their skills.

With the new App and JavaScript analyzers, which already contain over 170 rules in this first release, developers can assure their code quality with SPCAF just as they are used to doing for full-trust code.

Learn more

Documentation of JavaScript Rules

Try it!

Would you like to see these new features in action?

Get a trial and start getting your SharePoint Code under control!

T-SQL to get SQL Transaction Log Sizes

When dealing with the slew of SharePoint MS SQL databases found in a typical install, it’s a bit of a time suck to check transaction log sizes manually. I put together the following T-SQL script to quickly show which transaction logs in a particular SQL instance are above 299 MB in size. Adjust that threshold to your tastes.

-- Capture the output of DBCC SQLPERF(LOGSPACE) into a table variable
-- so it can be filtered and sorted
declare @LogSpace table (
	DatabaseName varchar(255),
	[Log Size (MB)] float,
	[Log Space Used (%)] float,
	[Status] int
)
insert into @LogSpace
execute('dbcc sqlperf(''LogSpace'')')
select * from @LogSpace
where [Log Size (MB)] > 299
order by [Log Size (MB)] desc
--order by [Log Space Used (%)] desc

Delete a SharePoint 2010 service application database for a service application that was previously removed/deleted

In this post I will explain how to delete a SharePoint 2010 service application database for a service application that was previously removed/deleted.

When you remove your service application, you may see on the Central Administration > Manage Databases Upgrade Status page that the database is still visible even though it a) is no longer in use and/or b) has been deleted from SSMS.

You may notice in the event log:

SQL Database ‘db_name’ on SQL Server instance ‘sql_instance’ not found. Additional error information from SQL Server is included below.

Cannot open database “db_name” requested by the login. The login failed.
Login failed for user ‘login’.

To overcome this error and remove the old DB reference, fill your zombie DB name into the following PowerShell and execute from the SharePoint PowerShell interface:

Get-SPDatabase | Where-Object {$_.Name -eq 'db_name'} | ForEach-Object {$_.Delete()}

This will remove the DB from SharePoint’s frame of reference and clear up any related error messages. Don’t forget – if the physical DB is still in SQL Server, you will need to go into SSMS and archive/delete it as ye may so desire.

SharePoint & SQL Server – itgroove Blog Roundup

Database Maintenance for Microsoft SharePoint 2010 Products

Routine database maintenance is essential for the smooth operation of Microsoft® SharePoint® 2010 databases. This white paper describes the database maintenance tasks supported for SharePoint 2010.

The recommended maintenance tasks for SharePoint 2010 databases include:
• Checking database integrity.
• Defragmenting indexes by either reorganizing them or rebuilding them.
• Setting the fill factor for a server.

Note: This article discusses database maintenance, not capacity or performance planning. For information about capacity planning, see Storage and SQL Server capacity planning and configuration (SharePoint Server 2010).

Although previous versions of SharePoint Products and Technologies required manual intervention to perform index defragmentation and statistics maintenance, SharePoint 2010 automates this process for its databases. This is accomplished by several SharePoint Health Analyzer rules. These rules evaluate the health of database indexes and statistics daily, and will automatically address these items for these databases:

• Configuration Databases
• Content Databases
• User Profile Service Application Profile Databases
• User Profile Service Application Social Databases
• Web Analytics Service Application Reporting Databases
• Web Analytics Service Application Staging Databases
• Word Automation Services Databases

Database maintenance tasks can be also performed by either executing Transact-SQL commands, or running the Database Maintenance Wizard. This whitepaper will initially present the Transact-SQL commands that you can use, and then explain how to create database maintenance plans by using the Microsoft SQL Server Database Maintenance Wizard.

Note: For the T-SQL approach I generally prefer Michelle Ufford’s SQLFool Defrag Script.
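The recommended maintenance tasks above can be sketched in T-SQL roughly as follows. This is a hedged example, not the whitepaper's exact commands; the database name (WSS_Content) and table name are hypothetical placeholders for your own environment:

```sql
-- Integrity check on a hypothetical content database
DBCC CHECKDB ('WSS_Content');

-- Defragment indexes on an example table: reorganize (light-touch, online)
ALTER INDEX ALL ON dbo.AllDocs REORGANIZE;

-- ...or rebuild (heavier, but resets fragmentation completely)
ALTER INDEX ALL ON dbo.AllDocs REBUILD;
```

In practice, choose reorganize vs. rebuild based on the measured fragmentation level rather than running both.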

Download Database Maintenance SharePoint 2010

Version History in SharePoint via SQL

Recently I posted about how to get check-in comments with Nintex via MS SQL – turns out there was a bit more complexity involved in the structure of the version history than first thought (surprise, surprise). Below is the stored procedure created to reliably extract the highest MAJOR version of a SharePoint document. So, if a document is currently v5.4 in your SharePoint library, this will grab the 5.0 version:

USE [MySharePoint_Content_DB]
GO
/****** Object:  StoredProcedure [dbo].[proc_GetDocVersion]    Script Date: 02/17/2012 13:37:36 ******/
ALTER PROCEDURE [dbo].[proc_GetDocVersion](
	@LeafName nvarchar(260)
)
AS
-- UIVersion packs major.minor as (major * 512) + minor, so major versions
-- are those that render as '%.0' below. Historical versions live in
-- AllDocVersions; the current version lives in AllDocs.
SELECT TOP 1 x.UIVersion
FROM (
	SELECT AllDocVersions.UIVersion AS UIVersion
	FROM AllDocVersions
	JOIN AllDocs ON AllDocs.[ID] = AllDocVersions.[ID]
	WHERE AllDocs.LeafName = @LeafName
	AND ((CONVERT([nvarchar],AllDocVersions.UIVersion/(512),0)+'.')+ CONVERT([nvarchar],AllDocVersions.UIVersion%(512),0)) LIKE '%.0'
	UNION
	SELECT AllDocs.UIVersion AS UIVersion
	FROM AllDocs
	WHERE AllDocs.LeafName = @LeafName
	AND ((CONVERT([nvarchar],AllDocs.UIVersion/(512),0)+'.')+ CONVERT([nvarchar],AllDocs.UIVersion%(512),0)) LIKE '%.0'
) x
ORDER BY x.UIVersion DESC;
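Once the procedure is in place, calling it looks like this (the document name here is purely illustrative):

```sql
-- Hypothetical usage: return the highest major version of Budget.xlsx
EXEC [dbo].[proc_GetDocVersion] @LeafName = N'Budget.xlsx';
```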

Accessing the SQL DB in SharePoint 2010 directly (as opposed to using the SharePoint APIs etc.) is generally considered a cowboy maneuver and can get you in lots of trouble, with inconsistent results as well as performance hits. Use this SQL at your own risk, if only as a means to better understand the plumbing that goes on in the basement of SharePoint.

Additionally, note that if you are accessing version history via the /vti_history// method, there are some major caveats as described in the following (note it’s referring to SP 2007, which uses single-digit version numbers, but the description of the potential run-on situation still applies):

SharePoint SQL Server Performance Tuning Roundup

On my mission to take what I know about SQL Server performance tuning and expand it into the world of SharePoint, I’ve come across many docs and blog posts that, while helpful for symptomatic isolation, miss promoting a basic healthy lifestyle for the poor SQL servers that get hammered by SharePoint’s DB-centric usage profile.

Frequently you read about how to mitigate aggravating factors in SharePoint’s relationship with SQL Server, but it’s harder to find info that addresses the root causes that lead to the “problems” in the first place. “Get faster disks” or “buy more RAM” is a response to an architecture issue akin to telling the Dutch boy to grow a bigger finger in response to a widening dam leak. Let’s look at what causes SQL issues to overflow in the first place.

To troubleshoot performance issues, you must complete a series of steps to isolate and determine the cause of the problem. Possible causes include:

  • Blocking
  • System resource contention
  • Application design problems
  • Queries or stored procedures that have long execution times

Apply Filegroups for the Search DBs

The whole goal of using filegroups is to improve the performance of the system. This is done by providing an additional file, which must be placed on a different set of spindles to see any kind of performance enhancement. If your SQL machine is not I/O-bound for the Search database, then implementing filegroups will not provide you with any benefit.
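The filegroup approach can be sketched like this. Treat it as a hedged example only: the database name, filegroup name, file path and size are all hypothetical, and the .ndf must actually live on separate spindles from the primary data file to matter:

```sql
-- Add a second filegroup to a hypothetical Search database...
ALTER DATABASE [Search_Service_CrawlDB] ADD FILEGROUP [SearchFG2];

-- ...and place its data file on a different physical disk array
ALTER DATABASE [Search_Service_CrawlDB]
ADD FILE (
    NAME = SearchData2,
    FILENAME = 'E:\SQLData\SearchData2.ndf',  -- separate spindles from the .mdf
    SIZE = 10GB
) TO FILEGROUP [SearchFG2];
```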

Configure Blob Cache in the SharePoint web.config

The BLOB cache is disk-based caching that increases browser performance and reduces database load. When you open a web page for the first time, the files are copied from the database to a cache on the SharePoint server’s hard drive, and all subsequent requests for that site are served from the local disk cache instead of issuing a resource-intensive request to the SQL Server database. To turn it on, edit the web application’s web.config and set the BlobCache element’s “enabled” attribute to “true”. It is strongly recommended to store the cache on a dedicated partition that isn’t part of the operating system (the C: partition is not recommended).

Manage Index Fragmentation

As data is modified in a system, pages can split, and data can become fragmented or physically scattered on the hard disk. Contrary to popular belief, Microsoft SQL Server is not a self-healing system. Use the DBCC SHOWCONTIG command to see the density and the degree of fragmentation of an index on a table. The SQL Fool Index Defrag Script is a great tool for dealing with SQL fragmentation.
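A quick fragmentation check with the command named above might look like this (the table name is illustrative; note that DBCC SHOWCONTIG is deprecated in newer SQL Server releases in favour of the sys.dm_db_index_physical_stats DMV):

```sql
-- Report scan density and fragmentation for an example table's indexes
DBCC SHOWCONTIG ('dbo.AllDocs') WITH FAST;
```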

Locate Logs and the Tempdb Database on Separate Devices from the Data

You can improve performance by locating your database logs and the tempdb database on physical disk arrays or devices that are separate from the main data device. Because data modifications are written to the log and to the database, and to the tempdb database if temp tables are used, having three different locations on different disk controllers provides significant benefits.
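Relocating tempdb to its own device is a one-time T-SQL change along these lines. The drive letters and paths below are hypothetical, and the logical file names (tempdev, templog) are the SQL Server defaults; the move takes effect only after the SQL Server service restarts:

```sql
-- Move tempdb data and log to separate physical devices (example paths)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempLog\templog.ldf');
-- Restart the SQL Server service for the new locations to take effect
```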

Provide Separate Devices for Heavily Accessed Tables and Indexes

If you have an I/O bottleneck on specific tables or indexes, try putting the tables or indexes in their own file group on a separate physical disk array or device to alleviate the performance bottleneck.

Pre-Grow Databases and Logs to Avoid Automatic Growth and Fragmentation Performance Impact

If you have enabled automatic growth, ensure that you are using the proper automatic growth option. You can grow database size by percent or by fixed size. Avoid frequent changes to the database sizes. If you are importing large amounts of data that tend to be of a fixed size on a weekly basis, grow the database by a fixed size to accommodate the new data.
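Pre-growing a database and switching autogrowth from a percentage to a fixed size is a single statement. A hedged sketch, with a hypothetical content database name and sizes you would tune to your own data volumes:

```sql
-- Pre-size a hypothetical content DB and use fixed-size growth increments
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = WSS_Content, SIZE = 50GB, FILEGROWTH = 1GB);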

When an index is created or rebuilt, the fill factor value determines the percentage of space on each leaf-level page to be filled with data, thereby reserving a percentage of free space for future growth. Based on past performance and index expansion rates, the SharePoint Operations team recommends setting the database fill factor to 70 percent on all content databases.
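Applying that 70 percent fill factor can be done per index rebuild or as a server-wide default. A sketch, with an illustrative table name (setting the server default requires the advanced options flag):

```sql
-- Rebuild an example table's indexes with 70% fill factor
ALTER INDEX ALL ON dbo.AllUserData REBUILD WITH (FILLFACTOR = 70);

-- Or set the server-wide default fill factor
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'fill factor (%)', 70; RECONFIGURE;
```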

Maximize Available Memory

Use performance counters to decide the amount of memory that you need. Some performance counters that you can use to measure your need for memory are listed below:

  • The SQLServer:Buffer Manager:Buffer cache hit ratio counter indicates that data is retrieved from memory cache. The number should be around 90. A lower value indicates that SQL Server requires more memory.
  • The Memory:Available Bytes counter shows the amount of RAM that is available. Low memory availability is a problem if the counter shows that 10 megabytes (MB) of memory or less is available.
  • The SQLServer:Buffer Manager: Free pages counter should not have a sustained value of 4 or less for more than two seconds. When there are no free pages in the buffer pool, the memory requirements of your SQL Server may have become so intense that the lazy writer or the check pointing process is unable to keep up. Typical signs of buffer pool pressure are a higher than normal number of lazy writes per second or a higher number of checkpoint pages per second as SQL Server attempts to empty the procedure and the data cache to get enough free memory to service the incoming query plan executions. This is an effective detection mechanism that indicates that your procedure or data cache is starved for memory. Either increase the RAM that is allocated to SQL Server, or locate the large number of hashes or sorts that may be occurring.
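Besides Performance Monitor, the SQL Server counters listed above can also be read from inside SQL Server itself via a DMV. A sketch, assuming a default instance (named instances expose a different object_name prefix):

```sql
-- Read buffer manager counters without leaving Management Studio
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio', 'Free pages');
```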

Install the latest BIOS, storage area network (SAN) drivers, network adapter firmware and network adapter drivers

Hardware manufacturers regularly release BIOS, firmware, and driver updates that can improve performance and availability for the associated hardware. Visit the hardware manufacturer’s Web site to download and apply updates for the following hardware components on each computer in the SharePoint environment:

  • BIOS updates
  • SAN drivers (if using a SAN)
  • NIC firmware
  • NIC driver

Disable hyper-threading

Hyper-threading should be turned off for SQL Server computers because applications that can cause high levels of contention (such as SharePoint) may cause decreased performance in a hyper-threaded environment on a SQL Server computer.

Defragment all disks on a regular basis

Excessive disk fragmentation on the SQL Server will negatively affect performance. Defragment all disks (local and SAN/NAS) on a regular basis by scheduling off-hours disk defragmentation. Defragment the Windows PageFile and pre-allocate the Master File Tables of each disk in the SharePoint environment to boost overall system performance.
Use the PageDefrag Utility to defragment the Windows PageFile and pre-allocate the Master File Tables.

Synchronize Time on All Servers

Many operations involving tickets, receipts and logging rely on the local system clock being accurate. This is especially true in a distributed environment, where time discrepancies between systems may cause logs to be out of sync or tickets issued by one system to be rejected by another as expired or not yet valid.

For more information on configuring a server to automatically synchronize time, see Configure a client computer for automatic domain time synchronization.

Disable real-time scanning of data and transaction files

Real-time scanning of the SQL Server data and transaction files (.mdf, .ndf, .ldf, .mdb) can increase disk I/O contention and reduce SQL Server performance.

Review disk controller stripe size and volume allocation units

When configuring drive arrays and logical drives within your hardware drive controller, ensure you match the controller stripe size with the allocation unit size that the volumes will be formatted with. This will ensure disk read and write performance is optimal and offer better overall server performance. Configuring larger allocation unit (or cluster or block) sizes will cause disk space to be used less efficiently, but will also provide higher disk I/O performance as the disk head can read in more data during each read activity.
To determine the optimal setting to configure the controller and format the disks with, you should determine the average disk transfer size on the disk subsystem of a server with similar file system characteristics. Use the Windows Performance Monitor tool to monitor the Logical Disk object counters of Avg. Disk Bytes/Read and Avg. Disk Bytes/Write over a period of normal activity to help determine the best value to use.

Although smaller allocation unit sizes may be warranted if the system will be accessing many small files or records, an allocation unit size of 64 KB delivers sound performance and I/O throughput under most circumstances. Improvements in performance with tuned allocation unit sizes can be particularly noted when disk load increases.

Monitor drive space utilization

The less data a disk has on it, the faster it will operate. This is because on a well-defragmented drive, data is written as close to the outer edge of the disk as possible, as this is where the disk spins the fastest and yields the best performance.

Disk seek time is normally considerably longer than read or write activities. As noted above, data is initially written to the outside edge of a disk. As demand for disk storage increases and free space reduces, data is written closer to the center of the disk. Disk seek time is increased in locating the data as the head moves away from the edge, and when found, it takes longer to read, hindering disk I/O performance.

This means that monitoring disk space utilization is important not just for capacity reasons but for performance also.
As a rule of thumb, work towards a goal of keeping disk free space between 20% and 25% of total disk space. If free disk space drops below this threshold, disk I/O performance will be negatively impacted.