Learn, Speak, Learn, Repeat.

I started speaking a year ago, and yesterday's session at SQL Saturday Exeter was my sixth. Looking back on it, I tried to cram way too much into the allocated fifty minutes. I rammed it in, had to deliver the material too fast, got flustered and struggled with some of the questions due to the time pressure. This annoys me, as I feel I didn't do the best that I could. In future the session will be less than half the size, with plenty of room for questions. Learn, speak, learn, repeat.

Thinking back, I reckon I confused Hugo Kornelis the most whilst trying to explain how Always On works with in-memory data. Hugo, I apologise for this. It has annoyed me so much that I felt compelled to write this quick post to clear things up.

Always On AGs and In-Memory OLTP

  • Log records are transferred to the secondary replicas and replayed for all types of tables: disk-based data to disk, in-memory data to memory (so on failover there is no need for a reload of the in-memory data as it is already there and ready to rock)
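
For context, a durable memory-optimized table is declared like any other table, just with a couple of extra options. A minimal sketch (the table name is made up, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup):

-- Durable in-memory table: its log records flow to the AG secondaries like any other table's,
-- so the data is already sitting in memory on the secondary when you fail over
CREATE TABLE dbo.SessionState
(SessionID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
SessionData VARBINARY(8000) NOT NULL)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO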

Oh, and talking about bleeding-edge stuff you're testing but haven't finished yet, such as controlled failovers for in-memory maintenance? Get them sorted and tested first, talk about them second. I learn….

This was where I possibly confused Hugo and every other attendee by talking about “memory-mapping”. Fool. Why did I plump for that phrase? This will NOT be used in future.

I’m going to have a week off thinking of sessions now but when I get back to it they’re going to be smaller and more focused. Less is more. Quality over quantity. I learn, I speak, I learn, I repeat (the speaking, minus the crap bits and with more good bits).

Wondering what deprecated features you’re rocking?

Is yo SQL box a dinosaur? Thinking about pimping the version? No idea where to start? Well forget traces for now and dig on this:

-- any deprecated feature that has been used since the last service restart
SELECT [object_name], [instance_name] AS deprecated_feature, [cntr_value] AS usage_count
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Deprecated Features%' -- LIKE also covers named instances (MSSQL$InstanceName:...)
AND [cntr_value] > 0
ORDER BY [cntr_value] DESC;

Boom! So simple it hurts. It’s your starter for ten.

Now go search and destroy.

Note: This is a STARTER FOR TEN. Only a total fool would rely solely on this as upgrade planning. Check yourself before you wreck yourself. Damn straight.

Up the EF Junction

Let's face it, Entity Framework does not really like tables that lack a single unique numeric primary key. Not the best, I know, but what are you going to do? I have just had to convert a large number of junction tables to work with an EF application, and it was a pain. In short, any table like this:


CREATE TABLE [dbo].[ProductCategory]
(ProductID INT,
CategoryID INT,
CONSTRAINT [PK_ProductCategory] PRIMARY KEY CLUSTERED ([ProductID] ASC, [CategoryID] ASC),
CONSTRAINT [FK_ProductCategory_Product] FOREIGN KEY ([ProductID]) REFERENCES [dbo].[Product] ([ProductID]),
CONSTRAINT [FK_ProductCategory_Groups] FOREIGN KEY ([CategoryID]) REFERENCES [dbo].[Category] ([CategoryID]));
GO

Needs to become like this:


CREATE TABLE [dbo].[ProductCategory]
(ProductCategoryID INT IDENTITY(1,1) NOT NULL,
ProductID INT NOT NULL,
CategoryID INT NOT NULL,
CONSTRAINT [PK_ProductCategory] PRIMARY KEY NONCLUSTERED ([ProductCategoryID] ASC),
CONSTRAINT [FK_ProductCategory_Product] FOREIGN KEY ([ProductID]) REFERENCES [dbo].[Product] ([ProductID]),
CONSTRAINT [FK_ProductCategory_Groups] FOREIGN KEY ([CategoryID]) REFERENCES [dbo].[Category] ([CategoryID]));
GO

CREATE UNIQUE CLUSTERED INDEX [IX_UQ_ProductCategory]
ON [dbo].[ProductCategory] ([ProductID] ASC, [CategoryID] ASC);
GO

Simple, if not entirely sensible. Note the non-clustered PK and the unique clustered index on the previous PK columns (which stay NOT NULL, just as they were when they formed the PK). It does exactly the same thing as before but has an extra column simply for the benefit of EF. With the resurgence of EF and OData this is a handy thing to know, and maybe to design in at the start, should your shop or client be going the OData route.

Laters.

MCM Y’All! Now the hard work really starts….

Well, it's just over ten months ago that I did my first QuickStart MCM session with Robert L Davis (@SQLSoldier) and now, finally, I am a full-blown MCM. I must be honest and say that I have never had to work so hard in my entire life as I have had to do to obtain this certification. I thought I knew everything after working with SQL Server for twelve years, and it was a shock to realise how much I didn't know. Yes, I had done all these things before, but having a working knowledge is not enough for the MCM: you have to REALLY know what you are doing, and be able to do it under pressure with no scripts or Google.
I breezed through the knowledge exam in May last year – well, I say breezed; I didn't actually get a score breakdown from Prometric, so I may have just scraped through – and went straight onto the lab in September full of confidence. Big mistake right there. The lab is a monster. It really is the best test of both what you know and what you can do. It also tests how well you can concentrate under pressure, managing both your time and your priorities for the full 5.5 hours.
Well, I failed. Not badly, but I failed. However, boB's email was very encouraging and I decided to go again as soon as the 90-day period was up.
I know others have said this and I will too: failing was the best thing that ever happened to me. I knuckled back down to study with more confidence than before. I plugged the gaps in my knowledge – the things I thought I knew but, when put under pressure, couldn't actually do. This I did with practice. And more practice. And yet more practice. I retook the lab on the 10th of January and found out on the 29th that I had passed. I don't think I would have felt as good if I'd passed the first time, as I would still be thinking there was some luck involved. Now, however, I feel great. I am fully confident in my abilities and ready to rock.
Becoming an MCM is no easy task but is well worth the effort. The support you get from the SQL community is awesome and the feeling at the end of the process is fantastic. It's hard-core, but who doesn't want to be hard-core? I do. And I am. It's all good baby, all good.
To round off a great week, my Saturday talk for SQLBits in May was also voted for! Get in. I will be blogging about table-based database object factories – what I'm presenting on – in the next few weeks, so stay tuned folks. I've got the bit between my teeth now and have hit top gear. Happy days! More blogging, speaking and the MCSM 2012 upgrade all planned in for this year. No point slowing down now.

Schema Removal: All Objects Must Be Obliterated!

'Sup people. Whilst playing with SQL Azure Data Sync I often get databases rammed full of DataSync schema objects I no longer need; mainly when I'm done with a particular database or when it has issues. Removing this schema and its objects by hand is a ball ache, and besides, I don't do manual.

So I had a quick Google, as you do, since some helpful cat has normally already done what you need to do and blogged it. Big up the SQL Family community spirit. I found an article by Ranjith Kumar S that was my starting point, but there was way too much temp table and cursor stuff going on in there for my liking, and no deletion of user-defined table types. Also, the dropping of tables wasn't Azure-friendly: in Azure you can't drop a clustered PK before dropping its table, as a table without a clustered index is a no-no there. So I tweaked it to use no temp tables or cursors, added some error trapping, and ended up with this:

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[obliterateSchema]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[obliterateSchema]
GO

CREATE PROCEDURE dbo.obliterateSchema
(
	@schemaName varchar(100),
	@debug char(1) = 'Y' -- defaults to debug to avoid accidental deletion
)
AS

BEGIN TRY

	IF (@schemaName = 'dbo')
	BEGIN
		RAISERROR ('Obliteration of the dbo schema is forbidden.', 16, 1)
	END

	DECLARE @removal_sql nvarchar(max) = ''
 
	-- 01: removes all the foreign keys that reference a PK in the target schema
	SET @removal_sql = '-- 01: drop FKs that reference PKs in the removing schema;' + char(10)
	
	SELECT @removal_sql = @removal_sql + 'ALTER TABLE ['+ SCHEMA_NAME(fk.schema_id) 
		+ '].[' + OBJECT_NAME(fk.parent_object_id) + '] DROP CONSTRAINT [' + fk.name + '];' + CHAR(10)
	FROM sys.foreign_keys fk
	INNER JOIN sys.tables t 
		ON t.object_id = fk.referenced_object_id
	WHERE t.schema_id = schema_id(@schemaName) -- referenced table is in the target schema, whichever schema the FK itself lives in
	ORDER BY fk.name DESC
 
	-- 02: remove all FKs and constraints
	SET @removal_sql = @removal_sql + '-- 02: remove all FKs and constraints;' + char(10)
	
	SELECT @removal_sql = @removal_sql + 'ALTER TABLE [' + schema_name(t.schema_id) + '].['
		+ OBJECT_NAME(fk.parent_object_id) + '] DROP CONSTRAINT ['+ fk.[Name] + '];' + char(10)
	FROM sys.objects fk
	 join sys.tables t on t.object_id = fk.parent_object_id
	 where t.schema_id = schema_id(@schemaName)
	and fk.type IN ('D', 'C', 'F') -- defaults, check constraints, and foreign keys
	 
	-- 02.5: if the removing schema is DataSync then drop any dss triggers
	IF (@schemaName = 'DataSync')
	BEGIN
	
		SET @removal_sql = @removal_sql + '-- 02.5: remove all dss triggers;' + char(10)
	
		SELECT @removal_sql = @removal_sql + 'DROP TRIGGER [' + schema_name(so.schema_id) 
			+ '].[' + st.[name] + '];' + char(10)
		FROM sys.triggers st
		inner join sys.objects so
			on st.parent_id = so.object_id
		WHERE st.[name] like '%dss_update_trigger'
			or st.[name] like '%dss_insert_trigger'
			or st.[name] like '%dss_delete_trigger'
	
	END
 
	-- 03: drop all other objects in order
	SET @removal_sql = @removal_sql + '-- 03: remove all PKs, tables, views, triggers, and functions;' + char(10)
	
	SELECT @removal_sql = @removal_sql + 
		CASE 
			WHEN SO.type = 'PK' THEN 'ALTER TABLE [' + SCHEMA_NAME(SO.schema_id) + '].[' + OBJECT_NAME(SO.parent_object_id) + '] DROP CONSTRAINT [' + SO.name + '];' + char(10)
			WHEN SO.type = 'U' THEN 'DROP TABLE [' + SCHEMA_NAME(SO.schema_id) + '].['+ SO.[Name] + '];' + char(10)
			WHEN SO.type = 'V' THEN 'DROP VIEW [' + SCHEMA_NAME(SO.schema_id) + '].[' + SO.[Name] + '];' + char(10)
			WHEN SO.type = 'P' THEN 'DROP PROCEDURE [' + SCHEMA_NAME(SO.schema_id)+ '].[' + SO.[Name] + '];' + char(10)
			WHEN SO.type = 'TR' THEN 'DROP TRIGGER [' + SCHEMA_NAME(SO.schema_id) + '].[' + SO.[Name] + '];' + char(10)
			WHEN SO.type IN ('FN', 'TF','IF','FS','FT') THEN 'DROP FUNCTION [' + SCHEMA_NAME(SO.schema_id) + '].[' + SO.[Name] + '];' + char(10)
		END
	 FROM SYS.OBJECTS SO
	 WHERE SO.schema_id = schema_id(@schemaName)
		AND SO.type IN
			(
			select 'FN' union all
			select 'TF' union all
			select 'IF' union all
			select 'FS' union all
			select 'FT' union all
			select 'TR' union all
			select 'V' union all
			select 'U' union all
			select 'P' union all
			select 'PK' where charindex('Azure', @@VERSION) = 0
			-- no table can exist in Azure without a clustered index, so there the PK goes with its table rather than being dropped first
			)
	 ORDER BY 
		CASE 
			WHEN type = 'PK' THEN 1
			WHEN type in ('FN', 'TF', 'P','IF','FS','FT') THEN 2
			WHEN type = 'TR' THEN 3
			WHEN type = 'V' THEN 4
			WHEN type = 'U' THEN 5
			ELSE 6
		END

	-- 04: drop user-defined types
	SET @removal_sql = @removal_sql + '-- 04: drop user-defined types;' + char(10)
	
	SELECT @removal_sql = @removal_sql + 'DROP TYPE [' + schema_name(schema_id) 
		+ '].['+ [name] + '];' + CHAR(10)
	from sys.types
	where schema_id = schema_id(@schemaName)
	 
	-- if debugging, just print the SQL and bail out
	IF (@debug = 'Y')
	BEGIN
		PRINT @removal_sql
		RETURN
	END 
	
	BEGIN TRAN

	-- actually remove all of the schema's objects
	EXEC (@removal_sql)

	COMMIT TRAN

END TRY

BEGIN CATCH

		DECLARE @ErrorMessage nvarchar(4000)
		DECLARE @ErrorSeverity int
		DECLARE @ErrorState int

		SELECT	@ErrorMessage = ERROR_MESSAGE(), 
				@ErrorSeverity = ERROR_SEVERITY(), 
				@ErrorState = ERROR_STATE();
				
		IF @@TRANCOUNT > 0 ROLLBACK TRAN

		RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState);

END CATCH;
GO

-- exec dbo.obliterateSchema 'DataSync', 'N'

Tidy. Does the job for me and can be run for any schema. Dig that it was created for removing the MS DataSync schema, which only consists of tables, triggers, stored procedures, and user-defined table types (plus the schema itself), so you may need to extend it a bit as there's probably stuff I've missed, but this'll get you rolling.
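
For the cautious (and you should be), the @debug default of 'Y' means you get a dry run for free. Something like this is the workflow I'd suggest:

-- dry run: prints the generated DROP statements but executes nothing (the default)
EXEC dbo.obliterateSchema @schemaName = 'DataSync';
GO
-- happy with the output? go for the kill
EXEC dbo.obliterateSchema @schemaName = 'DataSync', @debug = 'N';
GO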

Word.

Health Monitoring Multiple SQL Azure Databases Part 5: Analysing the Data – Critical Alerts

Amigos, we’ve come a long way. In Part 1 we pimped the Azure databases to return health data XML, in Part 2 we made a control database that knew where these databases were at, in Part 3 we hoovered out the XML data we wanted from them via PowerShell, and in Part 4 we shredded this ready for analysis. Tidy. So how do you go about analysing it? Man, this part’s up to you.

Me? First thing I want to know is when the databases are going to go boom. There ain't no auto-grow in Azure, so if you've gone for a 10 GB database, when it gets there it'll stop. Dead. Silent. Bar the barely audible whistle of your boss's boot sailing towards your ass.
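
(For reference, the percent-used figure in dbhealth.azureDBSize comes from the health XML we set up in Part 1 and shredded in Part 4; on the Azure side the used size boils down to something like the query below. A sketch only, not the exact Part 1 code.)

-- rough space-used check, run against the Azure database itself:
-- reserved pages are 8 KB each, so this gives used space in MB
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_mb
FROM sys.dm_db_partition_stats;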

So we need a view to identify the potential butt killers and fast. This will do the job:

-- db space alert view: > 70% full
CREATE VIEW dbhealth.vCheckSpaceUsed
AS
SELECT	azdbs.azDatabaseID, 
		azdbs.dbMaxSize, 
		azdbs.dbPctUsed,
		'DB_SIZE' as alertType,
		'DB ' +  azd.azDatabaseName + ' on server ' + azs.azServerName + ' is ' + convert(varchar(14), convert(decimal(8,2), azdbs.dbPctUsed)) + '% full' as alertMsg
FROM dbhealth.azureDBSize azdbs
INNER JOIN dbhealth.azureResultSet azrs
	on azdbs.azResultSetID = azrs.azResultSetID
INNER JOIN dbhealth.azureDatabase azd
	ON azdbs.azDatabaseID = azd.azDatabaseID
INNER JOIN dbhealth.azureServer azs
	ON azd.azServerID = azs.azServerID
WHERE azrs.azResultSetDate > DATEADD(hour, -6, getdate()) -- results in the last six hours
	and azdbs.dbPctUsed > convert(decimal(8,6), 70);
GO

I've added a third step to my SQL Agent job: the first step polls and retrieves the data, the second shreds it, and the third runs the critical alert checks. Skills.

(Dig that the view restricts results to the last six hours. This is because my polling runs every six hours, so I'm only checking the latest result set – i.e. the results obtained by step one of my agent job. Don't want to be reporting old records that have since been sorted, hey. You'll need to adjust this to suit your own polling period.)

My job step three executes the following stored procedure (utilising the above view and database mail):

CREATE PROCEDURE [dbhealth].[processCriticalAlerts]
AS
BEGIN TRY

	DECLARE @alertstogo nvarchar(max) = ''

	SELECT @alertstogo = @alertstogo + alertMsg + CHAR(13) + CHAR(10) -- '<br>' for HTML
	FROM dbhealth.vCheckSpaceUsed

	IF (@alertstogo <> '')
	BEGIN
		EXEC msdb.dbo.sp_send_dbmail
			@profile_name = 'Your DB Mail Profile name',
			@recipients = 'a@b.com;c@b.com',
			@copy_recipients = '',
			@body = @alertstogo,
			--@body_format = 'HTML', -- if you want to use this
			@subject = 'ALERT: SQL Azure Database Size',
			@importance = 'High'
	END
	
END TRY

BEGIN CATCH

		declare @ErrorMessage NVARCHAR(4000);
		declare @ErrorSeverity INT;
		declare @ErrorState INT;

		select	@ErrorMessage = ERROR_MESSAGE(), 
				@ErrorSeverity = ERROR_SEVERITY(), 
				@ErrorState = ERROR_STATE();

		raiserror (@ErrorMessage, @ErrorSeverity, @ErrorState);

END CATCH
GO

GRANT EXECUTE ON dbhealth.processCriticalAlerts to HealthCheck;
GO
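
Wiring it in is the usual sp_add_jobstep dance. A sketch, with made-up job and database names – swap in your own:

-- add the critical-alert check as step three of the existing agent job
EXEC msdb.dbo.sp_add_jobstep
	@job_name = N'Azure DB Health Check',         -- hypothetical job name
	@step_name = N'03 - Process critical alerts',
	@subsystem = N'TSQL',
	@database_name = N'DBHealthControl',          -- the control database from Part 2 (name made up)
	@command = N'EXEC dbhealth.processCriticalAlerts;';
GO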

Job done. On each run of the job an email is sent out for all databases that are more than 70% full. Add more views and more alerts to the procedure as needed but keep them critical only – don’t be an info freako here, save this for SSRS. Job step alerts = need attention right now, SSRS reports = longer-term trend analysis.
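
As and when you do add more views, one way to keep the proc tidy is to stack them up with UNION ALL before building the mail body. A sketch – dbhealth.vCheckConnectionFailures is a made-up second view; use whatever you actually build, as long as it exposes an alertMsg column:

	DECLARE @alertstogo nvarchar(max) = ''

	SELECT @alertstogo = @alertstogo + alertMsg + CHAR(13) + CHAR(10)
	FROM (
		SELECT alertMsg FROM dbhealth.vCheckSpaceUsed
		UNION ALL
		SELECT alertMsg FROM dbhealth.vCheckConnectionFailures -- hypothetical second view
	) allAlerts

	-- then hand @alertstogo to sp_send_dbmail exactly as in the proc above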

The next logical step is then to add custom SSRS reports to Management Studio. I will be doing this in the next few weeks and laying it down for you soon, as there’s good stuff to be had here. But hey, don’t wait for me. Add your critical alerts and get SSRS fired up. Automated alerts, shiny graphs and charts – these are review time gold baby. So go get analysing. You know it makes sense.

As soon as my SSRS Management Studio reports are grooving you’ll be hearing from me. Till then peace people.

Word.