Server Side Paging – MSSQL 2008

SQL

Introduction

Paging is practically obligatory when dealing with large lists of online data. The problem is, most paging solutions out there (at least those I’ve seen) perform this vital function on the client side – either through dangerous dynamic SQL or even worse – pulling the entire record set down and disseminating pages for the user afterward.

You wouldn’t (I hope) trust the client to put data into your tables, so why trust it to filter and page? That’s what your database engine was specifically designed to do! Server side paging gives you a couple of big advantages:

  • Security – See above. I would never, and will never trust client generated SQL. You’re just asking for it.
  • Scaling – Client side record processing may seem faster at first because of the instant response. Then your record count passes the VERY modest six digit mark, and suddenly you’re looking for ways to mitigate ten minute load times.

The only real downsides to server side paging are reloading and the complexity of initial setup. The former can be dealt with using AJAX or a similar solution. The latter is where I come in. The following stored procedure completely encapsulates paging, independently of any other data extraction.

Implementation

Execute within a data procedure after all other record processing is complete on the primary data table. Assumes primary data is in temp table #cache_primary. Pass the following arguments:

  • page_current – Current record page to view as requested by control code.
  • page_rows (optional, uses default value if NULL) – Number of rows (records) to output per page.

 

Outputs the following record set for use by control code:

Column Type Description
row_count_total int Total number of rows in the paged record set.
page_rows int Maximum number of rows per page. Will be the same as the maximum rows argument passed from control code unless that argument was NULL, in which case this will reflect the default maximum rows.
page_last int Last page number / total number of pages.
row_first int Exclusive lower row-number bound of the requested page (the page's first row is row_first + 1).
row_last int Exclusive upper row-number bound of the requested page.
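Before diving into the T-SQL, here is the paging math in miniature. This is an illustrative Python sketch (not part of the procedure) of how the output values are derived from the page_current and page_rows arguments:

```python
# Illustrative sketch of the stored procedure's paging math.
# Mirrors the T-SQL bounds: row_first and row_last are exclusive.
import math

def paging_info(row_count_total, page_current, page_rows):
    # Stand-in defaults, matching the procedure's argument checks.
    if page_current is None or page_current < 1:
        page_current = 1
    if page_rows is None or page_rows < 1:
        page_rows = 10

    # Last page number; floor of 1 so an empty set still reports page 1.
    page_last = max(1, math.ceil(row_count_total / page_rows))

    # Exclusive bounds: page 2 at 10 rows/page selects rows 11..20.
    row_first = (page_current - 1) * page_rows
    row_last = (page_current * page_rows) + 1

    return {"row_count_total": row_count_total,
            "page_rows": page_rows,
            "page_last": page_last,
            "row_first": row_first,
            "row_last": row_last}
```

For example, paging_info(95, 2, 10) reports page_last 10 with bounds 10 and 21, so a WHERE clause keeping rows strictly between the bounds returns rows 11 through 20.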

 

SQL

-- Master Paging
-- Caskey, Damon V.
-- 2016-07-08
--
-- Output recordset in divided pages. Also creates and outputs
-- a recordset of paging data for control code. Execute in another
-- stored procedure after all other record work (filters, sorting, joins, etc.)
-- is complete. Make sure the final temp table name is #cache_primary.

-- Set standard ISO behavior for handling NULL 
-- comparisons and quotations.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[master_paging]
    
    -- Parameters. 
        @param_page_current    int            = 1,    -- Current page of records to display.
        @param_page_rows    smallint    = 25    -- (optional) max number of records to display in a page.
            
AS
BEGIN
    
    -- If non paged layout is requested (current = -1), then just
    -- get all records and exit the procedure immediately.
        IF @param_page_current = -1
            BEGIN
                SELECT *
                    FROM #cache_primary

                RETURN
            END 

    -- Verify arguments from control code. If something
    -- goes out of bounds we'll use stand in values. This
    -- also lets the paging "jumpstart" itself without
    -- needing input from the control code.
                
        -- Current page default.
        IF    @param_page_current IS NULL OR @param_page_current < 1
            SET @param_page_current = 1
            
        -- Rows per page default.
        IF    @param_page_rows IS NULL OR @param_page_rows < 1
            SET @param_page_rows = 10

    -- Declare the working variables we'll need.
        DECLARE @row_count_total    int,    -- Total row count of primary table.
                @page_last          float,  -- Number of the last page of records.
                @row_first          int,    -- Row ID of first record.
                @row_last           int     -- Row ID of last record.

    -- Set up a temp table so we can reuse results.
        CREATE TABLE #cache_paging
        (
            id_row      int,
            id_paging   int
        )

    -- Populate paging cache. This is to add an
    -- ordered row number column we can use to
    -- do paging math.
        INSERT INTO #cache_paging (id_row, id_paging)
            (SELECT ROW_NUMBER() OVER(ORDER BY @@rowcount) AS id_row,
                    id
                FROM #cache_primary _main)

    -- Get total count of records.
        SET @row_count_total = (SELECT COUNT(id_row) FROM #cache_paging);

    -- Get paging first and last row limits. Example: If current page
    -- is 2 and 10 records are allowed per page, the first row should
    -- be 11 and the last row 20.
        SET @row_first = (@param_page_current - 1) * @param_page_rows
        SET @row_last = (@param_page_current * @param_page_rows + 1);

    -- Get last page number.
        SET @page_last = (SELECT CEILING(CAST(@row_count_total AS FLOAT) / CAST(@param_page_rows AS FLOAT)))

        IF @page_last = 0
            SET @page_last = 1

    -- Extract paged rows from the paging temp table, join to the
    -- main data table where IDs match and output as a recordset.
    -- This gives us a paged set of records from the main
    -- data table.
        SELECT TOP (@row_last-1) *
            FROM #cache_paging _paging
                JOIN #cache_primary _primary ON _paging.id_paging = _primary.id
            WHERE id_row > @row_first
                AND id_row < @row_last
                    
            ORDER BY id_row    
                
    -- Output the paging data as a recordset for use by control code.
                
        SELECT    @row_count_total    AS row_count_total,
                @param_page_rows        AS page_rows,
                @page_last            AS page_last,
                @row_first            AS row_first,
                @row_last            AS row_last
            
        
END

 

 

Master Table Design

SQL

Introduction

Master table layout and creation. The master table controls all data tables via a one to one relationship and carries audit info.

 

Layout

 

Column Type Description
id int (Auto Increment) Primary key for master table. All data tables must include an ID field (NOT auto increment) linked to this field via one to one relationship.
id_group int Version linking ID. Multiple entries share an identical group ID to identify them as a single record with multiple versions. If no previous versions of a new record exist, then this column is seeded from ID field after initial creation.
active bit If TRUE, marks this entry as the active version of a record. Much faster than a date lookup and necessary for soft delete.
create_by int Account creating this version. -1 = Unknown.
create_host varchar(50) Host (usually IP provided by control code) creating entry.
create_time datetime2 Time this entry was created.
create_etime Computed column Elapsed time in seconds since entry was created.
update_by int Same as create_by, but updated on every CRUD operation.
update_host varchar(50) Same as create_host, but updated on every CRUD operation.
update_time datetime2 Same as create_time, but updated on every CRUD operation.
update_etime Computed column Elapsed time in seconds since entry was last updated.

 

Set Up

CREATE TABLE [dbo].[_a_tbl_master](
	id				int IDENTITY(1,1)	NOT NULL,						-- Primary unique key.
	id_group		int					NULL,							-- Primary record key. All versions of a given record will share a single group ID.
	active			bit					NOT NULL,						-- Is this the active version of a record? TRUE = Yes.
		-- Audit info for creating version. A new 
		-- version is created on any CRUD operation 
		-- in the data tables controlled by master.
	create_by		int					NOT NULL,						-- Account creating this version. -1 = Unknown.
	create_host		varchar(50)			NOT NULL,						-- Host (usually IP from control code) creating version. 
	create_time		datetime2			NOT NULL,						-- Time this version was created. 
	create_etime	AS (datediff(second, [create_time], getdate())),	-- Elapsed time in seconds since creation.	
		-- Audit information for updating version.
		-- When any CRUD is performed on a data
		-- table, the previously active version
		-- is marked inactive. Deleting a record
		-- simply marks all versions inactive.
		-- In short, the only updates made to 
		-- a master table are toggling Active
		-- flag.
	update_by		int					NOT NULL,						-- Account updating this version. -1 = Unknown.
	update_host		varchar(50)			NOT NULL,
	update_time		datetime2			NOT NULL,
	update_etime	AS (datediff(second, update_time, getdate())),
 
CONSTRAINT PK__a_tbl_master PRIMARY KEY CLUSTERED 
(
	id ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_active		DEFAULT ((1))			FOR active
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_create_by		DEFAULT ((-1))			FOR create_by
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_create_host	DEFAULT (host_name())	FOR create_host
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_created		DEFAULT (getdate())		FOR create_time
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_update_host	DEFAULT (host_name())	FOR update_host
GO

 

Encapsulated XML ID List Parsing – In Progress

SQL

Introduction

As part of my ongoing MSSQL Versioning Project, it is often necessary to pass a list of records to stored procedures, either from the controlling application or from a calling procedure. By use of standardized table design, you will normally only need to pass a list of primary keys. The procedure can query for any other information it needs from that point forward, making it fully self contained for maximum encapsulation.

For the list itself there are several options I know of, but only one is fully viable for my own needs. Your mileage may vary of course:

  1. Delimited List: On the surface this is the simplest of means. Just slap together a comma delimited string (“x, y, z”), break it down at the database console and off you go. If only it were actually that simple. Even if it were, there’s not a chance on Earth I’m doing that, and neither should you. This breaks First Normal Form, something you never want to do for reasons well beyond the scope of this article. If you are curious, several (but by no means all) of the pitfalls of this approach are explained quite nicely here.
  2. Table Valued Parameters: TVPs are extremely powerful when used correctly and have their place, but for purposes of acting as lists or caches in a stored procedure, they have two serious drawbacks.
    1. TVPs are assumed by the query optimizer to contain a single row. This is of little to no consequence if only a few records are involved, but it can be disastrous once a threshold of ~100 is reached. Queries with execution times normally in the millisecond range may suddenly balloon into resource hogs requiring several minutes to complete.
    2. It’s rather unusual, but our environment is WIMP (Windows, IIS, MSSQL, PHP). Connectivity is provided by the sqlsrv driver, with an object oriented wrapper penned by yours truly. Unfortunately at time of writing, this otherwise excellent driver set does not support TVP.
  3. Common Language Runtime: Lots of fun to be had here, but like TVPs, it depends on a very specific environment. Otherwise it simply isn’t applicable. Even when it is, the realities of business often mean taking advantage of CLR adds layers of extra management and time wasted on service requests for the simplest of modifications. No thank you.
  4. XML: This is my method of choice. It’s reasonably fast, only giving up speed to CLR and TVP, and eventually surpassing the latter as the number of records increases. It’s also T-SQL compliant, and thus quite portable. The downside is there’s more of a learning curve and you’ll want to design carefully to avoid huge strings. Let’s have a closer look at how…

Considerations

  • Efficiency: We want the XML itself and the parser to be as compact and fast as possible.
  • Scaling: The design should be solid and not break down under a heavy load.
  • Reuse: We need to encapsulate and standardize our code. It won’t do much good if every query or procedure requires an inline rewrite of the XML parsing.

Implementation

There are three basic scenarios where we will need to parse a list of IDs via XML.

Procedure A executes Procedure B, Sending List of IDs

This will be a common occurrence – in my MSSQL versioning design every procedure that updates data must first update the Master Table. Should sub data be involved, then multiple updates to Master table must first take place – one for each row of sub data updated. First Procedure A will establish the list of records to update as a temp table, as in the following example:

id created update_by update_ip active
1 2016-12-28 115 128.163.237.37 False
2 2016-12-28 115 128.163.237.37 False
3 2016-12-28 115 128.163.237.37 False

Once the table is ready, this query is run against it:

SET @xml_string_var = (SELECT id FROM <temp table of items to update> FOR XML RAW, ROOT)

The variable @xml_string_var will be populated with an XML string as follows. Note <root> and row: these are default outputs that we could change by modifying our SQL above, but I prefer to leave them be. Since this little bit of SQL will be in nearly every data update procedure, let’s keep it as simple and reusable as possible.

<root>
<row id="1" />
<row id="2" />
<row id="3" />
</root>

We can now execute Procedure B passing @xml_string_var as an XML string argument.

Procedure B Receives XML From Procedure A

Upon execution, Procedure B will need to break the XML back down into a table. Rather than Procedure B breaking the XML down inline, let’s outsource the work. We could do this with a stored procedure, but the moment we executed a procedure that in turn executed our XML parser, we would run smack into an irritating limitation of nested executions. For those unfamiliar, MSSQL 2008 and below simply do not allow an INSERT EXEC statement to be nested. Any attempt to do so will produce the following error:

Msg 8164, Level 16, State 1, Procedure <procedure_name>, Line <line> An INSERT EXEC statement cannot be nested.

In short, encapsulation as a stored procedure just won’t work. That really just leaves user defined functions. I personally loathe them for a lot of different reasons. They appeal to the programmer in me, but in SQL tend to cause more trouble than they’re worth. Still, if we want to encapsulate the XML parsing (and we DO), a table valued function is the best way to go. We’ll call it tvf_get_id_list:

-- tvf_get_id_list
-- Caskey, Damon V.
-- 2017-01-25
-- Returns recordset of IDs from xml list
-- 
-- <root>
--		<row id="INT" />
--		... 
-- </root>
			

CREATE FUNCTION tvf_get_id_list (@param_id_list xml)
RETURNS TABLE AS
RETURN (SELECT x.y.value('.','int') AS id	
			FROM @param_id_list.nodes('root/row/@id') AS x(y))

Procedure B will call tvf_get_id_list, passing along the XML. The tvf_get_id_list will break the XML down and produce a record-set of IDs, which we can then insert into temp table:

id
1
2
3
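For readers who want to sanity check the format outside the database, the same breakdown tvf_get_id_list performs can be sketched in Python with the standard library (the function name here is purely illustrative):

```python
# Hedged sketch: extract the id attribute of every <row> under <root>,
# mirroring what tvf_get_id_list does in T-SQL.
import xml.etree.ElementTree as ET

def get_id_list(xml_string):
    root = ET.fromstring(xml_string)
    return [int(row.attrib["id"]) for row in root.findall("row")]

ids = get_id_list('<root><row id="1" /><row id="2" /><row id="3" /></root>')
# ids == [1, 2, 3]
```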

Procedure B will now have access to a record set of IDs that it can use to perform whatever work we need done.

As you can see, the XML parsing work is fairly simple – we specifically planned the XML markup for easy breakdown. Even so, encapsulating the XML work in a separate function gives us a couple of advantages over just pasting the XML parsing code inline.

  • Obviously we will use the fastest and best scaled technique for breaking down the XML (see here for examples), but should even better techniques be developed, we only need to modify this one function.
  • Procedure B and any other procedures that we send our XML list to are simpler and more compact. They need only call tvf_get_id_list to break down the XML list into a usable record set.

Procedure Called By Control Code

This is more or less identical to procedures executing each other, except the procedures are being called by application code. In this case, it is the application’s responsibility to send XML formatted as above. The simplicity of XML makes this rather easy, and the parsing code can be made part of a class file.

foreach($this->id as $key => $id)
{	
					
	if($id == NULL) $id = DB_DEFAULTS::NEW_ID;
							
	$result .= '<row id="'.$id.'" />';									
}
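The same list building can be sketched in Python with the standard library; the NEW_ID constant here is a hypothetical stand-in for DB_DEFAULTS::NEW_ID:

```python
# Illustrative counterpart to the PHP loop above: build the <root>/<row>
# XML list the stored procedures expect.
import xml.etree.ElementTree as ET

NEW_ID = -1  # Assumed stand-in for DB_DEFAULTS::NEW_ID.

def build_id_list(ids):
    root = ET.Element("root")
    for record_id in ids:
        # NULL (None) IDs fall back to the "new record" marker.
        if record_id is None:
            record_id = NEW_ID
        ET.SubElement(root, "row", id=str(record_id))
    return ET.tostring(root, encoding="unicode")
```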

MSSQL Relational Record Versioning – In Progress

SQL

Versioning Notes (In Progress)

Master Table Update

The master table controls all data tables in the database, including sub tables containing one to many relational data (ex. One Person -> Many Phone Numbers). This means our master update procedure must be able to handle multiple record updates at once and be modular enough to be executed by another update procedure – both for updating that procedure’s target data table AND any related sub tables. Otherwise the whole encapsulation concept falls apart.

  1. Populate a temp table with list of update IDs. These are the record IDs that we want modified (or created as the case may be). Ultimately we will be inserting these as new records with new IDs no matter what, but we’ll need to perform versioning if these IDs already exist in the master table.
    ID
    1
    2
    3
  2. Find any records in Master Table that match the Update List, and mark them as inactive. They will be replaced with new inserts.
    UPDATE
    			_a_tbl_master
    		SET
    			active = 0
    		FROM
    			#master_update_source _new
    		WHERE 
    			_a_tbl_master.id = _new.id;
  3. Prepare a list of inserts consisting of records where update list and master table IDs match, AND unmatched items in the Update List. The combined list is used to populate a temp table. This is also where we acquire the group ID for existing records. The group ID will be applied to new inserts (versions) of the existing records.
    INSERT INTO 
    			#master_update_inserts (id, id_group, update_by)
    		SELECT 
    			_current.id, _current.id_group, @update_by
    		FROM #master_update_source _source  
    			LEFT JOIN _a_tbl_master _current ON _source.id = _current.id
    
  4. Apply list of inserts to the Master Table. Use OUTPUT clause to populate a temp table with a list of IDs for each insert.
    INSERT INTO 
    			_a_tbl_master (id_group, update_by)
    		OUTPUT 
    			INSERTED.ID 
    				INTO #master_update_new_id
    		SELECT 
    			id_group, update_by 
    		FROM 
    			#master_update_inserts
  5. Using the list of IDs created when records were inserted to Master Table, we run an UPDATE against Master Table on the list of newly created IDs, where the id_group field is empty. This is to seed new records (not new versions of existing records) with a group ID.
  6. Master Table is now populated. New records will have a group ID identical to their ID, while existing records will have a new ID, but retain their previous group ID.
    ID id_group created update_by update_ip active
    2844 2884 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2845 2845 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2846 2846 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2989 2844 2016-12-28 22:42:14.7930000 115 128.163.237.37 True
    2990 2845 2016-12-28 22:42:14.7930000 115 128.163.237.37 True
    2991 2846 2016-12-28 22:42:14.7930000 115 128.163.237.37 True
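The steps above can be condensed into a small Python sketch, with an in-memory list of dicts standing in for _a_tbl_master (names and structure here are illustrative only):

```python
# Miniature model of the master table update: retire matching versions,
# insert new rows carrying the old group ID forward, then seed group IDs
# for brand-new records.
def master_update(master, update_ids):
    next_id = max((row["id"] for row in master), default=0) + 1
    by_id = {row["id"]: row for row in master}
    new_ids = []

    for uid in update_ids:
        existing = by_id.get(uid)
        if existing:
            existing["active"] = False  # Step 2: mark old version inactive.
        # Step 3: carry the group ID forward for existing records.
        group = existing["id_group"] if existing else None
        master.append({"id": next_id, "id_group": group, "active": True})
        new_ids.append(next_id)  # Step 4: OUTPUT of inserted IDs.
        next_id += 1

    # Step 5: new records (no prior version) get id_group seeded from id.
    for row in master:
        if row["id_group"] is None:
            row["id_group"] = row["id"]

    return new_ids
```

Updating record 1 plus one brand-new record yields a retired old version, a new version sharing group 1, and a new record whose group ID equals its own ID – the same pattern as the table in step 6.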

Full procedure (in progress)

-- Caskey, Damon V.
-- 2016-12-20
--
-- Update master table. Must be run before
-- any data table controlled by master is
-- updated. Outputs record set containing
-- IDs for the updated master that a
-- calling data update procedure will need.


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[_a_master_update]
	
	-- Parameters
	@arg_id					int				= NULL,		-- Primary key. 			
	@arg_update_by			int				= NULL,		-- ID from account table.
	@arg_update_ip			varchar(50)		= NULL		-- User IP, supplied from application.
			
AS
BEGIN

	-- Let's create the temp tables we'll need.
		
		-- List of update requests. All we
		-- need are IDs. The rest is handled
		-- by parameters or generated by
		-- default data binds in the master
		-- table.
		CREATE TABLE #master_update_source
		(
			id			int
		)

		-- Prepared list of items that
		-- will be inserted into the master
		-- table.
		CREATE TABLE #master_update_inserts
		(
			id			int,
			id_group	int
		)

		-- List of new item IDs created when
		-- inserts are perform on master
		-- table.
		CREATE TABLE #master_update_new_id
		(
			id			int
		)
		

	-- Populate update source (for experiment).
	INSERT INTO #master_update_source (id)
	VALUES (-1), (-1), (2844), (2845), (2846)


	-- Find any records that match our 
	-- update list and mark them as inactive.
		UPDATE
			_a_tbl_master
		SET
			active = 0
		FROM
			#master_update_source _new
		WHERE 
			_a_tbl_master.id = _new.id;

	-- Prepare inserts. Here we are adding inserts for new
	-- records AND for records that already exist. We do the
	-- later so we can get the current group ID and pass it on. 
		INSERT INTO 
			#master_update_inserts (id, id_group)
		SELECT 
			_current.id, _current.id_group
		FROM #master_update_source _source  
			LEFT JOIN _a_tbl_master _current ON _source.id = _current.id

	-- Apply the insert list (insert into master table). New
	-- IDs created by the database are output into
	-- a temp table.
		INSERT INTO 
			_a_tbl_master (id_group, update_by, update_ip)
		OUTPUT 
			INSERTED.ID 
				INTO #master_update_new_id
		SELECT 
			id_group, @arg_update_by, @arg_update_ip 
		FROM 
			#master_update_inserts

	-- For new records, seed the group ID with
	-- new record's ID.
		UPDATE
			_a_tbl_master
		SET
			id_group = _new.id
		FROM
			#master_update_new_id _new
		WHERE 
			_a_tbl_master.id = _new.id AND _a_tbl_master.id_group IS NULL;

END

Quick Tip – Copy Table With Transact SQL

SQL

Here’s an easy one, but sometimes still troublesome for individuals testing the waters of SQL: how to quickly copy a table. There are lots of cases where you need to duplicate a table in all but name. Maybe you want the data as well, maybe not. Unfortunately this seemingly simple feature is missing from most development tools – including the big boys like MS SQL Server Management Studio.

The good news is there’s a perfectly logical reason such a feature isn’t apparent: It’s part of SQL already, and super simple to boot. Here’s what to do:

SELECT * INTO schema.new_table FROM schema.existing_table

Execute this as a query and voila! You now have a duplicate of the old table, complete with data. Want to make a copy sans data? Just query for a non existent key value (the classic idiom is a predicate like WHERE 1 = 0).

SELECT * INTO schema.new_table FROM schema.existing_table WHERE some_field = non_existing_value

It’s really that simple. No plug ins or complex write ups required.
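The same idea translates directly to other engines. Here is a sketch using Python’s sqlite3 module, where SQLite’s equivalent of SELECT ... INTO is CREATE TABLE ... AS SELECT:

```python
# Copy a table with and without data: the SQLite flavor of the tip above.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE existing_table (id INTEGER, name TEXT)")
con.executemany("INSERT INTO existing_table VALUES (?, ?)",
                [(1, "a"), (2, "b")])

# Full copy, data included.
con.execute("CREATE TABLE new_table AS SELECT * FROM existing_table")

# Structure-only copy: a predicate that can never match.
con.execute("CREATE TABLE empty_copy AS "
            "SELECT * FROM existing_table WHERE 1 = 0")
```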

Extra Credit

  • You needn’t copy every field. Swap out * with a list of desired fields, just like any other query.
  • Play with the WHERE clause a bit and you can copy portions of data rather than all or nothing. Again, to your SQL engine it’s just another query, so use your imagination.

 

PHP – PIM (PHP IIS MSSQL) Database Class

PHP

Please Note: This class has long been replaced with a truly object oriented solution that utilizes the sqlsrv driver. I am only leaving it here for posterity (and to embarrass myself). Use at your own peril 🙂

 

This PHP class file is designed to take care of day to day database operations for the rare but not unheard of combination of PHP with IIS and MSSQL.

I’m not a fan of reinventing the wheel, but after extensive searching I was unable to find any existing classes that suited the needs of a WIMP combo. Not much of a surprise considering the majority of PHP use falls under the LAMP model, but it is certainly frustrating.

I hope anyone in a similar situation can find this of use. The goals are simple and straight forward:

  • Reduce repetitive code and increase portability. This is an obvious goal – reusable, portable code not only saves time, but cuts down on human error (typos, misplaced values, etc.).
  • Move to an object oriented model. While a function based model can accomplish many of the goals listed here, an object oriented model carries many other advantages, the discussion of which is beyond the scope of this article.
  • Quick implementation. To reduce clutter in pages and functions utilizing the class, class functions should accomplish common tasks with as few calls as possible. To accomplish this goal, functions are created as basic building blocks with a cascading operation order in mind. This allows creation of master functions which can in turn run all of the basic building blocks for a common task with a single call.
  • Block SQL injection. This is achieved by designing for and stressing the use of parameters. Used properly, this effectively stops SQL injection attempts cold.
  • Robust error trapping. In the event of an error, as much information as possible should be collected and sent to webmaster. Additionally, an attempt should be made to log collected information into another remote database for easy dissemination.

Elements

Dependencies

Global Constants

The following values are used from global constants list.

class constants
{
	/*
	constants
	Damon Vaughn Caskey
	2012_12_18

	Global constants.
	*/

	/* Basic values */
	const	c_cAdmin		= "";										//Default "all access" user accounts.
	const	c_cDateF		= "Y-m-d H:i:s";							//Default date format.
	const	c_cWMName		= "Damon V. Caskey";						//Default webmaster name.
	const	c_cWMEmailIn	= "";										//Default webmaster email address (sending mail to webmaster)
	const	c_cWMEmailOut	= "";										//Default address when server sends mail.
	const	c_cVFGuid		= "00000000-0000-0000-0000-000000000000";	//False guid.

	/* Database */
	const 	c_cDBHost		= "";										//Default DB Host.
	const 	c_cDBLName		= "";										//Default DB logical name.
	const 	c_cDBUser		= "";										//Default DB user.
	const 	c_cDBPword		= "";										//Default DB password.
	const 	c_cDBEHost		= "";										//Error log DB host.
	const 	c_cDBELName		= "";										//Error log DB logical name.
	const 	c_cDBEUser		= "";										//Error log DB user.
	const 	c_cEDBPword		= "";										//Error log DB password.
	const 	c_cDBEEmailSub	= "Database Failure";						//Error email subject line.
	const 	c_iDBFetchType	= SQLSRV_FETCH_BOTH;						//Default row array fetch type.

	/* Media & Icons */
	const	c_cIconExcel	= "";										//MS Excel icon.
	const	c_cIconPDF		= "";										//PDF icon.
	const	c_cIconPPT		= "";										//MS Powerpoint icon.
	const	c_cIconWord		= "";										//MS Word icon.
	const	c_cPathImage	= "";										//Default path for image files.
}
  • c_cDateF: This is the default format for dates. If you reconfigure this value, make sure it is one that MSSQL date fields will accept; they can get fairly picky.
  • c_cWMEmailIn: This is the webmaster email address. It is where any alerts will be sent, so make sure to include working addresses that won’t flag an alert as spam.
  • c_cWMEmailOut: Address used in the “from” tag of mails sent to webmaster.
  • c_cDBHost: The connection string or IP of your primary database host (or just the one you use most often).
  • c_cDBLName: Your most often used logical database name on your primary database host.
  • c_cDBUser: The user account name to access your primary database host.
  • c_cDBPword: The user account password to access your primary database host.
  • c_cDBEHost: The connection string or IP of on alternate database host to record errors.
  • c_cDBELName: Logical database name where error log table is located on error database host.
  • c_cDBEUser: The user account name to access your error database host.
  • c_cEDBPword: The user account password to access your error database host.
  • c_cDBEEmailSub: The e-mail subject line of alert mails sent when an error occurs.
  • c_iDBFetchType: The row key fetch type. This determines how the line row array can be accessed. See here for more information.

Next, you will need to initialize the database class in whatever script you would like to use it in.

Variables

			
	public	$cDBLine		= NULL;									//Line array.
	public 	$cDBFMeta		= NULL;									//Line metadata array.
	public 	$iDBFCount		= NULL;									//Line field count.
	public	$iDBRowCount	= NULL;									//Table row count.
	public	$iDBRowsAffect	= NULL;									//Number of rows affected by an action query.
	public 	$rDBConn 		= NULL;									//Database connection ID resource.
	public 	$rDBResult		= NULL;									//Database query result ID resource.
	public 	$rDBStatement	= NULL;									//Database prepared query statement ID resource.		

The following public variables are made available upon initialization. These are set by the various function calls and will be the main means of actually extracting values returned from your database.

  • cDBLine: An array of the fields (columns) found in a given table row.
  • cDBFMeta: An array of metadata. This contains information on the table fields themselves (size, name, type, etc.).
  • iDBFCount: Simple count of the number of fields returned from a query.
  • iDBRowCount: How many rows were returned by a select query.
  • iDBRowsAffect: How many rows were affected by an action query.
  • rDBConn: ID of the connection to a database host.
  • rDBResult: ID of the data returned from a query.
  • rDBStatement: ID of a prepared query (SQL string and parameters).

Functions

db_basic_action

function db_basic_action($cQuery, $cParams, $cHost = constants::c_cDBHost, $cDB = constants::c_cDBLName, $cUser = constants::c_cDBUser, $cPword = constants::c_cDBPword)
	{
		/*
		db_basic_action
		Damon Vaughn Caskey
		2012_11_13
		
		Connect and execute an action query with single call.
		
		$cQuery: 	SQL string.
		$cParams:	Parameter array.
		$cHost:		DB Host.
		$cDB:		DB logical name.
		$cUser:		DB user.	
		$cPword:	DB password.
		*/
				
		/* Connect to DB */
		$this->db_connect($cHost, $cDB, $cUser, $cPword);
		
		/* Execute query. */
		$this->db_query($cQuery, $cParams);		
		
		/* Set rows affected. */
		$this->iDBRowsAffect = sqlsrv_rows_affected($this->rDBResult);		
		
		return $this->iDBRowsAffect;
	}

This function is designed to run simple action queries with a single call. There is no need to run a separate call to connect, prepare, and query execute. Simply pass the query, parameters and (if necessary) connection values.

Sets and returns iDBRowsAffect.

db_basic_select

function db_basic_select($cQuery, $cParams, $iLine=FALSE, $cHost = constants::c_cDBHost, $cDB = constants::c_cDBLName, $cUser = constants::c_cDBUser, $cPword = constants::c_cDBPword)
	{
		/*
		db_basic_select
		Damon Vaughn Caskey
		2012_11_13
		
		Connect, query and populate common variables from database with a single call.
		
		$cQuery: 	SQL string.
		$cParams:	Parameter array.
		$iLine:		Populate line array with first row?
		$cHost:		DB Host.
		$cDB:		DB logical name.
		$cUser:		DB user.	
		$cPword:	DB password.
		*/
				
		/* Connect to DB */
		$this->db_connect($cHost, $cDB, $cUser, $cPword);
		
		/* Execute query. */
		$this->db_query($cQuery, $cParams);
		
		/* Get row count */
		$this->db_count();
		
		/* Get field count. */
		$this->db_field_count();
		
		/* Get metadata. */
		$this->db_field_metadata();	
		
		if($iLine===TRUE)
		{
			$this->db_line();			
		}
		
		return $this->cDBLine;
	}

This function is designed to run simple select queries with a single call. There is no need to run a separate call to connect, prepare, and query execute. Simply pass the query, parameters and (if necessary) connection values.

Note the iLine parameter. If this is passed as TRUE, cDBLine will be populated with the first row from your query result. This option is very useful for times when you expect and only need a single row result; one call to this function is all you need. However, you will want to be careful with multiple rows as the row cursor will now be at the next record. This means a subsequent call to db_line() will start at the second record, NOT the first.

Returns cDBLine.
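A usage sketch (assuming $oDB is an instance of the wrapper class and tbl_user is a hypothetical table):

```php
/* Hypothetical usage sketch: fetch a single expected row in one call. */
$cQuery  = "SELECT id, name FROM tbl_user WHERE id = ?";
$cParams = array(42);

/* Passing TRUE for $iLine loads the first row into cDBLine and returns it. */
$cLine = $oDB->db_basic_select($cQuery, $cParams, TRUE);

if($cLine)
{
	echo $cLine['name'];
}
```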

db_close

function db_close()
	{
		/*
		db_close
		Damon Vaughn Caskey
		2012_11_13
		
		Close current connection. Normally not needed.
		*/
		
		/* Close DB connection. */
		sqlsrv_close($this->rDBConn);
		
		/* Return TRUE. */
		return TRUE;
	}

Closes connection with rDBConn.

db_connect

function db_connect($cHost = constants::c_cDBHost, $cDB = constants::c_cDBLName, $cUser = constants::c_cDBUser, $cPword = constants::c_cDBPword)
	{		 
		/*
		db_connect
		Damon Vaughn Caskey
		2012_11_13
		
		Connect to database host and store reference to public variable.
		
		$cHost:		DB Host.
		$cDB:		DB logical name.
		$cUser:		DB user.	
		$cPword:	DB password.		
		*/
			
		$db_cred = NULL; //Credentials array.
	
		/* Only bother connecting to DB host if no previous connection is established. */
		if(!$this->rDBConn)
		{	
			/* Set up credential array. */
			$db_cred = array("Database"=>$cDB, "UID"=>$cUser, "PWD"=>$cPword);
									
			/* Establish database connection. */
			$this->rDBConn = sqlsrv_connect($cHost, $db_cred);		
		}
		
		/* FALSE returned. Database connection has failed. */
		if($this->rDBConn === FALSE)
		{
			/* Stop script and log error. */
			$this->db_error("db_connect", NULL, NULL, TRUE);	
		}	
	}

Opens a connection to database host; defaults to primary host and database using connection pooling.

db_count

function db_count()
	{
		/*
		db_count
		Damon Vaughn Caskey
		2012_11_13
		
		Return number of records from query result.
		*/
		
		/* Get row count. */
		$this->iDBRowCount = sqlsrv_num_rows($this->rDBResult);	
		
		/* Error trapping. */
		$this->db_error("db_count");	
		
		/* Return count. */
		return $this->iDBRowCount;
	}

Sets and returns iDBRowCount with the record count from the last query (rDBResult).

db_error

function db_error($cLocation = NULL, $cSql = NULL, $cParams = NULL, $bFatal = FALSE)
	{
		
		/*
		db_error
		Damon Vaughn Caskey
		2012_06_08
		
		Wrapper for sqlsrv_errors(). Record any errors into an alternate database and send email to webmaster.
		
		$cLocation:	Code location where error trap was called. Aids in debugging.
		$cSql:		SQL string passed at time of error.
		$cParams:	Parameter array passed at time of error.
		$bFatal:	If error triggers, should process be terminated?
		*/
		
		$cMBody			= NULL;	//Mail message.
		$aErrors		= NULL;	//Errors list array.
		$aError			= NULL;	//Error output array.
		$mysqli			= NULL;	//Connection reference to DB error log.
		$query			= NULL; //Error query string.
		$stmt			= NULL;	//Prepared query reference.
		$val			= NULL;	//Array of error values.
		$cParam			= NULL;	//Individual item from parameter array.
		
		/* Get error collection */
		$aErrors = sqlsrv_errors();
		
		/* Any errors found? */
		if($aErrors)					
		{	
			/* Connect to error log database (obviously this should be an alternate DB from the one that failed).
			Note the mysqli constructor argument order: host, user, password, database. */	
			$mysqli = new mysqli(constants::c_cDBEHost, constants::c_cDBEUser, constants::c_cEDBPword, constants::c_cDBELName);
			
			/* Loop through error collection. */
			foreach($aErrors as $aError)
			{
				/*
				Ignore these codes; they are informational only:
					0:		Cursor type changed.
					5701:	Changed database context.
					5703:	Changed language setting.
				*/
				if($aError['code'] != 0 && $aError['code'] != 5701 && $aError['code'] != 5703)
				{
					
					$val[0] = $cLocation;				//Function & line location. Manually set in execution code to aid debugging; not part of error routine.
					$val[1] = $aError['SQLSTATE'];		//Status of DB host.
					$val[2] = $aError['code'];			//Error code.
					$val[3] = $aError['message'];		//Error message.
					$val[4] = $cSql;					//SQL string (if any) attempting to be executed.
					$val[5] = NULL;						//Parameter array.
					$val[6] = $_SERVER["PHP_SELF"];		//Calling PHP file.
					$val[7] = $_SERVER['REMOTE_ADDR'];	//Client IP address.
					
					/* Dump parameter array into single string. */
					if(isset($cParams))								//Parameter array passed?
					{
						foreach($cParams as $cParam)				//Loop array collection.
						{
							$val[5] .= $cParam .", ";				//Add to error parameter string.
						}
					}
					
					/* Build mail body string */
					$cMBody .= 
					"\n Code Location: ".	$val[0]. 
					"\n SQLSTATE: ".		$val[1].
					"\n Code: ".			$val[2].
					"\n Message: ".			$val[3].
					"\n SQL String: ".		$val[4].
					"\n Parameters: ".		$val[5].
					"\n File: ".			$val[6].
					"\n Client IP: ".		$val[7].	
					"\n";
					
					/* If the error log database connection was successful, insert each error to table. */
					if (!$mysqli->connect_error) 
					{	
						/* Build query string. */		
						$query = "INSERT INTO tbl_query_errors (codel, state, code, msg, query, params, source, ip) VALUES (?,?,?,?,?,?,?,?)";
						$stmt = $mysqli->prepare($query);
						
						/* Only bind and execute if the statement prepared successfully. */
						if($stmt != false)
						{							
							/* Bind parameters. */
							$stmt->bind_param("ssssssss", $val[0], $val[1], $val[2], $val[3], $val[4], $val[5], $val[6], $val[7]);
							
							/* Execute and close query. */
							$stmt->execute();
							$stmt->close();
						}
					}
				}
			}
			
			/* Close DB connection (only if it opened successfully). */
			if(!$mysqli->connect_error)
			{
				$mysqli->close();
			}
			
			/* Send EMail alert. */
			mail(constants::c_cWMEmail, constants::c_cDBEEmailSub, $cMBody, constants::c_cWMEmailOut);
			
			/* If error is fatal, stop PHP execution immediately. */
			if($bFatal)
			{
				die("A fatal database error has occurred. Please contact the webmaster immediately.");
			}
			
			return true;
		}
		
		return false;	
	}

Capture, log and return any errors encountered by the various other functions. If errors are found, they will be sent to the webmaster via e-mail. Additionally, an attempt is made to log the errors to another database.

My method is to use the MySQL database in my personal web portal to log various activities from the production sites I am responsible for, using an INSERT-only account. This ensures I am always aware of potential issues and have organized debugging info for problems that arise. For this reason, the error function is written primarily for MySQL. It would be easy enough to alter it for your own needs.

Note the bFatal parameter. If this is set to TRUE, the entire script (i.e. the page being generated) will fail and be replaced with an alert. This is less than user friendly, but in some cases may be a preferable result.
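As a usage sketch (assuming $oDB is an instance of the wrapper class), an explicit trap with location info and the fatal flag set might look like this:

```php
/* Hypothetical usage sketch: trap errors at a critical point. */
$oDB->db_query($cQuery, $cParams);

/* Log any errors with code location for debugging; TRUE halts the script on failure. */
$oDB->db_error("my_function line ".__LINE__, $cQuery, $cParams, TRUE);
```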

db_execute

function db_execute()
	{
		/*
		db_execute
		Damon Vaughn Caskey
		2012_11_13
		
		Execute prepared query.
		*/
	
		/* Execute statement. */
		$this->rDBResult = sqlsrv_execute($this->rDBStatement);
		
		/* Error trapping. */
		$this->db_error("db_execute");
		
		/* Set rows affected. */
		$this->iDBRowsAffect = sqlsrv_rows_affected($this->rDBStatement);
		
		/* Return result. */
		return $this->rDBResult;		
	}

Executes rDBStatement, then populates and returns rDBResult with the result. Also sets iDBRowsAffect with the number of rows affected by execution. Useful for efficiently running an action query repeatedly where only the parameters (if anything) change.
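The prepare-once, execute-many pattern might look like the sketch below (assuming $oDB is an instance of the wrapper class; note that sqlsrv_prepare binds parameters by reference, so the variables must exist before preparation):

```php
/* Hypothetical usage sketch: prepare once, execute for each row of input. */
$cName = NULL;	//Bound by reference below.
$iID   = NULL;	//Bound by reference below.

$oDB->db_connect();
$oDB->db_prepare("UPDATE tbl_user SET name = ? WHERE id = ?", array(&$cName, &$iID));

/* Each execution picks up the current values of the bound variables. */
foreach(array(1 => "Alice", 2 => "Bob") as $iID => $cName)
{
	$oDB->db_execute();
}
```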

db_field_count

function db_field_count()
	{
		/*
		db_field_count
		Damon Vaughn Caskey
		2012_11_13
		
		Get number of fields from query result.
		*/
		
		/* Get field count. */
		$this->iDBFCount = sqlsrv_num_fields($this->rDBResult);
		
		/* Error trapping. */
		$this->db_error("db_field_count");
		
		/* Return field count. */
		return $this->iDBFCount;
	}

Sets and returns iDBFCount with the number of fields returned by rDBResult.

db_field_metadata

function db_field_metadata()
	{
		/*
		db_field_metadata
		Damon Vaughn Caskey
		2012_11_13
		
		Fetch table rows metadata array (column names, types, etc.).
		*/
		
		/* Get metadata array. */
		$this->cDBMeta = sqlsrv_field_metadata($this->rDBResult);
		
		/* Error trapping. */
		$this->db_error("db_field_metadata");
		
		/* Return metadata array. */
		return $this->cDBMeta;
	}

Sets and returns cDBMeta with field metadata from rDBResult.

db_line

function db_line($iFetchType = constants::c_iDBFetchType)
	{
		/*
		db_line
		Damon Vaughn Caskey
		2012_11_13
		
		Fetch line array from table rows.
		
		$iFetchType: Row key fetch type
		*/
		
		/* Get line array. */
		$this->cDBLine = sqlsrv_fetch_array($this->rDBResult, $iFetchType);		
				
		/* Error trapping. */
		$this->db_error("db_line");
		
		/* Return line array. */
		return $this->cDBLine;
	}

Sets and returns cDBLine with row array data from rDBResult. This is the function you would place inside a while loop to fetch the entire row list from a record set.
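For example (a sketch, assuming $oDB is an instance of the wrapper class and tbl_user is a hypothetical table):

```php
/* Hypothetical usage sketch: fetch every row from a result set. */
$oDB->db_connect();
$oDB->db_query("SELECT id, name FROM tbl_user", NULL);

/* sqlsrv_fetch_array() returns NULL when no rows remain, ending the loop. */
while($cLine = $oDB->db_line())
{
	echo $cLine['id']." - ".$cLine['name']."\n";
}
```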

db_prepare

function db_prepare($cQuery, $cParams, $cOptions = array("Scrollable" => SQLSRV_CURSOR_STATIC))
	{
		/*
		db_prepare
		Damon Vaughn Caskey
		2012_11_13
		
		Prepare query statement.
		
		$cQuery: 	Basic SQL statement to execute.
		$cParams:	Parameters to pass with query (prevents SQL injection).
		$cOptions:	Options for cursor array, etc.
		*/
	
		/* Prepare query. */
		$this->rDBStatement = sqlsrv_prepare($this->rDBConn, $cQuery, $cParams, $cOptions);
		
		/* Error trapping. */
		$this->db_error("db_prepare", $cQuery, $cParams);
		
		/* Return statement resource. */
		return $this->rDBStatement;		
	}

Prepare a query statement from SQL string and parameters, then place into rDBStatement. Use this function to prepare action queries you need to execute repeatedly.

db_query

function db_query($cQuery, $cParams, $cOptions = array("Scrollable" => SQLSRV_CURSOR_STATIC))
	{
		/*
		db_query
		Damon Vaughn Caskey
		2012_11_13
		
		Prepare and execute query.
		
		$cQuery: 	Basic SQL statement to execute.
		$cParams:	Parameters to pass with query (prevents SQL injection).
		$cOptions:	Options for cursor array, etc.
		*/
	
		/* Execute query. */
		$this->rDBResult = sqlsrv_query($this->rDBConn, $cQuery, $cParams, $cOptions);
		
		/* Error trapping. */
		$this->db_error("db_query", $cQuery, $cParams);
		
		/* Return query ID resource. */
		return $this->rDBResult;		
	}

Prepare a query statement from SQL string and parameters, then execute immediately. Populates and returns rDBResult with result. This function is useful when you want to run a single query with no functional add-ons.