Donations And The Social Sphere

Introduction

One of the most important aspects of non-profit initiatives is fundraising. As with many other real-world activities, social media has disrupted the paradigm for conducting daily business. Let us discuss how to make this disruption work positively.

Articles

Questions

Q1 – How Has The Giving Scope Changed?

As if it wasn’t obvious in the general world sphere, the Benchmark article makes clear that fundraising targets are migrating from static media (email, websites) to social media. This provides both challenges and opportunities for organizations relying upon public generosity. No longer can one reliably send out bulk emails and expect dependable results. However, while news has always been instantly available – now it is instantly available on a personal level. This kind of “touch it, feel it” illusion of social media may prompt public response in a way that “special reports” on the corner cafe TV never could.

Q2 – How to Leverage Social Media as a Fundraising Tool

As above, social media has already begun to disrupt the traditional email fundraising model – itself a fairly recent disruption of ring downs, snail mail, and door-to-door canvassing. In a way, part of the work has already been done – it is all but certain that whatever cause one wishes to take part in already has an operating interest group. Unfortunately, this also raises the possible issue of competing for time and resources. Additionally, while previous generational methods are gradually being supplanted, they are still far too relevant and important to ignore. The savvy fundraising manager will need to find a way to mix public empathy toward their cause with multiple social outlets, and traditional methods – all in a way that is cost and time effective.

Dynamic Stored Procedure Call Generation

PHP

Scripting experiment. Inspector Blair page layout is currently static code. I would like to upgrade to a content managed layout. One of the first challenges is overcoming the diverse array of forms and subsequent data calls. Dynamic queries are NOT acceptable. Instead I’ll need to find a way to create dynamic inputs for stored procedures that can be driven from a forms database.

This experiment was to test a simple array generator. Normally saving to the database is handled by a set of class calls as in this excerpt:

case RECORD_NAV_COMMANDS::SAVE:
	
	// Stop errors in case someone tries a direct command link.
	if($obj_navigation_rec->get_command() != RECORD_NAV_COMMANDS::SAVE) break;
							
	// Save the record. Saving the main record is straightforward. We’ll run the populate method on our 
	// main data object, which will gather up post values. Then we can run a query to merge the values into 
	// the database table. We’ll then get the id from the saved record (since we are using a surrogate key, 
	// the ID should remain static unless this is a brand new record). 
	
	// If necessary we will then save any sub records (see each for details).
	
	// Finally, we redirect to the current page using the freshly acquired id. That ensures we always 
	// have an up to date ID for our forms and navigation system.			

	// Populate the object from post values.			
	$_main_data->populate_from_request();
	
	// --Sub data: Role.
	$_obj_data_sub_request = new class_account_role_data();
	$_obj_data_sub_request->populate_from_request();

	// Let's get account info from the active directory system. We'll need to put
	// names into our own database so we can control ordering of output.
	$account_lookup = new class_access_lookup();
	$account_lookup->lookup($_main_data->get_account());

	// Call update stored procedure.
	$query->set_sql('{call account_update(@id			= ?,
											@log_update_by	= ?, 
											@log_update_ip 	= ?,										 
											@account 		= ?,
											@department 	= ?,
											@details		= ?,
											@name_f			= ?,
											@name_l			= ?,
											@name_m			= ?,
											@sub_role_xml	= ?)}');
											
	$params = array(array('<root><row id="'.$_main_data->get_id().'"/></root>',  SQLSRV_PARAM_IN),
				array($access_obj->get_id(),                             SQLSRV_PARAM_IN),
				array($access_obj->get_ip(),                             SQLSRV_PARAM_IN),
				array($_main_data->get_account(),                        SQLSRV_PARAM_IN),
				array($_main_data->get_department(),                     SQLSRV_PARAM_IN),
				array($_main_data->get_details(),                        SQLSRV_PARAM_IN),
				array($account_lookup->get_account_data()->get_name_f(), SQLSRV_PARAM_IN),
				array($account_lookup->get_account_data()->get_name_l(), SQLSRV_PARAM_IN),
				array($account_lookup->get_account_data()->get_name_m(), SQLSRV_PARAM_IN),
				array($_obj_data_sub_request->xml(),                     SQLSRV_PARAM_IN));
	
	//var_dump($params);
	//exit;
	
	$query->set_params($params);			
	$query->query();
	
	// Repopulate main data object with results from merge query.
	$query->get_line_params()->set_class_name('blair_class_account_data');
	$_main_data = $query->get_line_object();
	
	// Now that save operation has completed, reload page using ID from
	// database. This ensures the ID is always up to date, even with a new
	// or copied record.
	header('Location: '.$_SERVER['PHP_SELF'].'?id='.$_main_data->get_id());
	
	break;

 

Before we can begin to control the above calls dynamically, we’ll need to break the call down and see if we can assemble it piece by piece. Here we will concentrate on building the SQL string.

The form parts and the column names they send data to will likely be stored in a sub-table of the forms database and output as a linked list. We need to use those column names in a call string for sending or retrieving data. This simple experiment uses a keyed array to simulate the list we might get, to see if we can concatenate a usable stored procedure call string.

$_main_data->populate_from_request();
            
// --Sub data: Role.
$_obj_data_sub_request = new class_account_role_data();
$_obj_data_sub_request->populate_from_request();

// Let's get account info from the active directory system. We'll need to put
// names into our own database so we can control ordering of output.
$account_lookup = new class_access_lookup();
$account_lookup->lookup($_main_data->get_account());

$save_row['id']            = '<root><row id="'.$_main_data->get_id().'"/></root>';
$save_row['log_update_by'] = $access_obj->get_id();
$save_row['log_update_ip'] = $access_obj->get_ip();
$save_row['account']       = $_main_data->get_account();
$save_row['department']    = $_main_data->get_department();
$save_row['name_f']        = $account_lookup->get_account_data()->get_name_f();
$save_row['name_l']        = $account_lookup->get_account_data()->get_name_l();
$save_row['name_m']        = $account_lookup->get_account_data()->get_name_m();
$save_row['sub_role_xml']  = $_obj_data_sub_request->xml();

$sql_str = '{call account_update(@';
$sql_str .= implode(' = ?, @', array_keys($save_row));
$sql_str .= ' = ?)}';
echo $sql_str;

Obviously this alone won’t be enough, but the resulting output looks quite promising:

{call account_update(@id = ?, @log_update_by = ?, @log_update_ip = ?, @account = ?, @department = ?, @name_f = ?, @name_l = ?, @name_m = ?, @sub_role_xml = ?)}
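The natural next step implied above is to drive the parameter array from the same keyed row, so the call string and its inputs always stay in sync. This is only a sketch – the row keys and values below are illustrative stand-ins, and the integer 1 substitutes for the sqlsrv driver’s SQLSRV_PARAM_IN constant so the snippet runs without the extension loaded:

```php
<?php

// Illustrative stand-in for the keyed row a forms database might drive.
$save_row = array(
    'id'         => '<root><row id="42"/></root>',
    'account'    => 'jdoe',
    'department' => 'biology',
);

// Build the stored procedure call string from the keys.
$sql_str  = '{call account_update(@';
$sql_str .= implode(' = ?, @', array_keys($save_row));
$sql_str .= ' = ?)}';

// Build a matching parameter array in the same key order.
// 1 stands in for SQLSRV_PARAM_IN here.
$params = array();
foreach ($save_row as $value)
{
    $params[] = array($value, 1);
}

echo $sql_str."\n";        // {call account_update(@id = ?, @account = ?, @department = ?)}
echo count($params)."\n";  // 3
```

Because both structures iterate the same array, adding a form field to the row automatically extends the call string and the parameter list together.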

 

 

Social Media & The Public Sphere

Introduction

The concept of a public sphere is hardly new, and certainly predates social media. The very existence of the Habermas article is proof enough, it having been published in 1974. But has social media usurped the public sphere? Or is it just another facet?

Article

  • Habermas, J. (1974). The Public Sphere: An Encyclopedia Article. New German Critique, 48-55. (UK Canvas)

  • The Nonprofit Engagement Project (NEP): Group Selection (Feb 9) (UK Canvas)

Question

Q1 – Contrast the public sphere vs. the social sphere.

Although this is my own question, I believe that ultimately there is very little contrast between social spheres and the public sphere. The public will acquire and consume information from social spheres at will, even (and often especially) if the originator wishes to keep their content under control.

Q2 – How to leverage the blending of social and public spheres.

If we can assume social networking is a key to the public sphere, then we have easy access to the consciousness of John Q., if only for fleeting moments. The work of “getting attention” is already done. Instead, it is quality of message that must set us apart to garner force behind our cause, in whatever form it might manifest.

Selecting Charitable Initiatives

Introduction

Social media brings much more than the logistics of organization to charitable initiatives. It is now possible, with just a few clicks, to research and evaluate any group or initiative at a moment’s notice.

Articles

Questions

Q1 – Discuss basic aspects of charitable organizations and initiatives.

This is a rather loose topic that is (quite intentionally) better suited for in-class discussion. Each individual will have their own personal criteria for what makes an organization charitable, let alone worthy.

Q2 – Discuss using social media to evaluate worthy charitable initiatives.

This again is a question best used for in-class discussion. Frankly, I look forward to lobbing it into the room like a proverbial Apple of Discord. Amplifying the diversity of opinion is the infinite variability of information available through social media. Within reason, there is really no one correct way to evaluate an initiative. This is at once the blessing and curse of social media’s power.

Deleting Facebook

Recently, as I go deeper into the academic rabbit hole, I have found the need to revive my blog just a bit. Many class assignments require writing up discussion points, so why not do it here?

Also, if I am to eventually obtain a PhD, it is important to build a body of work. Some of the none of you reading this may have noticed an influx of posts about various functions, database manifests, and so on. These are partially just notes so I can remember what I was doing five minutes ago – but eventually, along with my OpenBOR work, I am hoping to have a nice collection to organize and publish.

So what does this have to do with Facebook? Simple: part of reviving the blog means reorganizing – switching to hashtags vs. loads of categories, revamping some old code, all that good business.

As it happens, Facebook is chained to this thing like Stevie Nicks, ergo my account needs cleaning just as badly. Fixing the albums, categorizing posts so I can find them, and frankly, removing a few faux pas. Not personal stuff – I was always pretty careful about that. I’m talking posts about code work that, looking back now, make me wonder what sick malaise struck me when crafting them! 🙂

Problem is, have you SEEN what a pain it is to clean FB profiles?! Mine has been active since 2008, and I no doubt averaged ~1.5 posts a day. That’s a lot of extraneous excrement to sift through. It’s certainly doable, just not worth the time. Instead, let’s go the way of a rookie call center tech: format and recover!

Within the next week I am going to perform a full delete on my account. Afterwards I will restart fresh, though it might be a while since FB can take up to 90 days before a profile is truly cleared.

There you have it – sorry if I carried on like an Allen Collins solo. If you see me show up as a friend request in the next month or two, it’s really me and I hope you’ll accept. In the meantime, both of you who wish to keep up with me may do so here or through my other social venues. You shouldn’t have trouble finding them.

See you soon!

DC

PHP Directory Scan

PHP

Introduction

This function will scan directories and return keyed arrays of file attributes matching a user-provided filter string. Perfect for images, documents, and other sorts of content delivery where a naming convention is known but the directory contents are often appended or otherwise in flux.

Example

Let’s assume we need to locate a series of .pdf newsletters. Occasionally these letters are uploaded to the web server with a big endian, date-based naming convention.

The documents we need might be part of a larger container with many other items.

Since we know each file begins with “bio_newsletter_”, we can use that as the basis for our search pattern (the function expects a preg_match() compatible regular expression), like this:

$directory 			= '/docs/pdf/';
$filter				= '/^bio_newsletter_/';
$attribute			= 'name';
$descending_order 	= TRUE;

$files = directory_scan($directory, $filter, $attribute, $descending_order);

The function will then rummage through our target directory, and return an array with any matched files, giving you an output that looks something like this:

 
Key Value
/docs/pdf/bio_newsletter_2015_09.pdf /docs/pdf/bio_newsletter_2015_09.pdf
/docs/pdf/bio_newsletter_2015_05.pdf /docs/pdf/bio_newsletter_2015_05.pdf
/docs/pdf/bio_newsletter_2015_04.pdf /docs/pdf/bio_newsletter_2015_04.pdf

 

This might look redundant, but that’s because the keys are always populated with the file name (including directory) to allow extraction of values by name later, and in this case we are looking specifically for the file name. There is the option of returning one of several other attributes, which are reflected in the value.

If the directory does not exist or isn’t readable, the function will return NULL.
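For what it’s worth, the matching-and-sorting core can also be sketched with PHP’s built-in scandir() and preg_grep(). This hypothetical directory_scan_names() helper only covers the 'name' attribute case, but it shows the design trade-off of leaning on built-ins versus an explicit readdir() loop:

```php
<?php

// Minimal alternative sketch using scandir() + preg_grep().
// Only handles the 'name' attribute case of directory_scan().
function directory_scan_names($directory, $filter, $order_descending = FALSE)
{
    // Bail out with NULL if the directory is missing or unreadable.
    if (!is_readable($directory))
    {
        return NULL;
    }

    $result = array();

    // preg_grep() filters the directory listing against our
    // regex in one pass ('.' and '..' won't match the pattern).
    foreach (preg_grep($filter, scandir($directory)) as $file_name)
    {
        $result[$directory.$file_name] = $directory.$file_name;
    }

    // Sort by key (path) in the requested direction.
    if ($order_descending)
    {
        krsort($result);
    }
    else
    {
        ksort($result);
    }

    return $result;
}
```

The built-in version is shorter, but the readdir() loop makes it easier to stat() each file as you go when a non-name attribute is requested.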

Source

// Caskey, Damon V.
// 2012-03-19
//
// Scan a directory for files matching filter
// and return an array of matches.
//
// $directory:        Directory to scan.
// $filter:           Filter regex (preg_match pattern).
// $attribute:        File attribute to acquire. See here for
//                    list of available attributes: http://php.net/manual/en/function.stat.php
// $order_descending: FALSE (default) = Order by file name ascending.
//                    TRUE = Order by file name descending.
function directory_scan($directory, $filter, $attribute = 'name', $order_descending = FALSE)
{
    $result           = NULL;    // Final result.
    $directory_handle = NULL;    // Directory handle.
    $stat             = array(); // Attribute array.

    // If the directory is valid, open it
    // and get the handle.
    if(is_readable($directory))
    {
        $directory_handle = opendir($directory);
    }

    // Do we have a directory handle?
    if($directory_handle)
    {
        // Scan all items in the directory and populate
        // the result array with the requested attribute
        // of those whose names match our search pattern.
        while(($file_name = readdir($directory_handle)) !== FALSE)
        {
            if(preg_match($filter, $file_name))
            {
                $stat = stat($directory.'/'.$file_name);

                // If the requested attribute is name, just
                // pass on the name with directory. Otherwise,
                // pass the requested attribute.
                if($attribute == 'name')
                {
                    $result[$directory.$file_name] = $directory.$file_name;
                }
                else
                {
                    $result[$directory.$file_name] = $stat[$attribute];
                }
            }
        }

        // Close the directory handle.
        closedir($directory_handle);

        // Sort the array as requested (if we matched anything).
        if($result)
        {
            if($order_descending)
            {
                arsort($result);
            }
            else
            {
                asort($result);
            }
        }
    }

    // Return resulting array.
    return $result;
}

 

A word of caution – directory scanning is simple and effective, but doesn’t scale so well. A few hundred files is fine, but once you start breaching the thousands it’s probably time to break your directory structure down a bit, or consider an RDBMS solution.

Until next time!

DC

Social Media And Charitable Initiatives

Introduction

One of the greatest potentials of social media is that of logistics. No need for daisy chain calls, large scale meetings, or travel. Just fire up a space in your application of choice and go. As mentioned here, however, the almost comical ease of joining a cause has potential challenges of its own.

Articles

 

Questions

Q1 – Discuss social initiatives vs. social media.

One of the potential dangers of contemporary social media is the lack of effort required to conceive any sort of cause or initiative. With just a few well placed words or photos, one person can stir a whirlwind of frenzied interest and passion. Yet, just as quickly these movements fizzle out to be replaced with a new interest.

It is of course possible to circumvent this pitfall, creating a true initiative rather than a viral burst. It is my opinion that the subtler, more meaningful approach that produces long-term effort is what separates a functioning initiative from common social media.

Q2 – How can social media be used to empower a charitable initiative?

On the most basic level, social media is the ultimate logistical tool. I can attest from personal experience. My peers have at times arranged missing person searches, training, tornado clean up, flood aid, and even funeral support.

Social media is also a powerful potential fundraising tool. In the time a single person might canvas one neighborhood for support, social media could potentially reach millions.

 

Power of Social Media

Introduction

Continuing the discussion of defining social media. The first in-class discourse confirmed (at least personally) my theory of social media: that it is largely a psychological construct. Each individual responded with a definition that reflected their own experiences and predilections. This is not a bad thing at all, and in my opinion demonstrates the true power of social media – diversity and awareness.

Articles

 

Questions

Q1 – What is the true power of social media?

Numbers don’t lie – social media is ubiquitous in society. But how much of an effect does it really have? Consulting groups are quick to assure us a social media presence is essential to any business’s survival, and for brand awareness they may well be on target. In terms of direct sales – perhaps not so much. Facebook’s entire business model is a hedge bet by advertisers, not yet backed up by quantifiable results in product sales. Essentially it’s a repeat of the dot-com model (and we all know how that one went).

Conversely though, the psychological power of social media is undeniable. As above, you may not sell more widgets by placing ads on Facebook, but it’s likely you won’t sell any widgets AT ALL without a social media presence.

In short, awareness is the key.

Q2 – What are the personal effects of social media?

How many of us in the first world make it through a day without checking our favorite social media site? Good old standby example Facebook is last week’s news in the public consciousness, scoffed at by millennials, derided by Gen-X, loathed by boomers. Yet ~10% of the entire world’s population logs in each day. Nobody on Facebook anymore? Somebody’s lying.

The impact is enormous, and the effects it has on everyday lives are obvious. For this question, I will focus on what is perhaps the most oft-debated aspect: privacy.

I’ve made no secret of my view that social media gets overstated a bit. Facebook and its ilk get credit (and blame) for a lot of concepts they neither invented nor refined. Data mining? Please. Entities like Equifax know more about us than Zuckerberg could ever imagine – and have been at it since 1899. That’s not a typo: 1899! Blaming social media for breach of privacy is no more fair than giving it credit for empowering movements in society. What has social media done, then?

See also: public awareness. Sure, there’s still the common misconception that data mining is a new thing, but at least we now have a public awareness that it does in fact exist. Moreover, though social media does make data mining hilariously easy, it also gives John Q. a tool to push back with, if only just a little.

This awareness of social media, I believe, is far and away more powerful than the social media itself. The illusion, or in some cases reality, that we are watched by peers undeniably alters our everyday human behavior (deny it, go ahead). I would like to delve further into the topic of awareness via in-person discussion, as broaching it here would need far more than the allotted single page.

Cammy White

This blog has been long overdue for cleaning, and with discussion-based classes kicking into gear it’s going to see quite a bit of mileage soon. Housecleaning includes unused media files, and oh boy are there plenty! One of the first items I discovered was a huge pile of photos acquired 2015-01-12 while taking a GSD named Cammy out for her first winter.

Regardless of what some winter haters may tell you, big snows in Kentucky are pretty rare and usually concentrated in a small area. I thought catching a good one in the Red River Gorge was lightning in a bottle, at least until the blizzard of 2015 blew it away, and the blizzard of 2016 blew THAT away. Alas, it looks as if 2017 will bring no such luck.

In any case, here are the shots. Better five years late than never, ne?

What IS Social Media?

Social Media (Wikipedia)

History And Evolution of Social Media

 

What is social media? Ask Mr. John Q., and you will probably get something akin to “Facebook”, “Twitter”, or whatever the most prominent brand is at the moment. That’s a fine example of branding, yes? Those are no more the definition of social media than “Kleenex” is the definition of tissue. Or are they? Social media is social, is it not? If the public consciousness says that social media is Facebook and Twitter, perhaps it is.

According to Wikipedia, social media is defined as follows…

Social media are computer-mediated technologies that allow the creating and sharing of information, ideas, career interests and other forms of expression via virtual communities and networks.

…and then immediately admits this definition is neither definitive nor all-encompassing. It is likely well beyond the scope of this class, let alone a single discussion, to truly define social media, but we can at least narrow the scope a bit with some simple questions.

Q1 – How does “Social Media” differ from other communication vectors?

Technically speaking, telephones are social media, as would be virtually any form of direct communication in the modern era. What exactly sets apart the concept of social media vs. mass media, vs. a simple phone call? One to many? That’s been around since the days of town criers. Instant access? The telegraph. Two way? Telephone.

Could it be the combination of these aspects that creates the “social” in social media? Or is this merely a psychological effect of the previously mentioned branding?

Q2 – What is the future of social media?

Nebulous though it may be, the history of social media is recorded and available for dissemination. A more difficult question is where it will go. Facebook and its ilk are ubiquitous today, but will not last forever. What will replace the current forms of social media? Will blogs like this make a comeback in the public consciousness? Will new brands come along doing the same thing with another name plate? Or is there a truly disruptive force on the horizon?

 

Bootstrap Remote File Modals

Introduction

Opening a modal is well documented for the Bootstrap framework, assuming the modal’s contents are located within the same document. However, if the modal target is a remote file the process becomes slightly more nebulous. After some experimenting and Stack Overflow research I have compiled the following steps to do so.

Link

The link for opening a modal must still target a container element of some sort, typically a DIV id. You may also choose to target the container by class. For remote files, you need to include an href as well, targeting the remote file location.

<a href="model_document.php" data-target="#model_div_container_id" data-toggle="modal">Open Modal</a>

Modal Container

Next you need to prepare the calling document (the same document that contains the modal link) with a modal container. This is exactly the same as with a normal modal, except you are only adding the modal’s container elements, not its content.

<div class="modal fade" id="model_div_container_id">
    <div class="modal-dialog">
        <div class="modal-content">
        	<!-- Modal content area - leave this empty.-->
        </div>
    </div>
</div>

Modal Document (Content)

The last step is to prepare your remote document. Add whatever content you wish. The only special caveat to consider is that you do NOT add the modal container elements to this document. You only add the content. The content itself may include its own container elements, formatting, and so forth. In the example below, the modal is a simple list, and therefore only the markup for the list is included.

<ul>	
    <li>Item 1</li>
    <li>Item 2</li>
    <li>Item 3</li>
</ul>

You’ve probably already guessed what’s going on here. When the modal link is clicked, the remote document’s contents are inserted into the calling document’s target element. Bootstrap Modal Documentation (extracted 2017-01-23):

If a remote URL is provided, content will be loaded one time via jQuery’s load method and injected into the .modal-content div. If you’re using the data-api, you may alternatively use the href attribute to specify the remote source.

This is why you must include a target element in the calling document, and conversely NOT include modal containers in the remote document. Hope this helps. Special thanks to Buzinas from Stack Overflow.

Until next time!

DC

Redefining Political Communication Continued

Original Article (UK Canvas)

 

Further discussion of the points previously raised here resulted in a slightly more concrete debate on social media and the political sphere. One aspect is the delivery system, which raises my first query:

Q1 – What are the political effects of connection delivery initiatives such as Connect Kentucky?

Just like the physical superhighway system that preceded it, the internet was born from a military initiative. And like the superhighway system, the internet was quickly appropriated by civilian interests, bringing information, commerce, and connections at speeds never before conceived.

Unfortunately, the parallels continue to the detrimental aspects. The superhighway system is often blamed for the disruption and ultimately destruction of the areas it bypassed or, in the case of larger urban areas, dissected. Without access, the disenfranchised fell yet further behind. So far the internet has proven no different. The poor and uneducated are now being left behind on the virtual highway.

Connect Kentucky and similar initiatives hope to ameliorate the connection disparity by bringing Wi-Fi services to every municipality. Leaving out the logistical obstacles, my immediate concern would be the potential political ramifications. Someone has to pay for all that hardware, maintenance, and access. Paying is power, and power is control. When public access becomes not a convenience but a dependence, the dangers of single-entity control are very real.

Q2 – What are the drawbacks to a fully connected political sphere?

Quick query: when was the last time you recall any grass-roots movement effecting practical change? I’ve oft heard crowing from social media pundits about the power it gives groups to conduct movements and change the political landscape in ways not previously possible. However, the last time I checked, there was no such thing as Facebook during the movements for Anti-Slavery, Civil Rights, Women’s Suffrage, or any other initiative that actually managed a permanent change in our political sphere.

Indeed, an argument could be made that instant access to group communication, and the ability to “join” a movement by pressing Like or posting a random meme, has in fact weakened the influence of grass-roots initiatives. To put it bluntly, it takes no effort to make one’s self feel involved without actually doing anything. Outrage comes and goes, and the powerful ignore it until it goes away, knowing full well a new flash-in-the-pan distraction will appear tomorrow.

 

 

 

Redefining Political Communication

 

Original Article (UK Canvas)

 

The overarching theme appears to be that of political and consumptive entities re-adapting to “new-media”.

Q1. Does any adaptation need to be made at all?

While it is true information is more fragmented and entities must compete in ever wider arenas, an opposing argument could be made that politics is business as usual. The splitting trend of ideology continues at an arguably accelerated rate, while third parties, theoretically empowered by new media, hold less sway at the polls than they once did even before the rise of television.

Q2. How best to consolidate the influx of information?

A growing issue with evolving media is that of the common vernacular: “information overload”. Perhaps, though, a more fitting appellation is “input overload”. Twitter, Facebook, Instagram, Snapchat, Tumblr, Google, the blogosphere – all these and many more vie for increasing shares of a finite resource: attention. Add to that ever-evolving techniques for the insertion of advertising and subjectivity into all forms of media, new or old. The end result is a potentially horrifying mix of misinformation and utter consumer apathy.

So what then is the best technique to resolve, or at least slow, the tide? Technology solutions range from the mundane concepts of auto dissemination up to disruptive new innovations hoping to change the way we access information. Consolidate your news feed to one coherent screen from dozens of sources today – download a filtered stream directly to your consciousness tomorrow.

If that sounds far-fetched – it is. Technology is nebulous and ever-changing. Each generation tends to pat itself on the back for sitting on the bleeding edge while laughing at the poor Luddites who came before – only to become the next laughing stock tomorrow. So it goes that we’re unlikely to find a permanent solution using technology alone.

Perhaps regulatory? I’m not sure I’d even want to broach this one – freedom of the press, for all the issues it enables, is a necessity for maintaining a reasonably free society.

This in my opinion leaves only the individual to assume responsibility. It is in turn up to us as a whole society to educate ourselves and the upcoming generations to view all sources with a critical eye and an open mind at once.

 

Server Side Paging – MSSQL 2008

SQL

Introduction

Paging is almost perfunctory when dealing with large lists of online data. The problem is, most paging solutions out there (at least those I’ve seen) perform this vital function on the client side – either through dangerous dynamic SQL or, even worse, pulling the entire record set down and disseminating pages for the user afterward.

You wouldn’t (I hope) trust the client to put data into your tables, so why trust it to filter and page? That’s what your database engine was specifically designed to do! Server side paging gives you a couple of big advantages:

  • Security – See above. I would never, and will never, trust client-generated SQL. You’re just asking for it.
  • Scaling – Client side record processing may seem faster at first because of the instant response. Then your record count passes the VERY modest six digit mark, and suddenly you’re looking for ways to mediate ten minute load times.

The only real downsides to server side paging are reloading and the complexity of initial setup. The former can be dealt with using an AJAX or similar solution. The latter is where I come in. The following stored procedure completely encapsulates paging, apart from any other data extraction.

Implementation

Execute within your data procedure after all other record processing is complete on the primary data table. This assumes the primary data is in temp table #cache_primary. Pass the following arguments:

  • page_current – Current record page to view as requested by control code.
  • page_rows (optional, uses default value if NULL) – Number of rows (records) to output per page.

 

Outputs the following record set for use by control code:

Column Type Description
row_count_total int Total number of rows in the paged record set.
page_rows int Maximum number of rows per page. Will be same as the maximum row argument passed from control code unless that argument was NULL, in which case this will reflect the default maximum rows.
page_last int Last page number / total number of pages.
row_first int ID of first record in requested page.
row_last int ID of last record in requested page.

 

SQL

-- Master Paging
-- Caskey, Damon V.
-- 2016-07-08
--
-- Output recordset in divided pages. Also creates and outputs
-- a recordset of paging data for  control code. Execute in another 
-- stored procedure after all other record work (filters, sorting, joins, etc.) 
-- is complete. Make sure final table variable name is #cache_primary.

-- Set standard ISO behavior for handling NULL 
-- comparisons and quotations.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[master_paging]
    
    -- Parameters. 
        @param_page_current    int            = 1,    -- Current page of records to display.
        @param_page_rows    smallint    = 25    -- (optional) max number of records to display in a page.
            
AS
BEGIN
    
    -- If non paged layout is requested (current = -1), then just
    -- get all records and exit the procedure immediately.
        IF @param_page_current = -1
            BEGIN
                SELECT *
                    FROM #cache_primary
                    RETURN
            END 

    -- Verify arguments from control code. If something
    -- goes out of bounds we'll use stand in values. This
    -- also lets the paging "jumpstart" itself without
    -- needing input from the control code.
                
        -- Current page default.
        IF @param_page_current IS NULL OR @param_page_current < 1
            SET @param_page_current = 1
            
        -- Rows per page default.
        IF @param_page_rows IS NULL OR @param_page_rows < 1
            SET @param_page_rows = 10

    -- Declare the working variables we'll need.
        DECLARE @row_count_total    int,    -- Total row count of primary table.
                @page_last          float,  -- Number of the last page of records.
                @row_first          int,    -- Row ID of first record.
                @row_last           int     -- Row ID of last record.

    -- Set up temp table so we can reuse results.
        CREATE TABLE #cache_paging
        (
            id_row      int,
            id_paging   int
        )

    -- Populate paging cache. This is to add an
    -- ordered row number column we can use to
    -- do paging math.
        INSERT INTO #cache_paging (id_row, id_paging)
            (SELECT ROW_NUMBER() OVER(ORDER BY @@rowcount) AS id_row,
                    id
                FROM #cache_primary _main)

    -- Get total count of records.
        SET @row_count_total = (SELECT COUNT(id_row) FROM #cache_paging);

    -- Get paging first and last row limits. Example: If current page
    -- is 2 and 10 records are allowed per page, the first row should
    -- be 11 and the last row 20.
        SET @row_first = (@param_page_current - 1) * @param_page_rows
        SET @row_last = (@param_page_current * @param_page_rows + 1);

    -- Get last page number.
        SET @page_last = (SELECT CEILING(CAST(@row_count_total AS float) / CAST(@param_page_rows AS float)))

        IF @page_last = 0
            SET @page_last = 1

    -- Extract paged rows from the paging table, join to the
    -- main data table where IDs match and output as a recordset.
    -- This gives us a paged set of records from the main
    -- data table.
        SELECT TOP (@row_last - 1) *
            FROM #cache_paging _paging
                JOIN #cache_primary _primary
                    ON _paging.id_paging = _primary.id
            WHERE id_row > @row_first
                AND id_row < @row_last
            ORDER BY id_row
                
    -- Output the paging data as a recordset for use by control code.
                
        SELECT    @row_count_total    AS row_count_total,
                @param_page_rows        AS page_rows,
                @page_last            AS page_last,
                @row_first            AS row_first,
                @row_last            AS row_last
            
        
END
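
A minimal usage sketch follows. The procedure and table names here (person_list, tbl_person) are illustrative, not from the original: a data procedure filters its records into #cache_primary, then hands paging off to master_paging.

```sql
-- Hypothetical caller: gather filtered records into the
-- expected temp table, then execute the paging procedure.
CREATE PROCEDURE dbo.person_list
    @param_page_current int      = 1,
    @param_page_rows    smallint = 25
AS
BEGIN
    -- All filtering, sorting, and joins happen here first.
    SELECT id, name_first, name_last
        INTO #cache_primary
        FROM dbo.tbl_person
        WHERE active = 1;

    -- Outputs the paged records, then the paging metadata recordset.
    EXEC dbo.master_paging @param_page_current, @param_page_rows;
END
```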

 

 

Master Table Design

SQL

Introduction

Master table layout and creation. The master table controls all data tables via one to one relationship and carries audit info.

 

Layout

 

Column Type Description
id int (Auto Increment) Primary key for master table. All data tables must include an ID field (NOT auto increment) linked to this field via one to one relationship.
id_group int Version linking ID. Multiple entries share an identical group ID to identify them as a single record with multiple versions. If no previous versions of a new record exist, then this column is seeded from the ID field after initial creation.
active bit If TRUE, marks this entry as the active version of a record. Much faster than a date lookup and necessary for soft delete.
create_by int Account creating this version. -1 = Unknown.
create_host varchar(50) Host (usually IP provided by control code) creating entry.
create_time datetime2 Time this entry was created.
create_etime Computed column Elapsed time in seconds since entry was created.
update_by, update_host, update_time, update_etime (Same types as create_* counterparts) Same as the create_* columns, but updated on every CRUD operation.

 

Set Up

CREATE TABLE [dbo].[_a_tbl_master](
	id				int IDENTITY(1,1)	NOT NULL,						-- Primary unique key.
	id_group		int					NULL,							-- Primary record key. All versions of a given record will share a single group ID.
	active			bit					NOT NULL,						-- Is this the active version of a record? TRUE = Yes.
		-- Audit info for creating version. A new 
		-- version is created on any CRUD operation 
		-- in the data tables controlled by master.
	create_by		int					NOT NULL,						-- Account creating this version. -1 = Unknown.
	create_host		varchar(50)			NOT NULL,						-- Host (usually IP from control code) creating version. 
	create_time		datetime2			NOT NULL,						-- Time this version was created. 
	create_etime	AS (datediff(second, [create_time], getdate())),	-- Elapsed time in seconds since creation.	
		-- Audit information for updating version.
		-- When any CRUD is performed on a data
		-- table, the previously active version
		-- is marked inactive. Deleting a record
		-- simply marks all versions inactive.
		-- In short, the only updates made to 
		-- a master table are toggling Active
		-- flag.
	update_by		int					NOT NULL,						-- Account updating this version. -1 = Unknown.
	update_host		varchar(50)			NOT NULL,
	update_time		datetime2			NOT NULL,
	update_etime	AS (datediff(second, update_time, getdate())),
 
CONSTRAINT PK__a_tbl_master PRIMARY KEY CLUSTERED 
(
	id ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_active		DEFAULT ((1))			FOR active
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_create_by		DEFAULT ((-1))			FOR create_by
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_create_host	DEFAULT (host_name())	FOR create_host
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_created		DEFAULT (getdate())		FOR create_time
GO

ALTER TABLE _a_tbl_master ADD CONSTRAINT DF__a_tbl_master_update_host	DEFAULT (host_name())	FOR update_host
GO
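
As a sketch of a data table controlled by the master, consider the following. The table and column names are illustrative; the key point is that the ID is NOT an identity column – it is supplied from the master table's generated ID, giving the one to one relationship described above.

```sql
-- Hypothetical data table under master control.
CREATE TABLE dbo.tbl_person
(
	id			int			NOT NULL,	-- One to one with _a_tbl_master.id (NOT auto increment).
	name_first	varchar(50)	NOT NULL,
	name_last	varchar(50)	NOT NULL,

CONSTRAINT PK_tbl_person PRIMARY KEY CLUSTERED
(
	id ASC
),
CONSTRAINT FK_tbl_person_master FOREIGN KEY (id)
	REFERENCES _a_tbl_master (id)
)
```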

 

Hashtag Hunting

 

Hashtags are a great tool – assuming they actually are used as a tool and not containers for pithy phrases and meme overload. As time passes I have found myself sometimes forgetting my own tags, and thus am creating this list to avoid any overlap.

#academic_alacrity

#adhd_adventure

#altruistic_activism

#animal_appreciation

#anorak_apogee

#artistic_affinity

#commerce_cacophony

#courthouse_courtship

#family_fracas

#fitness_foibles

#h20_hijinx

#history_hunting

#holiday_hoopla

#humorous_hubris

#musical_medley

#outdoor_opulence

#painful_puns

#signage_spam

#solipsistic_selfie

#technology_temerity

#training_tactics

#winter_wanderlust

 

 

Encapsulated XML ID List Parsing – In Progress

SQL

Introduction

As part of my ongoing MSSQL Versioning Project, it is oftentimes necessary to pass a list of records to stored procedures, either from the controlling application or from a calling procedure. By use of standardized table design, you will normally only need to pass a list of primary keys. The procedure can query for any other information it needs from that point forward, making it fully self contained for maximum encapsulation.

For the list itself there are several options I know of, but only one is fully viable for my own needs. Your mileage may vary of course:

  1. Delimited List: On the surface this is the simplest of means. Just slap together a comma delimited string (“x, y, z”), break it down at the database console and off you go. If only it were actually that simple, and even if it was there’s not a chance on Earth I’m doing that. Neither should you. This is breaking the law of First Normal Form, something you never want to do for reasons well beyond the scope of this article. If you are curious, several (but by no means all) of the pitfalls of this approach are explained quite nicely here.
  2. Table Valued Parameters: TVPs are extremely powerful when used correctly and have their place, but for purposes of acting as lists or caches in a stored procedure, they have two serious drawbacks.
    1. TVPs are assumed by the query optimizer to contain a single row. This is of little to no consequence if only a few records are involved, but it can be disastrous once a threshold of ~100 is reached. Queries with execution times normally in the millisecond range may suddenly balloon into resource hogs requiring several minutes to complete.
    2. It’s rather unusual, but our environment is WIMP (Windows, IIS, MSSQL, PHP). Connectivity is provided by the sqlsrv driver, with an object oriented wrapper penned by yours truly. Unfortunately at time of writing, this otherwise excellent driver set does not support TVP.
  3. Common Language Runtime: Lots of fun to be had here, but like TVPs, it depends on a very specific environment. Otherwise it simply isn’t applicable. Even when it is, the realities of business often mean taking advantage of CLR adds layers of extra management and time wasted on service requests for the simplest of modifications. No thank you.
  4. XML: This is my method of choice. It’s reasonably fast, only giving up speed to CLR and TVP, and eventually surpassing the latter as the number of records increases. It’s also T-SQL compliant, and thus quite portable. The downside is there’s more of a learning curve and you’ll want to design carefully to avoid huge strings. Let’s have a closer look at how…

Considerations

  • Efficiency: We want the XML itself and its parser to be as compact and fast as possible.
  • Scaling: The design should be solid and not break down under a heavy load.
  • Reuse: We need to encapsulate and standardize our code. It won’t do much good if every query or procedure requires an inline rewrite of the XML parsing.

Implementation

There are three basic scenarios where we will need to parse a list of IDs via XML.

Procedure A executes Procedure B, Sending List of IDs

This will be a common occurrence – in my MSSQL versioning design every procedure that updates data must first update the Master Table. Should sub data be involved, then multiple updates to Master table must first take place – one for each row of sub data updated. First Procedure A will establish the list of records to update as a temp table, as in the following example:

id created update_by update_ip active
1 2016-12-28 115 128.163.237.37 False
2 2016-12-28 115 128.163.237.37 False
3 2016-12-28 115 128.163.237.37 False

Once the table is ready, this query is run against it:

SET @xml_string_var = (SELECT id FROM <temp table of items to update> FOR XML RAW, ROOT)

The variable @xml_string_var will be populated with an XML string as follows. Note <root> and <row>. These are default outputs that we could change by modifying our SQL above, but I prefer to leave them be. Since this little bit of SQL will be in nearly every data update procedure, let’s keep it as simple and reusable as possible.

<root>
<row id="1" />
<row id="2" />
<row id="3" />
</root>

We can now execute Procedure B passing @xml_string_var as an XML string argument.

Procedure B Receives XML From Procedure A

Upon execution, Procedure B will need to break the XML back down into a table. Rather than Procedure B breaking the XML down inline, let’s outsource the work. We could do this with a stored procedure, but the moment we executed a procedure that in turn executed our XML parser, we would run smack into an irritating limitation: MSSQL 2008 and below simply do not allow an INSERT … EXEC statement to be nested. Any attempt to do so will produce the following error:

Msg 8164, Level 16, State 1, Procedure <procedure_name>, Line <line> An INSERT EXEC statement cannot be nested.

In short, encapsulation as a stored procedure just won’t work. That really just leaves user defined functions. I personally loathe them for a lot of different reasons. They appeal to the programmer in me, but in SQL tend to cause more trouble than they’re worth. Still, if we want to encapsulate the XML parsing (and we DO), a table valued function is the best way to go. We’ll call it tvf_get_id_list:

-- tvf_get_id_list
-- Caskey, Damon V.
-- 2017-01-25
-- Returns recordset of IDs from xml list
-- 
-- <root>
--		<row id="INT" />
--		... 
-- </root>
			

CREATE FUNCTION tvf_get_id_list (@param_id_list xml)
RETURNS TABLE AS
RETURN (SELECT x.y.value('@id', 'int') AS id	
			FROM @param_id_list.nodes('root/row') AS x(y))

Procedure B will call tvf_get_id_list, passing along the XML. The tvf_get_id_list will break the XML down and produce a record set of IDs, which we can then insert into a temp table:

id
1
2
3

Procedure B will now have access to a record set of IDs that it can use to perform whatever work we need done.
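
The parsing call itself can be sketched as follows (the temp table name is illustrative):

```sql
-- Inside Procedure B: parse the XML argument into a
-- temp table of IDs for later joins.
CREATE TABLE #id_list
(
	id	int
)

INSERT INTO #id_list (id)
	SELECT id
		FROM dbo.tvf_get_id_list(@param_id_list)
```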

As you can see, the XML parsing work is fairly simple – we specifically planned the XML markup for easy break down. Even so, encapsulating the XML work out to a separate function gives us a couple of advantages over just pasting the XML parsing code inline.

  • Obviously we will use the fastest and best scaled technique for breaking down the XML (see here for examples), but should even better techniques be developed, we only need to modify this one function.
  • Procedure B and any other procedures that we send our XML list to are simpler and more compact. They need only call tvf_get_id_list to break down the XML list into a usable record set.

Procedure Called By Control Code

This is more or less identical to procedures executing each other, except the procedures are being called by application code. In this case, it is the application’s responsibility to send XML formatted as above. The simplicity of XML makes this rather easy, and the parsing code can be made part of a class file.

$result = '<root>';

foreach($this->id as $key => $id)
{	
					
	if($id == NULL) $id = DB_DEFAULTS::NEW_ID;
							
	$result .= '<row id="'.$id.'" />';									
}

$result .= '</root>';
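
Passing the finished string along to a stored procedure with the sqlsrv driver might look like the following. The connection handle, procedure, and parameter names are assumptions for illustration only.

```php
// Hypothetical: execute a stored procedure, passing the
// XML ID list as a string parameter.
$sql    = "EXEC dbo.some_update_procedure @param_id_list = ?";
$params = array($result);

$stmt = sqlsrv_query($conn, $sql, $params);

if($stmt === false)
{
	die(print_r(sqlsrv_errors(), true));
}
```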

MSSQL Relational Record Versioning – In Progress

SQL

Versioning Notes (In Progress)

Master Table Update

The master table controls all data tables in the database, including sub tables containing one to many relational data (ex. One Person -> Many Phone Numbers). This means our master update procedure must be able to handle multiple record updates at once and be modular enough to execute from another update procedure – both for updating that procedure’s target data table AND any related sub tables. Otherwise the whole encapsulation concept falls apart.

  1. Populate a temp table with list of update IDs. These are the record IDs that we want modified (or created as the case may be). Ultimately we will be inserting these as new records with new IDs no matter what, but we’ll need to perform versioning if these IDs already exist in the master table.
    ID
    1
    2
    3
  2. Find any records in Master Table that match the Update List, and mark them as inactive. They will be replaced with new inserts.
    UPDATE
    			_a_tbl_master
    		SET
    			active = 0
    		FROM
    			#master_update_source _new
    		WHERE 
    			_a_tbl_master.id = _new.id;
  3. Prepare a list of inserts consisting of records where update list and master table IDs match, AND unmatched items in the Update List. The combined list is used to populate a temp table. This is also where we acquire the group ID for existing records. The group ID will be applied to new inserts (versions) of the existing records.
    INSERT INTO 
    			#master_update_inserts (id, id_group, update_by)
    		SELECT 
    			_current.id, _current.id_group, @update_by
    		FROM #master_update_source _source  
    			LEFT JOIN _a_tbl_master _current ON _source.id = _current.id
    
  4. Apply list of inserts to the Master Table. Use OUTPUT clause to populate a temp table with a list of IDs for each insert.
    INSERT INTO 
    			_a_tbl_master (id_group, update_by)
    		OUTPUT 
    			INSERTED.ID 
    				INTO #master_update_new_id
    		SELECT 
    			id_group, update_by 
    		FROM 
    			#master_update_inserts
  5. Using the list of IDs created when records were inserted to Master Table, we run an UPDATE against Master Table on the list of newly created IDs, where the id_group field is empty. This is to seed new records (not new versions of existing records) with a group ID.
  6. Master Table is now populated. New records will have a group ID identical to their ID, while existing records will have a new ID, but retain their previous group ID.
    ID id_group created update_by update_ip active
    2844 2844 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2845 2845 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2846 2846 2016-12-28 10:13:45.1900000 115 128.163.237.37 False
    2989 2844 2016-12-28 22:42:14.7930000 115 128.163.237.37 True
    2990 2845 2016-12-28 22:42:14.7930000 115 128.163.237.37 True
    2991 2846 2016-12-28 22:42:14.7930000 115 128.163.237.37 True

Full procedure (in progress)

-- Caskey, Damon V.
-- 2016-12-20
--
-- Update master table. Must be run before
-- any data table controlled by master is
-- updated. Outputs record set containing
-- IDs for the updated master that a
-- calling data update procedure will need.


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[_a_master_update]
	
	-- Parameters
	@arg_id					int				= NULL,		-- Primary key. 			
	@arg_update_by			int				= NULL,		-- ID from account table.
	@arg_update_ip			varchar(50)		= NULL		-- User IP, supplied from application.
			
AS
BEGIN

	-- Let's create the temp tables we'll need.
		
		-- List of update requests. All we
		-- need are IDs. The rest is handled
		-- by parameters or generated by
		-- default data binds in the master
		-- table.
		CREATE TABLE #master_update_source
		(
			id			int
		)

		-- Prepared list of items that
		-- will be inserted into the master
		-- table.
		CREATE TABLE #master_update_inserts
		(
			id			int,
			id_group	int
		)

		-- List of new item IDs created when
		-- inserts are performed on master
		-- table.
		CREATE TABLE #master_update_new_id
		(
			id			int
		)
		

	-- Populate update source (for experiment).
	INSERT INTO #master_update_source (id)
	VALUES (-1), (-1), (2844), (2845), (2846)


	-- Find any records that match our 
	-- update list and mark them as inactive.
		UPDATE
			_a_tbl_master
		SET
			active = 0
		FROM
			#master_update_source _new
		WHERE 
			_a_tbl_master.id = _new.id;

	-- Prepare inserts. Here we are adding inserts for new
	-- records AND for records that already exist. We do the
	-- latter so we can get the current group ID and pass it on. 
		INSERT INTO 
			#master_update_inserts (id, id_group)
		SELECT 
			_current.id, _current.id_group
		FROM #master_update_source _source  
			LEFT JOIN _a_tbl_master _current ON _source.id = _current.id

	-- Apply the insert list (insert into master table). New
	-- IDs created by the database are output into
	-- a temp table.
		INSERT INTO 
			_a_tbl_master (id_group, update_by, update_ip)
		OUTPUT 
			INSERTED.ID 
				INTO #master_update_new_id
		SELECT 
			id_group, @arg_update_by, @arg_update_ip 
		FROM 
			#master_update_inserts

	-- For new records, seed the group ID with
	-- new record's ID.
		UPDATE
			_a_tbl_master
		SET
			id_group = _new.id
		FROM
			#master_update_new_id _new
		WHERE 
			_a_tbl_master.id = _new.id AND _a_tbl_master.id_group IS NULL;

END

Code Snip – Collision Checking

Notes from 2016-11-26 – Revamping OpenBOR collision detection.

With the possibility of several dozen or more entities on screen, collision detection must be precise with minimal resource intensity.

Currently coordinates (s_hitbox) exist as static sub-structures in s_collision_attack and s_collision_body. See below…

typedef struct
{
	int x;
	int y;
	int width;
	int height;
	int z1;
	int z2;
} s_hitbox;

// s_collision_attack
typedef struct
{
    int                 attack_drop;        // Knock-down factor; how much this attack adds toward knocking the victim down.
    int                 attack_force;
    int                 attack_type;        // Reaction animation, death, etc.
    int                 blast;              // Attack box active on hit opponent's fall animation.
    int                 blockflash;         // Custom bflash for each animation, model id
    int                 blocksound;         // Custom sound for when an attack is blocked
    s_hitbox            coords;
    int                 counterattack;      // Treat other attack boxes as body box.
    ...

This was done for simplicity, and with current logic wastes no memory as coordinates are always required for a collision box.

However, the addition of multiple collision box support has exposed the need to break collision detection down into smaller functions. This in turn requires a lot of passing around the entire s_hitbox structure. Given the rate this functionality is used (multiple collision evaluations on every entity on every animation frame @200 frames per second), efficiency is absolutely imperative. Replacing the static coords declaration with a pointer and using dynamic allocation will add some code complexity initially, but in the long term should simplify breaking down collision logic and save substantial resources.
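
As a minimal sketch of that change (the member names besides coords are placeholders – the real s_collision_attack carries many more fields), the embedded structure becomes a pointer that is allocated on demand:

```c
#include <stdlib.h>

// Hitbox coordinates, as in the current codebase.
typedef struct
{
    int x;
    int y;
    int width;
    int height;
    int z1;
    int z2;
} s_hitbox;

// Sketch of s_collision_attack with the static coords
// member replaced by a pointer (other members omitted).
typedef struct
{
    int         attack_force;
    s_hitbox    *coords;        // Allocated only when needed.
} s_collision_attack;

// Allocate a hitbox on demand and attach it to the
// attack structure. Returns NULL on allocation failure.
s_hitbox *collision_alloc_coords(s_collision_attack *attack)
{
    attack->coords = malloc(sizeof(s_hitbox));

    return attack->coords;
}
```

Collision helpers can then pass the s_hitbox pointer around instead of copying the whole structure on every call.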

The following are in-progress logic functions; they will need reworking to accommodate the new pointer.

// Caskey, Damon V.
// 2016-11-25
//
// Get 2D size and position of collision box.
s_coords_box_2D collision_final_coords_2D(entity *entity, s_hitbox coords)
{
    s_hitbox        temp;
    s_coords_box_2D result;

    temp.z1 = 0;

    // If Z coords are reversed, let's correct them.
    // Otherwise we use
    if(coords.z2 &gt; coords.z1)
    {
        temp.z1 = coords.z1 + (coords.z2 - coords.z1) / 2;
    }

    // Get entity positions with Z offset
    // included, and cast to integer.
    temp.x    = (int)(entity-&gt;position.x);
    temp.y    = (int)(temp.z1 - entity-&gt;position.y);

    // Use temporary positions to get final dimensions
    // for collision boxes.
    if(entity-&gt;direction == DIRECTION_LEFT)
    {
        result.position.x   = temp.x - coords.width;
        result.size.x       = temp.x - coords.x;
    }
    else
    {
        result.position.x   = temp.x + coords.x;
        result.size.x       = temp.x + coords.width;
    }
    result.position.y   = temp.y + coords.y;
    result.size.y       = temp.y + coords_owner.height;

    return result;
}

bool collision_check_contact_2D(s_coords_box_2D owner, s_coords_box_2D target)
{
    // Compare the calculated boxes. If any one check
    // fails, then the boxes are not in contact.
    if(owner.position.x > target.size.x)
    {
        return FALSE;
    }

    if(target.position.x > owner.size.x)
    {
        return FALSE;
    }

    if(owner.position.y > target.size.y)
    {
        return FALSE;
    }

    if(target.position.y > owner.size.y)
    {
        return FALSE;
    }

    return TRUE;
}

bool collision_check_contact_Z(entity *owner, s_hitbox coords_owner, s_hitbox coords_target)
{
    int zdist = 0;
    int z1 = 0;
    int z2 = 0;

    if(coords_owner.z2 > coords_owner.z1)
    {
        z1 += coords_owner.z1 + (coords_owner.z2 - coords_owner.z1) / 2;
        zdist = (coords_owner.z2 - coords_owner.z1) / 2;
    }
    else if(coords_owner.z1)
    {
        zdist += coords_owner.z1;
    }
    else
    {
        zdist += owner->modeldata.grabdistance / 3 + 1;    // Temporary fix for integer to float conversion.
    }

    if(coords_target.z2 > coords_target.z1)
    {
        z2 += coords_target.z1 + (coords_target.z2 - coords_target.z1) / 2;
        zdist += (coords_target.z2 - coords_target.z1) / 2;
    }
    else if(coords_target.z1)
    {
        zdist += coords_target.z1;
    }

    zdist++; // Pass >= <= check.

    if(diff(z1, z2) > zdist)
    {
        return FALSE;
    }
    
    return TRUE; 
}

// Caskey, Damon V.
// 2016-11-25
//
// Compare collision boxes and return
// TRUE if they are in contact.
bool checkhit_collision(entity *owner, entity *target, s_hitbox coords_owner, s_hitbox coords_target)
{
    s_coords_box_2D owner_final;
    s_coords_box_2D target_final;

    bool result;
    
    // First check Z contact.
    result = collision_check_contact_Z(owner, coords_owner, coords_target);    

    // If result is TRUE, then run
    // 2D plane checks.
    if(result)
    {
        // Get final collision box 2D plane sizes.
        owner_final     = collision_final_coords_2D(owner, coords_owner);
        target_final    = collision_final_coords_2D(target, coords_target);
        
        // Compare the 2D boxes and get result.
        result = collision_check_contact_2D(owner_final, target_final);
    }
    
    // return final result.
    return result;
}

// Find center of attack area
s_axis_f_2d collision_center()
{

    leftleast = attack_pos_x;

    if(leftleast < detect_pos_x)
    {
        leftleast = detect_pos_x;
    }

    rightleast = attack_size_x;

    if(rightleast > detect_size_x)
    {
        rightleast = detect_size_x;
    }

    medx = (float)(leftleast + rightleast) / 2;
}

Double Dragon Reloaded Update Progress

….

 

Controls

  • Back Kick button is removed. New button Defend replaces it.
  • Punch and Kick are renamed “Attack A” and “Attack B” as their functions differ depending on which Lee brother is used.

New Moves

  • Block.
  • Run.
  • Somersault Throw.
  • Somersault Kick.
  • Backdrop Finisher.
  • Hyper Uppercut.
  • Hyper Knee.
  • Dragon’s Tail Kick (Double team jump kick).
  • Double Dragon Hurricane Kick (Double team Hurricane Kick).
  • Rear Backhand Strike (w/sticks).
  • TKD Kick.
  • TKD Finisher.
  • Knee Thrust.
  • Middle kick (Grab finisher).
  • Roundhouse Kick finisher.

 

Modified Moves

  • Hurricane kick now requires timing with apex of jump as in the original arcade DDII, but is also more powerful.
  • Stick combo is now a faster four step combo with all unique animations and a hit per button press. Take away Chin’s Kali sticks and show him how it’s done!
  • Second strike with chains and whips now has a unique animation.
  • Off wall kick has new animation.

Stages

  • All
    • Updated music to Double Dragon Neon tracks with offsets & loops.
  • Stage 1 (City)
    • Separated background.
      • Path
      • Bush
      • City
      • Sky
    • Upgrade to .png assets. ~73kb vs. ~279kb
  • Stage 3A (Rooftops)
    • Bridge is now a metallic grate – scenery visible through the bottom
    • Separated background.
      • Path
      • Forest
      • City
      • Mountains
      • Clouds
      • Sky
    • Upgrade to .png assets. ~136kb vs. ~718kb
  • Stage 3 (Forest)
    • Separated background.
      • Path
      • Trees
      • Field
      • Mountains
      • Clouds (Clouds consist of four independent layers auto scrolling at different rates to simulate real flowing cloud cover)
      • Sky
    • Upgrade to .png assets. ~166kb vs ~615kb
  • Stage 4 (Invade the enemy base!)
    • Separated background.
      • Path
      • Tree (black)
      • Trees (blue)
      • Sky
    • Upgrade to .png assets. ~322kb vs. ~1478kb

Technical

  • Where possible, rework some scripts with #import rather than #include. This is a huge memory saver.
  • Refine billkey.c with macros and bitwise operators.
  • Jump animations simplified.
    • Jump animations formerly included multiple identical frames to control the timing for cancels. These have been replaced by velocity evaluation in keyscripts. The extra frames are removed.
  • Eliminate unneeded resources
    • Billy’s weapon sprites that were identical to unarmed version. References in weapon texts switched to the unarmed sprite. Note that small item weapon sprites were unique, but unnecessary and were also eliminated. Small items are normally in the far hand while walking and would be completely hidden from view – the armed versions magically switch item to near hand. Makes more sense to simply use the unarmed sprites.
      • Bomb
        • aaaa5
        • aaaa55
        • aaaa6
        • bk0
        • bk4
        • bkick1
        • climb2
        • J00
        • J0
        • Jk2
        • kna4
        • kna5
        • wu1
        • wu2
        • wu3
        • wu4
      • Chain
        • bk0
        • bkick1
        • climb2
        • climb3
        • climb4
        • climb5
        • jk2
        • pain1
        • wu1
        • wu1
        • wu3
        • wu4
      • Dynamite
        • bk0
        • bk4
        • bkick1
        • kna4
        • kna5
        • pain1
        • Knife
        • bkick1
        • bk4
        • kna4
        • kna5
        • wu1
        • wu2
        • wu3
        • wu4
      • knife
        • wu1
        • wu2
        • wu3
        • wu4
      • Whip
        • bkick1
        • wu1
        • wu2
        • wu3
        • wu4
      • All throwing sprites for heavy objects.