Category Archives: Software Development

Event Namespacing: It’s A Good Idea

If you’re a web developer and use jQuery, chances are you have had, or will have, the need to attach event handlers to your page elements.

With jQuery you can do this in a number of ways: .on, .bind, directly with .click (blur, mouseout, etc.), and more. For more information on adding and removing handlers, see the jQuery API documentation.

A quick example using .bind:

$( "selector" ).bind( "click", function() {
//do something;
});

And if you ever needed to remove your handler you could do something like:
$( "selector" ).unbind( "click");

This is all very simple and works great. However, what if you need to remove only a specific handler? For example, if you’re writing a plugin and need to remove only the plugin’s custom handlers without disturbing any others.

Both jQuery functions .off and .unbind allow you to pass in a reference to the handler function in order to remove just that handler. However, this requires you to maintain a reference to the handler, which may not be ideal.

This is where event namespacing can come in handy. Adding a namespace to your handler when you attach it allows you to safely remove it later without removing all event handlers of the same type.

Here are the same bind and unbind methods using a namespace:


$( "selector" ).bind( "click.mynamespace", function() {
//do something;
});

$( "selector" ).unbind( "click.mynamespace");

Namespaces give you the flexibility to unbind specific event handlers while maintaining the ability to unbind by event type or to remove all handlers at once. They can also be used in the same way to trigger a specific event handler. And namespacing is a good practice for easily identifying custom event handlers, especially when developing plugins.
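For example, with a namespace in place you can remove every handler that belongs to it, regardless of event type, or trigger only the handlers bound under it (the selector and namespace here are just placeholders):

// remove all handlers bound under the namespace, regardless of event type
$( "selector" ).unbind( ".mynamespace" );

// trigger only the click handlers bound under the namespace
$( "selector" ).trigger( "click.mynamespace" );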


Simple Method for Adding Tooltips

There is no shortage of plugins and techniques for adding tooltips to elements on your website. I’ve used a lot of them, and most work pretty well. However, I have found some to be a little cumbersome and a bit of overkill. Here’s an example of a very simple jQuery solution that I’ve used in a number of cases for selects and inputs.

Take, for example, the following HTML:
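The markup is nothing special; a couple of plain text inputs like these will do (the ids and the data-helptext attribute, which holds each input’s help text, are just the choices made for this example):

<label for="firstName">First Name</label>
<input type="text" id="firstName" data-helptext="Enter your first name." />

<label for="lastName">Last Name</label>
<input type="text" id="lastName" data-helptext="Enter your last name." />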

Which of course will render as a couple of plain, unstyled text inputs.

With just a few lines of jQuery and a little CSS we can add a tooltip to each of these inputs that will open on hover.
jQuery:
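Something along these lines does the trick (the helpText class name, the data-helptext attribute, and the 10-pixel offset are just the choices made for this example):

$( "input" ).hover(
    function () {
        // on mouseover: build a span with the help text and insert it into the
        // DOM right after the target element, just off its right edge
        var $input = $( this );
        var position = $input.position();
        $( "<span class='helpText'>" + $input.attr( "data-helptext" ) + "</span>" )
            .css( { top: position.top + "px", left: position.left + $input.outerWidth() + 10 + "px" } )
            .insertAfter( $input );
    },
    function () {
        // on mouseout: locate the span by its class and remove it from the DOM
        $( ".helpText" ).remove();
    }
);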
 


CSS:
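Again, just an example; as explained below, the parts that really matter are the absolute positioning and the high z-index:

.helpText
{
    position: absolute;
    z-index: 1000;
    padding: 2px 6px;
    background-color: #ffffe1;
    border: 1px solid #ccc;
    font-size: 11px;
}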


Now hover over one of the inputs and you’ll get the help text in a small box beside it.

How it works:

The jQuery simply binds the hover function to the inputs, which creates the event listeners for mouseover and mouseout on each input.

On mouseover a span is added to the DOM after the target element.  Notice it uses the width and position of the target element to calculate the placement of the span.

On mouseout the span is located using the class as a selector and removed from the DOM.

The CSS can really be whatever you want for style purposes. What is important are the position and z-index. Depending on the makeup of the other elements on your page, you’ll need the position set to absolute to enforce the location of the span that is created on hover. And I like to use a high z-index so that the help text will appear above any nearby elements.

If you like this method and want to use it with more flexibility for positioning, I created a plugin called Easytip. You can find more information on the plugin and download it here.


Get Value of Unknown Type From Unknown Object

In a previous post I mentioned a content management system I built for dynamically creating email content to notify users of events.  The system is delivered via a RESTful web service that is called from disparate websites and programs within our software ecosystem.  The basic requirement of the content builder is that those calling systems only need to provide a few key values.  From that information it can determine what database object to gather data from and what content templates to use for building the email notifications.

The design of the system rests on the Abstract Factory design pattern. This allows the system to determine at run time what objects to create. In doing this the system uses reflection in a number of ways, one of which I discussed in the previous post mentioned above. Another I’m going to share in this post.

The system is passed an object which contains all the possible key fields needed to determine what data and content are needed. Depending on which key fields are populated, the system will create the needed data objects and apply the correct rules and templates for constructing the content.

Example Class:

     
namespace Extensions
{
    public static class ObjectExtensions
    {
        public static T GetPropertyValue<T>(this object sourceObject, string key)
        {
            if (sourceObject.GetType().GetProperty(key) == null)
                return default(T);
            return (T)sourceObject.GetType().GetProperty(key).GetValue(sourceObject, null);
        }
    }
}

Explanation:

The class is static because I like to write these sorts of methods as extensions. I find them easier to use as a developer, and really we just want to extend this method to any object. The method returns the generic type T because we don’t know what type the property we’re looking up is until run time.

The method first checks to see if the property (key) being requested exists. If it does not, we return the default value of the generic type. In my system this works fine; returning a null for a string or a zero for an integer has the same effect, because it’s as good as the property not being populated. In other implementations you may want to throw an exception here.

If the property exists, the value is returned to the caller and it’s all done. You might ask, what if the property they pass as the “key” does not have a type of “T”? I contemplated that; one option would be to just return the default of the type T in that scenario. Instead, I decided not to handle it and to let the .NET framework bubble the error up to the caller so it can decide what to do. I’m not a big fan of validating that methods are being called correctly in my logic layer, or of adding too much error handling; if not done carefully, errors can be masked and hidden from the calling program. Below are a few unit tests that show how the method works.

Unit Tests:

[TestClass]
public class ObjectExtensionsTests
{
    public class SourceObject
    {
        public int Id { get; set; }
        public string Ids { get; set; }
    }

    [TestMethod]
    public void ValueExistsAndIsReturned()
    {
        var s = new SourceObject { Id = 123456 };
        Assert.AreEqual(123456, s.GetPropertyValue<int>("Id"));
    }

    [TestMethod]
    public void ValueDoesNotExistsAndDefaultIntValueIsReturned()
    {
        var s = new SourceObject { Id = 123456 };
        Assert.AreEqual(0, s.GetPropertyValue<int>("Id2"));
    }

    [TestMethod]
    public void ValueDoesNotExistsAndDefaultStringValueIsReturned()
    {
        var s = new SourceObject { Ids = "123456" };
        Assert.AreEqual(null, s.GetPropertyValue<string>("Id2"));
    }

    [TestMethod]
    public void WrongTypeErrorReturned()
    {
        var error = false;
        try
        {
            var s = new SourceObject { Id = 123456 };
            Assert.AreEqual(null, s.GetPropertyValue<string>("Id"));
        }
        catch (Exception)
        {
            error = true;
        }
        Assert.AreEqual(true, error);
    }
}

There you have it. A simple extension method to get the value of an unknown property on an unknown object.


Object Reflection In JavaScript

Reflection is a powerful technique that I’ve used a lot in C#. On several occasions I’ve found uses for it in JavaScript too. Most recently, I was writing a test harness for a RESTful web service to give our QA folks an easy way to test it. Typically, when I’m writing these kinds of things I have a couple of objectives in mind.

First, I don’t want to turn the testing tool into a maintenance item. That is, if there are updates to the software being tested (a web service in this case) I don’t want to have to update the tool.

Second, I don’t want to add another uncontrolled variable to the testing.   Meaning, I don’t want the testing tool to require extensive QA or to create a layer of possible bugs to be validated anytime QA finds an issue with the software being tested.

Finally, I don’t want to spend a lot of time on it and I’d like to reuse it. It’s an internal tool that is usually used by a semi-technical person and has a very specific purpose; being pretty isn’t part of it. And, if I can use it again, for another web service in this case, double bonus!

In the case of the RESTful web service testing tool I accomplished this in two ways.

First, I build the inputs for the service by parsing the web service schema (.xsd) dynamically. This way, if new inputs are added I don’t have to update the testing tool. I’ll probably write a post on that part another day.

Second, I output the results (which are returned in JSON) to a web page using reflection after the JSON object is parsed to a JavaScript object.

Here is an example of how I do it:

Consider the following object. It has a set of properties, a nested object, and a function.

var obj = new Object();
obj.FirstName = "John";
obj.LastName = "Smith";
obj.FullName = "John Smith";
obj.Address = "Main Street";
obj.Phone = "999-999-9999";
obj.GetName = (function () {});
obj.History = new Object();
obj.History.PreviousAddress = "South Street";
obj.History.PreviousPhone = "888-888-8888";

All I want to do is print each property and its value to the screen. And, if there is a nested object, I want to print its properties too. This will allow the return results to be validated.

To do this I use a script like this:

function DisplayObjectProperties(obj) {
    for (var prop in obj) {
        var text = "";
        if (typeof obj[prop] != "function" && typeof obj[prop] != "object") {
            text = prop + ": " + obj[prop];
        }
        else if (typeof obj[prop] === "object") {
            DisplayObjectProperties(obj[prop]);
        }
        $("body").append("<div>" + text + "</div>");
    }
}

Nothing too much going on here, just a loop through the object, appending each property name and its value to an HTML element. The important thing is to test the type, so you can handle nested objects, arrays, functions, etc. however you might want to. In my case, if there is a nested object I want to display its properties too. I do that with a recursive call to the function, passing the nested object.
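For instance, if you wanted arrays printed on a single line instead of being walked like objects, one extra type check in the same loop does it (this is just a variation, not part of the original harness):

function DisplayObjectPropertiesWithArrays(obj) {
    for (var prop in obj) {
        var text = "";
        if (Object.prototype.toString.call(obj[prop]) === "[object Array]") {
            // arrays: print the values on one line instead of recursing
            text = prop + ": " + obj[prop].join(", ");
        }
        else if (typeof obj[prop] != "function" && typeof obj[prop] != "object") {
            text = prop + ": " + obj[prop];
        }
        else if (typeof obj[prop] === "object") {
            DisplayObjectPropertiesWithArrays(obj[prop]);
        }
        $("body").append("<div>" + text + "</div>");
    }
}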

Call it: just pass the object to the function and it will do the rest.

DisplayObjectProperties(obj);

Results:

FirstName: John
LastName: Smith
FullName: John Smith
Address: Main Street
Phone: 999-999-9999
PreviousAddress: South Street
PreviousPhone: 888-888-8888

Returning Errors in a WCF RESTful Web Service

While writing web services I’ve used several different methods for returning errors. A very common method is to return null or, if the response is a string, to return the word “error” or a message stating there was an error. That works, but I’ve never really been a big fan.

Example:

public object GetSomething(string param)
{
    try
    {
        var returnObject = new object(); // ...some code here that builds the real result
        return returnObject;
    }
    catch (Exception ex)
    {
        return null;
        // or: return new object();
        // or: some variation
    }
}

This method doesn’t really seem to comply with best practices for being RESTful or for OOP. It gets you by, but it’s kind of smelly.

More recently, especially with RESTful WCF web services that return complex objects, I’ve added DataMembers to the DataContract for an error Boolean and an error message, alongside the object(s) I’m returning as a DataMember.

Example:

[DataContract(Name = "returnObject")]
public class returnSomeObjects
{
    [DataMember]
    public IList<someObject> Objects { get; set; } // the collection being returned; name it whatever fits
    [DataMember]
    public bool Error { get; set; }
    [DataMember]
    public string Message { get; set; }
}

Then in the error handling above:

catch (Exception ex)
{
return new returnSomeObjects { Error = true, Message = ex.Message };
}

I like this method more because it always returns the same object, and the calling process can make a decision based on Error being true or false. Even if you don’t have complete control of the object(s) being returned (for instance, when wrapping a legacy class library with a service), you can just make the object a DataMember on the DataContract and away you go.

The problem with this method is that the HTTP response is a 200 OK even though an error occurred. So it’s not as smelly, but still not best practice. Recently I’ve begun to favor a more organic approach, something more RESTful in nature: modifying the HTTP status code to reflect that an error has occurred.

Example:
catch (Exception ex)
{
OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
response.StatusCode = System.Net.HttpStatusCode.InternalServerError;
response.StatusDescription = "An unexpected error occurred!";
}

You could evaluate the exception and return different codes for certain situations; I use the above for unexpected errors. You could also make response.StatusDescription the exception message. The calling process can now key off of the HTTP status code and handle the situation as required by its own system and rules.

You might ask, why modify the status code? If you simply throw the exception you achieve something very similar, that is, an HTTP status code other than 200 OK. The issue, for me at least, is that you get a 400 Bad Request, which is a little misleading to the caller and not exactly true if the error actually occurred somewhere in my data or logic layer.

So I like to add a little control. You can evaluate for invalid parameters or data and still return a 400 Bad Request when appropriate, but for unexpected errors I like returning a 500 Internal Server Error.

I should mention I’m using this for internal web services, so I do have more latitude in what I show to the calling processes and systems; however, I see no reason why using standard HTTP response codes would be a problem in a public API.

Additionally, this method adds more consistency to what a calling process will get when errors occur. For example, with the previous methods, a caller would get a different kind of response when your service is unreachable than when it is reached and a custom error is returned. In other words, they can always evaluate the response codes that are native to HTTP, giving you a more RESTful response.


Floating Sidebar With CSS

Floating sidebars are a great way to keep menus, share buttons, and other information in front of users as they scroll down the page. There are a number of jQuery plugins that work great for this, but in some cases jQuery might be overkill; for example, when you just need to keep static informational or help text in front of the user.

A simple way to create a floating sidebar is to use CSS. Take a look at the sidebar on the left of this jsfiddle test and scroll down the page. Notice it stays in view the entire time. To achieve this, all we have to do is create a div with the following class.

.sideBar
{
position: fixed;
z-index: 1000;
}

The magic is position: fixed; it’s what makes the sidebar stay where you want it. Depending on what else you have on the page, and perhaps whether you want it to float outside the container the div is in, I like to use z-index to make sure it overlaps the way I want it to.

Now that you have it floating, you need to position and style the div by adding another class like the one below (the div gets both classes, e.g. class="sideBar boxLeft"). In this case the div will float on the left and start below the header of the page.

.sideBar.boxLeft
{
width: 300px;
float:left;
padding: 0px 0px 0px 0px;
left:75px;
background-color:#eee;
top:30%;
}

There you have it, in just a few minutes you have a quick and easy solution for creating a floating sidebar.


Add CSS Class Recursively With jQuery

The Function:

function addCSSClassRecursively(topElement, CssClass) {
    $(topElement).addClass(CssClass);
    $(topElement).children().each(
            function() {
                 $(this).addClass(CssClass);
                 addCSSClassRecursively($(this), CssClass);
            });
}

Pretty simple. The first line adds the class to the element passed in. Next, using the jQuery .children() and .each() functions, it iterates through the child elements, adding the CSS class to each one, and then calls itself to add the class to each child’s children as well.

Calling it:

$(function() {
   addCSSClassRecursively($('#div1'), 'MasterCssClass');
});

In my implementation I call the function when the DOM is fully loaded, using the jQuery .ready() function. I just pass in the parent or top-level element I want to start adding the class at, and the function does the rest.
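As a side note, if all you need is the class applied and nothing else done along the way, jQuery 1.8+ can do the same job without the explicit recursion by selecting the element’s descendants and adding the element itself back into the set:

$(function() {
   $('#div1').find('*').addBack().addClass('MasterCssClass');
});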


Invoke base class methods on unknown types

I recently built a system that dynamically builds content for messages sent to users after certain events. As with most of these types of systems, the content is mostly static and stored in a repository with “holder values” that are replaced at run time from a data source. The architecture of this system is very configurable, meaning a new message can be created using configuration files with no additional coding necessary. One of the challenges this presented was how to handle formatting of data types that are not known until run time.

Use Case:

The content may look something like: “Your bill with a billing date of _date for _amount is overdue.”  The holder values are _date and _amount; the data source might store the amount as a number and the date including the time (100 & 12/31/2012 08:35:55). However, this data would need to be formatted as $100.00 and 12/31/2012 for a user-facing message.

The system needed to handle any possible formatting of the data without knowing at build time what it might be. To accomplish that, I store a pointer to a data source for each “holder value”, which includes a mapping to a base class method (and the required parameters) of the underlying data type. Then, when replacing holder values with the real values from the data source, I apply the method using reflection.

Here’s some sample code and a quick explanation:

public static string InvokeSomeMethod(string method, string parameters, string value)
{
    Object v = ConvertValue(value);
    var pList = ParseMethodParameters(parameters);
    var rt = v.GetType().GetMethod(method, GetParameterTypes(pList)).ReturnType.FullName;
    return InvokeMethod(v, rt, method, pList);
}

“Let me explain. No, there is too much. Let me sum up”:

We can’t just invoke a method all willy-nilly; first we need to know a few more things.

  • What type are we dealing with?
  • What is the return type of the method? (Beware of overloads)

A little more detail:

Type:

The values I’m dealing with are strings; however, I may need to execute a method on a value’s “true” type. In order to get the type I call ConvertValue (line 1 above), which passes the value through a series of tests to determine what type it is and converts it. I won’t bore you with the entire method, but this should give you an idea of what I mean.

public static object ConvertValue(string value)
{
    if (ParameterIsInt(value))
    {
        return Convert.ToInt32(value);
    }
    if (ParameterIsDouble(value))
    {
        return Convert.ToDouble(value);
    }
    if (ParameterIsDate(value))
    {
        return Convert.ToDateTime(value);
    }
    // ...checks for the remaining types omitted; fall back to the original string
    return value;
}

Each sub-function does something like this:

public static bool ParameterIsDate(string value)
{
    DateTime d;
    return DateTime.TryParse(value, out d);
}

Return Type and the Parameter List:

Remember above where I said to beware of overloads when getting the return type? The parameter list has to be parsed and passed when getting the method’s return type in order to get the correct method. When I parse the list I reuse the ConvertValue function to get the correct type of each parameter; this is key to getting the right method. And of course the parsed parameters are passed when invoking the method.

Conclusion:

So far this method (as part of a larger, highly flexible system) has worked very well and has allowed us to create a lot of new content with little to no new code. One downside: the person building the configurations has to know the base class methods and parameters (absent a user interface for the content management) to format the data. Another issue was the ConvertValue function. When I originally built the system I just added the half dozen or so common data types; however, if a new one comes up, some coding will have to be added. I do have a possible solution to that issue. In some other similar systems I’ve used Convert.ChangeType to change strings to their “real” type. That works well as long as you know what type you want; if not, you’re still forced to call the various TryParse methods to get the type. In a system like the one we’re using, we could leverage the configuration to pass a type to the Convert.ChangeType method and get rid of the TryParse/convert-to methods. I’ll probably add that the first time a new data type comes up; so far it hasn’t.


Extension Methods and Generics: Match a String to an Enum

To match a string value to an enum you have a couple of options.

  1. You could loop through the enum values until you find a match.
  2. Or a better option, use the Enum.Parse method to convert the string to the equivalent enumerated object.

However, when exchanging data with third parties you might find the need to convert between strings and enums often.

With a little help from generics I wrote a quick extension method to make this even easier.

Example:

public static T FromStringToEnum<T>(this String stringValue, T enumToGet)
{
    if (!typeof(T).IsEnum) return default(T);
    return (T)Enum.Parse(typeof(T), stringValue); 
}

public enum Number { one, two }

[TestMethod]
 public void FromStringtoEnum_EnumReturned()
{
    string f = "one";
    var e = f.FromStringToEnum( new Number());
    Assert.AreEqual(Number.one, e);
}

[TestMethod]
 public void FromStringtoEnum_NonEnumPassed_DefaultReturned()
{
    string f = "one";
    var e = f.FromStringToEnum("");
    Assert.AreEqual( null, e);
}

The first line validates that T is an enum; if it is not, the method returns the default value of the type. The second line is the implementation of the Enum.Parse method, using generics to get the type of the enum. (The enumToGet parameter isn’t used inside the method; it’s only there so the compiler can infer T from the call site, as in the tests above.)


				

Mask a String Using Linq Aggregate Method

There are a lot of different ways to mask sensitive data like passwords and account numbers when displaying to users.  Most of them seem to include some sort of loop and/or a regular expression and get the job done. Recently, I wrote a simple function using Linq that I thought was worth sharing.

Here it is:

public static string MaskString(this string stringToMask, string mask, int show)
{
    return stringToMask.ToCharArray().Aggregate("", (maskedString, nextValueToMask)
        => maskedString + (maskedString.Length < stringToMask.Length - show
            ? mask
            : nextValueToMask.ToString()));
}

Explanation:
The function takes in the string being masked, the masking character, and the number of characters to show. It assumes the masking starts on the left and it is written as an extension method.

It’s pretty simple really. The string is converted to a character array so the Linq Aggregate function can be used to string it back together, replacing each character along the way with the mask until the “show” point in the string is reached. For example, "123456789".MaskString("*", 4) returns "*****6789".

More Info:
The Aggregate function was brought to my attention in a blog post from By A Tool. I’ve used variations of it for combining lists of strings and removing duplicates from delimited strings for a system I’ve been working on. Just for fun, I included those below to demonstrate other applications of the Aggregate function.

More Aggregate Examples:

public static string CombineListToDelimitedString(this List<string> listOfStrings, string delimiter)
{
    return listOfStrings.Distinct()
        .Aggregate("", (inner, outer) => inner + (!inner.Contains(outer) ? outer + delimiter : ""))
        .TrimEnd(new char[] { Convert.ToChar(delimiter) });
}

public static string RemoveDuplicatesFromDelimitedString(this string delimitedString, string delimiter)
{
    return delimitedString.Split(new char[] { Convert.ToChar(delimiter) }).Distinct()
        .Aggregate("", (inner, outer) => inner + (!inner.Contains(outer) ? outer + delimiter : ""))
        .TrimEnd(new char[] { Convert.ToChar(delimiter) });
}

Conclusion:
Linq never seems to disappoint when looking for a cleaner and cooler looking way of doing things. It just goes to show, there are more than two ways to skin a cat. Or in this case, “mask” a cat.

Yep, that just happened…

