1. Compiling TypeScript With Grunt

    As Sam introduced recently, B-Line Medical is now using TypeScript, which is a superset of JavaScript. TypeScript lets us have our JavaScript and type it too, but it comes at the cost of having to compile the TypeScript into JavaScript before we can execute anything. This post will show how we set up Grunt—a JavaScript task runner—to do this compilation for us.

    Why Use Grunt?

    We’re currently developing using WebStorm 8, an IDE by JetBrains. WebStorm actually comes with a TypeScript file watcher, which watches your TypeScript files and automatically compiles them into JavaScript whenever you change anything. This is pretty awesome in theory, but had a few problems in practice:

    • The watcher would trigger a compilation for literally every change, so if you were in the middle of writing a long statement it would attempt to compile a half-finished line. This resulted in a lot of annoying error messages until you finished what you were working on.
    • The watcher would sometimes silently fail, so everything would seem to be broken because it just wasn’t getting compiled. It often took many iterations of adding and removing the watcher and restarting WebStorm to get the watcher to reactivate.

    Running the TypeScript compilation process through Grunt took care of both of these issues, and since we were already using Grunt to handle other things like running tests, this process worked out perfectly.

    Installing Grunt

    Note: the following assumes a basic familiarity with the command prompt and that you already have node.js installed and have a node.js project to play with. Run all commands from the root directory of a node.js project—all you need to have is a package.json file, but some uncompiled TypeScript files will be helpful, too! If you’d rather look at the code than try it yourself, feel free to clone this repository in GitHub: https://github.com/blinemedical/GruntTypeScriptWatchExample.

    The first step is to install the Grunt Command Line Interface (grunt-cli). You can do this with:

    npm install -g grunt-cli

    Make sure to include the -g flag so the tool will be available from any directory. Now run grunt from any directory, and you should see the following output:

    grunt-cli: The grunt command line interface. (v0.1.13)

    Fatal error: Unable to find local grunt.

    If you’re seeing this message, either a Gruntfile wasn’t found or grunt hasn’t been installed locally to your project. For more information about installing and configuring grunt, please see the Getting Started guide:

    http://gruntjs.com/getting-started

    You’ll notice it said “Unable to find local grunt”. To remedy this, run:

    npm install grunt --save-dev

    The --save-dev flag will update your package.json file so anyone else who checks out your project and does an npm install will get the Grunt module. Now when you run grunt you should see:

    A valid Gruntfile could not be found. Please see the getting started guide for more information on how to configure grunt: http://gruntjs.com/getting-started

    Fatal error: Unable to find Gruntfile.

    We’re still failing, but that’s fine; each error tells us what’s missing and how to fix it. The Gruntfile that Grunt can’t find is the file that contains the tasks Grunt will run for you. For now, create an empty file in the same directory as package.json named Gruntfile.js. Running grunt will now show you:

    Warning: Task “default” not found. Use --force to continue.

    Aborted due to warnings.

    Don’t worry about this warning for now—we will add the default task in the last section of this post.

    You will need a few more local node modules to use Grunt to compile TypeScript. Run the following commands:

    npm install grunt-typescript --save-dev

    and then

    npm install grunt-contrib-watch --save-dev

    We’ll use the grunt-typescript module to compile our TypeScript into JavaScript. The grunt-contrib-watch module will watch our TypeScript files for changes so we can get continuous compilation. Fun fact: modules that start with grunt-contrib come from the official Grunt development team.

    That’s all the setup we need! Now let’s make Grunt do something.

    Compiling TypeScript

    To use Grunt to compile TypeScript, we’re going to make use of the grunt-typescript module. Open up your Gruntfile and add the following lines:

    module.exports = function (grunt) {
       grunt.loadNpmTasks('grunt-typescript');
    };

    This will load the node.js module called grunt-typescript so Grunt can make use of it.

    Now we want to configure the TypeScript task.

    module.exports = function (grunt) {
       grunt.loadNpmTasks('grunt-typescript');
    
       grunt.initConfig({
          typescript: {
             options: {
                sourceMap: true
             },
            examples: {
                src: ['examples/**/*.ts']
            }
          }
       });
    };
    

    If you already have some TypeScript in your project, change the filepath in the src array to point to your TypeScript files. Otherwise, make a directory named examples at the same level as package.json and your Gruntfile. Put a TypeScript file that you want to compile in that directory.
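
    If you don’t have a TypeScript file handy, here’s a tiny one you could drop into the examples directory. The file name (greeter.ts) and its contents are just placeholders for illustration:

    // examples/greeter.ts -- a throwaway file so grunt-typescript has something to compile
    class Greeter {
        constructor(public name: string) { }

        greet(): string {
            return "Hello, " + this.name + "!";
        }
    }

    var greeter = new Greeter("Grunt");
    console.log(greeter.greet());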

    Now run:

    grunt typescript

    You should see the following output:

    Running “typescript:examples” (typescript) task
    2 files created. js: 1 file, map: 1 file, declaration: 0 files (706ms)

    Done, without errors.

    And now there should be a compiled JavaScript file alongside your TypeScript file!

    Tasks And Targets

    Let’s take a closer look at what we did here. First, grunt.initConfig sets up tasks and targets. A task, like typescript, is something you want Grunt to perform. You can get tasks from node.js modules written for Grunt, such as grunt-typescript, or write your own (unfortunately, custom tasks are beyond the scope of this post).

    Targets are subdivisions of tasks, and are frequently used to run the same task with different parameters. For example, we could change our configuration to add another target:

       grunt.initConfig({
          typescript: {
             options: {
                sourceMap: true
             },
            examples: {
                src: ['examples/**/*.ts']
            },
            moar_examples: {
                src: ['moar_examples/**/*.ts']
            }
          }
       });
    

    Now if we run grunt typescript we see this output:

    Running “typescript:examples” (typescript) task
    2 files created. js: 1 file, map: 1 file, declaration: 0 files (753ms)

    Running “typescript:moar_examples” (typescript) task
    0 files created. js: 0 files, map: 0 files, declaration: 0 files (4ms)

    Done, without errors.

    Grunt ran the typescript task, and since we didn’t specify which target we wanted, it ran both of them. In order to only run one target, you specify it with a colon like so:

    grunt typescript:examples

    TypeScript Options

    You may notice that our options target looks different from the others. The options target lets us tell grunt-typescript how we want it to compile our files. In this example, we only specify one option: sourceMap. This option tells the compiler to also generate *.js.map files, so our IDE and browser can map the generated JavaScript to the original TypeScript (this is really helpful for debugging).

    Options can be specified as their own target, like we did here, in which case they apply to all of the targets. You can also specify options on a per-target basis, such as:

    examples: {
       options: {
          sourceMap: true
       },
       src: ['examples/**/*.ts']
    }
    

    You can see all the available options at https://github.com/k-maru/grunt-typescript, which is the GitHub repository for the grunt-typescript module.

    Note that grunt-typescript automatically places the compiled files in the same directory as the originals. If you want to change where the compiled files go, or concatenate all of the compiled output into a single file, you can specify a destination for the compilation in a target with the dest property, for example:

    examples: {
      src: ['examples/**/*.ts'],
      dest: 'examples/compiled.js'
    }
    

    Continuous Compilation

    So far we’ve made it so Grunt will compile our files whenever we run the grunt typescript command. This can get inconvenient when you’re actively developing and testing frequently. We will now set up a watcher so Grunt will compile our TypeScript whenever it detects a change in a TypeScript file.

    Add the following to the top of your Gruntfile:

    grunt.loadNpmTasks('grunt-contrib-watch');

    And add another task to your initial configuration:

    grunt.initConfig({
       typescript: {
         ...
       },
       watch: {
          files: ['./**/*.ts'],
          tasks: ['typescript']
       }
    });
    

    Now you can run grunt watch and you should see the following:

    Running “watch” task
    Waiting…

    If you change one of your TypeScript files, you should see something like:

    >> File “moar_examples\another_example_file.ts” changed.
    Running “typescript:examples” (typescript) task
    File C:\Users\lily.seropian\Documents\GruntTypeScriptWatchExample\examples\compiled.js created.
    js: 1 file, map: 1 file, declaration: 0 files (746ms)

    Running “typescript:moar_examples” (typescript) task
    2 files created. js: 1 file, map: 1 file, declaration: 0 files (683ms)

    Done, without errors.
    Completed in 2.118s at Fri Jul 18 2014 11:23:11 GMT-0400 (Eastern Daylight Time) - Waiting…

    Note: unless you’re working with an editor that saves your work automatically, like WebStorm, you will have to save your TypeScript file before Grunt will notice it changed.

    In the example above, the files property specifies which files Grunt should watch for changes. The tasks property tells Grunt what to do when a file it is watching changes. So, when Grunt sees a change on any file ending in .ts, it runs the typescript task and compiles the TypeScript into JavaScript.

    To end the watch task, hit Ctrl-C or exit the command prompt.

    The Default Task

    The default task is great if you want Grunt to always run multiple things at once, or if you’re just lazy (editor’s note: “efficient” would also work here!). The default task is what runs when you run the grunt command without any arguments. To set up the default task, add the following to the end of your Gruntfile:

    grunt.registerTask('default', 'compiles typescript', [
       'typescript',
       'watch'
    ]);
    

    This is just a preview of how to make custom tasks. The registerTask method registers a new task with Grunt. The first parameter is the name of the task; default is a special value indicating that the task should run when Grunt is invoked without any arguments. The next parameter is a simple description of what the task does. The last parameter is a list of Grunt tasks that Grunt should run, in that order.
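
    For reference, here is roughly what the complete Gruntfile looks like with all of the pieces from this post assembled (a sketch; your target names and file paths may differ):

    module.exports = function (grunt) {
       grunt.loadNpmTasks('grunt-typescript');
       grunt.loadNpmTasks('grunt-contrib-watch');

       grunt.initConfig({
          typescript: {
             options: {
                sourceMap: true
             },
             examples: {
                src: ['examples/**/*.ts']
             }
          },
          watch: {
             files: ['./**/*.ts'],
             tasks: ['typescript']
          }
       });

       grunt.registerTask('default', 'compiles typescript', [
          'typescript',
          'watch'
       ]);
    };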

    In our example, Grunt will first compile all the existing TypeScript into JavaScript, then start watching for changes so it can recompile when necessary. Now when you run grunt, you should see:

    Running “typescript:examples” (typescript) task
    File C:\Users\lily.seropian\Documents\GruntTypeScriptWatchExample\examples\compiled.js created.
    js: 1 file, map: 1 file, declaration: 0 files (757ms)

    Running “typescript:moar_examples” (typescript) task
    2 files created. js: 1 file, map: 1 file, declaration: 0 files (703ms)

    Running “watch” task
    Waiting…

    That’s everything! Now we’ve set up Grunt to do one-time compilation, continuous compilation, and both. If you’re curious about how to make custom tasks, poke around http://gruntjs.com/creating-tasks. Again, if you want to see this in action, look at or clone https://github.com/blinemedical/GruntTypeScriptWatchExample.

  2. Sharing Data Between Child and Parent Directives and Scopes (in AngularJS)

    In AngularJS, you may run into the situation where you would like to access data on a different scope. Because AngularJS’s inheritance structure and scope relationships can be rather confusing, approaching the problem is not always intuitive.

    Directives introduce a slightly more complex type of scope. An isolate scope is a scope that does not prototypically inherit from its parent, and is created by declaring scope: {...} on a directive (see: “Understanding Scopes” from the AngularJS team)

    Scopes on Directives

    Here’s an example directive and the various options for its scope:

    angular.module('example', [])
       .directive('myDirective', function () {
          return {
             // Option 1
             scope: false,
    
             // Option 2
             scope: true,
    
             // Option 3
             scope: {
                'attr1': '=',
                'attr2': '@',
                'attr3': '&'
             },
          }
    });
    

    Option 1 (default): scope: false
    By default, a directive is created without instantiating a new scope. This means its scope is effectively the scope of its parent. Since it takes all parent scope properties as its own, this can cause issues with the component’s reusability.

    Option 2: scope: true
    This creates a new child scope for the directive. This new scope has access to parent scope properties so long as the parent is not an isolate scope.

    Option 3: Isolate scope
    When an isolate scope is created, it only has access to a specified set of variables passed in as attributes.

    <my-dir my-attr="myData">
    ...
    </my-dir>
    

    In the directive definition:

    angular.module('example')
       .directive('myDirective', function () {
          return {
             scope: {
                'myAttr': '=',
                ...
             },
             controller: ['$scope', function ($scope) {
                console.log($scope.myAttr);
                ...
             }],
             link: function (scope, elem, attrs, ctrl) {
                console.log(scope.myAttr);
                ...
             }
          }
    });
    

    Tip: appending ‘?’ to a binding (for example ‘=?’) marks it as optional.

    How to Share Data?

    With the first two options, data from a parent scope is readily accessible without having to do anything special, provided that parent scope is NOT an isolate scope.

    But what if, as is often the case with directives, the parent does have an isolate scope?

    Getting Parent Data When Parent Has Isolate Scope

    Method 1: Passing in attributes in the template

    angular.module('example')
       .directive('parentDir', function () {
          return {
             controller: ['$scope', function ($scope) {
                $scope.parentStuff = 'Hello world';
             }],
             template: '<child-dir attr="{{parentStuff}}"></child-dir>'
          };
       })
       .directive('childDir', function () {
          return {
             require: '^parentDir',
             scope: {
                'attr': '@'
             }
          };
       });
    

    The key line here is:

    template: '<child-dir attr="{{parentStuff}}"></child-dir>'

    In this manner, you put the child directive inside the parent’s template and pass in the data you want as attribute(s) on the child.

    Method 2: Using the controller

    Alternatively, you can use the controller as an intermediary.

    To access a parent directive’s data, it is easiest to get scope data through the parent’s controller. First, define the parent-child relationship by adding the following to the child directive’s definition:

    angular.module('example')
       .directive('childDir', function () {
          return {
             require: '^parentDir',
             ...
          }
    });
    

    Here, the important line is:

    require: '^parentDir',

    This causes the child’s linking function to receive parentDir’s controller as its 4th parameter, as shown below.

    angular.module('example')
       .directive('childDir', function () {
          return {
             require: '^parentDir',
             ...
             link: function (scope, elem, attrs, parentDirCtrl) {
                // console.log(parentDirCtrl.parentData);
             }
          }
       });
    

    Tip: If you specify more than one required directive, the parameter will be an array of respective controllers.

    Now that you have access to the parent directive’s controller, you can simply create a getter in the parent controller like so:

    angular.module('example')
       .directive('parentDir', function () {
          return {
             controller: ['$scope', function($scope) {
                $scope.data = 'Hello world';
    
                this.getData = function() {
                   return $scope.data;
                }
             }]
          }
       });
    

    Then simply call parentDirCtrl.getData() in the child directive’s linking function.
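
    Putting it together, the child directive’s linking function might look something like this (a sketch; the console.log is just for illustration):

    angular.module('example')
       .directive('childDir', function () {
          return {
             require: '^parentDir',
             link: function (scope, elem, attrs, parentDirCtrl) {
                // ask the parent directive's controller for its data
                console.log(parentDirCtrl.getData()); // logs 'Hello world'
             }
          };
       });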

    Now what if you want the access in reverse?

    Accessing Child Isolate Scope From Parent Directive

    If for whatever reason you need a parent directive to access data on a child directive’s isolate scope, you can call the isolateScope() function on the child directive’s HTML element. To get the child HTML element, you can call .find() on the parent’s element. (Note that the jqLite bundled with AngularJS has limited selector capabilities.) Then, in the parent’s link function, call childElem.isolateScope().

    angular.module('example')
       .directive('parentDirective', function () {
          return {
             restrict: 'E',
             scope: {
                'parentAttr': '='
             },
             link: function (scope, elem, attrs, ctrl) {
                var childElem = elem.find('child-directive');
                var childScope = childElem.isolateScope();
                console.log(childScope.myProperty);
             }
          };
       })
       .directive('childDirective', function () {
          return {
             restrict: 'E',
             scope: {
                'childAttr': '='
             },
             controller: ['$scope', function ($scope) {
                $scope.myProperty = 'I am a property';
             }]
          };
       });
    

    The HTML would look something like this:

    <parent-directive parent-attr="someModel">
       <child-directive child-attr="someOtherModel">
       ...
       </child-directive>
       ...
    </parent-directive>
    

    Other (Hacky) Ways
    If the methods above do not work for you, it is possible to gain direct access to various scopes and controllers with certain properties. The following lists a few:

    • $parent – access the parent scope
    • $$childHead – non-null when the element does not have its own scope. Accesses top-most scope in parent + sibling chain.
    • $$childTail – non-null when the element does not have its own scope. Accesses end-most scope in parent + sibling chain.
    • $$prevSibling – access previous sibling scope.*
    • $$nextSibling – access next sibling scope.*

    * Note: Parent scopes are also sibling scopes!

    These properties can also be called from the template.
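
    As a rough illustration, here is what reaching up to the parent scope directly looks like from a link function (the directive name and the parentValue property are made up for this example):

    angular.module('example')
       .directive('hackyChild', function () {
          return {
             link: function (scope, elem, attrs) {
                // reach straight up to the parent scope; this works, but couples the
                // directive to whatever happens to live on the parent scope
                console.log(scope.$parent.parentValue);
             }
          };
       });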

  3. TypeScript: A less-quick QuickStart guide

    We’ve switched all of our new JavaScript development to TypeScript. TypeScript is a great superset of JavaScript. All JavaScript is valid TypeScript. TypeScript adds optional static types to variables as well as function parameters and return values, and provides a really interesting implementation of interfaces and classes. These implementations are quite different from other OO languages like C# and Java, since they’re specifically designed to provide OO while still being perfectly interoperable with existing JavaScript programs and standard constructs.

    This post is intended to provide a slightly deeper quick-start guide for using TypeScript, one that covers more than the official QuickStart but is still far quicker than reading the full specification (pdf link).

    Contents

    Types

    Interfaces

    Classes

    Modules

    Definition Files (Libraries)

    Cool Features

    Conclusion

    Types

    The main feature of TypeScript is that it provides static typing. While there is certainly a community that prefers purely dynamic languages, my team and I believe that static typing provides for more maintainable code, easier discovery, better interconnections between team members, and is necessary for very large applications.

    TypeScript provides static typing at compile time with excellent tooling support and compile-time errors through type analysis. All of the type information is then removed during compilation from TypeScript to JavaScript, so there is no run-time support for type checking and no run-time overhead.

    TypeScript effectively recognizes the following built-in primitive types:

    • number
    • boolean
    • string
    • any

    The specification includes others for compatibility – such as null and undefined – but they are not used to type user provided variables, arguments, or functions. They are only used to type specific built-in instances and as a subtype to allow more specifically typed variables to also be null or undefined.
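
    A small sketch of what that subtyping means in practice (this reflects the compiler behavior described here; later compiler versions with strict null checking disallow it):

    var s: string = null;      // allowed: null is a subtype of string
    var n: number = undefined; // allowed: undefined is a subtype of number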

    Through an included library definition, JavaScript classes and DOM objects are also already typed, such as:

    • Date
    • Array
    • RegExp
    • Error
    • UIEvent
    • MouseEvent
    • KeyboardEvent
    • HTMLElement (plus tons of subclasses)

    The full list of DOM- and browser-related types is available as part of the TypeScript source code (hosted on CodePlex).

    Finally, programs can define their own types through:

    • Interfaces
    • Classes
    • Anonymous types

    Types are specified using colon post-fix notation.

    var s:string;
    
    function sayHello(name:string):void {
      alert("Hello " + name);
    }
    
    var p:Person; // assumes interface or class named Person is defined
    
    function clickHandler(ev:MouseEvent):any {
      alert("Clicked at " + ev.screenX + ", " + ev.screenY);
    }
    
    var p1:{first: string; last: string};
    var p2: typeof p1;
    

    Interfaces

    TypeScript’s interface implementation is unique among OO languages and is an excellent adaptation of this OO principle to the loose nature of JavaScript. In TypeScript, a variable or argument can be typed as an interface, and the typing is valid when the analyzed contents of the variable match the interface, regardless of the underlying class declaration of the variable’s type (explained below). Additionally, interfaces provide full support for all aspects of JavaScript values, objects, and functions, including objects that are both themselves an object and also serve as a function.

    For example, we can declare an interface and then create a variable matching that interface without using any class declaration at all.

    interface Person {
      firstName: string;
      lastName: string;
    }

    function sayHello(person:Person):void {
      alert("Hello " + person.firstName + " " + person.lastName);
    }

    var p:Person = {
      firstName: "Samuel",
      lastName: "Neff"
    };

    sayHello(p);
    

    This declaration is perfectly valid. We declare an interface with two member variables, firstName and lastName. Then we declare a function that accepts a single parameter that matches the interface Person and use the declared member variables in our function body. Finally, we instantiate an anonymous object with member variables matching those of the Person interface and pass that to our sayHello function. We never created a class that implements the interface and in instantiating the object that represents a person, we didn’t specify the interface (we did upon assignment, but not upon instantiation).

    This loose application of interfaces to any instance that can be analyzed at compile time to match the interface contents is a really powerful implementation of interfaces and polymorphism. I’ve often wished this could be done in C#: treating an instance as an interface type when the instance has the appropriate members, even when the instance’s declared type does not specifically implement the interface. C# doesn’t support this, neither does Java, but TypeScript does.

    Interfaces in TypeScript can do far more than declare member variables. They can declare member functions, they can declare that the instance of the typed interface is itself also a callable function, they can declare the instance is a new-able function, and even provide for advanced method overloading.

    Here’s a subset of the JQueryStatic interface declaration from the jQuery type library included with TypeScript.

    interface JQueryStatic {
        (selector: string, context?: any): JQuery;
        (func: Function): JQuery;
        (): JQuery;
    
        hasData(element: Element): boolean;
    
        browser: JQueryBrowserInfo;
    }
    
    declare var $:JQueryStatic;
    
    $( function() { ... } );
    
    $("div.header")...
    
    if ($.hasData(element)) { ... }
    
    $.browser...
    

    In this example, we’ve declared an interface JQueryStatic and an existing variable that matches this interface. We can call the variable as a function with several overloads, and we can also access the variable’s member function hasData and member variable browser.
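
    The JQueryStatic example covers the callable case; for completeness, here is a small sketch of an interface that declares a new-able (constructable) type. All of the names here are invented for illustration, and the actual constructor would have to exist at runtime:

    interface Widget {
        name: string;
        render(): void;
    }

    // the type of the constructor function itself
    interface WidgetConstructor {
        new (name: string): Widget;
    }

    // tell the compiler that a global constructor with this shape exists at runtime
    declare var Widget: WidgetConstructor;

    var w: Widget = new Widget("toolbar");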

    Interfaces are primarily used for applying types to existing untyped (JavaScript) code and libraries, and also with custom code that will be used in polymorphic implementations. Both of these use cases are discussed further below under Definition Files (Libraries) and Classes.

    Classes

    Classes provide encapsulation of functionality and data together just like in any other OO language. They support public and private variables, constructors, inheritance, and interface implementation. At runtime, classes are implemented using traditional JavaScript prototype chains and thus add no runtime overhead or code bloat.

    Let’s start with an example:

    class Person {
      constructor(public firstName : string, public lastName : string) { }
      public age : number;
    
      public displayName() { return this.firstName + " " + this.lastName; }
      public sortName() { return this.lastName + ", " + this.firstName; }
    }
    
    var person : Person = new Person("Samuel", "Neff");
    alert(person.displayName());
    

    This code segment declares a class named Person. It has a constructor that accepts two parameters, firstName and lastName, which are both strings. The public keyword on the constructor arguments is a shortcut for declaring that the constructor takes those arguments and then assigns them to identically named public member variables. This allows us to access those member variables later, both inside and outside the class.

    Next we declare another public variable called age, which is a number.

    Finally, we declare two instance methods that return two different forms of the person’s name.

    With the class fully declared, we create a typed instance and access one of its member functions.

    TypeScript supports inheritance so we can create subclasses of our Person class, like this:

    class Employee extends Person {
      constructor(firstName: string, lastName: string) {
        super(firstName, lastName);
      };
      public manager : Employee;
    }
    
    var employee : Employee = new Employee("Samuel", "Neff");
    employee.manager = new Employee("David", "Ramsay");
    
    alert(employee.displayName());
    

    Now we have an Employee class that is a subclass of Person. It has the member functions of Person and adds a new public member variable called manager that is also an Employee. You can see in the small code sample that we assign the manager and also access the displayName() member function through the Employee class instance.

    Classes can also explicitly implement interfaces. This enforces at compile time that the class has the same members with the same type specifications as the interface.

    interface IPerson {
      firstName : string;
      lastName : string;
      }
    
    class Person implements IPerson {
      ... // same class body as above
      }
    

    Modules

    Modules provide a mechanism to encapsulate related functionality and provide for both private and public (in module terms, internal and exported) members. A module in TypeScript is more than a namespace in .NET or a package in Java. It can contain interface and class declarations and also provide internal and exported member variables and functions of its own. Additionally, when a module provides more than type declarations via interfaces, the module itself becomes both a named container and an instance in its own right. This style of multi-class and multi-interface encapsulation, while unusual among OO languages, is consistent with the JavaScript encapsulation methodologies commonly in use in popular libraries.

    module Company {
      export class Person {
        ... // same body as above
      }
      export class Employee extends Person {
        ... // same body as above
      }
      export var employees : Employee[];
    }
    
    var sam = new Company.Employee("Samuel", "Neff");
    var dave = new Company.Employee("David", "Ramsay");
    Company.employees = [sam, dave];
    

    In this sample, we create a module called Company that exports two classes, Person and Employee, as well as a module variable called employees that is an array of Employee. Then we create two employee instances and assign the employees array.

    Definition Files (Libraries)

    TypeScript was designed from the outset to be fully interoperable with existing JavaScript libraries. While TypeScript provides value through inferred typing, the significant advantage is in declared static typing. Of course, libraries written in pure JavaScript will not have static type information. TypeScript compensates for this by including a mechanism to provide static types external to the libraries. It also includes a static type definition file for one of the most common libraries in use: jQuery.

    Definition files are conventionally included with the extension .d.ts. They typically include interface declarations and declarations of existing variables. Interface definitions are the same as shown earlier. Variable declarations are prefixed with the declare keyword. The result is static typing for TypeScript but zero emitted JavaScript code; these declarations are there only to tell the TypeScript compiler that variables of that name and type will exist at runtime.
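
    For instance, a minimal definition file for a hypothetical global library might look like this (the file name mylib.d.ts and all of the identifiers are invented for illustration):

    // mylib.d.ts -- describes a library that is loaded separately at runtime
    interface MyLib {
        version: string;
        log(message: string): void;
    }

    // tells the compiler that a global variable named myLib with this shape will exist
    declare var myLib: MyLib;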

    One interesting note from reviewing both the jQuery and core declaration files included with TypeScript is that it’s common practice to have both an interface and a variable declaration for identically named items. For example, Date is a type for variables, but it’s also a variable that has methods of its own. Abbreviated declarations follow.

    interface Date {
        toDateString(): string;
        toTimeString(): string;
        valueOf(): number;
        getTime(): number;
        getFullYear(): number;
        getMonth(): number;
        getDate(): number;
        getDay(): number;
        getHours(): number;
        ...
    }
    
    declare var Date: {
        new (): Date;
        new (value: number): Date;
        new (value: string): Date;
        new (year: number, month: number, date?: number, hours?: number, minutes?: number, seconds?: number, ms?: number): Date;
        (): string;
        parse(s: string): number;
        now(): number;
    }
    

    The full declaration is available on CodePlex.

    Existing definition files for many popular libraries (and many I’ve never heard of) are available on GitHub.

    Cool Features

    The above introduction demonstrates all of the static typing and object oriented features of TypeScript. In addition, there are some other cool additions and unique features that I’d like to point out.

    Arrow functions

    One very common construct in JavaScript is to pass anonymous functions to other functions as callbacks. You see this everywhere: in AJAX, in event handlers, timers, and all over jQuery. The common syntax for these can get verbose relative to the amount of code you’re actually executing in the body. Arrow functions allow you to reduce this significantly, just like lambda functions in other languages.

    This code:

    $("div.item").each(function(e, el) { $(el).addClass("foo"); });
    

    Can be reduced to:

    $("div.item").each((e, el) => $(el).addClass("foo"));
    

    One interesting additional differentiator between arrow functions and anonymous functions is the this reference. Within classes, anonymous functions do not capture the enclosing instance; inside the callback, this refers to whatever the caller binds it to rather than to your object, necessitating alternative access to the context, such as:

    var Person = function(firstName) {
      this.firstName = firstName;
    
      this.sayHelloDelayed = function() {
        var _this = this;
        setTimeout(function() { alert("Hi " + _this.firstName); }, 100);
      }
    }
    

    Notice the use of the _this variable closure. With TypeScript it can be simplified:

    class Person {
      constructor(public firstName : string) { }
      public sayHelloDelayed() {
        setTimeout(() => alert("Hi " + this.firstName), 100);
      }
    }
    

    The closure is no longer required.
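
    Behind the scenes the compiler re-introduces it for you; the JavaScript emitted for the class above looks roughly like this (a sketch of typical compiler output, not a verbatim copy):

    var Person = (function () {
        function Person(firstName) {
            this.firstName = firstName;
        }
        Person.prototype.sayHelloDelayed = function () {
            var _this = this; // the compiler adds the closure on your behalf
            setTimeout(function () { return alert("Hi " + _this.firstName); }, 100);
        };
        return Person;
    })();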

    Type Parameters

    TypeScript supports generics similar to how Java does–they exist for static analysis at compile time and are compiled away with no impact (or benefit) at run-time. TypeScript calls this feature Type Parameters and they’re supported in class, interface, function, and variable type declarations. Type Parameters support constraints and can be nested, both within other Type Parameters and recursively. Additionally, Type Parameters are bivariant, supporting both covariance and contravariance implicitly. Calls to generic functions and type constructors also support type inference, so in usage, you often do not have to specify the types in these situations.

    Examples:

    var strings : Array<string>;
    var strings : string[];       // shortcut for Array<string>
    
    class Employee<T extends Employee<T>> {
      public manager : T;        // manager is an employee or a derived class of Employee
    }

    class Manager extends Employee<Manager> {
    }

    var employee : Employee<Manager>;
    var manager : Manager;

    function filter<T>(array : T[], fn: (item : T) => boolean) : T[] {
      ...
    }
    
    var data : number[] = [1, 2, 3];
    var filtered = filter(data, x => (x <= 2)); // no need to specify filter<number>
    

    Enums

    Enumerated values can be declared that provide typed mappings of pre-defined identifiers to numbers. Enums can greatly improve discoverability and maintainability of code when one of many options is needed for a variable, argument, or return type.

    enum Direction { Up, Right, Down, Left}
    

    Enums are types and can be used wherever a type is expected.

    function move( d : Direction, amount : number) {
      switch(d) {
     
        case Direction.Up:
          this.y -= amount;
          break;
     
        case Direction.Right:
          this.x += amount;
          break;
     
        case Direction.Down:
          this.y += amount;
          break;
     
        case Direction.Left:
          this.x -= amount;
          break;
      }
      alert("Moved " + Direction[d] + " " + amount + " pixels.");
    }
    
    

    At runtime, the value is a number and the string representations can be referenced by indexing the generated Direction variable.
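
    For example, a quick sketch of what that looks like with the Direction enum above:

    var d : Direction = Direction.Right;

    alert("Value: " + d);            // "Value: 1" -- the underlying number
    alert("Name: " + Direction[d]);  // "Name: Right" -- reverse lookup on the generated object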

    Overloaded Functions

    TypeScript provides for excellent function overloading support with unique features specific to how methods are typically overloaded in JavaScript.

    Methods can be overloaded by parameter type and count as in most object oriented languages. Unique to TypeScript, they can also be overloaded by specific literal string values. Examples:

    interface Util {
      add(p1 : number, p2: number) : number;
      add(s1 : string, s2 : string) : string;
      add(d1 : Date, d2 : Date) : Date;
    }
    
    interface Element {
        getElementsByTagName(name: "a"): NodeListOf<HTMLAnchorElement>;
        getElementsByTagName(name: "abbr"): NodeListOf<HTMLElement>;
        getElementsByTagName(name: "address"): NodeListOf<HTMLElement>;
    }
    

    The first example shows an interface with an overloaded add function that can accept numbers, strings, or dates. In each case, both parameters are enforced to be of the same type.

    The second example is a small subset of the DOM library declarations. Here getElementsByTagName is overloaded to specify exactly what type of node list is returned based on the name provided. The type is not just string, but a specific string literal. This paradigm is used throughout core JavaScript, jQuery, and many other libraries.

    While TypeScript has excellent overloading support, you must also recognize that this is compile-time support for static type checking only and is completely removed at run-time. Therefore, even overloaded methods only ever have one actual method implementation, which must accept parameters that are a superset of the overloaded types.

    For example, if we implement add in a class we specify the overloads followed by a single body.

    class Util {

      add(p1 : number, p2: number) : number;
      add(s1 : string, s2 : string) : string;
      add(d1 : Date, d2 : Date) : Date;
      add(o1 : any, o2 : any) : any {

        if (o1 instanceof Date && o2 instanceof Date)
        {
          return new Date(o1.getTime() + o2.getTime());
        }
        return o1 + o2;
      }
    }
    

    Note here there are four declarations and only one body. The single body will be called for all invocations and must have code in it that works at run-time to determine what is appropriate based on the actual values passed.
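
    In use, the compiler picks the matching overload for each call site. A quick sketch using the Util class above:

    var util = new Util();

    var n = util.add(1, 2);                         // statically typed as number
    var s = util.add("Type", "Script");             // statically typed as string
    var d = util.add(new Date(0), new Date(1000));  // statically typed as Date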

    Conclusion

    TypeScript is a great enhancement to JavaScript that includes compatibility with existing JavaScript code, libraries, and programming paradigms. The static typing vastly improves tooling, code discovery, maintainability, and team collaboration. I strongly recommend it for all future JavaScript development.

  4. Machine Learning with F# and C# side-by-side

    We were very pleased to host a D.C. F# meetup last Friday. Mathias Brandewinder gave a short introduction to F# and machine learning, and quickly broke us down into small groups where we ran through his tutorial.

    As an interesting way to share the content for C# developers, I’m going to go through the tutorial conducted the same way in both C# and F#, side-by-side. Note that prior to today’s meetup, I’d never programmed F#. I’ve heard Anton talk about it plenty and seen some of his code, but I’d never actually programmed it myself. So when I ran through the tutorial and got to the “your code here” part, I was intimidated at first. My code? My F# code? I don’t have F# code. The tutorial was great though, guiding you through what to do line-by-line with examples of similar things done in F#; this made it much simpler to figure out how to do what’s needed for the tutorial in F# despite having no prior F# knowledge.

    The tutorial is based on using machine learning to match an image against known images of numbers. The first thing to do is load our training data. In C#, we would do this:

    var lines = File.ReadAllLines("C:/Projects/Experiments/FSharpMachineLearning/Library1/trainingsample.csv");

    and in F#

    let lines = File.ReadAllLines "C:/Projects/Experiments/FSharpMachineLearning/Library1/trainingsample.csv"

    These lines are almost identical and do the same thing. We end up with a variable lines which contains a string array of the sample data. Next we want to parse the strings further into individual elements within each line. In C#:

    var stringArrays = lines.Select(s => s.Split(','));

    and in F#:

    let stringArrays = Array.map (fun (s:string) -> s.Split(',')) lines

    These lines have similar results but are not exactly the same. In C# we’ve used LINQ to get a representation of a collection of string arrays, represented as IEnumerable<string[]>. In F# we’ve used the F#-helper method Array.map which returns a new array after applying the conversion function to each element of the array, resulting in string[][].

    The CSV file has headers at the top which we don’t care about and want to skip. In C#:

    var dataStringArrays = stringArrays.Skip(1);

    and in F#:

    let dataStringArrays = stringArrays.[ 1 .. ]

    Here we’ve again used LINQ in C# to wrap our previous enumerable of string arrays with a new enumerable that will skip the first element. In F#, we’ve used F#’s special array access to create a new array which is all elements except the first. Now that we’ve removed the headers, we convert the string arrays to integer arrays.

    var intArrays = dataStringArrays.Select(sa => sa.Select(s => Convert.ToInt32(s)));

    and in F#:

    let intArrays = dataStringArrays |> Array.map (fun (line: string[]) -> line |> Array.map (Convert.ToInt32))

    On the C# side we select each string array, then each string element, and convert those elements to integers, resulting in IEnumerable<IEnumerable<int>>. In F#, we’ve done something new. Instead of using the more familiar function arg syntax, we’ve used the |> construct to reverse the order, which makes it easier to string functions together and read them in the order they’re processed. What this line means is we take dataStringArrays and pass that to our Array.map, which in turn takes each line and passes it to another Array.map that converts each element to an integer. In F# we’ve ended up with int[][].

    Finally to make it easier to work with the data, we want to separate the first number from the rest. The first number is the actual number represented by the image. The rest of the numbers are pixel values for the image. In C# we’ll create a struct to hold the data:

    struct Record
    {
        public int Label;
        public int[] Pixels;
    }
    

    and in F# we’ll create a type:

    type Record  = { Label:int; Pixels:int[] }

    and in C# we convert to Record instances like this:

    var knownRecords = intArrays.Select(intArray =>
        new Record()
        {
            Label = intArray.First(),
            Pixels = intArray.Skip(1).ToArray()
        }).ToList();

    and in F#:

    let knownRecords = Array.map (fun (i: int[]) -> { Label = i.[0]; Pixels = i.[ 1 .. ] }) intArrays

    Two things to note here. In C#, we’ve specifically called .ToList() to make knownRecords a list and not IEnumerable<>. The reason for this is that we’ll loop through that list many times and don’t want to constantly re-evaluate the underlying enumerables. Initially I didn’t have this call in the C# version, and without it the C# version was dramatically slower than the F# version. After adding .ToList() the C# version sped up significantly.

    In the F# version, note that we don’t specify the type we’re instantiating. We only specify the field names. F# infers the type based on the field names.

    At this point we’re done parsing our known data. The next thing is a few calculations. The first is to calculate the difference between corresponding points. We’ll pass in two arrays and, for each element of the first array, subtract the corresponding element of the second array and square the result. Here I’m specifically using functional-style programming in C# to make the programs comparable line-by-line. Normally I would separate this into its own method instead of creating a lambda inline.

    var distanceBetweenArrays = new Func<int[], int[], IEnumerable<int>>((a1, a2) => a1.Zip(a2, (i1, i2) => (i1 - i2)*(i1 - i2)));

    and in F#:

    let distanceBetweenArrays (a1 : int[]) (a2 : int[]) = Array.map2(fun p1 p2 -> (p1 - p2) * (p1 - p2)) a1 a2

    In both cases we start with two int arrays and end up with a single int array representing the distance between corresponding elements of the two arrays. In C# we’ve used Enumerable.Zip and in F# we used Array.map2.

    Next we need to take these arrays of distances and calculate the total distance of all pixels. In C#:

    var aggDistanceBetweenArrays = new Func<int[], int[], int>((a1, a2) => distanceBetweenArrays(a1, a2).Sum());

    and in F#:

    let aggDistanceBetweenArrays a1 a2 = distanceBetweenArrays a1 a2 |> Array.sum

    In C# we used LINQ to sum up the elements and result in a single integer, and in F# we used Array.sum to do the same thing. Now we’re ready to create our “classifier” which will take an unknown record and then compare it against the known records to find out which known record is most similar to the unknown record. In C#:

    var classify = new Func<Record, Record>
        (unknown => knownRecords.OrderBy(
            known => aggDistanceBetweenArrays(unknown.Pixels, known.Pixels)).First());

    and in F#:

    let classify (unknown:Record) =
        knownRecords |> Array.minBy (fun (r : Record) -> aggDistanceBetweenArrays unknown.Pixels r.Pixels)

    In C# we use LINQ to order the known records array by the distance between the unknown record and each known record, and then we take the first one, which will have the smallest distance and be the closest match to the unknown. In F# we use Array.minBy to find the minimum record from the array based on the aggregate distance calculation.

    Testing the classifier

    Now we’re ready to test our classifier. To do this we’ll parse another CSV file with the same format. We’ll use our classifier to identify what number is represented by each image and compare against the test data’s known result. Since we’ve already written code to parse the CSV, let’s go back and refactor both versions into a reusable method. In C#:

    private static IEnumerable<Record> ParseRecords(string path)
    {
        return File.ReadAllLines(path)
                    .Select(s => s.Split(','))
                    .Skip(1)
                    .Select(sa => sa.Select(s => Convert.ToInt32(s)))
                    .Select(intArray => new Record()
                        {
                            Label = intArray.First(),
                            Pixels = intArray.Skip(1).ToArray()
                        }).ToList();
    }

    and in F#:

    let parseRecords path =
        path
        |> File.ReadAllLines
        |> Array.map (fun (s:string) -> s.Split(','))
        |> (fun (a:string[][]) -> a.[ 1 .. ])
        |> Array.map (fun (line: string[]) -> line |> Array.map (Convert.ToInt32))
        |> Array.map (fun (i: int[]) -> { Label = i.[0]; Pixels = i.[ 1 .. ] })

    For our testing, we’ll also need a new type to hold the test results. In C#:

    struct TestRecord
    {
        public int ExpectedLabel;
        public int FoundLabel;
        public double CorrectPercent;
    }

    and in F#:

    type TestRecord  = { ExpectedLabel:int; FoundLabel:int; CorrectPercent:double}

    Now we can parse our test, unknown, records. In C#:

    var unknownRecords = ParseRecords("C:/Projects/Experiments/FSharpMachineLearning/Library1/validationsample.csv");

    and in F#:

    let unknownRecords = parseRecords "C:/Projects/Experiments/FSharpMachineLearning/Library1/validationsample.csv"

    and let’s run our test. In C#:

    var testedRecords = unknownRecords.Select(r =>
        {
            var foundLabel = classify(r).Label;
            return new TestRecord()
                {
                    FoundLabel = foundLabel,
                    ExpectedLabel = r.Label,
                    CorrectPercent = foundLabel.Equals(r.Label) ? 1.0 : 0.0
                };
        });

    and in F#:

    let testedRecords = unknownRecords |> Array.map (fun (r:Record) ->
            let foundLabel = (classify r).Label
            {
                FoundLabel = foundLabel;
                ExpectedLabel = r.Label;
                CorrectPercent = if foundLabel.Equals(r.Label) then 1.0 else 0.0
            })

    And now let’s print out the percentage we classified correctly. In C#:

    var percentCorrect = testedRecords.Average(r => r.CorrectPercent);
    Console.WriteLine("Percent Correct: {0}", percentCorrect);

    and in F#:

    let percentCorrect = testedRecords |> Array.averageBy (fun (r:TestRecord) -> r.CorrectPercent)
    printfn "Percent Correct: %f" percentCorrect

    And with this set of training data, test data and a very simple classifier algorithm, we’ve achieved 94.4% accuracy. More importantly, we learned line-by-line how to write some F#.

    Fully working examples including training and test data can be found on github: https://github.com/blinemedical/MachineLearningFSharpCSharp

    This was my first foray into F#. For a more experienced take on the same tutorial, see Anton’s blog post here.

  5. How to write a good bug report

    Many companies have different names for what they store in their bug tracking system. I’ve heard them called tickets, cases, defects, issues, or tasks. Here at B-Line Medical we call them cases, and that’s how I’ll refer to them in this post. Regardless of their name, it’s important that they’re clear and well written. I’ve covered this in an earlier blog post, but I wanted to go into more detail on what we consider to be a well-written case.

    I equate good cases to a recipe. Anyone should be able to read it, understand it, and work from it. A good case can be returned to months after it was created, and will still be clear and understandable. Cases should never rely on information passed through an email, conversation, or IM. All relevant information should be documented in the case, and it should stand on its own.

    Writing a good case isn’t easy, but it’s important. So, how do we do it?

    Language

    First and foremost, your case should be written with clear language. Capitalize, punctuate, spell check, and use proper grammar. This is a no-brainer. If your case is hard to read, it may cause information to get lost or passed over. You don’t want this. Be succinct, summarize your points, and don’t use unnecessarily flowery language. You want to get straight to the case’s relevant information. Don’t lose the point of the case while doing this, though. You want to strike the right balance between short and meaningful.

    Structure

    Your case should be broken down into sections, and each of those sections should play a different role. If you’ve got places where redundant information needs to be entered, it will lead to problems. Don’t duplicate effort, and don’t create situations that are prone to error. You want a workflow for creating and reading cases that allows the most information to be passed on with the least amount of effort.

    Title

    The title of your case has a higher visibility than any other portion of the case. It’s going to be seen in searches, listings, and overviews. It’s going to be included in email notifications about cases, and summaries in patch notes. Developers, QA, support people, and anyone else looking at their own queue will see case titles first. Don’t take this lightly. A poorly-written case title can make one problem look less important, or look just like another problem. A good title can be the difference between a case getting the proper attention, and it getting pushed off to a backlog.  Essentially, the title is your case “at a glance.”

    So in order to make it meaningful, you need to do a few things. The case title should be specific to the problem at hand. It should contain a unique portion of the error, specific to the case, so that it is easily searchable. “Login button not working” or “Error logging in” are bad titles. They’re not descriptive, not unique to the problem at hand, and don’t use details from the actual error. Searching for this specific case would be incredibly difficult if you’ve had a hundred, or a thousand, cases about problems logging in.

    Where possible and relevant, add portions of a stack trace to the case title. “Null ref exception at XYZ after clicking the login button” is a much more descriptive title. It gives an indication of what the problem is, and it can be searched on. It will be descriptive enough for a summary or a case listing.

    Some situations call for a naming convention for case titles. This is not appropriate for all cases, but it helps in certain situations, such as when a new feature has been implemented. It helps to have a reference to the feature when there’s going to be a group of related cases. This makes assigning them to the right person easier, and makes overviews of case titles group easily. For example, cases created while testing ModuleA could be prefixed with “ModuleA -> ”, as in “ModuleA -> Case Title.”

    While you can infer the area of your product that a case references by reading the case title, that requires actually reading and processing each title individually. Having a convention will cause these cases to group together naturally without anyone needing to read the titles. This will allow you to process related cases together, rather than jumping around between different areas. You might note that most bug trackers have an area dropdown to indicate the part of the product each case is referencing, and that my suggestion for case names breaks the rule about redundant information. This is a good point, and it’s why discretion is needed when naming cases. Combining the area field with a title prefix can denote a subset or a superset of a particular area, giving you more granularity in your case groupings. For example, if you have an area for your permissions screen, but you have a module that adds something to that screen, you could set the area to permissions and add the module name to the title, or vice-versa, depending on how you’ve structured things. This way, the case gets as specific as possible, with the least amount of duplication and clutter.

    Summary

    The summary is where a lot of good cases fall apart. This is the place to expand on what you started with the title. It should be short, only a few sentences. You want to provide an overview of what is going on, not a novel. This is the place to set the context for the rest of the information you’re going to provide. It should give a quick overview of what happened, without listing out each step. You don’t need to explain everything in this section.

    It should be towards the top of the case, if not the very top, and should be the first thing someone sees and reads when they open a case. Just like the title, the body of the case is usually indexed for searching. It’s important to use specific terms and language unique to the case, as it will assist searching.

    If you’re unsure how to make the summary clear, think about how you would explain the case to another person on the team. You should be able to clearly convey the meaning of the case in 10-15 seconds by reading this section out loud to a coworker. If it takes longer than that, or if it’s not clear, you most likely need to revamp your summary.

    Steps to reproduce

    This is easily the most important section of a typical case. Any person should be able to follow these steps and produce results identical to what the case describes.

    • This section should be a bulleted or numbered list.
    • Use short, descriptive sentences for each step.
    • Don’t omit common steps. You can combine multiple common steps in the same bullet to save space.

    Expected vs Actual Results

    This section should be very succinct, and differs slightly from others. It’s a method of internal quality control. This can help weed out cases that aren’t actually bugs, by highlighting what the person creating the case is expecting to see. If this expected description differs from actual requirements, you’ll save development time by catching it before code is changed. This helps keep everyone on the same page when it comes to expected functionality. If there’s any confusion as to what is supposed to happen, this can spark discussion, and help clarify requirements.

    Stack trace/error information

    Actual data is crucial. Logs, screenshots, databases, or anything related to your application that can be used in debugging should go here. We used to only ask for logs, but found that including various other types of data has increased case visibility and readability immensely.

    Screenshots in even the most simple cases can help visualize what can take a paragraph to describe.

    We ask that any case with a stack trace include it in the body of the case itself. Inline stack traces, copied and pasted into a formatted block, get indexed for searching. This means I can plug part of a stack trace into a search and find similar cases. When logs are attached as files, they aren’t indexed and this isn’t possible. In addition, a developer looking at the case can see a snippet of what’s happening without loading another application. The goal here is to maximize the ease of finding cases, as well as to minimize the time to fix.

    Databases, disk image snapshots, process dumps, Wireshark captures, or other raw data can be provided to help reproduce a specific application or machine state that can be difficult or impossible to set up. Some bugs are data, environment, or network dependent, and loading up the exact setup that was being used to test it will make it easier to reproduce. These additions can help speed up the time to fix a case by a significant margin.

    Include the version in which the bug was discovered. Some bug trackers have a field for “Milestone”, but I tend to see the milestone field as the version in which the bug will be fixed, not the version in which it was discovered. You want to make sure both are clearly denoted.

    Supplemental Information

    Finally, I like having a section at the bottom for the leftover information. Not all cases need this, but it’s crucial to some.

    • History and Context are important.
      • If there is a story behind this bug, make sure to tell it.
      • If this is for an angry client, this can be noted here.
    • If you need a particular setup or machine, this is the place to say why.

     

    These aren’t hard and fast rules, more like guidelines. Some of them might not apply to your shop, and you might have things to add. The important thing is to establish a consistent format that your people can follow, and then to enforce it. Don’t let laziness be an excuse for bad bugs.

  6. Advice to new engineers

    I had the opportunity to represent the company I work for at an engineering networking event at the University of Maryland today, geared toward young engineering students of all disciplines. The basic idea was to be available for students to ask questions they don’t normally get to ask of working professionals, such as “what’s the day to day like?” [lots of coffee, followed by coding all day], “what advice would you give to someone looking to get into xyz field?”, etc.

    Personally, I had a great time being there since, as an alum, I felt like I could relate to their specific college experience. In this post, I wanted to share a couple of the main points that came up today during my informal discussions with the students.

    Don’t be afraid of problems

    I really wanted to stress this to the people I talked to today. You can’t anticipate every problem you will face in the technical world, and the only real way to succeed in a career is to accept that. The trick, though, is to know just enough to be able to find the information you want. If you can’t find the info you want, ask someone! Unlike school, group work is encouraged. On top of that, the things you learn in school won’t prepare you for all the real-world things you will encounter. All a good education really gives you is the toolset to help you find the information you need.

    Not being afraid of problems means you won’t freeze and give up when you’re faced with what seems like an insurmountable issue. Break things down into smaller sets; do some research. Eventually you’ll find a solution, or at least be more informed as to why you can’t solve a certain problem and hopefully have learned from it.

    Remember, nobody knows the answer to everything, and if they say they do they are lying.

    Work with people you like

    Almost 5/7 of the week (and sometimes more) is spent with a bunch of people at work. If you don’t like who you work with, that’s a problem. I think recent graduates don’t realize that at an interview the interviewer should be selling themselves to the candidate just as much as the candidate to the interviewer. It has to be a good match, both professionally and personally. If you come out of an interview and feel like you just talked to the weirdest, most uncomfortable person ever, don’t work there! It’s natural to be afraid of saying no to a job that was offered, especially when you are starting out. But, if you can afford to, it’s good to be picky. The people you work with can make all the difference between a place you consider a “job” and a place where you get to practice your hobby all day long and get paid for it.

    On top of that, don’t work at a place where you won’t feel challenged. If you can find a mentor at that place that’s even better, because guided growth (especially in the beginning of a career) is invaluable.

    Also, don’t worry about any stigma of jumping ship early. Leaving a job after a year isn’t a bad thing if it’s not a right fit. Find somewhere else to work. Engineering is a field that is in demand right now, but it’s also extremely competitive and constantly changing. The only way to be competitive is to always be learning.

    Interests matter

    For me, when I’m conducting interviews, what really sets people apart is their level of enthusiasm and interest. You can be the best engineer in the world, but if you don’t care about what you work on, or your field, you won’t do a good job. Being enthusiastic about the field you are in is important. If you care about what you do, whether it’s computer engineering, biological engineering, or whatever, you should have personal projects you can show. Even just showing you’ve gone above and beyond basic classwork and done research or internships in an area goes a long way.

    I don’t think it matters how big or small the personal projects are, what matters is you spent the time independently to do them. People frequently suggest contributing to open source projects, and that’s great, if you have time. But if not, small personal projects also show interest and a real drive to learn and do better.

    Engineering is fun as hell

    I could spend all day doling out advice, but I only had about an hour with the students. In the end, while the engineering field can sometimes be fraught with roadblocks, if you can get past them it’s super fun and gratifying to build stuff that works. Sometimes as a student it’s hard to see how all the pieces fit, but they do, and if you persevere a career in engineering can be extremely satisfying.

  7. Import Firefox Keywords into Chrome as Search Engines

    I just switched from Firefox to Chrome. Of course the first thing I did was import all my bookmarks. The second thing I did was try to use them, via their Firefox keyword. None of them worked. After a quick web search I found a lot of complaints about the inability to import Firefox keywords into Chrome but surprisingly no solution. I poked around a bit and found both browsers store their bookmarks, keywords, and search engine data in SQLite databases. This makes import straightforward. Here’s how.

    Step 1: Download SQLite Command Line Interface

    If you don’t already have sqlite3.exe on your computer, download it from SQLite.org. The file you want is under Precompiled Binaries for Windows, or whichever section is appropriate for your OS.

    Once you’ve downloaded it, you can either copy the executable inside to a directory in your path, such as C:\Windows\, or use the full path when launching it in the command line, below.

    Step 2: Close browsers

    The next step is to close both Firefox and Chrome. Both browsers maintain a lock on the database while open, so in order to read and write from the databases you’ll need to close both browsers.

    To keep these instructions available, open this page in Internet Explorer (yuck) or another browser if you have one installed (Opera, Safari). Alternately, you can do it the old-fashioned way and print it on paper.

    Step 3: Locate Firefox’s bookmarks file

    Firefox stores its bookmarks in a file called places.sqlite, located in the root of your profile directory. The exact location will vary by OS and version. On my Windows 7 box, it’s at C:\Users\sam\AppData\Roaming\Mozilla\Firefox\Profiles\x0wk0n99.default\places.sqlite.

    You don’t need to do anything with the file for now; just copy its location to a text file. Don’t just copy it to the clipboard; copy it to a text file, since we’ll need to manipulate it slightly.

    Step 4: Locate Chrome’s Search Engines file

    Chrome stores its search engine information in a file called Web Data, no extension. Again, the exact location may vary. Mine is at

    C:\Users\sam\AppData\Local\Google\Chrome\User Data\Profile 1\Web Data
    

    Step 5: Open a Command Prompt

    Start > cmd
    

    Step 6: Open Chrome’s Search Engines in SQLite

    Using the above location, open Chrome’s Search Engines file in sqlite3.exe.

    sqlite3 "C:\Users\sam\AppData\Local\Google\Chrome\User Data\Profile 1\Web Data"
    

    Remember that if you didn’t copy sqlite3.exe to your path, you’ll need to enter the full path above.

    Once you launch SQLite, you’ll get the SQLite prompt:

    SQLite version 3.7.16 2013-03-18 11:39:23
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite>

    Step 7: Attach Firefox’s bookmarks database

    Now that we’ve opened Chrome’s database, we have easy access to its data. We also need to attach Firefox’s database so we can read from it. This is where copying the path to a text file helps. SQLite only understands paths when using the / character, even in Windows. So replace all \ with / and then copy the path to the clipboard.

    Run this command in SQLite, replacing the path with yours.

    ATTACH 'C:/Users/sam/AppData/Roaming/Mozilla/Firefox/Profiles/x0wk0n99.default/places.sqlite' AS f;
    

    Don’t miss the part at the end: AS f; is important, including the semicolon.

    Step 8: Import

    Now we’re ready to run the import. The SQL code used here will be the same for everybody.

    INSERT INTO
    	keywords (
    		short_name,
    		keyword,
    		favicon_url,
    		url,
    		safe_for_autoreplace,
    		originating_url,
    		date_created,
    		usage_count,
    		input_encodings,
    		show_in_default_list,
    		suggest_url )
    SELECT
    	b.title, 	-- short_name
    	k.keyword, 	-- keyword
    	substr(p.url, 1, instr(substr(p.url,8),'/') + 7) || 'favicon.ico',
    				-- favicon_url
    	replace(p.url, '%s', '{searchTerms}'),
    				-- url
    	1, 			-- safe_for_autoreplace
    	null, 		-- originating_url
    	0, 			-- date_created
    	0,	 		-- usage_count
    	'UTF-8', 	-- input_encodings
    	1, 			-- show_in_default_list
    	null		-- suggest_url
    
    FROM
    	f.moz_keywords k
    			INNER JOIN
    	f.moz_bookmarks b
    			ON
    		k.id = b.keyword_id
    			INNER JOIN
    	f.moz_places p
    			ON
    		b.fk = p.id
    ORDER BY
    	k.keyword;
    

    Done

    Now close the SQLite command line window and you’re done. Open Chrome and notice all the imported search engines.

    Read More
  8. Thread Synchronization With Aspects

    Aspect-oriented programming is an interesting way to decouple common method-level logic into localized methods that can be applied at build time. For C#, PostSharp is a great tool that does the heavy lifting of the MSIL rewrites to inject itself in and around your methods based on tagging methods with attributes. PostSharp’s offerings are split into free aspects and pro aspects, which makes diving into aspect-oriented programming easy since you can get a lot done with the free offerings.

    One of their free aspects, the method interception aspect, lets you control how a method gets invoked. Using this capability, my general idea was to expose some sort of lock and automatically wrap the method invocation in a lock statement using a shared object. This way, we can manage thread synchronization using aspects.

    Managing thread synchronization with aspects isn’t a new idea: the PostSharp site already has an example of thread synchronization. However, they are using a pro feature aspect that allows them to auto-implement a new interface for tagged classes. For the purposes of my example, we can do the same thing without using the pro feature and simultaneously add a little extra functionality.

    There are two things I wanted to accomplish. One was to simplify local method locking (basically what the PostSharp example solves), and the second was to facilitate locking of objects across multiple files and namespace boundaries. You can imagine a situation where you have two or more singletons that work on a shared resource. These objects need some sort of shared lock reference to synchronize on, which means you need to expose the synchronized object to all the classes. Not only does this tie classes together, but it can also get messy and error-prone as your application grows.

    First, I’ve defined an interface that exposes a basic lock. Implementing the interface is optional as you’ll see later.

    public interface IAspectLock
    {
        object Lock { get; }
    }
    

    Next we have the actual aspect we’ll be tagging methods with.

    [Serializable]
    public class Synchronize : MethodInterceptionAspect
    {
        private static readonly object FlyweightLock = new object();
    
        private static readonly Dictionary<string, object> LocksByName = new Dictionary<string, object>();
    
        private String LockName { get; set; }
    
        /// <summary>
        /// Constructor when using a shared lock by name
        /// </summary>
        /// <param name="lockName"></param>
        public Synchronize(String lockName)
        {
            LockName = lockName;
        }
    
        /// <summary>
        /// Constructor for when an object implements IAspectLock
        /// </summary>
        public Synchronize()
        {
    
        }
    
        public override void OnInvoke(MethodInterceptionArgs args)
        {
            object locker;
    
            if (String.IsNullOrEmpty(LockName))
            {
                var aspectLockObject = args.Instance as IAspectLock;
    
                if (aspectLockObject != null)
                {
                    locker = aspectLockObject.Lock;
                }
                else
                {
                    throw new Exception(String.Format("Method {0} didn't define a lock name nor implement IAspectLock", args.Method.Name));
                }
            }
            else
            {
                lock (FlyweightLock)
                {
                    if (!LocksByName.TryGetValue(LockName, out locker))
                    {
                        locker = new object();
                        LocksByName[LockName] = locker;
                    }
                }
            }
    
            lock (locker)
            {
                args.Proceed();
            }
        }
    }
    

    The attribute can either take a string representing the name of the global lock we want to use, or, if none is provided, we can test to see if the instance implements our special interface and use its lock. When an object implements IAspectLock the code path is simple: get the lock from the object and use it on the method.

    The second code path, used when you provide a global lock name, lets you lock across the entire application without having to tie classes together, keeping things clean and decoupled.
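
    To make the two usages concrete, here is a minimal sketch of how tagged classes might look (the class and member names below are made up for illustration and aren’t part of the original example): one class supplies its own lock by implementing IAspectLock, while two unrelated classes synchronize on the same named global lock without referencing each other.

    // Path 1: the instance supplies its own lock via IAspectLock.
    public class Counter : IAspectLock
    {
        private readonly object _lock = new object();
        private int _count;

        public object Lock
        {
            get { return _lock; }
        }

        [Synchronize]
        public void Increment()
        {
            _count++;
        }
    }

    // Path 2: unrelated classes share a named global lock, so they never need
    // a reference to each other or to a common lock object.
    public class CacheWriter
    {
        [Synchronize("SharedCacheLock")]
        public void Write(string key, string value)
        {
            // ... write to the shared resource ...
        }
    }

    public class CacheReader
    {
        [Synchronize("SharedCacheLock")]
        public string Read(string key)
        {
            // ... read from the shared resource ...
            return null;
        }
    }

    Either way, the aspect resolves the appropriate lock inside OnInvoke and only then calls args.Proceed(), so the tagged methods contain no locking code of their own.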

    For the scenario where a global lock name was defined, I used a static dictionary to keep track of the lock objects by name. This way I can maximize throughput by using a flyweight container: lock first on the dictionary just to get the lock I want, then lock on the value retrieved. The lock on the dictionary will always be held briefly and shouldn’t be contended for that often; uncontested locks are tested for using spinlock semantics, so they are usually extremely quick. Once you have the lock you want to use for this method, you can call args.Proceed(), which actually invokes the tagged method.

    To be sure this all works, I wrote a unit test to verify the attribute behaves as expected. The test spawns 10,000 threads, each of which loops 100,000 times and increments the _syncTest integer. The idea is to introduce a race condition: given enough threads and enough work, some of those threads won’t see the updated value of the integer and won’t actually increment it. For example, at some point two threads may both think _syncTest is 134, and both will increment it to 135; if the method were synchronized, the value after two increments should be 136. Since race conditions are timing-dependent, we want to make the unit test stressful to maximize the probability that this happens. Theoretically, we could run this test and never hit the race condition we’re expecting, since that’s by definition what a race condition is (non-deterministic results). However, on my machine, I was able to consistently reproduce the expected failure condition.

    private int _syncTest = 0;
    private const int ThreadCount = 10000;
    private const int IterationCount = 100000;
    
    [Test]
    public void TestSynchro()
    {
        var threads = new List<Thread>();
        for (int i = 0; i < ThreadCount; i++)
        {
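            // ThreadUtil.Start is an internal helper (not shown here) that creates,
            // names, and starts a thread running the given action, returning the Thread.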
            threads.Add(ThreadUtil.Start("SyncTester" + i, SynchroMethod));
        }
    
        threads.ForEach(t=>t.Join());
    
        Assert.True(_syncTest == ThreadCount * IterationCount,
                String.Format("Expected synchronized value to be {0} but was {1}", ThreadCount * IterationCount, _syncTest));
    }
    
    [Synchronize("SynchroMethodTest")]
    private void SynchroMethod()
    {
        for (int i = 0; i < IterationCount; i++)
        {
            _syncTest++;
        }
    }
    

    When the method doesn’t have the attribute we get an NUnit failure like

      Expected synchronized value to be 1000000000 but was 630198141
      Expected: True
      But was:  False
    
       at NUnit.Framework.Assert.That(Object actual, IResolveConstraint expression, String message, Object[] args)
       at NUnit.Framework.Assert.True(Boolean condition, String message)
       at AspectTests.TestSynchro() in AspectTests.cs: line 35
    

    This shows that the race condition we expected did happen (the value will change each run). When the method is synchronized, our test passes.

  9. IxD 2013: Rhythm, Flow, and Style

    Today Carlo and I listened to a 45-minute presentation by Peter Stahl called “Rhythm, Flow and Style”, which discussed designing flow and rhythm into applications. Peter started with the observation that the world is full of rhythms, and not just the kind people usually think of (namely music). Rhythms exist in every part of life: chopping vegetables, walking outside, engaging in conversations, filling out forms, navigating a website, etc. All of them involve actions, pauses, and repetitions. According to Stahl there are two types of rhythm: iterative rhythm, which is the rhythm of the user engaging with the application, and motivic rhythm, which is the rhythm within the application itself. But to get rhythm you need flow.

    Flow, according to Stahl’s presentation, has several dimensions:

    • Known goals, with known progress
    • Perceived balance of challenge and skill
    • Sense of control
    • Focused concentration
    • Loss of self-consciousness: becoming one with the activity
    • Time distortion
    • Self-rewarding experience

    and it’s possible to induce it by offering clear goals, achievements, progressive challenges, progress tracking, and obvious next steps. The ultimate intent is to keep the user engaged and challenged, where they can easily lose themselves in the application. You can see these kinds of theories being applied to sites like StackOverflow, LinkedIn, or OKCupid. They have percentages indicating progress, helpful hints on how to get to the next step, progressively more advanced information, etc. Flow engages the user and lets the application open up, giving it a sense of movement over time.

    Flow, however, isn’t always just about content and progress. It’s also a matter of visual effects and transitions. Transition effects can influence the perception of an application: fast and jarring transitions can give a sense of precision and automation, whereas slow, gentler fades induce a sense of calmness.

    On top of rhythm and flow is style, which is used to direct the flow and rhythm. Stahl called it rhythmic style and it is affected by visual frequency, speed, size and distance, and special effects. There are lots of different kinds of rhythmic style choices:

    • Dazzling or engaging?
    • Zippy or comfortable?
    • Dramatic or responsive?
    • Single-use or long-term?

    But the choice of style within visual components depends on content, branding, and audience.

    Stahl compared Samsung vs. Vizio (though both have changed their sites since the slides shown in the presentation). Samsung, at the time, used gentle fade transitions, whereas Vizio used a paper-swipe effect. He argued that Samsung’s felt more human, while Vizio’s was reminiscent of a copy machine: inhuman but precise. Both have advantages, and with small motivic rhythm choices you can induce different emotions in the user.

    Of course, flow ultimately rests with the user, so Stahl stressed including the user in interaction testing. He showed a four-way split image with two views of the user using the application, as well as the screen being viewed (the last panel, I think, was a description of the test):

    [Image: stahlPresentation]

    Finally, Stahl ended with a resonating quote that I think captures the design goals of any application:

    “It’s all about the choreography of people’s attention. Attention is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”

    I really enjoyed his presentation. Here are Stahl’s slides: https://www.box.com/s/ms8ssh4nc3zl6mx0n38w.
