
Apps for Art?

Even in the 21st century, it is not cool to pull your smartphone out at a concert.

That would be every concert except ones where Dan Deacon performs.

Deacon, an electronic music artist, is renowned for his live performances, which include audience participation and obligatory visual effects.

Dan Deacon’s Mobile App

In conjunction with Wham City Apps, an extension of Wham City, the art and music collective he helped create, Deacon has released the Dan Deacon app to accompany his latest tour and in support of his new album, America.

The app plays sounds through your smartphone’s speaker while bright colors transition to different hues on the phone’s display. During the chorus, the LED flash of your smartphone acts as a strobe light by flashing on and off.

Creating a smartphone app to augment a live performance is a new and exciting way to leverage a device most people already carry around every day. But how many people have considered how the smartphone could be used as an instrument for art?

I was there, man…

I had the pleasure of witnessing a large crowd using Deacon’s smartphone app during a recent tour. As if on cue, Deacon asked everyone to open the app, hold it in the air, and enjoy the performance. The app, which does not use WiFi, ‘listens’ for calibrating tones and then knows when to display bright colors, when to play sounds, and when to activate the strobe.

The experience was fun and engaging, and it would have made me feel like part of the performance if I hadn’t been so worried I would drop my phone with all of humanity dancing around me. The app’s creation is a masterstroke that fits perfectly with the aesthetic Deacon consistently creates at his live shows.

Fully excited about the potential of an app that can contribute to art and invite audience participation? In an upcoming post, I’ll speculate about how this app was probably created.



Google Analytics for node.js: Writing an npm Module


On its own, Node.js is a powerful platform for developing all sorts of applications, from the web to robots (full disclosure: I’m into robots).

Like all programming languages, the basics of Node are like LEGO® building blocks: you can build anything you can think of (unless Tommy’s sister hid that one piece you were going to turn into a cannon).

npm (purposely written in all lowercase) stands for Node Package Manager. These packages are modules that you and other members of the Node community have written to make your life a little easier. Sometimes these packages are rather simple (like a conversion tool), and sometimes they are rather complex (like the express framework).

Writing a module can seem rather daunting at first, but with a few key steps, it’s rather easy.

My First npm Module

I recently published my first npm module, called Nodealytics, a server-side Google Analytics (GA) tool for Node based on the Gabba Ruby gem by The Hybrid Group. The purpose of this module is to make it easy to record custom events for analysis in GA, so we can track when users visit specific pages or press certain buttons, or even dynamically tally who is visiting and when.
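
To give a sense of the goal, here is a rough usage sketch; the method names below are modeled on Gabba’s API and are assumptions, so check the Nodealytics README for the real signatures:

```js
var nodealytics = require('nodealytics');

// Hypothetical GA account ID and domain
nodealytics.initialize('UA-XXXXXXX-X', 'yoursite.com', function () {
  nodealytics.trackPage('/signup');                 // record a page-view
  nodealytics.trackEvent('Button', 'Click', function () {
    // event recorded
  });
});
```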

In this particular case, I had the luxury of watching someone else (Gabba’s authors) do all the heavy lifting. I just needed to port the code over from Ruby for use by members of the Node community. (As an aside: the beauty of open source is that you don’t always have to make everything from scratch; as long as you attribute where you found the stuff you’re using, you’re good to go.)

So, I started by looking at Gabba’s current source and decoding what was going on.

I’m assuming you have the latest versions of both Node and npm installed. The first step is to initialize your module using npm init. This is an easy way to set up the module’s package.json, which records your module’s name, description, dependencies, and other basics.
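
Answering the prompts produces a package.json along these lines (values here are illustrative):

```json
{
  "name": "nodealytics",
  "version": "0.0.1",
  "description": "Server-side Google Analytics for Node.js",
  "main": "index.js",
  "scripts": {
    "test": "mocha"
  },
  "license": "MIT"
}
```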

The next step is to understand how Node modules work. When adding a node module to an application, the `require` syntax is used:
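
```js
// Pull a module into your application; the name matches the
// package installed into node_modules
var nodealytics = require('nodealytics');
```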

The way this works is that when the module is required, whatever the module exports is handed back and assigned to the variable.

From the inside of the module, we tell the code that it’s destined for module-dom by using the module.exports syntax:
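
```js
// Inside the module's entry file (index.js), export the public API.
// The method names here are illustrative:
module.exports = {
  trackPage: function (title, path, callback) { /* ... */ },
  trackEvent: function (category, action, callback) { /* ... */ }
};
```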

Now we can fill in the code. I won’t go into how to translate from Ruby to JavaScript here, nor will I go into the depths of testing dos and don’ts. But suffice it to say, the translation part was rather simple, and if you’re not using test- (or behavior-) driven development, then get started pronto.

The difficult part about creating the Nodealytics module is that testing is a bit backwards. I can test that an appropriate HTTP response occurs when sending data, and I can confirm page-views almost immediately, but events take at least 8 hours to show up on the Google Analytics dashboard. What’s more, it’s nearly impossible to test with a fake account; Google seems to require an official connection (i.e., one attached to a real, live site) before it is willing to show off any analytics data.

That said, the major pieces (customized events and page-views) have successfully been ported over from Gabba to Nodealytics. More work can still be done, however, so feel free to contribute.

Now that the module has been (mostly) written, all tests are passing, and we’re ready to share with the world, it’s time to publish. Fortunately, publishing an npm module is as easy as starting one, with npm publish—no fuss, no muss. If you don’t already have a username and password, you’ll be able to create one on the spot.
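
The whole ceremony looks something like this:

```
$ npm adduser   # prompts for a username, password, and email
$ npm publish   # pushes the current package to the registry
```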

 

Resources:

  • http://npmjs.org
  • http://cnnr.me/b/2012/05/your-first-node-dot-js-module/
  • http://howtonode.org/how-to-module
  • http://anders.janmyr.com/2012/04/writing-node-module.html

 



Make your APIs Swagger

Here at Skookum we write a lot of REST services. A REST service provides a great integration point between frontend and backend developers, making it easy to split work into two large units: frontend development and backend development. Frontend developers write tremendously awesome user interfaces with clean markup and a performant, responsive user experience. Backend developers write testable, maintainable, performant, and robust service code.

The REST service specification is the glue that holds it all together. A specification allows frontend developers to start immediately by mocking the REST service responses with real data. A specification allows backend developers to start writing unit tests to ensure their code meets the spec.

We all agree a REST service specification is a great tool to streamline and enhance our development process. We also all agree that writing documentation is about as fun as a root canal. I have found a great tool to make this process easier: Swagger-UI. Swagger is an entire toolset built around generating REST service documentation, and at its core is the Swagger specification.
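
Here is a minimal sketch of what a specification looks like, modeled on the early Swagger 1.x resource format (exact field names vary between spec versions):

```json
{
  "apiVersion": "0.1.0",
  "swaggerVersion": "1.1",
  "basePath": "http://api.example.com/api",
  "resourcePath": "/users",
  "apis": [
    {
      "path": "/users/{id}",
      "operations": [
        {
          "httpMethod": "GET",
          "nickname": "getUserById",
          "summary": "Find a user by ID",
          "parameters": [
            { "name": "id", "paramType": "path", "dataType": "int", "required": true }
          ]
        }
      ]
    }
  ]
}
```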

Swagger-UI is a tool that transforms the Swagger specification into a fully functional REST client, allowing developers not only to view the REST documentation but also to interact with the REST API. You can view example requests and responses, and even change input arguments and watch how the responses change. Overall it’s an awesome discovery tool: it helps frontend developers learn an API and gives backend developers an easy way to test and demo.

In our use of Swagger-UI, though, we came across one issue: Swagger-UI is only good if users have access to the tool. If you aren’t able to put Swagger-UI somewhere your users can reach it (for example, if you and your clients are on different networks and the API should not be exposed on the public web), they will not be able to view the documentation or interact with the API. That leads you back to writing your API documentation by hand again and losing the power of the Swagger specification.

Well, one handy thing about a specification is that, well, it’s a specification. As developers, we know it follows very specific rules, so we can write tools that interact with it. I decided to write a Swagger-to-Markdown script, which can be found at https://github.com/Skookum/SwaggerToMarkdown/blob/master/swagger-to-markdown.rb. It takes a number of parameters, but the main one is the Swagger specification for your API.
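
A hypothetical invocation (the flag name here is illustrative; see the repository for the script’s actual parameters):

```
$ ruby swagger-to-markdown.rb --spec api-docs.json > API.md
```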

It traverses your specification and generates a static Markdown file containing much of the same information as the dynamic Swagger-UI tool. It writes out all the operations, their arguments, and their error codes, and will even perform curl requests to generate example responses and example requests. Here is an example Markdown file that was generated with our script.

See the results here.

Now we have another tool in our toolbox. For now, this script lives in one of our organization’s repositories, but once I clean it up a little more I plan on giving it over to Wordnik. I hope others will find a use for this script.



Re: Components in practice

TJ Holowaychuk recently published his thoughts on the direction and future of JavaScript components. The post has sparked a discussion with Isaac Schlueter, but strangely no one has mentioned the most intriguing part of TJ’s explanation, “Components in practice:”

At LearnBoost we’ve been using components for a while now, but like I’ve mentioned not only for abstract UI components, but for everything in our application, even the build system and application bootstrap are implemented as components.

That sounds awesome. How exactly does it work? TJ attaches a screenshot showing a flat list of component directories, which raises several questions:

Big picture

It’s clear how a flat list of components would help with testing and coordinating teamwork. However, let’s say you hire a new developer to jump on this project, and he pulls down the repo. Where does he start? How does he get a feeling for the high-level orchestration of this machine of many components? Is there a “main” component that represents the application’s entry point? Similarly, what does debugging look like in this environment?

Composition

I’m going to make up a figure here and say that 99.9% of npm modules depend on at least one other module. One of the arguments TJ makes is that such dependencies cause fragmentation, e.g. jQuery versus MooTools. However, I’m assuming there must be some composition happening in this app, since LearnBoost is far more complex than a popover implementation. I’m also going to suggest that several of these components likely depend on express, mocha, etc. How is that different from depending on jQuery or MooTools?
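
For reference, composition in component-land is declared much as npm does it, via a manifest. A hypothetical component.json sketch (names illustrative):

```json
{
  "name": "popover",
  "version": "0.0.1",
  "dependencies": {
    "component/emitter": "*",
    "component/tip": "*"
  },
  "scripts": ["index.js"]
}
```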

Implementation

Code is worth a thousand blog posts, and I think I would understand TJ’s proposal much better via example. I’d love to see a pure-component demonstration of a 2-page project, with ‘/’ (login) and ‘/dashboard’ (hello, world). With a firmer grasp on the details, I’d be willing to try a component-based approach on my next project.

 

I’m fascinated by TJ’s proposed alternative to, as he puts it, modules that “splatter themselves all over your system.” I hope this proposition doesn’t get lost in the debate surrounding AMD, CommonJS, build systems, npm, etc.



20 HTML5 Mobile Application Development Tips

Have you decided that you’re going to build a cross-platform HTML5 mobile application instead of going the native route? In this video from our recent Tech Talk, SDW’s Director of Technology, Hunter Loftis, shares some HTML5 mobile app development tips, tricks, and gotchas. The presentation is interactive, so grab your phone and view some examples.

In addition to real-world case studies from our shop and a sweet follow-along demo, below are the 20 HTML5 development rules covered in the presentation. They will improve your mobile applications and account for many of the diverse smartphone and tablet hardware challenges.

HTML5 Mobile App Development Tips

  1. Aim low
  2. Ignore standards and usability
  3. Debug on real devices; not software, not your iPhone
  4. Ignore feature detection
  5. Assume you’re offline
  6. Store data locally
  7. Forget jQuery
  8. Write touch events yourself
  9. Avoid frameworks
  10. Use alert()
  11. Learn microlibraries
  12. Use specific forms
  13. Link to maps and phone numbers (easy)
  14. Limit your DOM updates
  15. Never animate with JavaScript (use CSS3 instead)
  16. Keep it async
  17. Embrace GeoLocation
  18. Protect your state
  19. Make your app turn itself off
  20. …and a few more (peep the video)

One of my favorite points in the 30-minute video is where Hunter casually discusses your user’s “suspension of disbelief” (a great re-appropriation from fictional narratives). Our free Tech Talks are held every other Friday, and we hope you’ll soon join us.

Web Optimized, HTML5, Hybrid, or Native App?

Still not sure what flavor of mobile application development is right for you? Hunter has covered that in a previous video as well.



Lessons Learned from Advanced Git Training

Recently my friend Jonathan and I had the opportunity to attend the Advanced Git training from GitHub that Jim Van Fleet pulled together. While being blown away by the in-depth knowledge Tim and Adam demonstrated, I managed to wrap my head around a few concepts and walk away with my brain intact.

Git Hash Architecture

When troubleshooting Git issues, it’s helpful to have an understanding of how things work under the hood; knowing Git only as “magic” isn’t much help. If you have a good mental model for how Git does its business behind the scenes, you can better eliminate potential causes of a particular issue, and you will have a better feel for where to look when things go wrong. Tim started the day off with a graphic similar to the following:

[Graphic: the Git hash architecture, consisting of a commit, a tree, and blobs.]

The flow begins at the top with a commit. Each commit is simply a text file with some metadata: usually the tree, parent, author, and committer.

The tree is a representation of the files changed, organized according to your folder structure on disk; its hash tells you where to find it. Inside the tree are references to additional trees, blobs, and other objects needed to make up the state of the filesystem at that commit.

A blob is a zlib-compressed copy of a file’s contents; each distinct version of a file is stored as its own blob. A blob has no metadata attached. It is really just a blob of compressed data, and opening it up in your text editor will display garbage.

An object’s filename is the SHA-1 of its contents. The first two characters of the hash are the name of the folder the object is stored in, while the remaining 38 characters make up the filename.
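
You can poke at these objects yourself with the plumbing command git cat-file (the hashes below are illustrative):

```
$ git cat-file -p HEAD          # pretty-print the commit object
tree 92b8b694ffb1675e5975148e1121810081dbdffe
parent 1f7ec5eaa8f37c2770dae3b984c55a1531fcc9e7
author Jane Doe <jane@example.com> 1346000000 -0400
committer Jane Doe <jane@example.com> 1346000000 -0400

Describe the change here

$ git cat-file -p HEAD^{tree}   # list the tree the commit points to
100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391    README.md
040000 tree d564d0bc3dd917926892c55e3706cc116d5b165e    lib
```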

Onionskin API

I have never written an API of any depth, though I have consumed plenty and have considered what is involved in doing so. I recall reading an article a couple of years ago about Microsoft running usability studies on its C# APIs in Visual Studio. That article was the first time I saw usability extend beyond consumers and users of websites and apps into the realm of programmers developing against an API.

In a nutshell, the onionskin approach is an API of multiple layers. In Git there are two layers, commonly referred to as the porcelain and the plumbing. The porcelain commands are meant to solve 80% of everyone’s problems. Some developers will never need more than this, while others will need a custom solution beyond what is provided.

For the remaining 20% there are the plumbing commands. The plumbing consists of the low-level functionality used to compose the porcelain, and it allows a user of Git to compose additional layers on top.

For example, git flow is a set of commands on top of Git that adds more porcelain in support of a documented and proven workflow.

Git has approximately 145 commands with around 1,000 command-line switches. Maybe 15 to 20 of these are porcelain.
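
To make the distinction concrete, here is a single porcelain command next to a rough plumbing equivalent:

```
# Porcelain: one friendly command
$ git commit -m "Add feature"

# Plumbing: composing (roughly) the same result by hand
$ git update-index --add feature.js
$ TREE=$(git write-tree)
$ COMMIT=$(echo "Add feature" | git commit-tree $TREE -p HEAD)
$ git update-ref refs/heads/master $COMMIT
```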

Hub. GitHub CLI commands.

Hub is an extension to Git that makes Git in the terminal extra awesome. Created by @defunkt, co-founder of GitHub, it takes just a moment to install and provides you with sugar like the following:
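
A few highlights (a representative sample; hub’s README has the full list):

```
$ hub clone skookum/categorizr.js   # expands to the full GitHub clone URL
$ hub fork                          # fork the current repository to your account
$ hub browse                        # open the repository's page on github.com
$ hub pull-request                  # compose and open a pull request
```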

My favorite piece of this is git pull-request. This command opens your default Git editor and lets you compose your pull request right there. The first line becomes the title; skip a line, and the remaining text is the body. Save and exit, and hub will create the pull request on github.com without you ever needing to leave the comfort of your editor.

Wrapping Up

At Skookum, we use GitHub for all of our projects, and learning more about the tools we use daily is always a fascinating exercise that better equips us for the task at hand. I would highly recommend attending a GitHub training session. Also, Pro Git by Scott Chacon is available for your reading pleasure online for free.

Now, go forth! Increase your git-fu! And amaze your friends.



Categorizr.js. Device-detection for your responsive websites.

The responsive web is here to stay, yet there are still other multi-device implementation strategies. To mention a few:

  • Head-in-the-sand. Just build your standard, locked website and let your users and their devices do the best they can with it.
  • Responsive design. One code base intelligent enough to bend and change within the device and screen constraints available.
  • Adaptive design.
  • RESS. Responsive web design with server-side components.
  • Device-specific web-powered apps. Creating HTML-based apps specific to the user’s device, either as a full web stack (m.yoursite.com, touch.yoursite.com) or as hybrid apps (LinkedIn).
  • Native apps. Website. iOS app. Android app. WinMo app. Blackberry app. Windows app. OSX App. Linux app.

What is categorizr.js?


Categorizr.js is a tiny (1.9 kB gzipped) JavaScript library for progressively enhancing your responsive projects with a more targeted experience (without forking the user off to a separate subdomain).

I will admit that this began as an experiment for me to write tablet-specific UIs in my responsive workflow. After doing numerous responsive websites and web apps, I’ve found that targeting desktop versus mobile is a drastic but easy thing to do with media queries and feature detection. Targeting tablets for additional progressive enhancement was a much larger gray area. Brett Jankord had already done the hard work of creating the original categorizr as a PHP script. I simply ported it and began building on the foundation he laid.

We’ve all used our phones and clicked on an article link from a tweet or search result only to get abducted by the server and dropped at the homepage of said website, completely disconnected from the content we wanted to read (and with no clear path to get back to it).

Categorizr.js hopes to limit this and improve the user experience by making it easy to enhance the core experience of our websites with device-specific styles and behavior. It does this by adding a CSS class of tablet, tv, desktop, or mobile to the HTML element as a styling hook, and by giving you JavaScript access to properties such as categorizr.isTablet. Pair it with a library like yepnope and you can bootstrap your code to load additional JavaScript or trigger other behaviors, as in the sketch below.
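
A minimal sketch, assuming the categorizr.isTablet property described above (the file name is hypothetical):

```js
// categorizr.js has already stamped <html> with a class like "tablet"
// for CSS hooks; the same information is available to scripts.
if (categorizr.isTablet) {
  // Load tablet-only enhancements with yepnope
  yepnope('js/tablet-ui.js'); // hypothetical tablet-specific bundle
}
```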

How does it work?

Categorizr.js enters the black magic community of web development: User-agent sniffing. Yet, before you scream blasphemy, call me a heretic, and burn me at the stake, let’s look a little deeper.

The first thing we need to be mindful of is mitigating the risk of being wrong. With a simple client-side solution, we make it easy to give the user a toggle to quickly switch between views. Furthermore, there is a relatively small and known set of desktop browsers; that list doesn’t change frequently and is relatively safe to detect. Meanwhile, mobile is blowing up like nobody’s business, so we follow Luke Wroblewski’s advice and go mobile first: if we’re not sure what browser we’re dealing with, we assume it’s one of the billion mobile phones sprouting up on this planet.

Now the odd man out is the tablet. The tablet is similar to a desktop in screen real estate but much closer to a smartphone in usage, and the haptic interface brings a whole new set of user expectations and possibilities. Herein lies most of the work to be done to keep categorizr.js on the right track. Many web properties already make this distinction, and giving a tablet user a mobile-optimized site doesn’t qualify as one of the seven deadly sins.

Luckily, there are products like WURFL to guide us along the way.

Getting started

Right now the code is ready to drop into your front-end stack. If you are using Modernizr, I would recommend concatenating categorizr.js with it (if not, putting it in with the rest of your code would not be a bad idea either).

If you’re an Ender user, you can include it in your Ender build today with ender add categorizr.

To see all of this in action, check out the demo at Skookum.github.com/categorizr.js and follow the repository at github.com/Skookum/categorizr.js.

The future

The upcoming work involves:

  • Fleshing out the UA tests on github and ensuring that categorizr.js passes with flying colors
  • Support for node.js as an express or connect plugin
  • A test-extension API (to add your own UA detection points; this would facilitate things such as Microsoft Metro-style UIs and WAP phones)
  • Emit events when a user requests to change deviceType

Let us know in the comments how you think you could use categorizr.js. Follow along on GitHub. Let’s make a better web.



Sublime Workflow to Best Coda?

As a web developer, I am constantly looking for the next tool or resource to increase productivity. With tight deadlines and multiple simultaneous projects, any way to make my work life more efficient is appreciated.

For just over a year, my editor of choice was Sublime Text 2 (I’m actually writing this post with it now). It loads fast, it has few frills, and it has lots of great features like split-pane editing and Goto Anything.

One thing it doesn’t have is built-in FTP support. I don’t always work on projects locally and often need an FTP client to push up my work. Because of this, six weeks ago I started using Coda from Panic. I’d been using Transmit for my FTP needs anyway, so I figured using Panic’s editor and FTP program together would be a productivity boost. With a pretty painless site setup and a key combination to publish, edits can be pushed live as they are completed.

The marriage had been wonderful until I saw this post from Andrey Tarantsov.

I thought I had relegated Sublime to being merely my text editor, but reading this post made me think that maybe Sublime could work the way I need it to.

So I gave it a shot.

As somebody who is constantly annoyed when someone says ‘all you have to do is’ and then outlines 15 steps to achieve something that shouldn’t be so involved, I was a bit apprehensive, out of pure spite, about going through the steps to get some of these features installed.

For one, to install packages, you first have to install Package Control (which itself handles the installation of packages…).

The steps outlined in the post were relatively painless, so I proceeded to the next thing I was interested in: installing SFTP.

Again, installing the SFTP plugin was not the worst thing I’ve ever done. The setup is a JSON file that is pretty self-explanatory if you’ve spent any time in web development. The key feature for me was ‘upload on save.’ This feature does just what it says: it uploads your file as soon as it’s saved.
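
A minimal sketch of that config (key names as I recall them from the SFTP package’s generated file, so double-check yours):

```json
{
    "type": "sftp",
    "host": "example.com",
    "user": "deploy",
    "remote_path": "/var/www/mysite",
    "upload_on_save": true
}
```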

If I follow my efficiency rule, uploading on save is a lot faster than the aforementioned upload-on-key-combination.

So have I switched back to Sublime? Not yet.

The thing is, some of the peripheral stuff in Coda has started to grow on me over the past weeks that I’ve used it. I really like the way the left toolbar functions, and the built-in Terminal is a huge plus.

I’m planning to use the Sublime workflow for another week or so to see if it turns the tide.

Unfortunately (or fortunately), Coda 2 comes out today, full of lots of new features to keep me intrigued. And probably make me rethink my workflow.



Developer X Visits SDW HQ

In-house engineers are our friends. Except for nascent startups, there’s hardly an execution that lives in a vacuum with only the SDW cast and crew behind the wheel. Collaboration with client devs often happens with integrations, data dumps, new deployments, and even training.

There’s a cool project floating through the shop right now for a really successful business in NYC. Their team is small, and the customers they’ve been able to attract are impressive. Still, sustained growth means an overhaul. Scaling has become difficult. Legacy code unwieldy.

Luckily, we know a thing or seven about being discreet. There’s a hotel right across the street from SDW HQ. We have various disguises to cloak the 200′ walk. And our windows have blinds. After all, Skookum Digital Works was started by two programmers with Level 3 (Top Secret) security clearances from the DoD.

So, “Developer X” came to visit. We’ll be excited a year from now (yes, their competition is crazy fierce) when we can reveal some of the state secrets.



Using WordPress as a User and Authentication Database

WordPress is a great tool, and you can hack all sorts of functionality into it, but have you ever thought about using it as a user-authentication database for content on your server that is outside the realm of WordPress? Maybe a wiki, or a media server application that you only want your registered WordPress users to access.

There are some really awesome authentication tools built right into WordPress that you can use to verify a username and password against your WordPress install. You can even look at that user’s specific capabilities to determine whether they get access, based on their role or capabilities.

In the following example, I use PHP’s ability to present the user with a basic HTTP authentication dialog box; the submitted credentials are then checked against the WordPress database.
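
A sketch of that approach (the wp-load.php path is an assumption; point it at your own install):

```php
<?php
// Load WordPress so its authentication functions are available.
// (Path is an assumption; adjust for your install.)
require_once('/var/www/wordpress/wp-load.php');

// No credentials yet? Ask the browser for basic HTTP auth.
if (!isset($_SERVER['PHP_AUTH_USER'])) {
    header('WWW-Authenticate: Basic realm="Members Only"');
    header('HTTP/1.0 401 Unauthorized');
    exit('Authorization required.');
}

// Check the supplied credentials against the WordPress user database.
$user = wp_authenticate($_SERVER['PHP_AUTH_USER'], $_SERVER['PHP_AUTH_PW']);

// Reject bad logins, or users lacking the capability you care about.
if (is_wp_error($user) || !user_can($user, 'read')) {
    header('HTTP/1.0 403 Forbidden');
    exit('Access denied.');
}

// Authenticated: continue serving the protected content below.
```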

The only thing I haven’t done is set a cookie to keep users logged in across browser sessions.

This works great if your application has a rewrite to a single index.php file that serves everything; otherwise, put it into a header file that gets included on every page (above any HTML output, since it sends out HTTP headers).

And remember: this security is only as good as WordPress security, which is to say “not very secure.” But it sure beats an internal, non-password-protected server that anyone could access simply by plugging into your physical network and browsing around.

