Archive

An Evening with Dr. Marshall Goldsmith

about 3 years ago | Niraj Bhandari: Technology Product Management

Getting back to my blog after a hiatus… and what better way to re-energize than a session with Dr. Marshall …Continue reading →

Lean Startup and why some call it Innovation Accounting - the workshop

over 3 years ago | Sven Kräuter: makingthingshappen blog.

We often discuss the benefits of Innovation Accounting, also known as Lean Startup, with our clients. We experimented with different forms of teaching it, from plain old talking about the subject and sketching the principles to giving presentations, but we always had the feeling that there had to be a better way. Especially since the great ideas we help develop by facilitating idea development workshops become even more valuable if you apply these hard-to-explain principles on top.

Sven clustering ideas produced in a workshop

The slides Sven prepared seemed to be interesting, judging by the numbers on Slideshare. But still, the reactions when we gave that talk did not totally satisfy us. When Sven and Katrin of CoCreact sketched a workshop on the subject at Play 4 Agile 2013, we immediately knew that this was the missing link between our experience and delivering it to our clients.

Workshop participants working on their idea iteratively

The new workshop simulates several iterations, so the agile and lean product development mindset is not only a subject that is discussed but a hands-on experience the participants actually undergo. Ideas are developed, presented to potential customers, and reworked several times. How do we achieve this? Strict time-boxing of the iterations, in a sandbox created by our experience in iterative product, campaign, and idea design.

Groups interchange their new ideas, switching vendor and client roles

The pictures originate from something we do before including a workshop in our portfolio: a final prototype workshop. We believe in iterative idea development so much that we apply it to the workshop on the topic too. It was quite a success, although we made it hard for the participants by selecting them from very different fields of expertise: advertising, strategic consulting and even human resources.

We always advocate diverse teams, but seeing how groups that in some cases came from completely different working domains performed surprised even us.

Diverse people - diverse footwear

We have been making a lot of things happen since the last blog post, which is almost a year old. This workshop is one of our personal highlights, since it manages to deliver the benefits of Lean Product Development. We are currently preparing the next lean product development workshop for a client Sven had guided in developing ideas before. Now we'll let them rinse and repeat the accelerated feedback loop technique described above. It's a great way to see why we also call it Innovation Accounting: you have a timebox/budget, and once you run out of time/budget, you ship, delivering a lean increment to the customer. Then you weave the insights from your customer's feedback into the next iteration. Accountants love this for the reduced risk, and we love the term Innovation Accounting because it underlines that Lean Startup is a more than interesting process for mature companies, too.

Introducing Expert JavaScript

about 4 years ago | Mark Daggett: Mark Daggett's Blog

As many of you know, I have spent much of the last six months writing a book on JavaScript. I am pleased to announce that last week Apress began shipping it out to stores and distribution centers everywhere. In my mind, good technical books are part mixtape, treasure map, and field journal. "Expert JavaScript" is the result of my efforts to weave these forms together into a compelling and information-rich book about JavaScript. A mixtape, for those old enough to remember, is a curated collection of songs. These tapes were often made as gifts for friends, lovers, and those in between. The mixer would craft the tape by selecting personal favorites or organizing tracks along a conceptual thread. Often these tapes were a surrogate for the mixer, a way to be remembered by the listener while the tape was playing. This book is a mixtape for JavaScript that I made for you. These chapters cover some of my favorite aspects of the language, but also include less-understood topics that are not easily explained in a tweet or blog post. The long-form format of a book affords these subjects the necessary room to breathe. As a child, I found the idea of finding a treasure map a thrilling prospect. I was captivated by the idea that anyone could become rich as long as they followed the map. This book will not lead you to buried treasure, but it is a map of sorts. I laid out these chapters to chart the inner workings of the language, which you can follow to the end. Dig through these concepts with me and you will unearth a deeper understanding of JavaScript than when you started. A field journal is kept by scientists. They are taught to keep a log of their thoughts, observations, and hunches about their subject. They may even tape leaves, petals, or other artifacts of nature between its pages. It's a highly contextual diary about a subject of study, filtered through a specific point of view.
The purpose of the field journal is to be a wealth of information that the scientist can continually mine when they are no longer in the field. "Expert JavaScript" is my field journal of JavaScript, which I wrote to return to often. I will use it to help me remember and understand the particulars of the language. I encourage you to do the same. Scribble in the margins, highlight sections, and bookmark pages. It is not a precious object; it is meant to be a living document that is improved through your use.

Table of Contents (with comments):

- Chapter 1: Objects and Prototyping (what JavaScript is and isn't)
- Chapter 2: Functions (deep dive into functions, including changes in ES6)
- Chapter 3: Getting Closure (understanding the dark arts of closures)
- Chapter 4: Jargon and Slang (lexical border guards to the community)
- Chapter 5: Living Asynchronously (promises, coroutines, web workers)
- Chapter 6: JavaScript IRL (nodebots, JohnnyFive, node-serialport, firmata)
- Chapter 7: Style (understanding programmatic style)
- Chapter 8: Workflow (a sensible workflow for JavaScript developers)
- Chapter 9: Code Quality (how to evaluate and improve quality in code)
- Chapter 10: Improving Testability (what really makes code "untestable"; hint: it's not the code)

How to setup Yeoman in Windows from scratch

about 4 years ago | Suman Paul: My Blog

If you are using a Mac or any *nix system you are right at home, but on Windows it's a little bit of work. Follow the steps below to get up and running with Yeoman on a Windows system. First, install the Ruby dependencies:

- Install Ruby for Windows from http://rubyinstaller.org/
- Open a command prompt
- Check the Ruby version […]

Difference between == and === ?

about 4 years ago | Suman Paul: My Blog

This is one of my favorite JavaScript interview questions, and one I normally start with. It turns out that the answer I knew, and in fact the answer that invariably everyone gives, was wrong. The answer I expected is: == checks the value, and === checks both value and type. Well, it turns out that although from […]
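Since the excerpt cuts off before the full answer, here is a quick sketch of the behavior the question probes. The operator == applies type coercion before comparing, while === compares without coercion; the snippet below is illustrative and not taken from the original post.

```javascript
// == coerces operand types before comparing; === compares value AND type.
var loose     = (1 == "1");          // the string "1" is coerced to the number 1
var strict    = (1 === "1");         // number vs. string: no coercion, not equal
var nullish   = (null == undefined); // special-cased by ==
var zeroFalse = (0 == false);        // false is coerced to the number 0

console.log(loose, strict, nullish, zeroFalse); // true false true true
```

The null/undefined and boolean cases are exactly where the short "value vs. value-and-type" answer starts to break down, which is presumably where the post was headed.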

The Yun Way

about 4 years ago | Sven Kräuter: makingthingshappen blog.

Originally published on the official Arduino blog. This summer I had a speaking engagement at the Codemotion conference in Berlin, which I really enjoyed for many reasons. For starters, Jule & I participated in an inspiring wearable computing workshop where we met Zoe Romano for the first time. The next day I talked about a possible and easy way to build the Internet of Things.

Presenting thoughts on & actions for building the IoT.

After the talk, it seemed like a good idea to Zoe that I should get a sneak peek at some new Arduino hardware. There weren't any more details, since it was still secret back then. Of course it didn't take me much time to consider, since I really love Arduino for making our hardware prototyping so much easier. I happily agreed to check out this new mysterious device. The talk was about how to connect anything to the internet using an open source framework I initiated called the Rat Pack, so I assumed the new hardware had something to do with online connectivity, or that something had to be connected to the internet. Turns out it was about both ;-).

Making things talk to each other online (source: Slideshare).

When Zoe told me about the Arduino Yun I was immediately stoked: an Arduino board equipped with Wi-Fi, plus access to a small embedded Linux system. How awesome is that? Exactly. I couldn't wait to get hold of the Yun, and when it finally arrived it became quite obvious to me that I had a well-thought-out, rounded product in my hands. Before I really knew what hit me, this thing took shape on our balcony:

Back then secret device, back then secret Yun.

I'll skip the amazing deeper tech details if you don't mind (uploading via wireless LAN, remote debugging, SSH access, Ruby on your Yun…). If you do mind, please tell me; I'm glad to blog about them too ;-). I'll just give you a rough outline of the journey I have gone through with the Yun so far. The first idea was to integrate it into the Rat Pack ecosystem.
Adapting the Arduino client code of the Rat Pack was fairly easy: on the Yun it simply uses Linux shell commands instead of assembling the HTTP request in Arduino C code. It's just a small detail, but it dramatically reduces the complexity of your project. You don't have to implement the HTTP calls yourself; you can rely on the workhorse that Linux is. Inspired by this first success with the Yun, I thought maybe I could reduce the complexity of the prototype of a device that we use to welcome guests at our place. I'm talking about the Bursting Bubbles Foursquare Switch.

Foursquare & Arduino powered soap bubble machine.

When you check in to our balcony with Foursquare, a soap bubble machine starts filling the air with bursting bubbles. The first prototype used an Arduino connected to an XBee WiFly to control the soap bubble machine, and a Rat Pack server that handled the Foursquare API.

Initial approach with lots of moving parts™.

Quite complex, actually, and as you might have guessed, the Yun helped reduce both the software and the hardware complexity drastically. Adding it to the project made it possible to cut off a lot of fat. The setup now only consists of the Yun connected to the soap bubble machine.

The Yun way.

What's true for the hardware is also true for the software. Have a look at the code base. The reduced complexity is achieved by processing the response of the Foursquare API on Linino instead of letting the Ruby server take care of it. And although there's much debate about JSON processing with regular expressions in general, I just used grep and a matching regexp to extract the information from Foursquare's JSON response. The parts marked green are the only ones necessary after adding the Yun to the setup.

Losing some pounds. Or rather kilobytes…

For us at making things happen, the Yun will also be the platform of choice for our Internet of Things workshops.
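As a rough illustration of the grep-style extraction described above: the actual project shells out to grep on Linino, and the JSON shape and field names below are simplified assumptions, not the real Foursquare response.

```javascript
// Simplified stand-in for a Foursquare "here now" response (assumed shape).
var response = '{"venue":{"hereNow":{"count":2}}}';

// Regex approach, analogous to grep + regexp on the Yun: quick, but brittle
// if the JSON layout ever changes.
var match = /"count":\s*(\d+)/.exec(response);
var countViaRegex = match ? parseInt(match[1], 10) : 0;

// The conventional alternative: parse the JSON and walk the structure.
var countViaParse = JSON.parse(response).venue.hereNow.count;

console.log(countViaRegex, countViaParse); // 2 2
```

On a device as constrained as the Yun's microcontroller side, piping the response through grep is a pragmatic trade-off; the debate mentioned above is about exactly this fragility.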
Until now we have used Arduinos and XBee WiFlys, since they turned out to be the most robust solution for introducing a group of people to the principles of connecting things to the internet.

Current 'IOT Basics' workshop setup.

Although this works most of the time, there is still time needed to wire things up and debug the hardware the participants build. With the Yun we can reduce the time necessary for setting things up and debugging the custom setup, and use it to concentrate on spreading our knowledge of the subject. You actually only need two wires for the basic Rat Pack example when using the Yun:

Future workshop setup: drastically reduced wiring effort.

So, on the bottom line, I see the Arduino Yun as a major milestone in making the Internet of Things available to a broader audience and empowering fellow makers and tinkerers to spend less time debugging and more time inventing.

Less complexity = more time for creativity (source: Twitter).

It will also make our workshops far less complex and let the participants spend less time setting things up and focus on their creativity. I haven't used all of its features yet, and I'm more than curious to explore more of them. The feature I'll focus on next is the possibility of accessing the pins of your Arduino via a RESTful web service. I guess I'll keep you posted about that. Thanks, Arduino, for this awesome device, and thanks for letting me have a look at it a little early. It seems like the beginning of a wonderful friendship…

Running E2E for Yeoman generated angular app

over 4 years ago | Suman Paul: My Blog

Yeoman is a great tool. It gives you a build tool, dependency management and a unit test runner out of the box. But there is no straightforward way to run the Angular E2E tests; it needs a little bit of configuration. Below are the steps that I follow to run E2E tests. Yeoman automatically generates a karma-e2e.conf.js file. We need […]
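The excerpt ends before the configuration itself. As a sketch of the kind of tweak involved, here is a hypothetical karma-e2e.conf.js fragment in the global-variable style Karma used at the time; the paths, port, and proxy are assumptions, not the post's actual steps.

```javascript
// Hypothetical karma-e2e.conf.js fragment; paths and port are assumptions.
basePath = '../';

files = [
  ANGULAR_SCENARIO,          // Angular's scenario runner, bundled with Karma's ng-scenario support
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'         // your scenario specs
];

// Serve the running app to the scenario runner and keep Karma's own
// files out of the app's URL space.
proxies = {
  '/': 'http://localhost:9000/'
};
urlRoot = '/_karma_/';
```

The key ideas are the proxy (the E2E runner drives a live server, unlike unit tests) and a distinct urlRoot so Karma's internal routes don't collide with the app's.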

Hello World

over 4 years ago | Suman Paul: My Blog

The Art of Sampling and Dangers of Generalizing

over 4 years ago | Niraj Bhandari: Technology Product Management

While driving to work today, I was listening to the radio, and suddenly a claim by the RJ (radio jockey) struck …Continue reading →

Makers Go Pro

over 4 years ago | Sven Kräuter: makingthingshappen blog.

If you have met Jule & me lately and talked about the #internetofthings, chances are high you will have heard the term "Makers Go Pro". Here is one example of a pro version of a tinker idea: a real-life Facebook likes counter.

Real life facebook by the numbers. Source: smiirl.com

So what are the differences between pro and tinker projects? In my opinion, one aspect is the quality of the hardware in terms of tech and, in most cases more strikingly, design.

Rather tech focused approach. Source: skolti.com

Of course there are also tinker projects that combine technology and design. This mainly depends on your own abilities and your network. If you aren't too much into package design, perhaps there is somebody in your network who is?

Real life facebook interaction. Source: makezine.com

What I see as the main difference between tinkering and pro maker artifacts is the ability to produce in large quantities, and to be connected to other makers or manufacturers that are able to do so.

Rat Pack IOT's circuit board. Source: pics.makingthingshappen.de

There is a rising economy of services that provide just that - hardware as a service, if you like. Send in your CAD files and let them produce your packaging, or, as in the example above, have your circuit board designs produced in professional quality. In addition, open-source the plans for your circuit board and enable everybody to reproduce it. I'm excited about all the possibilities we already have, and I guess we have some quite interesting times ahead. What we need most to take the next step towards the much-quoted next industrial revolution is a way of collaborating between all the different fields of a maker project, be it tinkering or corporate work. A thought to be discussed. So feel free to go ahead & tell us your thoughts on the subject. We're curious!

Lean Hardware

over 4 years ago | Sven Kräuter: makingthingshappen blog.

If you ask me what lean hardware development is and you have a background in software or the startup economy, the explanation is short: Lean Startup applied to physical product development. The more interesting conversation starts if you do not have a software background. When developing software products, the weapon of choice these days for minimizing risk & waste while maximizing the chances of success & return on investment is lean entrepreneurship: agile software development tightly glued to Lean Startup product development. So what does this mean for the development process of physical goods? Can it be applied to my average Internet of Things project too? You might have guessed it - the answer is: yes! I'll elaborate using as few buzzwords as possible, so you can concentrate on the advantages of the concept of lean hardware itself.

The mood lamp.

What you see here is the hardware manifestation of the starting point of my approach to developing web apps: a minimum viable product. It is a prototype that represents the minimum set of features needed to be viable to its users by delivering a certain value. It's also viable to its producers, who use it to validate the value proposition they had in mind for their customers. The question is: what is it? It's the mood light - using the Rat Pack Internet of Things framework for rapid IoT prototyping and a sentiment analysis web API to mix colors depending on the percentage of positive, neutral and negative tweets that relate to a given hashtag: red for negative, blue for neutral and green for positive moods. Rather minimal, although already proposing a certain value: delight the attendees of an event for the time of their visit by displaying the mood of the event's Twitter timeline intuitively.

Mood Lamp User Lab

In spring 2012 the fabulous Digiwomen asked if I could build something physical that's somehow related to social media, to kick off Social Media Week with a blast.
So I started that minimal, to get the product in contact with real users early and to weave in their feedback and, more importantly, their reactions to shape the face of the product. This technique is called validated learning, or rapid prototyping. Back to our value proposition: delighting the attendees of an event for the time of their visit. The first test drives of this hypothesis took place at selected local events: the picture above shows a hacktable, and I also took the prototype with me to a CoCreact! workshop where I actually sketched making things happen's value proposition, which is another story ;-). The first reaction of users to a product can be bitter medicine: you spent half a year developing something you thought they'd love, and then it turns out they don't exactly do so. Sweet time wasted. But wait - I did not spend such a ridiculous amount of time before confronting myself with first feedback, did I? The first user feedback & the first reactions are always a surprise. Sometimes rather small surprises; sometimes they are of epic dimension. In our case the first reactions were solid fun. It turns out that confronting the value proposition with The Reality™ proved it wrong. The reactions showed people were interested in this object - but delight was yet to be added. The rate of change of the colors made it more appealing for a chilled, snoozy, room-like setting. Which is cool, unless you're briefed to open an event with a big bang of attention. In this situation you can either persevere - stay on course - or pivot - change the direction you're moving in. Basically, the element of sentiment representation proved itself quite handy. Persevere. But the way I represented it needed a little pivot. Or perhaps even a huge one ;-).
Sometimes you have to pivot hard: hacked Vinyl Killer

Quite out of the blue, I started collaborating with my partner in crime Jeremy, who told me about a toy his kids played with - a little plastic VW Bulli that drives around a record, using an integrated needle and speakers to actually play the vinyl it moves on. "What's it called?" "It's a Vinyl Killer." You could tell by the look on my face: we had found our new direction. A Vinyl Killer playing a record at different speeds according to the mood of an event's Twitter stream was the way to go.

Missing Link

I prototyped the missing links: regulating the voltage of the toy via our online Arduino devices, and adjusting the software to ditch neutral moods - how boring anyway. The first users' reactions were more than promising. A quick check against the initial value proposition "delighting the attendees of an event for the time of their visit" made me finally say: value proposition validated! It proved itself fun, easy to understand and just the show act the event organizers had in mind for their opening event. Minimum Viable Product ready for action. You can have a look at this video from the event to see how it went :-). On the bottom line, please keep in mind: when developing basically anything, the lean entrepreneurship (aka Lean Startup) approach makes you flexible enough to respond to user feedback and, more importantly, to the behaviour of people using your product. Be it software, hardware or conceptual / strategic work: moving fast in slow steps reduces risks and maximizes your chances of success by making you, actually, lean.

Uncommon lessons in marketing

over 4 years ago | Amit Anand: UXcorner

One of the largest media and entertainment industries globally is Bollywood, a helm of modern classic and drama depiction of the Indian subcontinent; this industry has been in existence for over a century now! In the recent past, some of the best movies, which have fetched a good RoI, have not been the best grossers […]

NY Diaries – 1

over 4 years ago | Amit Anand: UXcorner

A recent (and my first) trip to NY reinforced principles of good design. This is a city which thrives on art, multifaceted culture and design amongst all the chaos. Little nuggets of design on every corner make this city more interesting than other busy business centers of the world. One of the oldest inhabitants being a […]

How To: build the Internet Of Things

over 4 years ago | Sven Kräuter: makingthingshappen blog.

The first serious Internet of Things project I worked on was the digital foosball table I prototyped for Sinner Schrader's Radical Innovation Lab. An interesting detail that you might not have guessed: I only built the software - a slim little Ruby API attached to a realtime web app and to the Arduino hardware. The hardware was built by one of my all-time favorite colleagues, Thomas Jacob. I admired his hardware skills and didn't dare to dream of developing something similar on my own. I was only a software guy, right?

First working prototype in action, check Wired for details.

Right, but: Arduino is perfect for getting into the subject of hardware hacking. Although I'll never reach Thomas's level of craftsmanship, I was able to build hardware that connects itself to the internet pretty fast. The first prototype was just a breadboard with a switch that could light an LED on the other side of the world. Or, in my case, on the other side of the breadboard. The projects I did afterwards all had two things in common: a lean development approach, and a variation of the API and wireless hardware shields in various forms. These experiences led to the Rat Pack Remote Control workshops that we offer, aimed at software people who want to see their Arduino projects talk to each other via a wireless internet connection.

Rat Pack Repo

Some wires attached to a few lines of code: the Rat Pack.

When I was asked to talk about the Rat Pack device at Codemotion, I was glad to offer a talk on the subject. I used it as an opportunity to distill all the previous versions into a handy package that acts as a basic building block for the IoT: the Rat Pack repo, which describes what's necessary to build a little IoT project using an Arduino Uno & Sparkfun's WiFly Shield. The idea is: use this little example, which enables you to make an LED light up globally by pressing a button locally, to connect almost anything to the internet. Almost anything?
For example, the wearable computing projects that were done at the great workshop Kobakant ran in cooperation with Zoe Romano to open Codemotion.

Jule weaving some electronic life into fabrics.

Jule and I participated, but my initial intention to develop my conductive-thread sewing skills further was somewhat derailed: inspiration struck me. I realized I could adjust the Rat Pack using the XBee Breakout Lilypad to attach the XBee WiFly that slumbered in my toolbox. I managed to do so, and the adjusted code is part of the Rat Pack repo on GitHub now. The Fritzing sketch still has to be done - I hope to see some open source / open hardware contribution love happening there soon ;-).

When inspiration strikes… (source: Arduino)

The workshop was quite exciting already. When I did my talk, I was even more excited by all the interest in the subject, and even more by the very communicative and open crowd. Breaking the first rule of live demos over and over again. So many people getting in touch with me after the talk, during the day & during the following days isn't really typical for a German tech event. I really enjoyed the multicultural, international and open vibe and took quite some inspiration with me. I hope I was of some help to the crowd too. If you missed the talk, you can get a glimpse of how it was by checking the slides: Rat Pack Remote Control - an Internet Of Things™ primer.

I was glad to do some client work in Berlin the next couple of days - which I'm also quite excited about currently, but that would be a little off topic here. I used the drive & the new encounters of the weekend to connect myself a little in this pulsating city. From the amazing Knowable crew, who are about to build a GitHub for makers - which will make my life much easier - over the very inspiring lunch breaks, to the visit to the Berlin Fab Lab - where I also ran into Kobakant's Hanna - this week in Berlin was really packed with inspiration and new encounters.
I'm on my way back to Hamburg right now, which I have missed pretty much. I will miss Berlin too, I guess; looking forward to seeing you again soon! I'd also love to see some reports of actual IoT projects built on the Rat Pack example, and some open source contributions aka pull requests to improve it even more. Curious where this project will lead!

Book Review – PhoneGap 2.x Mobile Application Development

over 4 years ago | Niraj Bhandari: Technology Product Management

It was about two weeks back when Kraig Lewis of packtpub reached out to me to do a review of …Continue reading →

Decoding Big Bazaar’s Profit Club

over 4 years ago | Niraj Bhandari: Technology Product Management

Last week I went to the newly opened Big Bazaar store close to our home and was pleasantly surprised with a …Continue reading →

Lean Lego and Hacked Tables

over 4 years ago | Sven Kräuter: makingthingshappen blog.

We are in the middle of an energetic scene: groups of two are presenting prototyped solutions to each other for problems you can have when planning a vacation. Concentrated listening, questions and answers. Feedback loop in full effect. But how did it come to this?

Sketching the prototype: the power of the back of the napkin (photo by @cuxdu)

Like many good stories, it all begins with a cliché: an agile conference, an evening get-together at the bar, two agilists, one napkin and a pen. While discussing the similarities between one core principle of StrategicPlay and the product development approach of the accelerated feedback loop, it became obvious to us that the creative problem solving approach of "diverge & converge" is also part of the quantified learning you do when applying Lean Startup techniques. All of a sudden, Katrin aka @cuxdu stated that we should prototype a workshop around the idea of, well, rapid prototyping. Assisted by StrategicPlay. We sketched the topics that had to be covered and a rough timeline. It could probably work; let's see if we still like the idea tomorrow morning before Play 4 Agile's session planning. Which we did, so there we are, right in the middle of a group of people that is really amazing to facilitate. Find problems, let your partner group pick one to solve. Prototype a solution, get feedback by showing it to your target group on the other side of the table. Ditch your solution strategy because it didn't work, or enhance the solution that hit a nerve to make it even more appealing. Pivot or persevere. Rinse and repeat. Prototyping with Lego or paper & pencil - anything goes. Amazingly, we see almost all the theoretical arguments in favor of rapid prototyping happening in the workshop: failing fast saves you budget - in this case time - to adapt to the misunderstood customer demand. If you pleased your customer with your solution, you continue putting smiles on their faces by steering in the direction you chose.
The build-measure-learn loops are accelerated by the decreasing timebox sizes Katrin & I set. One pair of groups even accelerated beyond these boundaries by producing several increments in a single timebox - quite astonishing! I think I will offer this workshop at the next open space I participate in, to iterate on it a little. One success factor here is the mix of people: agilists have the matching feedback and failure culture by default. Another one is the amazing atmosphere at the conference itself, which just inspires you. Perhaps the amazing results we saw were caused by the opening day's facilitation technique for getting everybody's mind into a creative state: three people putting some stress on the left and right brain halves for 20 seconds until you reach an open state of mind, indicated by glowing eyes. I think I was yelled at for a little longer, by the way ;-).

The same observation could be made at the hacktable we organized. It's an open format where you meet and bring things or topics with you that you want to hack on in a group.

Hacktable with Raspberry Pi & Lego case. Coincidence?

Our friends at humanist lab hosted the event - we provided hacked tables made out of spare desk materials for the hacktable, which is quite poetic, I guess ;-). The result was the same glowing eyes I saw after the brain massage at the beginning of Play 4 Agile.

Shining eyes & open minds - post-hacktable talks.

What does this mean for the workshop mentioned above? I'll continue working on it in surroundings where the eyes start glowing on their own, like a hacktable or an open space unconference. Our agile colleagues at ScrumCenter are using the format via a Creative Commons (CC-BY-SA) license for their Lean Startup product owner certification, and as you may imagine, I am more than curious about their experience.
In addition, I want to let the format prove itself in less fitting surroundings, where the mindset of the people perhaps isn't so biased towards an open failure and feedback culture, to see how much work it takes to get people into the mood for an accelerated feedback loop. If you are interested in hosting this workshop yourself or with my facilitation, go ahead and get in touch. I'm curious how the idea will develop and will try to keep you posted!

USP/Points of differentiation : Quikr vs OLX – Any one ?

over 4 years ago | Niraj Bhandari: Technology Product Management

I got thinking about Quikr and OLX the other day after watching their TV commercials (TVCs) back to back. …Continue reading →

Functional Illiteracy In JavaScript

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

When someone cannot read or write in their native language, they are considered functionally illiterate. This level of illiteracy means that they subsist in their daily life through their ability to speak fluently and to recognize certain written keywords. Illiteracy is not a sign of stupidity; in many cases it is the result of a lack of opportunity to learn. However, illiteracy does stunt the potential of otherwise bright people. The sad fact is that their inability to participate in society through the mastery of language puts them at higher risk of poverty and crime. Most computer languages are written, not spoken (try speaking JavaScript out loud and you’ll see what I mean). Therefore, being able to write code does not make you literate. Being an illiterate developer means that you skim across the surface of the language, copying snippets of code from others, trying to cobble together a working program with little or no understanding of how or why it works. As with illiterates in the wider world, illiterate developers are not unintelligent. Often it means that they didn’t have the luxury of taking a deep dive through the mechanics of the programming language. Many illiterate developers are practicing software professionals, backed into a corner by impending deadlines or a lack of resources. Perhaps they started in other fields such as graphic design or business and find themselves scurrying along the surface of the language, learning in fits and starts as they go along.

Dynamic Spotlight Effect Using CSS and JavaScript

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

In casual gaming there is a convention whereby the player is introduced to the interface during the first play cycle. Typically, this involves a character from the game pointing out aspects of the interface and telling the player how to use it and why they should care. Ideally, you want to visually draw the attention of the player to the relevant component of the interface as the characters are explaining it. For this purpose I created a JavaScript class which will spotlight a portion of the screen using only CSS and JavaScript. Here is an example of it working. The class allows you to configure the following spotlight attributes: starting x,y (integer); destination x,y (integer); aperture (0%-100%); duration (milliseconds); callback when the animation is complete (function). Below is the CSS and JavaScript you’ll need to use it in your own projects. If you improve this script please let me know.

<div id="spotLight"></div>

#spotLight {
  width: 1024px;
  height: 768px;
  z-index: 9;
  position: absolute;
  display: none;
}

function SpotLight(element) {
  this.element = element;
  this.x = element.width() / 2;
  this.y = element.height() / 2;

  this.show = function() {
    element.hide();
    element.removeClass("hide");
    return element.fadeIn('fast');
  };

  this.hide = function(callback) {
    element.fadeOut('fast', function() {
      if (callback) {
        return callback();
      }
    });
    return element.addClass("hide");
  };

  this.move = function(opts) {
    var endX, endY, obj;
    obj = $.extend({}, {
      start_x: this.x,
      start_y: this.y,
      x: this.x,
      y: this.y,
      aperture: "50%",
      duration: 1000,
      done: function() {}
    }, opts);
    endX = obj.x;
    endY = obj.y;
    obj.x = obj.start_x;
    obj.y = obj.start_y;
    return jQuery(obj).animate({ x: endX, y: endY }, {
      duration: obj.duration,
      step: function() {
        var style, _i, _len, _ref;
        _ref = [
          "-moz-radial-gradient(" + this.x + "px " + this.y + "px, ellipse cover, rgba(0,0,0,0) 0%, rgba(0,0,0,0.8) " + this.aperture + ", rgba(0,0,0,0.8) 100%)",
          "-webkit-gradient(radial, " + this.x + "px " + this.y + "px, 0px, " + this.x + "px " + this.y + "px, 100%, color-stop(0%,rgba(0,0,0,0)), color-stop(" + this.aperture + ",rgba(0,0,0,0.8)), color-stop(100%,rgba(0,0,0,0.8)))",
          "-webkit-radial-gradient(" + this.x + "px " + this.y + "px, ellipse cover, rgba(0,0,0,0) 0%,rgba(0,0,0,0.8) " + this.aperture + ",rgba(0,0,0,0.8) 100%)",
          "-o-radial-gradient(" + this.x + "px " + this.y + "px, ellipse cover, rgba(0,0,0,0) 0%,rgba(0,0,0,0.8) " + this.aperture + ",rgba(0,0,0,0.8) 100%)",
          "-ms-radial-gradient(" + this.x + "px " + this.y + "px, ellipse cover, rgba(0,0,0,0) 0%,rgba(0,0,0,0.8) " + this.aperture + ",rgba(0,0,0,0.8) 100%)",
          "radial-gradient(ellipse at " + this.x + "px " + this.y + "px, rgba(0,0,0,0) 0%,rgba(0,0,0,0.8) " + this.aperture + ",rgba(0,0,0,0.8) 100%)"
        ];
        for (_i = 0, _len = _ref.length; _i < _len; _i++) {
          style = _ref[_i];
          element.css({ "background": style });
        }
        return true;
      },
      done: obj.done
    });
  };
  return this;
}

// Example Usage:
var spotLight = new SpotLight($("#spotLight"));
spotLight.show();
spotLight.move({ x: 150, y: 650 });

racing and profiling

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

I’ve been experimenting with various ways to profile and explore JavaScript as it executes in the runtime environment. Mostly I’ve been experimenting with the rKelly and rubyracer gems. Both gems are written by people much smarter than myself, so there is lots to learn and explore inside their source. I was talking to the very friendly Charles Lowell, creator of the rubyracer, and he shared this great snippet with me, which allows you to turn on the v8 profiler while the rubyracer is running. Because this is an undocumented hook I thought I’d share it here:

ruby -Ilib -Iext -rv8 -e 'V8::C::V8::SetFlagsFromString("--prof"); V8::Context.new() {|c| puts c.eval("5 + 1")}; V8::C::V8::PauseProfiler()'

This will produce a v8.log file wherever you executed the script from. Inside the file there is a gluttonous amount of data, which will take some time to parse through, but in general it looks a bit like this:

code-creation,LoadIC,0x127fc3e29140,181,"A load IC from the snapshot"
code-creation,KeyedLoadIC,0x127fc3e29200,181,"A keyed load IC from the snapshot"
code-creation,StoreIC,0x127fc3e292c0,183,"A store IC from the snapshot"
code-creation,KeyedStoreIC,0x127fc3e29380,183,"A keyed store IC from the snapshot"
code-creation,Builtin,0x127fc3e29440,97,"A builtin from the snapshot"
code-creation,Builtin,0x127fc3e294c0,137,"A builtin from the snapshot"
code-creation,Script,0x127fc3e14e20,980,"native string.js",0x2e87cc50ec50,
code-creation,LazyCompile,0x127fc3e15500,1616,"SetUpString native string.js:940",0x2e87cc5129c8,
code-creation,LazyCompile,0x127fc3e15be0,472," native string.js:36",0x2e87cc512ab0,
code-creation,Script,0x127fc3e15dc0,336,"native array.js",0x2e87cc512e00,
code-creation,LazyCompile,0x127fc3e15f20,2544,"SetUpArray native array.js:1469",0x2e87cc5175b0,
code-creation,LazyCompile,0x127fc3e16920,340,"SetUpArray.b native array.js:1482",0x2e87cc517668,
code-creation,Script,0x127fc3e16b00,552,"native regexp.js",0x2e87cc5177f0,
code-creation,LazyCompile,0x127fc3e16d40,388,"RegExpConstructor native regexp.js:86",0x2e87cc518a70,
code-creation,LazyCompile,0x127fc3e16ee0,280,"RegExpMakeCaptureGetter native regexp.js:363",0x2e87cc519288,
code-creation,LazyCompile,0x127fc3e17000,668," native regexp.js:364",0x2e87cc519340,
code-creation,LazyCompile,0x127fc3e172a0,2304,"SetUpRegExp native regexp.js:403",0x2e87cc519488,
code-creation,LazyCompile,0x127fc3e17ba0,292,"SetUpRegExp.a native regexp.js:422",0x2e87cc519540,
code-creation,LazyCompile,0x127fc3e17ce0,256,"SetUpRegExp.c native regexp.js:426",0x2e87cc519658,
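Each record in the dump is just a comma separated line, so a quick throwaway parser goes a long way when digging through it. This is my own sketch, not part of the rubyracer; parseCodeCreation is a hypothetical helper and assumes the quoted name contains no embedded commas:

```javascript
// Hypothetical helper: pull the interesting fields out of a
// code-creation record from a v8.log dump.
function parseCodeCreation(line) {
  var parts = line.split(',');
  return {
    type: parts[1],                      // e.g. "LazyCompile"
    address: parts[2],                   // e.g. "0x127fc3e29140"
    size: parseInt(parts[3], 10),        // code size in bytes
    name: parts[4].replace(/^"|"$/g, '') // strip the surrounding quotes
  };
}

var record = parseCodeCreation(
  'code-creation,Builtin,0x127fc3e29440,97,"A builtin from the snapshot"'
);
// record.type => "Builtin", record.size => 97
```

Filtering the dump down to, say, all the LazyCompile entries then becomes a one-line map over the file’s lines.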

Javascript ParseTrees

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

I’ve been experimenting with the rkelly Ruby gem to help me explore the JavaScript parse tree. It is really fascinating, and I can see myself spending a lot of time spelunking through the language. Here is a simple example using the gem to iterate over each node in the parse tree and print out its type. Stay tuned, more to come!

require 'rubygems'
require 'rkelly'

parser = RKelly::Parser.new
src = <<EOF
// Create scrollLeft and scrollTop methods
jQuery.each( {scrollLeft: "pageXOffset", scrollTop: "pageYOffset"}, function( method, prop ) {
  var top = "pageYOffset" === prop;

  jQuery.fn[ method ] = function( val ) {
    return jQuery.access( this, function( elem, method, val ) {
      var win = getWindow( elem );

      if ( val === undefined ) {
        return win ? win[ prop ] : elem[ method ];
      }

      if ( win ) {
        win.scrollTo(
          !top ? val : window.pageXOffset,
          top ? val : window.pageYOffset
        );
      } else {
        elem[ method ] = val;
      }
    }, method, val, arguments.length, null );
  };
});

function getWindow( elem ) {
  return jQuery.isWindow( elem ) ? elem : elem.nodeType === 9 && elem.defaultView;
}
EOF

ast = parser.parse(src)

=begin
Outputs something like this as it traverses the parse tree:
RKelly::Nodes::SourceElementsNode
RKelly::Nodes::ExpressionStatementNode
RKelly::Nodes::FunctionCallNode
RKelly::Nodes::DotAccessorNode
RKelly::Nodes::ResolveNode
RKelly::Nodes::ArgumentsNode
RKelly::Nodes::ObjectLiteralNode
RKelly::Nodes::PropertyNode
...
=end
ast.each do |node|
  puts node.class
end

Getting Closure

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

Understanding the Dark Arts of JavaScript Closures "No matter where you go, there you are." - Buckaroo Banzai The purpose of this post is to explain how closures work in plain English, and to give a few compelling examples where the use of closures really improves the quality of your code. Like many others I am a self-taught programmer, and a little over a decade ago I was also a freshly minted Creative Director working in Los Angeles. I was employed by a major footwear brand, and had inherited a team of very bright and technically gifted programmers. I felt that I needed to learn enough code to speak intelligently with them. I didn’t want to propose a feature that wasn’t possible, and more importantly I wanted to understand the promise and the problems inherent in the medium we were building within. More generally though, I am just a very curious person who likes to learn. Once I started to pull that thread the world of programming began to unwind for me. Now, years later, here I sit writing about the internals of JavaScript. Because my computer science education has been ad hoc, there are many core concepts in JavaScript (and programming in general) that I wanted to understand better. My hypothesis is that there are others like me who have been using and abusing JavaScript for years. For this reason I decided to write on closures, an often used but equally often misunderstood concept in JavaScript. Closures are important for a variety of reasons: They are both a feature and a philosophy that, once understood, makes many other concepts in JavaScript (e.g. data binding, promise objects) easier. They are one of the most powerful internals of the language, which many other so-called real languages don’t support. They are where JavaScript is trending due to the rise in popularity of asynchronous execution. For all the potential benefits that closures offer, there is a black magic quality to them that can make them hard to understand. 
Let’s start with a definition: A closure is the act of binding all free variables and functions into a closed expression that persists beyond the lexical scope from which they were created. While this is a succinct definition it is pretty impenetrable for the uninitiated; let’s dig deeper. The Straight Dope On Scope Before we can truly understand closures we must take a step back and look at how scope works in JavaScript. When reading about JavaScript you will periodically see writers make reference to lexical scope, or the current and/or executing scope. Lexical scope simply means that where a statement is placed within the body of the script is important and affects how it can be accessed, and what in turn it has access to. In JavaScript, unlike other languages, the only way to create a new scope is through a function invocation [1]. This is what programmers mean when they say JavaScript has function level scoping. This form of scoping may be counterintuitive to programmers coming from languages that support block-level scoping, e.g. Ruby. The following example demonstrates lexical scope:

// Free Variable
var iAmFree = 'Free to be me!';

function canHazAccess(notFree){
  var notSoFree = "i am bound to this scope";

  // => "Free to be me!"
  console.log(iAmFree);
}

// => ReferenceError: notSoFree is not defined
console.log(notSoFree)

canHazAccess();

As you can see, the function declaration canHazAccess() can reference the iAmFree variable; this is because the variable belongs to the enclosing scope. The iAmFree variable is an example of what in JavaScript is called a free variable [2]. Free variables are any non-local variables which the function body has access to. To qualify as a free variable it must be defined outside the function body and not be passed as a function argument. Conversely, we see that referencing notSoFree from the enclosing scope produces an error. 
This is because at the point at which this variable was defined it was inside a new lexical scope (remember, function invocation creates a new scope). Put another way, function level scopes act like one-way mirrors; they let elements inside the function body spy on variables in the outer scope, while they themselves remain hidden. As we’ll see below, closures short-circuit this relationship, and provide a mechanism whereby the inner scope’s internals can be accessed by the outer scope. Thisunderstandings One feature of scopes that routinely throws developers off (even seasoned ones) is the use of the this keyword as it pertains to the lexical scope. In JavaScript the this keyword always refers to the owner of the scope from which it is executing. Misunderstanding how this works can cause all sorts of weird errors where a developer assumes they are accessing a particular scope but are actually using another. Here is how this might happen:

var Car, tesla;

Car = function() {
  this.start = function() {
    console.log("car started");
  };
  this.turnKey = function() {
    var carKey = document.getElementById('car_key');
    carKey.onclick = function(event) {
      this.start();
    };
  };
  return this;
};

tesla = new Car();

// Once a user clicks the #car_key element they will see
// "Uncaught TypeError: Object has no method 'start'"
tesla.turnKey();

The developer who wrote this was headed in the right direction, but ultimately a thisunderstanding forced them off the rails. They correctly bound the click event to the car_key DOM element. However, they assumed that nesting the click binding inside the Car class would give the DOM element a reference to the car’s this context. The approach is intuitive and looks legit, especially based on what we know about free variables and lexical scope. Unfortunately, it’s hopelessly borked; because as we learned earlier, a new scope is created each time a function is invoked. 
Once the onclick event fired, this referred to the DOM element, not the Car class. Developers sometimes get around this scoping confusion by assigning this to a local free variable (e.g. that, _this, self, me). Here is the previous method rewritten to use a local free variable instead of this:

var Car, tesla;

Car = function() {
  this.start = function() {
    console.log("car started");
  };
  this.turnKey = function() {
    var that = this;
    var carKey = document.getElementById('carKey');
    carKey.onclick = function(event) {
      that.start();
    };
  };
  return this;
};

tesla = new Car();

// Once a user clicks the #carKey element they will see "car started"
tesla.turnKey();

Because that is a free variable, it won’t be redefined when the onclick event is triggered. Instead it remains as a pointer to the previous this context. Technically, this solves the problem, and I am going to resist the urge to call this an anti-pattern (for now). I have used this technique thousands of times over the years. However, it always felt like a hack, and fortunately closures can help us marshal scopes in a much more elegant way. My First Closure In its most basic form a closure is simply an outer function that returns an inner function. Doing this creates a mechanism to return an enclosed scope on demand. Here is a simple closure:

function outer(name) {
  var hello = "hi",
      inner;

  return inner = function() {
    return hello + " " + name;
  };
}

// Create and use the closure
var name = outer("mark")();

// => 'hi mark'
console.log(name);

In this example you can see that the local variable hello can be used in the return statement of the inner function. At the point of execution hello is a free variable belonging to the enclosing scope. 
This example borders on meaninglessness though; let’s look at a slightly more complex closure:

var car;

function carFactory(kind) {
  var wheelCount, start;
  wheelCount = 4;
  start = function() {
    console.log('started with ' + wheelCount + ' wheels.');
  };

  // Closure created here.
  return (function() {
    return {
      make: kind,
      wheels: wheelCount,
      startEngine: start
    };
  }());
}

car = carFactory('Tesla');

// => Tesla
console.log(car.make);

// => started with 4 wheels.
car.startEngine();

Why Use Closures Now that we know what closures are, let’s look at some use cases where they can elegantly solve common problems in JavaScript. Object Factories The previous closure implements what is commonly known as the Factory Pattern [3]. In keeping with the Factory Pattern, the internals of the factory can be quite complex but are abstracted away in part thanks to the closure. This highlights one of the best features of closures, which is their ability to hide state. JavaScript doesn’t have the concept of private or protected contexts, but using closures gives us a good way to emulate some level of privacy. Create A Binding Proxy As promised, let’s revisit the Car class we wrote earlier. We solved the scoping problem by assigning the outer function’s this reference to a that free variable. Instead of that approach we’ll solve it through the use of closures. First we create a reusable closure function called proxy, which takes a function and a context and returns a new function with the supplied context applied. Then we wrap the onclick function with our proxy and pass in the this that references the current instance of the Car class. Coincidentally, this is a simplified version of what jQuery does in their own proxy function [4]. 
var Car, proxy, tesla;

Car = function() {
  this.start = function() {
    return console.log("car started");
  };
  this.turnKey = function() {
    var carKey;
    carKey = document.getElementById("carKey");
    carKey.onclick = proxy(function(event) {
      this.start();
    }, this);
  };
  return this;
};

// Use a closure to bind the outer scope's reference to this
// into the newly created inner scope.
proxy = function(callback, self) {
  return function() {
    return callback.apply(self, arguments);
  };
};

tesla = new Car();

// Once a user clicks the #carKey element they will see "car started"
tesla.turnKey();

Contextually Aware DOM Manipulation This example comes directly from Juriy Zaytsev’s excellent article "Use Cases for JavaScript Closures" [5]. His example code demonstrates how to use a closure to ensure a DOM element has a unique ID. The larger takeaway is that you can use closures as a way to maintain internal states about your program in an encapsulated manner.

var getUniqueId = (function() {
  var id = 0;
  return function(element) {
    if (!element.id) {
      element.id = 'generated-uid-' + id++;
    }
    return element.id;
  };
})();

var elementWithId = document.createElement('p');
elementWithId.id = 'foo-bar';

var elementWithoutId = document.createElement('p');

// => 'foo-bar'
getUniqueId(elementWithId);

// => 'generated-uid-0'
getUniqueId(elementWithoutId);

Singleton Module Pattern Modules are used to encapsulate and organize related code together under one roof. Using modules keeps your codebase cleaner and easier to test and reuse. Attribution for the Module Pattern is typically given to Richard Cornford [6], though a number of people, most notably Douglas Crockford, are responsible for popularizing it. The Singleton Module is a flavor that prevents more than one instance of the object from existing. It is very useful for instances where you want several objects to share a resource. 
A much more in depth example of the Singleton Module can be found here [7], but for now consider the following example:

// Create a closure
var SecretStore = (function() {
  var data, secret, newSecret;

  // Emulation of private variables and functions
  data = 'secret';
  secret = function() {
    return data;
  };
  newSecret = function(newValue) {
    data = newValue;
    return secret();
  };

  // Return an object literal, which is the only way to
  // access the private functions and variables
  return {
    getSecret: secret,
    setSecret: newSecret
  };
})();

var secret = SecretStore;

// => "secret"
console.log(secret.getSecret());

// => "foo"
console.log(secret.setSecret("foo"));

// => "foo"
console.log(secret.getSecret());

var secret2 = SecretStore;

// => "foo"
console.log(secret2.getSecret());

TLDR Takeaways

Lexical scope gives importance to where code is located within the script body.
Free variables are any non-local variables which the function body has access to.
The only way for new scopes to be created in JavaScript is through function invocation.
The *this* keyword always refers to the owner of the scope from which it is executing.
A closure allows a function to access variables outside of its lexical scope.

[1] http://howtonode.org/what-is-this
[2] http://en.wikipedia.org/wiki/Free_variable
[3] http://en.wikipedia.org/wiki/Factory_method_pattern
[4] https://github.com/jquery/jquery/blob/master/src/core.js#L685
[5] http://msdn.microsoft.com/en-us/magazine/ff696765.aspx
[6] http://groups.google.com/group/comp.lang.javascript/msg/9f58bd11bd67d937
[7] http://www.addyosmani.com/resources/essentialjsdesignpatterns/book/#singletonpatternjavascript
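The takeaways above can be compressed into one tiny example. This is my own sketch (makeCounter does not appear in the article): the inner function keeps access to count long after makeCounter has returned, which is the whole trick.

```javascript
function makeCounter() {
  var count = 0; // a free variable from the inner function's point of view
  return function() {
    count += 1;  // still accessible after makeCounter's scope has exited
    return count;
  };
}

var tick = makeCounter();
tick(); // => 1
tick(); // => 2

// Each call to makeCounter creates a fresh scope, so counters don't share state.
var otherTick = makeCounter();
otherTick(); // => 1
```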

CYA With CSS

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

Using Design Time Classes To Polish Your Product This post is dedicated to CYA with CSS; for the uninitiated, CYA means "cover your ass", and I assume that anyone reading my blog already knows what CSS is. Just as you can craft the JavaScript on your website to act defensively against unforeseen errors, so too can you use CSS at the design stage to ensure you don’t end up with egg on your face post-launch. A while back, I was viewing Github’s source code (man I sound like such a nerd), and I noticed these classes added to their body tag: "logged_in page-dashboard macintosh env-production". Several of these classes are obviously progressive enhancement style additions meant to change the layout / features of the page based on the visitor’s browser. In my own sites I often include the controller and action params in the body tag so that I can scope my JavaScript and CSS executions. Doing this provides a convenient way to namespace your CSS and JS, without having to worry about polluting the global namespace. However, one of Github’s additions stuck out at me: "env-production". I have to imagine that Kyle Neath was the one who added this to the page, and that he did it because he wants the site to render differently based on the runtime environment of the server. I thought about the possibilities of this technique and figured out that there are probably a whole host of ways to use these design time classes, the use of which would help ensure a polished final project. Here are just a couple of examples of how you might use them:

1. If you are using a specific grid layout you could set an image to appear as a background-image of the body. Doing this would ensure your page conforms to the correct visual spacing and vertical rhythm. I know that "Blueprint CSS" used to have something like this back in the day. It might look something like this:

body.env-development {
  background: url('/assets/grid.png') no-repeat scroll top left !important;
}

2. Often as developers we’ll mock in a bit of functionality that the design calls for with the intention of making it work later. Unfortunately, this can mean that dead links get deployed. Here is how you could use a CSS selector and a design time class to color code all the links without an href attribute. This example adds a gaudy eye-searing color to all the dead links, to ensure you fix them before you deploy into production:

body.env-development a:not([href]) {
  color: #00FF00 !important;
  background-color: #ff00ff !important;
}

The best thing about design time classes is that because they are properly scoped to the body they just disappear in the production environment. This means you don’t have to worry about them being seen by the end user. If you are using Rails it’s a pretty straightforward process to get these classes into your application:

%body{ :class => "#{app_classes}" }

In your application helper you’d add something like this:

def app_classes
  "env-#{Rails.env} #{params[:controller].gsub('/',' ')} #{params[:action]}"
end

Kyle suggested over Twitter that another good use is to change the favicon based on the server environment. More To Come Do you use design time classes? If so, what are they? Share them in the comments or as a gist and maybe we can develop a nice resource of helpful snippets for others.

The Great Indian Air War-Fare

almost 5 years ago | Niraj Bhandari: Technology Product Management

Yeah, you read it right – it is warfare over "Air Fare". It all started one fine day when …Continue reading →

JavaScript Jigs

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

In the excellent book "The Pragmatic Programmer: From Journeyman to Master", Hunt and Thomas use the metaphor of a woodworker’s jig to describe how a smart programmer reduces the repetitive nature of coding by creating reusable templates or code generators: "When woodworkers are faced with the task of producing the same thing over and over, they cheat. They build themselves a jig or a template. If they get the jig right once, they can reproduce a piece of work time after time. The jig takes away complexity and reduces the chances of making mistakes, leaving the craftsman free to concentrate on quality." To be a jig the solution must be highly specific and good for one task, for example making a complex cut. At first you might want to conflate jigs and design patterns, because they are both reusable solutions to a problem. But jigs are precise where design patterns are generalized. While Hunt and Thomas said jigs are generators, I will use them in the context of helpers: friendly little functions or classes that do one thing well. Many of the most popular JavaScript libraries started as a collection of jigs. Prototype and jQuery, for example, were initially just collections of reusable snippets that acted like speed-boosts and shortcuts for discrete problems. What follows is a collection of jigs that are useful in modern JavaScript applications. Self Executing Functions The immediately invoked function expression (IIFE) is one jig you will see various libraries and frameworks use repeatedly. In its most basic form it can be written in a couple of ways:

;(function(){ ... })();

;!function(){ ... }();

;-function(){ ... }();

;+function(){ ... }();

;~function(){ ... }();

// Not Recommended
;void function(){ ... }();

// Not Recommended
;delete function(){ ... }();

The beauty of the IIFE is that it uses a unary expression to coerce a function declaration, which would normally need to be explicitly called, into a function expression that can self-execute. Behind the scenes JavaScript runs a unary operation on the function declaration; the result of that operation is the function expression, which is immediately invoked with the trailing parentheses "()". Besides being elegant code the IIFE also affords the following: It provides a closure which prevents naming conflicts. It provides elegant block scoping. It prevents pollution of the global namespace. It encourages the developer to think in terms of modular code. One other point worth mentioning is the use of the semicolon prepending the statement. Adding this provides a bit of defensive programming against other malformed modules that might be missing a trailing semicolon; without it, this expression could be absorbed into the preceding module. This can often occur when multiple scripts are concatenated together as part of a deploy process. It is highly recommended that you follow this convention to protect yourself against mystery bugs in production. Modules Modules are very common in many programming languages, though JavaScript doesn’t have a native representation for them. As such, developers have created specs for encapsulating your code inside a reusable module. The following code is based off an example in the "Principles of Writing Consistent, Idiomatic JavaScript" [1]. There are a couple of elements that should be called out in this jig: We see two different examples of the self executing function jig being used. This is to ensure proper closure around the module itself and the initializer function that adds it to the global namespace. Invoking this function returns an object with a bound reference to the private variable "data". This allows the developer to enforce the use of getters and setters for access to the data variable. 
;!function(global) {
  var Module = (function() {

    // Mostly private variable
    var data = 'secret';

    return {
      bool: true,
      string: 'a string',
      array: [1, 2, 3, 4],
      object: {
        lang: "en-Us"
      },
      getData: function() {
        return data;
      },
      setData: function(value) {
        return (data = value);
      }
    };
  })();

  // Expose our module to the global object.
  global.Module = Module;
}(this);

safeEval The eval function and its siblings setTimeout, setInterval and Function all have access to the JavaScript compiler, which means using them is a bit like running with scissors. Since eval typically does more harm than good, people try to work around it as much as possible. This jig does just that, giving you eval-like features without calling the function.

// A string representation of an object, similar to what you might get with JSON.
var dataString = '{"foo":"bar"}';

;!function(global, data){
  // The variable named by `data` is replaced with the evaluated code.
  global[data] = new Function("return " + global[data])();
}(this, "dataString");

// dataString is now Object {foo: "bar"}

PubSub PubSub is short for a publish-subscribe message system, where objects ask to receive messages that are broadcast by publishers. The main advantage of PubSub is that the subscribers are loosely coupled, allowing just about any object to publish and subscribe to messages. PubSub systems have also proven to scale much more nicely than tightly coupled client / server paradigms. This implementation of PubSub was written by Ben Alman and can be downloaded from his Github account [2]. Let’s take a look at this jig in detail. Again, the first thing you should notice is that this jig uses the IIFE jig too (see a pattern yet?). This jig depends on jQuery for access to the "on", "off", and "trigger" functions. It keeps its list of subscribers inside the internal jQuery-wrapped object "o". When a message is broadcast, all the subscribers have the arguments supplied by the publisher transferred to them.

;(function($) {
  var o = $({});

  $.subscribe = function() {
    o.on.apply(o, arguments);
  };

  $.unsubscribe = function() {
    o.off.apply(o, arguments);
  };

  $.publish = function() {
    o.trigger.apply(o, arguments);
  };
}(jQuery));

// Usage Examples

// Creates a "named" logging function.
function createLogger(name) {
  return function(event, a, b) {
    // Skip the first argument (event object) but log the name and other args.
    console.log(name, a, b);
  };
}

// Subscribe to the "foo" topic (bind to the "foo" event, no namespace).
$.subscribe('foo', createLogger('foo'));

// Subscribe to the "foo.bar" topic (bind to the "foo" event, "bar" namespace).
$.subscribe('foo.bar', createLogger('foo.bar'));

/*
 * logs:
 * foo 1 2
 * foo.bar 1 2
 */
$.publish('foo', [1, 2]);

/*
 * logs:
 * foo.bar 3 4
 */
$.publish('foo.bar', [3, 4]);

Your Jigs Go Here Please send me your favorite jigs. I would love to expand this post with more great jigs. Footnotes

[1] https://github.com/rwldrn/idiomatic.js/
[2] https://github.com/cowboy/jquery-tiny-pubsub

Functions Explained

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

A Deep Dive into JavaScript Functions

Based on my readership I have to assume most of you are familiar with JavaScript already. Therefore, it may seem odd to include a post on functions. After all, they are one of the most rudimentary components of JavaScript. My assertion is this: just as a person can speak a language without the ability to read or write it, so too can developers use functions in JavaScript and yet be blissfully unaware of their complexities. Typically, developers only become aware of the specifics of functions when something they wrote explodes in their face. My goal in this section is to expose the intricacies of JavaScript functions to you, which will hopefully save you from having to pull syntactic shrapnel from your codebase. A word of caution before we begin: JavaScript is only as good as its interpreter. While the concepts we'll consider are well covered in the language spec, that does not mean all runtime environments work the same way. In other words, your mileage may vary. This section discusses common misconceptions about JavaScript functions and the silent bugs they introduce. However, debugging functions in detail is not covered; fortunately, debugging has been documented by others in the JavaScript community, especially in Juriy Zaytsev's excellent article "Named Function Expressions Demystified" [1].

Blocks in JavaScript

Before we can understand functions in JavaScript we have to understand blocks. JavaScript blocks are nothing more than statements grouped together. Blocks start with a left curly bracket "{" and end with a right one "}". Simply put, blocks allow the statements inside the brackets to be executed together. Blocks form the most basic control structure in JavaScript.
The following are a few examples of how blocks are used in JavaScript:

// Block as an anonymous self-executing function
;!function () {
  var triumph = false,
    cake = false,
    satisfaction = 0,
    isLie,
    note;

  // Block used as part of a function expression
  var isLie = function (val) {
    return val === false;
  }

  // Block used as part of a conditional statement
  if (isLie(cake)) {
    triumph = true;
    makeNote('huge success');
    satisfaction += 10;
  }

  // Block used as part of a function declaration
  function makeNote(message) {
    note = message;
  }
}();

As we saw above, functions are essentially named blocks which the developer can invoke on demand. This is easy to demonstrate:

// The inline conditional block statement is executed only once per cycle.
if (isLie(cake)) {
  ...
}

function makeNote(message) {
  ...
}

// The function declaration is executed as many times as it is called.
makeNote("Moderate Success");
makeNote("Huge Success");

Function Arguments

Functions, like control flow statements (if, for, while, etc.), can be initialized by passing arguments into the function body. In JavaScript, variables are either a complex type (e.g. Object, Array) or a primitive type (e.g. String, Number). When a complex object is supplied as an argument, it is passed by reference to the function body: instead of sending a copy of the variable, JavaScript sends a pointer to its location in memory. Conversely, when passing a primitive type to a function, JavaScript passes by value. This difference can lead to subtle bugs, because conceptually we often treat functions as black boxes and assume they can only affect the enclosing scope by returning a variable. With pass by reference, the argument object is modified even though it may not be returned by the function.
Pass by reference and pass by value are demonstrated below:

var object = {
    'foo': 'bar'
  },
  num = 1;

// Passed by reference
;!function(obj) {
  obj.foo = 'baz';
}(object);

// => Object {foo: "baz"}
console.log(object);

// Passed by value
;!function(num) {
  num = 2;
}(num);

// => 1
console.log(num);

Function Types

Now that we have a better understanding of blocks and arguments, let's dive deeper into function declarations and function expressions, the two types of functions used in JavaScript. To the casual reader the two appear very similar:

// Function Declaration
function isLie(cake) {
  return cake === true;
}

// Function Expression
var isLie = function (cake) {
  return cake === true;
}

The only real difference between the two is when they are evaluated. A function declaration can be accessed by the interpreter as it is being parsed. The function expression, on the other hand, is part of an assignment expression, which prevents JavaScript from evaluating it until the program has completed the assignment. This difference may seem minor, but the implications are huge; consider the following example:

// => Hi, I'm a function declaration!
declaration();

function declaration() {
  console.log("Hi, I'm a function declaration!");
}

// => Uncaught TypeError: undefined is not a function
expression();

var expression = function () {
  console.log("Hi, I'm a function expression!");
}

As you can see in the previous example, the expression function threw an exception when it was invoked, but the declaration function executed just fine. This exception gets to the heart of the difference between declaration and expression functions. JavaScript knows about the declaration function and can parse it before the program executes. Therefore, it doesn't matter if the program invokes the function before it is defined.
This is because behind the scenes JavaScript has hoisted the function to the top of the current scope. The function expression is not evaluated until it is assigned to a variable; therefore it is still undefined when invoked. This is why good code style is to define all variables at the top of the current scope; had we done this, our script would visually match what JavaScript is doing during parse time. The concept to take away is that during parse time JavaScript moves all function declarations to the top of the current scope. This is why it doesn't matter where declaration functions appear in the script body. To further explore the distinctions between declarations and expressions, consider the following:

function sayHi() {
  console.log("hi");
}

var hi = function sayHi() {
  console.log("hello");
}

// => "hello"
hi();

// => 'hi'
sayHi();

Casually reading this code, one might assume that the function declaration would get clobbered because the function expression has an identical name. However, since the second function is part of an assignment expression, it is given its own scope, and JavaScript treats them as separate entities. To make things even more confusing, look at this example:

var sayHo

// => function
console.log(typeof (sayHey))

// => undefined
console.log(typeof (sayHo))

if (true) {
  function sayHey() {
    console.log("hey");
  }
  sayHo = function sayHo() {
    console.log("ho");
  }
} else {
  function sayHey() {
    console.log("no");
  }
  sayHo = function sayHo() {
    console.log("no");
  }
}

// => no
sayHey();

// => ho
sayHo();

In the previous example we saw that functions of the same name were considered different if one was an expression and the other was a declaration. In this example we are attempting to conditionally define a function based on how the program executes.
Reading the script's control flow, you'd expect sayHey to return "hey" since the conditional statement evaluates true. Instead it returns "no", meaning the second version of the sayHey function clobbered the first. Even more confusing is that the sayHo function behaves the opposite way! Again, the difference comes down to parse time versus runtime. We already learned that when JavaScript parses the script it collects all of the function declarations and hoists them to the top of the current scope. When this happens it clobbers the first version of sayHey with the second, because they exist in the same scope; this explains why sayHey returns "no." We also know that function expressions are ignored by the parser until the assignment process completes. Assignment happens during runtime, which is also when the conditional statement is evaluated; that explains why sayHo could be conditionally defined. The key to remember here is that function declarations cannot be conditionally defined. If you need conditional definition, use a function expression. Furthermore, function declarations should never be made inside a control flow statement, due to the different ways interpreters handle them.

Function Scopes

Unlike many other languages, which are scoped to the block, JavaScript is scoped to the function. In Ruby (version 1.9+) you can write this:

x = 20

10.times do |x|
  # => 0..9
  puts x
end

# => 20
puts x

What this demonstrates is that each block gets its own scope. Conversely, if we wrote similar code in JavaScript:

var x = 20;

// Functions have their own scope
;!function() {
  var x = "foo";
  // => "foo"
  console.log(x);
}();

// => 20
console.log(x);

for (x = 0; x < 10; x++) {
  // => 0..9
  console.log(x);
}

// => 10
console.log(x);

In JavaScript, x is available inside the for loop because, as a control statement, the loop belongs to the enclosing scope.
This is not intuitive to many developers used to block-level scope. JavaScript handles the need for block-level scope, at least partially, through the use of closures, which we'll discuss later.

Debugging Functions

Before we wrap this topic up, let's briefly touch on debugging functions. In JavaScript, naming a function expression is completely optional; so why do it? The answer is to aid the debugging process. Named function expressions have access to their own name within the newly defined scope, but not in the enclosing scope. Without a name, their anonymous nature can make them feel a bit like ghosts in the machine when it comes to debugging.

var namedFunction = function named() {
  // => function
  console.log(typeof(named));
}

namedFunction();

// => undefined
console.log(typeof(named));

Nameless function expressions are displayed in the stack trace as "(anonymous function)" or something similar. Naming your function expressions gives you clarity when trying to unwind an exception whose call stack may feel miles long.

/*
 * It is much harder to debug anonymous function expressions:
 * Uncaught boom
 * - (anonymous function)
 * - window.onload
 */
;!function() {
  throw("boom");
}();

/*
 * Naming your function expressions gives you a place to start looking when debugging:
 * Uncaught boom
 * - goBoom
 * - window.onload
 */
;!function goBoom() {
  throw("boom")
}();

[1] http://kangax.github.com/nfe/
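As a brief postscript on the closures mentioned above: a common workaround for JavaScript's function-level scope is to capture a loop variable with an immediately-invoked function. This is a generic illustration (not code from the article), with the names callbacks, captured and results made up for the example:

```javascript
// Without a closure, every callback would share the same x, which is 3
// by the time any of them run, because the loop body is not its own scope.
var callbacks = [];

for (var x = 0; x < 3; x++) {
  // The immediately-invoked function creates a new scope per iteration,
  // so `captured` holds a copy of x frozen at its current value.
  callbacks.push(function (captured) {
    return function () { return captured; };
  }(x));
}

var results = callbacks.map(function (fn) { return fn(); });
// results is [0, 1, 2] rather than [3, 3, 3]
```

The IIFE plays the role that a block scope would play in Ruby's `10.times do |x|` example above.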

Pragmatic JavaScript Style

almost 5 years ago | Mark Daggett: Mark Daggett's Blog

My goal is to make you a better programmer, and I am going to do this by teaching you about style. I am not talking about fashion, because I think most programmers would flunk that test; unless comic-con couture is a thing. Instead we'll talk about the importance of style: how it forms, how it spreads and when to kill it. Specifically, we'll look at style as it applies to programming. Ultimately, once we have a context for evaluating style, I will introduce you to the elements of programmatic style that have served me well over the years as a professional software developer.

What Is Style?

Style is often used as a measurement of quality. When someone is described as having style or being stylish, it is almost universally meant as a compliment. If someone's style ever comes into question, it is usually in comparison to someone else's style. "My style's the best and so I challenge you," screams the 70s-era martial arts star. Stylishness is a fresh approach, a unique perspective, an unexpected insight into an activity. The application of a style can become so prominent that it expands the activity itself: that house is built in a Frank Lloyd Wright style. What starts as a personal style in painting can become an art movement almost overnight. Style spreads like a virus; it is the original meme, a mind virus that changes the way you see the world forever. Style is often the conduit through which new ideas pulsate. How does style affect programmers? The good news about style, for those algorithmically inclined, is that no matter how personal a style may seem, for it to exist at all it must at some level be repeatable. Style must be codified into a series of steps, rules or combinations that can be followed, and then recognized by others. Therefore, if style is a measurement of quality, and at the same time repeatable, then it can be taught. Just ask Strunk and White. William Strunk Jr. wrote "The Elements of Style" while he was a professor at Cornell.
He began with 7 rules for the usage of language, and 11 principles of composition. His student E.B. White revised the book forty years later, adding an additional 4 rules. The goal of the book was to give aspiring writers and grammarians a context from which to evaluate their own work. According to White, Strunk was compelled to write the "little book" out of sympathy for those afflicted with reading a writer's ill-composed dreck: "Will felt that the reader was in serious trouble most of the time, floundering in a swamp, and that it was the duty of anyone attempting to write English to drain the swamp quickly and get the reader up on dry ground, or at least throw a rope." Over the years the book has remained wildly popular among those learning to write efficiently, and is affectionately referred to as "Strunk and White." That is not to say it has been universally loved or followed. Dorothy Parker is quoted in the New York Times as saying, "If you have any young friends who aspire to become writers, the second-greatest favor you can do them is to present them with copies of 'The Elements of Style.' The first-greatest, of course, is to shoot them now, while they're happy." Many found the rules too restrictive and opinionated. White said Strunk believed "…it was worse to be irresolute than to be wrong." Strunk's assertion is that it takes passion to be stylish. You need to be able to draw boundaries, to allow this idea to flourish while forcing that one to die. Style is a sine wave, attracting some and repelling others.

What is programmatic style?

As mentioned previously, Strunk and White wrote their book not only to empower and train writers, but to save readers from slogging through what was, in their minds, a textual tar pit. So too, good programmatic style serves two audiences: the developer and the processor. That is to say, the code should be well written both syntactically and technically.
Below are qualities I consider essential in the application of programmatic style:

Consistency - By repeatedly applying rules to the codebase we ensure consistency. Consistency mitigates noise in the codebase and brings the intent of the code into clearer focus. Put another way, if a reader has to piece together how to read your code, you have prevented them from understanding what it does. Consistency is concerned with how the code looks, e.g. naming conventions, use of whitespace, and method signatures; and with how the code is written, for example ensuring that a function doesn't return a string in one context and an integer in another.

Expressiveness - Code is by nature a symbolic language, where variability and abstractness are implicit. Therefore the developer must find a way to make the code meaningful to the reader. This can be achieved through naming variables and methods precisely. When reviewing a class, method or variable, the reader should understand its roles and responsibilities by reading the code itself. If a method can only be understood by reading the comments the writer left above it, that should be a clue that the code is not expressive.

Succinctness - Strive to do just enough. Good programming, like good writing, is about clarity of purpose, not merely compactness. It should be about reducing the complexity of a method, not its usefulness.

Restraint - Style should never overpower the subject itself. At the point where style becomes the subject, it becomes a facile artifice, a dish ruined by too much spice. I am reminded of a minimalist chess set I saw in college: every piece was either a white or black cube, and all pieces were the same size. It was aesthetically beautiful and simultaneously unplayable.

JavaScript Style Guide

This style guide was compiled by reviewing and considering choices I have made in my own work over the years, along with the coding practices of individuals and development teams I admire in the JavaScript community.
As such, this style guide should be seen as an amalgamation of inputs and influences from the larger JavaScript community rather than the creative output of a single individual. You can find a list of resources used in this guide in the additional resources section. This guide is broken into two sections: "Rules for Visual Clarity" and "Rules for Computational Effectiveness".

Caveats

Style guides are just that: guides. They are meant to point you in the right direction, but they are at best mutable truth. Moreover, coding theory changes constantly, and it is important not to lock yourself into a dogmatic approach to the application of these styles. As my professor Clyde Fowler told me in my studio drawing class, "you must think with your hands." What he meant was that you must think through doing, while maintaining the ability to get critical distance from your work.

Rules for Clarity - How Others See Code

Rules Of Thumb

Write Clearly And Expressively - When naming variables and functions, or organizing code, remember you are writing for humans to read, not compilers.

Follow Existing Conventions - If you share your code anywhere, work on a team, or are hired to write code, then you are not writing for yourself alone.

Write in Only One Language - Where possible, don't use JavaScript as a surrogate for other languages. This means resisting the urge to write inline HTML or CSS.

Enforce A Uniform Column Width - Strive for consistent line lengths in source code. Long lines tire the eyes and cause needless horizontal scrolling. An industry standard is 80 characters per line.

Document Formatting

Naming Conventions

JavaScript is a terse language of brackets and symbols, and one of the only ways to make your code expressive to humans is through the names you choose for variables, functions, classes and the like. Remember when choosing a name that it should describe the role and responsibilities of that object.
Vague or obtuse names like doStuff are like telling readers to figure it out for themselves, which oftentimes they won't. Choose variables and functions with meaningful, expressive and descriptive names. Write for the reader, not the compiler.

// Bad
var a = 1,
  aa = function(aaa) {
    return '' + aaa;
  };

// Good
var count = 1,
  toString = function(num) {
    return '' + num;
  };

Constants should always belong to a namespace, and be written in uppercase with spaces replaced by underscores:

// Bad
MY_CONSTANT = 43;

// Good
com.humansized.MY_CONSTANT = 43;

Variables should be camelCase:

myVariableName

Classes should be PascalCase:

MyAwesomeClass

Functions should be camelCase:

isLie(cake)

Namespaces should be camelCase and use periods as a delimiter:

com.site.namespace

Hungarian notation is not required, but you can use it to convey that objects are constructed through, or dependent on, a library or framework:

// jQuery-infused variable
var $listItem = $("li:first");

// Angular.js uses the dollar sign to refer to angular-dependent variables
$scope, $watch, $filter

Constants And Variables

Variable and constant definitions always go at the top of the scope:

// Bad
function iterate() {
  var limit = 10;
  for (var x = 0; x < limit; x++) {
    console.log(x);
  }
}

// Good
function iterate() {
  var limit = 10,
    x = 0;
  for (x = 0; x < limit; x++) {
    console.log(x);
  }
}

Avoid polluting the global namespace by always declaring variables using var:

// Bad
foo = 'bar';

// Good
var foo = 'bar';

Declare multiple variables using a single var declaration, but separate each variable with a newline:

// Bad
var foo = "foo";
var note = makeNote('Huge Success');

// Good
var foo = "foo",
  note = makeNote('Huge Success');

Declare unassigned variables last. This allows the reader to know they are needed but have delayed initialization. Do not assign variables inside a conditional statement; it often masks errors.
// Bad, because it is easily misread as an equality test.
if (foo = bar) {...}

Do not clobber function arguments with variable names:

// Bad
function addByOne(num) {
  var num = num + 1;
  return num;
}

// Good
function addByOne(num) {
  var newNum = num + 1;
  return newNum;
}

Page Layout

Blank lines:

Should always precede the start of a comment

Should be used to separate logically related code

// Bad
var wheels;
wheels.clean();
car.apply(wheels);
truck.drive();

// Good
var wheels;
wheels.clean();
car.apply(wheels);

truck.drive();

Commas

Remove trailing commas in object declarations; they will break some runtime environments:

// Bad
var foo = {
  bar: 'baz',
  foo: 'bar',
}

// Good
var foo = {
  bar: 'baz',
  foo: 'bar'
}

Don't use comma-first formatting; if you don't know what that means, keep it that way!

Semicolons

Even though JavaScript treats semicolons as optional, many tools expect them, and therefore it is better to use them

Useful for clearly delineating the end of a logical statement

Do not add meaningless semicolons

Whitespace

Should be removed from the end of a line

Should be removed from a blank line

Should not mix spaces and tabs

Should appear after each comma in a function declaration:

// Bad
function findUser(foo,bar,baz)

// Good
function findUser(foo, bar, baz)

Should not appear inside empty functions or literals:

doThis();
var foo = {};
var arr = [];

Brackets And Braces

Use only where the compiler calls for them, or where they enhance readability

Brackets should appear on the line that requires them:

// Bad
if (hidden)
{
  ...
}

// Good
if (hidden) {
  ...
}

Add whitespace in front of and between brackets to aid readability:

// Bad
if (condition) goTo(10);

// Good
if (condition) {
  goTo(10);
}

There are a couple of exceptions to the previous rule:

// No whitespace needed when there is a single argument
if (foo) ...
// No whitespace when a parenthesis is used as a scope container
;(function () {...})

// No whitespace when brackets are used as a function argument
sortThis([2, 3, 4, 1])

Strings

Strings should be constructed using single quotes:

// Bad
var foo = "Bar";

// Good
var foo = 'Bar';

Strings longer than the predetermined character line limit should be reconsidered; if required, they should be concatenated.

Functions

Method signatures must be consistent. If a function returns a variable in one context it should return a variable in all contexts:

// Bad
function findFoo(isFoo) {
  if (isFoo === true) {
    return true;
  }
}

// Good
function findFoo(isFoo) {
  if (isFoo === true) {
    return true;
  }
  return false;
}

While not a requirement, returning early from a function can make the intent more clear:

// Good
function findFoo(isFoo) {
  if (isFoo === true) {
    return true;
  }
  return false;
}

Comments

Should never trail a statement

Should be used sparingly; overuse of comments should suggest to the developer that their code is not expressive enough

Should always be written as a complete thought

Multiline comments should always use the multiline syntax:

// Some really
// bad multiline comment

/**
 * A well-formed multiline comment
 * so there...
 */

Rules for Computational Effectiveness

Rules Of Thumb

Assume Files Will Be Concatenated - Modern applications often munge source JavaScript into a streamlined file for production. You should defensively program your scripts to protect them from switches in operating context and scope corruption.

Keep Your Code Browser Agnostic - Keep your business logic free of browser-specific code by abstracting it behind interfaces. This will keep your code on a clean upgrade path as browsers fall in and out of fashion.
Never Use eval() - Ever

Never Use with() - Ever

Keep Prototypes Pristine - Never modify the prototype of a builtin like Array.prototype, because doing so can silently break other people's code, which expects standard behavior.

Equality Comparisons And Conditional Evaluation

Use "===" instead of "==", and "!==" instead of "!=". This is because JavaScript is very loose when testing equality.

When just testing for truthiness you can coerce the values:

if (foo) {...}
if (!foo) {...}

When testing for emptiness:

if (!arr.length) { ... }

You must be explicit when testing for truth:

// Bad, because all of these will be coerced into true
var zero = 0,
  empty = "",
  knull = null,
  notANumber = NaN,
  notDefined;

if (!zero || !empty || !knull || !notANumber || !notDefined) ...

// Bad
var truth = "foo",
  alsoTrue = 1;

if (truth && alsoTrue) ...

// Good
if (foo === true) ...

Constants and Variables

When deleting a variable, set it to null instead of calling delete or setting it to undefined:

// Bad, because undefined means the variable is useful but as yet has no value
this.unwanted = undefined;

/**
 * Bad, because calling delete is much slower than reassigning a value.
 * The only reason to use delete is if you want to remove the attribute
 * from an object's list of keys.
 */
delete this.unwanted;

// Good
this.unwanted = null;

Functions

Function Expressions

// Anonymous Function
var anon = function () {
  return true;
}

// Named Function
var named = function named() {
  return true;
};

// Immediately-invoked function, hides its contents from the executing scope.
;(function main() {
  return true;
})();

Function expressions are assigned at runtime, and therefore do not have their names hoisted to the top of the scope.
// Bad - Runtime Error
iGoBoom();

var iGoBoom = function () {
  alert('boom');
}

// Good
iGoBoom();

function iGoBoom() {
  alert('boom');
}

Do not use function declarations within block statements; they are not part of ECMAScript. Use a function expression instead:

// Bad
if (ball.is(round)) {
  function bounce() {
    // Statements Continue
  }
  return bounce()
}

// Good
if (ball.is(round)) {
  var bounce = function () {
    // Statements Continue
  }
}

Do not hide the native arguments object by using the same name in a function:

// Bad
var foo = function(arguments) {
  alert(arguments.join(' '));
}

// Good
var foo = function(args) {
  alert(args.join(' '));
}

Strings

When concatenating a string, use Array#join for performance reasons:

// Bad
var lorem = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in\
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in\
culpa qui officia deserunt mollit anim id est laborum.';

// Good
var lorem = ['Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
  'Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in',
  'reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in',
  'culpa qui officia deserunt mollit anim id est laborum.'].join('');

Objects

Use object literals instead of new Object:

// Bad
var person = new Object();
person.firstName = "John";
person.lastName = "Doe";

// Good
var person = {
  firstName: "John",
  lastName: "Doe"
}

Don't use reserved words as keys:

// Bad
var person = { class: "Person" };

// Good
var person = { klass: "Person" };

Arrays

Use the literal syntax for creation:

// Bad
var arr = new Array();

// Good
var arr = [];

Responsibility Delegation

Only write code that is the responsibility of the program. Keep your code free of view-layer and template code; use a template library like mustache.js instead:

var view = {
    title: "Joe",
    calc: function () {
      return 2 + 4;
    }
  },
  output;

// Bad
output = '<div><h5>' + view.title + '</h5><p>' + view.calc() + '</div>';

// Good
output = Mustache.compilePartial('my-template', view);

Keep JavaScript out of the HTML:

// Bad
<button onclick="doSomething()" id="something-btn">Click Here</button>

// Good
var element = document.getElementById("something-btn");
element.addEventListener("click", doSomething, false);

Operating Context And Scope

Where possible, wrap your code inside self-executing functions. This will insulate your code from pollution by others, and make it easier to abstract:

// Good
;(function(window, document, undefined) {
  // My Awesome Library
})(this, document);

Design for duration-agnostic execution of code. This will prevent your code from building up a backlog of requests that may no longer be relevant:

// Bad, because this might take longer than 100 milliseconds to complete.
setInterval(function () {
  findFoo();
}, 100);

// Good, because this will only be called again once findFoo has completed.
;(function main() {
  findFoo();
  setTimeout(main, 100);
})();

Only use this in object constructors, methods, and closures.

To prevent breaking community code, declarations of an operating context, e.g. "use strict", should be wrapped inside a self-executing function for modules, or inside a function itself when needed:

// Bad
var bar = findLooseyGoosey();

"use strict";

var foo = findStrictly();

// Good
var bar = findLooseyGoosey();

;(function () {
  "use strict";
  var foo = findStrictly();
})();

Coercion

Prefer conversion over coercion:

var num = '1';

// Bad: implicit coercion
num = +num;

// Good: expressive conversion
num = Number(num);
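Extending the conversion-over-coercion rule to the other primitive types, here is a hedged sketch; the variable names (input, asNumber, and so on) are illustrative only, not part of the guide:

```javascript
// Explicit conversions state intent to the reader.
var input = '42';

var asNumber = Number(input);          // explicit numeric conversion
var asInteger = parseInt(input, 10);   // always pass the radix
var asString = String(asNumber);       // explicit string conversion
var asBoolean = Boolean(input.length); // explicit boolean conversion

// Implicit forms like +input, '' + num, or !!value produce the same
// results, but they hide the conversion from a casual reader.
```

Each of the builtin conversion functions doubles as documentation: the reader sees the target type in the call itself.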

Agile vs. Lean

about 5 years ago | Sven Kräuter: makingthingshappen blog.

One of the first things I tell potential clients who hire me for an agile transformation is that part of my goal is to make myself superfluous. Although this is more or less common sense for people with an agile background, it provokes more often than not puzzled faces. "Isn't he supposed to lead the product development process by managing the product team with agile methodologies?"

Transforming intangible perceptions into actionable items

In fact, I'm supposed to enable the product development team to start managing itself in an agile manner. That reads like a minor difference in formulation, but it has a major impact on how to proceed. Of course I start by leading & managing the team: introduce the necessary events, evolve daily stand-ups into short & precise meetings, get everybody on the same page and facilitate the retrospectives. Apart from the latter, a fully developed agile team is able to answer the three magic daily questions and organize the events on its own. When it comes to retrospectives, it's a little different: you need a neutral facilitator here a little longer. For my last client this meant that, after the time-intensive initial phase of coaching, I stopped attending on a daily basis once we both felt comfortable with it. I continued to facilitate the retrospectives for some more weeks, and after we tried it and saw that there were still action items, with the resulting improvements as tangible outcomes, I realized I had brought that team onto the agile path, and furthermore to a state in which it could walk this way without further assistance. Short & direct feedback. <3. Tangible outcomes are among the core metrics I am judged by. In the beginning the effects of the new process are still very intangible. "We like that we now communicate more with each other, but we are afraid of losing our efficiency!" When moving from batch-processing your specialized tasks to thinking a little cross-functionally, this is a natural reaction to the change.
Measured by the quantity of product features you deliver it in fact will look like losing efficiency. When judging by the quality of what you deliver the efficiency equation looks quite attractive. In fact after a short period of time the propelled quality becomes quite obvious - be it the lowered technical debts, the increased harmony of the team or the ability to react to customer wishes faster. As you may have realized I used the formulation “measure quantity” and “judge quality”. I did this on purpose since it stresses the very tangible nature of quantity and the more intangible way we perceive quality. In my retrospectives I’m quite strict about actionable items as a result. They make our progress tangible and I see them as a basic metric to measure the progress we make. In fact I put a little square on every action item sticky and we check this box if we accomplished the task in the next retrospective. This is just a small example of validated learning. Which is one of the aspects in the focus of lean entrepreneurship. It is about learning based on the data you collect from your product and finding the right metrics to measure the value and the growth of it. Which is why I see Agile & lean entrepreneurship as a synergy of greatest potential. Participating in the 2012 Agile Design Camp I bootstrapped some provocative slides to bring the audience into the mood for discussing the topic. The core hypothesis here is that agile focusses on the process success while lean entrepreneurship focusses on the product success: ‘Agile vs Lean’ on Slideshare Of course this is only true to some extent. When working in agile environments the chance of meeting practicioners that do not only think in terms of their special subject but also have the product in their vision is quite high. When iterating over your processes lean entrepreneurial thinking as a result can be an outcome quite naturally. 
This is a result of the fact that the right process framework takes away unnecessary pressure from your team. This enables the practicioeners to actually think about what they build as opposed to being busy with figuring out how to finish tasks in time. I would say this is similar to moving up on the Maslov pyramid - just in the domain of your product quality and not concerning personal life’s quality where Maslov’s famous geometric metaphor is originally pointed at. To quantify if our newly gained freedom actually results in higher value for the customer moving from judging prodcut quality to measuring product quality is essential. One way to achieve this goal is the path of lean entrepreneurship. Viewing from that angle I think it becomes quote obvious why I chose the provocative hypothesis that agile focusses on the process success while lean entrepreneurship focusses on the product success.

The UX Product Owner

about 5 years ago | Amit Anand: UXcorner

Since I took up a new role with a management consulting organization as a Product Owner, I have been thinking about what more can be done beyond conventional, business-as-usual Agile practices such as managing backlogs and stakeholder expectations, and ensuring user acceptance of the end product. How do I scale this […]

Hello World!

about 5 years ago | Sven Kräuter: makingthingshappen blog.

Transforming an existing and effective product team, with its own existing processes, into an even more effective Scrum team is quite a challenging task. You need to get to know both the people and the organization, and see how best to spread agile values amongst both. In a current client’s assignment we reached a point where the process is learned and the values and mindset have widely settled. Now the risk of boring the team with repetitive facilitation techniques is high, which is a luxury problem I am quite glad to face ;-). Finishing the preparations for today’s retrospective, I started to retrospect a little on the state of making things happen. Starting my own business this February was quite a ride. I was lucky to have a product vision ready for my venture; Strategic Play helped develop it in an awesome CoCreAct workshop. With some local companies waiting for me to work for them, the biggest challenge was the bureaucracy necessary to go into business. Since then my concept of offering both process coaching and application development has, knock on wood, worked out quite nicely. I sometimes need to explain how I am able to offer both, but that’s another story.

Finding my own product vision (image: 5v3n.com)

One challenge you face working as a contractor is the not-too-surprising fact that no start-up or company hires you because it wants you to learn something new from it. Most of the time it is looking for someone to teach it something new from your portfolio instead. Of course every project makes you learn and adds useful skills to your agile or tech tool belt. But if you really want to advance, or even pivot a little, you have to invest the time in your own learning. One important aspect of my approach to learning is inspiration. There are inspiring people in my surroundings already, but the places to maximize the specific inspirational density are barcamps and community events.

During the last quarter there were two very special events of this kind where I found loads of inspiration. The first one was UX Camp Hamburg. Being asked beforehand to talk about UX and the infamous Internet of Things was awesome; I already wrote about my experience from the maker’s perspective at Makers & Co. In addition, I did not hesitate to put a spontaneous session called “UX & Agile” on the planning board. Knowing your audience is key to preparing a talk; since I didn’t know anything about them and had to open the event with the first talk, I just improvised a little: a flipchart, pens, and the biggest track room full of curious people. It went quite well, I’d say, since there were lots of questions.

Opening UX Camp Hamburg with “UX & Agile” (image: twitter.com/suzhi)

I felt we had a really nice conversation going, and I scribbled a lot of pages with visualisations of agile metaphors, since that’s the way I explain best. I even received some friendly coverage, e.g. from Hamburg’s Digital Media Women, so I think I pleased the audience. Being asked to repeat both sessions in the afternoon was quite reassuring too. Having paid my barcamp dues by leading two sessions myself, I enjoyed visiting some sessions without a bad conscience. The main topic, UX, is quite self-evident when working in (or being brought in to establish) lean and agile contexts, and seeing the different approaches to including it in classic project setups, and the pains you have there, was quite interesting. Among the sessions I enjoyed most was “Lean UX”, where Karen Lindemann gave a brief overview of the way ThoughtWorks integrates strategies to focus more on UX in their agile and lean workflow.

Karen on ThoughtWorks’ approach: “Lean UX” (image: twitter.com/gabormolnar)

Chatting with the participants was very inspiring too: all in all, a Saturday spent more than well. The other event that stood out was Rails Girls Hamburg.

I guess you are asking yourself why I list an event where I coached people without any experience in web app development among the events I learned from most. Let me elaborate.

Concentrated learning... Concentrated learners (image: 500px)

Designing web applications these days, we talk a lot about the infamous “embrace constraints” paradigm. It basically says: “By constraining your product you will see the core of your product”, and it is applied to mobile vs. desktop browsers, small vs. large budgets, and so on. What I found out at Rails Girls, by embracing the skill constraint, is what lies at the core of my way of driving product development. We had some tutorials we were supposed to go through, which I adjusted a little by adding IxD scribbling up front. This picture basically sums it up:

Action! Scribble - Implement - Ship! (image: EyeEm)

It all boiled down to: “Scribble - Implement - Ship!” Then check the result, rinse, and repeat. Of course we left out the product visioning here, but basically this is what it’s all about: Plan, Do, Check, Act, as Deming would have put it. So besides being a great idea in general, and the coaching being a real pleasure, I also learned a lot. What comes after the inspiration is choosing the topics worth further pursuit. I have developed the habit of starting spare-time projects to learn new technical skills, preferably in teams of fellow knowledge and experience seekers. On the process side I have read quite a lot in the past, but the best way to sharpen my existing skills or learn new ones is learning by doing as well. OK, so why is this post called “Hello World!”? It’s a test balloon to see if what I write about my experiences is of any interest to you. So please go ahead and tell me: share the post, give me feedback, or even better, tell us about your own tactics in the field of knowledge acquisition. I am curious!

Felton on Felton

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Eyeo2012 - Nicholas Felton from Eyeo Festival on Vimeo. “Unprecedented look at the creative process of infographic storyteller Nicholas Felton of Feltron Report fame, from this year’s EyeO Festival.” http://exp.lore.com/post/27493185478/unprecedented-look-at-the-creative-process-of

crowd sourcing concepts

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Below are the results of a brainstorming session between myself and Hege Sæbjørnsen as we thought through the concepts of crowd sourcing. We proposed several questions to one another and then attempted to answer them, or at least frame the question more fully. These notes are pretty raw, and many of the answers are well documented elsewhere. However, I still thought it was worth sharing.

Crowd Sourcing
The use of a collection of people with expert knowledge / interest around a very specific topic to fill a role that has typically been held by a few highly educated “editors” (wisdom of the crowds). Crowd sourcing is active engagement, not only passively viewing the content. This is a two-way street: you can’t strip-mine the crowd; you must engage in a conversation and allow for two-way conversation. You must harvest and plant.

Crowd Collections
Subgroups or individuals at varying levels of engagement, linked around some common attribute(s). Potential subgroups:
- The Unaware Public
- Students
- Researchers / Educators
- Artists / Designers
- Makers
- Engineers / Scientists
- Social Innovators / Entrepreneurs
- Politicians
- Professionals
- Grassroots Organizations
- Physically located within a specific proximity
- Contextual community on another platform (Facebook, Twitter, Flickr)
- Griefers / Trolls (spreaders of misinformation, or those who just enjoy conflict)

Do we have priority subgroups, who are they, and how do we speak to them? Once you define the crowd composition, you choose the target audience to source from.

What We Source
- Ideas / Opinions / Solutions / Visions in response to our brief
- Their social graph and sphere of influence
- Their money
- Curation (potentially, and to what extent?)
- Vetting and/or improving submissions
- Volunteers / local ambassadors

What Media Do We Source
Are we limiting people’s inputs when we state “read” stories?
- Biographies / Essays
- Comments on sourced material
- Poems / Quotes etc.
- Documentary Photography
- Fine-art Photography
- Documentary Video
- Fine-art Video
- Tools & Tool Making
- Research
- Schematics / Blueprints

Goals of Crowd Sourcing
- Engaging a larger, distributed audience
- Collecting and then refining raw materials
- Democratic path to engagement / flattened hierarchy / grass-roots composition over aggregation
- Build a base of future (contextually relevant) engagement with a slice of the crowd

Tasks for the Crowd
- Create or upload their content to the site
- Find existing relevant content created by others and share it on the site
- Make linkages between content through activities like tagging (folksonomies)
- Download readymade content and share it in the local community
- Build a knowledge base around the future

Ways To Curate
- Crowd voting: likes, retweets, tags, replies, etc.
- A local expert, who promotes certain content over other content
- A selection panel / judging panel, which offers periodic content review and selection

Expectations of the Crowd
- Clarity of engagement: a specific call to action
- A way to visualize the crowd’s input
- A way to visualize the crowd through metrics (e.g. countries, total count, visitors)
- A way to visualize the crowd’s influence / impact
- A low barrier to entry
- A way to find my community

What’s in It for the Crowd Member
- A ready-made distribution platform
- Tools to mine the wisdom of the crowd
- A way to fund your involvement in the project
- Recognition and enhancement of your social reputation
- Being part of a larger, well-respected initiative
- The potential to participate in person in Rio
- A tool to find other members you share interests with

Measuring / Quantifying the Crowd
- How do you measure engagement with the crowd?
- How do you measure distribution within the crowd?
- How do you measure the inertia of the crowd?
- How do you measure changes in sentiment (moving from casually aware to active participant)?

Crowd Composition
Less like a mob, and more like filling the seats in a structure or setting we’ve defined. This allows us to initially curate the incoming content and facilitate our “experts” (problematic?).

Questions
- How do we handle out-of-scope submissions?
- How much is online vs. offline?
- How is attribution of content handled?
- Is there some greater tangible reward for contributing content?

Questions About Crowd Curation
- Are we asking too much of our audience?
- Are we going to allow for totally organic organization of the content, or will we offer an initial framework?

Spineless 0.2.1 Released

over 5 years ago | Mark Daggett: Mark Daggett's Blog

I have just pushed a new release of my JavaScript application framework called Spineless. Spineless is a simple MVC stack without the need of a backbone. https://github.com/heavysixer/spineless

The goal of Spineless is to provide “just enough framework” to succeed. If I have done my job, you should be able to write your first Spineless app in less than 10 minutes. Spineless is meant to run with virtually no dependencies. In the age of frameworks with massive dependency chains, here is a list of things you DO NOT need to run Spineless:

- A persistence layer (e.g. a database)
- A backend server (e.g. node.js)
- An internet connection! (srsly)

Spineless has only two dependencies, jQuery and Mustache.js, both of which come bundled with the project inside the /lib directory. Like any good MVC framework, Spineless uses the concepts of models, controllers, and views. Spineless models are essentially JavaScript objects and are completely optional. Controllers are used to marshal commands from the views to the models where needed. Views are the visual interface that the user sees. In addition to the normal MVC stack, Spineless also uses the concepts of helpers and templates. Templates are HTML snippets used by views to make better use of reusable code. Helpers are functions that modify a template’s variables any way you choose.

Going Spineless in 10 minutes or less

The entire Spineless application resides inside the “.application” div. An application consists of a collection of controllers, which in turn contain a collection of views. Consider the following example:

```html
<div class="application">
  <div class="controller" data-controller='application'>
    <div class="view" data-action='index'>
      Hello World!
    </div>
  </div>
</div>
```

In this example you’ll see that we have defined an application with a single controller. The name of the controller is defined by the data-controller attribute. This attribute is required by Spineless to route requests to the proper location. Views are much like controllers, but instead of using the data-controller attribute they use data-action.

Routing Requests

Routing requests through Spineless is incredibly painless: to make any link a Spineless request, just add the “route” class. For example:

```html
<a class="route" href="/application/hello">Hello</a>
```

When the user clicks on this link they will be routed to the application controller, where the #hello method will be called. If you are not using an element that supports the href attribute, you can also place your URL inside a data-href attribute:

```html
<div class="route" data-href="/application/hello">Hello</div>
```

If you want to manually trigger a route request from within JavaScript, you can call the get function:

```javascript
spineless.get('application', 'index');
```

Passing local variables to templates

When rendering templates, Spineless substitutes predefined template variables with those you supply as JSON. The JSON can be provided in at least two ways:

- By URL-encoding a JSON object into the data-locals attribute.
- By creating or modifying the JSON object using a helper function.

I will explain the helper-function method next, but here is a simple example of what the data-locals method looks like:

```html
<div data-locals="{&quot;name&quot;:&quot;Mark&quot;}" data-template='hi-my-name-is'></div>
```

Helper functions

Helpers are developer-created functions that execute during the rendering of specific templates. Just like in Rails, helpers are available globally across all views. To demonstrate, imagine we have two DIV tags with locals supplied as URL-encoded JSON objects:

```html
<div data-locals="{&quot;name&quot;:&quot;Mark&quot;}" data-template='hi-my-name-is'></div>
<div data-locals="{&quot;name&quot;:&quot;Slim Shady&quot;}" data-template='hi-my-name-is'></div>
```

As you can see, these objects have a property called name, each with a unique value. These locals are linked to the “hi-my-name-is” template. To create a helper, we’ll bind a function to execute whenever the hi-my-name-is template is rendered. Doing this allows us to intercept the template instance’s data-locals object and modify it any way we choose before passing it along to Mustache to render. Here is the full example of the helper function:

```javascript
var sp = $.spineless({
  helpers: {
    'hi-my-name-is': function(obj) {
      if (obj.name === 'Slim Shady') {
        obj.name = "*wikka wikka* " + obj.name;
      }
      return (obj);
    }
  }
});
```

PubSub for Spineless events

Spineless now has a very minimal publisher/subscriber (PubSub) events framework. The goal is to allow other code executing outside of Spineless to receive updates when internal Spineless events execute, without having to know anything about how Spineless is implemented. Here is a trivial example of creating an observer that is triggered every time a view is done rendering:

```javascript
$(document).ready(function() {
  var sp = $.spineless();
  sp.subscribe('afterRender', function(publisher, app) {
    app.request.view.append("<h1>Yes it has!</h1>");
  });
  sp.get('application', 'index');
});
```

When the publisher executes a subscriber’s function, it passes a reference to itself and the Spineless app instance as arguments. This allows the receiver to manage its subscriptions and gives the function access to the current Spineless request and params hash, among other things.

Controller functions

Controller functions are optional code that developers can write to augment the rendering of the view. Controller functions work much like helper functions do, in that they are executed before the view is returned to the screen. Unlike helper functions, which are linked to an arbitrary number of templates, controller functions are scoped to just one controller action. Consider this example, which executes when someone visits /users/update:

```javascript
var sp = $.spineless({
  controllers: {
    users: {
      update: function(elements, request) {
        if ($.currentUser.isAdmin()) {
          this.render(elements);
        } else {
          alert("Access Denied");
        }
      }
    }
  }
});
sp.get('application', 'index');
```

I have added examples of all of these new features in the /samples folder of the public GitHub repo. Please feel free to open bug reports or feature requests, and I will do my best to oblige.

Managing creative talent – 3

over 5 years ago | Amit Anand: UXcorner

Almost a year after I wrote the last post around this topic, I sit back and observe where the industry has moved on from there. We are increasingly becoming more CIO-friendly by the day, Fortune thinkers are emphasizing (and luckily budgeting for) better design as a progressive pawn, and not least, clients […]

The training phenomenon

over 5 years ago | Amit Anand: UXcorner

I was away giving time to family for a while; well, glad to say I am a father now. Nature’s ways of teaching us life are interesting: how a little one learns to suckle at his mother’s breast for milk, how he adapts to a cycle of day and night, how wonderfully he gestures for your […]

Task chunking -- or why we leave our cards in the ATM

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Have you ever left your bank card inside an ATM? You are not alone. I have done it more times than I care to admit, and each time it happens I am left with that pants-around-your-ankles feeling you get when you realize you are the world’s biggest idiot. Until recently, I didn’t have a way to explain this recurring blind spot; but now I do. ATM designers don’t understand the concept of task chunking. Recently, when rereading “The Humane Interface” by Jef Raskin, I rediscovered his explanation of chunking and gestures in interface design. I won’t touch on gestures, but Raskin defines chunking as: “the combining of separate items of cognition into a single mental unit, a process that allows us to deal with many items as though they are one.”

Humans can only focus on one thing at a time; therefore, when planning a system for use by humans, you should make sure that the system doesn’t require users to do more than one thing at a time. This is not to say that systems can’t be a mix of short- and long-term goals; games do this all the time. For example, players may have a small task like collecting an amulet, but this is done within the larger scope of completing the level and the even larger arc of beating the game. Task chunking is about short-term cognition, which means that when players are focused on getting the amulet they cannot simultaneously be thinking about completing the level.

In the case of using an ATM there is a single short-term task, which is to deposit or withdraw cash. However, banks for the most part get this wrong. They treat the entire time you are standing in front of their machines as a single mental unit. In their mental model, the task begins with you inserting your card and ends with you reclaiming it. The customer’s mental model is different. They only want to receive or deposit money, and when they do either of these they often consider their task complete.

The problem with most ATMs is that they put the customer’s most important event in the middle of their process. This would be like placing a quarter of the movie after the end credits. Nobody would see that part of the movie because, while the credits may be the most important part to the actor, they are the least important to the viewer. Moreover, moviegoers are trained to head for the exits when they see the credits start rolling. I would wager that if the ATM never dispensed money, virtually nobody would leave their card in the machine. However, when the machine spits bills out onto the street, customers recognize this as the start of the task they mean to complete. This is the point where the initial task of using the ATM bifurcates into a new task of securing the exposed money. This is also where people like me forget all about their debit card. Most of the time customers do get their card back, realizing that even though they are done with the ATM, the ATM is not done with them. However, if there are other environmental factors at play, like poor weather, lack of time, or thuggish people in the bushes, customers may forget about the less important task of completing the bank’s arbitrary process for using their ATM.

Over the years, banks have tried various tactics to get their customers’ attention after the cash is dispensed. Sometimes they refresh the ATM screen, hoping that the interface shift will cause the customer to look back at the monitor. They might also play a recurring sound that signals the customer that their attention is needed. Fundamentally, these attempts are just patches, and should signal to the bank that its process is broken. The real solution is not to bifurcate the original task. This can be accomplished by placing all essential but less important tasks before money is dispensed.

As a designer, I try to be conscious of task chunking when planning out my applications. First I enumerate the discrete tasks in my process and map any potential hotspots where bifurcation by the user may occur. Next, I determine whether I can wrap this potential offshoot into the master task chunk. If consolidation is not possible, I try to move important tasks in my task chunk before this potential offshoot. My goal is to keep the user’s attention focused on a single pathway. Anytime I make them double back to complete a task for the benefit of my application, I know I have done something wrong. As a footnote, I am happy to report that Bank of America’s new ATMs improve their approach to task chunking. They return your card immediately after you enter your PIN. Of course, now I have to unlearn years of using ATMs that taught me to walk away from the machine once I get my card back!

Reading RFID Cards With Ruby and the Mac

over 5 years ago | Mark Daggett: Mark Daggett's Blog

This weekend I fooled around with the SparkFun “RFID Starter Kit”. I purchased it from my local technology barn for around $50.00. I am teaching myself physical computing, and projects like this, which can be completed in an afternoon, are a great way to learn “just enough” to keep a junior hardware hacker like myself from getting frustrated. I was thrilled to find out that getting this board to talk to Ruby took virtually no effort. In fact, it was so simple it almost felt like I was cheating somehow. Here are the steps I followed to get the ID-12 RFID scanner and reader talking to Ruby.

1. Download the most recent drivers from Future Technology Devices International: http://www.ftdichip.com/Drivers/VCP/MacOSX/FTDIUSBSerialDriver_v2_2_14.dmg — the package contains two sets of drivers, so make sure that you install the version that is right for your operating system.
2. Install the serialport gem for Ruby: `gem install serialport` (I used version 1.0.4).
3. Mount the scanner on top of the reader by lining up the prongs in the appropriate slot. Then plug the reader into the Mac’s mini-USB port.
4. From the console, locate the virtual COM port the drivers created when you plugged in your mini-USB cable: `ls -la /dev`. Scanning the output from this command, you should see the device listed as something like cu.usbserial-XXXXXX (where the X’s represent a driver id). Mine showed up as “cu.usbserial-A900ftPb”, but yours may be different.
5. Use this sample code to print the unique RFID id stored inside the card when someone swipes the card over the scanner:

```ruby
# Simple example of reading from the serial port to interface with the RFID reader.
require "serialport"

class RfidReader
  attr_accessor :key

  def initialize(port)
    port_str  = port
    baud_rate = 9600
    data_bits = 8
    stop_bits = 1
    parity    = SerialPort::NONE
    @sp = SerialPort.new(port_str, baud_rate, data_bits, stop_bits, parity)
    @key_parts = []
    @key_limit = 16 # number of slots in the RFID card.
    while true do
      main
    end
    @sp.close
  end

  def key_detected?
    @key_parts << @sp.getc
    if @key_parts.size >= @key_limit
      self.key = @key_parts.join()
      @key_parts = []
      true
    else
      false
    end
  end

  def main
    puts self.key if key_detected?
  end
end

RfidReader.new("/dev/cu.usbserial-A900ftPb") # may be different for you
```

Once you have the unique ID, the possibilities become nearly limitless: you can use the card to update Twitter, turn on your lights, turn on the coffee pot, fire the Nerf guns, release the hounds, etc. Have fun, I know I will!
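As a minimal sketch of that “nearly limitless” idea, one could map card IDs to callbacks and dispatch whatever the reader emits. Note this is an illustration only, not part of the original kit: the card IDs and actions below are hypothetical, and in a real setup you would call the dispatcher from the reader's `main` loop with the scanned key.

```ruby
# Hypothetical dispatcher mapping scanned RFID card IDs to actions.
# Card IDs and action strings are made up for illustration.
class CardDispatcher
  def initialize
    # Default action for cards that were never registered.
    @actions = Hash.new(->(id) { "unknown card #{id}" })
  end

  # Register a block to run when a given card ID is scanned.
  def on(card_id, &block)
    @actions[card_id] = block
  end

  # Look up and run the action for a scanned ID, returning its result.
  def dispatch(card_id)
    @actions[card_id].call(card_id)
  end
end

dispatcher = CardDispatcher.new
dispatcher.on("A900ftPb") { |id| "lights on (card #{id})" }
dispatcher.dispatch("A900ftPb") # => "lights on (card A900ftPb)"
```

Hooking this into the reader above would be a one-liner: replace `puts self.key` with something like `puts dispatcher.dispatch(self.key)`.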

9 Secrets To Running A Successful Crowdfunding Campaign.

over 5 years ago | Mark Daggett: Mark Daggett's Blog

As many of you know, I am one of the co-founders of Pledgie. Occasionally I get asked for advice on how to make an effective campaign on Pledgie, and so I began to seriously research this topic about three months ago. Using research, expert advice, and analysis of the thousands of existing Pledgie campaigns, I attempted to distill the qualities of a successful campaign into a series of tips (nine of them, actually). Whether you are an individual or an organization using Pledgie to raise money, these points may help you to craft a winning message.

1. Empathy Over Sympathy
The job of your campaign message is to focus on the empathetic link between your cause and your donors. Instead of trying to elicit feelings of guilt or pity, focus on the positive connections your donors share with your campaign. Humans, like many other animals, feel empathy and sympathy for one another. Both are incredibly powerful emotions, but when it comes to triggering a donor to make a commitment, empathy rules the day, and here’s why. Empathy is both the ability to logically understand the experience of others and, simultaneously, a shared visceral emotional link. Sympathy, however, is our ability to feel sorry for someone else’s misfortune, at a more abstract emotional level. For example, a parent might feel empathy looking at a sick child, because they can imagine their own child in a similar state. In contrast, someone without kids might just feel sympathy for the child without feeling the greater shared circumstance.

2. Tell A Story
When crafting your campaign, tell the story of your cause. People are more likely to connect with a narrative than with facts and figures. When crafting your campaign story, ask yourself if it answers these questions: Who are you? Why is this campaign important to you? What problem does this campaign seek to solve? Why is this campaign important to the donor?

3. Be Yourself
Donors do not contribute to an idea; they contribute to a person. When describing yourself or your organization, use simple and direct language that gives a brief and complete picture of who you are and why your campaign is a passion worth funding. The goal is to get potential donors to start seeing you as a real person and not just words and images on a webpage.

4. Don’t Forget to Ask
This seems obvious, but you’d be surprised how many campaigns never ask for funds! People often think that the goal of a Pledgie campaign is to describe the need. While that is an essential component of a good campaign, the actual goal of a Pledgie campaign is to get others to do something about your need. When crafting your campaign message, concentrate on adding language that encourages potential donors to act immediately and gives them explicit actions they can take to help your campaign.

5. Get Your Friends & Supporters Involved
Your friends and supporters are your base; they provide a stable foundation to build your outreach upon. When spreading your message through social networks like Twitter or Facebook, it is essential that your friends help make a personal appeal. As a message spreads across your social graph it can suffer an entropy of trust, because as your message spreads, the recipients are less likely to know you personally. Asking for personal appeals from others helps, because people are more likely to give if someone they know is personally invested.

6. Look for Collaborators, Not Benefactors
Donors who are funding a cause want to feel like they are part of a solution, not just performing an act of charity. Craft your message in a way that demonstrates to the donor that they are investing in the outcome, and that you will keep them up to date as you progress towards your goal.

7. Photo Finish
We cannot stress enough how important having a video or photo is when crafting your campaign. Even if the photo is just a portrait of yourself, simply having a photo or video associated with your campaign description can make a huge difference in perceived credibility.

8. Follow Up With Progress Reports
Donors who have donated once are likely to donate again if you ask them. Sometimes the easiest way to ask is to share your progress towards your goals. Giving periodic updates to your donor base shows them that they made a wise investment in you, and that you can be trusted with additional funds.

9. Be Grateful
When someone makes an effort to support your cause, be sure to say thank you. It gives them a sense of connection and fulfillment that can lead to future support. It is also the right thing to do!

On Jumping

over 5 years ago | Mark Daggett: Mark Daggett's Blog

"Jump and the net will appear." (John Burroughs, en.wikipedia.org/wiki/…)

Seven years ago there was much handwringing around leaving my job to start my own consultancy. That is, until my friend told me this quote from John Burroughs. It is a compelling illustration of having faith in yourself and trusting that the world values you. It often resurfaces for me when I am mulling over a risky proposition. The best thing about this quote is that, when properly applied, if the net doesn't appear… well, it won't hurt for long!

Thoughts on Requirement Gathering

over 5 years ago | Mark Daggett: Mark Daggett's Blog

A requirements document should be a recipe, not a shopping list. This metaphor is my attempt to encapsulate what a good requirements document is. Both a shopping list and a recipe are essentially lists of ingredients in various quantities. However, the recipe also includes a desired outcome and precise descriptions of how the ingredients are meant to be used. When evaluating a requirements document, ask yourself if it gives you a clear understanding of what you are being asked to build and how the independent parts work together. If it doesn't do that then, my friend, you are reading a shopping list.

Productized Services [Capturing the next BIG market place]

over 5 years ago | Amit Anand: UXcorner

On a recent photography expedition (of amateurs, of course!) I visited the by-lanes of old Delhi: slim gullys (as we call lanes in Hindi) with fat loads of money changing hands. You get almost everything there, from good food to wholesalers of clothing accessories and specialty-packaging industry giants. A good mix of traders and […]

Changing user behavior [and design principles]

over 5 years ago | Amit Anand: UXcorner

Today smartphones (e.g. Android-based handhelds and iPhones) capture over 50% of the world market (extrapolated from Nielsen data for US markets). Pushing this further is the increased availability of high-speed internet (3G, 4G LTE) on affordable plans globally. What's more, millions of iPads, Samsung Galaxys, PlayBooks and other low […]

client-side request caching with JavaScript

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Recently I was writing an enterprise data visualization application that made heavy use of interactive charts and graphs. Like most best-of-breed data-viz apps, this one supported very robust filters for slicing and dicing through the dataset. Each time the user adjusted one of these filters the application made a new AJAX request and idled until the results were returned. Technically, this approach worked fine, but because the data segmentation occurred on the server the charts felt sluggish; they were always polling for data. Additionally, the user quite frequently toggled between only a couple of filters to compare the results. What should have been an experience of rapidly flipping between two views of the data was actually a belabored rendering experience. As the developer this was frustrating, because the application was asking for and receiving the same data over and over again. To solve this problem, I built a very simple mechanism that affords just enough caching to persist these payload objects only while the user is viewing the page. In this way the user is guaranteed to get a fresh copy from the server on each page load. Essentially, I hooked my caching routine around the function that made the AJAX request for new chart data. Using this approach an AJAX request occurs only once, and all future requests pull from the cache.

```javascript
// Called when someone adjusts a filter.
function updateChart(url, chart, key) {
  // Builds the request params needed to correctly query the server.
  var opts = requestParamsFor(chart, key);

  // Generate a cache key based on this object.
  var cacheKey = $.cache.getKey(opts);

  if ($.cache.exists(cacheKey)) {
    // If the key exists then the request has happened in the past;
    // use the cached result to refresh the chart.
    var result = $.cache.get(cacheKey);
    onSuccess(key, opts, chart, result);
  } else {
    $.ajax({
      url: url,
      type: 'POST',
      data: opts,
      success: function(result) {
        // Since this was a new request, store the results in the cache
        // at the location specified by the cache key.
        $.cache.add(cacheKey, result);
        onSuccess(key, opts, chart, result);
      }
    });
  }
}
```

Here is the local cache class in all its detail:

```javascript
$.cache = (function() {
  var _cache = {};
  var _keys = [];

  var _indexOf = function(arr, obj) {
    var len = arr.length;
    for (var i = 0; i < len; i++) {
      if (arr[i] == obj) {
        return i;
      }
    }
    return -1;
  };

  // Objects are serialized into query-string form; everything else is
  // coerced to a string.
  var _serialize = function(opts) {
    if ((opts).toString() === "[object Object]") {
      return $.param(opts);
    } else {
      return (opts).toString();
    }
  };

  var _remove = function(key) {
    var t;
    if ((t = _indexOf(_keys, key)) > -1) {
      _keys.splice(t, 1);
      delete _cache[key];
    }
  };

  var _removeAll = function() {
    _cache = {};
    _keys = [];
  };

  var add = function(key, obj) {
    if (_indexOf(_keys, key) === -1) {
      _keys.push(key);
    }
    _cache[key] = obj;
    return get(key);
  };

  var exists = function(key) {
    return _cache.hasOwnProperty(key);
  };

  // Purge a single key if one is given, otherwise empty the whole cache;
  // either way, return a copy of what remains.
  var purge = function() {
    if (arguments.length > 0) {
      _remove(arguments[0]);
    } else {
      _removeAll();
    }
    return $.extend(true, {}, _cache);
  };

  var searchKeys = function(str) {
    var keys = [];
    var rStr = new RegExp('\\b' + str + '\\b', 'i');
    $.each(_keys, function(i, e) {
      if (e.match(rStr)) {
        keys.push(e);
      }
    });
    return keys;
  };

  var get = function(key) {
    var val;
    if (_cache[key] !== undefined) {
      if ((_cache[key]).toString() === "[object Object]") {
        // Return a deep copy so callers cannot mutate the cached object.
        val = $.extend(true, {}, _cache[key]);
      } else {
        val = _cache[key];
      }
    }
    return val;
  };

  var getKey = function(opts) {
    return _serialize(opts);
  };

  var getKeys = function() {
    return _keys;
  };

  return {
    add: add,
    exists: exists,
    purge: purge,
    searchKeys: searchKeys,
    get: get,
    getKey: getKey,
    getKeys: getKeys
  };
}).call(this);
```

Here are some Jasmine tests, which explain more features of the cache not covered in this post, and prove that it works!

```javascript
it("should allow you to build a cache using keys", function() {
  var obj = { 'foo': 'bar' };
  expect($.cache.exists("foo=bar")).toEqual(false);
  expect($.cache.getKey(obj)).toEqual('foo=bar');
  expect($.cache.getKey('foo')).toEqual('foo');
  expect($.cache.add("foo=bar", obj)).toEqual(obj);
  expect($.cache.exists("foo=bar")).toEqual(true);
  expect($.cache.get("foo=bar")).toEqual(obj);
  expect($.cache.get("bar")).toEqual(undefined);
});

it("should allow you to empty the cache completely", function() {
  $.cache.purge();
  expect($.cache.add("baz", 'baz')).toEqual('baz');
  expect($.cache.getKeys().length).toEqual(1);
  expect($.cache.purge()).toEqual({});
});

it("should allow you to empty the cache of just a specific record", function() {
  $.cache.purge();
  expect($.cache.add("baz", 'baz')).toEqual('baz');
  expect($.cache.add("boff", 'ball')).toEqual('ball');
  expect($.cache.getKeys()).toEqual(['baz', 'boff']);
  expect($.cache.purge('boff')).toEqual({ 'baz': 'baz' });
  expect($.cache.getKeys()).toEqual(['baz']);
  expect($.cache.purge('bozz')).toEqual({ 'baz': 'baz' });
  expect($.cache.getKeys()).toEqual(['baz']);
});

it("should allow you to search for keys in the cache", function() {
  $.cache.purge();
  var obj = { 'bar': 'baz' };
  $.cache.add('bar=baz', obj);
  expect($.cache.getKeys().length).toEqual(1);
  expect($.cache.getKeys()).toEqual(["bar=baz"]);
  expect($.cache.searchKeys("bar")).toEqual(["bar=baz"]);
  expect($.cache.searchKeys("bar=")).toEqual(["bar=baz"]);
  expect($.cache.searchKeys("bat")).toEqual([]);
});
```
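For readers who want the gist without the jQuery plumbing, the same hook-around-the-request idea can be sketched in a few lines of framework-free JavaScript. The names here (`cachedFetcher`, `fetchFn`, `stats`) are mine for illustration, not part of the post's code:

```javascript
// Wrap any callback-style fetch function so that repeated calls with the
// same parameters are served from an in-memory Map instead of the network.
function cachedFetcher(fetchFn) {
  const cache = new Map();
  let hits = 0;
  let misses = 0;

  return {
    get(params, callback) {
      // Build a stable key: sorting the property names makes
      // { a: 1, b: 2 } and { b: 2, a: 1 } hash to the same entry.
      // (Flat params objects only; nested objects would need deeper care.)
      const key = JSON.stringify(params, Object.keys(params).sort());

      if (cache.has(key)) {
        hits += 1;
        callback(cache.get(key)); // served from memory, no round-trip
        return;
      }

      misses += 1;
      fetchFn(params, (result) => {
        cache.set(key, result); // remember the payload for this page view
        callback(result);
      });
    },
    stats() {
      return { hits, misses };
    },
  };
}
```

Because the cache lives in a closure rather than on a global object, each page load starts with an empty cache, which matches the post's goal of guaranteeing a fresh copy from the server on every page view.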

Winning a deal – 01 [The backstage act]

over 5 years ago | Amit Anand: UXcorner

It becomes important for us as business practitioners to understand the “need behind the need” – even before the need is clear <smiles>. Complex, but true and possible. I have often observed that clients do not actually know why they want a solution, or in most instances even why they want to outsource. It becomes important for us […]

Project to Product – the baby steps

over 5 years ago | Amit Anand: UXcorner

In the last six years of my portfolio with IT outsourcing organizations, I have seen several projects turn into products; in recent times, co-innovation avenues with our clients have accelerated this further, with agreed investments going into the creation of industry-specific solutions which are then owned and managed jointly. This has been a growing trend […]

Rails Protip: hash.slice

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Rails has hidden gems just waiting to be discovered. I will demonstrate the use of Hash.slice, one of the core extensions provided by ActiveSupport. Here is an example of how Hash.slice can clean up a controller. Take this existing code:

```ruby
def index
  @users = User.paginate({
    :page     => params[:page].present? ? params[:page].to_i : 1,
    :per_page => params[:per_page].present? ? params[:per_page].to_i : 12
  })
end
```

With Hash.slice we can shorten it to:

```ruby
def index
  @users = User.paginate({ :page => 1, :per_page => 12 }.merge(params.slice(:page, :per_page)))
end
```

The beauty of Hash.slice is that it is very forgiving: it only returns the keys that actually exist. In our example we are assured all conditions will be met, because the default values are only overwritten if Hash.slice returns replacements for them.
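This defaults-overridden-only-when-supplied pattern is handy outside Rails too. Here is the same idea sketched in plain JavaScript; the `pick` helper below is hypothetical, a stand-in for Hash.slice rather than an API from any library:

```javascript
// pick copies only the requested keys that actually exist on obj, so
// merging its result over a defaults object overrides only what the
// caller supplied.
function pick(obj, keys) {
  const out = {};
  for (const k of keys) {
    if (k in obj) out[k] = obj[k];
  }
  return out;
}

const defaults = { page: 1, perPage: 12 };

// The caller supplied page but not perPage; the perPage default survives,
// and unrelated keys like q are never copied across.
const params = { page: 3, q: 'smith' };
const opts = Object.assign({}, defaults, pick(params, ['page', 'perPage']));
// opts is { page: 3, perPage: 12 }
```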

The Minimally Viable Party

over 5 years ago | Mark Daggett: Mark Daggett's Blog

Garry and I are planning our next big project together. In the spirit of agile development, and with the reality of limited funds, we are ruthlessly scoping our efforts around a minimal feature set. We want to develop just enough of the product to see if we have a hit. Typically, this process is described as developing the minimum viable product (MVP). The MVP approach targets the hardcore vocal minority who understand your offering and are likely to give you helpful insights on how to improve it. With this in mind we began to list our potential features and aggressively cut anything that wasn't essential. We tried a variety of approaches to identify our MVP, which included:

Sorting features in order of complexity, and identifying those with serial dependencies
Selecting only those features that touch the revenue line (a topic for another post)
Determining those features which could give us a competitive advantage over similar products

These thought experiments were helpful, but the focus felt myopic, and more about cutting than pruning; like shaping a bonsai tree blindfolded. However, while mowing the lawn (where I do much of my good thinking), I came up with a new approach: “The Minimum Viable Party”. A party seemed like a perfect metaphor for these reasons:

The goal of the product at this stage is to meet people, show them a good time and give them a complete experience they can give feedback on.
Parties are events with a specific beginning and end.
Being the host narrows your responsibilities to just throwing a great party. If you find yourself needing to first build the venue, or to start a catering company at the same time, then you are doing it wrong.
Parties are fun (even Goth Emo parties); they are about doing something you love, with others looking for the same thing.
A complete party is more than just good food. There are many aspects to consider, including venue, theme, duration, etc.
If it all goes horribly wrong you can recover. You just clean up the mess, pull the lawn chairs off the roof, get a tow company to dredge your car from the neighbor's pool, and go on with your life.

By breaking a party into smaller components you can map them onto the MVP. Now, I am not for a minute claiming that there is an absolute one-to-one mapping between party and product. However, the metaphor did allow me to consider the attributes of my product in a more objective and holistic way. For example, a decision on whether to spend money on party invitations could be construed as a marketing spend on promoting our product.

Planning a Minimal Viable Party

Here are the rules for the Minimal Viable Party thought experiment:

You are planning a party for people you do not know.
You have one week to plan and execute your party.
Without specifying an amount, you should assume that funds are very limited, which should force you to make decisions on how and where to spend your money.
The party is not catered, meaning that much if not all of the work should be done personally.

These rules led me to a series of questions to consider, which I've detailed below:

Q. How many guests should I invite?
A. You should invite the number of guests you can host comfortably. Everyone wants to feel special at the party, meaning you should know your limits before inviting others.
Insight: Many people focus on hockey-stick-style growth from the outset; that is a result of a good product, not the goal itself. At this stage the goal is to get to know the users, and the only way to do that is to ensure there is enough of you to go around.

Q. How do I entertain people I have never met?
A. Plan a party around the type of guest you'd want to see again. If you are a geek at heart, then have your party on the holodeck and don't mind the haters.
Insight: You can't please everyone, but it's important that you understand who you'd like as customers and friends. Ensure that your product gives them a memorable and enjoyable experience.

Q. What kind of food should I cook?
A. Be honest about your own cooking skills; anything you don't do well you should either eliminate or buy (even if this means you have to buy all the food).
Insight: No one wants to eat bad food; strangers will not give you an "A" for effort when eating your half-cooked hamburgers. The same is true for a poorly executed product. I continually have to fight the urge to be an everything expert. While striving to learn new things is a positive, not knowing (or ignoring) your weaknesses limits you from being effective under a deadline.

Q. How many courses should I prepare?
A. What would your ideal guest expect? Not everyone expects (or even wants) a five-course meal that takes hours to eat. What they will want is for it to feel complete, and that differs from person to person.
Insight: The expression "soup to nuts" is often used to describe a project completed from beginning to end. It alludes to a complete meal that included appetizers (soup), nuts (dessert) and everything in between. If you view your features through the lens of completeness, it should help you determine whether a feature is needed now or can wait.

Agile Product Manager - Are you there?

about 6 years ago | Bhuwan Lodha: What Not to Build

And I am back to my blog. Not that I had disowned Agilecat, but just that I have a little more time these days compared to the last three years, as I gear up to transition into a new product at a new organization. This comeback post is dedicated to all product managers who have reinvented themselves as ‘product owners’. I realize there has been a lot of deliberation and analysis on this topic, but in today's post I am going to touch upon just the one aspect that is crucial to both roles: ‘physical presence’. Yes, you have to be there, at the right time, with the right people! A good PM stays in constant contact with his ‘customers’, and to be a good PO, he must always be present with the scrum team. This is one of the largest points of variation between the job descriptions of these two titles. The Product Manager, by definition, is the voice of the market. He is coached to stay out: experience the souk, observe the users, study the customers and so on. The agile Product Owner role, by contrast, lays a lot of emphasis on conducting planning meetings, giving first-hand feedback to scrum teams, participating in review sessions and so on. So from a distance this looks pretty clear: the PM stays out of the office, while the PO sits alongside the team. Two roles, pretty distinct, right? If you look closely, these roles may appear to have dissimilar portrayals, but they share a common goal: connecting the dots between the market and the team. Yes, it is fundamental for a good PO to understand his customers and market, and equally imperative for the PM to identify with his team's dynamics. If one person plays both roles, he has to master that sweet balance of staying home with the team while also being out there in the field! Time management and self-prioritization become crucial for such an agile PM/PO.
His typical calendar week best describes the role he plays, and nothing else. Agile methods encourage frequent release iterations, and call for stakeholders, sponsors, etc. to be significantly involved in product development. This helps reduce risk, and at the same time makes it easier for the PM/PO to engage users and sponsors, by making sure that his product stays in touch with them while he stays in touch with the product. Exploiting social media tools to engage, listen and gauge market sentiment is not new anymore, and is increasingly mainstream in product management. While it clearly does not obviate the need for physical presence, it surely helps the PM/PO better prioritize his ‘facetime’. Some argue that being a product manager and an agile PO at the same time overburdens one person, and that is true, mostly because with that oath of “product owning” comes a never-ending endeavor to help build great agile teams along with winning products. Another way to look at this is to treat the agile team as another ‘product’ in the making, with the PO playing chief architect on that project.

UX as a discipline

about 6 years ago | Amit Anand: UXcorner

In this post I try to answer why, and where, UX as a discipline stands (or sits). I know it might be late for a post such as this; but each day, as I come across reasons to explain such placements, I felt I might as well take the opportunity to give […]
