
Who is talking?

Archive

Using Angular’s `ng-bind` to Eliminate Pre-render Flickering On Your Index

over 3 years ago | Christian Lilley: UI Guy

Angular’s double-curly-bracket notation is super-easy. Nay, elegant. It’s a great, straightforward way to demonstrate the power of data-binding in templates. But… there’s a drawback: until Angular has a chance to process those expressions, bind to them, and update, your users will see the brackets and the expression within them, rather than the content that should […]
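The excerpt cuts off, but the fix its title names is the `ng-bind` directive: move the expression out of the element's text and into an attribute, so the element is simply empty until Angular fills it in. A minimal hypothetical sketch (`pageTitle` is an invented scope property for illustration):

```html
<!-- Before: users briefly see the raw {{ }} expression until Angular compiles -->
<h1>{{pageTitle}}</h1>

<!-- After: the element stays empty until Angular writes the bound value in -->
<h1 ng-bind="pageTitle"></h1>
```

(Angular's `ng-cloak` directive is another stock option for hiding uncompiled template regions.)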

Simplest iterative algorithm: post-order traversal

over 3 years ago | Subodh Gupta: Subodh's Blog

    public class POIterateSimple {
        public static void main(String[] args) {
            pfIterate(PFIterate.NODE);
        }

        private static void pfIterate(Node root) {
            Node prev = null;
            Stack<Node> stack = new Stack<>();
            stack.push(root);
            while (!stack.isEmpty()) {
                root = stack.pop();
                if (root.left == null && root.right == null) {
                    // Leaf: visit immediately
                    System.out.print(root.data + " ");
                    prev = root;
                } else if (prev == root.left || prev == root.right) {
                    // Both subtrees already visited: visit this node
                    System.out.print(root.data + " ");
                    prev = root;
                } else {
                    // Push node back, then right and left so left is processed first
                    stack.push(root);
                    if (root.right != null)
                        stack.push(root.right);
                    if (root.left != null)
                        stack.push(root.left);
                }
            }
        }
    }

Transferring/Importing Emails Between Notes Accounts

over 3 years ago | Christian Lilley: UI Guy

There are days when IBM (Lotus) Notes makes me lose my faith in the forward progress of humanity, and specifically of the idea that most people do, in fact, care about their jobs and the products they build. This is one of those days. I am not the least bit convinced that anybody who works […]

Angular.js - Sharing data between controllers

almost 4 years ago | Rocky Jaiswal: Still Learning

A lot of times my friends ask me - "How do we share data between controllers in Angular.js?" Since services in Angular.js are injectable singletons, they seem like a good choice for sharing mutable data. But nothing is worth anything without some code. So here goes ...
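The singleton point is the crux, so here is a hypothetical, framework-free sketch of why a service works for sharing state: the injector builds the service once and hands every controller the same object, so a mutation made through one controller is visible to the others. (`makeInjector` and `SharedData` are invented names for illustration; in real Angular you would register the service with `angular.module(...).factory(...)`.)

```javascript
// Minimal injector cache simulating Angular's singleton services.
function makeInjector(factories) {
  var cache = {};
  return {
    get: function (name) {
      // Build the service the first time it's asked for, then reuse it.
      if (!(name in cache)) cache[name] = factories[name]();
      return cache[name];
    }
  };
}

var injector = makeInjector({
  // "SharedData" is a made-up service holding mutable shared state.
  SharedData: function () {
    return { items: [] };
  }
});

// Two "controllers" receive the exact same instance...
var a = injector.get('SharedData');
var b = injector.get('SharedData');

// ...so a mutation through one is visible through the other.
a.items.push('hello');
console.log(b.items[0]); // -> 'hello'
```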

Building a Forum with Clojure, Datomic, Angular, and Ansible

almost 4 years ago | Daniel Higginbotham: Flying Machine Studios

After many long months I've finished re-writing Grateful Place. The site now uses Clojure as an API server, with Datomic for the database, Angular for the front end, and Vagrant and Ansible for provisioning and deployment. This setup is awesome, the best of every world, and I love it. Below we'll dive into the code base, covering the most important parts of each component and how everything works together. We'll cover:

- Clojure API Server: Liberator for Easy API'ing; Going on a Spirit Journey with Friend; Testing a Clojure Web App is More Fun than Testing Rails; Serving files generated by Grunt/Angular; The Uberjar
- Datomic: Why Oh Why Did I Do This; The Poopy Code I Wrote to Make Basic Things "Easier"; The Good Code I Ripped Off to do Migrations; Mapification with Cartographer, My Very Own Clojure Library!!!
- Angular: Peeking and Secondary Controllers; Directives to the Rescue
- Infrastructure: Creating a Local Sandbox with Vagrant; Provisioning with Ansible; Building and Deploying with a Janky Bash Script and Ansible
- Development Workflow: Emacs Bookmarks and Keybindings; tmuxinator Config; Actually doing development

All source is available on github. This article isn't meant to be a tutorial. However, if you have any questions about how things work or about how to work on the codebase, please leave a comment and I'll do my best to clarify.

About the Site

Grateful Place is my attempt at creating the kind of online community that I'd like to belong to. It's still in its infancy, but I'd like for it to become a site where people consciously help lift each other up. One way to do this is by expressing gratitude on a daily basis, which science says increases happiness. Some of the features include watching forum threads, liking posts, and creating a basic profile.
I have a lot more planned, like tags and "summary" views, and I think the combination of Clojure and Angular will make it fun and easy for me to continue development :) If you want to have a look without diving head-first into hippified waters (what, do you have something against happiness?), you can use the demo site with username/password test101/test101. Be warned, though, that that server might have some bugs. Now, on to the code!

Clojure API Server

I'm very happy with using Clojure as an API server. The libraries involved are lightweight and transparent, with no magic, and that makes it so easy to fully understand what's going on. That was my experience with Liberator:

Liberator for Easy API'ing

Liberator provided me with a much better abstraction for handling logic which I was repeating in all of my controllers. For example, my create functions all basically looked like this before I moved to Liberator:

    (defn create! [params auth]
      (protect (:id auth)
        (if-valid params (:create validations/post) errors
          (let [post-map (create-post params)]
            {:body post-map})
          (invalid errors))))

The above code implements a decision tree:

1. First, the protect macro is used to ensure you're authorized to do whatever you're trying to do. The first argument is a boolean expression, in this case (:id auth), which just checks whether you're logged in.
2. If the boolean expression is true, run everything that follows. Otherwise return an error status and error messages (see implementation).
3. Check whether params is valid using the specified validation, in this case (:create validations/post).
4. If it's valid, run the let statement, otherwise make the validation errors available in errors and run (invalid errors).

There are a couple things that I didn't like about this approach. First, there was too much distance between the logical branches. For example, protect is basically an if statement, but the else is hidden. Also, the actual code I wrote in if-valid is a bit long, which makes it difficult to visually understand how (invalid errors) relates. Second, this approach required me to introduce more nesting in order to add more steps or checks in the workflow. This would make it even harder to understand as I'd mentally have to descend and ascend a few conditionals in order to understand what's going on. I'd end up with something like:

- Decision one
  - Decision one first branch: Decision two
    - Decision two first branch: Decision three
      - Decision three first branch
        ... Lots of code here physically creating distance between branches
      - Decision three second branch
        ... More code causing more distance
    - Decision two second branch
  - Decision one second branch... what was the decision even? I can't remember and now it's hard for me to visually associate this branch with its parent decision

So essentially, I'd have to keep an ever-growing decision tree in my head. The physical representation of the tree, the code, would help to obscure the logic flow as I added more code. Here's how the same function looks when rewritten using Liberator:

    (defresource create! [params auth]
      :allowed-methods [:post]
      :available-media-types ["application/json"]
      :authorized? (logged-in? auth)
      :malformed? (validator params (:create validations/post))
      :handle-malformed errors-in-ctx
      :post! (create-content ts/create-post params auth record)
      :handle-created record-in-ctx)

Holy shnikes! That's so much clearer! Liberator improved my code by providing a pre-defined, HTTP-compliant decision tree, providing sensible default logic for nodes, and by allowing me to easily associate my own logic with the nodes. This allows me to concentrate on one node at a time, instead of having to keep an increasingly complicated tree structure in my head. For example, I can physically place the logic for malformed? next to the code that I want to run if the request is malformed, specified by handle-malformed.
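The nested-conditionals problem described above isn't specific to Clojure, so here is a hypothetical plain-JavaScript miniature of the same idea (this is not Liberator, and the names `runResource`, `check`, and `onFail` are invented): instead of nesting conditionals, register each decision next to its failure handler in a flat table, and let a generic runner walk them in order.

```javascript
// Walk a flat list of decisions; the first failing check short-circuits
// with its own handler, so each branch lives next to its decision.
function runResource(decisions, request) {
  for (var i = 0; i < decisions.length; i++) {
    var d = decisions[i];
    if (!d.check(request)) return d.onFail(request);
  }
  // All decisions passed: a stand-in for the "happy path" handler.
  return { status: 201, body: 'created' };
}

var createPost = [
  { check: function (req) { return !!req.auth; },
    onFail: function () { return { status: 401, body: 'not logged in' }; } },
  { check: function (req) { return !!(req.params && req.params.title); },
    onFail: function () { return { status: 400, body: 'title required' }; } }
];

console.log(runResource(createPost, { auth: null }).status); // -> 401
console.log(runResource(createPost, { auth: { id: 1 }, params: { title: 'hi' } }).status); // -> 201
```

The point of the sketch is the shape, not the framework: adding a new step means adding one entry to the table rather than descending another level of nesting.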
Liberator has excellent documentation and using it is a big win. It lets me just plug my own bits of logic into a carefully-coded, useful HTTP framework. I definitely recommend it.

Going on a Spirit Journey with Friend

Friend still kinda makes my head hurt. It's a useful library that gets the job done, but I feel like using it requires poring over the source code until you attain that brief flash of enlightenment that allows you to pound out the code you need for as long as some dark, atavistic, pre-conscious part of your brain can hold everything together. After that you pray that you won't need to change anything because, Jesus, that trip can take a lot out of you. I don't know, maybe that's why peyote was invented. Anyway, that's a testament to my own need to learn (and perhaps a need for slightly clearer documentation) and not to the quality or value of the library itself. Everything hangs together, working with Ring in a stateless way, which I really appreciate. OK enough of my blathering. We want code! The actual tricky bits were:

- Getting Friend to return errors instead of redirecting
- Creating an authenticated session as soon as a user registers instead of requiring login

It turned out that the first wasn't all that difficult. I think. Here's the code on github. It's also listed below, in the next code block. The key bit is :login-failure-handler, which simply returns a map for Ring. I also have :redirect-on-auth? listed twice. I'm not sure if this is necessary but every once in a while I like to do some shaman programming, throwing chicken bones and listening to the wind in hopes that everything turns out OK. Things are working and I'm not going to mess with them.

Creating the authenticated session is a different story. There are a lot of things going on. In order, they are:

1. User submits registration form
2. Request goes through a bunch of Ring middlewares that wrap the request, adding keyword params and handling json and whatnot
3. Request hits the middleware created by Friend
4. The request "hits" the users/attempt-registration Friend workflow
5. If the registration is valid, return a friend authentication map. Friend "knows" that this is not meant to be a response sent to the browser, so the authentication map gets added to the Ring request map and the resulting map gets sent to the next Ring middleware
6. The next ring middleware is routes
7. The users/registration-success-response route matches
8. users/registration-success-response returns a Ring map, providing a body. The response is a map like {:id 1234 :username "flyingmachine"}. This then gets used by Angular.

Here's all the relevant code. Steps are indicated in brackets, like [1] or [2] or [3]. Step 1 is omitted as that's not code, you silly goose.

    ;; The ring app, https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/server.clj#L29
    (defn wrap [to-wrap]
      (-> to-wrap
          (wrap-session {:cookie-name "gratefulplace-session"
                         :store (db-session-store {})})
          (wrap-restful-format :formats [:json-kw])
          wrap-exception
          wrap-keyword-params
          wrap-nested-params
          wrap-params))

    ;; The ring app
    (def app
      (-> routes ;; [6] after a successful registration the routes
                 ;; middleware is called
          auth   ;; [3] after request is wrapped, send it to friend
          wrap)) ;; [2]

    ;; Friend middleware
    (defn auth [ring-app]
      (friend/authenticate
       ring-app
       {:credential-fn (partial creds/bcrypt-credential-fn credential-fn)
        :workflows [(workflows/interactive-form
                     :redirect-on-auth? false
                     :login-failure-handler
                     (fn [req]
                       {:body {:errors {:username ["invalid username or password"]}}
                        :status 401}))
                    users/attempt-registration ;; [4]
                    session-store-authorize]
        :redirect-on-auth? false
        :login-uri "/login"
        :unauthorized-redirect-uri "/login"}))

    ;; [4] Friend runs this workflow function. If the workflow function
    ;; returns falsey, then friend tries the next workflow function. In
    ;; this case, when a user submits a registration form then the `when`
    ;; boolean expression is true and the function will not return falsey.
    ;; If the registration is successful it will return an authentication
    ;; map and continue to step 5. If the registration is unsuccessful it
    ;; will return a Ring response map, which is basically a map that has
    ;; the keys :body or :status.
    ;; https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/controllers/users.clj#L20
    (defn attempt-registration [req]
      (let [{:keys [uri request-method params session]} req]
        (when (and (= uri "/users") (= request-method :post))
          (if-valid params (:create validations/user) errors
            ;; [5] Here's where we return the authentication map, which
            ;; Friend appends to the request map, sending the result to the
            ;; next middleware
            (cemerick.friend.workflows/make-auth
             (mapify-tx-result (ts/create-user params) record)
             {:cemerick.friend/redirect-on-auth? false})
            (invalid errors)))))

    ;; [7] The compojure route, https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/middleware/routes.clj#L67
    (authroute POST "/users" users/registration-success-response)

    ;; [8] the final step in our journey
    (defn registration-success-response [params auth]
      "If the request gets this far, it means that user registration was successful."
      (if auth {:body auth}))

I'm both proud and appalled that I wrote all that code.

Testing a Clojure Web App is More Fun than Testing Rails

For testing I decided to try out Midje. Midje is easy to get used to, and @marick has articulated a clear and compelling philosophy for it. But before we get into some code let me explain the heading, "testing a Clojure web app is more fun than testing Rails." This has to do with Clojure itself and not with any testing library. There's no real magic in any of the code I wrote. Everything is just a function.
You give it an input and it returns an output. You give your application a Ring request and it goes through all the layers and returns a Ring response. You don't have to do any crazy setup hijinks or create special environments like you do in Rails - especially like you have to do when testing controllers. This makes testing so much easier and more fun.

So, that said, I feel like there's not much remarkable with my tests. There's a lot of room for improvement. I ended up creating a lot of helper functions to DRY up my controller tests, and those might prove helpful to someone else. I also ended up writing a crazy-ass macro for creating functions with default positional arguments:

    (defmacro defnpd
      ;; defn with default positional arguments
      [name args & body]
      (let [unpack-defaults
            (fn [args]
              (let [[undefaulted defaulted] (split-with (comp not vector?) args)
                    argcount (count args)]
                (loop [defaulted defaulted
                       argset {:argnames (into [] undefaulted)
                               :application (into [] (concat undefaulted (map second defaulted)))}
                       unpacked-args [argset]
                       position (count undefaulted)]
                  (if (empty? defaulted)
                    unpacked-args
                    (let [argname (ffirst defaulted)
                          new-argset {:argnames (conj (:argnames argset) argname)
                                      :application (assoc (:application argset) position argname)}]
                      (recur (rest defaulted)
                             new-argset
                             (conj unpacked-args new-argset)
                             (inc position)))))))
            unpacked-args (unpack-defaults args)]
        `(defn ~name
           (~(:argnames (last unpacked-args))
            ~@body)
           ~@(map #(list (:argnames %) `(~name ~@(:application %)))
                  (drop-last unpacked-args)))))

    ;; Examples
    (defnpd response-data
      [method path [params nil] [auth nil]]
      (data (res method path params auth)))

    (defnpd res
      [method path [params nil] [auth nil]]
      (let [params (json/write-str params)]
        (server/app (req method path params auth))))

The next big step for me with testing is to get off my butt and figure out how to run some kind of autotest process with Midje. If you're new to Clojure and are wondering what testing library to use, I think clojure.test works just fine. It's easier to understand than Midje, but Midje seems more powerful.

Serving files generated by Grunt/Angular

While developing, the frontend files are located completely outside of the Clojure application. The directory structure looks like:

    /server
      /src
        /gratefulplace
          - server.clj
      /resources
        ...
    /html-app
      /app
        - index.html
      /.tmp
        /scripts
          - app.js
          /controllers
            - topics.js
      ...

So I needed some way to get the Clojure app to actually serve up these files. I also needed to be able to serve the files when they're packaged as resources in the final uberjar. This turned out to be really easy:

    ;; https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/config.clj
    ;; Example config
    (def conf (merge-with merge
                          {:html-paths ["html-app"
                                        "../html-app/app"
                                        "../html-app/.tmp"]}))

    (defn config [& keys]
      (get-in conf keys))

    ;; https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/middleware/routes.clj#L33
    ;; Serve up angular app
    (apply compojure.core/routes
           (map #(compojure.core/routes
                  (compojure.route/files "/" {:root %})
                  (compojure.route/resources "/" {:root %}))
                (reverse (config :html-paths))))

    ;; Route "/" to "/index.html"
    (apply compojure.core/routes
           (map (fn [response-fn]
                  (GET "/" [] (response-fn "index.html" {:root "html-app"})))
                [resp/file-response resp/resource-response]))

We're just iterating over each possible path for the front end files and creating both a file route and a resource route for them. This is a lazy way to do things, resulting in a few unnecessary routes. In the future, it would be nice to make the app "know" whether to use the single resource route, html-app, or whether it needs to use the file routes, ../html-app/app and ../html-app/.tmp.

The Uberjar

As I started to deploy the forum I found that I needed an easy way to run database-related tasks.
Here's what I came up with:

    (ns gratefulplace.app
      (:gen-class)
      (:require [gratefulplace.server :as server]
                [gratefulplace.db.manage :as db]))

    (defn -main [cmd]
      (cond (= cmd "server") (server/-main)
            ;; I know there's repetition here please don't hate me :'(
            (= cmd "db/reload") (do (println (db/reload))
                                    (System/exit 0))
            (= cmd "db/migrate") (do (println (db/migrate))
                                     (System/exit 0))))

So you can run java -jar gp2.jar server and get a server running, or reload the database or run migrations. I could also have used lein on the server, and I'll probably do that eventually. For now I'm just creating uberjars and copying them over. Holy cow, the Clojure section is over! Let's talk about Datomic now!

Datomic

Why Oh Why Did I Do This

When I set about re-writing the site it felt risky to use Datomic because a) I didn't know how to use it and b) it didn't seem like it would add much value over postgres or mysql for my tiny side project. But those were also compelling reasons to go with it: a) it's exciting to learn a completely new way of working with databases, designed by some really freaking smart people who know which side of the bread is buttered and b) it's just a tiny side project and I can do whatever I want. Ultimately I'm happy with the decision. I've learned a lot by researching Datomic (see "Datomic for Five-Year-Olds") and using it has afforded the same simple, lightweight experience as using Clojure. You won't find any mind-blowing code here – I'm still trying to learn how to use Datomic well – but hopefully you'll find it useful or interesting.
The Poopy Code I Wrote to Make Things "Easier"

I wrote a number of wrapper functions in the misleadingly-named gratefulplace.db.query namespace:

    (ns gratefulplace.db.query
      (:require [datomic.api :as d])
      (:use gratefulplace.config))

    ;; This is dynamic so I can re-bind it for tests
    (def ^:dynamic *db-uri* (config :datomic :db-uri))

    (defn conn [] (d/connect *db-uri*))
    (defn db [] (d/db (conn)))

    ;; Don't make me pass in the value of the database, that gets boring
    (def q #(d/q % (db)))

    ;; I'll give you an id, you give me a datomic entity or nil
    (defn ent [id]
      (if-let [exists (ffirst (d/q '[:find ?eid
                                     :in $ ?eid
                                     :where [?eid]]
                                   (db)
                                   id))]
        (d/entity (db) exists)
        nil))

    ;; Is this an entity?! Tell me!
    (defmulti ent? class)
    (defmethod ent? datomic.query.EntityMap [x] x)
    (defmethod ent? :default [x] false)

    ;; I'll give you some conditions, you'll give me an entity id
    (defn eid [& conditions]
      (let [conditions (map #(concat ['?c] %) conditions)]
        (-> {:find ['?c]
             :where conditions}
            q
            ffirst)))

    ;; I want one thing please
    (defn one [& conditions]
      (if-let [id (apply eid conditions)]
        (ent id)))

    ;; I want all the things please
    (defn all [common-attribute & conditions]
      (let [conditions (concat [['?c common-attribute]]
                               (map #(concat ['?c] %) conditions))]
        (map #(ent (first %))
             (q {:find ['?c]
                 :where conditions}))))

    ;; Passing the connection all the time is boring
    (def t #(d/transact (conn) %))

    (defn resolve-tempid [tempids tempid]
      (d/resolve-tempid (db) tempids tempid))

    ;; I make a lot of mistakes so please make it easy for me to retract them
    (defn retract-entity [eid]
      (t [[:db.fn/retractEntity eid]]))

Some of these functions simply reduce the code I write by a tiny bit, for example by allowing me to not pass a connection or database value into every single database-related function, which would make no sense for me as I only have one database. Others, like one and all, provide me with an "easier" way of performing common queries but at the expense of sometimes writing queries in roundabout ways or taking away some of my flexibility. For example, in the all function I'm limited to only one data source. The result is that I sometimes have to use the datomic.api functions in places where I'd prefer not to, and the codebase doesn't quite feel cohesive. One example of this is the query function in the watches controller:

    (defresource query [params auth]
      :available-media-types ["application/json"]
      :handle-ok (fn [ctx]
                   (map (comp record first)
                        (d/q '[:find ?watch
                               :in $ ?userid
                               :where
                               [?watch :watch/user ?userid]
                               [?watch :watch/topic ?topic]
                               [?topic :content/deleted false]]
                             (db/db)
                             (:id auth)))))

I have to call datomic.api/q directly because I want to pass in ?userid. I'm not sure whether I should drop these functions entirely and just use the datomic api or whether I should continue tweaking them to meet my needs.

The Good Code I Ripped Off to do Migrations

The gratefulplace.db.manage namespace has some code I stole and modified from Day of Datomic. It's a really cool, simple way of ensuring that migrations get run. The basic idea is that you keep track of schema names which have been installed, then install any schemas that haven't been installed. It's a simple, logical approach and the code that implements it is pretty neat, as you would expect from Stu Halloway.

Mapification with Cartographer, My Very Own Clojure Library!!!

Cartographer is the result of my attempt to easily do some processing and pull in relationships when converting a Datomic entity to a map. I think the README explains it all so you can learn more about it there.
Here are some of the maprules used in GP2:

    ;; https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/db/maprules.clj
    (defmaprules ent->topic
      (attr :id :db/id)
      (attr :title :topic/title)
      (attr :post-count (ref-count :post/topic))
      (attr :author-id (comp :db/id :content/author))
      (attr :last-posted-to-at (comp format-date :topic/last-posted-to-at))
      (has-one :first-post
               :rules gratefulplace.db.maprules/ent->post
               :retriever :topic/first-post)
      (has-one :author
               :rules gratefulplace.db.maprules/ent->user
               :retriever :content/author)
      (has-many :posts
                :rules gratefulplace.db.maprules/ent->post
                :retriever #(sort-by :post/created-at (:post/_topic %)))
      (has-many :watches
                :rules gratefulplace.db.maprules/ent->watch
                :retriever #(:watch/_topic %)))

    (defmaprules ent->post
      (attr :id :db/id)
      (attr :content (mask-deleted :post/content))
      (attr :formatted-content (mask-deleted #(md-content (:post/content %))))
      (attr :deleted :content/deleted)
      (attr :created-at (comp format-date :post/created-at))
      (attr :topic-id (comp :db/id :post/topic))
      (attr :author-id (comp :db/id :content/author))
      (attr :likers #(map (comp :db/id :like/user) (:like/_post %)))
      (has-one :author
               :rules gratefulplace.db.maprules/ent->user
               :retriever :content/author)
      (has-one :topic
               :rules gratefulplace.db.maprules/ent->topic
               :retriever :post/topic))

There are definitely some edge cases where this approach gets strained but overall it's served me well. I ended up creating a macro which allows you to easily create a function that, when applied to a datomic entity, returns a map using maprules created with Cartographer:

    ;; https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/server/src/gratefulplace/db/mapification.clj
    (defmacro defmapifier [fn-name rules & mapify-opts]
      (let [fn-name fn-name]
        `(defn- ~fn-name
           ([id#] (~fn-name id# {}))
           ([id# addtl-mapify-args#]
            (if-let [ent# (or (db/ent? id#) (db/ent id#))]
              (let [mapify-opts# (merge-with (fn [_# x#] x#)
                                             ~@mapify-opts
                                             addtl-mapify-args#)]
                (flyingmachine.cartographer/mapify ent# ~rules mapify-opts#))
              nil)))))

Angular

I've been learning Angular since last November and I love it. Using it, I feel like I finally have the right tools for creating web apps.

Peeking and Secondary Controllers

I wanted to implement the idea of "peeking" at things on the forum. For example, if you click on a user link you'll just view a summary of his info in the right column instead of completely leaving the page you're on. The idea is that, while reading a thread, you might find a response interesting. You want to know a little more about the author but don't want to lose your place. So you "peek" at him, which shows you some info and preserves your place in the thread. It was just something fun I wanted to try. However, as far as I know Angular doesn't make this very easy for you. The approach I took was to have a Foundation controller which places the Support service on the scope. Since all other controllers are nested under Foundation, they'll have access to $scope.support. The purpose of Support is to define a way to show views in the right column and make data accessible to the view.
For example, the author directive has the following:

    # https://github.com/flyingmachine/gratefulplace2/blob/v1.0.0/html-app/app/scripts/directives/author.coffee#L8
    $scope.peekAtAuthor = (author)->
      User.get id: author.id, (data)->
        _(data.posts).reverse()
        data.posts = _.take(data.posts, 3)
        Support.peek.show("user", data)

The base view has the following:

    <div id="more">
      <nav class="secondary">
        <ng-include src="support.secondaryNav.include()"></ng-include>
      </nav>
      <ng-include src="support.peek.include()"></ng-include>
    </div>

And the user peek looks like this:

    <div class="peek">
      <div class="user">
        <h3 class="username">{{support.peek.data.username}}</h3>
        <div class="about" ng-bind-html-unsafe="support.peek.data['formatted-about']"></div>
      </div>
      <div class="recent-posts">
        <h4>Recent Posts</h4>
        <div class="post" ng-repeat="post in support.peek.data.posts">
          <date data="post['created-at']"></date>
          <div>
            <a href="#/topics/{{post.topic.id}}">{{post.topic.title || 'view topic'}}</a>
          </div>
          <div class="content" ng-bind-html-unsafe="post.content"></div>
        </div>
      </div>
    </div>

So, ultimately, what's happening is that when you call Support.peek.show("user", data), it sets some variables so that the view associated with the "user" peek is shown. That view then accesses the data you passed to Support.peek.show with, e.g., support.peek.data.username. I know this isn't a super-detailed explanation of what's going on, but I hope some investigation of the code will answer any questions you might have.

Directives to the Rescue

Angular directives are as powerful as everyone says they are, and I think I'm finally utilizing them well. You can see all my directives on github. This article is already 500 times too long so I won't go into any details, but if you're looking to understand Angular better, read this excellent SO response to How do I "think in AngularJS/EmberJS (or other client MVC framework)" if I have a jQuery background?.
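As a rough illustration of the peek mechanism (this is an invented plain-JavaScript reduction, not the actual Support service, and the template path convention is assumed): show() just records which right-column view to render and its data, and include() derives the template URL that the base view's ng-include binds to.

```javascript
// Hypothetical miniature of the Support "peek" service described above.
var Support = {
  peek: {
    view: null,
    data: null,
    show: function (view, data) {
      // Record which right-column view to render and what it should display.
      this.view = view;
      this.data = data;
    },
    include: function () {
      // The value an <ng-include src="support.peek.include()"> would read.
      // 'views/peek/...' is an assumed path convention for illustration.
      return this.view ? 'views/peek/' + this.view + '.html' : null;
    }
  }
};

// A "controller" peeks at a user; the view re-renders from peek.data.
Support.peek.show('user', { username: 'flyingmachine', posts: [] });
console.log(Support.peek.include());     // -> 'views/peek/user.html'
console.log(Support.peek.data.username); // -> 'flyingmachine'
```

Because include() returns null until the first show(), the ng-include region simply stays empty until something is peeked at, which matches the behavior the article describes.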
Infrastructure

Because GP2 uses Datomic Free, I couldn't deploy to Heroku. This meant having to actually handle provisioning a server myself and deploying without following a tutorial. In the end things are working well. The site's residing on a Digital Ocean (https://www.digitalocean.com/) server, which has been very easy to work with.

Creating a Local Sandbox with Vagrant

Creating a local sandbox lets you make all your provisioning mistakes more quickly. If you're creating a new provisioning script or tweaking your existing one, you should do it in a virtual machine. Vagrant makes this process as easy as an old shoe. Once you've installed virtualbox and vagrant all you have to do is run vagrant up from the infrastructure directory and you'll have a virtual machine ready to go. The Vagrant web site has excellent tutorials so check it out if you want to learn more.

Provisioning with Ansible

Ansible's supposed to be super simple compared to Puppet and Chef. I found it easy to learn. It's also simple enough to easily modify scripts and powerful enough to do exactly what I want it to, which is provision a server with Java and Datomic and deploy my app to it. You can check out my setup in infrastructure/ansible. If you're using Datomic free please do use it as a starting point. provision.yml has just about everything you need to get a server up and running, with the exception of uploading SSH keys. deploy.yml is used by the janky bash script below to upload an uberjar, run migrations, and restart the server.

Building and Deploying with a Janky Bash Script and Ansible

Here are my janky Bash scripts, which first build the app and then deploy it with Ansible:

    # build.sh
    #!/bin/bash
    cd html-app
    grunt build
    rm -Rf ../server/resources/html-app
    cp -R targets/public ../server/resources/html-app
    cd ../server
    lein uberjar
    cd ..
    cp server/target/gratefulplace-0.1.0-SNAPSHOT-standalone.jar infrastructure/ansible/files/gp2.jar

    # deploy.sh
    #!/bin/bash
    die () {
        echo >&2 "$@"
        exit 1
    }

    if [ "$#" -eq 0 ]
    then
        INVENTORY="dev"
    else
        INVENTORY=$1
    fi

    [ -e infrastructure/ansible/$INVENTORY ] || die "Inventory file $INVENTORY not found"

    ./build.sh
    cd infrastructure/ansible/
    ansible-playbook -i $INVENTORY deploy.yml

Workflow

OMG this article is almost over! Listen, I know you don't need to know this and it makes no difference to you but I am out here in the North Carolina heat sweating my ass off trying to finish this article so I can get on with my day. So it's pretty exciting that we're almost done. Anyway - here are workflow improvements I developed over the course of this project. You might also want to check out My Clojure Workflow, Reloaded.

Emacs Bookmarks, Snippets, and Keybindings

I created a bookmark to open my server/src/gratefulplace/server.clj file with just a few keystrokes instead of having to navigate to it. I recommend doing this for any project which you'll be toiling over for months on end!

Keybindings

Behold, my very first keybinding! This starts the Jetty server:

    (defun nrepl-start-http-server ()
      (interactive)
      (nrepl-load-current-buffer)
      (nrepl-set-ns (nrepl-current-ns))
      ;; (with-current-buffer (nrepl-current-repl-buffer)
      ;;   (nrepl-send-string "(def server (-main)) (println server)"))
      (nrepl-interactive-eval
       (format "(println '(def server (%s/-main))) (println 'server)" (nrepl-current-ns)))
      (nrepl-interactive-eval
       (format "(def server (%s/-main)) (println server)" (nrepl-current-ns))))

    (eval-after-load 'nrepl
      '(define-key clojure-mode-map (kbd "C-c C-v") 'nrepl-start-http-server))

So, once you have server.clj open and you've run nrepl-jack-in you can hit C-c C-v to start the server. Also check out the nrepl keybindings for some great workflow helpers.

tmuxinator Config

In order to do development you need to have Datomic and Grunt running.
Instead of having to open up a bunch of terminal tabs and handle all that manually every time I want to start working, I use tmuxinator so that I can get my environment set up in one command. Here's my config:

```yaml
# ~/.tmuxinator/nicu.yml
# you can make as many tabs as you wish...

project_name: gp2
project_root: ~/projects/web_sites/gp2
rvm: 1.9.3
tabs:
  - angular_server: git pull && cd html-app && grunt server
  - datomic: datomic
  - shell:
```

I also have these nice little bash aliases:

```bash
alias "tmk"="tmux kill-session -t"
alias "datomic"="~/src/datomic/bin/transactor ~/src/datomic/config/samples/free-transactor-template.properties"
```

Actually Doing Development

So, in order to get to the point where you can actually start writing code and seeing the results, do the following:

1. Install Datomic and set up your own datomic alias
2. Run mux gp2 to start tmux with your tmuxinator conf
3. Open Emacs
4. Hit C-x r l to open your list of bookmarks and choose the bookmark for server.clj
5. Run M-x nrepl-jack-in in Emacs
6. Hit C-c C-v to start the Jetty server

The End

That's it! I hope you've found this article useful. I'm going to go have a life for a little while now. Haha, just kidding! I'm going to spend the next two hours hitting refresh on my reddit submission!

Angular.js with Scalatra

almost 4 years ago | Rocky Jaiswal: Still Learning

Angular.js is pretty much my favorite way to develop web applications as of now. For building simple applications with Angular.js I look for a basic backend through which I can add persistence or do some heavy lifting. Node.js is one possible backend which does the job and is pretty fast ...

How to inject multiple endpoints in an SEI using Camel's @EndpointInject

almost 4 years ago | Subodh Gupta: Subodh's Blog

By default you can apply the @EndpointInject annotation to a single method of an interface, like:

```java
public interface MyListener {
    @EndpointInject(uri = "activemq:foo.bar")
    String sayHello(String name);
}
```

What if you need multiple methods like this, with @EndpointInject happening over them for each endpoint, e.g.:

```java
public interface MyListener {
    @EndpointInject(uri = "direct:foo")
    String sayHelloFoo(String name);

    @EndpointInject(uri = "direct:bar")
    String sayHelloBar(String name);
}
```

The simple solution I have working involves Spring's FactoryBean implementation. It needs the following steps:

```xml
<!-- Define the route for which you need injection to happen -->
<bean id="r" class="MyListener" />

<!-- Define a producer template having that route -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <template id="producerTemplate" />
    <routeBuilder ref="r" />
</camelContext>

<!-- Define a FactoryBean for the proxy instance -->
<bean id="definedCamel" class="CamelFactoryBean">
    <constructor-arg index="0" value="MyListener" />
    <constructor-arg index="1" ref="producerTemplate" />
</bean>
```

The FactoryBean will create an instance of a custom InvocationHandler and return it; the handler will invoke the endpoint on the ProducerTemplate instance.

The Art of Sampling and Dangers of Generalizing

almost 4 years ago | Niraj Bhandari: Technology Product Management

While driving to work today, I was listening to the radio and suddenly a claim by the RJ (Radio jockey) struck …Continue reading →

A Taste of the λ Calculus

almost 4 years ago | Daniel Higginbotham: Flying Machine Studios

I've been having a brain-bending good time reading An Introduction to Functional Programming Through Lambda Calculus. Using examples from that book, this article will walk you through the basics of λ calculus. We'll then look at the surprising, counterintuitive way that the λ calculus lets us represent conditional expressions and boolean operations — all with functions as the only values. It was a very different way of thinking for me, and exciting to learn. I hope it's exciting for you, too!

A Bit of History

As every aspiring greybeard knows, the λ calculus was invented by Alonzo Church in response to David Hilbert's 1928 Entscheidungsproblem. The Entscheidungsproblem inspired another computational model which you may have heard of, the Turing Machine. The λ calculus is one of the foundations of computer science. It's perhaps most famous for serving as the basis of Lisp, invented (or discovered, if you prefer to think of Lisp as being on par with the theory of gravity or the theory of evolution) by John McCarthy (http://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)) in 1958. Indeed, by examining the λ calculus, you can see where Lisp derives its beauty. The λ calculus has a lean syntax and dead-simple semantics, the very definition of mathematical elegance, yet it's capable of representing all computable functions.

Enough History! Tell Me About λ Expressions!

The λ calculus is all about manipulating λ expressions. Below is its specification. If you don't know what something means, don't worry about it at this point - this is just an overview and we'll dig into it more.
```
<expression>          ::= <name> | <function> | <application>
<name>                ::= any sequence of non-blank characters
<function>            ::= λ<name>.<body>
<body>                ::= <expression>
<application>         ::= (<function expression> <argument expression>)
<function expression> ::= <expression>
<argument expression> ::= <expression>
```

```
;; Examples

;; Names
x
joey
queen-amidala

;; Functions
;; Note that functions always have one and only one parameter
λx.x
λy.y ;; equivalent to above; we'll get into that more
λfirst.λsecond.first ;; the body of a function can itself be a function
λfn.λarg.(fn arg)

;; Application
(λx.x λx.x)
((λfirst.λsecond.first x) y)
```

There are two super-cool things about this specification. First, it really boils down to four elements: names, functions, application, and "expressions" which can be any of the above. That's awesome! Second, function bodies and function application arguments can be any expression at all, meaning that a) functions can take functions as arguments and b) functions can return functions. You can see how this is directly related to functional programming, where you have first-class functions and higher-order functions.

This is interesting in itself as it gives you a glimpse of the theoretical underpinnings of functional programming. But it gets way, way cooler. By the end of this article you'll see how conditions and boolean operations can be represented in terms of functions and functions that operate on functions. In order to get there, let's first look at how function application works. Then we'll go over some basic but crucial functions.

Function Application

When you apply a function to an argument expression, you replace all instances of name within the function's body with the argument expression. Keep in mind that we're talking about a mathematical system here, not a programming language. This is pure symbol manipulation, without any regard for how actual hardware will carry out the replace operation mentioned above.
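To make the grammar concrete, here's a small sketch (my own, not the book's) of λ expressions as a tiny Python data structure, with a couple of the examples above built from it:

```python
from dataclasses import dataclass

# <expression> ::= <name> | <function> | <application>

@dataclass
class Name:
    name: str

@dataclass
class Function:       # λ<name>.<body>
    param: str
    body: object      # any expression

@dataclass
class Application:    # (<function expression> <argument expression>)
    func: object      # any expression
    arg: object       # any expression

# λx.x
identity = Function("x", Name("x"))

# ((λfirst.λsecond.first x) y)
expr = Application(
    Application(Function("first", Function("second", Name("first"))), Name("x")),
    Name("y"))

assert identity.param == "x"
assert isinstance(expr.func, Application)
```

Note how the three node types mirror the BNF exactly: a function body and an application's operands can each be any of the three kinds of expression.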
Let's start to flesh out this purely abstract notion of function application with some examples, starting with the identity function:

```
;; Identity function
λx.x
```

As you would expect, applying this function to an argument expression returns the argument expression. In the example below, don't worry about where "foo" comes from:

```
;; Apply the identity function to foo
(λx.x foo)

;; After replacing all instances of x within the body, you get:
foo
```

Makes sense, right? I'm sure that you can intuitively understand what's going on in function application. Nevertheless, I think we can make it clearer by looking at a few examples:

```
(λs.(s s) foo)           => (foo foo)
(λx.λy.x foo)            => λy.foo
(λa.λb.λc.((a b) c) foo) => λb.λc.((foo b) c)
```

For a more thorough explanation of what's going on here, please see Jon Sterling's comment below!

Now that we understand how to apply functions, let's explore a few more basic functions.

The Self-Application Function

The self-application function evaluates to the application of its argument to itself:

```
λs.(s s)
```

Let's see an example:

```
;; Apply the self-application function to the identity function
(λs.(s s) λx.x)

;; Perform replacement - results in an application
(λx.x λx.x)

;; Perform another replacement
λx.x
```

Now let's make things interesting:

```
;; Apply the self-application function to itself
(λs.(s s) λs.(s s))

;; Perform replacement
(λs.(s s) λs.(s s))

;; Hmmm this is exactly like the first expression. Let's perform
;; replacement again just for kicks
(λs.(s s) λs.(s s))
```

How about that, it turns out that it's possible for evaluation to never terminate. Fun!
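Function application here is just substitution, and you can mimic it with ordinary Python lambdas. A small sketch (my own, not from the book) mirroring the reductions above:

```python
# λx.x — the identity function
identity = lambda x: x

# (λx.x foo): applying identity returns the argument unchanged
assert identity("foo") == "foo"

# (λx.λy.x foo) => λy.foo: the first name is replaced, leaving a
# function that ignores its own argument
constantly_foo = (lambda x: lambda y: x)("foo")
assert constantly_foo("bar") == "foo"

# λs.(s s) — the self-application function. Note that evaluating
# (λs.(s s) λs.(s s)) in Python would recurse forever, so here we only
# apply it to identity, where it terminates.
self_apply = lambda s: s(s)
assert self_apply(identity) is identity
```

Python evaluates eagerly, so the non-terminating self-application example above really would hang an interpreter, which is a nice concrete demonstration of the point.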
The Function Application Function

Check this out:

```
λfunc.λarg.(func arg)
```

This function takes a function as its argument, returning a function:

```
(λfunc.λarg.(func arg) λx.x)
=> λarg.(λx.x arg)
```

When you apply this resulting function to an argument, the end result is that the function you supplied as the first argument gets applied to the current argument:

```
;; Notice the identity function nestled next to the
;; second left parenthesis
(λarg.(λx.x arg) λs.(s s))
```

Here's the whole application:

```
((λfunc.λarg.(func arg) λx.x) λs.(s s))
=> (λarg.(λx.x arg) λs.(s s))
=> (λx.x λs.(s s))
=> λs.(s s)
```

Is your head hurting yet? I sure hope so! That's your brain's way of letting you know that it's learning! We're starting to get a hint of the cool things you can do with the λ calculus. It only gets cooler from here!

Interlude: Give the Functions Names, Already!

Before this post gets overwhelmed with "λx.x" and "λs.(s s)" and such, let's introduce some syntax:

```
;; Name functions
def <name> = <function>

;; Examples
def identity   = λx.x
def self_apply = λs.(s s)
def apply      = λfunc.λarg.(func arg)
```

Now wherever we see <name>, we can substitute <function>. Examples:

```
(identity identity)
=> (λx.x identity)
=> identity

(self_apply identity)
=> (λs.(s s) identity)
=> (identity identity)
=> identity

((apply identity) self_apply)
=> ((λfunc.λarg.(func arg) identity) self_apply)
=> (λarg.(identity arg) self_apply)
=> (identity self_apply)
=> self_apply
```

Make sense? Excellent! This will let us break your brain with greater efficiency. Now pay attention, because things are about to get super flippin' fantastic.

Argument Selection and Argument Pairing Functions

In the λ calculus, functions by definition have one and only one parameter, the name. This might seem limiting, but it turns out that you can build functions which allow you to work on multiple arguments. The following functions together allow you to select either the first or the second of two arguments.
We'll look at them all together first and then dig in to see how they work together.

```
def make_pair     = λfirst.λsecond.λfunc.((func first) second)
def select_first  = λfirst.λsecond.first
def select_second = λfirst.λsecond.second
```

select_first and select_second do what their names suggest, selecting either the first or second of two arguments. They have the same underlying structure; they're both functions which take a first argument and evaluate to a function. This function is applied to a second argument. select_first returns first, and select_second returns second. Let's see how this works with select_first:

```
;; Start here
((select_first identity) apply)

;; Substitute the function itself for "select_first"
((λfirst.λsecond.first identity) apply)

;; Perform the first function application, replacing "first" with "identity".
;; This returns another function, which we'll apply to a second argument.
;; Notice that the body of the resulting function is "identity", and
;; the name "second" doesn't appear in the body at all
(λsecond.identity apply)

;; Apply function. Since "second" doesn't appear in the function body,
;; it disappears into the ether.
identity
```

select_second uses the same principle:

```
((select_second identity) apply)
((λfirst.λsecond.second identity) apply)
(λsecond.second apply)
apply
```

So, select_first and select_second are able to operate on a pair of arguments. But how do we create pairs for them to work on? make_pair creates a "pair" by returning a function which expects either select_first or select_second as its argument. This is awesome - we don't need any data structures to represent a pair, all we need are functions!
Let's actually create a pair:

```
;; Start here
((make_pair identity) apply)

;; Substitute the actual function
((λfirst.λsecond.λfunc.((func first) second) identity) apply)

;; Perform first function application, replacing "first" with "identity"
(λsecond.λfunc.((func identity) second) apply)

;; Perform remaining function application, replacing "second" with "apply"
λfunc.((func identity) apply)
```

This resulting function looks very familiar! Let's compare it with our select_first and select_second applications above:

```
;; Result of ((make_pair identity) apply)
λfunc.((func identity) apply)

;; Application of select_first and select_second
((select_first identity) apply)
((select_second identity) apply)

;; Apply the result of make_pair to select_first
(λfunc.((func identity) apply) select_first)
=> ((select_first identity) apply)
```

So, to reiterate, make_pair works by taking a first argument. This returns a function which takes a second argument. The result is a function which you can apply to either select_first or select_second to get the argument you want. This is super freaking cool! A pair is a function which has "captured" two arguments and which you then apply to a selection function.

Starting with just four basic constructs – names, functions, applications, expressions – and five simple rules for performing function application, we've been able to construct pairs of arguments and select between them. And things are about to get even more fun! We're now ready to see how we can create conditional expressions and boolean operations purely using λ expressions.

Conditional Expressions and Boolean Operations

The upcoming treatment of conditional expressions and boolean operations is going to look kinda weird at first. You'll want to keep in mind that in abstract math, elements don't have any inherent meaning but are defined by the way in which they interact with each other — by their behavior.
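Since Python's functions are first-class too, make_pair and the selectors transliterate directly into Python lambdas. A sketch (mine, not the book's) of the pair mechanics just described:

```python
# def make_pair = λfirst.λsecond.λfunc.((func first) second)
make_pair = lambda first: lambda second: lambda func: func(first)(second)

# def select_first = λfirst.λsecond.first
select_first = lambda first: lambda second: first

# def select_second = λfirst.λsecond.second
select_second = lambda first: lambda second: second

# A "pair" is just a function waiting to be handed a selector
pair = make_pair("identity")("apply")
assert pair(select_first) == "identity"
assert pair(select_second) == "apply"
```

No tuple, list, or other data structure appears anywhere: the two values are "captured" in the closure, exactly as the substitution walkthrough above shows.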
For our purposes, the behavior of a conditional expression is to select between one of two expressions, as shown by the following pseudocode:

```
if true
  <expression>
else
  <expression>
end
```

Hmm... selecting between two expressions... we just went over that! make_pair gave us a pair of expressions to choose between using either select_first or select_second. Because these functions result in the exact same behavior as if/else, let's go ahead and repurpose them:

```
;; This is identical to make_pair
def cond = λe1.λe2.λc.((c e1) e2)
def true = select_first
def false = select_second

;; Apply a conditional expression to true, aka select_first
(((cond <e1>) <e2>) true)
=> <e1>

;; Apply a conditional expression to false, aka select_second
(((cond <e1>) <e2>) false)
=> <e2>
```

You're probably not used to thinking of a conditional expression as a function which you apply to either true or false, but it works!

NOT, AND, OR

NOT can be seen as:

```
if x
  false
else
  true
end
```

So, if x is true then false is selected, and if x is false then true is selected. Let's look at this using the cond expressions above:

```
;; In general
(((cond <e1>) <e2>) true)  => <e1>
(((cond <e1>) <e2>) false) => <e2>

;; For NOT
(((cond false) true) true)  => false
(((cond false) true) false) => true

;; So in general, we can say:
def not = λbool.(((cond false) true) bool)

;; We can simplify this, though I'm lazy and won't show how:
def not = λbool.((bool false) true)
```

AND can be seen as:

```
if x
  y
else
  false
end
```

In other words, if x is true then the value of the expression is the value of y, otherwise it's false.
Here's how we can represent that:

```
def and = λx.λy.((x y) false)
```

Keep in mind that true is select_first and false is select_second:

```
;; select_first is true
;; when x is true, the value of the entire expression is the value of y
(λx.λy.((x y) false) select_first)
=> λy.((select_first y) false)

;; select_second is false
;; when x is false, the second argument, false, is selected
(λx.λy.((x y) false) select_second)
=> λy.((select_second y) false)
```

We can treat OR similarly:

```
if x
  true
else
  y
end
```

We can capture this with:

```
def or = λx.λy.((x true) y)
```

I won't work this one out - I'll leave it "as an exercise for the reader." :)

The End

I hope you've enjoyed this brief taste of the λ calculus! We've only scratched the surface of the kinds of neat things it's capable of. If you thought this article was fun, then I definitely recommend An Introduction to Functional Programming Through Lambda Calculus. This fun tome provided most or all of the examples I've used, though I've tried to present them in a way that's easier to understand. I also recommend The Art of Lisp & Writing, which conveys the beauty and joy of coding in Lisp.
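As a quick sanity check, the boolean encodings from this article also map directly onto Python lambdas. This sketch is my addition (names carry a trailing underscore because not, and, and or are Python keywords):

```python
# Church booleans: true selects the first argument, false the second
true = lambda first: lambda second: first     # select_first
false = lambda first: lambda second: second   # select_second

# def not = λbool.((bool false) true)
not_ = lambda b: b(false)(true)

# def and = λx.λy.((x y) false)
and_ = lambda x: lambda y: x(y)(false)

# def or = λx.λy.((x true) y)
or_ = lambda x: lambda y: x(true)(y)

assert not_(true) is false
assert not_(false) is true
assert and_(true)(true) is true
assert and_(true)(false) is false
assert or_(false)(false) is false
assert or_(false)(true) is true
```

Each assertion is one of the reductions worked out above; everything reduces to either select_first or select_second, with functions as the only values in sight.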

Tessel Runs JavaScript Right On The Device

almost 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

Node.js was probably the first step in widening JavaScript usage across various platforms and devices. But things are evolving rapidly and we already have hardware which runs JS.

Structure JavaScript with Backbone

almost 4 years ago | Rocky Jaiswal: Still Learning

These days I find myself saying "structure the JavaScript code better" a lot. Also, some of my friends assume that Backbone.js or Angular.js are only good for Single Page Applications. Since most applications work using server side templating and having a JavaScript file or so per page, m ...

Node.js and Express - Strange Http Status Codes

almost 4 years ago | Dave Kerr: dwmkerr.com

In a Nutshell

Sending a response in Express with a call like res.send(status, body) will send body as the status code if it is numeric - ignoring status. This is due to a fudge for backwards compatibility.

The Details

As part of a project I'm working on, I'm writing a service using Node.js and Express. This service exposes some entities in a MongoDB database through a REST API. Typically I hit this API through client-side JavaScript, but in some places I want to hit the same API from some C# code - and I don't want to have to create classes for everything. I've got a funky library for this which I'll be publishing soon, but it helped me find a problem. Testing the C# code showed me something that was a bit odd - GETs and POSTs were working fine, but PUTs and DELETEs were showing an HTTP status code of '1' (which isn't a valid code). Checking the node server showed the same thing - DELETEs were returning status 1. The server code is very lightweight so it's quick to see what's going on:

```js
exports.deleteUser = function(request, response) {
    // Get the id.
    var id = request.params.id;

    // Log the user id.
    console.log('Deleting user: ' + id);

    // Get the users collection, delete the object.
    db.collection(collectionName, function(err, collection) {
        collection.remove({'_id': new BSON.ObjectID(id)}, {safe: true}, function(err, result) {
            if (err) {
                console.log('Error deleting user: ' + err);
                response.send(400, {'error': 'An error has occurred'});
            } else {
                console.log('' + result + ' document(s) deleted');
                response.send(result);
            }
        });
    });
}
```

The function is called successfully, so we hit 'response.send'.
This looks like the problem - the result object is simply the number one. Checking the Express API documentation for send shows some examples like this:

```js
res.send(new Buffer('whoop'));
res.send({ some: 'json' });
res.send('some html');
res.send(404, 'Sorry, we cannot find that!');
res.send(500, { error: 'something blew up' });
res.send(200);
```

So just like the final example, we're sending the code 1, which is not valid. What surprised me was what happened when I changed the send call to the below:

```js
response.send(200, result)
```

I was still getting the code 1 returned. It turns out that this is a kind of undocumented oddity of Express - if you pass a numeric code and the second argument is also numeric, it sends the second argument as the status. In response.js of Express we find:

```js
res.send = function(body){
  var req = this.req;
  var head = 'HEAD' == req.method;
  var len;

  // allow status / body
  if (2 == arguments.length) {
    // res.send(body, status) backwards compat
    if ('number' != typeof body && 'number' == typeof arguments[1]) {
      this.statusCode = arguments[1];
    } else {
      this.statusCode = body;
      body = arguments[1];
    }
  }
```

So it seems Express used to support a call like res.send({body}, 200) - and checks for a numeric second argument for backwards compatibility. The workaround: don't send numbers as any part of the response unless it's most definitely the status code. If you want to return the number of documents deleted, format it as JSON first, otherwise Express will get confused and mess with your status codes.

DDD North 2013

almost 4 years ago | Jimmy Skowronski: jimmy skowronski

Yay! Yet another DDD (http://www.dddnorth.co.uk) is coming in October. And it's on my birthday! I've just posted two sessions but there may be more (see here). I hope I will get your vote.

Building Single Sign On websites

Meet Dave. Dave is like you and he has a problem. He found that great website but he needs to register to use it. That means he needs to create yet another user name and password. And he has to remember it or write it on that big post-it on his monitor - booo! So Dave decided he will not register and he will look for another website he can use without creating yet another password. There are plenty of people like Dave. He may be your user, or you may be like him. But we need users and passwords and permissions and all that stuff on our websites. Here is an idea. What if you could delegate all that somewhere and let someone else worry about passwords, security and all that boring stuff - yayyy! This session will show you how to delegate your authentication somewhere else. You will learn the basic theory behind Single Sign On and delegated authentication concepts.

Practical use of SQL Server events

Databases, good old databases, we all love them when they work as we want. When they don't... well, it's a totally different story. Most of us have been in a sticky situation when our queries didn't perform quite the way we expected. Sometimes we are lucky and we can isolate the troublesome query and analyse it. In some cases, however, our troublemaker is part of a complex system and then things tend to go nasty. There are many ways you can try to find your way. This session will show you one of them, which uses SQL Server events to capture some useful information about your query, such as wait stats or the execution plan. This is going to be a very practical session demonstrating the application of a specific technique to solve a specific problem. There will be no new frameworks or methodologies, just good old problem solving.

This is related to this post

Big Data, what it can and can't do

almost 4 years ago | Adhir Aima: tech blogs

With all the hype around big data, I happened to attend the Fifth Elephant 2013 conference to understand the playing field better. The speaker list was impressive and had some industry bigwigs like Dr. Edouard Servan-Schreiber, Director of Solution Architecture at 10gen, the MongoDB company; Dr. Shailesh Kumar, Member of Technical Staff at Google, Hyderabad; and Andreas Kollegger, Experience Architect at Neo4j, to mention a few. The experience was thoroughly fulfilling and it was nice to rub shoulders with the local tech community and connect on such a scale. It's just fascinating to see the amount of data that some companies generate, capture and operate upon on an everyday basis. This blog contains my take on the technology's applications and limitations; again, thoughts may vary, and that's why we have a comments section.

First, I would discuss where we cannot apply or use bigdata/NoSQL paradigms:

- It cannot be used for applications and systems which have a high volume of transactions that are long/complex, or where the system requires multiple join queries. That's something no NoSQL implementation guarantees so far. It may be on the cards but seems unlikely, as it would take away the flavor of the non-RDBMS implementation.
- It cannot be applied to legacy systems which are tightly coupled with the database systems. E.g., in one of my previous projects, one application was very DB heavy, as in it had a lot of functions and stored procedures which were the core of the application logic. So, even though the app had a huge amount of data, this coupling makes it difficult to move to a NoSQL implementation.
- It is not a choice for applications which deal with a small amount of unstructured data. Honestly, because we cannot use an elephant to scare the mouse; a cat would do just fine.
- It essentially cannot be used for anything that operates in real time, e.g. capturing data from an F1 car to do real-time diagnostics and see where a problem might come from (or maybe we can, if a little bit of latency would not be a problem). NoSQL/bigdata have given us the power to operate in near real time on a very huge data set, but of course the speed of the operation depends on the implementation of the crunching logic. So, in order to have a fast op (read: low latency and high throughput) we need a NoSQL DB and near-real-time processing/crunching capabilities.

Now let's touch upon some areas where bigdata/NoSQL can have a big impact:

- e-Learning is one of the classic examples. I was working with an application which had a lot of custom courses, exams and associated media for students registering to take the course. It was designed with the rigidity of an RDBMS, but in retrospect I feel that this is a good candidate for a NoSQL implementation.
- Banks and commercial institutions are already implementing big data in a lot of ways, and fraud monitoring agencies and companies rely on the processing capabilities of the bigdata stack to do transaction analysis in near real time. The transaction data still goes to an RDBMS system, but a lot of other data is now being recorded in NoSQL databases for trend analysis and, simply put, faster access/look-ups.
- Content Delivery Networks are also using the bigdata stack for optimizing web app performance. Citibank has such an implementation, where the application renders out of a content cache which uses MongoDB as storage. There can be a custom cache controller written over the DB to achieve something like this.
- Bioinformatic and cheminformatic systems can also leverage NoSQL databases for faster responses. I happened to work with the industry leader Accelrys Inc. in chem- and bioinformatics, and there were a few applications that I saw could definitely benefit from the bigdata stack. Some of their products can also use graph databases, especially with the development of the Accelrys Enterprise Platform (AEP).
- Large scale analytic processes and applications are the classic use case of a bigdata/NoSQL stack. Meteorological systems, trade analysis systems and logistics systems are places where we can use the bigdata stack, and I am sure it is being used in some places. These systems need near-real-time analytics and also require data trends and reports over large data sets and over a long period of time, and that is where the bigdata stack can help.

Lastly, I would like to close the post with a discussion that I had with a peer about an example of having a huge amount of data. Remember Garry Kasparov, who defeated Deep Blue and was, a year later, defeated by Deep Blue's successor? We concluded that the later Deep Blue won not because it was faster and better, but because it had a bigger data set and more crunching ability than its predecessor. So, over a period of time, it's the high volume of data that will win out over a well-written, crafty algorithm.

Where does Front-End Engineering exactly lie in the scope of technology?

almost 4 years ago | Priya Ranjan Singh: thisText

I started to gather my answer in a top-down approach, starting at the root - Computer Science and its immediate section - Applied Computer Science (^1). Applied Computer Science aims at identifying certain Computer Science concepts that can be used directly in solving real world problems. It is further divided into 11 different concepts as shown below.

Applied Computer Science
├ Artificial intelligence
├ Computer architecture and engineering
├ Computer graphics and visualization
├ Computer security and cryptography
├ Computational science
├ Computer networks
├ Concurrent, parallel and distributed systems
├ Databases and information retrieval
├ Health informatics
├ Information science
└ Software engineering

The only sections where I think front-end plays a role are mostly the areas of Information Science and Software Engineering. These two concepts are vast at this level, but this is so much separated from the rest. This post is still in the process of being completed in terms of having a clear position for front-end engineering in technology.

[edit: 08/18/2013] Software engineering, because front-end has quite a number of languages (and many more being built over them) on its belt, fully capable of building functional software. Front-end projects run through all the known concepts like waterfall and agile models, and have mature sub-disciplines of software engineering like software design, testing and quality assurance. Though the front-end technologies under the software engineering umbrella are not where the magic is; it's Information Science. Information Science is an interdisciplinary field where computer science plays only a part. It applies computer science concepts to the lifecycle of information until it is consumed. As information reaches the end of its lifecycle, just before it is consumed, it is organised in a form and delivered in a way that is intuitive to people.

The way I see it, ever since the computer was invented, the Internet has been the most accelerating driving force for civilisation. It has connected the world in a way that there is no going back from. All of a sudden, information took over everything else in the field of computers. Content took priority over implementation details, and websites became the new books, distributed to the rest of the world faster than any communication medium seen before. Browsers ran the websites and, with ever more devices amongst us, a set of technologies was born that dealt with a new problem - how should information be delivered to people? We can call this new problem, and its ongoing solution, Front-End Engineering. It seems to carry a decent purpose along with it. One of the reasons I get up every day.

1. Computer Science on Wikipedia, http://en.wikipedia.org/wiki/Computer_science

Windows 8.1 Preview install: error 0x800705AA – 0x2000C

almost 4 years ago | Kristof Mattei: Kristof's blog

Yesterday I was trying to upgrade a VM to Windows 8.1. The VM had Windows 8 on it. The host software I used was Hyper-V from Windows 8.1 Preview. By itself the VM worked fine, and I was able to … Continue reading → The post Windows 8.1 Preview install: error 0x800705AA – 0x2000C appeared first on Kristof's blog.

An Epistle and Warm Welcome to New Learners of Web Development and Software Engineering

almost 4 years ago | Christian Lilley: UI Guy

Hi! <Waves.> This is a cool, dynamic, super-rewarding field to live and work in. And we need all the new talent we can get. But there are certain predictable obstacles you’re going to encounter, and I want to prepare you for some of them, so you stick around with us. What you’re going to encounter […]


Makers Go Pro

almost 4 years ago | Sven Kräuter: makingthingshappen blog.

If you've met Jule & me lately and talked about the #internetofthings, chances are high you will have heard the term "Makers Go Pro". Here is one example of a pro version of a tinker idea: a real life facebook likes counter.

Real life facebook by the numbers. Source: smiirl.com

So what are the differences between pro and tinker projects? In my opinion one aspect is the quality of the hardware in terms of tech and, in most cases more striking, design.

Rather tech focussed approach. Source: skolti.com

Of course there are also tinker projects that combine technology and design. This mainly depends on your own abilities and your network. If you aren't too much into package design, perhaps there is somebody in your network who is?

Real life facebook interaction. Source: makezine.com

What I see as the main difference between tinkering and pro maker artifacts is the ability to produce in large quantities, and to be connected to other makers or manufacturers that are able to do so.

Rat Pack IOT's circuit board. Source: pics.makingthingshappen.de

There is a rising economy of services that provide just that - hardware as a service, if you like. Send your CAD files and let them produce your packaging, or, as in the example above, let your circuit board designs be produced in professional quality. In addition, put the plans for your circuit board out as open source and enable everybody to reproduce it. I'm excited about all the possibilities we already have and guess we have some quite interesting times ahead. What we need most to be able to take the next step towards the much quoted next industrial revolution is a way of collaborating between all the different fields of a maker project, be it tinkering or corporate work. A thought to be discussed. So feel free to go ahead & tell us your thoughts on the subject. We're curious!

Never Hit Reload Again

almost 4 years ago | Rocky Jaiswal: Still Learning

Imagine a nice world where you have a dual monitor setup for web development. On one monitor you have your favorite editor open and in another the browser, even in a less than perfect world if you have a single screen with a good resolution you can have two windows open side by side. Heck, ...

Examples In Open Source JavaScript Projects

almost 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

Many of us use Github, or something similar, for hosting our public or private projects. So do I. There is something, though, that is often annoying about them: the examples.