Who is talking?

Archive

First (?!) Real Angular.js Book Hitting the Streets Tomorrow

about 4 years ago | Christian Lilley: UI Guy

I’d heard rumors of such things, but while looking at some other technical books tonight, noticed that O’Reilly is imminently publishing “AngularJS”, by Brad Green & Shyam Seshadri. Congrats, guys! And congrats, Angular team. Angular has always been a great product, but getting books out about it is a big deal. Devs will follow. According to […]

How to Cross-Post to Google+ From Your WordPress Blog… sorta’

about 4 years ago | Christian Lilley: UI Guy

It’s surprising that Google *still* won’t make it easier for people to integrate Plus into the broader social ecosystem. That’s OK. You can work around this. I’ve been looking for a good personal use-case for IFTTT since it launched, and thought I had one. Follow these instructions, and IFTTT will take any new posts from your […]

Must-Use Features & Hotkeys for Sublime Text; Sweet Video Tutorial

about 4 years ago | Christian Lilley: UI Guy

I’m a bit ashamed. I started using Sublime Text more than a year ago, but never got around to learning many of its best features. This is my penance. First, let’s be clear (and opinionated): if you develop for the web, you probably ought to be using Sublime. Sure, there are the folks who insist […]

Vega: the Answer To My DataViz Prayers?

about 4 years ago | Christian Lilley: UI Guy

The hot new thing in my office today is Vega, a library and “visualization grammar” for making D3 more declarative and potentially easier to use. What rocks my world about it is the ability to render to either SVG or Canvas. You can see that in action on the Vega Live Editor. Canvas is going […]

Recording for Pluralsight: what I learnt

about 4 years ago | Jimmy Skowronski: jimmy skowronski

A few months ago I finished recording my very first Pluralsight course, which was published in early February. The whole recording business took me a few long months, but in the end it was good fun. A few weeks ago I received my first royalty payment and, to my huge surprise, the amount was higher than I expected given the not-so-great performance of my course. Looks like I'm not as bad as I feared. Yes, it's a niche subject, but some people are apparently watching it. I stayed in this glorious state for the next couple of days, until Fritz asked me to help with transcripts. Soon after, my crystal palace was shattered into a million pieces and I realised I totally suck. All right, I'm fishing for compliments, as my wife would certainly say. Anyway, as with every good story, let's start from the beginning.

The idea: I will record a course

I don't really remember how the idea formed in my head, but at some point I started browsing the Pluralsight forum and the list of most-wanted topics. As it happened, OAuth was quite high on the list, along with a couple of other subjects. I made a shortlist and sent an email, quietly expecting to be rejected straight away. Deep down I think I felt I wasn't grown up enough to be a Pluralsight author. To my surprise, a few weeks later I received an email from Megan, the Acquisitions Editor, who asked me to prepare an audition recording. That was the very first time I actually had to record something, not counting a short podcast a very long time ago.

The audition video was actually quite tricky to do. I had to create a fully structured talk on a subject of my choice, with an introduction, some theory, a demo and a summary, and all that squeezed into just 10 minutes. I wish I had done some grok talks before. The most difficult part was not the content itself, but actually recording it. I spent hours in Camtasia processing my 10-minute clip, removing all the ymmms and emmms and making it as good as I could. In the end it took me a couple of weeks and multiple more or less successful attempts, but I finally had it. As it turned out, the audition was good enough and I was approved. Hey, we are in business!

Shortly after that, Megan and I started discussing subjects, deadlines, experience and so on. It turned out that OAuth had already been taken by Dominick Baier, but as I was talking a lot about PowerShell back then, we decided to go for that one. After we agreed on the scope, I had to create an outline and wait for approval. That was followed by a Skype call with the Editor in Chief, Fritz Onion, where we discussed this further, along with other potential topics. Shortly after that I received an email from Megan (I mean I was excited, not her) in which I was offered a contract. Without sharing any details, it consisted of a one-off fee and royalties. We set the deadline for November and I was good to go.

The first surprise, a silly one, was that I needed to get a microphone. And I mean a proper one, not a simple Logitech headset with a microphone. In my naïve thinking I expected my Logitech gaming set would do just fine. I was wrong, and I had to go shopping. A week or so later I had a sweet-looking Rode Podcaster with a shock mount, mounted on a tripod and standing on my desk. As practice would later show, this wasn't the best choice. At that moment, though, I was in heaven. I had professional-looking kit on my desk, felt empowered and excited, and I was ready to start recording.

Hard time

It was mid-August and I was very confident I would have it done and dusted by November. I was so wrong. There were many factors I could use as an excuse: holiday, workload, technical difficulties, neutrinos and cosmic radiation. The ugly truth is that I was moving forward so slowly I could barely notice any progress at all. Preparation for the first of four planned modules took me almost two months. I was fiddling with content, moving things back and forth, changing my mind ten times a day, not really knowing what to do or how to start. Finally, around the middle of October, I managed to break through that and started recording.

As expected, this was the most difficult part. First there was me vs. the microphone. The Rode Podcaster is a truly great piece of kit, but to get good-quality sound I had to keep my mouth just a few centimetres from it. I still don't know what I was doing wrong, but any time I moved further away than 1-2 inches I was just too quiet. Maybe I was oversensitive, but that's how it felt back then and I wanted to make it perfect. At the same time it was perfectly able to capture any noise that happened within a five-mile radius, including keyboard clicks. Funnily enough, the latter could be significantly reduced by putting a folded towel underneath the keyboard. Because I had only a small tripod, I had to put it in front of me and keep the keyboard behind it. That let me be close enough to the microphone and reduced the clicks, but I had to type in a really awkward position with my arms hugging the tripod. Now I know I should get either a desk-mounted arm or another microphone, or learn what I'm doing wrong.

Despite that, I started making some progress. By November I had my first module recorded... about twenty times over, and none of the takes was good enough. Finally I gave up and accepted the fact that I would never be satisfied. After exporting the slides so they could be easily saved as PDF (animations can be an issue) and adding the metadata document, my first module was "ready", so I sent it for review.

I'm on fire

While I was waiting for the review result, I used the flow I was in and kept working. I approached the next module in a slightly different way. Instead of recording sections, sometimes 10 minutes long, as a single clip, I decided to chunk them. Using some great guidelines from the authoring kit, I created a table where I decomposed each section into smaller chunks, each no longer than 2 minutes. It was initially quite difficult to keep that organised, but I had much less content to repeat if I wanted to re-record one clip. After the whole module was ready, I used Camtasia to clean it up and glue all the clips together into longer sections. It took me a whole sleepless weekend to record the second module. I think in the end I re-recorded every single clip at least once. Never mind that, I had my second module ready for review. The next module took me just one day and half a night. I was on fire, and finally, sometime around the New Year, I was done.

Now I only had to provide long and short bios and descriptions, write assessment questions, and clean up the source files. Creating assessment questions was actually a lot of fun. Source files had to be assigned to each module, which was annoying as I tend to reuse them; it took me a while to clarify what I used and where. Next time I shall be more careful. All that was left was waiting for the Managing Technical Editor, Mike Woodring, to do the technical review.

It's alive!

By the end of January all was done and ready. Surprisingly, I didn't get any feedback from the reviewers. I've since been told that this is pretty common, and that feedback is often only given when something is wrong. I would still love to see some, though. Mike? In January I was granted access to the private Yammer site, where I joined the Pluralsight authors' family, and finally, on the 15th of February, my course was published. Soon after, I was given access to a special website where I could track how the course performs and what the calculated royalty is. The numbers were not very clear initially, but after the first quarter I learnt how to read them.

A few weeks later I was asked to help with the transcript. I think this was the best learning experience since I started. I had to listen to the whole course and check all its content, line by line, sentence by sentence. Only then did I realise how many mistakes I had made and how bad I sometimes sound. That was a really great lesson.

Overall I'm very happy I did it. Not because of the money, but because of myself and my self-esteem. I've learnt that recording is far more difficult than speaking to an audience at a conference. An audience will always give me feedback, and I can see their reaction to what I'm saying. There is none of that when recording. I was spending hour after hour recording, listening and re-recording, over and over again. Every single mistake in fact damages the whole clip. I won't count how many times I stopped in the middle of a sentence, deleted the clip and started again. It was sometimes very, very frustrating. Despite that, it's really a lot of fun.

Would I do it again? Yes I would. In fact, I've already committed to three new courses. I will try some new techniques other authors are sharing on Yammer. Most likely I will still keep recording in very small chunks, but I may actually write the whole transcript upfront; I found it to be a great check of whether what I'm saying actually makes any sense. I may also follow the popular method of recording the video first and then adding narration to it. I will need to experiment and find out how I can improve. I will also need to figure out what to do with the microphone. And I can't miss next year's summit. In general, I would recommend this experience to anyone who is willing to try. It's really great.

Your opinion matters

That was my story, but I did it for you. The course has been viewed by over 500 people since it was published. Wherever and whoever you are, I really want to know your opinion, even if it is a bad one. I really mean it. With your feedback I can do better, I can learn and improve, but I need your help. So if you are out there, please let me know, and don't be a stranger.

Book Review – PhoneGap 2.x Mobile Application Development

about 4 years ago | Niraj Bhandari: Technology Product Management

It was about two weeks back when Kraig Lewis of packtpub reached out to me to do a review of […]

Skyline Algorithm - A Binary Tree Approach

about 4 years ago | Shadab Ahmed: Shadab's Blog

If you're into algorithms, you must have heard of this puzzle:

Drawing the Skyline: A number of buildings are visible from a point. Each building is a rectangle, and the bottom of each building lies on a fixed line. A building is specified using a triple (Left, Height, Right). One building may partly obstruct another, as shown below: [figure: skyline] The skyline is the list of coordinates and corresponding heights of what is visible. For example, the skyline of the buildings on the left in the figure above is given in the figure on the right.

Example input: (1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25), (19,18,22), (23,13,29), (24,4,28)
Example output: 1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0

This puzzle is particularly popular in academia and also as an interview question. If you google it, you will find many research papers as well.

Skyline Tree

Now I am presenting a binary tree solution: the Skyline Tree. This tree is very similar to a tree I created earlier, the Range Count Tree. Here it goes: as input we are given a set of triples containing where a building starts, where it ends, and its height. We use this input to build a binary search tree in such a way that the nodes in the tree represent the maximum height for a given range of start and end coordinates. Each node in the tree has the attributes range_start, range_end and value (height). We just follow these rules when creating the tree:

1. Each parent has a greater value (height) than any of its children.
2. When a child is being inserted at a node, it is added to the left subtree if the range of the child lies completely to the left of the node's range, and vice versa for the right.
3. If the incoming range intersects with the current node's range, we split the incoming range and then add the split parts as children of the node. For example, if we already have 4,5 as an existing range and the incoming range is 2,8, it becomes:

      4,5
     /   \
   2,4   5,8

This is done because the parent 4,5 has a greater value (height) than the range 2,8, so only the tallest portions are added to the tree for the ranges of these nodes.

Now, how do we ensure that a parent is taller than its children? We simply insert nodes into the Skyline Tree sorted by their height. Time for pretty pictures:

Input: (1,11,5), (2,6,7), (3,13,9), (12,7,16)
[figure: partial Skyline Tree]

Starting from the left, you can see that the range 1 to 3 has maximum height 11, 3 to 9 has 13, and 12 to 16 has 7. Now let's see the tree for the full input:

Input: (1,11,5), (2,6,7), (3,13,9), (12,7,16), (14,3,25), (19,18,22), (23,13,29), (24,4,28)
[figure: full Skyline Tree]

The code for SkylineTree is here (gist), and here is the code to solve the puzzle using the Skyline Tree:

require './skyline_tree'

input = [[1,11,5], [2,6,7], [3,13,9], [12,7,16], [14,3,25], [19,18,22], [23,13,29], [24,4,28]]
input.sort!{|x,y| y[1] <=> x[1]}

stree = SkylineTree.new
input.each do |start_range, value, end_range|
  stree.add [start_range, end_range], value
end
stree.print_skyline

# output
# 1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0

You can also run the code on CodeBunk. The tree we built can be used for similar problems, like computing the area under the skyline.
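The SkylineTree class itself lives behind the gist link above. Purely as a sketch of the three insertion rules just described (class and method names here are mine, not necessarily the gist's), it could look like this:

class SkylineNode
  attr_accessor :range_start, :range_end, :value, :left, :right

  def initialize(range_start, range_end, value)
    @range_start, @range_end, @value = range_start, range_end, value
  end
end

class SkylineTree
  # Call add in decreasing order of height, so every parent is taller
  # than its children (rule 1).
  def add(range, height)
    s, e = range
    return if s >= e
    if @root
      insert(@root, s, e, height)
    else
      @root = SkylineNode.new(s, e, height)
    end
  end

  # In-order traversal yields the visible strips sorted by coordinate.
  def strips
    acc = []
    in_order(@root, acc)
    acc
  end

  private

  def insert(node, s, e, h)
    return if s >= e
    if e <= node.range_start
      # Entirely to the left of this node's range (rule 2).
      node.left ? insert(node.left, s, e, h) : (node.left = SkylineNode.new(s, e, h))
    elsif s >= node.range_end
      # Entirely to the right (rule 2).
      node.right ? insert(node.right, s, e, h) : (node.right = SkylineNode.new(s, e, h))
    else
      # Overlaps a taller node: split and keep only the overhanging parts (rule 3).
      insert(node, s, node.range_start, h)  # left leftover (no-op if empty)
      insert(node, node.range_end, e, h)    # right leftover (no-op if empty)
    end
  end

  def in_order(node, acc)
    return unless node
    in_order(node.left, acc)
    acc << [node.range_start, node.range_end, node.value]
    in_order(node.right, acc)
  end
end

tree = SkylineTree.new
[[1, 11, 5], [2, 6, 7], [3, 13, 9], [12, 7, 16]]
  .sort_by { |_, h, _| -h }
  .each { |l, h, r| tree.add([l, r], h) }
p tree.strips  # => [[1, 3, 11], [3, 9, 13], [12, 16, 7]]

Because insertion happens in decreasing height order, an overlapping node is always taller than the incoming range, so rule 3 only ever keeps the parts sticking out on either side.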

The ultimate git bash prompt

about 4 years ago | Jimmy Skowronski: jimmy skowronski

I've been playing with the Git bash prompt, trying to make it nicer and, let's be honest here, more fun. After some digging I found one that looks quite cool: https://github.com/magicmonty/bash-git-prompt. It needed a few small modifications to make it useful in a Windows environment, and here is the effect.

Here I'm on a perfectly healthy master, just after a push: [screenshot path1] Then I modified a file, and the prompt changed to [screenshot path2], showing me that there is one modified file. After a fetch I'm being told that I'm 1 commit ahead but 5 behind: [screenshot path3] So I do a rebase, and [screenshot path4] no conflicts, and I'm nicely one commit ahead. Cool, isn't it?

The only drawback is that it takes a few hundred milliseconds longer for the next prompt to appear. This is because the Python script is executed every time and runs a couple of git commands. It's noticeable initially, but after a couple of hours I got used to it.

Setup

1. Install Python and add the install folder (default C:\Python27) to the PATH environment variable.
2. In C:\Users\{your user name}\ create a folder named .bash and drop the gitprompt.sh and gitstatus.py files there (bash.zip).
3. In the .bashrc file add: source ~/.bash/gitprompt.sh. If you don't have that file, simply create an empty text file and call it .bashrc.

Enjoy! You can easily customise the prompt by editing the gitprompt.sh file. I've added all the colour definitions for your convenience. Let me know if you are going to give it a go. HTH

Decoding Big Bazaar’s Profit Club

about 4 years ago | Niraj Bhandari: Technology Product Management

Last week I went to the newly opened Big Bazaar store close to our home and was pleasantly surprised with a […]

Firebase And Ember.js

about 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

I've been playing a bit with these two last weekend and let me share a few thoughts on that.

TowTruck

about 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

Mozilla has released a new collaboration tool named TowTruck. It has a few interesting features, like cursor mirroring, collaboratively editing forms and text, browsing through the site together, and both text and real-time voice chat.

The Reports of WebSQL’s Death Have Been Greatly Exaggerated

about 4 years ago | Christian Lilley: UI Guy

Every time I poke around the interwebs looking for docs on various forms of offline storage, I find folks saying that WebSQL is ‘dead’, or more often: ‘deprecated’. Not so. It’s just that every browser that implemented WebSQL used SQLite, so work on a formal standard stopped: The specification reached an impasse: all interested implementors […]

Parenthesis Permutation

about 4 years ago | Shadab Ahmed: Shadab's Blog

Another interesting puzzle: Parenthesis Permutation.

Given N pairs of parentheses, write an algorithm that prints out all permutations possible with those parentheses, given that the parentheses are in correct order (i.e. every open parenthesis is matched with a closed one). For example, N = 3 should give:

()()() (()()) ()(()) (())() ((()))

There are recursive solutions for this that you can find just by googling. I thought of a non-recursive solution.

The basic idea is to construct a binary tree, where the left child has one extra '(' and the right child one extra ')' compared to the current node. Each node has a weight = number of '(' minus number of ')'. We start with a root node '(' at level 0 and create the tree such that level 2N-1 will contain the permutations for N pairs. When constructing the tree, we create a child only if:

1. The weight of the incoming child is not less than 0.
2. The weight is less than or equal to the number of levels still to be created, so that we have enough parentheses left to balance the string.

Let's take a look at the tree created (string -> weight) for N = 3: [figure: tree for N = 3] An optimization: the last level is not required, since all that is added there is a ')'.

Now the code. In fact it doesn't use a tree at all, just a linked list traversed in preorder style, using the child-creation rules above. I could have used an array instead of a linked list, but it would suffer from frequent list expansion, slowing everything down. The code to generate the tree picture is here.

In fact, the numbers of permutations for N pairs form a series called the Catalan numbers. It goes like this:

N=1 P=1
N=2 P=2
N=3 P=5
N=4 P=14
N=5 P=42
N=6 P=132
N=7 P=429
N=8 P=1430
N=9 P=4862
N=10 P=16796
N=11 P=58786
N=12 P=208012
N=13 P=742900
N=14 P=2674440
N=15 P=9694845
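The solution code is behind the link above. As a rough sketch of the same idea, growing candidate strings level by level and applying the two child-creation rules, with a flat list standing in for the tree/linked list, it could look like this (names are mine, not the author's):

# Grow all candidate strings level by level; weight = count of '(' - count of ')'.
def paren_permutations(n)
  level = [['(', 1]]               # the root node from the article: '(' at level 0
  (2 * n - 1).times do |i|
    remaining = 2 * n - 2 - i      # characters still to place after this level
    level = level.flat_map do |str, weight|
      children = []
      # Left child adds '('; allowed only while it can still be balanced (rule 2).
      children << [str + '(', weight + 1] if weight + 1 <= remaining
      # Right child adds ')'; allowed only while the weight stays >= 0 (rule 1).
      children << [str + ')', weight - 1] if weight - 1 >= 0
      children
    end
  end
  level.map(&:first)
end

puts paren_permutations(3)
# prints the 5 balanced strings for N = 3 (the Catalan number C(3) = 5)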

Angular vs. Ember: Does (Code) Beauty Matter?

about 4 years ago | Christian Lilley: UI Guy

I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong.  -Richard Buckminster Fuller For some reason, this quote immediately made me think of Angular vs. Ember, and the debate about inline templates. Have a look at […]

Capturing stats for slow running queries

about 4 years ago | Jimmy Skowronski: jimmy skowronski

My Story

Recently I had some problems with a query that took a very long time to execute: over 20 minutes to do 150 inserts, that's a whopping 9-ish seconds per insert. There was nothing I could see straight away that would shed any light on the problem. The code itself was very simple: just loop through a collection and do an insert for each item. As each command opened and closed its own connection, I thought that this might be the issue. Fifteen minutes and a few lines of code later, all inserts used the same connection, opened once. No joy; the insert time only dropped to just above 8 seconds per insert. Something was certainly wrong there, but I had no idea what until I found this great post by Paul S. Randal about wait stats (http://www.sqlskills.com/blogs/paul/capturing-wait-stats-for-a-single-operation). That gave me hope of actually finding where the problem sits. Unfortunately, his approach is suited to running complex queries in the SQL Explorer. That wouldn't work for me, because my query relied on many others and was part of a complex data-processing routine. I had to find a way to collect stats while running my code. An idea started forming in my head: if I could modify my query to capture stats, run the application and then see what was collected, it would be ace! A few minutes later I had the solution. After half a day of tearing my way through the unknown land of wait stats, events and complex SQL, I finally got my stats. Here is how.

Getting stats

Repeating from Paul's post, here is what you have to run before your troublesome query:

IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits')
    DROP EVENT SESSION MonitorWaits ON SERVER;

CREATE EVENT SESSION MonitorWaits ON SERVER
ADD EVENT sqlos.wait_info (WHERE sqlserver.session_id = 1 /* session id here */)
ADD TARGET package0.asynchronous_file_target
    (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;

and after:

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;

The first snippet creates an extended event session, dropping one if it already exists. The session has one event, described in the sys.dm_xe_objects table as "Information regarding waits in SQLOS". The session also has an asynchronous file target associated with it, which writes captured events to the specified files. You can name and place those files wherever you want, but the folder you use must exist before you execute the query. With that in place, anything you execute between those two statements will be nicely captured.

Reading the first snippet, however, you will see there is a session_id parameter. This is a bit problematic. It's very easy to get the session ID when you are using the SQL Explorer, but when your application connects to the database it can be anything. The rescue lies in the sys.sysprocesses table, which can give you your session ID, except it's not as easy as it seems. The only way to find the ID is to find the session that has the username you are using in the loginame column. Normally, however, you will see many sessions created from your application and have no idea which one to pick. To solve that, create a separate user that is used only by the connection you open to run your query. That way you will have only one record in sys.sysprocesses, and that will give you the session ID. Code time!

 1: var perfConnString = "data source=(local);Initial Catalog=MyDb;User Id=perf_test;Password=test;MultipleActiveResultSets=True";
 2: using (var connection = new SqlConnection(perfConnString))
 3: {
 4:     connection.Open();
 5:     var sidCmd = new SqlCommand("SELECT TOP 1 spid FROM sys.sysprocesses WHERE loginame = 'perf_test' ORDER BY last_batch DESC", connection);
 6:     var sessionId = sidCmd.ExecuteScalar();
 7:
 8:     string startCommandText = @"IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits') DROP EVENT SESSION MonitorWaits ON SERVER;
 9:
10:     CREATE EVENT SESSION MonitorWaits ON SERVER
11:     ADD EVENT sqlos.wait_info (WHERE sqlserver.session_id = " + sessionId.ToString() + @")
12:     ADD TARGET package0.asynchronous_file_target (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
13:     WITH (max_dispatch_latency = 1 seconds);
14:
15:     ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;";
16:
17:     string endCommandText = @"ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;";
18:
19:     var startCommand = new SqlCommand(startCommandText, connection);
20:     startCommand.ExecuteNonQuery();
21:
22:     // Run your commands here
23:
24:     var endCommand = new SqlCommand(endCommandText, connection);
25:     endCommand.ExecuteNonQuery();
26: }

There you are. Lines 5-6 get the session ID that is later used in line 11. If you are using transactions in your query, you may want to comment them out as they can create some issues. Run your application and do whatever you have to, but try to run the problematic area only once: every time you execute the above, new event data is added to the files, increasing the numbers and potentially clouding the outcome.

Checking results

Once you are done, head to the SQL Explorer. The first thing you can run is:

SELECT COUNT (*)
FROM sys.fn_xe_file_target_read_file
    ('C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null);

which will show you how many events you've collected. They can go into the thousands! The easiest way to see the wait stats is to run this:

-- Create intermediate temp table for raw event data
CREATE TABLE #RawEventData (
    Rowid INT IDENTITY PRIMARY KEY,
    event_data XML);
GO

-- Read the file data into the intermediate temp table
INSERT INTO #RawEventData (event_data)
SELECT CAST (event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file (
    'C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null);
GO

SELECT
    waits.[Wait Type],
    COUNT (*) AS [Wait Count],
    SUM (waits.[Duration]) AS [Total Wait Time (ms)],
    SUM (waits.[Duration]) - SUM (waits.[Signal Duration]) AS [Total Resource Wait Time (ms)],
    SUM (waits.[Signal Duration]) AS [Total Signal Wait Time (ms)]
FROM
    (SELECT
        event_data.value ('(/event/@timestamp)[1]', 'DATETIME') AS [Time],
        event_data.value ('(/event/data[@name=''wait_type'']/text)[1]', 'VARCHAR(100)') AS [Wait Type],
        event_data.value ('(/event/data[@name=''opcode'']/text)[1]', 'VARCHAR(100)') AS [Op],
        event_data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT') AS [Duration],
        event_data.value ('(/event/data[@name=''signal_duration'']/value)[1]', 'BIGINT') AS [Signal Duration]
     FROM #RawEventData) AS waits
WHERE waits.[Op] = 'End'
GROUP BY waits.[Wait Type]
ORDER BY [Total Wait Time (ms)] DESC;
GO

-- Cleanup
DROP TABLE #RawEventData;
GO

That will give you results like this:

Wait Type          Wait Count  Total Wait Time (ms)  Total Resource Wait Time (ms)  Total Signal Wait Time (ms)
-----------------  ----------  --------------------  -----------------------------  ---------------------------
NETWORK_IO         4           0                     0                              0
TRANSACTION_MUTEX  2           0                     0                              0
WRITELOG           4           4558147               3985741                        12

What was wrong?

As you can see, in my case the problem was caused by the WRITELOG wait type. Checking MSDN (http://msdn.microsoft.com/en-us/library/ms179984.aspx), I could see that WRITELOG is defined as: "Occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits." That speaks for itself: the problem was in the transaction log. A bit of reading to cover gaps in my SQL knowledge revealed that if there is no explicit transaction, each statement causes SQL Server to write to the transaction log. As the HDD wasn't that great and the log file itself was over 17GB, it meant that SQL Server had to write to a huge file 150 times over. The solution was simple then: I had to reduce transaction log activity. Because one can't switch it off totally, I decided to wrap my query in BEGIN TRAN ... COMMIT TRAN. Bingo! That causes the transaction log to be written only once, at the end of the 150 inserts. The effect was a reduction to 1.2 seconds per insert. How cool is that?

What else?

That's not all you can do. There are many more stats you can collect in this way. If you run this query:

SELECT * FROM sys.dm_xe_objects WHERE object_type = 'event';

you will see well over 200 different events you can use. For illustration, inspired by another of Paul's posts (http://www.sqlskills.com/blogs/paul/tracking-expensive-queries-with-extended-events-in-sql-2008/), I used the following code to capture the query execution plan:

var perfConnString = "data source=(local);Initial Catalog=MyDb;User Id=perf_test;Password=test;MultipleActiveResultSets=True";
using (var connection = new SqlConnection(perfConnString))
{
    connection.Open();
    var sidCmd = new SqlCommand("SELECT DB_ID()", connection);
    var sessionId = sidCmd.ExecuteScalar();

    string startCommandText = @"IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits') DROP EVENT SESSION MonitorWaits ON SERVER;

CREATE EVENT SESSION MonitorWaits ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.plan_handle)
     WHERE sqlserver.database_id = " + sessionId + @" /*DBID*/)
ADD TARGET package0.asynchronous_file_target
    (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;";

    string endCommandText = @"ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;";

    var startCommand = new SqlCommand(startCommandText, connection);
    startCommand.ExecuteNonQuery();

    var readCmd = new SqlCommand("your query here", connection);
    var reader = readCmd.ExecuteReader();
    while (reader.Read()) { }

    var endCommand = new SqlCommand(endCommandText, connection);
    endCommand.ExecuteNonQuery();
}

After the test, I went to the SQL Explorer and ran:

SELECT
    data.value ('(/event[@name=''sql_statement_completed'']/@timestamp)[1]', 'DATETIME') AS [Time],
    data.value ('(/event/data[@name=''cpu'']/value)[1]', 'INT') AS [CPU (ms)],
    CONVERT (FLOAT, data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT')) / 1000000 AS [Duration (s)],
    data.value ('(/event/action[@name=''sql_text'']/value)[1]', 'VARCHAR(MAX)') AS [SQL Statement],
    SUBSTRING (data.value ('(/event/action[@name=''plan_handle'']/value)[1]', 'VARCHAR(100)'), 15, 50) AS [Plan Handle]
FROM (SELECT CONVERT (XML, event_data) AS data
      FROM sys.fn_xe_file_target_read_file
          ('C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null)) entries
ORDER BY [Time] DESC;

which gave me this:

Time                     CPU (ms)  Duration (s)  SQL Statement                                                                      Plan Handle
-----------------------  --------  ------------  ---------------------------------------------------------------------------------  --------------------------------------------------
2013-04-10 13:51:13.143  0         7.3E-05       ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;                            0x06000700912C723140A1B487000000000000000000000000
2013-04-10 13:51:11.523  124       5.695983      SELECT * FROM SurveyPublicationJob j LEFT OUTER JOIN Surveys s ON j.SurveyId=s.Id   0x060007005CCBB80B40A14782000000000000000000000000

So my query took over 5 seconds to run. Running this:

SELECT [query_plan] FROM sys.dm_exec_query_plan (0x060007005CCBB80B40A14782000000000000000000000000);

and clicking the link that is returned gave me the execution plan for the query. [screenshot: execution plan]

That's pretty cool in my opinion. Happy querying! HTH

Ruby’s Missing Data Structure

about 4 years ago | Pat Shaughnessy: Pat Shaughnessy

Have you ever noticed Ruby doesn’t include support for linked lists? Most computer science textbooks are filled with algorithms, examples and exercises based on linked lists: inserting or removing elements, sorting lists, reversing lists, etc. Strangely, however, there is no linked list object in Ruby…
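Pat's full article isn't reproduced in this excerpt, but to make the missing structure concrete, here is a minimal hand-rolled singly linked list in Ruby - a sketch of my own, not code from the article:

Node = Struct.new(:value, :next_node)

class LinkedList
  include Enumerable

  def initialize
    @head = nil
  end

  # Insert at the front: O(1), the classic linked-list selling point.
  def prepend(value)
    @head = Node.new(value, @head)
    self
  end

  # Walking the nodes gives us map, sort, to_a, etc. via Enumerable.
  def each
    node = @head
    while node
      yield node.value
      node = node.next_node
    end
  end

  # Reversal by re-linking nodes, another textbook exercise.
  def reverse!
    prev = nil
    node = @head
    while node
      node.next_node, prev, node = prev, node, node.next_node
    end
    @head = prev
    self
  end
end

list = LinkedList.new.prepend(3).prepend(2).prepend(1)
p list.to_a           # => [1, 2, 3]
p list.reverse!.to_a  # => [3, 2, 1]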

Firepad - Open Source Collaborative Text Editing

about 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

As remote work gets more and more attention, the need for good teamwork tools keeps growing. Here is one that might help remote collaboration in certain cases.

Firebase - Scalable Real-time Backend

about 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

Build apps fast without managing servers.

Encrypting .NET configuration file

about 4 years ago | Jimmy Skowronski: jimmy skowronski

Did you know that .NET allows encrypting any section of the .config file? It can be a section within the web.config or app.config files, or in any linked file such as confirmit.config. It's a very useful feature if you want to protect passwords or other secret data. Best of all, your application will work unaffected if you encrypt the configuration file; no code change is required. You can work in your development environment with plain configuration and encrypt it on release, and .NET will decrypt the configuration transparently. This functionality is basically provided by ASP.NET, but you can use it outside the web environment as well. Recently I had to do this at work and, as I hadn't done it for ages, it took me a moment to figure out all the bits and bobs. For your entertainment and my memory, here is how to do it.

Creating an encryption key file

First, you will need to create a new RSA key container to use later. Simply use the following command:

aspnet_regiis.exe -pc "MyKey" -exp

MyKey: Name of the key container. This is used later in the configProtectedData element.
-exp: Creates the key as exportable. This option is required if you want to share the same key between multiple servers.

Now you can export the key to an XML file using the following command:

aspnet_regiis.exe -px "MyKey" keys.xml -pri

MyKey: Name of the key container.
keys.xml: File the key will be exported to.
-pri: Ensures the private key is exported as well. This is required to encrypt configuration sections.

Encrypting in a web application

I will start with the ASP.NET application, where things are nice and easy. First you need an XML file with the encryption key you generated just a moment ago. You can also use a key file provided by someone else. The encryption process requires two preparation steps if the configuration section you are encrypting is a custom section declared in the configSections element of the .config. Firstly, aspnet_regiis.exe has to be able to load the assembly that defines the section. You can either put it in the GAC or simply copy it to the same folder as aspnet_regiis.exe; if you fail to do so, you will get an error. After you finish encrypting you can delete this assembly. Secondly, you need to ensure the section declaration contains both the type and assembly name, i.e.:

<section name="mySection" type="MyApp.MySection, MyApp.Configuration" />

Having that done, you are ready to roll. The first step is to add the following section to your config file:

<configProtectedData>
  <providers>
    <add keyContainerName="CustomKeys"
         useMachineContainer="true"
         description="Uses RsaCryptoServiceProvider to encrypt and decrypt"
         name="CustomProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</configProtectedData>

The names you specify in the keyContainerName and name attributes will be used later. If you generated your own key you can skip the next command; otherwise, if you received the key from someone else, you need to import it. The command below imports the key into the container:

aspnet_regiis.exe -pi CustomKeys keys.xml

CustomKeys: The key container name as defined in configProtectedData above.
keys.xml: Path to the XML file with the encryption key.

With the key in the container, you can now start encrypting. To encrypt a section, use this command:

aspnet_regiis.exe -pef section -site SiteName -app /VirtualPath -prov CustomProvider

section: The name of the section element you want to encrypt.
SiteName: The site name, if the application is configured in a specific IIS site. This parameter can be omitted if the application is in the IIS default web site.
/VirtualPath: Virtual path to the website. It must start with the "/" character.
CustomProvider: Name of the provider as defined in the configProtectedData element above.

Finally, you need to grant the application pool user permissions to the specific key container. You can do that using:

aspnet_regiis.exe -pa "CustomKeys" UserName

CustomKeys: The key container name as defined in the configProtectedData element.
UserName: The web site application pool user name.

And that's it: your config section is now encrypted. The cool thing is that aspnet_regiis.exe is clever enough to figure out whether the section is in web.config or some side .config file. You can also encrypt all config files in a specific folder, and more. For more information see the MSDN documentation: http://msdn.microsoft.com/en-us/library/k6h9cz8h(v=vs.80).aspx and http://msdn.microsoft.com/en-us/library/53tyfkaw(v=vs.100).aspx.

What about non-web applications?

The above method only works for web applications. For all other applications you will have to use a workaround. Sometimes you may have a non-web application, e.g. a service, and a corresponding web application, and both may share certain parts of the configuration. In that case you can just make the required changes in the app.config and then copy the encrypted sections from your web application. The only thing left to do is to grant permissions to the key container, as your application will likely run under a different user than the IIS app pool. If you have a totally independent application, however, then your case is a bit more complex. What you can do is create an empty web application, copy all the configs over, encrypt them, and then copy them back to your application. A bit fiddly, but it should work.

Sharing keys between servers

You may also want to share the same key between multiple servers. This may be desirable in load-balancing scenarios where configuration is centrally propagated. In that case you should encrypt the required configuration sections on one machine and then import the encryption key on the other servers where the encrypted configuration will be used. To import the key file, copy the key XML file to each server and run the following command:

aspnet_regiis.exe -pi "MyKey" keys.xml

MyKey: Name of the key container, as used in the configProtectedData element.
keys.xml: File containing the encryption key.

Happy encrypting. HTH

My First Impression at #inspect2013 - RubyMotion Conference

about 4 years ago | Amit Kumar: toamitkumar's Code Blog

The first RubyMotion conference, #inspect2013, took place in Brussels, Belgium. It was a very well organized conference: lots of talented people, awesome speakers, good food and, yes, Belgian beer.

One Man Army - Laurent Sansonetti

Many thanks to Laurent Sansonetti (@lrz) for making it successful. The general gist of what happened there is below, along with my observations and opinions.

For people who don't know about RubyMotion (RM):
- http://www.rubymotion.com/developer-center/
- http://rubysource.com/laurent-sansonetti-on-rubymotion-internals/
- http://motioncasts.tv/
- vimeo.com/61255647

Observations/Opinions

The Community: Community plays a very important role in the success of open source software. Even though RM is commercially licensed, it has the blessing of the Ruby community. Rubyists (including me) who have tried Obj-C in the past have found it verbose and complex. We all embraced RubyMotion because of its simplicity and, more importantly, because it is built on a developer-friendly language: Ruby. At the conference I saw a lot of seasoned Obj-C developers who had tried RM and loved it. Now they are using it for full-fledged production applications. One of the big reasons for this is CocoaPods: they can still use their legacy Obj-C code and projects in a new RM application, and the build system takes care of linking and tying everything together during compilation. There is no need to re-invent the wheel; it utilizes the investment people have already made in the iOS world. This is an interesting phenomenon, because it is going to push the toolchain to the next level.

Contribution: In less than a year the RM community has grown exponentially, from a few Rubyists to thousands of iOS developers. There has been a tremendous contribution by people building RubyMotion gems/libraries/wrappers around verbose Obj-C code. The list is very big, but I should definitely mention rubymotion-wrappers.com. A lot of talks at the conference were about these libraries and wrappers.

Focus on testing: The Ruby community has successfully instilled the importance of testing code. Though RM comes bundled with a testing framework, it's hard to do UI testing on iOS, and Obj-C has suffered a lot because of this. At the same time, it would be unusual for Rubyists not to explore options to test their code on the iOS platform. At the conference I got to learn about some awesome frameworks, of which motion-calabash is really good: it gives you BDD-style testing similar to what we are familiar with from the Ruby/Rails world with Cucumber. You should definitely check out https://github.com/calabash/motion-calabash and its sample app. The one thing I would like to explore is the possibility of testing my HTML5 PhoneGap application using motion-calabash.

When I am developing for a mobile platform I miss Chrome Developer Tools a lot. Not any more. It blew my mind to see the demo by Colin Gray (@colinta) of Motion-Xray, an iOS inspector that runs inside your app, so you can debug and analyze from your device in real-world situations. It is amazing; go check it out.

Broader Reach: I was completely stunned to see a presentation at a technology event by a visually impaired speaker (I don't mean to hurt anyone; I had simply never seen such a thing happen, and it was a complete jaw-drop for me). Let me introduce Austin Seraphin (http://behindthecurtain.us/about/). He is an iOS developer and iOS accessibility expert, and he explained how the iPhone and accessibility changed his life. A completely new perspective. His slides are available if you are interested; I would urge you to have a look at them.

Trend and My Reaction: Laurent was the last to present at the conference - a noble decision, I would say. He talked about the roadmap and the future of RubyMotion. I see a focus shift towards making the toolchain more developer friendly. Some highlights:
- High-level debugging support: right now it uses GDB technology, which is pretty low-level.
- Documentation: more documentation is coming to make it easier for newbies.

I definitely feel RM is going to change the way we are used to doing native iOS development. So don't delay: hack and be awesome. Finally, here is my presentation (Building Interactive Data Visualization Charts) from the RubyMotion conference. Please share your thoughts. Enjoy!

What Time Is It?

about 4 years ago | Eduard Moldovan: eduardmoldovan.com - tech

Let CSS and JavaScript tell you.

Transform One String to Another

about 4 years ago | Shadab Ahmed: Shadab's Blog

Another interesting puzzle: Transform One String to Another.

Let S and T be strings and D a set of strings. You can say that S produces T if there exists a sequence of strings SEQ = [s0, s1, ..., sn-1] which meets these criteria:

1. s0 = S
2. sn-1 = T
3. All members of SEQ belong to the set D
4. Adjacent strings have the same length and differ in exactly one character

For example, given the set {"cat", "bar", "bat"}, you can say that "cat" produces "bar" via ["cat", "bat", "bar"]. Or, given the set {"cat", "red", "bat"}, you can say that "cat" does not produce "red".

Given a set D and two strings S and T, write a function to determine if S produces T. Assume that all characters are lowercase letters. If S does produce T, output the length of a shortest production sequence; otherwise, output -1.

I solved this using a graph. Not the most efficient graph, but it does have pretty pictures :) First let's take this sample data:

words = ['simple', 'dimple', 'pimple', 'fickle', 'sickle', 'simkle', 'kettle', 'settle']
start_word = 'simple'
end_word = 'fickle'

Now let's generate a graph where each node is a word and an edge records the index at which the two words differ and the character that is different. Next, using Dijkstra's algorithm, just find the shortest path between start_word and end_word. The code (which also generates the images using graphviz) is here.
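The linked solution isn't shown in the excerpt. As a sketch of the same graph idea - using plain breadth-first search rather than full Dijkstra, since every edge costs the same here, so BFS already finds the shortest sequence - it could look like this (names are mine):

def production_length(words, start_word, end_word)
  # Two words are adjacent if they have the same length and differ in
  # exactly one character.
  one_char_apart = lambda do |a, b|
    a.length == b.length && a.chars.zip(b.chars).count { |x, y| x != y } == 1
  end

  distance = { start_word => 1 }  # sequence length, counting both endpoints
  queue = [start_word]
  until queue.empty?
    current = queue.shift
    return distance[current] if current == end_word
    words.each do |w|
      next if distance.key?(w) || !one_char_apart.call(current, w)
      distance[w] = distance[current] + 1
      queue << w
    end
  end
  -1  # end_word cannot be produced
end

words = ['simple', 'dimple', 'pimple', 'fickle', 'sickle', 'simkle', 'kettle', 'settle']
puts production_length(words, 'simple', 'fickle')
# => 4, i.e. ["simple", "simkle", "sickle", "fickle"]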

Puppet in a rush

about 4 years ago | Rocky Jaiswal: Still Learning

It is a good time to be a programmer now. Barriers are disappearing and technology is evolving not only quickly but also in the right direction. I remember when I started 10 years back, it was common for a project to run for 12-18 months, with the first 3 months spent on discussions and so-called "requirement gathering" ...

How Clojure Babies are Made: What Leiningen Is

about 4 years ago | Daniel Higginbotham: Flying Machine Studios

"What the hell is Leiningen?" is a question you've probably overheard many times in your day-to-day life. You've probably even asked it yourself. Up until now we've described specific capabilities that Leiningen has and how those capabilities are implemented. It can build and run your app, it has a trampoline, and so forth. But it's time to take a step back and get a high-level understanding of what Leiningen is. It's time to stare deeply into Leiningen's eyes and say "I see you," like Jake Sully in that Avatar documentary. We'll do this by giving an overview of the non-coding related tasks you need to accomplish when building software. Next, we'll describe how Leiningen helps you accomplish these tasks and compare it to similar tools in Ruby. This post isn't as nitty-gritty as the previous posts in the Clojure Babies series, but it will help lay the groundwork for an upcoming post on packaging. Additionally, I hope it will clarify what a programming language artifact ecosystem is. This concept is often overlooked when teaching a programming language, and when it is covered it's not covered in a systematic way. Together, noble-chinned reader, we will remedy that situation. For our generation and all generations to come. Programming Language Artifact Ecosystems In order to become proficient at a language, you need to know much more than just its syntax and semantics. You need to familiarize yourself with the entire programming language ecosystem, which is comprised of everything you need in order to build working software in that language. It can be broken down into at least the following sub-ecosystems: The documentation ecosystem The developer community The development environment ecosystem (editor support) The artifact ecosystem Right now we only care about the artifact ecosystem. For our purposes, a programming artifact is a library or executable. Ruby gems, shell scripts, Java jars, shared libraries, and "HAL9000.exe" are all programming artifacts. An artifact ecosystem is the set of tools and services that allow you to do the following with regard to artifacts: Retrieve them from repositories Incorporate them in your own project, (possibly) resolving conflicts Build them Publish them to repositories Run them Tools are often layered on top of each other, one tool smoothing out the warts of the tools it wraps. For example, the following tools (and more) are part of the Ruby artifact ecosystem: Ruby Gems provides a package specification, the means to incorporate gems in your project, and the means to build and publish gems rubygems.org is a central repo for gems Bundler provides a layer on top of Ruby Gems, providing dependency resolution and gem retrieval Jeweler is one of many tools for easing the process of creating gemspecs and building gems. Other languages have their own tools. Java has Maven, PHP has Pear or whatever. Artifact management is a common need across languages. In previous Clojure Baby posts, we've seen that we can use Leiningen to build and run Clojure programs. It turns out that Leiningen also handles the remaining tasks - retrieving packages, incorporating them in your project, and publishing them. It's truly the Swiss Army Bazooka (I'm going to keep repeating that phrase until it catches on) of the Clojure artifact ecosystem. But why is it that in Ruby you need an entire constellation of tools, while in Clojure you only need one? 
Leiningen Is a Task Runner with Clojure Tasks Built In Leiningen is able to handle so many responsibilities because it is, at heart, a task runner. It just happens to come with an excellent set of built-in tasks for handling Clojure artifacts. (Incidentally, this is probably where Leiningen's name came from. "Leiningen Versus the Ants" is a short story where the protagonist fights ants. Ant is a Java build tool that evidently is unpleasant to use for Clojure builds.) By comparison, Ruby's Rake is also a task runner used by many of Ruby's artifact tools, but Rake provides no built-in tasks for working with Ruby artifacts. "Task runner" is a little bit ambiguous, so let's break it down. Ultimately, all Leiningen tasks are just Clojure functions. However, in previous posts we've seen how fun it is to try and run Clojure functions from the command line. In case you need a short refresher: it's not fun at all! Leiningen allows the Clojure party to remain fun by serving as an adapter between the CLI and Clojure. It takes care of the plumbing required for you to run a Clojure function. Whether the function is provided by your project, by Leiningen's built-in tasks, or by a Leiningen plugin, Leiningen does everything necessary to get the function to run. In a way, Leiningen's like an attentive butler who quietly and competently takes care of all your chores so that you can focus on your true passions, like knitting sweaters for gerbils or whatever. This manner of executing code was foreign to me when I first came to Clojure. At that time I had mostly coded in Ruby and JavaScript, and I had a decent amount of experience in Objective C. Those languages employ two different paradigms of code execution. Ruby and Javascript, being scripting languages, don't require compilation and execute statements as they're encountered. Objective C requires compilation and always starts by executing a main method. With Leiningen, Clojure has achieved an amalgamation of the two paradigms by allowing you to easily run arbitrary functions with a compiled language. The End Hopefully, this article has given you a firmer grasp of what Leiningen is. The idea that Leiningen is a task runner with a powerful set of built-in tasks designed to aid Clojure artifact management should help you organize your disparate chunks of Leiningen knowledge. In the next article, we'll add another chunk of Leiningen knowledge by examining the way Leiningen retrieves artifacts from repositories and incorporates them in your project. Goodbye for now! Shout Outs Thanks to Pat Shaughnessy and technomancy for reviewing this article. technomancy provided the line "Leiningen is an adapter between the CLI and Clojure", which really helped!

Ruby 2.0 Works Hard So You Can Be Lazy

about 4 years ago | Pat Shaughnessy: Pat Shaughnessy

Lazy enumeration isn't magic; it's just a matter of hard work. Ruby 2.0's new lazy enumerator feature seems like magic. In case you haven't […]

Building Single Page Applications and CORS

about 4 years ago | Rocky Jaiswal: Still Learning

A while back I promised some insight on CORS here, and it's about time I delivered. A few things have changed since then: I worked a bit on the Play framework and found it to be quite nice, and also a lot of project requests in my day job now ...