Who is talking?


Motivation: Part 2 - Mastery - Take the flow test

over 7 years ago | Jimmy Skowronski

The flow

Gallup ran an employee engagement study in America which showed that employee disengagement caused by “managers from hell” costs U.S. companies over $450 billion annually. They found that only about 30% of the workforce is engaged at work, meaning that the remaining 70% are not reaching their full potential. Organisations with a high percentage of engaged employees experienced almost 150% higher earnings per share than their competitors with a less engaged workforce. As this shows, engagement at work is a very important factor for the company, but also for us, the employees. Engagement is a route to mastery. It’s a powerful force in our lives.

That leads to Csikszentmihalyi, a Hungarian researcher who in the late 1950s started exploring “the positive, innovative, creative approach to life instead of the remedial, pathological view that Sigmund Freud had or the mechanistic work”. Behind that complex title lay his belief that in our lives we have too much compliance and way too little engagement. This exploration led him to the study of play. He found that when playing, many people experience something he called an “autotelic experience” - from the Greek auto (self) and telos (goal or purpose). In that state the goal is self-fulfilling and the activity itself is the reward. Some people call it a “trance”. I’m quite sure everyone has had such moments in their life, when we were so deep in what we were doing that the world could have ceased to exist and we wouldn’t even have noticed. Later we often couldn’t recall the details of what was happening, but we had this amazing feeling of achieving something great. He personally called it “flow”. Csikszentmihalyi struggled to find what causes flow and what people’s state of mind is when it happens. So he designed a flow test. He asked his students to set up a timer to go off randomly several times a day. Each time the timer sounded, they recorded what they were doing and how they were feeling.
On the basis of this test run, he developed a methodology called the Experience Sampling Method.

Take the flow test

So, let’s do the flow test. Here is how. Pick someone at work or a friend. Ask them to text, tweet or otherwise poke you no more than ten times a day at totally random moments. When that happens, take a notepad and write down your answers using the attached form (see links). Do it for at least a week but no more than two, and then collect the results. Look at them and seek patterns. Have you experienced flow? See if you can find the specific situations when it happened. So, is anyone up to the challenge?

Links

Gallup's study on workforce engagement
On the Measurement and Conceptualization of Flow [PDF]
Csikszentmihalyi's TED talk
flow test form.docx
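The random-timer idea behind the flow test can be sketched in a few lines of code. This is my own minimal illustration, not part of the original post; the ten-pings-a-day cap comes from the exercise above, while the waking-hours window (9:00 to 21:00) is an assumption.

```python
import random

def daily_ping_times(max_pings=10, start_hour=9, end_hour=21, seed=None):
    """Pick up to max_pings random moments in a day, expressed as
    minutes since midnight, restricted to a waking-hours window."""
    rng = random.Random(seed)
    count = rng.randint(1, max_pings)
    window = list(range(start_hour * 60, end_hour * 60))
    # sample without replacement so no two pings collide, then sort
    return sorted(rng.sample(window, count))

# Example: a reproducible one-day schedule
schedule = daily_ping_times(seed=42)
```

Each time a ping fires, you would jot down what you were doing and how you felt, exactly as Csikszentmihalyi's students did with their timers.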


almost 8 years ago | Jimmy Skowronski

Recently, I had the pleasure of speaking about team motivation in one of our offices. Judging by the discussion that sparked during the talk and some comments afterwards, I dare assume that people liked it, so I thought I would blog more about it. This post is the beginning of a series that I will write over the next few weeks. I want to cover all the topics I talked about in slightly more detail. In this series I will bring in concepts and ideas that are mostly based on the excellent book “Drive” by Daniel H. Pink, but also on my own experience and other sources. I don’t remember when and how I heard about this book, but it opened my eyes and greatly changed how I think about those things. I hope you will find something useful here, whether you have a team or not.

Source of motivation

A long time ago it was presumed that humans were biological creatures struggling for survival. This theory was then replaced by a new one that assumed we also respond well to rewards and punishments. That theory worked perfectly through the industrial age, when tasks mainly consisted of manual labour. For a long time everyone presumed that a greater reward leads to better performance. That if-then model became the base and was used everywhere around the world with great results. All seemed to be in place, but when economies grew and people had to employ new, more complex and sophisticated skills, that well-established if-then model started breaking down. Researchers and business leaders slowly started realising that the current model was not really working as expected. Researchers around the world scratched their heads and finally confirmed that, when a task is cognitive and requires creativity and thinking, the if-then model fails. Moreover, they also found that increasing the reward even further led to worse performance. It became clear that a new model was needed to replace the broken one.
A model that employs a different type of motivation than simple reward. For that we need to look at what drives us as individuals.

I-type and X-type

The old if-then model assumed extrinsic, X-type behaviour, where people are driven mostly by external reward. As it turns out, there is another drive behind our motivation, one that comes from our intrinsic desires. That I-type behaviour concerns itself less with external rewards and more with the inherent satisfaction of the activity itself. Of course it’s not possible to categorise all of humanity into two types, and nobody is just one of them. We are all both I-type and X-type to some degree; what matters is how much of each is in us. Think about yourself for a moment. What energizes you? What gets you up in the morning and fuels you through the day? Does it come from outside or inside? What drives the career choices you made or will make? I’m not saying that X-type people will always neglect the inherent enjoyment of what they do, or that an I-type person will resist goodies, bonuses and a better salary. I’m saying that there is a main driver that is either intrinsic or extrinsic. For the X-type the main driver is the reward; any job satisfaction is welcome but secondary. For the I-type the main motivators are freedom, challenge and purpose, and any extra gains are welcome but not critical. Of course we all want money and to live the good life, even I-types for whom the reward is secondary, but only once they have enough money not to have to worry. The key to enabling I-type behaviour is to take money off the table. People must be paid enough so they can stop worrying about making a living and focus on that intrinsic motivation instead. If your salary is not good enough, you will automatically switch to the X-type. However, once the reward rises to a certain level, money starts playing a different role for I- and X-types.
I will write more about that aspect later in the series.

The three elements

So the if-then model doesn’t work in our context. That’s a fact. In general, I-type people perform better at cognitive tasks; they are more creative and more effective. We need a new model then, one that will nurture and boost I-type behaviour. Most current start-ups and so-called L3C companies (low-profit limited liability) have realised that, and it is often a huge part of their success. The new model requires three basic elements.

Autonomy

By default people are autonomous and self-directed. Unfortunately our life, including outdated “management”, changes that and turns us from I-type to X-type. To encourage I-type behaviour and the high performance it enables, the first requirement is autonomy. People need autonomy over task (what they do), time (when they do it), team (who they do it with), and technique (how they do it). Companies that offer such autonomy, sometimes in a quite radical way, outperform their competitors.

Mastery

Only engagement can produce mastery – becoming better at something that matters. We are in pursuit of the mastery that is essential for the I-type. Mastery is a mindset. It requires the capacity to see your abilities not as finite, but as infinitely improvable. Mastery is also pain. It demands effort, sacrifice and a lot of practice. And finally, it’s an asymptote. It can never be fully achieved.

Purpose

Humans, by their nature, seek purpose, a cause greater and more enduring than themselves. We achieve that in our private lives in many ways: by volunteering, having kids or helping others. We also need it at work. We all want to know that our work has a purpose. In the new business model, purpose maximisation takes place alongside profit maximisation as an aspiration and a guiding principle.
That purpose motive usually expresses itself in three ways: in goals that use profit to reach the purpose, in words that emphasize more than self-interest, and in policies that allow people to pursue purpose on their own terms.

Intrigued?

If you are intrigued by this post you can watch a cool visualisation of one of Pink’s talks http://youtu.be/u6XAPnuFjJc or his TED talk http://youtu.be/rrkrvAUbU9Y. You can also get the book and start reading it right away http://www.amazon.co.uk/Drive-Surprising-Truth-Motivates-ebook/dp/B0033TI4BW. I strongly recommend it to everyone.

DDD North 2013

almost 8 years ago | Jimmy Skowronski

Yay! Yet another DDD (http://www.dddnorth.co.uk) is coming in October. And it’s on my birthday! I’ve just posted two sessions but there may be more (see here). I hope I will get your vote.

Building Single Sign On websites

Meet Dave. Dave is like you, and he has a problem. He found that great website, but he needs to register to use it. That means he needs to create yet another user name and password. And he has to remember it, or write it on that big post-it on his monitor - booo! So Dave decided he will not register; he will look for another website he can use without creating yet another password. There are plenty of people like Dave. He may be your user, or you may be like him. But we need users and passwords and permissions and all that stuff on our websites. Here is an idea. What if you could delegate all that somewhere and let someone else worry about passwords, security and all that boring stuff – yayyy! This session will show you how to delegate your authentication somewhere else. You will learn the basic theory behind Single Sign On and delegated authentication concepts.

Practical use of SQL Server events

Databases, good old databases; we all love them when they work as we want. When they don’t… well, it’s a totally different story. Most of us have been in a sticky situation when our queries didn’t perform quite the way we expected. Sometimes we are lucky and can isolate the troublesome query and analyse it. In some cases, however, our troublemaker is part of a complex system and then things tend to go nasty. There are many ways you can try to find your way. This session will show you one of them, which uses SQL Server events to capture some useful information about your query, such as wait stats or the execution plan. This is going to be a very practical session demonstrating the application of a specific technique to solve a specific problem. There will be no new frameworks or methodologies, just good old problem solving. This is related to this post.

Cloning, inserting and deleting PowerPoint slides using OpenXML SDK

almost 8 years ago | Jimmy Skowronski

Recently I’ve been fighting a lot with the OpenXML SDK, working on a cool data-driven presentation engine. One of the issues I had was how to clone slides. I did some searching and found multiple blogs on how to do it, but none of them really worked. Most of them show you how to clone a simple slide and a few show how to deal with charts. In my case I had images, drawings, embedded Excel workbooks and charts all together on a single slide. After some time I managed to create an extension method that creates a full slide clone. The key was to copy all embedded elements, including user defined tags.

/// <summary>
/// Clones the specified slide.
/// </summary>
/// <param name="sourceSlide">The slide to clone.</param>
/// <returns>Cloned slide.</returns>
public static SlidePart Clone(this SlidePart sourceSlide)
{
    // find the presentation part
    var presentationPart = sourceSlide.GetParentParts()
        .OfType<PresentationPart>()
        .Single();

    // clone slide contents
    Slide currentSlide = (Slide)sourceSlide.Slide.CloneNode(true);
    var slidePartClone = presentationPart.AddNewPart<SlidePart>();
    currentSlide.Save(slidePartClone);

    // copy layout part
    slidePartClone.AddPart(sourceSlide.SlideLayoutPart);

    // copy all embedded elements
    foreach (ChartPart part in sourceSlide.ChartParts)
    {
        ChartPart newpart = slidePartClone.AddNewPart<ChartPart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
        newpart.AddNewPart<EmbeddedPackagePart>(part.EmbeddedPackagePart.ContentType, part.GetIdOfPart(part.EmbeddedPackagePart));
        newpart.EmbeddedPackagePart.FeedData(part.EmbeddedPackagePart.GetStream());
    }

    foreach (EmbeddedObjectPart part in sourceSlide.EmbeddedObjectParts)
    {
        EmbeddedObjectPart newpart = slidePartClone.AddNewPart<EmbeddedObjectPart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
    }

    foreach (EmbeddedPackagePart part in sourceSlide.EmbeddedPackageParts)
    {
        EmbeddedPackagePart newpart = slidePartClone.AddNewPart<EmbeddedPackagePart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
    }

    foreach (ImagePart part in sourceSlide.ImageParts)
    {
        ImagePart newpart = slidePartClone.AddNewPart<ImagePart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
    }

    foreach (VmlDrawingPart part in sourceSlide.VmlDrawingParts)
    {
        VmlDrawingPart newpart = slidePartClone.AddNewPart<VmlDrawingPart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
    }

    foreach (UserDefinedTagsPart part in sourceSlide.UserDefinedTagsParts)
    {
        UserDefinedTagsPart newpart = slidePartClone.AddNewPart<UserDefinedTagsPart>(part.ContentType, sourceSlide.GetIdOfPart(part));
        newpart.FeedData(part.GetStream());
    }

    return slidePartClone;
}

In case you need it, here is another extension method I use to insert the clone into the presentation. In my case I didn’t want to add it at the end of the presentation but wanted to insert it before the slide that was used as the clone source.
/// <summary>
/// Inserts the new slide before the specified slide.
/// </summary>
/// <param name="presentationPart">The presentation.</param>
/// <param name="newSlidePart">The slide to be inserted.</param>
/// <param name="referenceSlidePart">The slide before which the new slide will be inserted.</param>
public static void InsertBefore(this PresentationPart presentationPart, SlidePart newSlidePart, SlidePart referenceSlidePart)
{
    SlideIdList slideIdList = presentationPart.Presentation.SlideIdList;

    // find the reference slide
    string refSlideRefId = presentationPart.GetIdOfPart(referenceSlidePart);
    SlideId refSlideId = slideIdList.ChildElements
        .Cast<SlideId>()
        .FirstOrDefault(x => x.RelationshipId.Value.Equals(refSlideRefId, StringComparison.OrdinalIgnoreCase));

    // find the highest id
    uint maxSlideId = slideIdList.ChildElements
        .Cast<SlideId>()
        .Max(x => x.Id.Value);

    // insert the new slide into the slide list before the reference slide
    var id = maxSlideId + 1;
    SlideId newSlideId = new SlideId();
    slideIdList.InsertBefore<SlideId>(newSlideId, refSlideId);
    newSlideId.Id = id;
    newSlideId.RelationshipId = presentationPart.GetIdOfPart(newSlidePart);
}

And finally, another extension method to delete a slide, just in case you need it.

/// <summary>
/// Deletes the slide from the presentation.
/// </summary>
/// <param name="presentationPart">The presentation.</param>
/// <param name="slideToDelete">The slide to delete.</param>
public static void DeleteSlide(this PresentationPart presentationPart, SlidePart slideToDelete)
{
    // Get the presentation from the presentation part.
    Presentation presentation = presentationPart.Presentation;

    // Get the list of slide IDs in the presentation.
    SlideIdList slideIdList = presentation.SlideIdList;

    // Get the slide ID of the specified slide.
    string refSlideRefId = presentationPart.GetIdOfPart(slideToDelete);
    SlideId slideId = slideIdList.ChildElements
        .Cast<SlideId>()
        .FirstOrDefault(x => x.RelationshipId.Value.Equals(refSlideRefId, StringComparison.OrdinalIgnoreCase));

    // Get the relationship ID of the slide.
    string slideRelId = slideId.RelationshipId;

    // Remove the slide from the slide list.
    slideIdList.RemoveChild(slideId);

    // Remove references to the slide from all custom shows.
    if (presentation.CustomShowList != null)
    {
        // Iterate through the list of custom shows.
        foreach (var customShow in presentation.CustomShowList.Elements<CustomShow>())
        {
            if (customShow.SlideList != null)
            {
                // Collect the slide list entries that reference the slide.
                LinkedList<SlideListEntry> slideListEntries = new LinkedList<SlideListEntry>();
                foreach (SlideListEntry slideListEntry in customShow.SlideList.Elements())
                {
                    // Find the slide reference to remove from the custom show.
                    if (slideListEntry.Id != null && slideListEntry.Id == slideRelId)
                    {
                        slideListEntries.AddLast(slideListEntry);
                    }
                }

                // Remove all references to the slide from the custom show.
                foreach (SlideListEntry slideListEntry in slideListEntries)
                {
                    customShow.SlideList.RemoveChild(slideListEntry);
                }
            }
        }
    }

    // Get the slide part for the specified slide.
    SlidePart slidePart = presentationPart.GetPartById(slideRelId) as SlidePart;

    // Remove the slide part.
    presentationPart.DeletePart(slidePart);
}

HTH
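For context, here is a hypothetical sketch of how the three extension methods above might be used together. This is my own illustration, not from the original post: the file name "deck.pptx" and the choice of the first slide part as the template are assumptions, and the code requires the DocumentFormat.OpenXml package plus the extension methods defined above.

```csharp
using System.Linq;
using DocumentFormat.OpenXml.Packaging;

using (var document = PresentationDocument.Open("deck.pptx", true))
{
    PresentationPart presentationPart = document.PresentationPart;

    // take an existing slide as a template
    // (note: SlideParts enumeration order is not guaranteed to match slide order)
    SlidePart templateSlide = presentationPart.SlideParts.First();

    // clone it, including charts, images and embedded workbooks
    SlidePart clone = templateSlide.Clone();

    // place the clone before the template in the slide order
    presentationPart.InsertBefore(clone, templateSlide);

    // ...fill the clone with data, then optionally drop the template
    presentationPart.DeleteSlide(templateSlide);
}
```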

Recording for Pluralsight, what I learnt

about 8 years ago | Jimmy Skowronski

A few months ago I finished recording my very first Pluralsight course, which was published in early February. The whole recording business took me a few long months, but in the end it was good fun. A few weeks ago I received my first royalty payment and, to my huge surprise, the amount was higher than I expected judging by the not-so-great performance of my course. Looks like I’m not as bad as I feared. Yes, it’s a niche subject, but some people are apparently watching it. I stayed in this glorious state for the next couple of days, until Fritz asked me to help with transcripts. Soon after, my crystal palace was shattered into a million pieces and I realised I totally suck. All right, I’m fishing for compliments, as my wife would certainly say. Anyway, as with every good story, let’s start from the beginning.

The idea: I will record a course

I don’t really remember how the idea formed in my head, but at some point I started browsing the Pluralsight forum and the list of most wanted topics. As it happened, OAuth was quite high on the list, along with a couple of other subjects. I made a shortlist and sent an email, quietly expecting to be rejected straight away. Deep down I think I felt I wasn’t grown up enough to be a Pluralsight author. To my surprise, a few weeks later I received an email from Megan, the Acquisitions Editor, who asked me to prepare an audition recording. That was the very first time I actually had to record something, not counting a short podcast a very long time ago. The audition video was actually quite tricky to do. I had to create a fully structured talk on a subject of my choice, with an introduction, some theory, a demo and a summary. And all that squeezed into just 10 minutes. I wish I had done some grok talks before. The most difficult part was not the content itself, but actually recording it. I spent hours in Camtasia processing my 10-minute clip, removing all the ymmms and emmms and making it as good as I could.
In the end it took me a couple of weeks and multiple more or less successful attempts, but I finally had it. As it turned out, the audition was good enough and I was approved. Hey, we are in business! Shortly after that, Megan and I started discussing subjects, deadlines, experience and so on. It turned out that OAuth had already been taken by Dominick Baier, but as I was talking a lot about PowerShell back then, we decided to go for that one. After we agreed on the scope, I had to create an outline and wait for approval. That was followed by a Skype call with the Editor in Chief, Fritz Onion, where we further discussed this and other potential topics. Shortly after that I received an email from Megan (I mean, I was excited, not her) in which I was offered a contract. Without sharing any details, it consisted of a one-off fee and royalties. We set the deadline for November and I was good to go. The first surprise, a silly one, was that I needed to get a microphone. And I mean a proper one, not a simple Logitech headset with a microphone. In my naïve thinking I expected my Logitech gaming set would do just fine. I was wrong, and I had to go shopping. A week or so later I had a sweet-looking Rode Podcaster with a shock mount, mounted on a tripod and standing on my desk. As practice would show later, this wasn’t the best choice. At that moment I was in heaven. I had professional-looking kit on my desk, felt empowered and excited, and I was ready to start recording.

Hard time

It was mid-August and I was very confident I would have it done and dusted by November. I was so wrong. There were many factors I could use as an excuse: holiday, workload, technical difficulties, neutrinos and cosmic radiation. The ugly truth is that I was moving forward so slowly I could barely notice any progress at all. Preparation for the first of four planned modules took me almost two months.
I was fiddling with content, moving things back and forth, changing my mind ten times a day, not really knowing what to do or how to start. Finally, around the middle of October, I managed to break through that and started recording. As expected, this was the most difficult part. First there was me vs. the microphone. The Rode Podcaster is a truly great piece of kit, but to get good quality sound I had to keep my mouth just a few centimetres from it. I still don’t know what I was doing wrong, but any time I moved further away than 1-2 inches I was just too quiet. Maybe I was oversensitive, but that’s how it felt back then and I wanted to make it perfect. At the same time it was perfectly able to capture any noise that happened within a five-mile radius, including keyboard clicks. Funnily enough, the latter could be significantly reduced by putting a folded towel underneath the keyboard. Because I had only a small tripod I had to put it in front of me and keep the keyboard behind it. That let me be close enough to the microphone and reduced the clicks, but I had to type in a really awkward position with my arms hugging the tripod. Now I know I should get either a desk-mounted arm or another microphone, or learn what I’m doing wrong. Despite that, I started making some progress. By November I had my first module recorded… about twenty times over, and none of the takes was good enough. Finally I gave up and accepted the fact that I would never be satisfied. After exporting slides so they could be easily saved as PDF (animations can be an issue) and adding the metadata document, my first module was “ready”, so I sent it for review.

I’m on fire

While I was waiting for the review result I used the flow I was in and kept working. I approached the next module in a slightly different way. Instead of recording sections, sometimes 10 minutes long, as a single clip, I decided to chunk it.
Using some great guidelines from the authoring kit, I created a table where I decomposed each section into smaller chunks, each no longer than 2 minutes. It was initially quite difficult to keep that organised, but I had much less content to repeat if I wanted to re-record one clip. After the whole module was ready I used Camtasia to clean it up and glue all the clips together into longer sections. It took me a whole sleepless weekend to record the second module. I think in the end I re-recorded every single clip at least once. Never mind that, I had my second module ready for review. The next module took me just one day and half a night. I was on fire, and finally, sometime around the New Year, I was done. Now I only had to provide long and short bios and descriptions, assessment questions, and clean up the source files. Creating assessment questions was actually a lot of fun. Source files had to be assigned to each module, which was annoying as I tend to reuse them. It took me a while to clarify what I used and where. Next time I shall be more careful. All that was left now was waiting for the Managing Technical Editor, Mike Woodring, to do the technical review.

It’s alive!

By the end of January all was done and ready. Surprisingly, I didn’t get any feedback from the reviewers. I’ve been told since that this is pretty common and feedback is often only given when something is wrong. I would still love to see some, though. Mike? In January I was granted access to the private Yammer site where I joined the Pluralsight authors’ family, and finally, on the 15th of February, my course was published. Soon after the course was published I was given access to a special website where I could track how it performs and what the calculated royalty is. The numbers initially were not very clear, but after the first quarter I learnt how to read them. A few weeks later I was asked to help with the transcript. I think this was the best learning experience since I started.
I had to listen to the whole course and check all its content line by line, sentence by sentence. Only then did I realise how many mistakes I had made and how bad I sometimes sound. This was a really great lesson. Overall, I’m very happy I did it. Not because of the money, but because of myself and my self-esteem. I’ve learnt that recording is far more difficult than speaking to an audience at a conference. The audience will always give me feedback and I can see their reaction to what I’m saying. There is nothing when recording. I was spending hour after hour recording, listening and re-recording over and over again. Every single mistake in effect damages the whole clip. I will not count how many times I stopped in the middle of a sentence, deleted the clip and started again. It was sometimes very, very frustrating. Despite that, it’s really a lot of fun.

Would I do it again? Yes I would. In fact, I’ve already committed to three new courses. I will try some new techniques other authors are sharing on Yammer. Most likely I will still keep recording in very small chunks, but I may actually write the whole transcript upfront. I found it to be a great check of whether what I’m saying actually makes any sense. I may also follow a popular method and try recording the video first and then adding the narration. I will need to experiment and find out how I can improve. I will also need to figure out what to do with the microphone. And I can’t miss next year’s summit. In general, I would recommend this experience to anyone who is willing to try. It’s really great.

Your opinion matters

That was my story, but I did it for you. The course has been viewed by over 500 people since it was published. Wherever and whoever you are, I really want to know your opinion, even if it is a bad one. I really mean it. With your feedback I can do better, I can learn and improve, but I need your help. So if you are out there, please let me know and don’t be a stranger.

The ultimate git bash prompt

about 8 years ago | Jimmy Skowronski

I've been playing with the Git bash prompt, trying to make it nicer and, let’s be honest here, more fun. After some digging I found one that looks quite cool: https://github.com/magicmonty/bash-git-prompt. It needed a few small modifications to make it useful in a Windows environment, and here is the effect. Here I'm in a perfectly healthy master, just after a push (screenshot). Then, when I modified a file, the prompt changed (screenshot), showing me that there is one modified file. After a fetch I'm being told that I'm 1 commit ahead but 5 behind (screenshot). So I rebase and (screenshot) no conflicts, and I'm nicely one commit ahead. Cool, isn't it? The only drawback is that it takes a few hundred milliseconds longer for the next prompt to appear. This is because the Python script is executed every time and runs a couple of git commands. It was noticeable initially, but after a couple of hours I got used to it.

Setup

1. Install Python and add the install folder (default C:\Python27) to the PATH environment variable.
2. In C:\Users\{your user name}\ create a folder called .bash and drop the gitprompt.sh and gitstatus.py files there (bash.zip).
3. In the .bashrc file add source ~/.bash/gitprompt.sh. If you don’t have that file, simply create an empty text file and call it .bashrc.

Enjoy! You can easily customise the prompt by editing the gitprompt.sh file. I’ve added all colour definitions for your convenience. Let me know if you are going to give it a go. HTH
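Steps 2 and 3 of the setup can be sketched as a small script. This is a minimal sketch, not from the original post: it uses a scratch directory standing in for your home directory so it is safe to try anywhere, and the touched files are stand-ins for the real gitprompt.sh and gitstatus.py from bash.zip.

```shell
# scratch directory standing in for the home directory
HOME_DIR="$(mktemp -d)"

# step 2: create the .bash folder and place the prompt scripts there
mkdir -p "$HOME_DIR/.bash"
touch "$HOME_DIR/.bash/gitprompt.sh" "$HOME_DIR/.bash/gitstatus.py"  # stand-ins for the real files

# step 3: create .bashrc if missing and make it source the prompt script (idempotent)
BASHRC="$HOME_DIR/.bashrc"
grep -qs 'gitprompt.sh' "$BASHRC" || echo 'source ~/.bash/gitprompt.sh' >> "$BASHRC"
```

Running the last line twice adds the source line only once, so it is safe to re-run the setup.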

Capturing stats for slow running queries

about 8 years ago | Jimmy Skowronski

My Story

Recently I had some problems with a query that took a very long time to execute; it took over 20 minutes to do 150 inserts, that's a whopping 9-ish seconds per insert. There was nothing I could see straight away that would shed any light on the problem. The code itself was very simple: just loop through a collection and do an insert for each item. As each command opened and closed its own connection, I thought that this might be the issue. 15 minutes and a few lines of code later, all inserts used the same connection, opened once. No joy; the insert time only dropped to just above 8 seconds per insert. Something certainly was wrong there, but I had no idea what until I found this great post by Paul S. Randal about wait stats (http://www.sqlskills.com/blogs/paul/capturing-wait-stats-for-a-single-operation). That gave me hope of actually finding where the problem sits. Unfortunately his approach is suited to running complex queries in SQL Explorer. That wouldn't work for me because my query relied on many others and was part of a complex data processing routine. I had to find a way to collect stats while running my code. An idea started forming in my head: if I could modify my query to capture stats, run the application and then see what was collected, it would be ace! A few minutes later I had the solution. After half a day of tearing my way through the unknown land of wait stats, events and complex SQL, I finally got my stats. Here is how.
Getting stats

Repeating from Paul's post, here is what you have to run before your troublesome query:

IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits')
    DROP EVENT SESSION MonitorWaits ON SERVER;

CREATE EVENT SESSION MonitorWaits ON SERVER
    ADD EVENT sqlos.wait_info (WHERE sqlserver.session_id = 1 /* session id here */)
    ADD TARGET package0.asynchronous_file_target
        (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
    WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;

and after:

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;

The first snippet creates an extended event session, dropping one if it already exists. The session has one event, described in the sys.dm_xe_objects table as "Information regarding waits in SQLOS". The session also has an asynchronous file target associated, which writes captured events to the specified files. You can name and place those files wherever you want, but the folder you use must exist before you execute the query. With that in place, anything you execute between those two statements will be nicely captured. Reading the first snippet, however, you will see there is a session_id parameter. This is somewhat problematic. It's very easy to get the session ID when you are using SQL Explorer, but when your application connects to the database it can be anything. The rescue lies in the sys.sysprocesses table, which can give you your session ID, except it's not as easy as it seems. The only way for you to find the ID is to find the session that has the username you are using in the loginame column. Normally, however, you will see there are many sessions created from your application and you will have no idea which one to pick. To solve that, you need to create a separate user that will be used only by the connection you open to run your query.
In this way you will have only one record in sys.sysprocesses, and that will give you the session ID. Code time!

```csharp
var perfConnString = "data source=(local);Initial Catalog=MyDb;User Id=perf_test;Password=test;MultipleActiveResultSets=True";
using (var connection = new SqlConnection(perfConnString))
{
    connection.Open();
    var sidCmd = new SqlCommand("SELECT TOP 1 spid FROM sys.sysprocesses WHERE loginame = 'perf_test' ORDER BY last_batch DESC", connection);
    var sessionId = sidCmd.ExecuteScalar();

    string startCommandText = @"IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits')
    DROP EVENT SESSION MonitorWaits ON SERVER;

CREATE EVENT SESSION MonitorWaits ON SERVER
ADD EVENT sqlos.wait_info (WHERE sqlserver.session_id = " + sessionId.ToString() + @")
ADD TARGET package0.asynchronous_file_target (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;";

    string endCommandText = @"ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;";

    var startCommand = new SqlCommand(startCommandText, connection);
    startCommand.ExecuteNonQuery();

    // Run your commands here

    var endCommand = new SqlCommand(endCommandText, connection);
    endCommand.ExecuteNonQuery();
}
```

There you are. The first two commands fetch the session ID, which is then embedded into the CREATE EVENT SESSION statement. If you are using transactions in your query you may want to comment them out, as they can create some issues.
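One thing worth guarding against: if the measured code throws, the STOP statement never runs and the session keeps capturing. A small disposable wrapper keeps the start/stop pair balanced; this is my own sketch (the WaitStatsCapture name is hypothetical, not from the original post), reusing the same start command text as above:

```csharp
using System;
using System.Data.SqlClient;

// Sketch: starts the MonitorWaits session on construction and stops it on
// Dispose, so the session is stopped even if the measured code throws.
public sealed class WaitStatsCapture : IDisposable
{
    private readonly SqlConnection _connection;
    private const string StopText = "ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;";

    public WaitStatsCapture(SqlConnection connection, string startCommandText)
    {
        _connection = connection;
        new SqlCommand(startCommandText, _connection).ExecuteNonQuery();
    }

    public void Dispose()
    {
        new SqlCommand(StopText, _connection).ExecuteNonQuery();
    }
}
```

Usage then becomes `using (new WaitStatsCapture(connection, startCommandText)) { /* run your commands here */ }`.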
Once you run your application, do whatever you have to, but try to run the problematic area only once. Every time you execute the above, new event data is appended to the files, increasing the numbers and potentially clouding the outcome.

Checking results

Once you are done, head back to SQL Server Management Studio. The first thing you can run is:

```sql
SELECT COUNT (*)
FROM sys.fn_xe_file_target_read_file
    ('C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null);
```

which will show you how many events you've collected. They can go into the thousands! The easiest way to see the wait stats is to run this:

```sql
-- Create intermediate temp table for raw event data
CREATE TABLE #RawEventData (
    Rowid INT IDENTITY PRIMARY KEY,
    event_data XML);
GO

-- Read the file data into the intermediate temp table
INSERT INTO #RawEventData (event_data)
SELECT CAST (event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file (
    'C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null);
GO

SELECT
    waits.[Wait Type],
    COUNT (*) AS [Wait Count],
    SUM (waits.[Duration]) AS [Total Wait Time (ms)],
    SUM (waits.[Duration]) - SUM (waits.[Signal Duration]) AS [Total Resource Wait Time (ms)],
    SUM (waits.[Signal Duration]) AS [Total Signal Wait Time (ms)]
FROM
    (SELECT
        event_data.value ('(/event/@timestamp)[1]', 'DATETIME') AS [Time],
        event_data.value ('(/event/data[@name=''wait_type'']/text)[1]', 'VARCHAR(100)') AS [Wait Type],
        event_data.value ('(/event/data[@name=''opcode'']/text)[1]', 'VARCHAR(100)') AS [Op],
        event_data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT') AS [Duration],
        event_data.value ('(/event/data[@name=''signal_duration'']/value)[1]', 'BIGINT') AS [Signal Duration]
     FROM #RawEventData
    ) AS waits
WHERE waits.[op] = 'End'
GROUP BY waits.[Wait Type]
ORDER BY [Total Wait Time (ms)] DESC;
GO

-- Cleanup
DROP TABLE #RawEventData;
GO
```

That will give you results like this:

```
Wait Type          Wait Count  Total Wait Time (ms)  Total Resource Wait Time (ms)  Total Signal Wait Time (ms)
-----------------  ----------  --------------------  -----------------------------  ---------------------------
NETWORK_IO         4           0                     0                              0
TRANSACTION_MUTEX  2           0                     0                              0
WRITELOG           4           4558147               3985741                        12
```

What was wrong?

As you can see, in my case the problem was caused by the WRITELOG wait type. Checking MSDN (http://msdn.microsoft.com/en-us/library/ms179984.aspx) I could see that WRITELOG is defined as "Occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits." That speaks for itself: the problem is in the transaction log. A bit of reading to cover gaps in my SQL knowledge revealed that without an explicit transaction, each statement causes SQL Server to flush the transaction log. As the HDD wasn't that great and the log file itself was over 17 GB, SQL Server had to write to a huge file 150 times over. The solution was simple then: I had to reduce transaction log activity. Because one can't switch it off entirely, I decided to wrap my query in BEGIN TRAN ... COMMIT TRAN. Bingo! That causes the transaction log to be flushed only once, at the end of the mass of 150 inserts. The effect was a reduction to 1.2 seconds per insert. How cool is that?

What else?

That's not all you can do. There are many more stats you can collect this way. If you run this query:

```sql
SELECT * FROM sys.dm_xe_objects WHERE object_type = 'event';
```

you will see well over 200 different events you can use. For illustration, inspired by another of Paul's posts (http://www.sqlskills.com/blogs/paul/tracking-expensive-queries-with-extended-events-in-sql-2008/), I used the following code to capture the query execution plan.
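In ADO.NET terms, the transaction fix described above amounts to wrapping the insert loop in a single SqlTransaction. A minimal sketch; the table, column and parameter names here are illustrative, not from the original code:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

static class BatchInserter
{
    // Sketch: run the whole batch of inserts inside one transaction so the
    // transaction log is flushed once at COMMIT instead of once per INSERT.
    // "MyTable" and "Name" are assumed names for illustration.
    public static void InsertAll(SqlConnection connection, IEnumerable<string> items)
    {
        using (SqlTransaction tx = connection.BeginTransaction())
        {
            foreach (var item in items)
            {
                var cmd = new SqlCommand("INSERT INTO MyTable (Name) VALUES (@name)", connection, tx);
                cmd.Parameters.AddWithValue("@name", item);
                cmd.ExecuteNonQuery();
            }
            tx.Commit(); // the single log flush happens here
        }
    }
}
```

Note that each SqlCommand must be associated with the transaction explicitly, otherwise ADO.NET throws at execution time.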
```csharp
var perfConnString = "data source=(local);Initial Catalog=MyDb;User Id=perf_test;Password=test;MultipleActiveResultSets=True";
using (var connection = new SqlConnection(perfConnString))
{
    connection.Open();
    var dbIdCmd = new SqlCommand("SELECT DB_ID()", connection);
    var databaseId = dbIdCmd.ExecuteScalar();

    string startCommandText = @"IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'MonitorWaits')
    DROP EVENT SESSION MonitorWaits ON SERVER;

CREATE EVENT SESSION MonitorWaits ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.plan_handle)
     WHERE sqlserver.database_id = " + databaseId + @" /*DBID*/)
ADD TARGET package0.asynchronous_file_target (SET FILENAME = N'C:\SqlPerf\EE_WaitStats.xel', METADATAFILE = N'C:\SqlPerf\EE_WaitStats.xem')
WITH (max_dispatch_latency = 1 seconds);

ALTER EVENT SESSION MonitorWaits ON SERVER STATE = START;";

    string endCommandText = @"ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;";

    var startCommand = new SqlCommand(startCommandText, connection);
    startCommand.ExecuteNonQuery();

    var readCmd = new SqlCommand("your query here", connection);
    var reader = readCmd.ExecuteReader();
    while (reader.Read()) { }

    var endCommand = new SqlCommand(endCommandText, connection);
    endCommand.ExecuteNonQuery();
}
```

Note that this session filters on the database ID rather than a session ID, hence the SELECT DB_ID() at the top. After the test, I went back to SQL Server Management Studio and ran:

```sql
SELECT
    data.value ('(/event[@name=''sql_statement_completed'']/@timestamp)[1]', 'DATETIME') AS [Time],
    data.value ('(/event/data[@name=''cpu'']/value)[1]', 'INT') AS [CPU (ms)],
    CONVERT (FLOAT, data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT')) / 1000000 AS [Duration (s)],
    data.value ('(/event/action[@name=''sql_text'']/value)[1]', 'VARCHAR(MAX)') AS [SQL Statement],
    SUBSTRING (data.value ('(/event/action[@name=''plan_handle'']/value)[1]', 'VARCHAR(100)'), 15, 50) AS [Plan Handle]
FROM (SELECT CONVERT (XML, event_data) AS data
      FROM sys.fn_xe_file_target_read_file ('C:\SqlPerf\EE_WaitStats*.xel', 'C:\SqlPerf\EE_WaitStats*.xem', null, null)) entries
ORDER BY [Time] DESC;
```

which gave me this:

```
Time                     CPU (ms)  Duration (s)  SQL Statement                                                                      Plan Handle
-----------------------  --------  ------------  ---------------------------------------------------------------------------------  --------------------------------------------------
2013-04-10 13:51:13.143  0         7.3E-05       ALTER EVENT SESSION MonitorWaits ON SERVER STATE = STOP;                           0x06000700912C723140A1B487000000000000000000000000
2013-04-10 13:51:11.523  124       5.695983      SELECT * FROM SurveyPublicationJob j LEFT OUTER JOIN Surveys s ON j.SurveyId=s.Id  0x060007005CCBB80B40A14782000000000000000000000000
```

So my query took over 5 seconds to run. Running this:

```sql
SELECT [query_plan] FROM sys.dm_exec_query_plan (0x060007005CCBB80B40A14782000000000000000000000000);
```

and clicking the link that is returned gave me the execution plan for the query.

[image: execution plan screenshot]

That's pretty cool in my opinion. Happy querying! HTH

Encrypting .NET configuration file

about 8 years ago | Jimmy Skowronski: jimmy skowronski

Did you know that .NET allows encrypting any section of a .config file? It can be a section within the web.config or app.config file, or in any linked file such as confirmit.config. A very useful feature if you want to protect passwords or other secret data. Best of all, your application will work unaffected when you encrypt the configuration file: no code change is required. You can work in your development environment with plain configuration and encrypt it on release; .NET will decrypt the configuration transparently. This functionality is provided by ASP.NET, but you can use it outside the web environment as well. Recently I had to do it at work and, as I hadn't done it for ages, it took me a moment to figure out all the bits and bobs. For your entertainment and my memory, here is how to do it.

Creating the encryption key file

First, you will need to create a new RSA key container to use later. Simply use the following command:

```
aspnet_regiis.exe -pc "MyKey" -exp
```

MyKey – name of the key container. This will be used later in the configProtectedData element.
-exp – creates the key as exportable. This option is required if you want to share the same key between multiple servers.

Now you can export the key to an XML file using the following command:

```
aspnet_regiis.exe -px "MyKey" keys.xml -pri
```

MyKey – name of the key container.
keys.xml – file the key will be exported to.
-pri – ensures the private key is exported as well. This is required to encrypt configuration sections.

Encrypting in a web application

I will start with an ASP.NET application, where things are nice and easy. First you need the XML file with the encryption key you generated just a moment ago. You can also use a key file provided by someone else.
The encryption process requires two preparation steps if the section you are encrypting is a custom section declared in the configSections element of the .config. Firstly, aspnet_regiis.exe must be able to load the assembly that defines the section. You can either put it in the GAC or simply copy it to the same folder as aspnet_regiis.exe; if you fail to do so you will get an error. After you finish encrypting you can delete this assembly. Secondly, you need to ensure the section declaration contains both the type and the assembly name, i.e.

```xml
<section name="mySection" type="MyApp.MySection, MyApp.Configuration" />
```

With that done you are ready to roll. The first step is to add the following section to your config file:

```xml
<configProtectedData>
  <providers>
    <add keyContainerName="CustomKeys"
         useMachineContainer="true"
         description="Uses RsaCryptoServiceProvider to encrypt and decrypt"
         name="CustomProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</configProtectedData>
```

The names you specify in the keyContainerName and name attributes will be used later. If you generated your own key you can skip the next command; if you received the key from someone else you need to import it. The command below imports the key into the container:

```
aspnet_regiis.exe -pi CustomKeys keys.xml
```

CustomKeys – the key container name as defined in configProtectedData above.
keys.xml – path to the XML file with the encryption key.

With the key in the container you can now start encrypting. To encrypt a section use this command:

```
aspnet_regiis.exe -pef section -site SiteName -app /VirtualPath -prov CustomProvider
```

section – the name of the section element you want to encrypt.
SiteName – the site name if the application is configured in a specific IIS site. This parameter can be omitted if the application is in the IIS default web site.
/VirtualPath – virtual path to the website. It must start with the "/" character.
CustomProvider – name of the provider as defined in the configProtectedData element above.

Finally, you need to grant the application pool user permissions to the specific key container. You can do that using:

```
aspnet_regiis.exe -pa "CustomKeys" UserName
```

CustomKeys – the key container name as defined in the configProtectedData element.
UserName – the web site application pool user name.

And that's it: your config section has been encrypted. The cool thing is that aspnet_regiis.exe is clever enough to figure out whether the section is in web.config or some side .config file. You can also encrypt all config files in a specific folder, and more. For more information see the MSDN documentation: http://msdn.microsoft.com/en-us/library/k6h9cz8h(v=vs.80).aspx and http://msdn.microsoft.com/en-us/library/53tyfkaw(v=vs.100).aspx.

What about non-web applications?

The above method only works for web applications. For all other applications you will have to use a workaround. Sometimes you may have a non-web application, e.g. a service, and a corresponding web application, and both may share certain parts of the configuration. In that case you can just make the required changes in app.config and then copy the encrypted sections from your web application. The only thing left will be to grant permissions to the key container, as your application will likely run under a different user than the IIS app pool. If you have a totally independent application, however, your case is a bit more complex. What you can do is create an empty web application, copy in all the configs, encrypt them, and then copy them back to your application. A bit fiddly, but it should work.

Sharing keys between servers

You may also want to share the same key between multiple servers. This may be desired in load-balancing scenarios where configuration is centrally propagated.
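For the record, a non-web application can also encrypt its own app.config programmatically through the System.Configuration API, which avoids the empty-web-application shuffle. A minimal sketch; the section name "mySection" and provider name "CustomProvider" are assumptions carried over from the examples above:

```csharp
using System.Configuration;

static class ConfigProtector
{
    // Sketch: encrypt a section of the calling exe's app.config in place.
    // The provider ("CustomProvider" here) must be declared in the
    // configProtectedData element, as shown earlier in this post.
    public static void ProtectSection(string sectionName, string providerName)
    {
        Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        ConfigurationSection section = config.GetSection(sectionName);
        if (section != null && !section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection(providerName);
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}
```

Call it once, e.g. `ConfigProtector.ProtectSection("mySection", "CustomProvider");`, and the saved .config will contain the encrypted section; reads through ConfigurationManager keep working transparently. The project needs a reference to the System.Configuration assembly.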
In that case you should encrypt the required configuration sections on one machine and then import the encryption key on the other servers where the encrypted configuration will be used. To import the key, copy the key XML file to each server and run the following command:

```
aspnet_regiis.exe -pi "MyKey" keys.xml
```

MyKey – name of the key container as used in the configProtectedData element.
keys.xml – file containing the encryption key.

Happy encrypting. HTH

Reentrancy error when styling DataGrid with dynamic columns

about 8 years ago | Jimmy Skowronski: jimmy skowronski

A couple of days ago I wrote a short post showing how to create a DataGrid with dynamic columns. Since then I have been playing with the grid, making it look nicer, and one of the things I wanted to add was a cell background that depends on the value. Here is an example:

[image: DynamicDataGrid3]

The idea is a pretty simple one: a cell should be red when the value is negative and green when positive. If I had a standard DataGrid with fixed columns I could set the column template to look like this:

```xml
<sdk:DataGridTemplateColumn HeaderStyle="{StaticResource headerStyle}">
    <sdk:DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Category}" />
        </DataTemplate>
    </sdk:DataGridTemplateColumn.CellTemplate>
    <sdk:DataGridTemplateColumn.CellStyle>
        <Style TargetType="sdk:DataGridCell" BasedOn="{StaticResource cellStyle}">
            <Setter Property="Background" Value="{Binding Value, Converter={StaticResource ValueToSolidBrushConverter}}" />
        </Style>
    </sdk:DataGridTemplateColumn.CellStyle>
</sdk:DataGridTemplateColumn>
```

But I couldn't. Following the code from the previous post, my template is set in the code-behind and looks like this:

```csharp
string cellTemp = string.Format(@"<DataTemplate
    xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""
    xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"">
        <TextBlock Text=""{{Binding Summary[{0}].Total}}"" />
    </DataTemplate>", index);
```

The other option was to wrap the cell content in a simple Grid and set the background binding there, so I did that:

```csharp
string cellTemp = string.Format(@"<DataTemplate
    xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""
    xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"">
        <Grid Background=""{{Binding Summary[{0}].Value, Mode=OneWay, Converter={{StaticResource ValueToSolidBrushConverter}}}}"">
            <TextBlock Text=""{{Binding Summary[{0}].Total}}"" />
        </Grid>
    </DataTemplate>", index);
```

After creating a converter, I added it to the page resources and was ready to go.
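The converter itself isn't shown in the post; a minimal sketch of what such a ValueToSolidBrushConverter might look like (this is my reconstruction under the red-negative/green-positive rule described above, not the original implementation):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;
using System.Windows.Media;

// Sketch of the ValueToSolidBrushConverter referenced above: a red brush for
// negative values, green otherwise. Reconstruction, not the original code.
public class ValueToSolidBrushConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double d = System.Convert.ToDouble(value, culture);
        return new SolidColorBrush(d < 0 ? Colors.Red : Colors.Green);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException(); // one-way binding only
    }
}
```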
To my surprise I got this:

[image: reentrancy error]

WTF? At this point I was quite perplexed. After wasting a serious chunk of time on a fruitless search for an answer, I gave up and started looking for a workaround. After a few runs eliminating elements one by one, I figured out that the problem happens when the converter is applied. Finally I decided to cheat and modify the cell style on the fly. Here is the new CreateColumn method:

```csharp
private DataGridTemplateColumn CreateColumn(int index, string header)
{
    string cellTemp = string.Format(@"<DataTemplate
        xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""
        xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"">
            <TextBlock Text=""{{Binding Summary[{0}].Total}}"" />
        </DataTemplate>", index);

    DataGridTemplateColumn column = new DataGridTemplateColumn();
    column.Header = header;
    column.CellTemplate = (DataTemplate)XamlReader.Load(cellTemp);
    column.HeaderStyle = LayoutRoot.Resources["headerStyle"] as Style;

    var cellStyle = LayoutRoot.Resources["cellStyle"] as Style;
    Style style = new Style(cellStyle.TargetType);
    style.BasedOn = cellStyle;

    Binding b = new Binding(string.Format("Summary[{0}].Value", index));
    b.Mode = BindingMode.OneWay;
    b.Converter = new ValueToSolidBrushConverter();
    style.Setters.Add(new Setter(BackgroundProperty, b));
    column.CellStyle = style;

    return column;
}
```

The idea is that I already have a cell style that is applied to each cell. In the code above I'm creating a new style based on the existing one, then creating a binding that uses the converter I want, and then using this new style as the CellStyle. And that works just fine. HTH

Dynamic columns in the Silverlight DataGrid

about 8 years ago | Jimmy Skowronski: jimmy skowronski

New job and new things to learn. How cool is that? I've been working on a Silverlight page that should display a grid. There is nothing exciting in that, except that the grid had to have dynamic columns depending on the data coming into it. Let's put some imaginary context around that. Assume you have a sales summary to display and you want it to look like this:

[image: DynamicDataGrid0]

You have rows that represent sales totals per year for various categories, and a column for each year in your dataset. Those year columns can depend on some selection criteria and can be any range between now and whenever. So the problem is how to make the DataGrid display a variable number of columns. The data set used in the example is very simple. Here is a single class that holds a category and a list of summary values per year:

```csharp
public class SaleData
{
    public string Category { get; set; }
    public List<YearSummary> Summary { get; set; }
}

public class YearSummary
{
    public int Year { get; set; }
    public double Total { get; set; }
}
```

Then I use two simple methods to generate some data. It goes without saying that in a real scenario this would come from some sort of data source:

```csharp
private static Random rnd = new Random();

public static List<SaleData> GetData()
{
    List<SaleData> data = new List<SaleData>();
    data.Add(new SaleData() { Category = "Laptop", Summary = GetSummaryData() });
    data.Add(new SaleData() { Category = "Tablet", Summary = GetSummaryData() });
    data.Add(new SaleData() { Category = "Desktop", Summary = GetSummaryData() });
    return data;
}

private static List<YearSummary> GetSummaryData()
{
    List<YearSummary> data = new List<YearSummary>();
    for (int i = 0; i < 5; i++)
    {
        var summary = new YearSummary() { Total = rnd.Next(-5000, 10000), Year = 2008 + i };
        data.Add(summary);
    }
    return data;
}
```

With all that done I can now add the grid to my XAML. The Category column does not change and appears in each row, so I can add it in XAML to make my life a bit easier.
It is important to set the AutoGenerateColumns property to false, as we will take control over which columns are created.

```xml
<sdk:DataGrid x:Name="SalesGrid" AutoGenerateColumns="False">
    <sdk:DataGrid.Columns>
        <sdk:DataGridTemplateColumn HeaderStyle="{StaticResource headerStyle}" CellStyle="{StaticResource cellStyle}">
            <sdk:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Category}" />
                </DataTemplate>
            </sdk:DataGridTemplateColumn.CellTemplate>
        </sdk:DataGridTemplateColumn>
    </sdk:DataGrid.Columns>
</sdk:DataGrid>
```

The dynamic columns for each year will be added in the code-behind. To do so I've added a simple method that creates a template column using a string containing XAML and the XamlReader:

```csharp
private DataGridTemplateColumn CreateColumn(int index, string header)
{
    string cellTemp = string.Format(@"<DataTemplate
        xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""
        xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"">
            <TextBlock Text=""{{Binding Summary[{0}].Total}}"" />
        </DataTemplate>", index);

    DataGridTemplateColumn column = new DataGridTemplateColumn();
    column.Header = header;
    column.CellTemplate = (DataTemplate)XamlReader.Load(cellTemp);
    return column;
}
```

This method is called from the page constructor like this:

```csharp
public MainPage()
{
    InitializeComponent();
    var data = GetData();
    SalesGrid.ItemsSource = data;

    var firstRow = data.First();
    for (int i = 0; i < firstRow.Summary.Count; i++)
    {
        var yearSummary = firstRow.Summary[i];
        SalesGrid.Columns.Add(CreateColumn(i, yearSummary.Year.ToString()));
    }
}
```

What is actually happening here? The whole point of this method is to produce a template for each column with the bindings set right.
If I wanted to add the columns manually in XAML, I would have to add something like this to sdk:DataGrid.Columns:

```xml
<sdk:DataGridTemplateColumn>
    <sdk:DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Summary[0].Total}" />
        </DataTemplate>
    </sdk:DataGridTemplateColumn.CellTemplate>
</sdk:DataGridTemplateColumn>
<sdk:DataGridTemplateColumn>
    <sdk:DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Summary[1].Total}" />
        </DataTemplate>
    </sdk:DataGridTemplateColumn.CellTemplate>
</sdk:DataGridTemplateColumn>
<!-- ...and so on... -->
```

But because I don't know in advance how many columns there will be, I have to add them at runtime. As you can see, the idea is really trivial: I can design the whole template in Blend, make it perfect, set whatever styles I want, and then just copy the whole thing into the code-behind.

Make it nice please

So, to show how to apply styles, I've added two styles to my LayoutRoot grid just to illustrate the point:

```xml
<Grid.Resources>
    <Converters:ColorToSolidBrushConverter x:Key="ColorToSolidBrushConverter"/>
    <Style x:Name="cellStyle" TargetType="sdk:DataGridCell">
        <Setter Property="Padding" Value="5" />
    </Style>
    <Style x:Name="headerStyle" TargetType="sdk:DataGridColumnHeader">
        <Setter Property="Padding" Value="5" />
    </Style>
</Grid.Resources>
```

Then I applied those styles to the Category column I have in my XAML:

```xml
<sdk:DataGridTemplateColumn HeaderStyle="{StaticResource headerStyle}" CellStyle="{StaticResource cellStyle}">
    <sdk:DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Category}" />
        </DataTemplate>
    </sdk:DataGridTemplateColumn.CellTemplate>
</sdk:DataGridTemplateColumn>
```

and finally modified the CreateColumn method to apply those styles as well.
```csharp
private DataGridTemplateColumn CreateColumn(int index, string header)
{
    string cellTemp = string.Format(@"<DataTemplate
        xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation""
        xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"">
            <TextBlock Text=""{{Binding Summary[{0}].Total}}"" />
        </DataTemplate>", index);

    DataGridTemplateColumn column = new DataGridTemplateColumn();
    column.Header = header;
    column.CellTemplate = (DataTemplate)XamlReader.Load(cellTemp);
    column.HeaderStyle = LayoutRoot.Resources["headerStyle"] as Style;
    column.CellStyle = LayoutRoot.Resources["cellStyle"] as Style;
    return column;
}
```

You can of course use application resources or any other approach you prefer. With all of that I finally got my nice dynamic grid.

[image: DynamicDataGrid2]

And that's it. HTH