
Modern Software Engineering - Part 3 - Designing the organization

5 months ago | Aishwarya Singhal

Typical IT organizations have evolved into having multiple layers of managers. Some of that is because organizations try to reduce risk by having more managers review the work being done. Some is because the growth model only supports growth into management, and hence everybody grows into a managerial role sooner or later, leading to a pyramid of people who are primarily in supervisory roles. Many organizations have as much as 50% of staff in supervisory or managerial roles. Simply speaking, only 50% of the staff is involved in the actual production of software. Basic economics implies that typical overheads (or SG&A) in an organization should be about 20-25%. Shouldn't the same logic apply to IT teams too?

Another aspect is that complex organization structures lead to a lot of meetings that waste productive time. At the same time, there is the question of the quality being delivered and the trust between different teams. Often, we see a "handover" mindset in most teams - they deliver their part, and any issues found are then to be fixed by the team that comes next in the chain. More often than not, the end-user's perspective is ignored and forgotten, and teams focus more on covering their backs than on doing the right thing for the user. Let's look at all these aspects through various enabling mechanisms.

Aligned goals and metrics

A key aspect of ensuring quality in deliverables is a common definition of quality across the organization. Most teams fail to recognize this, and we see different metrics being used by them. So, while a sales team might be tracking revenue, or a customer service team might use Average Handling Time (AHT), the IT team enabling them might still be measuring the number of code releases, or bugs. Now clearly, there is much more that goes into enabling high revenue or low AHT than the software, and there are a lot of IT-specific aspects developers need to care for, but that does not mean that software developers should have no view on these business metrics. It is vital that everybody uses one language and common metrics across the organization.

My most impactful stories have come from situations where my teams took the end-user view and partnered with the stakeholders to ensure that the end result was beautiful. Magic happens when developers and business teams collaborate on achieving common goals. One simple example: we had a feature request to enable printing of VAT invoices for customers, and the developer on my team had already implemented it. However, he did not look happy. I walked up to him to find out why, and I saw him with a printout of an invoice and an envelope. He was upset that the printed customer's address did not fit in the center of the address cut-out on the envelope. He did not have to do that test, but he went out of his way, fetched an envelope, printed and folded the invoice, and checked whether it would work.

On the other hand, I was in a team for a large company whose main business was through online sales. Their website had crashed and had been down for 2-3 days. We were parachuted in as external experts to rescue and fix. At 5 pm, the developers packed their bags and were leaving. We asked the lead developer if he could help debug the issue and he refused - it was the job of the support team and they needed to manage it. Now, it was late, so I get his point of view. However, in such a situation, I would expect an all-hands-on-deck mindset. The disconnect between software developers and business goals is sometimes shocking.

The most successful setups are those where every software team has a business leader who is committed to enabling its success and is not just a stakeholder. These business leaders also have sufficient say in the system, typically a direct line to the company's leadership. In such cases, every software team is directly responsible for its impact on the business metrics. There will be IT-specific metrics that the developers need to track, but they also need to keep a keen eye on the business goals. I recommend having large-screen monitors (showing both business and IT metrics) next to where the developers sit, and I recommend that teams include the business metrics in their performance reports at least once a month. However, you do not need to over-engineer this. You do not need to track business value or cost per feature. A meta-level view is just fine. The goal here is to establish better quality via ownership and awareness, not to bring in an accounting overhead.

Product and platform, not project teams

Many organizations work in an outsourcing model even with their internal IT teams. The business team creates a project, gives it to the IT team, and the IT team then has the responsibility to deliver. As expected, this helps optimize costs (maybe) but erodes quality and trust. The issue here is that most organizations use one model both for day-to-day functioning and for mentoring and reporting. This does not have to be.

It is important that organizations drop the notion of projects and move towards products. Now, "product" has a specific connotation in most organizations - however, we are not talking about the product that you sell to your customer. We are talking about the "software product" that will enable that sale, although you may sometimes align software product teams with actual products that are sold to the customer. The difference between a product and a project is that the latter has an end date. It is important that there are product teams that take an end-to-end view of a product, not a tactical view of enabling a feature or a few features. This improves quality and ownership in the teams, and also makes it easier to align KPIs/OKRs with the business teams.

An easy way to create product teams, hence, is to follow the business metrics and their responsible business leaders. So, sales may warrant a developer team, customer service might warrant another, and logistics might need yet another. All of them may warrant multiple teams, depending on the number of metrics and business leaders. Another interesting tactic is to allow each business area to have a budget for software development and let them allocate it to each product team based on the latter's performance in their QBR presentations. This drives collaboration between the business sponsors and the product teams. When you have multiple product teams for a common business area (e.g., sales), you just need all product owners to collaborate with the same responsible business person.

Your organization structure does not need to reflect your IT architecture

Many IT teams adopt an n-tier architecture, which is composed of different layers. Many of them model their organizations to align with the architecture too - there is a frontend team, a middleware team, a backend team, and so on. This leads to a large number of dependencies (and bottlenecks) across teams, and also a lack of end-to-end ownership.

In my experience, the most effective model is one where the organization structure does not replicate the IT architecture. In such cases, there are product teams with end-to-end responsibilities, and platform teams that enable the product teams with tools and frameworks. The platform teams - or, as we alternatively call them, IT-for-IT - are deeply technical teams that develop tools and frameworks. Think of these teams as R&D or enabling teams, whose customers are the product teams, and whose primary responsibility is to bring in efficiency and innovation. These are extremely important, and the product owners for these teams need to report directly to the IT leaders. Although we call these platform teams, they should not be centered around specific technical tools, e.g., a Salesforce team or an SAP team. Salesforce experts, or SAP experts, should be embedded in the right product teams.

In some cases, the work required is too much to be handled within one "full stack" team. In such cases, there are 2 options: a) take thinner slices of work so that a lean team with end-to-end responsibility can still deliver, or b) divide the teams based on 1-2 layers such that they still have a business significance (e.g., one team does everything up to API enablement, and the other builds the frontend and integrates the APIs). The second option is less preferred; as much as possible, end-to-end ownership should be ensured.

More pigs than chickens

You need more people who have skin in the game than people who are just supervisors or advisors. My typical assessment works along the following lines:

* Anybody who is not actively building or maintaining a product, and does not take an active part in defining the requirements, is an overhead. This includes all advisory roles - security, privacy, architecture, coaches, and so on.
* Anybody spending more than 50% of their time in meetings is an overhead.
* The total number of overhead roles should be less than 25% of the total organization. So, if the IT team is 100 people, at least 75 of them must be actively building the product.

A simple way to start is to de-layer the organization:

* Have the product owner report directly to the business leader responsible for that area, have all developers work directly with the product owner and the tech lead, and have all tech leads work directly with the IT leader (CIO/ CTO/ VP/ ...). Cut down on all other managerial layers, and clearly define roles and responsibilities for every role.
* Ensure that the Product Owner comes from the business team's perspective and is responsible for writing clear requirements and for verifying the implementation, and that the Tech Lead is a senior developer with >80% of their time dedicated to coding and the remaining time to mentoring the team.
* Automate all non-value-adding tasks, and simplify what cannot be automated - e.g., coordinator functions, where someone is only responsible for raising a ticket or acting as a SPOC for communication. Another example is the replacement of manual QA work with automated tests as much as possible.
* As an example, all advisory roles could be staffed on product teams as needed and would be expected to have an acceptable utilization rate.

Typically, such an exercise frees up 15-20% of capacity that can then be reallocated to value-adding roles. The freed-up people are often very talented people in the wrong roles, and normally >95% of them can be reallocated (and will be interested) for further value creation. Some might need a bit of training, and investing in them brings out magic.

Congratulations, you just created a significant productivity boost (through savings and reallocation). At the same time, as a word of caution, do not go overboard with this idea. Many of the advisory teams are often understaffed and underappreciated. In some cases, having SPOCs helps product owners and business leaders maintain their sanity, especially when it comes to managing vendor relationships. You may still need some manual QA. Similarly, all organizations do require managers, so trying to move towards near-zero managerial capacity will be an absolute disaster. While it is important to chart out an ideal picture, it is also important to then apply a prudent lens and ensure that the model will work in your context. A study at Google indicated that the most effective teams are the ones where team members feel psychological safety and have structure and clarity. I recommend keeping this as the underlying thought when designing the organization.

2-pizza-box teams

This concept came from Amazon and is almost an industry standard now. The idea is that the team is small enough to have healthy collaboration and can work together as a SWAT team to deliver towards a common goal. My recipe for typical teams is: 1 Product Owner, 1 Designer, 1 Tech Lead, 4-5 Developers, 1 QA, and 1 Advisor. The designer and advisor roles may be fulfilled by different people at different points in time of a product release, based on need. E.g., there may be a UI designer at 50% and a UX designer at 50%, or 50% of an architect, 20% of security, and 30% of subject matter experts/ coaches. Some of these may be shared across different teams. So, there are 7-8 dedicated team members, and 2 that are floating. The reason I count the floating members into the team is that they need to be in the stand-ups and need to be accountable for the quality of delivery (i.e., they need to be pigs, not chickens). In special cases, depending on the complexity and (lack of) maturity of the organization, some teams may also have a Business Analyst/ Junior Product Owner - someone who helps the product owner by taking up some of their responsibilities.

Functional vs reporting structures

One important clarification to be made here. Everything above talks about how the teams should operate, not where they should report. The IT team members should continue to report into the IT leaders, so that their career growth, learning and mentoring can be shaped by leaders who understand the field. The product teams should have a dotted-line reporting to the business leaders, and the feedback on their performance should be evaluated in that context. Another thing to note is that this does not mean that the IT leaders report into their business counterparts. Both IT and business leaders need to have a top-level reporting into the company leadership. This is necessary to ensure that the organization does not always prioritize tactical goals over technical excellence and innovation. This model ensures that the business leaders do not need to worry about the mentorship of technical teams, and that the teams get guidance and support from leaders who understand the space. At the same time, the technical teams stay focused on generating business value for the organization.

Chapters, or communities of practice

A final missing piece here is knowledge sharing. It is important that teams share their work, for 3 reasons:

* It enables consistency of implementation across the organization. People gain the ability to challenge each other every time they spot an inconsistency, which in turn helps with cost optimization via prevention of fragmentation and avoidance of duplicate costs.
* It enables learning within a community of similarly skilled colleagues.
* It helps identify training needs for specific skills.

Spotify has Guilds and Chapters; many other organizations have communities of practice. It is vital to encourage the creation of similar virtual structures and to ensure that they exchange knowledge on a regular basis. The community needs to appoint a leader, and that leader should regularly share their observations with the IT leaders. Note that this is not a dedicated role, but an additional responsibility for an existing team member. This has an interesting side-effect: it enables a different growth model in IT compared to traditional ones. Developers can remain developers and still grow (in responsibilities and in a financial sense) without taking up managerial roles.

As always, there is not just one answer for organization structures. Different models work for different setups, and it is important to understand the context you operate in, and what works in that context. Similarly, the size of an organization can play an important role in defining the feasibility of some of these measures. What works for a 50-member team may not work for a 5000-member organization. Finally, culture and team maturity play an important part in defining the model. At the same time, the principles remain broadly the same, and as long as one can define an execution model that works in their context, it will enable a significant productivity and quality boost in the output.

So how do we solve for large organizations? Well, for one, there are a number of standard frameworks and methodologies - SAFe is perhaps the most famous. I am personally uncomfortable with any "one-size-fits-all" solutions, so I would recommend evaluating the options based on your context and devising an execution mechanic that works for your organization.

Finally, at the heart of all these tips is the intent to simplify (reduce complexity). Anything that increases overheads or complexity in the long term must be challenged and re-evaluated for fit in your context.

Modern Software Engineering - Part 2 - Maximizing developer experience and writing high quality software

5 months ago | Aishwarya Singhal

Is the practice of developing software a science (computer science), an engineering discipline (software engineering), or an art (software craftsmanship)? When I was in university, we always viewed software as science. We experimented, we learned, and we treated it as mathematics - driven by pure logic. When I started working, it became more of an engineering discipline - applying known techniques, searching for how others had solved a problem before, and looking for efficiencies. In recent years, I was introduced to the idea of it being a craft - i.e., focus on quality, and believe that it can always be improved. I can't say I fully practice craftsmanship; however, I have moved from engineering towards it. In my personal view, most projects unfortunately do not quite allow for (or warrant) the time needed for the craft. In any case, we can always do a few things:

* Ensure a certain level of quality the first time we publish the software, through automated checks and thorough code reviews, but avoid over-engineering
* Make time for refactoring
* Work smart, not hard - use as many open-source libraries as possible to solve your problems, and only write code for things that are truly specific to your problem and cannot be found on the net

Based on these 3 ideas, I have a few practical tips.

Build on best-in-class programming techniques

My favorite here is the UNIX philosophy that was published in 1978 (yes, over 40 years ago!):

* Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
* Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
* Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
* Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

Why do I love these? They have stayed solid (as has UNIX) over the past 40 years. I derive my coding principles from them, and the following are the ones I use most at the moment:

* Write short methods that do one thing only and do it well. This in turn helps to keep a low cyclomatic complexity as well as a smaller number of lines of code per method. I love the Unix pipes and filters, and if you can build that idea into your methods (e.g., using the Strategy pattern), a fantastic code quality emerges (see the sketch below).
* Use microservices (small pieces of functionality that are independently deployed) where possible.
* Go minimalistic in your design of interfaces, following the YAGNI and convention-over-configuration principles. Try to follow the DRY principle as much as possible (without making the code too unreadable).
* Insist on modularity so that pieces of code can be thrown away when not needed. Caution: avoid over-engineering. This is not the most important aspect if you are following the other principles.
* Refactor, refactor, refactor: do not shy away from refactoring. The principle is: whenever you look at a piece of code, aspire to leave it in a better state than you found it in.

A technique I find useful in writing modular code is: every time you feel the need to write a comment in the code, see if you can make a new method or service instead. Comments usually indicate that the code is doing more than what can be easily understood.
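To illustrate the pipes-and-filters idea above, here is a minimal sketch in JavaScript. The order-processing example and the function names are hypothetical, not from any specific project:

```javascript
// Each "filter" is a small function that does one thing well.
const validate = (order) => {
  if (!order.items || order.items.length === 0) throw new Error("empty order");
  return order;
};
const addTotals = (order) => ({
  ...order,
  total: order.items.reduce((sum, item) => sum + item.price * item.qty, 0),
});
const addVat = (rate) => (order) => ({ ...order, vat: order.total * rate });

// The "pipe" composes filters left to right, like a Unix pipeline.
const pipe = (...fns) => (input) => fns.reduce((value, fn) => fn(value), input);

// Swapping a filter in or out (e.g., a different VAT rate) is the
// Strategy pattern in action: the pipeline stays the same.
const processOrder = pipe(validate, addTotals, addVat(0.19));

console.log(processOrder({ items: [{ price: 10, qty: 2 }] }));
// -> { items: [...], total: 20, vat: 3.8 }
```

Each filter stays short and independently testable, and new behavior is added by composing a new pipeline rather than growing an existing method.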
You can use any programming language, and any style - my personal favorite at the moment is functional programming in whichever language I use, because it helps me implement the above-mentioned goals easily. A technique not mentioned above, and one I am a big fan of, is Event-Driven Architecture (or, alternatively, Reactive Programming). It helps reduce dependencies and provides an easier way to guarantee the performance and reliability of a system.

Align on quality goals and then automate them

I have seen situations where the team discussed at length, and kept discussing, the choice of technology. I have seen similar debates around quality. The only way to avoid an endless debate is to propose and align a set of technologies and quality measures democratically with the team, and then adhere to them for at least a few months. And the best way to make that happen is to automate the agreed principles.

* Do not define a quality goal that cannot be (at least partially) automated, because it is unlikely that it will be implemented. Be realistic about your Definition of Done (or equivalent) and hold your team accountable to it during code reviews.
* Timebox all decisions.
* Try to leverage industry standards where possible - e.g., the Airbnb style guide is often used by teams for JavaScript linting, just as, back in the day, Sun's Java conventions were the standard guideline for Java code.
* Call a meeting at the start of the project and agree on quality goals (and publish them). Anybody joining the team afterwards can give suggestions on these goals, but they should only be accepted if they do not disrupt the rest of the team. Alternatively, they can be accepted in the next review of the quality goals (after at least 6-8 weeks).

As I mentioned earlier, bugs are any deviation from a user's expectations. That includes functional defects, but also performance, usability, reliability, etc. Ensure that your quality goals take a complete view. Typical techniques like TDD, code reviews, and code style checks (static code analysis) are usually good measures. When writing automated tests, it is more important to have real quality tests than to write tests for the sake of reaching 100% test coverage (e.g., you must get 100% coverage on code containing logic, but it is OK to skip tests for simple value objects). Some aspects can only be partially tested - e.g., for web accessibility or security, a manual review may still be required. However, there are many tools available that get you an 80% correct view (if not more), and I would highly encourage using them. Similarly, take your code reviews seriously. GitHub and similar tools simplify the review process significantly and can integrate the feedback that automated tools generate to help you review code.

Technology evolves fast, and I would always recommend checking for the best ways to automate your quality goals before starting any project, and every couple of months after starting it. A manual review by the product owner or quality engineer may still be required, but by the time the software reaches them, all other checks will have ensured a decent quality level.
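To make "automate the agreed principles" concrete, here is a minimal sketch of an ESLint configuration that encodes the short-methods and low-complexity goals from the previous section. The thresholds are hypothetical examples, not recommendations - pick whatever your team aligned on:

```javascript
// eslint.config.js - a minimal sketch; thresholds are illustrative only
module.exports = [
  {
    files: ["src/**/*.js"],
    rules: {
      // Fail the build when a function branches too much (cyclomatic complexity)
      complexity: ["error", 8],
      // Keep methods short and focused on one thing
      "max-lines-per-function": ["error", { max: 40, skipComments: true }],
      // Limit nesting depth, another proxy for "does one thing well"
      "max-depth": ["error", 3],
    },
  },
];
```

Run it as part of the delivery pipeline (e.g., `npx eslint src`) so a violation fails the build; the agreed goal then enforces itself instead of depending on reviewer discipline.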
Be truly agile: ship the software as soon as possible

As I said in the previous blog, shipping software is far more important than perfecting it. As long as the code meets all quality goals, it should be good to deploy. I always think back to my days in school, when I first started to code. This is what the SDLC looked like to me then:

1. Get a problem (requirements) from the teacher (product owner/ user)
2. Implement it on my computer
3. Copy the working code onto a floppy disk (deploy)
4. Show it to the teacher

It did not take me weeks or months to do that. It was often done from one day to the next, and in some cases even during a class. Even when multiple teammates worked on a project together, the cycle only had one more step between 3 and 4: integrate your code with a friend on their computer (aka the production setup). There was no 3-or-5-environment setup, no change management, no design approvals, and so on. We have made software development overly complex over the years, and it is important to simplify it. The longer you take to ship software, the worse the quality you can expect.

Now of course, you need processes and checks to ensure the quality of delivery. However, as long as you have defined sound quality goals and the code meets them all, your code should be good to ship. Put it in front of the customer and address any learnings that come out of that. If you can fix issues quickly, it is perfectly OK to have a few bugs pop up once you deploy. Some ideas:

* Use feature branches and feature flags for software development, and have a process to clean up stale feature flags once a feature has been stabilized (see the sketch below).
* Ideally, you should push your code to production at least once a day. In the worst case (for complex and large problems), push it within a week. For sub-projects (like a redesign) that take longer, create a pipeline to deploy the feature branch to the test environment for that sub-project - that is your production environment for the sub-project. In no case should a feature branch stay alive for more than a week.
* Fully automate deployments: use continuous delivery pipelines and allow developers to build their own infrastructure through scripts/ bots (infrastructure as code). Achieve full automation of deployment, ideally including the production environment. In highly controlled settings, implement a fully automatic deployment pipeline at least up to the pre-production/ staging environment, and then a 1-click deployment to production.
* Ensure sufficient monitoring and logging in the code to observe and learn from user behavior. That will ensure a much higher level of quality than what can be predicted during the development phase, and it is absolutely needed for a continuous delivery system. The CNCF is a great place to start for such topics.
* Optimize the delivery pipeline to take less than 30 minutes (faster is better), including test execution. This ensures that developers get feedback on broken builds and issues ASAP and are able to quickly fix issues on production.

One last tip here - be honest with yourself. Every time you have to do a less-than-perfect job, note down a technical debt item in your product backlog so it is tracked and never forgotten.
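Here is a minimal sketch of the feature-flag idea from the list above. The module and flag names are hypothetical; in practice teams often use a dedicated flag service (e.g., LaunchDarkly) rather than environment variables:

```javascript
// featureFlags.js - a minimal sketch of environment-driven feature flags.
// Each flag defaults to "off" and can be flipped per environment without a redeploy.
const flags = {
  newCheckout: process.env.FLAG_NEW_CHECKOUT === "true",
  vatInvoices: process.env.FLAG_VAT_INVOICES === "true",
};

const isEnabled = (name) => Boolean(flags[name]);

module.exports = { isEnabled };

// Usage in a route: ship the code dark, enable it when ready.
// const { isEnabled } = require("./featureFlags");
// if (isEnabled("newCheckout")) { renderNewCheckout(); } else { renderOldCheckout(); }
```

The point is that the branch merges to the mainline immediately; the flag, not the branch, controls exposure. The cleanup process mentioned above then amounts to deleting the flag and the old code path once the feature is stable.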
Reduce the number of meetings you attend

One of the main time drains for developers is the number of meetings that happen. Avoid them. Put a limit of a total of 30 minutes per day for meetings that need more than 5 people (e.g., the morning standup), for at least 4 days a week. The exception will be days when you have an architecture/ design session with the whole team, a planning meeting, or a retrospective. These longer meetings should be on the fifth day of the week. Try to move as much communication online as possible. Use tools like Slack to integrate effectively with your various tools and to chat with your team. An online discussion has various advantages - you only dedicate the time that you absolutely have to, and it lets any other team member pitch in or learn if they see value in the topic, which makes it much more productive.

It is vital to understand the true meaning of agile, and I recommend re-reading the Manifesto and listening to the talk "Agile is Dead" every few months. More often than not, teams claim to work in an agile manner but still have numerous complex processes and constraints built around them. Whenever you get an invitation to a meeting, ask yourself: can I avoid this meeting? Try to skip as many meetings as possible. At the same time, pair programming sessions can be awesome. Take a pragmatic view and do those whenever it makes sense.

One of the root causes of meetings is dependencies and integrations. Can you reduce them? Try to design your coding responsibilities so you can own end-to-end slices and have minimal dependencies on other teams/ team members. Use interface contracts along with techniques like Mechanical Turk, stubs and mocks to be able to develop your code independently. When done well, this can be done completely independently and significantly reduces integration efforts.
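As a sketch of the contract-plus-stub idea (the endpoint and payload here are hypothetical): two teams agree on the shape of a response up front, and the consumer codes against a stub until the real service exists:

```javascript
// The agreed interface contract: getUser(id) -> { id, name }
// A stub that honors the contract, so the consumer can develop independently.
const userServiceStub = {
  getUser: async (id) => ({ id, name: "Test User" }),
};

// The consumer depends on the contract, not on the real service.
const greeter = (userService) => async (id) => {
  const user = await userService.getUser(id);
  return `Hello, ${user.name}!`;
};

// Today: wire in the stub. On integration day: swap in the real client,
// with no change to the consumer's code.
greeter(userServiceStub)(42).then(console.log); // -> "Hello, Test User!"
```

Because both sides build against the same contract, the integration meeting shrinks to verifying that the real service honors what the stub promised.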
Lastly, when I talk about meetings, I am excluding the ones that help you learn (e.g., conferences, meetups, knowledge exchange sessions). Try to carve out time for them so you do not disrupt your productivity too much, yet still have reasonable time available to learn and share knowledge. As a rule of thumb, you should be able to get 6-7 hours a day for focused coding.

Leverage and contribute to open-source software, and internal open-source

A key aspect of optimizing your time and improving the quality of your code is to leverage open-source libraries as much as possible. Every time you have a problem to solve, check if there is a library that already solves it. Ask your team. There is a library for most of the commonly encountered problems - somebody somewhere solved it, stabilized it, and published it. Beware that there are also a number of bad libraries out there, so make sure that a) there is a sufficient community behind it, and b) you have tested it and seen it working.

Open source is awesome because people contribute to it. See what you can publish too. If you solved a generic problem, publish a sanitized library (check your organization's policy first). It helps the community of developers, but it also builds a brand for you and your company, and attracts good developers to work with you.

Similarly, see if you can build an "internal open-source". If a colleague needs to re-use a piece of code, or if you are re-using code written by someone else, see if it can become a library to be shared internally (or, if it is generic enough, externally too). Do not greedily create libraries; instead, let that happen on demand. This ensures that a good ecosystem exists for all software in your organization, and everybody benefits from your learnings. At the same time, allow anyone in the organization to submit a pull request, or make changes to the library and help evolve it. That is the true nature of open-source software and helps with its adoption. Failing this, it just becomes a framework component that will always be your responsibility to maintain and fix, and will also see skepticism from your colleagues regarding its adoption.

Finally, find time to learn. Time spent on learning yields exponential results in your productivity (and happiness). Keep measuring the quality of your code (through different tools), and you will master it. Happy coding!

Modern Software Engineering - Part 1 - Defining a strategy for success

5 months ago | Aishwarya Singhal

As leaders, we are often faced with challenges in balancing the needs of the business with the constraints in delivering to those expectations. It is a complex problem, and one that requires multiple considerations. Over the years, I have developed a list of 5 "principles" I have found useful in defining a winning tech strategy.

Speed trumps quality, but not always

The speed to deliver a software product or a feature - the time to market - is extremely important. At the same time, it is important to focus on quality. However, these two do not go hand in hand: quality needs time, and that slows down delivery. And nobody likes something that lacks quality, even if it is delivered ultra-quick. The definition of acceptable quality changes based on the context. A throw-away software that enables a quick test of a business concept does not need to be perfect. However, software that is tied to hardware where bugs can cause expensive losses (like a space shuttle) needs a much higher quality level. It is important to know the minimum acceptable levels for both speed and quality: how quickly do you need a feature, and how perfect does it have to be?

Usually, in a "Build vs Buy" discussion, "Buy" is preferable. If you can find an open-source library that already solves a problem, it is better to use it than to build from scratch. Similarly, a commercial off-the-shelf product may also provide a good foundation and jump start. However, ensure that sufficient due diligence (including a short proof of concept) has been done before adopting/ buying a software product. There are a number of horror stories around off-the-shelf products; the marketing material always looks cooler than the actual fit of the software in your ecosystem.

In general, an 80-20 rule helps. Ensuring that 80% of scope is delivered with >95% quality is much better than having 100% of scope delivered with <80% quality, or only ~20% of scope delivered with 100% quality. It is far more important to be able to fix defects quickly than to avoid them altogether. There will always be unforeseen issues once the software is released to consumers, but if you can fix them in minutes instead of days, nobody notices, and the impact is negligible. An investment into technology - automated delivery (continuous delivery pipelines), monitoring, and processes that enable an on-demand deployment in minutes - will provide a much better risk management ability than any review process that tries to foresee and prevent risk.

When in doubt, prioritize customer centricity

In the B2C world, the customer trumps everything else. Even if you are not in B2C, any software you produce needs to be optimized for the consumer. It is extremely important to define metrics and goals from the customer's point of view. There are often conflicting priorities, and the engineers would always like to invest into ensuring a robust and maintainable system. As an engineer, I often find myself at the center of this conflict myself. However, as a principle, the customer always takes priority. It is always an unpleasant discussion, but a necessary one. I read a quote from Steve Jobs somewhere:

When you're a carpenter making a beautiful chest of drawers, you're not going to use a piece of plywood on the back, even though it faces the wall, and nobody will ever see it. You'll know it's there, so you're going to use a beautiful piece of wood on the back.

This is a cool quote, and I immensely respect Mr. Jobs, but perhaps this is something that does not apply to most modern software projects. For me, that plywood in the back may be perfectly OK as a way to get started. That does not mean it should remain there forever - it should be replaced with beautiful wood as soon as possible. But we do not need to hold the release until that final quality is reached, as long as the chest of drawers is usable by the consumer. Does that mean we let poor-quality software be developed? Absolutely not. Optimize for the customer and ensure that only the best quality is presented to them. At the same time, it is important to keep track of "technical debt" - compromises that have been made to urgently ship software to address a business or customer need. And it is important to have a real plan to fix it. Typically, a "technical budget" of 15-20% of development capacity is a good way to ensure that the debt does not mount to unmanageable levels.

Shipping software is far more important than perfecting it

A few thoughts to keep in mind here:

* Software sitting on a development or test machine is worthless until it is made available to the consumers.
* The best way to perfect a software product is to put it in front of customers and get feedback on it. There is no way to perfect software without the customers providing input.
* The longer you wait to release software, the lower the quality is likely to be. Counter-intuitive? That is because the longer you wait, the more the needs of the business are likely to evolve. Plus, it becomes more complicated to merge all the ongoing changes produced by the larger team, and it is more difficult to isolate and fix problems when too much change is deployed at once.

I remember working with a colleague who had previously worked in the electronics industry - he was stunned that we could modify software and deploy "so quickly". In the hardware world, they had to plan every change, implement the change on a breadboard, send the design to a factory for circuit printing, send the circuits over to the QA department, and work on the feedback. It took them weeks. That is not how software engineering works, though, and it is important to recognize the difference. In today's world, if a software product takes months or years to deliver, somebody's heart sinks. There are various stories from leading technology companies: Amazon deploys every 11.7 seconds [1], and Google plans for 4 million builds a day [2].

How about the risk of errors due to frequent deployments? Risk management is often misunderstood. In my experience in software engineering, risk mitigation is far more effective than risk avoidance, as long as issues are immediately identified and quickly addressed on production. So, while all change managers will tell you otherwise, set an aspiration for your tech team to deploy multiple times a day, to production. OK - for a greenfield product, you need to first establish a minimum viable product (MVP) on production before you have multiple deployments a day, but in that case, you only have a production environment once the MVP is ready. It is extremely important to have processes and technology that support multiple daily deployments. I read somewhere: if you are not failing, you are not trying hard enough. Failure is not a problem; not being able to learn from or come out of a failure is a problem. Technically:

* Use cloud for all deployments - ideally public cloud
* Automate everything - DevOps, continuous delivery, etc.
* Support zero-touch processes - anything that requires human interaction will slow you down
* Push for an MVP mindset across the board and rationalize the scope of software delivery

Shipping software is probably second only to customer centricity in terms of a tech organization's priorities.

Quality is directly proportional to the investment into talent and culture

To start with, I am not talking about financial investment only; I am also talking about the time that you invest as a leader. Now, of course, getting quality developers will cost a bit more than the cheapest available in the market. But you do not need the most expensive ones either. Having an all-star team does not guarantee quality. However, a team that sticks together, challenges each other, and believes in the goals of the organization goes a long way in establishing quality. Similarly, the importance of culture cannot be overstated. My key considerations here:

* Hire quality developers and enable them for success. Let them take decisions. Collective brain power is always better than ivory towers.
* Have a performance-centric culture. Celebrate successes and capture learnings from failures. However, ensure that people are not scapegoated for failures. The only failures that need to be discussed are those where people were comfortable with the status quo and failed to try or innovate.
* Ensure alignment of common language and goals across the organization. As long as there is a separate "business" and "IT team", quality will suffer. Ensure that the same goals are used for both, and that they work as collaborators. Software needs to be business-led, not IT-led (although the tech team needs a sufficient degree of freedom to bring in tech innovation).
* Encourage everyone to think of the customer. It is not just the designer's problem, or the customer service department's.
* Spend time with the teams, so they feel connected.
* Invest into the best tooling for the developers. High-quality tooling improves productivity, encourages creativity and innovation, and improves retention. E.g., buying good laptops for developers is a one-time and modest cost, but it significantly improves the quality of their output. Good tooling can also improve collaboration and cut down on unnecessary meetings, which further improves productivity.
* Ensure that everyone is learning from the external community (outside of your company) via meetups, conferences, or talks delivered by external experts. This needs to happen frequently, and the experts need to be real experts, even if they do not speak the local language.

Getting quality delivered to customers is hard, and it will only happen when the whole organization collaborates, instead of throwing work over the wall to the "IT team".

Be bold: there is no replacement for testing and learning

The road to wisdom? - Well, it's plain and simple to express: Err and err and err again, but less and less and less. - Piet Hein

There is no shortcut to testing. Before the teams start building a product, test the business case. Conduct user tests on cheap prototypes. Not every fancy idea is worth developing, and what works for another company in another setup may not always work for you. As a leader, you can (and should) help the teams rationalize their requirements. Once built, measure everything, and capture as much customer feedback as possible. Invest into analytics and capture every customer interaction. Analyze the data for trends, and feed that back into the technical teams' backlog to be prioritized and implemented.

That also means that approaches like Mechanical Turk - setting up "fake" solutions until "proper" solutions are available - can be fantastic for getting customer insights. The cycle should be: Build -> Measure -> Learn -> Repeat [1]. The shorter this cycle, the better. However, a balance is important as always - avoid rabbit holes and know when to pivot. A VC-like mindset is often helpful. So be a coach for the team, encourage testing, but also encourage learning from others, and let go when tests consistently reveal negative results. At the same time, encourage the teams to be bold and bring in innovation from around the world, not just constrain themselves to a specific sector. Every idea is worth testing.

In the end, there is no silver bullet, and you will need to review all of these in the context of your organization. But I certainly hope that they may warrant a discussion within your leadership circles and help define a strategy that works for you.

Modern Software Engineering - Introduction

5 months ago | Aishwarya Singhal

Here's a topic that I have been planning to write about for quite a while now, and I thought a new year is probably a good reason to start penning it down.

Software engineering has naturally evolved since the time the first programs were written. And so have the expectations of its consumers. Today's world expects everything to be digital. We use our smartphones to read the news, to talk to our friends and family, and to perform most of our day-to-day chores. As consumers, we expect good websites, apps and technology enablement from all businesses. (I am going to focus on websites and apps, but the same principles can be applied to any software.) This expectation has 3 constituents that define our happiness (or our perception of quality):

* All features we have seen elsewhere must exist (feature parity with competition)
* It must be easy and quick to use (customer centricity)
* Everything must work without flaws (bug-free software)

I have often defined bugs as "deviations in a software's behavior from stated or unstated expectations". (Even if no one said they expect a software to work in a certain way, if it does not, they will still be disappointed and will still call it a quality issue.)

This in turn puts a lot of pressure on businesses and their IT teams (I intentionally draw a distinction between the two; we will address it later). The businesses want to deliver to all facets of the customers' expectations while managing the cost of delivering them. And the IT teams are flooded with requests, often overwhelmed by conflicting priorities coming from various stakeholders. This makes software engineering more complex than almost any other trade - a seemingly impossible scenario. It is only natural, then, that most IT teams do not deliver to the expectations of their business teams.

At the same time, my experience with various large-scale IT teams showed less than 60% of developer time spent on coding features. Even worse, many traditional organizations (businesses for which software is not the core product) have about 50% of team members in "overhead" roles - managers, coordinators, etc. - people who are not directly involved in writing the software on a daily basis. So, while there is an ever-increasing expectation of faster delivery, the actual effort spent on delivering the software is about 30-40%. Let's look at 2 exhibits I found on the internet.

[Exhibit 1]

[Exhibit 2]

The numbers may differ per organization, but we know that the reality is not far off for most of them. So it naturally begs the question: can we fix this? How do we maximize software delivery and cater to our customers' needs? In this 3-part blog series, I intend to share my perspectives on the various tenets of this topic. We will explore the following:

Part 1: Defining a strategy for success
Part 2: Maximizing developer experience and writing high quality software
Part 3: Designing the organization

These blogs will follow in the coming days, and I look forward to hearing your reflections and experiences.

Picking up the pen again

11 months ago | Aishwarya Singhal

I haven't written much in recent years, at least not publicly, and I decided now would be a good time to start again. I also haven't written much code in recent years, so I decided to re-write Factile in a more popular tech stack. It has been a great experience for various reasons:

* I could resurrect Factile, which had been sitting on a broken server for the past 3 years with me having no time to fix it
* I could experience first-hand some of the technologies my colleagues have been talking about - it's been extremely fulfilling
* ... and it gives me confidence that I can still write a fair quality of code

It took me about 4 weeks of a few hours a day to completely rewrite it, and I followed the engineering practices I have been coaching my teams on. Factile has been re-written in Node JS and React JS, a stack that has been a trend for a few years now and that, I believe, represents a robust developer community. I discarded Scala as the language for Factile primarily because the frequent churn in the language made it extremely difficult to keep the stack up-to-date and stable in the past, not to mention the fact that it still occupies a niche in the developer community compared to the ultra-vibrant JS community. I chose Cypress for browser testing because a) I wanted to see what the fuss was about, and b) I absolutely love the idea that you can stub API calls inside of tests.

Finally, I use UptimeRobot for monitoring, CircleCI for continuous integration and APIDoc for, well, API documentation. Of these, APIDoc surprised me the most - it is just amazingly simple to use, and simple to extend (which I did, in a way, because I did not like the default template/ color schemes). I have been using CircleCI since 2015, and I think it has recently become incredibly complex, with poor documentation. I could have used TravisCI, I guess, but I'll stay with Circle for now.

Oh yes, I didn't use Typescript. Why? I personally don't like my JS code to need a compile step, and I feel that instead of writing types for JS, I would rather write code in Scala (or Java, or Kotlin) ;-)
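P.S. For anyone curious about the Cypress API stubbing mentioned above, here is a minimal sketch using Cypress's intercept API (cy.route in older versions); the endpoint and payload are hypothetical, not Factile's actual routes:

```javascript
// A Cypress test that stubs the backend, so the UI can be tested in isolation
describe("survey list", () => {
  it("renders surveys from a stubbed API", () => {
    cy.intercept("GET", "/api/surveys", {
      statusCode: 200,
      body: [{ id: 1, name: "Customer feedback" }],
    }).as("getSurveys");

    cy.visit("/surveys");
    cy.wait("@getSurveys"); // the page hit our stub, not a real server
    cy.contains("Customer feedback");
  });
});
```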

Migrating to Jekyll

over 6 years ago | Aishwarya Singhal

So finally, I bit the bullet. I had been thinking of migrating my blog to Jekyll for almost 2 years, but never had a true reason. Well, now I do: I am bored of the themes that Wordpress offers for free. I have been a long-time fan of Wordpress, which I believe is still awesome, but I needed more control over how my blog looks, and so I moved. The migration itself was not super hard. There is sufficient help available online, but here is a quick summary:

* Get started with https://help.github.com/articles/using-jekyll-with-pages/
* Skip the hello-world goodie; create a site instead:

```
bundle exec jekyll new
bundle exec jekyll build
bundle exec jekyll serve
```

* Migrate your posts using ExitWp
* Migrate comments using Disqus. Read this blog. Use the domain migration tool on Disqus to change the links in the imported comments and get them right. I basically just did a find-and-replace in the Wordpress exported XML.
* Modify the SCSS to make your blog look the way you want
* Attach Google Analytics to your pages if you like stats
* Get yourself a coffee :-)

There are three additional benefits of this:

* The best thing about a Jekyll-backed blog is that there is no database involved. It is static content and renders super fast!
* It is hosted for free on GitHub.
* And you get to write your posts in markdown!

Building a web app with node js

about 8 years ago | Aishwarya Singhal

NodeJS is an event-driven javascript framework that makes writing asynchronous code a piece of cake! For small apps and a limited user base, this is almost magical - the code can be churned out fast, it is clean, and it is perfectly unit testable. The purpose of this blog is not to sell node js, though; we'll instead look at how an application could be built easily. The applications I work on are primarily client-server, like web apps, or mobile apps with a backend. So here's what my toolkit with node js looks like:

* Express JS: Sinatra-like web framework that gives basic structure to the web app
* Sequelize JS: ORM framework
* Require JS: for modularization
* Node DB Migrate module: just like rails db migrations
* Mocha JS: for unit tests
* Chai JS: for test assertions
* Crypto: for encryption
* Angular JS: for the front end
* MySQL: database

Let's keep the front end out of scope for this article and only focus on getting an app that can serve JSON over REST.

Setting up the environment

Install nodejs. Next, we need expressjs. Run the following:

```
npm install -g express
# create the app now. we'll call it 'myapp'
express myapp
cd myapp
node app.js
```

You should now see something like "Express server listening on port 3000". Yay! You now have a basic Express JS app running!

Configuring the app

The ExpressJS guide is a pretty good resource for getting started, so I would recommend that you read through it. Next, add the following in package.json under dependencies:

```
{
  "mysql": "*",
  "supervisor": "*",
  "db-migrate": "*",
  "sequelize": "*",
  "requireindex": "*",
  "mocha": "*",
  "crypto": "*",
  "chai": "*"
}
```

Save package.json and run "npm install".

Supervisor is a great module for development environments: it auto-reloads the app on change, so you don't have to restart your node server every time. To run the app using supervisor, just use "supervisor app.js". Requireindex is a nice module that collects all objects from a directory into a single object, without adding a "require" for each file.

Add the error handler in app.js (as described in the expressjs guide):

```javascript
app.use(function(err, req, res, next){
  console.error(err.stack);
  res.send(500, 'Something broke!');
});
```

Run the app again and access it using http://localhost:3000.

Adding database support

Install mysql. We already have the node module included in package.json (mysql), so the app is now ready to start talking to the database. We'll use the node db-migrate module to set up the database. Create a file database.json under myapp. The contents should look as follows:

```
{
  "dev": {
    "driver": "mysql",
    "user": "root",
    "database": "myapp"
  },
  "test": {
    "driver": "mysql",
    "user": "root",
    "database": "myapp_test"
  },
  "production": {
    "driver": "mysql",
    "user": "root",
    "database": "myapp"
  }
}
```

Create a db.js in the myapp directory with the following contents:

```javascript
var express = require('express'),
    Sequelize = require("sequelize");

var app = express();
var env = app.get('env') == 'development' ? 'dev' : app.get('env');

// db config - read the same file that db-migrate uses
var fs = require('fs');
var dbConfigFile = __dirname + '/database.json';
var data = fs.readFileSync(dbConfigFile, 'utf8');
var dbConfig = JSON.parse(data)[env];
var password = dbConfig.password ? dbConfig.password : null;

var sequelize = new Sequelize(dbConfig.database, dbConfig.user, password, {
  logging: true
});

exports.sequelize = sequelize;
```

The above uses the same file (database.json) as the db-migrate module, so all your configuration stays in one place. It also initializes our ORM framework, viz. sequelizejs. To use this, just add require('./db.js') where needed and get sequelize.

Configure unit tests

Modify your package.json and add the following under "scripts":

```
"pretest": "db-migrate up -m ./migrations --config ./database.json -e test",
"test": "NODE_ENV=test mocha test test/*/**"
```

The above ensures that your db migrations run automatically when you run the tests, and also that you don't have to remember longish commands for recursively running tests in sub-directories. Add a basic test case under the "test" directory:

```javascript
var expect = require('chai').expect,
    should = require('chai').should();
var db = require("../db.js").sequelize;
var DataTypes = require("sequelize");
var assert = require("assert");

describe('DB', function(){
  it('should check db connection', function(done){
    db.query("select count(1) from users")
      .success(function(o){
        expect(o.length).to.not.equal(0);
        done();
      }).error(function(error) {
        done();
      });
  })
})
```

Prepare the test data - create a file _setup.js under test and put in the following:

```javascript
var db = require('../db.js').sequelize;

var testData = [
  "INSERT INTO users (name) VALUES ('test user');",
  "INSERT INTO users (name) VALUES ('test user 2');"
];

// now run the test data
testData.forEach(function(sql){
  db.query(sql).success(function(){
  }).error(function(e){
    console.log(e);
  });
});

console.log(">>>> starting tests...");
```

The SQLs above are obviously just indicative, and you'll have to add your own SQLs as needed. Run npm test to execute the tests!

Build a model

Start creating your db migrations! They will be stored under the myapp/migrations directory. Run the following:

```
db-migrate create users
```

Now open the migration that has been created under the migrations directory and add the table definition, e.g.:

```javascript
var dbm = require('db-migrate');
var type = dbm.dataType;

exports.up = function(db, callback) {
  db.createTable('users', {
    id: { type: 'int', primaryKey: true, autoIncrement: true },
    name: 'string'
  }, callback);
};

exports.down = function(db, callback) {
  db.dropTable('users', callback);
};
```

Run the migrations to create the users table. Now create a directory called "models" under myapp. This is where we'll put our models. Under models, create users.js with the following contents (or similar):

```javascript
var db = require("../db.js").sequelize;
var crypto = require('crypto');
var DataTypes = require("sequelize");

var User = function(name, username, password) {
  this.name = name,
  this.user_name = username,
  this.password = password
};

var users_table = db.define('users', {
  name: DataTypes.STRING,
  user_name: DataTypes.STRING,
  password: DataTypes.STRING
}, {
  timestamps: false,
  underscored: true
});

User.prototype.save = function(onSuccess, onError) {
  // store only a hash of the password, never the plain text
  var shasum = crypto.createHash('sha1');
  shasum.update(this.password);
  this.password = shasum.digest('hex');
  users_table.build(this).save().success(onSuccess).error(onError);
};

User.find = function(username, password, onSuccess, onError) {
  // hash the incoming password the same way save() does, so the comparison matches
  var shasum = crypto.createHash('sha1');
  shasum.update(password);
  users_table.find({
    where: { user_name: username, password: shasum.digest('hex') },
    attributes: ['id', 'name', 'user_name']
  }).success(onSuccess).error(onError);
};

User.lookup = function(name, onSuccess, onError) {
  users_table.findAll({
    where: [ "name like ?", '%' + name + '%' ]
  }).success(onSuccess).error(onError);
};

exports.get = User;
exports.table = users_table;
```

As a reminder, we use sequelizejs for ORM and crypto for hashing above. To use this model, all we now need is to create an object of User and call user.save(), or directly call User.find or User.lookup as needed. Notice that these take callbacks for success and error; that's because node js is a totally event-driven framework and everything is asynchronous. These methods don't return anything :smile:

Let's add a route - create user.js under the routes directory:

```javascript
var User = require('../models/users').get;

exports.authenticate = function(req, res) {
  User.find(req.body.username, req.body.password, function(o) {
    if (o) {
      res.json(o.selectedValues);
    } else {
      res.send(401, "Auth failed");
    }
  }, function(error) {
    console.log(error);
    res.send("Auth failed");
  });
};
```

And in app.js, add the route:

```javascript
app.post('/authenticate', user.authenticate);
```

All done! You now have an app that can connect to the database and authenticate users! In the next blog, we'll see how we can quickly assemble a front end.

Don't write off Microsoft just yet!

over 8 years ago | Aishwarya Singhal: Aishwarya Singhal

I just bought a new laptop with the cool new Windows 8 installed. I must admit that I was a bit skeptical of how the new OS would be, but it's totally taken me by surprise, and in a good way!

First, I absolutely adore the new Metro layout. It's like a cool dashboard where I have all my tools and information handy to get started. From Facebook to Gmail to Google search, everything is a shortcut on there. And it's not just a shortcut! The Mail tile shows new messages and the Facebook one shows the highlights - everything that helps you decide whether you want to click that icon or not. Similarly, there are news feeds, weather updates and other handy info. It's really much more useful than the mostly empty Windows desktop of previous versions. And it's much brighter and more colorful too!

Secondly, IE 10 is a total pleasure to use. It's super fast, and feels much lighter. I also hear that it finally adheres to web standards too (yay, developers rejoice)!

Thirdly, each new window/ app is full screen by default. No task bar, no title bar, nothing. It makes full use of the screen - can it get better?!

Oh well, some things require 2 clicks (including a right click) that should really just need one. Like closing windows. You either drag a window to the bottom to ask it to go to hell (or wherever), or you right click, find yourself a cross button and close the damn thing. I wish there were a close button that appeared when I hover around one of the corners; closing a window should be a one-click thing. And I could not install Google Chrome on Win 8 - it just kept hanging up on me.

I think the performance of the OS and the system in general is pretty good, and the fact that the same system will run on mobile and desktop/ laptop computers is pretty encouraging. Maybe we should consider how we'll develop apps for Windows along with Android and iOS? Ha ha! I hear Windows 8 is all HTML and JS anyway!

Line Charts with d3 js

over 8 years ago | Aishwarya Singhal: Aishwarya Singhal

Want to do a line chart with d3? There are no ready APIs, right? At least none that I could find. What I did find was http://benjchristensen.com/2012/05/02/line-graphs-using-d3-js/ (very useful!) and I hacked up a line chart taking a cue from there. And here's the code:

```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Line Charts</title>
<script src="http://code.jquery.com/jquery-1.8.2.min.js"></script>
<script src="http://d3js.org/d3.v2.js"></script>
<script type="text/javascript">
  // normalize a datapoint's date to midnight so points on the same day compare equal
  function getDate(d) {
    var dt = new Date(d.date);
    dt.setHours(0);
    dt.setMinutes(0);
    dt.setSeconds(0);
    dt.setMilliseconds(0);
    return dt;
  }

  function showData(obj, d) {
    var coord = d3.mouse(obj);
    var infobox = d3.select(".infobox");
    // now we just position the infobox roughly where our mouse is
    infobox.style("left", (coord[0] + 100) + "px");
    infobox.style("top", (coord[1] - 175) + "px");
    $(".infobox").html(d);
    $(".infobox").show();
  }

  function hideData() {
    $(".infobox").hide();
  }

  var drawChart = function(data) {
    // define dimensions of graph
    var m = [20, 40, 20, 100]; // margins
    var w = 700 - m[1] - m[3]; // width
    var h = 360 - m[0] - m[2]; // height

    data.sort(function(a, b) {
      var d1 = getDate(a);
      var d2 = getDate(b);
      if (d1 == d2) return 0;
      if (d1 > d2) return 1;
      return -1;
    });

    // get max and min dates - this assumes data is sorted
    var minDate = getDate(data[0]),
        maxDate = getDate(data[data.length - 1]);

    // X scale will fit all values from data[] within pixels 0-w
    var x = d3.time.scale().domain([minDate, maxDate]).range([0, w]);

    // Y scale will fit values from 0 to the max trendingValue within pixels h-0
    // (note the inverted domain for the y-scale: bigger is up!)
    var y = d3.scale.linear()
      .domain([0, d3.max(data, function(d) { return d.trendingValue; })])
      .range([h, 0]);

    // create a line function that can convert data[] into x and y points
    var line = d3.svg.line()
      .x(function(d, i) {
        // return the X coordinate where we want to plot this datapoint
        return x(getDate(d));
      })
      .y(function(d) {
        // return the Y coordinate where we want to plot this datapoint
        return y(d.trendingValue);
      });

    function xx(e) { return x(getDate(e)); }
    function yy(e) { return y(e.trendingValue); }

    $("#chart").append("<p><small><em>Please move the mouse over data points to see details.</em></small></p>");

    // Add an SVG element with the desired dimensions and margin.
    var graph = d3.select("#chart").append("svg:svg")
      .attr("width", w + m[1] + m[3])
      .attr("height", h + m[0] + m[2])
      .append("svg:g")
      .attr("transform", "translate(" + m[3] + "," + m[0] + ")");

    // create the xAxis
    var xAxis = d3.svg.axis().scale(x).ticks(d3.time.months, 1).tickSize(-h).tickSubdivide(true);

    // Add the x-axis.
    graph.append("svg:g")
      .attr("class", "x axis")
      .attr("transform", "translate(0," + h + ")")
      .call(xAxis);

    // create left yAxis
    var yAxisLeft = d3.svg.axis().scale(y).ticks(10).orient("left");

    // Add the y-axis to the left
    graph.append("svg:g")
      .attr("class", "y axis")
      .attr("transform", "translate(-25,0)")
      .call(yAxisLeft);

    // Add the data points and the line AFTER the axes above,
    // so that they sit above the tick-lines
    graph.selectAll("circle")
      .data(data)
      .enter().append("circle")
      .attr("fill", "steelblue")
      .attr("r", 5)
      .attr("cx", xx)
      .attr("cy", yy)
      .on("mouseover", function(d) { showData(this, d.trendingValue); })
      .on("mouseout", function() { hideData(); });

    graph.append("svg:path").attr("d", line(data));

    // rotated y-axis label
    graph.append("svg:text")
      .attr("x", -200)
      .attr("y", -90)
      .attr("dy", ".1em")
      .attr("transform", "rotate(-90)")
      .text("Trending Value");

    $("#chart").append("<div class='infobox' style='display:none;'>Test</div>");
  }

  var draw = function() {
    var data = [
      {'date': "2012-10-01", 'trendingValue': 1000},
      {'date': "2012-09-01", 'trendingValue': 900},
      {'date': "2012-08-01", 'trendingValue': 1100},
      {'date': "2012-07-01", 'trendingValue': 950},
      {'date': "2012-06-01", 'trendingValue': 1050}
    ];
    drawChart(data);
  }
</script>
<style>
  #chart path { stroke: steelblue; stroke-width: 2; fill: none; }
  .axis { shape-rendering: crispEdges; }
  .x.axis line { stroke: lightgrey; }
  .x.axis .minor { stroke-opacity: .5; }
  .x.axis path { display: none; }
  .y.axis line, .y.axis path { fill: none; stroke: #000; }
  .infobox {
    border: 2px solid steelblue;
    border-radius: 4px;
    box-shadow: #333333 0px 0px 10px;
    margin: 200px auto;
    padding: 5px 10px;
    background: rgba(255, 255, 255, 0.8);
    position: absolute;
    top: 0px;
    left: 0px;
    z-index: 10500;
    font-weight: bold;
  }
</style>
</head>
<body onload="draw();">
<div id="chart"></div>
</body>
</html>
```

Why Clojure scares me

almost 9 years ago | Aishwarya Singhal: Aishwarya Singhal

There has been a lot of buzz about this relatively new language. It's the new kid on the block: it is a Lisp, it's functional, and it runs on the JVM. Nice! But the very fact that it's a Lisp scares me off. When I look at Clojure code, compact as it may be, I just see a lot of brackets. And I mean a lot of them. I was reading a deck yesterday that admitted that long-time Java developers may suffer from this 'problem', and I do fall in that category :-) Even if I ignore all those scary brackets, the way the code is written looks like prefix notation (it reminds me of Yoda in Star Wars, who talked in a very funny and interesting manner, but I won't use that dialect in practice). I mean seriously, when I first learned about numbers and algebra, I wrote expressions as 1 + 2, and not + 1 2. The latter is not really a natural way for me. Oh yes, I can get used to it too, but I think for a long time I would just be doing a translation for myself in my head. And it would surely be slow and painful. I must admit that this is just a feeling - I have only _seen_ Clojure yet, not practiced it. And there are many things that are good in this language. For me, Scala works wonders and I intend to stick to it for now.

New Features on Factile

about 9 years ago | Aishwarya Singhal: Aishwarya Singhal

Factile, as you know, is a free and open source survey platform. It generated significant interest in the first few days of its launch and I received some feedback that I thought would be good to build into the tool. I have just released a new version at http://www.factile.net and the following are the main changes:

- Better Navigation: Clickable bars showing the steps in survey creation/ editing, to enhance the user experience.
- Offline Survey Capability: Even if survey participants are in transit or in areas of unreliable connectivity, they can still 'work offline', i.e. they can keep working through the survey and just submit the results once they are back on the network! Easy-peasy!
- Word Clouds: Factile can now build a word cloud (aka tag cloud) of free text responses. It combines all responses for free text questions, removes common words (aka stop words) and generates a word cloud, so you can analyse survey takers' comments much more easily.
- Insights: This is a feature that was always planned but got delayed. You can now select a group of questions and define constraints to generate a list of the top 5 combinations as picked by the users. For example, if you wanted to see which demographic group tweets/ blogs the most, you could combine the demographics-related questions, add a constraint around tweeting/ blogging and get sorted, aggregated insights.
- Custom URI: Would you like your survey to be called http://www.factile.net/mycoolsurvey instead of the long system-generated identifier? You can do so now!

Obviously, Factile remains freely downloadable and customizable (and the surveys work well on mobile devices - no small, difficult-to-click HTML radio buttons). You can let me know if you want a feature or found a bug by logging an issue on GitHub, or just leave a comment on this blog! You can also reach me through the group at https://groups.google.com/group/factile. I did a quick video on the use of Factile a week or so back and uploaded it to YouTube. You can watch it here.

Factile - A free online survey tool

about 9 years ago | Aishwarya Singhal: Aishwarya Singhal

On Friday last week, I launched Factile at http://www.factile.net. It is a free and open source survey platform that I created, and it aims at making the job of data collection and analysis simpler. It supports a variety of question types (text boxes, radio buttons, check boxes, combo boxes (dropdowns), text areas, plain texts, matrix of choices/ rating scales). The surveys generated are fully mobile compatible, so you can create a survey and share it with people to take on iPads/ iPhones/ BlackBerrys/ Samsung S2s - really, anywhere! It is truly free and unlimited. There is zero usage cost, and you can add as much content to your survey as you like. And if you wanted to download it and install it on your own infrastructure, you can! What's more? You can define logic in the survey, build charts of the captured data, customize the appearance of the survey to match your requirements and, best of all, you can request missing features and I'll be happy to oblige! Check out http://www.factile.net and let me know what you think.

Deploy Play 2 application on AWS with Tomcat and Apache HTTPD

about 9 years ago | Aishwarya Singhal: Aishwarya Singhal

I have created a web application on the Play 2.0 framework, in Scala. To deploy it, I looked at various cloud options - Amazon looks the best because, well, it's free :-)  Once the instance was created, it already had Java 6; I installed Apache HTTPD and Tomcat 7.

Let's first add some swap space:

```bash
sudo -i
dd if=/dev/zero of=/swapfile bs=1024 count=524288
mkswap /swapfile
swapon /swapfile
```

Now edit /etc/fstab and append the following line to it:

```
/swapfile swap swap defaults 0 0
```

OK, let's install Tomcat and HTTPD now:

```bash
yum -y install httpd
mkdir -p /var/www/html/assets
mkdir -p /usr/share/tomcat7
cd /usr/share/tomcat7
wget http://apache.mirrors.timporter.net/tomcat/tomcat-7/v7.0.27/bin/apache-tomcat-7.0.27.tar.gz
gzip -d apache-tomcat-7.0.27.tar.gz
tar xvf apache-tomcat-7.0.27.tar
mkdir -p /var/log/tomcat7 /var/cache/tomcat7/temp /var/lib/tomcat7/webapps /var/cache/tomcat7/work
rm -rf logs temp webapps work
# symlink the standard Tomcat directories to their new locations
# (note: ln -s takes the target first, then the link name)
ln -s /var/log/tomcat7 logs
ln -s /var/lib/tomcat7/webapps webapps
ln -s /var/cache/tomcat7/work work
ln -s /var/cache/tomcat7/temp temp
useradd -d /usr/share/tomcat7 tomcatusr
chown -R tomcatusr /var/log/tomcat7
chown -R tomcatusr /var/cache/tomcat7/
chown -R tomcatusr /var/lib/tomcat7
chown -R tomcatusr /usr/share/tomcat7
```

Now open server.xml (inside /usr/share/tomcat7/conf) and comment out the connector for port 8080:

```xml
<!-- Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" / -->
```

Look for 8009 (the AJP connector) and modify it to:

```xml
<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" URIEncoding="UTF-8" />
```

Now create a start-up script under /etc/init.d and call it tomcat7:

```bash
#!/bin/bash
# Tomcat7: Start/Stop Tomcat 7
#
# chkconfig: - 90 10
# description: Tomcat is a Java application server.
. /etc/init.d/functions
. /etc/sysconfig/network

CATALINA_HOME=/usr/share/tomcat7
TOMCAT_USER=tomcatusr
LOCKFILE=/var/lock/subsys/tomcat
RETVAL=0

start(){
  echo "Starting Tomcat7: "
  su - $TOMCAT_USER -c "$CATALINA_HOME/bin/startup.sh"
  RETVAL=$?
  echo
  [ $RETVAL -eq 0 ] && touch $LOCKFILE
  return $RETVAL
}

stop(){
  echo "Shutting down Tomcat7: "
  $CATALINA_HOME/bin/shutdown.sh
  RETVAL=$?
  echo
  [ $RETVAL -eq 0 ] && rm -f $LOCKFILE
  return $RETVAL
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  status)
    status tomcat
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac
exit $?
```

All set. Now let us connect HTTPD to Tomcat. Open /etc/httpd/conf/httpd.conf for editing, go to the end of the file and uncomment the VirtualHost for port 80 (the whole block, of course). Add the following inside the VirtualHost:

```apache
ErrorLog logs/error_log
CustomLog logs/ajp.log combined

SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
SetEnvIfNoCase Request_URI \
    \.(?:gif|jpe?g|png)$ no-gzip dont-vary
# Make sure proxies don't deliver the wrong content
#Header append Vary User-Agent env=!dont-vary

ExpiresByType image/gif A604800
ExpiresByType image/png A604800
ExpiresByType image/jpg A604800

<Proxy *>
  AddDefaultCharset Off
  Order deny,allow
  Allow from all
</Proxy>

ProxyPass /assets !
ProxyPass / ajp://localhost:8009/
ProxyPassReverse / ajp://localhost:8009/
```

The above will enable gzip compression on your pages (for performance), cache images on the client for a week and let you serve static assets from the web server itself.

Set the docroot and error pages:

```apache
DocumentRoot "/var/www/html"
ErrorDocument 404 /assets/html/missing.html
ErrorDocument 503 /assets/html/missing.html
```

All done. Now use the Play WAR plugin to generate the WAR file. Copy the generated WAR file into /var/lib/tomcat7/webapps as ROOT.war (otherwise you don't get the "/" root URL). Package the static files from inside APP_HOME/public separately into a TAR and extract them into the /var/www/html/assets directory.

Generating Excel in Play 2

about 9 years ago | Aishwarya Singhal: Aishwarya Singhal

Play 1.x has a nice module that allows you to create Excel sheets. The new Play 2.x, however, lacks this capability - or at least it is not very evident. While I found most of the information by searching the net, I thought it may help someone to have quick-start notes here. I use Apache POI to generate the Excel file, and I wanted to create .xlsx rather than a plain .xls.

Add the following to your Build.scala (under APP_HOME/project):

```scala
val apache_poi = "org.apache.poi" % "poi" % "3.8"
val apache_poi_ooxml = "org.apache.poi" % "poi-ooxml" % "3.8"

val appDependencies = Seq(
  ... // your existing dependencies
  apache_poi,
  apache_poi_ooxml
)
```

Actually, you don't need poi-ooxml if you only want to create a plain .xls (and not a .xlsx). Now start the Play console ('play') and execute 'run'. This will resolve the dependencies and get you the requisite libraries. Now generate an Excel file:

```scala
import java.io.File
import java.io.FileOutputStream
import org.apache.poi.xssf.usermodel._

val file = new File("mydata.xlsx")
val fileOut = new FileOutputStream(file)

// a workbook with one sheet and a single cell in row 0, column 0
val wb = new XSSFWorkbook
val sheet = wb.createSheet("Sheet1")
var rNum = 0
var row = sheet.createRow(rNum)
var cNum = 0
val cell = row.createCell(cNum)
cell.setCellValue("My Cell Value")

wb.write(fileOut)
fileOut.close()
```

All done! Now, in your controller action, add the following:

```scala
Ok.sendFile(content = file, fileName = _ => "mydata.xlsx")
```

What happened to Slate?

about 9 years ago | Aishwarya Singhal: Aishwarya Singhal

Slate is still very alive (and kicking); I have just been too busy with my day-to-day job lately to make aggressive improvements. Plus, I needed to urgently catch up with Play 2.0 and MongoDB, so whatever free time I get these days goes there. I intend to get fully active on Slate again by early June, though. In the meanwhile, the Eclipse-based Scala IDE seems to have improved massively. It is much faster, smarter and does not hang that often. I now prefer using text editors for writing Scala and Ruby though, so I am very excited about making Slate a light and usable development environment. Early releases of Slate had a number of problems, especially in the UI. I believe most of them are now fixed, but there's still a lot I would like to do with it.

An IDE for Scala

over 9 years ago | Aishwarya Singhal: Aishwarya Singhal

I have been working on Scala in my spare time for the past ~3 months now and I absolutely love it! It is extremely powerful, the syntax is sleek and it has an API for almost every basic operation! My choice … Continue reading →

Lets Play with Scala

almost 10 years ago | Aishwarya Singhal: Aishwarya Singhal

Sometime earlier this year, I read a blog and an article. These are interesting thoughts and, coming from the Java space of enterprise applications, I know exactly how bad the performance and how unmaintainable the code can get. Scala's claim of reducing the code by a factor of 2 or 3 is extremely tempting! Add to that a lot [...]