Category Archives: technology

A Grail Quest: where did our productivity go?

I was talking with a friend who was worried that his team was not delivering as fast as he expected. He used to be a programmer and is a manager today, and he remembers that “back then” he delivered much faster than his team delivers now.

He is confused because, after all, the developer's world keeps getting easier… He works in a company that invests time and attention in making programmers' lives better: continuous deployment on the right platform, automatic monitoring and much more.

In addition, the open source movement brings value; most of the time we simply pick a ready-made solution for our needs. The OSS ecosystem is huge today, to name just a few:

Why is this happening? Why is my friend's team's productivity suffering?

The problem is not trivial; it seems that today's software engineer has everything. In addition to open source libraries, there are tools to deliver the product quickly and with excellent quality: an extensive development environment (IDE) with support for automated refactoring, which measures code quality and prompts us as we type.

All of this comes with tools that support continuous integration, code deployment and multiple cloud environments. The developer no longer has to wait weeks for a new server or a software installation.

Looking at the portfolios of cloud providers, we can agree that developers have everything they need to get the job done: from machines (IaaS), storage, databases (of various types) and queuing systems, through specific services (such as speech or image recognition), to solutions for processing large data sets. You can go and check any of the cloud providers:

What else do we need?

To go deeper into the problem, let me quote two other friends who expressed doubts about the “fast delivery of value”.

The first one works on various contracted products after hours. Talking recently, he said something that made me think: his after-hours work delivers more value in one week than his whole team put together. The team has about 7 developers, and we can assume that “after hours” means about 50% of a standard working week. One person at half time matching seven people at full time means he is 7 / 0.5 = 14 times more productive. Puzzling!

Another one said during a conversation: “You know, Pedro, this is a few hours of work; unfortunately, once we pass it through Scrum, it will take us two weeks.” Hmm… assuming that a few hours is less than a day's work, and two weeks is 10 working days, that makes us at least 10 times less productive.

Scary! What is going wrong? What is the problem?

While the first colleague's statement does not give us hard evidence, the second one clearly points at the Scrum methodology: certain ceremonies or procedures cause the “same work” to take longer.

Scrum is a great framework for organizing and building software products; however, I have seen many times how things go wrong. What does it look like? I would point out a number of distortions which turn a reasonable set of rules (e.g. Scrum) into “the LAWS of success, which you must OBLIGATORILY follow, otherwise you and your project are DOOMED”.
These laws typically look like this:

  • If you follow these laws, your productivity will increase (hmm… doing daily meetings solves all problems).
  • If you do not follow a “law”, you should be punished (a cap or t-shirt that reads “I was late for the daily”, and so on).
  • If you follow the law, you should be proud of it – you are (surely :/) doing a good job (it does not matter that the implementation slipped another week – what matters is that you attended the planning).
  • Did you not succeed? Still not productive? – it means you do not follow the law, or you follow it in the wrong way (the daily at a bad time… the daily not in the morning…, you did not do grooming…, or you did grooming too late/too early…, etc.).

The side effect is that since we have “a set of laws”, we have to follow them, and we start measuring our success by how well it correlates with the laws we introduced. The “Institution of Measurements” is formed. After a while we start to measure our busyness – no matter whether what we do makes sense, what matters is that we follow the law and all the indicators show we are doing it wonderfully! 😉

“If We Measure Busyness, We’ll Create More Busyness.”

Applying these rules often leads to running around with the proverbial “empty wheelbarrows”, where running is what counts, because that is what it is all about. In this way we “accidentally” arrive at the essence: utilization of the resource (the human one).

The resource-efficiency principle: if you cannot finish the job, switch to the next task. Keep doing so until you find yourself spending all your time switching from task to task and/or waiting for others to finish the tasks you depend on.

From the business owner's point of view, the best utilization is when a person works 100% of the time, without even a coffee break. I once heard from the C-level: “You have to tell your people to work faster.” In my experience this is ridiculous: there is usually something in the system that causes the work to slow down, and “work faster, do more, add more people” changes nothing (many times it makes things even worse).

A pretty good analogy is a motorway. If there are roadworks or the motorway narrows, adding more cars only reduces the number of cars that get through (work done), and increasing speed only raises the probability of an accident and a complete blockage. The more cars, the slower the traffic, until finally there is a total traffic jam (of business projects waiting to be realized).

Paradoxically, fewer cars and lower speed increase the throughput (work done). Whether the workers sit idle or the system is completely jammed, the velocity is the same in both situations (close to zero), yet the utilization is quite different (close to 0% versus close to 100%).
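
A quick back-of-the-envelope way to see this is basic queueing theory. Below is a minimal sketch (the M/M/1 formula is textbook material; the service rate and utilization values are made-up numbers, not measurements from any real team):

# Why pushing utilization towards 100% hurts flow: in an M/M/1 queue with
# arrival rate lam and service rate mu, the average time a task spends in
# the system is 1 / (mu - lam), which explodes as utilization approaches 1.

mu = 1.0  # assume the team finishes, on average, 1 task per day

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * mu                 # tasks arriving per day
    lead_time = 1.0 / (mu - lam)           # average days from arrival to done
    print(f"utilization {utilization:4.0%} -> average lead time {lead_time:5.1f} days")

At 50% utilization a task takes about 2 days end to end; at 99% it takes around 100 days, even though nobody is working any slower.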

The first part ends here.

Next time you want to add more people to move faster – stop – and think about what the bottlenecks in your environment are. It might be one huge codebase where several teams developing different features get in each other's way, a lack of knowledge, or a lack of automation in a microservices world. There is no one solution to fit all your needs…

By the way, as Tech Rebels we help companies get through technical hurdles. Just hire us, and we will bring your organization to the next technical level.

Monitoring and metrics – Yes, of course.

I always try to convince everybody to measure. The first reactions vary a lot, but after a while everyone comes back and says: “Wow! I didn't realize how helpful metrics can be.”

I watched the video “Using Monitoring and Metrics to Learn in Development” (by Patrick Debois, from Atlassian) with pleasure. I use Atlassian tools and have talked with a few of their people; they really care about the code of their products. Patrick not only talks about metrics but also about techniques and ideas which help deliver better software.

Ideas from the presentation:

  • smaller and more frequent changes – easier to repair (dev for ops)
  • faster and better feedback – easier to find the problem (ops for dev)
  • continuous integration maturity model – see the SlideShare presentation.
  • reuse “workflows” across environments by using virtualization – Vagrant (a great tool for building dev environments), Puppet/Chef (configuration automation and management)
  • the infrastructure code repository and the application code repository have to be in sync.
  • always remember: “a lot of different monitoring levels we have” :).
  • monitoring driven development 🙂 – create a monitor check before implementing the feature (it is useless at first, but think about the similarity to test-driven development); see the sketch after this list.
  • a grumble about monitoring tools (around 00:19:20) – there are projects under “Monitoring Sucks” on GitHub; you can check them, but only one seems to be alive.
  • use monitoring as a service – this sounds reasonable (Pingdom, New Relic, Boundary, Librato and some others).
  • a lot of other tools are mentioned – if you want to evaluate tools for metrics and monitoring, you should watch the video and note candidates for evaluation.
  • always know the context of the metrics.
  • final thought: Metrics Driven Engineering (Etsy on stage) – IMHO this is a great idea to follow.
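
To make the “monitoring driven development” idea from the list above a bit more concrete, here is a minimal sketch of what such a check could look like, written before the feature exists (the endpoint URL, the latency threshold and the use of the requests library are my own assumptions, not something from the talk):

# A monitor check written before the feature: it keeps failing until the new
# endpoint is deployed and answers quickly enough, much like a failing test in TDD.
import requests

FEATURE_URL = "https://example.com/api/recommendations"  # hypothetical endpoint
MAX_LATENCY_SECONDS = 0.5

def feature_is_healthy() -> bool:
    try:
        response = requests.get(FEATURE_URL, timeout=MAX_LATENCY_SECONDS)
    except requests.RequestException:
        return False
    return response.status_code == 200

if __name__ == "__main__":
    print("feature check:", "OK" if feature_is_healthy() else "FAILING")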

Whatever we do, it will happen that our code does not work, that it stinks and gets worse over time; but every time we figure that out, we can do something about it. This is why it is so important to monitor our applications continuously.

Patrick also mentions sharing responsibility between development and operations; he wrote a nice blog post about it: Just Enough Developed Infrastructure. I recommend you read it too.

skipfish – fast, easy and simple

Skipfish is a Google Code project. It is a web application security scanner, high speed (they claim 2000 requests per second – on a local LAN :)), and because it is a command-line tool without fancy wizards, options and so on, it is relatively easy to use; for sure it is easy to just start scanning.

Skipfish is an active scanner: it first crawls the application, preparing a map of the web site, then recursively runs different tests, and the last step is report generation. The documentation is simple and has a lot of examples we can start from. So let's see it in action.

One such command is:

$ skipfish -m 5 -LVJ -W /dev/null -o output_dir -b ie http://www.example.com/
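
If you plan to rerun the scan regularly, it can be handy to wrap the call in a small script that keeps each report in its own directory. A minimal sketch (the timestamped directory naming and the wrapper itself are my own convention; only the -W /dev/null and -o options and the target URL come from the command above):

# Run skipfish and store each report in a fresh, timestamped directory.
import subprocess
from datetime import datetime

TARGET = "http://www.example.com/"

def run_scan(target: str) -> str:
    output_dir = "skipfish-report-" + datetime.now().strftime("%Y%m%d-%H%M%S")
    subprocess.run(
        ["skipfish", "-W", "/dev/null", "-o", output_dir, target],
        check=True,  # raise if skipfish exits with an error
    )
    return output_dir

if __name__ == "__main__":
    print("Report saved in", run_scan(TARGET))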

During the scan, Skipfish displays statistics:

Scan statistics
---------------

       Scan time : 0:11:07.0068
   HTTP requests : 2446 sent (3.71/s), 16228.73 kB in, 659.18 kB out (25.32 kB/s)  
     Compression : 0.00 kB in, 0.00 kB out (0.00% gain)    
 HTTP exceptions : 34 net errors, 0 proto errors, 0 retried, 0 drops
 TCP connections : 2451 total (1.09 req/conn)  
  TCP exceptions : 0 failures, 1 timeouts, 0 purged
  External links : 745 skipped
    Reqs pending : 219        

Database statistics
-------------------

          Pivots : 471 total, 94 done (19.96%)    
     In progress : 323 pending, 38 init, 12 attacks, 4 dict    
   Missing nodes : 54 spotted
      Node types : 1 serv, 269 dir, 46 file, 1 pinfo, 91 unkn, 63 par, 0 val
    Issues found : 70 info, 111 warn, 49 low, 1 medium, 13 high impact
       Dict size : 0 words (0 new), 0 extensions, 0 candidates

After a few minutes or hours – it depends on the site we are scanning – we will get:

[+] Copying static resources...
[+] Sorting and annotating crawl nodes: 1666
[+] Looking for duplicate entries: 1666
[+] Counting unique issues: 1158
[+] Writing scan description...
[+] Counting unique issues: 1666
[+] Generating summary views...
[+] Report saved to outputDir/index.html
[+] This was a great day for science!

The report consists of “crawl results”, “document type overview” and “issue type overview”. My last scan has some findings, but also a lot of false positives; it seems a lot of work is still waiting for the Skipfish team, but it looks promising.