18 quick tips to improve developer productivity: Dos and Don’ts

Kostis Kapelonis, 8 Min Read

Hello everyone, and welcome to the Sprkl tips & tools series. In our series, we talk to prominent developers to explore how to improve developer productivity and deal with software complexity.  

This time, Kostis Kapelonis, developer advocate at Codefresh, gave us quick and insightful tips on enhancing the productivity of software developers operating in modern complex environments.

Software developers are critical to a company’s success. They create products that can make or break a company’s profitability. Ensuring the well-being of these developers is crucial for any modern software organization.

Here are 18 tips to improve developer productivity 

Let’s start with the don’ts

  • Don’t get carried away with the micro-service craze. Think about your problem domains rather than replacing function calls with network calls.
  • Don’t assume Docker is a magic bullet. If you package a badly designed application in a Docker container, it will still cause issues for everyone who runs it.
  • Don’t waste time with lengthy reviews for pull requests. Automate formatting and linting checks. When a pull request is reviewed, all parties should focus on content and business logic instead of unrelated formatting.
  • Don’t reinvent the wheel. Adopt trunk-based development. Understand the twelve-factor app methodology. Read the canonical books on CI and CD. In most cases, whatever you are trying to do has already been solved by somebody else out there.
  • Don’t use vanity metrics that have no visible impact on customers (code coverage, number of issues closed, sprint velocity, etc.)
  • Don’t skip running tests locally because they take too much time; refactor them instead. If you don’t trust your QA tests, fix them (especially the flaky ones).
  • Don’t let the QA department run tests manually, except for sanity checks or extreme corner cases. They should instead look at test results, approve/block releases, and add tests to the existing test suites.

Let’s continue to the dos

  • Do create different test suites with different depths and speeds. Mix and match to get the desired results and use testing in all software lifecycle stages. 
  • Do test locally while developing, run CI tests when you commit, QA tests when the app is promoted, and smoke tests AFTER the application is deployed.
  • Do create temporary test environments for each feature. Ideally, each Pull request should auto-deploy its contents in an ephemeral environment. Closing/merging the pull request should destroy it.
  • Do automate all release processes. If you have runbooks, wiki pages, readme files, and other documents that explain how to run a sequence of steps manually, you are doing it wrong.
  • Do create a bootstrapping mechanism for local development. As a new developer, you should be able to check out the code of any application and launch a local environment with 1-2 commands in less than 5 minutes.
  • Do deploy to production multiple times daily, on demand and without waiting for release “trains”.
  • Do track failed deployments and how long it takes to recover. Recovery from a failed deployment should ideally be less than 1-2 hours. 
  • Do include metrics/logs/traces (the trinity) in your apps to gain good visibility into what the application is doing.
  • Do focus on real customer information such as successful logins, active users, and latency for common requests when you set up your alerts and metric dashboards.
  • Do adopt strategies that minimize the impact of failed deployments as soon as (or before) they happen: smoke tests, progressive delivery, feature flags.
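
The feature-flag point above can be sketched in a few lines. This is a minimal, hypothetical in-process flag store (a real setup would use a flag service with remote configuration), illustrating how a risky code path can be turned off without redeploying:

```python
# Minimal in-process feature-flag store: unknown flags default to off,
# so new code paths stay dark until explicitly enabled.
class FeatureFlags:
    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def set(self, name: str, enabled: bool) -> None:
        self._flags[name] = enabled


flags = FeatureFlags({"new-checkout": False})

def checkout(cart):
    # A failed rollout is reverted by flipping the flag, not by redeploying.
    if flags.is_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

flags.set("new-checkout", True)  # gradual rollout would flip this per cohort
```

The key property is that enabling or disabling a feature is a configuration change with near-instant effect, which is exactly what makes recovery from a bad release fast.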

Testing & automation 

Sprkl: At which phase is it best to perform testing, and what type of testing should be used? (i.e., local, CI, staging, production)

Kostis: First, let me set the stage by saying I have written a book about testing and published a well-respected guide for software testing antipatterns. 

The answer is that you should have tests in all phases, starting with local tests while developing, CI tests when you commit stuff, QA tests when the application is promoted, and even smoke tests that run AFTER the application is deployed. The important thing here is to have different test suites to mix and match to get the desired results.
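
One way to picture the mix-and-match idea is a tiny suite registry (names are illustrative, not a real framework): each test registers under a phase, and each phase runs only its own suite.

```python
# Tiny suite registry: tests register under a phase name, and each
# phase ("local", "ci", "qa", "smoke") runs only its own suite.
SUITES = {"local": [], "ci": [], "qa": [], "smoke": []}

def suite(phase):
    def register(fn):
        SUITES[phase].append(fn)
        return fn
    return register

@suite("local")
def test_rounding():
    # Fast unit-level check a developer runs constantly while coding.
    assert round(19.999, 2) == 20.0

@suite("smoke")
def test_homepage_alive():
    # In reality this would hit the deployed URL; stubbed here.
    assert True

def run(phase):
    # Run every test registered for this phase; return how many ran.
    for test in SUITES[phase]:
        test()
    return len(SUITES[phase])
```

Real projects get the same effect with test markers or separate test directories; the point is that one codebase carries several suites that run at different phases.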

As a developer, for example, you should have access to unit tests that run in less than 4 minutes and can be run locally as fast as possible. QA tests can take much longer (maybe 20 minutes), and dedicated performance tests can take even longer. You need to use the appropriate test suite at the appropriate phase. I have seen companies where developers don’t run tests locally because they take too much time, or don’t trust their QA tests because they don’t cover enough functionality.

Also, all tests (apart from the local ones) should run completely automatically without human intervention. The QA department of a company should look at test results, approve/block releases, and add tests to the existing test suites. They should never run tests regularly except for sanity checks or extreme corner cases.

Deployment & Feedback

Sprkl: How do development teams in your organization receive feedback on deployment processes, and how does it improve developer productivity?

Kostis: The easiest metric is the number of successful releases sent to production. The more, the merrier. If you deploy multiple times per day, you are in good shape. If you deploy once per month, then there is room for improvement.

You can formalize the deployment process by following the DORA metrics and tracking failed deployments and how long it takes to recover.
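
Two of the DORA metrics mentioned here, change failure rate and time to recover, fall out of a simple deployment log. A minimal sketch, assuming each record carries a timestamp, a success flag, and a recovery time for failures:

```python
from datetime import datetime, timedelta

# Toy deployment log: (finished_at, succeeded, recovered_at or None).
deployments = [
    (datetime(2023, 5, 1, 10), True, None),
    (datetime(2023, 5, 1, 15), False, datetime(2023, 5, 1, 16)),
    (datetime(2023, 5, 2, 11), True, None),
]

def change_failure_rate(log):
    # Fraction of deployments that failed.
    failures = sum(1 for _, ok, _ in log if not ok)
    return failures / len(log)

def mean_time_to_recover(log):
    # Average gap between a failed deployment and its recovery.
    gaps = [rec - ts for ts, ok, rec in log if not ok and rec]
    return sum(gaps, timedelta()) / len(gaps)
```

Deployment frequency and lead time follow the same pattern: once deployments are recorded as data, the metrics are one-liners.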

It is important to mention that developers only add value when features reach customers. That is the ultimate goal. I sometimes see companies that use vanity metrics without a visible impact on customers. This can be code coverage, the number of issues closed, sprint velocity, etc.

As a rule of thumb, if:

  • You deploy to production multiple times per day (on demand, without waiting for release “trains”).
  • Your production deployments are successful most of the time (let’s say 80%).
  • Your recovery time from a failed deployment is less than 1-2 hours.
  • The lead time is less than 30 minutes (the time it takes to push a brand-new commit to production).

Then you’re doing well with your deployment process. You can continuously improve, of course.
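
The rule of thumb above can be encoded as a quick self-check. The thresholds are the ones from the list (with “multiple times per day” taken as at least 2); the function name is hypothetical:

```python
def deployment_health(deploys_per_day, success_rate,
                      recovery_hours, lead_time_minutes):
    # Encodes the rule-of-thumb thresholds from the list above.
    return (
        deploys_per_day >= 2          # multiple deploys per day
        and success_rate >= 0.8       # most deployments succeed
        and recovery_hours <= 2       # recovery under 1-2 hours
        and lead_time_minutes <= 30   # commit-to-production under 30 min
    )
```

Feeding it once-per-month deployments (about 0.03 per day) fails the check, which matches the “room for improvement” case.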

Debugging & observability

Sprkl: How do you trace back issues? How do you debug or prevent them?

Kostis: To catch issues: implement a testing suite as the first line of defense. Whenever you find and fix an issue in production, you should always create a test and put it in a ‘regression’ test suite. This guarantees that every bug is fixed once and for all.
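
The regression-suite pattern looks like this in practice. Suppose (hypothetically) production crashed when a 100% discount produced a division downstream; after the fix, the exact failing input is pinned in a test:

```python
def price_after_discount(price, discount_pct):
    # The fix: a full discount means free, instead of crashing downstream.
    if discount_pct >= 100:
        return 0.0
    return price * (1 - discount_pct / 100)

def test_regression_full_discount():
    # Reproduces the original production input; this bug can never return
    # silently, because the regression suite runs on every build.
    assert price_after_discount(50.0, 100) == 0.0
```

The bug and function here are invented for illustration; the pattern (pin the failing input, keep the test forever) is the point.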

To detect issues: implement the trilogy of traces, logs, and metrics in all your production systems. Again, the advice here is the same. Focus on real customer information (successful logins, active users, latency for common requests) and do not pay too much attention to underlying data (number of DB connections, CPU/mem capacity).
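
As a sketch of what “customer-focused” metrics mean in code, here is a bare-bones in-process recorder for login counts and request latency (a real system would export these to a metrics backend; names are illustrative):

```python
import time
from collections import defaultdict

# Customer-visible signals: login outcomes and per-endpoint latency.
counters = defaultdict(int)
latencies = defaultdict(list)

def record_login(success: bool):
    counters["login.success" if success else "login.failure"] += 1

def timed(endpoint):
    # Decorator that records how long each request handler takes.
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                latencies[endpoint].append(time.perf_counter() - start)
        return inner
    return wrap

@timed("GET /home")
def home():
    return "ok"
```

Alerting on `login.failure` spiking or `GET /home` latency climbing tells you customers are hurting, whereas a DB-connection count alone does not.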

Measurability & productivity 

Sprkl: Do you measure developer productivity in your org? And if so, in what manner?

Kostis: It is no surprise that at Codefresh, we use Codefresh for development. The product has native support for DORA metrics.

Delivery & execution

Sprkl: Where do you think the blind spot is in the delivery process?  

Kostis: There isn’t just one.

First, I notice that many companies do not use smoke tests. These tests run after deployment in production and should be able to detect serious issues. 

As a second precaution, teams should learn about progressive delivery and feature flagging. If your users are detecting issues in production, it is already too late. You should have all the tools available to minimize the impact of failed deployments as soon as they happen.
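
The core of progressive delivery is routing only a small, stable cohort of users to the new version. A minimal sketch (the routing function and percentages are illustrative; real setups use a service mesh or delivery controller):

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    # A stable hash keeps each user in the same cohort across requests,
    # so a user does not flip between versions mid-session.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def handler(user_id: str, canary_percent: int = 5) -> str:
    # Only ~canary_percent of users see the new version; a bad release
    # hurts a small cohort and can be rolled back before a full rollout.
    return "v2" if in_canary(user_id, canary_percent) else "v1"
```

If the canary cohort's error rate stays flat, the percentage is ratcheted up; if it spikes, the rollout stops with most users untouched.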

Another big problem I see is configuration drift between the “staging” and production environments. Teams deploy to “staging” and assume a release will work the same way in production. But in most cases, staging and production differ a lot.

About the author

My name is Kostis Kapelonis. Good to meet you:)  I’m a developer/technical writer with 15 years of experience. I have worked with both big enterprises and startups. I love testing and deployments. I am currently working for Codefresh, a company dedicated to Kubernetes deployments.

What is Sprkl

Sprkl Personal Observability platform can increase your productivity by allowing you to immediately see the impact of your code changes on the entire application while coding directly in the IDE. 

If you feel like giving it a try: Start here


