Future of Work for Productive Software Teams is About Results

The global pandemic and the shift to all-remote work forced many managers out of their comfort zones. Many had objected to remote work for fear that employees would not work as hard when no one was looking. Suddenly bosses had no choice, and many saw results that made them trust remote employees for the first time, so much so that they now want to keep working remotely long-term.

Elliott Holt, CEO of a Nashville health information management company, was dead set against letting developers work from home. “There’s no control over it,” he said. After working all-remote for a few months, Holt has had a change of heart. “It’s working,” he said, so much so that he’ll continue to allow remote work long-term.

Measuring Results vs. Activity

Experiences like Holt’s mark a shift toward measuring performance by concrete outcomes rather than by activity like desk time. “One of the biggest holdbacks of remote work is trust—managers simply don’t trust their people to work untethered,” said Kate Lister, President of Global Workforce Analytics. “They’re used to managing by counting butts-in-seats, rather than by results. That’s not managing, that’s babysitting.”

For Matt Mullenweg, CEO and founder of Automattic, the all-remote parent company of WordPress, measuring developers by results is not only a more accurate measure of productivity, it’s also more objective and fair. Automattic’s global workforce was built without a traditional office and without the traditional hiring process: interviews take place via chat, purposely avoiding “face time” to prevent unconscious bias. “What you’re accountable for is a result,” Mullenweg said. “You could work 60 hours and not do a lot, or you could work 20 hours and do a ton. It’s really about result.”

Measuring Developer Productivity

What results can you use to measure the productivity of distributed software engineers and software development teams? It’s not just about code written; that’s activity. The results of the coding work, its quality and its impact on the product, can be trickier to calculate. Andela has built and managed thousands of all-remote software engineering teams working with hundreds of companies, and has developed data-based (read: results) assessment practices that measure the quality of work to support results-based performance management. Productivity metrics that can be measured include:

  • New Code: Brand new code that does not replace other code written for new features
  • Legacy Refactor: Code that updates or edits old code that required rework 
  • Churn: Code which is deleted or rewritten shortly after being written
  • Help Others: Code where a developer modifies someone else’s recent work 
  • Efficiency: The percentage of all contributed code which is productive
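To make the categories above concrete, here is a minimal sketch of how such metrics could be aggregated from classified code changes. This is an illustrative example only, not Andela’s actual tooling; the `Change` record, the category names, and the rule that efficiency is the share of contributed code that is not churn are all assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A hypothetical record of one classified code change."""
    category: str  # "new", "refactor", "churn", or "help_others"
    lines: int     # lines of code in the change

def productivity_metrics(changes):
    """Aggregate line counts per category and derive an efficiency ratio."""
    totals = {"new": 0, "refactor": 0, "churn": 0, "help_others": 0}
    for change in changes:
        totals[change.category] += change.lines
    contributed = sum(totals.values())
    # Assumed definition: productive code is everything except churn.
    productive = contributed - totals["churn"]
    efficiency = productive / contributed if contributed else 0.0
    return {**totals, "efficiency": efficiency}

# Example: 200 lines contributed, of which 20 were churned.
metrics = productivity_metrics([
    Change("new", 120),
    Change("refactor", 40),
    Change("churn", 20),
    Change("help_others", 20),
])
# efficiency = 180 / 200 = 0.9
```

In practice these classifications would come from repository analysis (e.g. comparing commits against recent history to detect churn and rework), which is the hard part; the arithmetic itself is simple.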

Measuring productivity by results like new code uses data to take assessments out of the realm of opinion. Concrete outcomes replace speculation, and assumptions about how developers will perform can be confirmed or challenged with facts. Remote workers who are trusted based on performance in turn deliver even more positive results. “Employees who do feel trusted are higher performers and exert extra effort, going above and beyond role expectations,” according to research published in the Harvard Business Review. This compounding increase in performance helps explain why many businesses are considering extending remote work policies beyond the pandemic.

Andela has developed considerable expertise in building and managing high-performing remote teams with long-term engineering staff augmentation. To learn more about Andela’s model and whether it can work for you, download the e-book, “Engineering Staff Augmentation: Flexible Hiring Without Sacrificing Quality.”
