Why Coders Should Care About Ethics

By Solomon Osadolo

The average coder’s work isn’t particularly fraught with moral dilemmas or ethically consequential questions — unless what they’re building is a product or service that directly impacts the quality of life of its users. As technology products pervade our way of life more and more, it is increasingly apparent that the coders and engineers behind such products have to start thinking about the impact their work is having on the larger population. This is what ethics in software development is about.

When the Facebook-Cambridge Analytica story broke earlier this year, the impact of a couple of lines of code mixed with finely targeted marketing was made more apparent than ever before. Even though that story made waves when it broke, and still gets referenced every now and again, there are several other well-documented cases of technology products with rather manipulative features coded into them.

At some deep level, coders must realize that their work has implications that reach farther than their benign source code editors would suggest.

Are Coders Responsible For Ethical Implications?

No snowflake in an avalanche ever feels responsible – Stanisław Jerzy Lec


Large-scale software products often require sophisticated, hierarchical teams. Depending on the scale and nature of the project, the team could comprise product managers, software architects, backend and frontend developers, designers, and so on. Some argue that PMs and architects, by virtue of being custodians of the big picture and direction, should bear the responsibility for how the final product impacts the end user. A coder who is only responsible for a tiny feature shouldn’t be saddled with such responsibility, because they don’t necessarily have the full picture, right? Not really.

If a coder on a team is eager to share in the credit and praise for building something cool, they should be just as eager to share in the responsibility when the product has malicious outcomes. You can’t conveniently deny complicity and expect the PMs to take the flak for negative outcomes. Complicity works both ways.

In mid-August, 1,400 Google staffers signed a petition demanding that the company institute more transparency so that employees could understand the ethical consequences of their work. This happened right after Google announced plans to launch a search engine in China that would be decidedly censored to serve the Communist government’s agenda. The ethical dilemma is quite apparent. Impact analysis of technology products is necessary, and it is a responsibility shared by everyone who works on the product.

In the Ethics category of Stack Overflow’s 2018 annual survey of software developers, over half of the 70,782 developers who responded said that they wouldn’t write code for a product or purpose they considered clearly unethical. A third said that they would, depending on the situation. I set up a poll on Twitter earlier this week with the same hypothetical scenario. The results were oddly similar, even though the sample size was much smaller.

Making rather rough extrapolations from both polls, we can assume at least a third of devs either don’t prioritize ethics or don’t feel it falls within the scope of their responsibility in product development. One of those positions is naive; the other is irresponsible. Neither is a desirable quality in a coder building products that will impact people’s lives.

Should Software Developers Be Regulated (Certified)?

When we observe professions like medicine, law, and finance, it is often through the lens of credentials, ability, and ethics. It is not enough that a doctor is good and has a license to practice medicine; we have to be sure that they practice in a way that meets the ethical standards of the profession. The same is true of bankers, lawyers, teachers, and several other professionals, for whom ethics is so practically layered into their operations that it is borderline intuitive. At the core of their practice, people’s health, finances, economic futures, and quality of life are affected. In instances of evident malpractice, a professional in these sectors could lose their license to practice.

The case has been made for regulating software developers too: requiring certifications that deem them fit to practice, with credentials that can be revoked if they write “dirty code”. While the notion sounds quite reasonable in principle, it is rather untenable. Any correlation drawn between software development and the professions listed above is, in this context, insufficient at best. Software development is so ubiquitous today that it would be quite naive to assume we could create blanket credentials covering every use case. Technology products and projects come to bear on every field, from healthcare to finance, automobiles, entertainment, manufacturing, and so on. A coder would require far too many certifications across multiple subareas to be able to do any work.

Coming up with a system that solves for the third of developers who will, given the “right conditions,” write dirty code might be a problem that persists for much longer. Given how decidedly secretive the impact analysis of products with malicious intent coded into them tends to be, there will always be coders who are comfortable working under that shade.

Why Ethical Thinking Matters

Asking coders to take a mandatory philosophy course on ethics might be a bit of a stretch. But ethical thinking, at a base level at least, is practically hardwired into humans: we tend to know when something is bad or good. In cases where people blatantly ignored the ethical implications of their work, they either assumed they could get away with it (i.e., they’d be shielded from the consequences), or they didn’t ask enough questions about the extent of the impact their work would have on the end user.

Software is everywhere you look today. It is deployed in the military, in banks, at airports and malls, and in all sorts of places and things humans interact with. Adding that extra layer of thinking about the type and scale of impact when building software should be the gold standard for everyone. Otherwise, a lot of people could come to harm, or lose their money, dignity, or means of livelihood, because of some lines of code. There’s also the part where, when things blow up and indictments come in, job losses and jail terms enter the fray. Cambridge Analytica, referenced earlier, filed for insolvency in May, months after the scandal was exposed. Jobs have been lost.

As we push for a more egalitarian world, it is critical to deliberately ask all the possible ethical questions surrounding what we do. A seemingly innocuous project you freelanced on or wrote some code for may be why some people get discriminated against when the product ships halfway around the world. That product could affect the opportunities or quality of life of others.

It’s especially difficult to expect people to be ethical in scenarios where it would be of great personal gain for them not to be. And since there are often obvious degrees of separation between the coder and the end user, anyone can choose to be oblivious to, or simply not bother about, the impact of their code. But there are real implications for real people behind those lines. Lives could be saved or ruined as a result of ethical thinking (or the lack thereof), including the coder’s own. Accounting for ethical implications is vital; everyone is better off for it.


Written by
Solomon Osadolo
Storyteller @ Andela. Dilettante. Techie. Retired Superhero.