A long time ago and far away, a captain in the British Army retired from service and moved to an estate near Ballinrobe in Ireland. His name was Charles, and he settled at Lough Mask House, a property owned by a wealthy landowner by the name of Lord Erne (the third Lord Erne, as it happens), becoming an agent for Erne’s estates. Charles oversaw the Lord’s properties in County Mayo and collected the rents. Lord Erne and other landlords of the time owned something like 99.8% of all the land and property in Ireland. The Irish were not entirely happy with this arrangement.
By all accounts, Charles, our retired captain, was a bit of a control freak and sought to enforce the ‘divine rights’ of the landowners, in particular those of Lord Erne, through increasingly strict rules, financial penalties for minor infractions, and forced evictions. Eventually, the good people of County Mayo went on strike and began a campaign to effectively ostracize Charles from the community in which he lived. James Redpath called it “social excommunication”. The captain’s full name, I should probably mention, was Charles Cunningham Boycott. It’s from him that we get the word “boycott”.
Over the decades and into the 21st century, boycotts have become increasingly complex affairs. As large corporations swallow up smaller companies that have themselves swallowed up others, the trail of who owns whom becomes murkier all the time. You may decide to boycott one particular product, but that product may be owned by a company that is owned by a shell corporation that is itself owned by a numbered corporation in a foreign country.
The idea of the boycott has recently entered the world of free software, with some trying to punish companies whose business practices don’t fall in line with a particular social ideology. There was a recent spate of articles about the creators of particular open source packages fighting against companies that were using those packages; it had to do with a developer who worked on a piece of software that was being used by ICE (Immigration and Customs Enforcement) in the United States.
Do we blame the tool, or do we blame the tool user? Perhaps we shouldn’t blame either. After all, how are we to know how a particular tool is going to be used? It’s pretty obvious that a gun’s purpose, for instance, is to kill, but it has also been used to hunt and feed families, and to defend and protect life. When our free software is being used by a company that sells to an organization that collects children at the border and locks them in cages, can the developer of that software decide to recall their software or block its use?
Consider OpenSSL, the library that implements the secure sockets layer used to encrypt web traffic and digital communications. This particular piece of technology is basically the backbone of the entire internet and of e-commerce as we know it. If we tried to decide in advance whether or not a company was worthy of being allowed to use this particular piece of software, it would be a daunting task indeed. By what means would you analyze whether or not a company was acting in the public good, for instance? Are the products being sold dangerous to minors? Have they been tested in a way that makes sure they’re completely safe? Is the labour being employed to produce the product sold on our open-source-powered website taking advantage of underage children working in sweatshop conditions in some third-world country?
Worse . . . Is the technology being used to encrypt communications between terrorist groups? Foreign agents? Powers hostile to our governments?
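To get a sense of just how deep OpenSSL sits in the stack, here is a minimal sketch in Python. It assumes a stock CPython build, whose standard ssl module is, on most systems, linked against OpenSSL; the URL is only a placeholder.

```python
# A small illustration of how far down the stack OpenSSL sits.
# On most systems, Python's standard ssl module is built against
# OpenSSL, so even a one-line HTTPS request depends on it.
import ssl
import urllib.request

# Report which TLS library this interpreter was built against
# (typically something like "OpenSSL 3.0.x ...").
print(ssl.OPENSSL_VERSION)

# This request is encrypted by that same library. The OpenSSL
# developers have no practical way of knowing, let alone approving,
# who is making the call or what is being sold on the other end.
with urllib.request.urlopen("https://www.example.com") as response:
    print(response.status)
```

The point is simply that the library travels into every one of these calls with no checkpoint at which its developers could vet the user, which is exactly why deciding in advance who deserves to use it is so impractical.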
When we create an open-source package and release it into the wild, it typically doesn’t come with any kind of provision as to how that software is going to be used. To be perfectly honest, it would be practically impossible to do this, but let me step back for a moment and try to sketch out a framework that makes a little bit of sense, in terms of how we might set this up.
In the world of Open Source, we have “The Four Freedoms”. They bear consideration and are as follows.
A program is “free software” if the program’s users have the four essential freedoms:

- Freedom 0: the freedom to run the program as you wish, for any purpose.
- Freedom 1: the freedom to study how the program works, and to change it so it does your computing as you wish (access to the source code is a precondition for this).
- Freedom 2: the freedom to redistribute copies so you can help others.
- Freedom 3: the freedom to distribute copies of your modified versions to others (access to the source code is a precondition for this).
If software is licensed in a way that does not provide these four freedoms, then it is categorized as nonfree or proprietary.
By definition then, for free software to be ‘free’, there can be no restrictions on who uses it or what they use it for. Can you block people from using your free software?
This has led to the suggestion that we create a new license called “The Hippocratic License” which, like the medical oath to which it alludes, begins by saying that the software being developed must, above all, do no harm. If we adhere to the Four Freedoms, software released under such a restriction would be nonfree. As with my OpenSSL question, it’s just not that simple, since the initial user of the free software may themselves not be causing harm, but how many levels deep into the distribution of that software do you need to go before you discover that harm is being done?
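To make the tension a little more concrete, here is a purely illustrative sketch of how the difference shows up in a source file’s license header. The file names and the wording of the restriction are invented for the example; MIT is a standard SPDX identifier, and the Hippocratic License has, to my knowledge, an SPDX identifier of its own (Hippocratic-2.1).

```python
# permissive_module.py  (illustrative)
# SPDX-License-Identifier: MIT
#
# No condition is placed on who runs this code or why, which is
# precisely what keeps it "free" under the Four Freedoms.


# restricted_module.py  (illustrative)
# SPDX-License-Identifier: Hippocratic-2.1
#
# The license adds a field-of-use condition, roughly "the software
# shall not be used to harm others". That single clause limits
# Freedom 0 (run the program for any purpose), so by the free
# software definition the code is no longer free, however well
# intentioned the restriction may be.
```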
There are no easy answers and plenty of questions. Perhaps the first to ask is, “Is it time to rethink what ‘free’ means?” Feel free to weigh in with your comments.
Note: The image that accompanies this post is from a version of the Hippocratic Oath in Greek and Latin, and published in Frankfurt in 1595. For a high resolution image, visit Wikimedia Commons here.