Common Objections To Work Involving Artificial Intelligence

Artificial intelligence is taking people’s jobs!

Yes, it is. However, all technology does, and a healthy job market is constantly evolving, with new jobs being created and old ones consolidated. When we look back on the past 300 years, we appreciate the technological advances that have dramatically reduced the cost of providing essentials such as food, water, clothing, and healthcare (though those savings have not always reached consumers), and it can be similar with artificial intelligence.

Understandably, as our ancestors experienced in the cases above, change is hard, especially when technological revolutions rapidly reshape the status quo. However, no single person or organization is driving the evolution of artificial intelligence. We should prepare for the inevitable, and we definitely do not want to be in the dark while malicious actors are working hard to engineer such technology. By keeping my notes, code, and other project artifacts transparent, inviting others to join me in this entirely volunteer initiative, and drawing public attention to the multi-potency of artificial intelligence, I contribute to what I see as the safest, most democratic, decentralized, transparent, responsible, and benevolent possible introduction of this revolutionary technology.

Artificial intelligence can produce unintended negative consequences.

Most new technology comes with unanticipated side effects, a case in point being pharmaceuticals. I do not claim that my work has no negative potential or impact. That being said, people’s view of artificial intelligence is usually very different from the real thing. Often, the term “artificial intelligence” merely denotes an algorithm that is new, powerful, and not well understood, such as neural networks. Because they are not well understood, these algorithms can produce outputs that cause unexpected problems. However, those sorts of problems are really the fault of the system’s architect, not the system itself. Computers rarely make a mistake on their own.

Science fiction and popular statements often echo the experiment-gone-wrong motif as follows: a ‘critical mass’ of software complexity suddenly explodes into a mythologically inspired Artificial Super-Intelligence. It turns evil, breaks out of the lab, and takes over the world. What people rarely get to hear is that we haven’t found such a mythologically inspired algorithm even after seven decades of searching, and it takes a lot of engineering to make agents that act with any awareness of morality — good or bad. Even if an agent did make an unanticipated, off-the-charts leap in intelligence while given unsupervised access to the Internet or a robot, my understanding is that its objectives would not necessarily change. If it displayed a benevolent ‘personality’ prior to the change, it would continue to behave that way afterward. Of course, it’s already a very bad idea to deploy an untested, unsupervised controller on a production system to begin with. The Massive MAN mitigates this issue by virtue of its distributed, decentralized, transparent, and democratic nature. Rapid runaway growth must occur simultaneously across many agents for an unrivaled Massive Multi-Agent subNetwork superpower to emerge. Provided agents occupy diverse niches, this seems unlikely.

There is, however, valid reason to be wary of the gradual centralization of such AI resources. It may be an invisible hand, rather than any particular human, algorithm, or organization, that aligns many of the Massive MAN’s stakeholders’ interests too perfectly for its originally transparent, democratic ideal to hold. In the hands of a few corporate superpowers, it might be too easy for the Massive MAN to serve not-so-benevolent ends. Careful monitoring, diligent research, an involved community, and an informed public provide strong countermeasures against such dangerous outcomes.

Your research empowers people to hurt, manipulate, and exploit others!

It makes me sad to think about how people are always trying to hurt each other, regardless of the technology at their disposal for doing so. Artificial intelligence is no exception. It is an essential technology for lethal autonomous weapons systems, a.k.a. “slaughterbots” or “killer robots”. Poorly trained face classification systems perpetuate racial bias. Deepfake generators can be used to synthesize extremely deceptive propaganda and slander. Recommender systems empower advertisers to run highly targeted marketing campaigns, convincing people to buy things they do not need, never wanted, and cannot afford. Developments in artificial intelligence strain labor markets when jobs are replaced faster than they are created. Existing inequalities, such as those between the Global South and North, are exacerbated, especially when the poor are exploited as underpaid information workers to feed data-hungry machine learning algorithms.

That being said, not all research and development in artificial intelligence has objectionable intent. Artificial intelligence has made many positive industrial, social, and economic contributions to humankind, such as automating quality control processes, driving therapeutic and surgical robots, translating between hundreds of languages, and accelerating healthcare diagnoses. It’s the electricity of the 21st century. Across many industries, artificial intelligence complements, rather than replaces, human labor, reducing human error, increasing reliability and safety, and automating repetitive processes. It is not a substitute for human intelligence, but just another useful tool in our intellectual toolkit.

Thus I personally do not consider generic artificial intelligence research to be wrong. While both I and clever marketers make use of this technology and even contribute to its development, my purpose is not to support personally objectionable use cases, just as the average economic agent inescapably supports the interests of those who do not share his or her values, however far-fetched or unrelated, by merely participating in their economy. After making reasonable, conscientious efforts to mitigate undesirable applications, I am no longer responsible for the misuse of my research and developments. However, I understand that others have different perspectives on this very sensitive topic, and I do not force my views on anyone.

You’re abusing the term “artificial intelligence”.

It seems everyone has their own definition of “intelligence”, and I’m never satisfied with any single one. Nonetheless, I’m sure we can both pick out human and animal behaviors that we consider examples of intelligence. That is what I aim to engineer into the computer programs I call AI.