

Raising Citizen AI

Artificial intelligence is understandably a hot topic in today's circles of innovation and business development, given its potentially huge benefits, and research into how to make machines learn and think like humans is one of the biggest areas of interest in technology today. AI's sudden explosion onto the scene is down to the massive progress made in just the last few years, driven largely by the exponential rise in available data on consumer behaviour, the accessibility of the required computational power and recent advances in AI techniques and machine learning algorithms.

 

Today's artificial intelligence is what is known as narrow or weak AI, as it is currently designed to perform only narrow tasks such as facial recognition or driving a car. However, as the technology advances, the goal of researchers and investors is to create what is known as general or strong AI, which would be self-learning and eventually able to outperform humans at most cognitive tasks. As I.J. Good explained back in 1965¹, designing smarter AI systems is itself a cognitive task. Such a system could therefore undergo repeated self-improvement, triggering an intelligence explosion that leaves human intellect far behind.

 

While such a superintelligence could have the potential to solve world issues like hunger, poverty and disease on a scale never before possible, it could also cause harm on an equally devastating, or greater, scale. In the last couple of years, more than 100 leaders in robotics and artificial intelligence, including Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates and even Mustafa Suleyman, one of the founders of Google's DeepMind, have issued warnings or expressed concern about the risks posed by super-intelligent machines.


 

Until recently the creation of these super-intelligent machines was seen as fantasy, or at least decades away. However, with recent advances far exceeding expectations, it is becoming more of a reality, and a potential threat, even in our lifetimes. AI researchers surveyed at the 2015 Puerto Rico conference on AI safety estimated that it could well happen before 2060.

 

So are we right to be concerned?

 

Well, there are two main inherent dangers posed by the development of such a super-intelligent self-learning machine:

  • That it could be developed with malicious intent to be destructive.

  • That, even if it is developed with our best interests and goals in mind and with no malicious intent, it could end up using destructive means to achieve its programmed goal.

 

There has been much speculation about how to prevent such a super-intelligent machine from inadvertently causing harm. In one particularly interesting paper, Luke Muehlhauser and Louie Helm of the Machine Intelligence Research Institute explore some of the proposed solutions for programming the AI’s goal system to avoid such an outcome, drawing on theories from moral philosophy and examining evidence from the psychology of motivation, moral psychology and neuroeconomics, with some pretty interesting conclusions.

 

The idea is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. The difficulty lies in the fact that we, as humans, have very complex value systems which are incredibly difficult to specify, and we do not yet even fully understand our own desires or judgements. Moreover, when it comes to moral philosophy, we have not yet identified a moral theory that, if implemented throughout the world, would create the utopian world we want. As Beavers (2012)² writes in Moral Machines and the Threat of Ethical Nihilism, “the project of designing moral machines is complicated by the fact that even after more than two millennia of moral inquiry, there is still no consensus on how to determine moral right from wrong.”

 

Two broad families of moral theory dominate the discussion: rule-based deontological theories and pleasure-based hedonistic utilitarianism. The issue with applying either is that a superintelligent machine will not judge what is better or notice when an outcome is not quite what it should be; it is a machine programmed with a goal, and it will work to achieve that goal in the most efficient and effective way it can.

 

The paper likens this superintelligent machine to a ‘Golem Genie’: a Golem being a creature from Jewish folklore that would do exactly as it was told, regardless of unintended consequences, and a genie in that it would work to ‘grant our wishes’ based on what it has been programmed to do. Under this analogy, applying either of these moral theories would result in unintended consequences. For example, suppose the machine were programmed to maximise human pleasure. According to emerging consensus, pleasure is not, in fact, a sensation in itself but a ‘pleasure gloss’ created by additional neural activity in the “hedonic hotspots” of the brain, so the machine could use nanotechnology, neurosurgery or advanced pharmaceuticals to meet its goal of maximising our experience of pleasure in the most efficient way. Likewise, with negative utilitarianism, a machine programmed to minimise human suffering would, it seems, find a way to painlessly kill all humans, since that would eradicate human suffering entirely.

 

Consider a machine programmed to maximise desire satisfaction in humans. Human desire is implemented in the brain largely through the neurotransmitter dopamine. A superintelligent machine could likely get more utility by rewiring human neurology so that we attain maximal desire satisfaction while lying quietly on the ground than by building and maintaining a utopia that caters perfectly to current human preferences: individual human preferences are incoherent and often conflict with those of others, so the machine could not fully realise its goal while those preferences stand, and rewriting the source of the preferences would be the more efficient route. The paper quotes Chalmers’ (2010)³ article, The Singularity: A Philosophical Analysis, in which he states that “we need to avoid an outcome in which an [advanced AI] ensures that our values are fulfilled by changing our values.”

 

Consequentialist designs for machine goal systems raise a host of other concerns too, given how much humans differ in what they consider useful, profitable or beneficial, and how often the values we hold as human beings conflict.

 

Applying a rules-based moral theory to produce rule-abiding machines, as some machine ethicists propose, also runs into difficulty: when rules conflict, as they often do, some rule will need to be broken, and if the rules fail to comprehensively address all possible situations, unintended consequences can still follow. Muehlhauser and Helm argue that rules are unlikely to constrain the actions of a superintelligent machine. If a machine were programmed to abide by certain rules “outside of” its core goals, it would essentially recognise those rules as obstacles to achieving its goals and would do everything in its considerable power to remove them or “circumvent the intentions of such rules in ways we cannot imagine, with far more disastrous effects than those of a lawyer who exploits loopholes in a legal code”. The success of such an approach would essentially require humans to out-think a superintelligent machine. Programming the rules “within” the goals of the machine does not fare much better, according to the paper.

 

Bottom-up proposals, which would build an ethical code for machines by allowing them to learn general ethical principles from particular cases, also seem unsafe according to the paper, as the machine could end up generalising the wrong principles from coincidental patterns in the cases it is shown, as the sketch below illustrates.
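
To make that risk concrete, here is a minimal, hypothetical sketch in Python. The features, cases and verdicts are invented purely for illustration and do not come from the paper: a naive learner is shown a handful of ethical judgements in which an irrelevant feature happens to line up perfectly with the verdicts, and nothing in the data tells it that feature apart from a genuinely relevant one.

    # A deliberately naive "learn ethics from cases" sketch. Everything here is
    # invented for illustration: three hypothetical features and four toy cases.
    FEATURES = ["causes_harm", "consent_given", "actor_wears_uniform"]

    # Training cases and their ethical verdicts. In this tiny sample the
    # irrelevant feature "actor_wears_uniform" happens, by pure coincidence,
    # to line up exactly with the "permissible" verdicts.
    CASES = [
        ({"causes_harm": 1, "consent_given": 0, "actor_wears_uniform": 0}, "wrong"),
        ({"causes_harm": 1, "consent_given": 0, "actor_wears_uniform": 0}, "wrong"),
        ({"causes_harm": 0, "consent_given": 1, "actor_wears_uniform": 1}, "permissible"),
        ({"causes_harm": 0, "consent_given": 1, "actor_wears_uniform": 1}, "permissible"),
    ]

    def candidate_principles(cases):
        """Return every feature whose presence coincides exactly with a
        'permissible' verdict across the cases. The learner has no way of
        telling which of these reflects a genuine moral principle and which
        is just an accident of the sample."""
        return [
            feature
            for feature in FEATURES
            if all((case[feature] == 1) == (verdict == "permissible")
                   for case, verdict in cases)
        ]

    print(candidate_principles(CASES))
    # -> ['consent_given', 'actor_wears_uniform']
    # The coincidental feature looks exactly as good as the genuine one, so a
    # bottom-up learner could generalise "permissible if the actor wears a
    # uniform" and later judge a harmful, non-consensual act permissible
    # simply because a uniform is present.

Nothing in the data itself distinguishes the genuine principle from the coincidence; that distinction would have to be supplied from outside the learning process, which is exactly the kind of guarantee the paper suggests is hard to give.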

 

The paper lists many other new and emerging moral theories and goal-system designs, and it invites readers to consider them and to challenge their potential outcomes, taking into account the unique challenges of a superintelligent machine’s literalness and superpower.

But simply looking at these we begin to understand the complexities and inherent threat involved.

 

So, with the current advances in AI, the increasingly widespread deployment of the available technologies and the growing investment in their development, where are we with it all, and what guidance or regulation currently exists to shape the technology's future development?

 

In April this year, the House of Lords Select Committee on Artificial Intelligence published a report on AI in the UK, based on evidence from over 200 industry experts. It outlines the potential for AI to benefit the UK economy and openly states that many of the hopes and fears currently associated with AI are “out of kilter” with reality. It states that, while it has considered the prospect of super-intelligent machines, the opportunities and risks associated with AI today are in fact far more mundane.

 

The report identifies ‘distinct areas of uncertainty’ and urges AI researchers and developers to be aware of the potential ethical implications of their work and the risk of it being used with malicious intent. It recommends that the bodies responsible for awarding grants for AI research insist that applicants demonstrate an awareness of the implications of their research and the potential for it to be misused. Even so, there is currently no regulation or legislation in place to guide and regulate the technology's ethical development.

 

The report states that “The UK must seek to actively shape AI’s development and utilisation, or risk passively acquiescing to its many likely consequences” and proposes five principles that could become the basis for an ethical AI development framework in the future. It highlights that individual companies need to ensure their AI systems are transparent, or they run the risk of regulators stepping in to enforce more control and prohibit the use of advanced technologies in sensitive areas.

 

So where are we now with regard to the development and control of this rapidly advancing technology? Where does the responsibility lie?

 

It would appear to lie, rightly or wrongly, with the individual companies and people currently developing it. This brings us back to the idea of raising Citizen AI.

 

It currently falls to the individual companies, researchers and developers to ensure that the development of AI is conducted with all ethical considerations in mind, which places the duty on them to ‘raise’ this technology responsibly.

 

Why "Citizen" AI? The term reflects the change in AI development from a system that is programmed to a system that learns. Businesses are being encouraged to view the development of AI as one would view the raising of a child. As Accenture puts it, AI is here and ready to work alongside us, and it needs to be recognised as a partner to our people in business. As AI's capabilities grow, so too does its impact on people's lives, further underlining the need for it to be "educated" in responsibility, fairness and transparency.

 

Raising responsible AI brings with it many of the same challenges faced in raising a child: teaching it right from wrong, recognising and avoiding bias, and learning to make autonomous decisions in the context of the input around it, which, as we have just seen, is a far from simple task. As with parenting, there is no rule book, and the way a child is raised and the experiences and opportunities it is exposed to will define its character, actions and abilities later in life. With all the best intentions, telling people they should raise their child to observe and understand certain human virtues does not mean they have the ability to do so. This leads to a big question: how will we as a society create the appropriate level of responsibility for businesses to build, evolve and manage systems that have such a massive potential impact on our lives? Do we trust them to raise Citizen AI effectively?

 

  1. Good, Irving John. 1965. “Speculations Concerning the First Ultraintelligent Machine.” In Advances in Computers, Volume 6, 31–88. Elsevier.

  2. Beavers, Anthony F. 2012. “Moral Machines and the Threat of Ethical Nihilism.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 333–344. Intelligent Robotics and Autonomous Agents. Cambridge, MA: MIT Press.

  3. Chalmers, David John. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17 (9–10): 7–65.