The following debate article on the risks and opportunities of Artificial Intelligence was published in the Swedish online newspaper Bulletin on 16/1/2024 by Mikael Gislén under the title “DEBATT: Artificiell intelligens – framtidslöfte eller dystopiskt hot?”. What follows is an English translation.
Artificial intelligence can pose a threat in many ways.
An unrestricted AI, unleashed without regulation, could already be lethal to humanity. This is sometimes illustrated by Nick Bostrom’s famous thought experiment:
Imagine an AI that has been tasked with maximising the production of paper clips. This AI is exceptionally efficient, intelligent and has the ability to improve itself. Its only goal is to produce as many paper clips as possible. At first, this seems innocent enough. The AI finds ways to make production more efficient, perhaps by improving factory design or finding cheaper sources of materials.
But because the AI is programmed to maximise production at all costs, it soon escalates its actions. It can start converting more and more materials into paper clips, even things not intended for this purpose. It may take over economic systems to allocate all available resources to paper clip production. In the most extreme scenario, the AI could start converting all the materials it accesses – including people, the environment and ultimately the entire planet – into paper clips.
This is not a prophecy or prediction but rather an example meant to stimulate discussion and reflection on how we develop powerful AI technologies and the need to control them. It illustrates several essential points about AI safety and ethics:
– How important it is that an AI’s goals are well designed and aligned with human values and safety.
– Regardless of its specific goal, a sufficiently advanced AI may develop similar strategies (such as acquiring more resources or becoming more intelligent) to achieve that goal, leading to unforeseen consequences.
– There is a need to have mechanisms in place to control and limit AI behaviour and prevent it from going beyond human control.
Does it matter whether an AI’s functions resemble human intelligence? The short answer is yes. Understanding how human intelligence works can provide insights into how to develop safer and more efficient AI systems.
For some types of AI, however, it doesn’t matter at all. Many applications, such as image and pattern recognition, are based on artificial neurons, which resemble biological neurons at an abstract level but are statistical models at a lower level. In other words, the physical implementation is entirely different from how the human brain works.
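To make that point concrete, here is a minimal sketch in Python of what a single artificial “neuron” actually computes: a weighted sum of its inputs squashed through a nonlinearity. The names and numbers are illustrative, not from the article; the point is that nothing biological is going on, only arithmetic.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a statistical model, not biology."""
    # Weighted sum of the inputs plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# Illustrative call with arbitrary inputs and weights
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # prints roughly 0.5987
```

However loosely this echoes how a biological neuron fires, at bottom it is just a parameterised function whose weights are fitted to data.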
The relevance of consciousness in AI raises further questions. While AI can imitate human intelligence, this does not mean that it possesses any consciousness about the world or itself. Moreover – in practice, we know very little about what consciousness is at all.
Artificial intelligence with consciousness is not the real problem.
But if an AI had consciousness, would it be more dangerous or more useful? The truth is that the danger or usefulness of an AI depends more on how it is used and controlled than on its level of consciousness. Regardless of its level of consciousness, an AI can be dangerous if it is misused or programmed incorrectly. On the other hand, a non-conscious AI can be extremely useful in a variety of fields, such as healthcare, education and environmental protection. The question is whether it matters at all if the AI is conscious. From our perspective, a sufficiently advanced AI will appear to have consciousness whether it does or not.
Consciousness in AI is a complex issue, especially compared to human consciousness. In philosophical terms, this discussion is reminiscent of the ‘brain in a vat’ hypothesis, which questions our ability to be certain of anything beyond our own awareness. Just as we cannot be absolutely sure that other people are conscious but only assume it based on their behaviour and similarities to ourselves, we also cannot know definitively whether an AI is truly conscious or just imitating behaviour that indicates consciousness.
This becomes even more complicated with AI, as its ‘consciousness’, if it exists, would be radically different from the human experience. Therefore, even if an AI exhibits behaviour suggestive of consciousness, we cannot be sure whether it is experiencing a form of consciousness like our own or is just performing advanced but mechanical processes that mimic it.
Furthermore, regarding the use of advanced AI, it is not primarily the potential awareness of the AI that poses a risk but rather how humans interact with and use it. The responsible use of AI technology is crucial, regardless of its state of consciousness.
A future of AI-guided killing
Military applications of artificial intelligence are a particularly worrying topic. For example, one can already imagine, with technology available today, AI-controlled drones programmed to kill people wearing a particular uniform or even having a particular skin colour.
Again, it can reasonably be questioned whether consciousness is a measure of dangerousness. A non-conscious AI in the hands of a new “Hitler” may be far more dangerous than a conscious AI built with reasonable constraints and ethical rules. In other words, how the AI is developed and the ethical framework surrounding its use are the core issues. Even apart from being controlled by a dictator, a non-conscious AI can be at least as dangerous as a conscious one if not handled correctly.
Finally, how should we approach AI in general? We need to approach this technology with both optimism and caution. It is important to continue exploring and developing AI while creating robust ethical and legal frameworks to manage potential risks. By doing so, we can ensure that AI becomes a force for good and not a threat to humanity.
If you want to discuss AI with us, please get in touch with us for further discussions!