Urgently needed: AI governance in cyber warfare


Artificial intelligence is quickly becoming central to societal development. AI has great power to improve daily life, from education to healthcare, from sustainability to defense. But AI also brings to the fore a number of risks that cut across the core values of our societies. For instance, when an AI system is biased and prejudiced, it doesn't discriminate against just one person; it discriminates against thousands.

In the context of a battlefield, where AI conducts target identification, an insecure system may harm large numbers of non-combatants, violating fundamental principles of war. If we abandon the ethics of cyber war, we might as well stop defending liberal democracies, because we would be behaving exactly as our opponents want us to. This starts with the underlying ethics and governance of these technologies, and with the values we hold dear and must protect and preserve.

Traditionally, Western countries did not invest heavily in non-kinetic cyber warfare, because at the time the West held a technological upper hand that gave it more room to maneuver in this domain. This is no longer the case. China and Russia have partnered and now hold formidable positions in cyberspace.

The war in Ukraine has proven cyber to be a concrete element of conflict: not just a domain of war but a terrain where opponents confront each other daily. Currently, there is no international agreement regulating cyber warfare. An accident or an escalation could create a dangerous scenario that would be difficult to recover from. Suppose, for instance, that an autonomous weapons system is deployed without any threshold of risk or predictability. That is why we need rules in cyberspace: not just to curb the behavior of opponents, but to hold them accountable should the unimaginable happen.

Governance of cyber warfare may seem like an alien concept, because governance requires that conflicting parties adhere to the same set of rules and principles in the context of warfare.

Unsurprisingly, this is lacking today. That is because lives are not visibly being lost and no physical space is being infiltrated; algorithms, used to disrupt, are the weapon of choice. This thinking requires a conceptual shift: a philosophical analysis of what can be considered good or bad in this domain. How much damage can be assessed, and how much cannot? Who can be held accountable?

During the Cold War, the doctrine of mutually assured destruction allowed opponents to defuse the nuclear threat; both parties could pledge détente. But cyber capabilities are not developed in the same way as nuclear capabilities. We must look at this problem in a different context and think about new dynamics, to arrive at a model that encourages the opponent to behave responsibly.

There is also an ethical reflection that needs to occur. AI is more than a tool: it is a technology with agency, a power to interact with its environment, and the capacity to act autonomously. AI systems must align with society's fundamental values; they should be designed to foster transparency, and for sustainability, without draining too many resources from the environment.

As humans, we should have the ability to intervene when we are unable to control the risks or outcomes.

Conclusion

Despite government efforts to regulate technologies like AI, there will always be gaps between policy, regulation, and the rapid pace of innovation. While some of these gaps are necessary for the democratic process, allowing representatives to debate and consider the interests and risks involved openly, it is still important to find solutions that are acceptable to all parties.

One way to bridge this gap is through collaboration with technology developers, gaining insights into the direction of technology and initiating public discourse with individuals from diverse backgrounds and areas of expertise. This helps ensure that those responsible for governance are aware of the issues that need to be addressed, a practice not always followed by administrators and regulators.

Will adversaries ever play by the same code of ethics and rulebook? Perhaps not. But even in a world filled with good people, we would still need coordinated standards and regulations. With guardrails in place, we can be aware of the risks and conduct good governance.

Technology can provide significant leverage; we need it now more than ever because future generations will confront hardships related to climate, disease, warfare, and resource scarcity. Society relies on advanced technologies. We must employ them intelligently within the appropriate context and for the right purposes.
