Musings on what AI catastrophic risks mean for developing countries
Published 29 January 2024 and last updated 26 December 2024 by Lenz Dagohoy • 10 minute read
When I participated in the Global Challenges Project (GCP) in Oxford in December 2023, I had minimal appreciation for the idea of global catastrophic risks (GCRs). When I applied, I had just begun working as a Research Associate for a professor working on AI governance in the Philippines. By my understanding, AI risks were limited to:
- Issues of bias in training that lead to discrimination. Since AI is trained on real-world data, it seems very likely that the AI we develop will carry over the same biases and tendencies for bigotry that our current society exhibits.
- Lack of explainability of its internal mechanisms. No one fully understands how these models arrive at their outputs, not even the people who create them.
- Cyberattacks that can lead to data leaks and loss of institutional trust. People can use AI to hack into big institutions, and such incidents can lead the public to distrust those institutions.
- Lack of transparency and accountability from institutions developing AI. The companies that create AI models might be doing something sus, but we can't know about it unless someone exposes them.
- Extreme polarization and political instability caused by unregulated AI. This can lead to bigger echo chambers that divide the public on important political causes.
There's probably a lot more that I missed, but I was basically scared that the Cambridge Analytica issue was going to happen again on a much larger scale. At least the way I see it, the Philippines was not exactly the strongest economy back in 2023 (and it's arguably much worse in 2024). If anything, we are very vulnerable. To me, a large-scale misaligned agent going wild in an unstable economy is a recipe for disaster. This led me down a rabbit hole towards learning more about AI risk and GCRs through a series of conversations with my local effective altruism (EA) community and the people I met at GCP.
Initially, I just wanted to know why low- and middle-income countries (LMICs) were not included in the conversation about global AI governance.
Although I technically had an idea of what AI risks are, I was also vastly misinformed about how AI works on an analytical level. I admittedly did not fully understand what a lot of the concepts actually meant. I was genuinely concerned with the way AI is being used and developed in the current innovation scene and how it could exacerbate the Philippines' current socio-economic issues. So I applied to GCP with the following questions in mind:
- What can developing countries learn from the US and UK when crafting policies for AI alignment?
- How can countries like the Philippines effectively contribute to global AI governance efforts, given resource constraints and varying levels of technological readiness?
- Are there emerging models or frameworks for assigning liability and accountability to determine who should be held responsible for AI-related harms, especially at the international scale?
I do recognize that most of the work on AI safety is happening in the West, specifically the US and UK. But since we (i.e., the Philippines) are one of the more vulnerable economies globally, shouldn't we get to have a say in the stuff that's probably going to affect our industries on a massive scale?
I wanted to know how we could contribute from halfway across the world. I wanted to know what we could do. I wanted to know who would be accountable should an AI cause massive harm to our economy. Do we remain observers, or should we do something about it?
Eventually, I realized how much bigger the risk actually is.
At least from personal conversations, I've realized that a lot of local AI or data experts do not actually consider AI a risk big enough to count as a GCR. For a lot of the people I've talked to, existential AI risk seems too speculative, and honestly, that's a valid sentiment. A professor once told me that all technologies get demonized when they are released. We think they are threats, but we couldn't be more wrong. We shouldn't be afraid of AI, but of the humans who enable these mere tools to become a cause for concern. I thought, 'well, that makes sense'. By my understanding, AI is a program, and a program needs a programmer.
I know the EA community thinks of it differently, though, which is why I asked an EA-aligned friend, who was more exposed to the AI safety community than me, what they thought about this. Can AI come to a point where it doesn't need humans? Is it fair to compare AI to previous technology? At what point does it realistically become a global catastrophic risk? They gave me three points to ponder:
- AI and AGI (i.e., artificial general intelligence) are fundamentally different from previous technologies. The adoption of AI/AGI is very likely to bring about a societal and economic transformation that far exceeds that of the Industrial Revolution. Honestly, when you think about it, so many people died during the Industrial Revolution because of the lack of humane labor laws. Let's be real, so many people have also died from engaging with the internet: the dark web with all its illegal transactions, the mental toll of social media, even the issue of doxing. It is difficult to argue that AI is harmless because it's 'just like any other tech'. All these technologies came with tradeoffs, and those tradeoffs happened to be actual people's lives. We just forgot about them. This happened not because the tech is bad but because humans have failed (time and time again) to evolve as quickly as tech does. We have poor foresight about the issues that should alarm us. I'd argue that it's better to be anxious about these new technologies than to be dismissive of their possible (even speculative) risks. Maybe then, we'd avoid so many casualties.
- Given its current development trajectory, AI/AGI has the potential to automate an increasingly wide range of tasks. Some argue that AI, like other technologies, could lead to new jobs and skill sets. However, there is a real economic incentive to automate cognitively demanding tasks that were previously thought un-automatable. This can lead to increasing economic disempowerment.
- Major AI companies like OpenAI and DeepMind are explicitly trying to develop AGI, which could be orders of magnitude more intelligent and capable than humans in virtually every domain. AGI's ability to optimize towards any given goal, far beyond what humans can do, is deeply concerning. I understand that this idea seems arrogant. It's almost like creating 'God.' But honestly, whether these companies develop AGI or not, there is no guarantee that what they develop will be net positive for the world.
Basically, we don't know what the impact of AI could be. Given the worst-case scenario, it doesn't really hurt to spend our time working on preventing it. The idea is that we hopefully never get to experience the worst-case scenario we can imagine. AI is developing faster than we are adapting to it, and this adaptability gap is even worse in LMICs, where the digital divide is so wide.
Simply put, we want a roadmap, with rules and plans, for how AI should be used and developed to ensure that this technology is safe and fair. The way I see this, being "safe" includes the idea that developments in AI should protect the long-term interests of human communities. I'm not enough of an expert to pinpoint how this could be done. If it turns out that safe AI means the most lives saved in terms of well-being, then so be it. In this regard, if a person in an LMIC loses their source of livelihood because of AI and remains unemployed and unable to find work, that's a much worse outcome in terms of WELLBYs (well-being-adjusted life years), making that AI harmful for that person.
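To put a rough, hypothetical number on that last point: one WELLBY is commonly defined as a one-point change in life satisfaction (on a 0-10 scale) sustained for one year. The sketch below shows only that arithmetic; the function name and every figure in it are assumptions I made up for illustration, not data about any real worker.

```python
# Minimal sketch of the WELLBY framing above. All numbers are hypothetical.
# One WELLBY ~ a one-point change in life satisfaction (0-10 scale) over one year.

def wellby_change(delta_life_satisfaction: float, years: float) -> float:
    """Change in WELLBYs: life-satisfaction points gained or lost, times years affected."""
    return delta_life_satisfaction * years

# Hypothetical worker displaced by AI who stays unemployed:
# life satisfaction drops from 6 to 3 and stays there for an estimated 4 years.
loss = wellby_change(delta_life_satisfaction=3 - 6, years=4)
print(f"Estimated WELLBY change: {loss}")  # -12, i.e., a loss of 12 WELLBYs
```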