Notes by Lenz

Working notes. Digital garden. Brain dump.

Why I started learning more about GCRs

Lenz Dagohoy • 29 January 2024 • 10 minute read

I came to the Oxford workshop with minimal appreciation for the idea of global catastrophic risks (GCRs). At the time I applied, I had just begun working at a local think tank doing some research on AI governance in the Philippines. AI risks, as I believed, were limited to:

  • Issues of bias in training leading to discrimination (i.e., since AI is trained on real-world data, it is very likely to carry over the same biases and bigoted tendencies)
  • Lack of explainability of its internal mechanisms (i.e., no one knows how AI actually works, even the people who code AI)
  • High-impact cyberattacks leading to data leaks and loss of institutional trust (i.e., people can use AI to hack big institutions, and this can lead to people distrusting these institutions)
  • Lack of transparency and accountability from public and private institutions developing AI (i.e., AI developers might be doing something sus but we won’t know about it if no one exposes them)
  • Extreme polarization and political instability caused by unregulated AI (i.e., bigger echo chambers because of AI)

...and probably a lot more that I can't exactly articulate. Basically, I was scared that the Cambridge Analytica issue was going to happen again, but this time on a much larger scale. At least the way I see it, as a non-expert but an avid follower of the news, the Philippines was not exactly the strongest economy back in 2023 (and it's arguably much worse in 2024). If anything, we are very vulnerable. To me, a large-scale misaligned agent going wild in an unstable economy is a recipe for disaster. This led me down a rabbit hole towards learning more about AI risk and GCRs through a series of conversations with my local EA community and the people I met in GCP.

Initially, I just wanted to know why LMICs were not included in the conversation about global AI governance.

Although I technically had an idea of what AI risks are, I was also vastly misinformed about how AI works on an analytical level. Even though I read through Bluedot Impact's AI Safety Alignment Course, I admittedly did not fully understand what a lot of the concepts actually meant.[^1] I was genuinely just concerned that the way AI is being used and developed in the current innovation scene[^2] could possibly exacerbate the Philippines' socio-economic instability. So I applied to GCP with the following questions in mind:

  1. What can developing countries learn from the US and UK when developing policies for AI alignment?
  2. How can countries like the Philippines effectively contribute to global AI governance efforts, given resource constraints and varying levels of technological readiness?
  3. Are there emerging models or frameworks for assigning liability and accountability to determine who should be held responsible for AI-related harms, especially at the international scale?

I do recognize that most of the work on AI safety is happening in the West, specifically the US and UK. But since we (i.e., the Philippines) are one of the more vulnerable economies globally, shouldn’t we get to have a say on the stuff that’s probably going to affect our industries on a massive scale?

I wanted to know how we can contribute from halfway across the world. I wanted to know what we could do. I wanted to know who would be accountable should an AI cause massive harm to our economy. Do we remain as observers or should we do something about it?

Eventually, I realized how much bigger the risk actually is.

At least from personal conversations, I realized that a lot of local AI or data experts do not actually consider AI to be a risk large enough to count as a GCR. For a lot of the people I've talked to, existential AI risk seems too speculative, and honestly, that's a valid sentiment. A professor once told me that all technologies get demonized when they are released. We think they are threats, but we couldn't be more wrong. We shouldn't be afraid of AI, but of the humans who enable these mere tools to become a cause for concern. I thought, 'well, that makes sense'. By my understanding, AI is a program, and a program needs a programmer.

I know the effective altruism (EA) community thinks of it differently, though, which is why I asked an EA-aligned friend, who is more exposed to the AI safety community than I am, what they thought about this. Can AI come to a point where it doesn't need humans? Is it fair to compare AI to previous technology? At what point does it realistically become a global catastrophic risk? They gave me three points to ponder:

  1. AI and AGI (i.e., artificial general intelligence) are fundamentally different from previous technologies. The adoption of AI/AGI is very likely to bring about a societal and economic transformation that far exceeds that of the industrial revolution.[^3]
  2. Given its current development trajectory, AI/AGI has the potential to automate an increasingly wide range of tasks. Some argue that AI, like previous technologies, will create new jobs and skill sets. However, there is a real economic incentive to automate cognitively demanding tasks that were previously thought un-automatable, which can lead to increasing economic disempowerment.
  3. Major AI companies like OpenAI and DeepMind are explicitly trying to develop AGI. Such a system could be orders of magnitude more intelligent and capable than humans in virtually every domain. AGI's ability to optimize towards any given goal, far beyond what humans can do, is deeply concerning.[^4]

Basically, we don't know what the impact of AI could be. Given the worst-case scenario, it doesn't really hurt to spend our time working on preventing it; the idea is that we hopefully never get to experience the worst case we can imagine. AI is developing faster than we are adapting to it, and this adaptability gap is even worse in low- and middle-income countries (LMICs), where the digital divide is so wide. Simply put, we want a roadmap: rules and plans for how AI should be used and developed to ensure that this technology is safe[^5] and fair.


[^1]: Note to self: Study this again and hopefully understand it the next time.

[^2]: Admittedly, a lot of my concerns were based on hearsay, which is not exactly reliable. My circle, both personal and professional, includes people who are active in the local startup scene, and I hear a lot of the ideas they want to implement via AI. Not to name any names, but some of these ideas were just downright questionable to me.

[^3]: Honestly, when you think about it, so many people died during the industrial revolution because of the lack of humane labor laws. Let's be real, so many people have also died from engaging with the internet. There's the dark web with all its illegal transactions, the mental toll of social media, and even the issue of doxing. It is difficult to argue that AI is harmless because it's 'just like any other tech'. All these technologies came with tradeoffs, and those tradeoffs happened to be actual human lives. We just forgot about them. This happened not because the tech is bad but because humans have failed (time and time again) to evolve as quickly as tech does. We have poor foresight on the issues that should alarm us. I'd argue that it's better to be anxious about these new technologies than to be dismissive of their possible (even speculative) risks. Maybe then, we'd avoid so many casualties.

[^4]: I understand that this idea seems arrogant. It's almost like creating 'God.' But honestly, whether these companies develop AGI or not, there is no guarantee that what they develop will be net positive for the world.

[^5]: The way I see it, being "safe" includes the idea that developments in AI should protect the long-term interests of human communities. I'm not enough of an expert to be able to pinpoint how this could be done. If it turns out that safe AI means the greatest number of lives saved in terms of well-being, then so be it. In this regard, if a person in an LMIC loses their source of livelihood because of AI and remains unemployed and unable to find work, that's a much worse WELLBY value, which would make said AI harmful for that person.
