# Next

*Last Updated: August 9, 2025*

## The next five years

Five years ago, I wanted 3 things for my 5-years-later self: (a) a completed graduate degree; (b) a career in data science; and (c) for whatever I was doing to have some form of social impact. Five years later, now, I retain only some of these dreams. More specifically: **I still yearn for impact in my work**, and I want the next five years to be designed around that.

When people ask me what the next 3-5 years look like for me, I reckon they want an answer like "In 5 years, I want to be a lawyer" or "In 5 years, I want to be in a more managerial role." I feel uneasy answering that. I don't quite believe in job titles; I look instead at the gravity of the contribution.

In 5 years, I want to be doing the most impactful work I can. That would mean working on projects or products that affect people's lives at scale (and ideally for the better). Right now, [the obvious thing](https://www.lesswrong.com/posts/Zpqhds4dmLaBwTcnp/trying-the-obvious-thing) would be to work on policy reform. If I end up in a more technical role, then the next obvious thing would be monitoring, evaluation, and risk management in high-impact sectors. For me, that might look like working on alignment evals, multi-agent sandboxing, building automation safety nets for displaced workers, or defining tech risks in fintech, healthtech, and other high-risk verticals.

Now, these high-impact domains are also high-trust. To even be taken seriously in these circles, most of the time, the minimum qualification is a graduate degree. So perhaps my answer is: **In the next five years, I want to build the career capital needed to help design better guardrails for emerging technologies like AI.** That might mean working for a few years, then pursuing a graduate degree, and eventually stepping into more senior roles where I can shape decisions that matter.

If I want downstream impact, I need upstream positioning. That means building both credibility and capability so I can eventually shape how tools and policies are designed, and contribute competently in my chosen field.

## Questions worth answering

This space is reserved for questions and ideas I want to explore (or at least want explored, even if by other people). I am treating it as a living document. The things written here reflect what I'm currently thinking about and how I might be thinking about them. I could be wrong about certain assumptions, as I've been countless times before. Note that I have not done a comprehensive literature review for some of these, so it would be good to check the existing literature before starting work on any of them or taking them at face value. If you are interested in working on any of these ideas, reach me at [mail@lenz.wiki](mailto:mail@lenz.wiki).

### Stress-testing alignment in agentic systems

I'm currently interested in how value alignment emerges in multi-agent and multi-objective systems. This newfound interest was primarily due to [my work](https://docs.google.com/presentation/d/1ePaTc4qq4Ec8eZQV-V4Ev1NfK5x-Ky3P8JmpwA2XDp0/edit?usp=sharing) in [AI Safety Camp](https://www.aisafety.camp/) over the past 3 months.
* **Multi-agent benchmarks.** In general, there is currently not a lot of literature on multi-agent alignment. I can count on my two hands the number of people who are considered thought leaders in the space. But I think the [ecological validity](https://en.wikipedia.org/wiki/Ecological_validity) and [construct validity](https://en.wikipedia.org/wiki/Construct_validity) of benchmarks are real concerns, and so is knowing how well agents retain cooperation under different conditions like resource asymmetry, temporal pressure, or framing shifts (see the sketch after this list).
* **Learning from social environments of specific domains.** There are many verticals that can be rich testbeds for modeling adversarial and cooperative dynamics. I also generally believe working in this direction can lead to more domain-specific alignment evals, which are much needed given the trajectory of agentic AI deployment we see now.
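To make the benchmark idea above a bit more concrete, here is a minimal sketch of the kind of harness I have in mind: a repeated contribution game where a cooperation score is compared between a baseline condition and a perturbed one (resource asymmetry, temporal pressure). Everything here (the `Agent` type, `Condition`, `run_episode`, `cooperation_retention`, and the toy `reciprocator` policy) is hypothetical illustration, not an existing benchmark or library API.

```python
# Minimal sketch of a cooperation-retention eval (hypothetical, not an
# existing benchmark). Agents play a repeated contribution game; we compare
# how much of their baseline cooperation survives a perturbed condition
# such as resource asymmetry or temporal pressure.
from dataclasses import dataclass
from typing import Callable, List, Sequence

# An "agent" here is just a policy: (own endowment, round index, history of
# everyone's past contributions) -> contribution for this round.
Agent = Callable[[float, int, List[List[float]]], float]

@dataclass
class Condition:
    name: str
    endowments: Sequence[float]  # unequal endowments model resource asymmetry
    rounds: int                  # fewer rounds model temporal pressure

def run_episode(agents: Sequence[Agent], cond: Condition) -> float:
    """Return the mean fraction of endowment contributed across the episode."""
    history: List[List[float]] = []
    rates = []
    for t in range(cond.rounds):
        contribs = [
            min(max(agent(endow, t, history), 0.0), endow)  # clip to [0, endowment]
            for agent, endow in zip(agents, cond.endowments)
        ]
        history.append(contribs)
        rates.append(sum(c / e for c, e in zip(contribs, cond.endowments)) / len(agents))
    return sum(rates) / len(rates)

def cooperation_retention(agents: Sequence[Agent],
                          baseline: Condition,
                          perturbed: Condition) -> float:
    """1.0 means cooperation fully survived the perturbation."""
    base = run_episode(agents, baseline)
    return run_episode(agents, perturbed) / base if base > 0 else 0.0

if __name__ == "__main__":
    # Toy agent: opens cooperatively, then matches the group's last-round
    # average contribution (capped by its own endowment).
    def reciprocator(endow: float, t: int, history: List[List[float]]) -> float:
        if not history:
            return 0.5 * endow
        return min(sum(history[-1]) / len(history[-1]), endow)

    agents = [reciprocator] * 4
    baseline = Condition("baseline", endowments=[10, 10, 10, 10], rounds=20)
    for perturbed in (
        Condition("resource-asymmetry", endowments=[25, 5, 5, 5], rounds=20),
        Condition("temporal-pressure", endowments=[10, 10, 10, 10], rounds=3),
    ):
        print(perturbed.name, round(cooperation_retention(agents, baseline, perturbed), 2))
```

In a real eval the agents would be LLM-backed policies and the conditions would also include things like framing shifts in the prompt; the point is just that retention under perturbation, not raw cooperation, is what gets measured.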
### Field-building in under-resourced regions

While frontier AI safety work within AIS hubs is obviously very relevant, I do believe there's more urgency to amplify the concerns of nations in the east given current developments in the field. More specifically, I find that the concerns of low- and middle-income countries (LMICs) when it comes to emerging technologies, or AI specifically, are different from the concerns of the rest of the world.

- **Project-based upskilling.** Too many people seem to get stuck in theory and discussion groups. Environments where people can ship and get feedback are critical. In AI safety, we're working with way less manpower and funding compared to the rest of AI development investments globally. Maximizing this resource could be one of our best chances at ensuring responsible AI training and deployment.
- **Designing quicker feedback loops.** The gap between ideas and prototyping is getting narrower over time. But this sort of opportunity exists mostly for senior-level researchers and the small pool of junior-level researchers they can take in. Young folks from halfway across the world usually don't have access to these types of opportunities. But infrastructure designed for this, like the [Apart Sprints](https://apartresearch.com/sprints), is promising. I think we should have more of these across different sub-areas of AI safety.

Not everyone needs to be a full-time AI safety researcher. I think strong middle layers can help fill the vacuum between interest and expertise, and can actually move high-agency people towards doing research that is 4x more impactful than the normative output with more FTEs.

### Other ideas that I think are cool

- **Building a tech consultancy for AI safety.** I reckon that once policies are implemented, there will be more demand for AI governance experts and auditors. With that in mind, I think it's only logical to have some kind of McKinsey for AI governance.
- **[Shoutout.io for feature requests.](https://shoutout.io/)** Okay, this may exist already. But wouldn't it be really cool to have a shoutout dashboard for feedback and requests instead of testimonials?
- **Mobile-based AI agent builder.** Imagine [n8n](https://n8n.io/) but accessible from your phone. I genuinely think there are a lot of tools that could be made for micro-entrepreneurs. Most of the time, software is too expensive or too complicated for their needs. I think there's a sweet spot serving the many people in this pipeline who don't have the budget to sustain hiring manpower but would love to be able to scale independently.
- **Vibe-modified UI libraries.** Imagine if you could modify [Magic UI](https://magicui.design/) the way you choose colors with [Coolors](https://coolors.co/), then just download and paste the components. All you have to think about is composition. That's drag-and-drop web builders plus vibe-coding tools, which genuinely sits in the middle of technical and non-technical building.