# Now
<div class="pills-container">
<span class="pill">Last Updated: June 2, 2025</span>
<span class="pill">Location: Metro Manila, PH</span>
<span class="pill">Inspired by <a href="https://nownownow.com/about">Derek Sivers</a></span>
</div>
## What's keeping me busy
[[Ethos|I want to make nontrivial progress on things that matter.]] To do that, I have to stay aligned with my values.
- **Actively applying to roles in product or organizational strategy** where I can support lean and fast-moving teams in delivering valuable tools.
- **[Building a SaaS tool](https://loomify.app/)** that helps social commerce sellers track and manage sales across multiple platforms, without changing where they sell.
- **[Designing alignment benchmarks](https://docs.google.com/presentation/d/1ePaTc4qq4Ec8eZQV-V4Ev1NfK5x-Ky3P8JmpwA2XDp0/edit?usp=sharing)** to test whether AI agents preserve human-compatible values with some folks I met at [AI Safety Camp](https://www.aisafety.camp/), under the supervision of [Roland Pihlakas](https://www.lesswrong.com/users/roland-pihlakas).
- **[Curating a database](https://aisafetypapers.com/)** of 700+ papers on AI safety to support better synthesis of knowledge within the field.
- **[Hosting a reading group 2x a week](https://paperclipminimizer.club/)** and exploring those papers in depth with other alignment researchers and budding enthusiasts.
- **Contributing to research projects** organized by the [MIT AI Risk Repository](https://airisk.mit.edu/) under the supervision of [Peter Slattery](https://futuretech.mit.edu/team/peter-slattery) and [Jess Graham](https://futuretech.mit.edu/team/jessica-graham).
## What I'm obsessed with
- **Breaking alignment on purpose.** I like poking multi-agent systems until they crack (basically). I'm exploring how to stress-test alignment within AI agents, then documenting how these systems might be kept stable. I like designing evals for this because it reminds me of game design (which is what I wanted to do in middle school), so last March I started self-studying the [Cooperative AI curriculum](https://course.aisafetyfundamentals.com/cooperative-ai) to help with this.
- **Making abstractions.** I pride myself on being able to translate complicated *stuff* into much simpler interfaces. I think this is why I like ops work (in both product and project management contexts), and why a lot of my research amounts to designing a low-cost or low-complexity version of X. This type of optimization tickles my brain, and I don't think information and access should be reserved for a select few.
## What I'm still learning
I've always been a self-directed learner, but I'm learning that feedback loops and mentorship are force multipliers. If I want to grow, these are things I need to work on:
- **Rapid prototyping of ideas.** My goal over the next year is to basically be [Ethan Perez's ideal empirical alignment researcher](https://www.alignmentforum.org/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research). I'm noticing the gap between my lecture-based uni education in the Philippines and the experiment- and data-heavy training abroad, at least at the schools most of the alignment researchers I meet come from. I'm not yet quick at piloting experiments, but I've gotten better at scoping them thanks to [AI Safety Camp](https://www.aisafety.camp/). Personally, I think Ethan Perez's tips apply even if I don't end up working in research; rapid prototyping is something I should be doing regardless.
- **Positioning my generalist background.** I've been told quite a lot that my resume is confusing and scattered. But when I'm in execution mode, my multidisciplinary background becomes my strength: I can move fast on things I've touched even once, connect the dots, and adapt across problem domains more quickly than my peers. It becomes a challenge, though, when I can't clearly communicate that this is what I bring to the table.