FLF - AI for human reasoning fellowship application answers
1) What is something you’ve created or helped create that you’re particularly proud of?
I love creating things.
Most fun to build & most popular (250k+ visitors): everysingleframe.com
Most time, effort, & profit (my startup): wrenly.ai
AI Safety stuff: givewiki.org (2022) & aisafetyfeed.com (new & current)
Out of everything, I’m most proud of my startup, Wrenly. In 3 short years, we’ve raised a pre-seed round, become profitable, and successfully competed head-to-head with absolute giants in the space. If I weren’t looking to seriously pivot into AI Safety, I would continue to work full-time on Wrenly and grow it toward a large, successful exit. However, my short AI timelines, moral framework, and inherent interests have led me to seriously pursue AI Safety.
2) What do you think are the most crucial challenges in the AI for Human Reasoning space? Why?
I think the tech necessary for AI for Human Reasoning to succeed is already here or will be here within a year. That’s the easy part.
So my take on the most crucial challenges are:
- Can we actually find the best things to build?
  (There are relatively few people searching, and so many possible things to build and iterate on.)
- Can we search, build, and get adoption fast enough for it to matter?
  (The high-leverage pivotal period might only last a few years. It might generally be a short window of opportunity.)
- Adoption may be hard.
  (It could also be easier with new tech, but if you think most of the key users of this tech are older, risk-averse, bombarded with stuff already, etc., then the “ideal customer profile” might be much harder to serve than most standard startup customers.)
- Can success scale fast and big enough?
  (If you have a fast-growing startup, you can quickly raise many millions of dollars. Unless “AI for Human Reasoning” can be made highly profitable, it might fall more into the public-good sphere, which is generally much harder to raise money for and scale quickly.)
Not a crucial challenge, but definitely something to keep in mind: if you can’t sell your product or service, you don’t have access to the clean MRR and churn signals that startups have, and your users’ incentives are different. It’s very easy to think your idea is great when you’re not connected to reality (the market).
So in short, we need to find great ideas, iterate on them, get adoption, and scale them in (maybe) a small number of years.
On the plus side, the talent density in the space will be high, AI leverage will be huge, and the demand for these tools might explode.
3) What might you be interested in working on during this fellowship?
I’m honestly open to any AI for Human Reasoning project, but I guess I’m naturally drawn to “tools to help navigate complex information landscapes”. I wouldn’t want to commit to any idea though; I’d want to hear more about the other fellows, ideas, etc., and then make a more informed decision.
But generally this is how I’m thinking about it:
- I’m getting so much leverage out of AI in my day-to-day life (5k chats w/ chatbots and 50k lines of code), and I would love to bring more of that value to important use cases.
- I’m naturally very interested in epistemics, coordination, debate, etc.
- I built schelling.vote to coordinate IRL with my friends
- I created meetup.com/avl-friendly-ambitious-nerds because I love learning, debating, exploring idea spaces, etc.
- My current side project, aisafetyfeed.com (MVP), explores the idea that important information will be generated at an ever-faster pace (especially once AIs start contributing more), all while important people get busier and busier. If some people already find it hard to keep up, imagine 10x more info and nearly no free time. My plan is to collate all sources of important AI Safety information, process it intelligently with AI, surface (or let people search for) the pertinent stuff, and then help readers digest and explore it.