
On AI Taking Jobs (and the World)

AI is good at a lot of things, and most of them are well-defined problems. These models are great at acing math olympiads, and some of them have a higher Codeforces rating than me. But their performance stops at those well-defined tasks. One reason for this is that it's hard to evaluate what isn't well-defined, and by the same reasoning, it's also hard to optimize for it.

I agree that AI is great at solving problems, but it isn't great at figuring out what problems to solve or at iterating on the problem itself. Humans are really good at both of these things. We think of hacky ways around challenges and act on problems we see in our communities. "Thinking outside the box," so to speak. Humans are good at this because we have been conditioned on a lifetime of experience doing cool stuff in weird ways, while the best AI is trained to solve hard, but well-defined, IOI problems.

I've talked with a few people in depth about this perspective, and a common response is to ask what happens if you give an AI the same resources humans have: the exact same internal company documents, or the entire Harvard Law curriculum. But this boils down to "if humans can do it, so can AI." In general, I don't think there's enough evidence that AI is gaining the ability to figure out what to do, rather than just being really good at doing what humans tell it to do. And AI can't take over the world unless it figures out what to do for itself.

Comments

  1. Great take. I think it's also worth mentioning, however, the current evolution of our education system and its increasing reliance on AI. The ability of humans to "think outside the box" has been reinforced ever since grade school, but if humans become increasingly reliant on AI (whether students or teachers), then they'll eventually lack the skills needed to think outside the box.

  2. Woah, this is a really cool connection, and I could not agree more. Most people talk about AI reliance as preventing us from gaining research skills, but this makes more sense to me: AI will tell you how other people have "thought outside the box" before, so you never train that skill for a problem no one has solved before.

