AI is good at a lot of things, and most of them are well-defined problems. These models ace math olympiads, and some of them have higher Codeforces ratings than I do. But their performance stops at those well-defined tasks.

One reason for this is that it's hard to evaluate what isn't well-defined, and by the same reasoning, it's also hard to optimize for it. I agree that AI is great at solving problems, but it isn't great at figuring out which problems to solve, or at iterating on the problem itself. Humans are really good at both of these things. We think of hacky ways around challenges and act on problems we see in our communities. "Thinking outside the box," so to speak. Humans are good at this because we have been conditioned on a lifetime of experience doing cool stuff in weird ways, while the best AI is trained to solve hard, but well-defined, IOI problems.

I've talked to a few people deeply about my perspective, and a common response I get is abou...