Hacker News

halfnhalf yesterday at 8:51 PM

Don't table tennis players learn to predict how the ball will act based on their opponent's movements? It seems like if they can't do that with a robot opponent (which doesn't look or behave like a human), then they wouldn't be able to play at their best.


Replies

ACCount37 yesterday at 9:14 PM

I do expect this to have a "novelty edge" over human opponents - one that can be closed with practice, on the human end.

And, like many AIs, it can have "jagged capability" gaps, with inhuman failure modes living in them - which humans can learn to exploit. The robot won't adapt to that exploitation because it doesn't learn continuously. This has happened with various types of ML systems designed to compete against humans.

hermitcrab yesterday at 9:26 PM

You can predict the movement of the ball (speed, direction, spin) from the movement of the bat relative to the ball. What the rest of the player's body is doing is irrelevant to predicting what the ball will do - but it is relevant to predicting where the player will be when you make your return shot.
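That principle can be sketched as a toy contact model: the component of the bat's velocity (relative to the ball) along the bat's face normal sets the outgoing speed, while the tangential component imparts spin. This is purely illustrative - the function name and the `restitution`/`grip` coefficients are assumptions, not measured table tennis physics.

```python
import numpy as np

def predict_contact(bat_velocity, ball_velocity, bat_normal,
                    restitution=0.9, grip=0.6):
    """Toy model of bat-ball contact (illustrative only).

    Splits the bat's velocity relative to the ball into a normal
    component (drives outgoing speed) and a tangential component
    (drives spin). Coefficients are made up for illustration.
    """
    n = bat_normal / np.linalg.norm(bat_normal)
    rel = bat_velocity - ball_velocity           # bat velocity relative to ball
    v_normal = np.dot(rel, n) * n                # component along the face normal
    v_tangent = rel - v_normal                   # sliding component across the face
    out_velocity = ball_velocity + (1 + restitution) * v_normal
    spin = grip * np.linalg.norm(v_tangent)      # scalar spin proxy
    return out_velocity, spin

# A bat moving purely along its face normal produces no spin;
# adding a tangential sweep produces spin without changing direction.
flat_v, flat_spin = predict_contact(
    np.array([10.0, 0.0, 0.0]),   # bat velocity
    np.array([-5.0, 0.0, 0.0]),   # incoming ball velocity
    np.array([1.0, 0.0, 0.0]))    # bat face normal
brush_v, brush_spin = predict_contact(
    np.array([10.0, 5.0, 0.0]),
    np.array([-5.0, 0.0, 0.0]),
    np.array([1.0, 0.0, 0.0]))
```

Note that nothing here depends on the rest of the player's body - only on the bat's motion relative to the ball, which is the comment's point.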

LeCompteSftware yesterday at 10:30 PM

Yes, you're dead on:

  Rui Takenaka, an elite-level player who has won and lost matches against Ace, said in comments provided by Sony AI: "When it came to my serve, if I used a serve with complex spin, Ace also returned the ball with complex spin, which made it difficult for me. But when I used a simple serve - what we call a knuckle serve - Ace returned a simpler ball. That made it easier for me to attack on the third shot, and I think that was the key reason why I was able to win."
It seems like the human players might be playing in a way that tacitly overestimates their AI opponent's intelligence and underestimates its skill. AFAIK the SOTA Go AIs are still vulnerable to certain very stupid adversarial strategies that wouldn't fool an amateur (though they're not something you'd come up with in normal play - more like a weird cheat code). I wonder if this could get ironed out with a bit more training against humans rather than in simulation.