TwinIon Posted October 30, 2019

Google's DeepMind team has published a new paper in Nature detailing AlphaStar, their StarCraft II AI. While they've been playing against high-level players for almost a year, those earlier matches were considered unfair because AlphaStar was given superhuman speed and vision. The new builds are far more limited. AlphaStar now sees the world through a camera and is only allowed 22 actions every 5 seconds of play. It can also play as any of the three races. Training is now fully automated as well, and it starts only with agents trained by supervised learning, rather than with previously trained agents from past experiments.

The final version relied on 44 days of training and placed within the top 0.15% of the European player base. While that makes it a "Grandmaster," it also means it can't consistently beat the best players in the world. It's particularly vulnerable to strategies it hasn't seen before. You can see replays of its games here.

I think it still has a way to go, but I see this as a far more interesting and generally applicable kind of problem than chess or Go. In particular, not having perfect information and having such a huge number of possible moves (10^26 actions to choose from at any moment!) means it's facing a lot of unpredictability. It certainly seems like it's only a matter of time before it's the best in the world. Maybe a show match at the next BlizzCon.
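For a sense of what that 22-actions-per-5-seconds cap means in practice, here's a minimal Python sketch of a sliding-window rate limiter. This is just my own illustration of the general idea, not DeepMind's actual interface code; the ActionRateLimiter class and the ~22.4 frames-per-second game speed (StarCraft II's "faster" setting) are my assumptions.

import time
from collections import deque

class ActionRateLimiter:
    """Cap an agent at max_actions per window seconds (sliding window)."""

    def __init__(self, max_actions=22, window=5.0):
        self.max_actions = max_actions
        self.window = window
        self.stamps = deque()  # timestamps of recently issued actions

    def try_act(self, now=None):
        """Return True if an action may be issued now, recording it if so."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.max_actions:
            self.stamps.append(now)
            return True
        return False  # over budget; the agent no-ops this frame

# Simulate 10 seconds of game frames at ~22.4 fps.
limiter = ActionRateLimiter()
issued = sum(limiter.try_act(now=frame / 22.4) for frame in range(224))
print(issued)  # 44: two full 22-action budgets fit in 10 seconds

The real constraint is reportedly more nuanced (the paper counts non-duplicated actions and also models human-like reaction delays), but the basic budget idea is the same: bursts are allowed, sustained superhuman APM is not.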
legend Posted October 30, 2019

It's an impressive result. This is a very hard problem and I didn't think we would get this far without making some theoretical advances. Instead, this is mainly a bunch of existing techniques thrown together with insane amounts of game playing and compute. In some ways I'm kind of disappointed in that. The trend of "let's just keep throwing compute and money at it" only works if the decision-making task you're trying to solve is in a simulation. If we couldn't do it in such a brute-force way, the domain could have helped inspire more fundamental advances.
Kal-El814 Posted October 30, 2019

2 minutes ago, legend said:

It's an impressive result. This is a very hard problem and I didn't think we would get this far without making some theoretical advances. Instead, this is mainly a bunch of existing techniques thrown together with insane amounts of game playing and compute. In some ways I'm kind of disappointed in that. The trend of "let's just keep throwing compute and money at it" only works if the decision-making task you're trying to solve is in a simulation. If we couldn't do it in such a brute-force way, the domain could have helped inspire more fundamental advances.

So they shouldn’t just be constructing additional pylons?
Zaku3 Posted October 31, 2019

Can it play Hearts of Iron? It'd be nice to not always need 18-some-odd players to get a good game.
Keyser_Soze Posted October 31, 2019

OpenAI could kick its butt.