Also, this isn’t possible with current or even next-gen tech, unless they literally script the “AI” responses to every available situation, which would be infeasible.
LLMs can’t reason or handle complex situations. They are text autocomplete programs or image generation programs.
Game playing isn’t an LLM task. Those are game-specific reinforcement learning models. It’s not easy, but it’s definitely doable with existing tech. Sony’s GT Sophy is a good demonstration of what they’re capable of.
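For what it’s worth, here’s roughly what “game-specific reinforcement learning” means at its simplest: a minimal tabular Q-learning sketch on a made-up toy grid game. The environment, rewards, and hyperparameters are invented for illustration only; GT Sophy itself uses deep RL on a racing simulator at a completely different scale.

```python
# Minimal sketch of game-specific reinforcement learning: tabular Q-learning
# on a hypothetical 1-D grid "game" (positions 0..4, reach position 4 to win).
# This only illustrates the trial-and-error loop such systems are built on.
import random

N_STATES = 5          # positions 0..4; 4 is the goal
ACTIONS = [-1, +1]    # move left or right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index] = expected return for taking that action in that state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy should always move right (action index 1).
print([max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

The gap between this toy loop and something like GT Sophy is exactly the point being argued above: the principle is simple, but scaling it to a full 3D game is where the heavy training cost comes in.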
Machine learning isn’t viable for anything beyond simpler 2D games or small segments of more complex games. The training required to get good results is already intense.