Contents
This course is oriented towards the creation of artificial players for computer games. We will focus especially on games for which a forward model can be created, so that search-based methods of artificial intelligence can be used. We will deal neither with navigation and path-finding (covered by NAIL068) nor with neural networks and evolutionary algorithms (as they are taught elsewhere). Instead, we will build on Artificial Intelligence I (NAIL069) with search-based methods suitable for games, e.g., Monte Carlo Tree Search, and with various suboptimal heuristic approaches for modeling game trees in video games.
News
Follow the appropriate channel at Gamedev Discord!
https://discord.gg/c49DHBJ
Dates
Lectures + Labs: Mondays, 9:00, SW2 (we start on 4.10.2021)
Course Exam
There will be an oral examination done during the examination period. Find the list of topics for the oral examination in this document.
Exam dates:
Lectures
Note that each lecture is associated with a Q&A link – a GDrive doc where you can anonymously post your questions or share ideas!
Lectures Schedule
No. | Date | Topic | Content | Slides |
1. | 4.10.2021 | AI for StarCraft: Brood War (Lecture) | Introduction lecture about the complexity of creating an artificial player for StarCraft: Brood War (non-DNN approach). | |
2. | 11.10.2021 | Basics of AI player modeling, Forward model, A*-based agent (Lecture) | We will talk about how to think about an AI player: where it sits (abstractly) in the code, how it usually interacts with the rest of the game code base, and what the design considerations are. Then we contrast it with common intelligent-agent models (reflex-based agent and model/goal-based agent), showing the correspondence. This leads us to acknowledge that an agent needs some model of the game in order to look ahead: the better the game model, the better the lookahead. Ad-hoc abstractions (such as those used for navigation in 3D open worlds) can each serve their own purpose, but if we want to do smart things, we need to be able to "simulate the game". The extreme stance is a game "forward model", which is constructed to simulate the game in its entirety. We will use the example of the SuperMario framework (Java) and the AI we (mainly David Šosvald) developed at MFF, showing how an A*-based agent is constructed there, including aggressive game-space pruning. Details on the A* algorithm (in case you've never implemented one): WikiData, Wikipedia | |
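The clone/advance contract of a forward model and an A* planner driving it can be sketched in a few lines. This is a minimal toy illustration, not the SuperMario framework's actual API; all class and function names here are invented:

```python
import heapq
from itertools import count

# A minimal toy "game" with a forward model: an agent on a 1-D line
# must reach a goal cell. clone() and advance() are the two operations
# every forward model needs for search-based AI.
class LineWorld:
    ACTIONS = (-1, +1)  # move left / move right

    def __init__(self, pos=0, goal=5, ticks=0):
        self.pos, self.goal, self.ticks = pos, goal, ticks

    def clone(self):
        return LineWorld(self.pos, self.goal, self.ticks)

    def advance(self, action):  # simulate one game step
        self.pos += action
        self.ticks += 1

    def is_win(self):
        return self.pos == self.goal

    def heuristic(self):  # admissible: at least |goal - pos| steps remain
        return abs(self.goal - self.pos)

def astar_plan(state, max_iterations=10_000):
    """Return a list of actions leading to a winning state, or None."""
    tie = count()  # tie-breaker so heapq never has to compare states
    frontier = [(state.heuristic(), next(tie), state, [])]
    best_cost = {state.pos: 0}
    for _ in range(max_iterations):
        if not frontier:
            break
        _, _, s, plan = heapq.heappop(frontier)
        if s.is_win():
            return plan
        for a in LineWorld.ACTIONS:
            child = s.clone()   # forward model: copy the state ...
            child.advance(a)    # ... and simulate one step ahead
            g = child.ticks
            if g < best_cost.get(child.pos, float("inf")):
                best_cost[child.pos] = g
                heapq.heappush(
                    frontier,
                    (g + child.heuristic(), next(tie), child, plan + [a]))
    return None

print(astar_plan(LineWorld(pos=0, goal=5)))  # → [1, 1, 1, 1, 1]
```

In a real game the pruning mentioned above happens in `best_cost`: a coarser state hash (here just `pos`) aggressively collapses states that are "close enough", trading optimality for speed.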
| | HOMEWORK – PONG A-Star Agent | Implement the PONG game; implement its forward model; abstract the AI player; implement an A*-based agent for the game. Provide measurements of your solution (forward model clone and advance times, and roughly how many A* iterations per second your agent can do). Resources: nCine (landing page, GitHub, Discord, my setup notes), Super-Mario AI (paper, GitLab) | |
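The measurements the homework asks for follow the usual micro-benchmark pattern. A sketch, assuming your forward model exposes `clone()` and `advance(action)`; `DummyModel` below is a hypothetical stand-in for your PONG model:

```python
import time

class DummyModel:
    """Hypothetical stand-in for your PONG forward model."""
    def __init__(self, t=0):
        self.t = t
    def clone(self):
        return DummyModel(self.t)
    def advance(self, action):  # action would be e.g. paddle up/down/stay
        self.t += 1

def benchmark(model, n=100_000):
    """Return average clone() and advance() times in microseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        model.clone()
    clone_us = (time.perf_counter() - t0) / n * 1e6

    sim = model.clone()
    t0 = time.perf_counter()
    for _ in range(n):
        sim.advance(0)  # advance with a "no-op" action
    advance_us = (time.perf_counter() - t0) / n * 1e6
    return clone_us, advance_us

clone_us, advance_us = benchmark(DummyModel(), n=10_000)
print(f"clone: {clone_us:.3f} us, advance: {advance_us:.3f} us")
```

Since one A* iteration is roughly one clone plus one advance (plus heap bookkeeping), these two numbers give you a first estimate of iterations per second.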
3. | 18.10.2021 | Cancelled – I'm giving lectures in Turkey! | | |
4. | 25.10.2021 | PONG A-Star Agent Lab / Open discussion | We will discuss your agents, whether finished or in progress, giving tips and learning from each other. | |
5. | 1.11.2021 | F.E.A.R. AI | Let's dive deep into how the AI for the game F.E.A.R. was made and why it is still considered outstanding today (some even say it's the best FPS AI ever). | |
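F.E.A.R.'s combat behavior is famously driven by Goal-Oriented Action Planning (GOAP): each action declares symbolic preconditions and effects, and a planner chains actions to satisfy a goal. A minimal sketch of the idea; the domain and all names are invented, and plain breadth-first search stands in for the real planner's search:

```python
from collections import deque

# GOAP-style domain: action -> (preconditions, effects added, effects deleted).
# Facts and actions are invented for illustration, not from the F.E.A.R. SDK.
ACTIONS = {
    "draw_weapon": ({"weapon_holstered"}, {"weapon_drawn"}, {"weapon_holstered"}),
    "reload":      ({"weapon_drawn"}, {"weapon_loaded"}, set()),
    "attack":      ({"weapon_drawn", "weapon_loaded"}, {"target_dead"}, set()),
}

def plan(state, goal):
    """Find a shortest action sequence turning `state` into one satisfying `goal`."""
    start = frozenset(state)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:          # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= facts:       # action applicable in this world state
                nxt = frozenset((facts - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan({"weapon_holstered"}, {"target_dead"}))
# → ['draw_weapon', 'reload', 'attack']
```

The payoff of this decoupling is that designers add new actions without scripting every behavior sequence by hand; the planner discovers the sequences.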
6. | 8.11.2021 | Lab 2 – F.E.A.R. SDK | Let's review your PONG A-Star solutions and then check out the F.E.A.R. AI code! The F.E.A.R. SDK is actually on GitHub, so let's use this opportunity to study the code! | |
7. | 15.11.2021 | Real-time Strategy Game Combat | We will walk through standard minimax and alpha-beta applied to simultaneous-move games such as RTS combat. We will pinpoint the problem of simultaneous moves and propose a way to address it while still being able to use "regular recursive iterative-deepening" minimax / alpha-beta. We will review basic terms from game theory along the way to ground the problem theoretically. | |
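The core trick for fitting simultaneous moves into a regular minimax recursion is to serialize them: let one player "move first" and the other reply with full knowledge. A tiny illustration on a one-shot payoff matrix (numbers invented); moving first gives a pessimistic bound, moving second an optimistic one:

```python
# Payoff matrix: rows = our actions, columns = opponent actions,
# values from our perspective (invented for illustration).
PAYOFF = [
    [0, -1,  2],
    [1,  0, -1],
    [-2, 1,  0],
]

def maximin(matrix):
    """We commit first, the opponent replies knowing our move -> lower bound."""
    return max(min(row) for row in matrix)

def minimax_first(matrix):
    """The opponent commits first, we reply knowing theirs -> upper bound."""
    return min(max(col) for col in zip(*matrix))

print(maximin(PAYOFF), minimax_first(PAYOFF))  # → -1 1
```

The true simultaneous-move (mixed-strategy) value always lies between these two bounds; when they coincide the serialization is exact, and in RTS combat search one typically alternates which side moves first to reduce the bias.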
8. | 22.11.2021 | Lab 3 – SparCraft | Details on how to set up the SparCraft package using VS2019 on Windows 10; we had some successes 🙂 | |
9. | 29.11.2021 | Cancelled | | |
10. | 6.12.2021 | MCTS – Foundations | In this lecture we look at the foundations of MCTS: we visit Markov Decision Processes (MDPs) and talk about solutions to multi-armed bandit problems, improving general MDP policies, and regret minimization via UCB. Ultimately, we build an understanding of how and why the MCTS algorithm works in practice. | |
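The UCB1 rule at the heart of MCTS selection can be shown on a plain multi-armed bandit. A minimal sketch; the Bernoulli arms and their probabilities are invented for illustration:

```python
import math
import random

def ucb1_pick(counts, sums, t, c=math.sqrt(2)):
    """Pick the arm maximizing mean reward + exploration bonus (UCB1)."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm  # play every arm once before applying the formula
    return max(range(len(counts)),
               key=lambda a: sums[a] / counts[a]
                             + c * math.sqrt(math.log(t) / counts[a]))

def run_bandit(probs, steps=5_000, seed=0):
    """Play UCB1 against Bernoulli arms; return how often each arm was pulled."""
    rng = random.Random(seed)
    counts = [0] * len(probs)
    sums = [0.0] * len(probs)
    for t in range(1, steps + 1):
        arm = ucb1_pick(counts, sums, t)
        counts[arm] += 1
        sums[arm] += 1.0 if rng.random() < probs[arm] else 0.0
    return counts

counts = run_bandit([0.2, 0.5, 0.8])
print(counts)  # the best arm (p = 0.8) should receive the bulk of the pulls
```

MCTS applies exactly this rule at every tree node: each child is an "arm", and the playout result is the reward, which is why the regret guarantees of UCB carry over to the selection step.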
11. | 14.12.2021 | Lab 4 – Semester topics walkthrough | | |
12. | 20.12.2021 | PGS and NGS | We will visit algorithms that are specifically tailored to search over the script space via a portfolio of possible scripts to assign to units, and comment on their strengths and weaknesses. | |
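The portfolio-greedy idea can be sketched compactly: instead of searching raw unit actions, assign each unit one script from a small portfolio and greedily improve the assignment one unit at a time. All names below are invented, and a toy score table stands in for what would be a real playout of the forward model:

```python
# Hypothetical script portfolio for RTS combat units.
PORTFOLIO = ["attack_closest", "attack_weakest", "kite"]

def evaluate(assignment):
    """Stand-in evaluation: a real implementation would play out the
    forward model with these scripts and score the resulting state."""
    score = {"attack_closest": 1, "attack_weakest": 3, "kite": 2}
    return sum(score[s] for s in assignment)

def portfolio_greedy(n_units, passes=2):
    """Greedy hill-climbing over script assignments, one unit at a time."""
    assignment = [PORTFOLIO[0]] * n_units  # seed: default script everywhere
    for _ in range(passes):
        for u in range(n_units):
            assignment[u] = max(
                PORTFOLIO,
                key=lambda s: evaluate(assignment[:u] + [s] + assignment[u + 1:]))
    return assignment

print(portfolio_greedy(3))  # → ['attack_weakest', 'attack_weakest', 'attack_weakest']
```

The strength discussed in the lecture is visible even here: the search space shrinks from (actions per unit)^units to (portfolio size) × units × passes, at the cost of only ever finding combinations the scripts can express.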
13. | 3.1.2022 | MCTS techniques | In this last lecture, we will look at the plethora of modifications that have been proposed to the MCTS algorithm. | |
The Credit
To gain the credit, you will be required to choose a semester project and work on it incrementally. There will be a few homeworks as well! It is mandatory to do all the homeworks; deadlines are flexible 😉
The list of semester projects is available here: Semester projects
Extra Links
Computational Complexity of Games and Puzzles