Journal Reflection #3: Insights into your first AI/ML Pygame project

From https://www.activestate.com/blog/how-to-use-pygame-for-game-development/

Please use your reply to this blog post to detail the following:

  1. Please give a full description of the nature of your first AI/ML game project.
  2. What was the steepest part of the learning curve for this project? Was it learning how to implement the AI or how to use the Pygame/Pyglet library? Please elaborate and explain your answer.
  3. Describe the AI/ML algorithm your game implements. Did you work through a tutorial you found online? Did you start from scratch because you were motivated by a particular game or algorithm and you wanted to implement it using Pygame/Pyglet?
  4. If you had to teach this class next year, what project would you recommend to students in the Advanced Topics class to give them a broad and comprehensive overview of some fundamental AI algorithms to implement in a game?
  5. Include your Github repo URL so your classmates can look at your code.

Take the time to look through the project posts of your classmates. If you see any projects or project descriptions that pique your interest, please reply to their posts with feedback. Constructive criticism is allowed, but please keep your comments civil.

  1. 1. I created Connect 4 and then attempted to add a minimax algorithm to it, but I ran out of time. So the AI just picks random columns, and getting it to win is a bigger challenge than beating it.

    2. The hardest part was dealing with the nuances of how pygame works. From getting the screen to update, to getting text to appear and disappear, to the event queue… it was a process to figure out.

    3. The video and code that I was starting to implement used minimax. I’ve been watching a YouTube channel called Code Bullet that does a lot of this sort of thing (building a game and then attempting to build an AI to beat it), and it was his video on Connect 4 that made me want to try it. Not to mention that I’ve played far too much Connect 4 in my life and just wanted an excuse to mess around with it more.

    4. Something like Connect 4, which is easy to pseudo-code but tricky to actually get the syntax right on, really is perfect; you just have to make sure you have enough time to actually finish the tutorial on how to do minimax.

    5. https://github.com/RavenclawLunatic/Connect-4

  2. 1. Please give a full description of the nature of your first AI/ML game project.

    I coded a chess minimax algorithm. First, I created the chess board, its pieces, and the rules of the game (e.g., where pieces can move, checkmate, checks, pins, castling). The way I handled checks was to test whether any piece could take the king after a move; if so, that move was not valid. If one side has no moves and is in check, it is checkmate; otherwise, it is stalemate. If a pawn reaches the other side, it becomes a queen. I also implemented move animation and highlighted the valid moves. After doing this, I worked on a chess algorithm. First, it was just random moves (depth 0). Next, I implemented plain minimax, but this was super slow: at depth 3, it was evaluating around 10,000 positions per move. Optimizations such as alpha-beta pruning cut this number down to around 2,000 to 3,000, and the pruning becomes more effective the greater the depth. Overall, it was a super fun experience, especially playing with the scoring function to see if I could make it better. For future improvements, I could import an opening book and an endgame tablebase so that the engine would not have to calculate early positions but could instead use memorized moves. Additionally, I could implement transposition tables to make it more efficient.

    2. What was the steepest part of the learning curve for this project? Was it learning how to implement the AI or how to use the Pygame/Pyglet library? Please elaborate and explain your answer.

    Definitely Pygame. The actual algorithm took maybe a tenth of the time, because the algorithm itself isn’t actually too complicated; the hard part is making it work with the game you programmed from scratch. Pygame definitely has its annoying parts, especially when drawing text and such. I also used a lot of plain lists when I probably could have used numpy arrays, so there are definitely things I would do differently if I went back to the drawing board to increase efficiency.

    3. Describe the AI/ML algorithm your game implements. Did you work through a tutorial you found online? Did you start from scratch because you were motivated by a particular game or algorithm and you wanted to implement it using Pygame/Pyglet?

    My algorithm is a standard minimax algorithm with alpha-beta pruning, move ordering, and position tables. The scoring function evaluates both the material values and the positions of the pieces on the board. Move ordering prioritizes moves that are more likely to have high position scores, making pruning more effective. I worked through a tutorial by Eddie Sharick and tried to make a basic minimax algorithm more efficient so I could play it at a higher depth. I also just really enjoy chess, and I have always wanted to make a chess engine. In the future I think I will try to use neural networks and an unsupervised reinforcement learning environment to watch the AI learn by itself.
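    For readers new to the idea, the pruning described here can be sketched generically (this is a toy illustration, not the author's code; the nested lists stand in for game states, and in a real engine the leaf values would come from the material-plus-position scoring function):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree.

    A node is either a numeric leaf score or a list of child nodes.
    """
    if not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent already has a better option: prune
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:       # we already have a better option: prune
            break
    return best

# Classic textbook tree: the minimax value is 6, and the last
# subtree is pruned after its first leaf.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

    Move ordering helps for exactly the reason visible here: the earlier a strong move raises alpha, the more of the remaining branches get cut.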

    4. If you had to teach this class next year, what project would you recommend to students in the Advanced Topics class to give them a broad and comprehensive overview of some fundamental AI algorithms to implement in a game?

    I would recommend Tic-Tac-Toe. The game is really easy conceptually, and it has been solved because there aren’t that many combinations. It provides a very nice way to implement minimax and is also a game that most people are familiar with. However, for more advanced versions of minimax, such as with alpha-beta pruning, I think more complex games like Connect 4 and chess are necessary to actually see the effect of increased efficiency on depth.

    5. Include your Github repo URL so your classmates can look at your code.

    https://github.com/Brian5998/ChessMinMax

  3. – Please give a full description of the nature of your first AI/ML game project.
    – My project centered on using a genetic algorithm to find the minimum point of a function in the fewest generations possible. A genetic algorithm, as the name implies, uses the principles of evolution to optimize a set of parameters. In this case, I only attempted single-objective optimization — that is, the fitness function was directly proportional to the axis along which we wanted to optimize the function. The majority of the project was the implementation of a genetic algorithm, an implementation that can be clearly segmented into seven steps:
    1. Encoding
    Encoding defines how we represent the genotype of an agent (where a single point is considered an agent). For this project, I used a binary encoding, so each attribute of a point was converted into a simple binary representation, implemented as a percentage of the allotted domain of the function.
    2. Selection
    In a genetic algorithm, each point generates successors through two-parent reproduction: two parent points are selected, and their attributes are transferred to the two children they generate. The selection step is probabilistic; pairs of parents are generally selected based on their level of fitness (proportional to how low the points are on the desired optimization axis). It is also worth noting that if a point is not selected, it is essentially eliminated from the gene pool.
    3. Elitism
    We want to maintain the best “genes,” so the best individuals within a certain threshold are kept: they are not subjected to the selection process and are guaranteed a spot in the next population.
    4. Crossover
    This defines how the genes of the children are selected from the two parents. This step is also probabilistic, so a random percentage of the genes come from the first parent and the rest from the second. The second child gets the inverse selection of the genes of the first child; this is known as one-point crossover.
    5. Mutation
    Introduces random mutation by flipping an arbitrary number of bits within the genomic sequence of each child generated.
    6. Termination
    Defines the condition of termination. I have only implemented numerical termination so far: you pass in the number of generations you want to run, and the algorithm terminates after generating that many. I would like to add convergence as a termination condition soon.
    7. Visualization
    This function makes a 3D animated visualization of each generation, superimposed on the target optimization function. You can see a demo here: https://genetics.emilianogarcia.tech/
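    The first six steps can be sketched end-to-end on a toy problem (a generic sketch, not the project's code; here the target is f(x) = x² on [-5, 5], selection is rank-based rather than strictly fitness-proportional, and mutation flips exactly one bit per child):

```python
import random

random.seed(0)

BITS, POP, GENS = 12, 30, 40
LO, HI = -5.0, 5.0                      # allotted domain of the function

def decode(genome):
    """Binary encoding: the bit string is read as a fraction of the domain."""
    frac = int("".join(map(str, genome)), 2) / (2 ** BITS - 1)
    return LO + frac * (HI - LO)

def fitness(genome):
    x = decode(genome)
    return -(x * x)                     # minimizing f(x) = x^2

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for _ in range(GENS):                   # numerical termination
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:2]                       # elitism: the best two keep their spot
    weights = list(range(POP, 0, -1))   # rank-based probabilistic selection
    while len(nxt) < POP:
        p1, p2 = random.choices(pop, weights=weights, k=2)
        cut = random.randrange(1, BITS)            # one-point crossover
        c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for child in (c1, c2):
            child[random.randrange(BITS)] ^= 1     # mutation: flip one bit
            nxt.append(child)
    pop = nxt[:POP]

best = max(pop, key=fitness)
print(round(decode(best), 3))           # converges near the minimum at x = 0
```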

    – What was the steepest part of the learning curve for this project? Was it learning how to implement the AI or how to use the Pygame/Pyglet library? Please elaborate and explain your answer.
    – The part of this project with the steepest learning curve was making the encodings and functions using numpy; learning how to think in matrices was by far the most difficult aspect of this project.
    – Describe the AI/ML algorithm your game implements. Did you work through a tutorial you found online? Did you start from scratch because you were motivated by a particular game or algorithm and you wanted to implement it using Pygame/Pyglet?
    – My algorithm is already described above. I started from scratch, reading webpages on genetic algorithms and watching videos on how to use numpy to define test functions and do specific array manipulations.
    – If you had to teach this class next year, what project would you recommend to students in the Advanced Topics class to give them a broad and comprehensive overview of some fundamental AI algorithms to implement in a game?
    – I would ask them to learn the concept of a genetic algorithm, Q-learning, reinforcement learning, or minimax optimization.
    – Include your Github repo URL so your classmates can look at your code.
    https://github.com/EmilianoGarciaLopez/genetic-algorithm

  4. 1. I chose to implement AI in mancala because the game is super fun, and it lends itself perfectly to a minimax algorithm with alpha-beta pruning: the computer can create a look-ahead tree with alternating layers for the possible moves the computer could choose and the moves the player could choose.

    Instead of starting by writing the minimax algorithm, I started by teaching myself how to use pygame. Similar to my experience teaching myself pandas and geopandas, I really enjoy the visual design aspect of these projects. So, I had a lot of fun learning how to design rectangles and circles and text boxes and every way to make my program look prettier. To be fair, I won’t claim that my mancala game looks completely professional and it could definitely use some improvements, but I am proud of how it turned out.

    After the visual design aspect, I had to figure out how to make my game loop and detect clicks on the screen, which was another difficult part of pygame for me to understand. I’m not sure if I designed my game in the most efficient way possible, but I ended up using a while-True loop that updates the screen, checks that the game isn’t over, and checks for some detected action. If the computer detects a click, it checks whose turn it is and then executes the necessary action. The computer and the player continue taking turns until the game is won. Of course, the player gets to choose their own move, and the computer chooses its move with the chooseMove method, where the minimax algorithm and alpha-beta pruning are implemented. I will describe those aspects below.
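    Stripped of the Pygame drawing and click handling, the turn-taking structure described here might look roughly like this (a toy sketch, not the actual Kalah code; choose_move is a stand-in for the minimax-based chooseMove method, and the "move" just removes one stone so the toy game is guaranteed to terminate):

```python
import random

def game_over(board):
    """Toy end condition: one side has no stones left."""
    return sum(board[:6]) == 0 or sum(board[6:]) == 0

def choose_move(board, side):
    """Placeholder for the minimax chooseMove method: any non-empty pit."""
    pits = range(0, 6) if side == 0 else range(6, 12)
    return random.choice([p for p in pits if board[p] > 0])

def play():
    board = [4] * 12        # six pits per side, stores omitted for brevity
    turn = 0                # 0 = player, 1 = computer
    while not game_over(board):       # the main game loop
        pit = choose_move(board, turn)
        board[pit] -= 1               # toy "move" so the loop terminates
        turn = 1 - turn               # alternate turns until the game ends
    return board

final = play()
```

    In the real game, the body of the loop would also redraw the screen and translate mouse clicks into pit choices for the human player.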

    2. The steepest part of the learning curve was figuring out how to implement the minimax algorithm and how to make a class in Python, because I hadn’t done that before. The Kalah class defines every aspect of one game of mancala, and it becomes especially useful in the chooseMove function, where I implemented the minimax algorithm, which has to create a new instance of the Kalah class when it recurses through a new layer of child nodes. Especially as I increase maxdepth, the number of layers the computer looks ahead, the computer takes longer to find the best move. The alpha-beta cutoff helps with this issue because it allows the algorithm to stop expanding a branch once it is known to be less optimal than a branch that came before.

    3. I took inspiration from several minimax algorithm guides online, though I mostly looked at tic-tac-toe and chess examples. With a minimax algorithm, the computer makes a tree with child nodes for every possible move that it could make from the current game board. Then it makes branches off of those branches for every possible move that the player could make, and it adds layers of branches until it reaches the max depth. Then it evaluates each of these possible outcomes and, assuming that the player and computer both make the most optimal moves, chooses the move that has the best outcome for it. Alpha-beta pruning is a way to short-circuit and stop checking branches that the computer knows will not be the best. It knows this by keeping track of alpha and beta scores, and if the alpha and beta scores for the current branch are less optimal than a previous branch, the computer stops exploring the current branch.

    This part was hard, but the biggest thing I gained from this project was taking another step towards demystifying what AI means. This project showed me that in many cases, the main advantage computers have over humans is that they can compute much faster. I could also imagine every possible move if I had a lot of time, but the computer can do it quickly. The computer can also remember past games, although I did not store results in this project, which also helps the AI beat the player. I think it’s important for me to continue grounding my understanding of AI if I want to have a solid sense of ways to apply it to real problems.

    4. If I had to teach this class next year, I would definitely encourage students to explore minimax algorithms. I think they are complex enough to challenge students but also are not so confusing that they are impossible for students to grasp. Another great thing about minimax is that it can be used in so many different games, from tic tac toe to mancala to chess to connect four. This gives more freedom to the students, but it also gives them a way to decide for themselves how hard or complicated of a game they want to build. This is perfect for this class because everyone comes in with different experiences.

    5. https://github.com/22ridley/Mancala.git

  5. 1. Please give a full description of the nature of your first AI/ML game project.

    My first AI/ML game was Minesweeper, which, unbeknownst to me at the beginning of the process, is hard to create an AI/ML program for. My first step was to create the actual game itself, so I could add the algorithm on top of it. This was also my first misstep. Without thinking down the line, I created a perfectly functional, but hardly perfect, game of Minesweeper. The “hardly perfect” part turned out to be a major problem, as integrating the algorithm I found online turned out to be a lofty task, and one I failed to perfect. The implemented algorithm worked just as a human player would, using logic and reasoning to deduce where a mine might be, and then flagging it if it was known to be a mine.

    2. What was the steepest part of the learning curve for this project? Was it learning how to implement the AI or how to use the Pygame/Pyglet library? Please elaborate and explain your answer.

    There were several parts of this project that were really challenging for me. For starters, I had a tough time with pygame. I’d never used it before, and for some reason the sprites I was using did not seem to be working. That was eternally frustrating, and unfortunately, even after working with some other people on it, I decided to throw in my cards and try just drawing rectangles, which worked perfectly. Secondly, the algorithm, when I first looked through it, was daunting and complex. I read through the entire section two or three times before I fully understood how everything worked together, and then I cobbled the pieces of the algorithm together with my code. Both of these were challenges in themselves, although the biggest one personally was trying to trace the bugs in the logic, which was especially tricky as I was new to the algorithm.

    3. Describe the AI/ML algorithm your game implements. Did you work through a tutorial you found online? Did you start from scratch because you were motivated by a particular game or algorithm and you wanted to implement it using Pygame/Pyglet?

    I used an algorithm I found online, which was a semi-tutorial (not exactly a full tutorial – it walked through the high-level aspects of the code, but did not go through the actual code step by step or explain the choices made at the code level). The algorithm, as mentioned above, works through Minesweeper the same way a human would: it starts at a random tile and uses the revealed numbers to determine the probability of a mine at each square. It flags any square with a probability of one (must be a mine) and clicks any with a probability of zero (cannot be a mine).
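    The simplest form of that probability-0/probability-1 rule can be sketched as follows (an illustrative sketch, not the tutorial's code; numbers holds the revealed adjacent-mine counts, and entries for unrevealed cells are just placeholders):

```python
def neighbors(r, c, rows, cols):
    """All in-bounds cells adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

def deduce(numbers, revealed, flagged):
    """One pass of the rule: returns (cells safe to click, certain mines)."""
    rows, cols = len(numbers), len(numbers[0])
    safe, mines = set(), set()
    for r, c in revealed:
        around = list(neighbors(r, c, rows, cols))
        hidden = [n for n in around if n not in revealed and n not in flagged]
        flags = sum(1 for n in around if n in flagged)
        if numbers[r][c] == flags:                  # rest have probability 0
            safe.update(hidden)
        elif numbers[r][c] == flags + len(hidden):  # rest have probability 1
            mines.update(hidden)
    return safe, mines

# A corner cell showing "3" with exactly three hidden neighbors:
# all three must be mines.
numbers = [[3, 0, 0], [0, 0, 0], [0, 0, 0]]
safe, mines = deduce(numbers, {(0, 0)}, set())
print(sorted(mines))   # [(0, 1), (1, 0), (1, 1)]
```

    Repeating such passes (and guessing when no certain move exists) is the human-style reasoning the post describes.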

    4. If you had to teach this class next year, what project would you recommend to students in the Advanced Topics class to give them a broad and comprehensive overview of some fundamental AI algorithms to implement in a game?

    I think what’s really tricky about recommending a project to students in the Advanced Topics class next year is that, as a class, we are really all over the place in terms of prior experience. My game was fairly simple and straightforward, and although it pushed me, it wouldn’t have pushed other people. The minimax algorithm, from what I saw, seemed to be a very good algorithm to learn – it’s not so complex that it’s unapproachable for a beginner, but it also has a wide variety of applications at various levels of difficulty. Granted, I did not see any other algorithms (besides my own) that were not minimax.

    5. Include your Github repo URL so your classmates can look at your code.
    https://github.com/23ellaann/Advanced-Computer-Science.git

  6. 1. I started out by creating Tic-Tac-Toe in Pygame. I have experience with both of these. Text-based Tic-Tac-Toe is one of the IntroCS projects and I created my final project (2048) in Pygame. Once that was done, I researched the minimax algorithm and specifically its application in Tic-Tac-Toe. I then implemented the minimax algorithm to make an AI that beats the user at Tic-Tac-Toe every time.

    2. I have no prior experience with the minimax AI, so definitely that. Tic-Tac-Toe was super easy to code in Pygame, especially since I already knew the basics. I had to figure out how to both create a working minimax algorithm and make it input a move into the game.

    3. I went over it in class, but I found some pseudocode that I used to aid me in the process:

    minimax(state, depth, player):
        if player = max then best = [null, -infinity]
        else best = [null, +infinity]

        if depth = 0 or game over then
            score = evaluate state for player
            return [null, score]

        for each valid move m for player in state s do
            execute move m on s
            [move, score] = minimax(s, depth - 1, -player)
            undo move m on s

            if player = max then
                if score > best.score then best = [m, score]
            else
                if score < best.score then best = [m, score]

        return best

    I didn't use this exact structure, but it served as a good guide. On the AI's turn, it goes through every available move and analyzes each of the possibilities that could be played afterwards. It rewards an AI win and disincentivizes letting the user win. It also takes into account the number of moves it takes to win. In the end, the move with the highest score is returned and played as the AI's turn.

    4. A lot of people used minimax algorithms, so an overview of the basics of minimax would not be a bad idea. Ella brought up the point that there are people in this class at very different levels of knowledge of computer science. That's a really good thing but it definitely makes it hard to teach things like that; at the beginning of this project I didn't know how minimax worked but Ben did. Maybe having some sort of optional session on different algorithms would be a good idea, but I really appreciate the freedom we are given in this class.

    5. https://github.com/tylerpowers/TicTacToeMinimax

  7. 1. Please give a full description of the nature of your first AI/ML game project.
    vectorboi stems from wanting to explore agent-based games (ones that have computer-controlled entities) through the lens of genetic algorithms. I was initially inspired by carykh’s videos on his evolutionary node walkers. They really opened my eyes to the possibilities of this kind of AI. At first, I had lofty ambitions which I, unfortunately (and quite predictably), had to downgrade from. With only 5 days left, I pivoted from my complex-shaped intelligent vectorbois to a simpler dot with a series of moves. This allowed me to hit all of my objectives on a compressed time budget. The point of this project was to train a “dot” to propel itself from its “spawn point” to a “target”. Every dot contains a list of vectored forces called “kicks”. During the simulation, every 1.2 seconds the next kick in the list is applied to the dot. If the simulation’s kick index runs out of range, the dot simply stops kicking and slowly halts. The closer a dot is to the target, the better its fitness (numerically lower, since that was easier to implement). If a dot hits the edge of the screen, it dies. If it hits a killwall (present within the simulation), it dies. Dying in this sense doesn’t remove it from the gene pool but rather lowers its fitness greatly. After 10 seconds, the simulation stops, the fitness of all the dots is evaluated, and then evolution runs. During evolution, the population of dots is sorted by fitness, with the best at the top. Then every two adjacent dots are crossed over to create two children, and the children overwrite the corresponding worse half of the population. Is this crappy selection/elitism? Yes. Does it matter? No, it worked anyway. The scenario and the population can be saved to and loaded from a JSON file if need be.
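    One plausible reading of that evolution step (sort, cross adjacent pairs, let children overwrite the worse half) can be sketched in Python rather than Go, with each genome reduced to a flat list of kick magnitudes and lower fitness meaning better, as in the post (this is a hypothetical sketch, not the actual vectorboi code, and it assumes an even population size):

```python
import random

def evolve(population, fitness):
    """One generation: rank by fitness (lower is better), cross over
    adjacent pairs, and let the children overwrite the worse half."""
    ranked = sorted(population, key=fitness)
    half = len(ranked) // 2
    children = []
    for i in range(0, len(ranked) - 1, 2):     # every two adjacent dots
        a, b = ranked[i], ranked[i + 1]
        cut = random.randrange(1, len(a))      # one-point crossover on kicks
        children.append(a[:cut] + b[cut:])
        children.append(b[:cut] + a[cut:])
    return ranked[:half] + children[:half]     # worse half is overwritten

random.seed(1)
target = 3.0
pop = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(8)]
fit = lambda g: abs(sum(g) - target)           # distance to target
for _ in range(20):
    pop = evolve(pop, fit)
```

    Because the better half always survives intact, the best-ever dot is never lost, which is the implicit elitism the post is joking about.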

    2. What was the steepest part of the learning curve for this project? Was it learning how to implement the AI or how to use the Pygame/Pyglet library? Please elaborate and explain your answer.
    Unlike my classmates’ projects, I coded in Go. I did this for two reasons: 1) I’m mentally unsound, and 2) I wanted to challenge myself. Well, the first reason was just a joke; I am quite familiar with the language, but I do attest it’s a bit harder to work with than Python. It also prevented me from just doing “import genetics”, which I was insecure about having done something like in my last project. The biggest learning curve was probably the game engine that I used: Ebiten (https://ebiten.org/). My original project called for drawing circles, which was virtually impossible with my knowledge at the time. In hindsight, there was an API designed to draw vector graphics, but I found it too late. My second revision only required drawing dots, lines, and rectangles, greatly reducing the graphical complexity of my project. The second most intensive learning curve was probably the physics engine (https://github.com/jakecoffman/cp). The Go port that I was using has no documentation, instead directing you to the documentation for the original C implementation. Trying to pin down direct correspondences was quite hard at first, but it became easier over time.

    3. Describe the AI/ML algorithm your game implements. Did you work through a tutorial you found online? Did you start from scratch because you were motivated by a particular game or algorithm and you wanted to implement it using Pygame/Pyglet?
    See part 1: I used a genetic algorithm. I started from scratch, implementing my own. I was initially inspired by similar projects online but hated how they implemented genes (namely binary encoding). Furthermore, I hated how most physics-based genetic algorithm projects were tied to real time (due to graphics), so I built my project around being able to speed up the simulation.

    4. If you had to teach this class next year, what project would you recommend to students in the Advanced Topics class to give them a broad and comprehensive overview of some fundamental AI algorithms to implement in a game?
    Honestly, my game was a simple application of GAs. One can do so much with genetic algorithms; they’re amazingly simple optimization devices (no wonder we exist as humans today). My advice would be to go wild.

    5. Include your Github repo URL so your classmates can look at your code.
    https://github.com/memsdm05/vectorboi
