I'm currently a high school student at Monta Vista High School, where I was introduced to computer programming. As you might be able to tell from my website and my blog, I focus more on the logical side of programming and less on the aesthetic side. That doesn't mean I can't work with HTML and CSS; I can, and I have. But I find it much less enjoyable, and it takes me longer than it should to figure things out. On the other hand, I'm in love with the world of algorithms.
Sadly, my knowledge of algorithms is still very limited. The AP Computer Science course only covered the very basics of logic; most of my knowledge in the field is entirely self-taught. While still in my Introduction to Java course, I built a fully functioning Old Snakey game with a few AIs, which taught me the basics of breadth-first search; a maze generator and solver, which taught me depth-first search; and much, much more that helped me refine those skills. In the last half-year, my limited knowledge has even seeped into the grand field of Game Theory!
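The kind of search that powered that first Old Snakey AI can be sketched as a plain breadth-first search over a grid. This isn't the original game's code, just a minimal illustration where blocked cells stand in for the snake's body and the goal is the food:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Breadth-first search on a grid: the kind of search a simple Snake AI
// can use to find the shortest path to the food. The grid setup here is
// a hypothetical example, not the original game's code.
public class GridBfs {
    // Returns the number of steps from start to goal, or -1 if unreachable.
    // walls[r][c] == true marks a blocked cell (e.g. the snake's body).
    public static int shortestPath(boolean[][] walls, int sr, int sc, int gr, int gc) {
        int rows = walls.length, cols = walls[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1);
        Queue<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[]{sr, sc});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            if (cell[0] == gr && cell[1] == gc) return dist[gr][gc];
            for (int[] m : moves) {
                int nr = cell[0] + m[0], nc = cell[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && !walls[nr][nc] && dist[nr][nc] == -1) {
                    // First time we reach this cell is the shortest route to it.
                    dist[nr][nc] = dist[cell[0]][cell[1]] + 1;
                    queue.add(new int[]{nr, nc});
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        boolean[][] walls = new boolean[5][5];
        walls[1][1] = walls[1][2] = walls[1][3] = true; // part of the "snake"
        System.out.println(shortestPath(walls, 0, 0, 2, 2)); // prints 4
    }
}
```

Because BFS explores cells in order of distance, the first time it dequeues the goal it has found a shortest path, which is exactly what you want for chasing food.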
The problem with acquiring my knowledge independently is that I end up reinventing preexisting algorithms simply because I don't know they exist. When I started programming and my father asked me to create an ordered histogram, I needed to sort the letters. At the time, I had no knowledge of sorting algorithms, so the way I ended up sorting was what I now know to be a simple selection sort. When creating an AI for Gomoku (five-in-a-row), I had no knowledge of AI algorithms except the breadth-first search I had used for an Old Snakey AI, which wouldn't be much help. However, I knew I needed some form of analysis function, so I made one, and then applied it recursively to find the best move. I now know this to be the famous naive minimax algorithm. I also greatly improved my algorithm (see my blog post on minimax improvements) without external aid. After researching the topic, however, I came across alpha-beta pruning, which is more significant than any of the individual improvements I made myself.
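For the curious, the sort I unknowingly reinvented looks something like this. It's a generic sketch, not my original histogram code: repeatedly find the smallest remaining element and swap it to the front.

```java
import java.util.Arrays;

// A minimal selection sort, the algorithm I reinvented for that first
// ordered-histogram exercise (a sketch, not the original code).
public class SelectionSort {
    public static void sort(char[] letters) {
        for (int i = 0; i < letters.length - 1; i++) {
            // Find the smallest element in the unsorted tail...
            int min = i;
            for (int j = i + 1; j < letters.length; j++) {
                if (letters[j] < letters[min]) min = j;
            }
            // ...and swap it into position i.
            char tmp = letters[i];
            letters[i] = letters[min];
            letters[min] = tmp;
        }
    }

    public static void main(String[] args) {
        char[] letters = "histogram".toCharArray();
        sort(letters);
        System.out.println(new String(letters)); // prints "aghimorst"
    }
}
```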
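And here is a minimal sketch of minimax with alpha-beta pruning over an abstract game tree. The `Node` interface and the tiny demo tree are illustrative assumptions, not my actual Gomoku project's code; in a real game, an analysis function would supply the leaf scores.

```java
import java.util.List;

// Minimax with alpha-beta pruning, sketched over an abstract game tree.
// The Node interface and demo tree are illustrative, not the real project's API.
public class AlphaBeta {
    // A node is either a leaf with a heuristic score (from the maximizer's
    // point of view) or an interior position with child positions.
    interface Node {
        boolean isLeaf();
        int score();
        List<Node> children();
    }

    // alpha = best score the maximizer can already guarantee,
    // beta = best score the minimizer can already guarantee.
    // When alpha >= beta, the remaining siblings cannot affect the result.
    static int search(Node node, int alpha, int beta, boolean maximizing) {
        if (node.isLeaf()) return node.score();
        if (maximizing) {
            int best = Integer.MIN_VALUE;
            for (Node child : node.children()) {
                best = Math.max(best, search(child, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (alpha >= beta) break; // prune: minimizer will avoid this line
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (Node child : node.children()) {
                best = Math.min(best, search(child, alpha, beta, true));
                beta = Math.min(beta, best);
                if (alpha >= beta) break; // prune: maximizer will avoid this line
            }
            return best;
        }
    }

    // A tiny concrete tree type for demonstration.
    record Tree(int value, List<Node> kids) implements Node {
        static Node leaf(int v) { return new Tree(v, List.of()); }
        static Node inner(Node... children) { return new Tree(0, List.of(children)); }
        public boolean isLeaf() { return kids.isEmpty(); }
        public int score() { return value; }
        public List<Node> children() { return kids; }
    }

    public static void main(String[] args) {
        // Textbook example: max(min(3, 5), min(2, 9)) = 3; the 9 is never visited.
        Node root = Tree.inner(
                Tree.inner(Tree.leaf(3), Tree.leaf(5)),
                Tree.inner(Tree.leaf(2), Tree.leaf(9)));
        System.out.println(search(root, Integer.MIN_VALUE, Integer.MAX_VALUE, true)); // prints 3
    }
}
```

The pruning changes nothing about the answer, only about how much of the tree gets explored, which is why it dwarfs most hand-rolled evaluation tweaks.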
There is, however, one AI technique that I learned about long before I decided to tackle it: the relatively new Monte Carlo tree search. If you're familiar with this algorithm, you can probably guess that I learned about it through an obsession with the game of Go. After joining my school's Go Club (and becoming its president sophomore year), my love for Go has only grown, which made it hard to avoid hearing the words "Monte Carlo tree search," though I never knew what they really meant, other than that they referred to some random AI algorithm (note the punny use of the word 'random'). After I started getting into Game Theory, I decided to finally teach myself the Monte Carlo tree search. With no formal guide or lesson plan, it took me two whole days to create a basic working Monte Carlo tree search AI for the game Mancala. I later refined and greatly improved the algorithm for my strong Connect Four AI. And while on vacation, after seeing my older brother play Ultimate Tic-Tac-Toe, I whipped up an AI overnight and greatly improved it on the second night.
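To give a sense of what those two days produced, here is a bare-bones Monte Carlo tree search (the UCT variant). It plays the toy game Nim (take 1 to 3 stones; whoever takes the last stone wins) rather than Mancala, and the game, seed, and exploration constant are illustrative assumptions, but the four phases are the real thing: selection, expansion, random simulation, and backpropagation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A bare-bones Monte Carlo tree search (UCT) for the toy game Nim:
// take 1-3 stones, and whoever takes the last stone wins. A sketch,
// not my Mancala or Connect Four code.
public class NimMcts {
    static final Random RNG = new Random(42);   // fixed seed for reproducibility
    static final double C = Math.sqrt(2);        // UCT exploration constant

    static class Node {
        int stones;                  // stones left; it is now the opponent's turn
        int move;                    // the take (1-3) that led here, 0 for the root
        Node parent;
        List<Node> kids = new ArrayList<>();
        int visits = 0;
        double wins = 0;             // wins for the player who moved INTO this node

        Node(int stones, int move, Node parent) {
            this.stones = stones; this.move = move; this.parent = parent;
        }
        boolean fullyExpanded() { return kids.size() == Math.min(3, stones); }
    }

    // Returns the most-visited first move from a pile of `stones`.
    static int bestMove(int stones, int iterations) {
        Node root = new Node(stones, 0, null);
        for (int i = 0; i < iterations; i++) {
            // 1. Selection: descend by UCT score while fully expanded.
            Node node = root;
            while (node.stones > 0 && node.fullyExpanded()) {
                Node best = null;
                double bestScore = -1;
                for (Node kid : node.kids) {
                    double uct = kid.wins / kid.visits
                            + C * Math.sqrt(Math.log(node.visits) / kid.visits);
                    if (uct > bestScore) { bestScore = uct; best = kid; }
                }
                node = best;
            }
            // 2. Expansion: add one untried child (moves tried in order 1, 2, 3).
            if (node.stones > 0) {
                int take = node.kids.size() + 1;
                Node kid = new Node(node.stones - take, take, node);
                node.kids.add(kid);
                node = kid;
            }
            // 3. Simulation: play random moves to the end of the game.
            int left = node.stones;
            boolean turn = true;                 // true = player to move at `node`
            boolean playerToMoveWins = false;    // stays false if game already over
            while (left > 0) {
                int take = 1 + RNG.nextInt(Math.min(3, left));
                left -= take;
                if (left == 0) playerToMoveWins = turn;
                turn = !turn;
            }
            // 4. Backpropagation: credit the result alternately up the tree.
            boolean winForMoverIn = !playerToMoveWins;
            for (Node n = node; n != null; n = n.parent) {
                n.visits++;
                if (winForMoverIn) n.wins++;
                winForMoverIn = !winForMoverIn;
            }
        }
        Node best = root.kids.get(0);
        for (Node kid : root.kids) if (kid.visits > best.visits) best = kid;
        return best.move;
    }

    public static void main(String[] args) {
        // From 5 stones, taking 1 leaves the opponent a losing pile of 4.
        System.out.println(bestMove(5, 20000));
    }
}
```

The appeal of this algorithm is that nothing in it knows Nim strategy; given enough playouts, the statistics alone steer it toward leaving the opponent a multiple of four.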
That's my story so far, and I can't wait to go to a university, formally learn computer science, and keep improving!