Ben Likes Games
Friday, April 17, 2015
Positive Play
I think good teachers and good games can evoke that; by carefully constructing context, you can guide intuition toward the right answers without having to go through the (fairly jarring) process of feedback and adjustment. Any neural network can adjust appropriately based on how wrong it was; part of the real beauty of human cleverness is in figuring out which data to connect, and beautifully clever teachers and games are the ones that make you smart just by teaching you things in the right order.
If you want one of those game worlds that seems endlessly intricate and explorable, part of the key can be in guiding attention away from the things that shouldn't be examined too closely. Things retain a lot more detail if they aren't observed. Dead-end paths don't have to look like dead ends; as long as they look less appealing than some other path, they won't be explored first. And if the path explored first keeps bringing up new interesting branches, it's very easy to never think back to that one fork in the road.
One big barrier to this is an anxious completionist mindset: if the player feels like they need to get every coin, they'll intentionally go against all their intuition in order to explore everything before they keep going. And it can be really easy to make players feel that way: if I don't know that I absolutely definitely don't need to bother getting more coins, I'll probably go looking for them. In the absence of evidence, I assume more coins will be noticeably better, so if I have any, I'll look for 'em all. (It's not true for all players, of course.)
But even if you want those paths explored eventually, even if you really want a big intricate game world with lots of side paths and hidden treasures, you don't want a player's first playthrough to be crippled by that anxiety. If their intuition is guided well, they'll be grateful that they seem to be going the right way, even though they don't quite know how (they must be in the zone!). It helps give the player that relaxing, optimistic feeling that good games cultivate: we love to feel like we don't have to go through that harrowing feedback and adjustment. We love to feel like we go down the deep, dark corridors only because we chose to go looking for treasure (not because we really didn't know if we were supposed to!)
I'm playing with the idea that in the perfect game, you never actually screw anything up, not because it's impossible to do so, but because your intuitions are so carefully guided that you manage to understand exactly the right decision at every turn.
That's the extreme; obviously failure happens. Gunpoint had a great mechanic where, when you died, it would rewind by increasing time increments until you stopped making the same mistake. This is a fantastic way to minimize how painful adjusting to feedback is: there's nothing worse than fighting through 1000 grunts before you can fight Ifrit again.
I'm really a fan of the idea that ultimately, a game should allow the player to do whatever they want. Sometimes that's different from how they'd self-report, though: you might say you want to win, but you mean you want to win by getting good at the mechanics laid out before you. But if you really do hate critical hits, you, as the player/user/dev/everyman, should be able to turn them off. The player should be allowed to explore all the dead end paths, but if you don't really want them going there, craft your environment such that they don't really want to go there either. If they never do, then all those forks in the road will forever hold infinite amounts of untold mystery.
I'm always a little afraid when somebody takes control of somebody else's experience "because they know best". But teaching is an interesting exception: the imbalance in knowledge or skill is an intentional part of the relationship between student and teacher, and in that case, it does make sense for a teacher to assume they might know better than the student, with regards to what the student means, what they want, or what they can do.
I'm a big fan of what I'm calling positive play (until somebody stops me): give the player everything they want... but not necessarily what they say they want.
Tuesday, October 7, 2014
Why not voxels?
But with time, research, and careful re-re-reading of procworld posts, I've come to think that voxels are a lot less magical than I once thought.
First of all, it's worth noting that you don't have to have just one data structure in your game. For everything to work as well as it can, you'll probably have lots of different ones. You might treat everything as a box for physics purposes, but that doesn't mean you need to render boxes; they can be slanty or curvy on the screen without that mattering anywhere else. Just because your UI makes all the blocks grid-aligned doesn't mean you have to enforce that throughout your entire codebase; in fact, doing so would probably do more harm than good, since you'd end up with a bunch of special cases for all the things that aren't grid-aligned, like mobs and projectiles.
As far as rendering's concerned, voxels are just a form of compression - if you're going to only be rendering cubes, why would you send all that triangle data to VRAM? You can take advantage of the fact that all your shapes are the same, and just send the parts that matter. But like all compression, it comes at a cost: namely, how quickly you can use the uncompressed data. Like it or not, the graphics card renders triangles, and you can either send it the triangles directly, or send it blocks that will turn into those triangles, but it'll render the same triangles either way. It'll do it quicker if you just hand them to it directly, instead of making it jump through hoops. So if you can fit all the triangles you want to render into VRAM uncompressed, the compression will only cost you. The question then becomes whether you can fit all those triangles into VRAM, and how much work it will take to only keep the relevant ones in VRAM.
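To make the tradeoff concrete, here's a back-of-envelope sketch in Python. All the numbers are illustrative assumptions (a 16x16x16 chunk, one byte per voxel, position-only vertices), not measurements from any real engine; the point is just how much bigger fully expanded triangle data is than the block grid it came from.

```python
# Back-of-envelope comparison of storing a chunk as voxels vs. as raw
# triangle data. All sizes are illustrative assumptions.

CHUNK = 16 ** 3               # voxels in a 16x16x16 chunk
VOXEL_BYTES = 1               # one byte per voxel: a block-type ID

TRIS_PER_CUBE = 12            # a cube renders as 12 triangles
FLOATS_PER_VERT = 3           # x, y, z position only
BYTES_PER_FLOAT = 4
TRI_BYTES = TRIS_PER_CUBE * 3 * FLOATS_PER_VERT * BYTES_PER_FLOAT

voxel_size = CHUNK * VOXEL_BYTES       # the "compressed" form
triangle_size = CHUNK * TRI_BYTES      # worst case: every voxel solid
print(triangle_size // voxel_size)     # 432x larger uncompressed
```

Real meshes are far smaller than this worst case (hidden faces get culled, and vertices carry normals and UVs too), but the ratio is why voxels look like compression in the first place.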
There's also the issue of how fast it is to send uncompressed vs compressed data to VRAM, but that seems relatively minor - initial loading time is a fairly minimal thing to be considering, since it only happens once every time you initialize a scene, and streaming the updates after that seems like a fairly solved problem too, based on my (admittedly limited) research.
And of course, having arbitrary triangles gives you tons of advantages too - you can draw whatever you like, however you like it.
So what are voxels good for? Well, as this procworld post points out, procedural generation is a lot easier when you're making things on a grid. And as Minecraft demonstrated, it makes the UI a breeze for malleable terrain. But like I said, you don't need to have a single data structure in your game; use each one only where appropriate. Once you've gotten the advantages of one data structure, it's fine to convert your results to something that's more useful, like turning your voxels from your easier terrain generation into a triangle mesh for rendering. Your physics can still treat things like boxes. Your UI can still mostly add and remove blocks. But your render code doesn't have to think the same way the rest of it does, and that also gives you the freedom to make your UI a little more free, too - maybe you want to add slope to your blocks, or subdivide them. If your whole game is predicated on the idea of a rigid grid with just a block type in each cell, and every part of your "engine" from terrain gen to rendering relies on that idea, then making those changes is going to be really obnoxious.
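A minimal sketch of that voxels-to-triangles conversion might look like the following. It emits a face only where a solid voxel borders an empty cell, so hidden interior geometry never reaches the renderer; the set-of-cells representation and the function names are my own, not any particular engine's.

```python
# Minimal voxel-to-mesh sketch: emit a face only where a solid cell
# borders an empty one. Names and representation are illustrative.

FACE_DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def visible_faces(voxels):
    """voxels: a set of (x, y, z) solid cells. Each returned face is
    (cell, direction); a real mesher would expand each face into two
    triangles' worth of vertex data here."""
    faces = []
    for cell in voxels:
        x, y, z = cell
        for d in FACE_DIRS:
            if (x + d[0], y + d[1], z + d[2]) not in voxels:
                faces.append((cell, d))
    return faces

# Two touching cubes share one hidden interface: 12 faces become 10.
print(len(visible_faces({(0, 0, 0), (1, 0, 0)})))  # 10
```

Once this runs, the voxel grid has done its job; the faces can be baked into a plain vertex buffer and the renderer never needs to know blocks were involved.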
Obviously, there's a cost to converting between data structures, but don't forget the option entirely; just weight it appropriately against the others.
Sunday, May 20, 2012
AI evolution sim idea
- Poll a gate
- If the entity has not completed the previous gate a certain number of times, reduce the request to their max gate
- Send the entity a stream of bits
- Give the entity a certain timespan in which to start responding. If it fails, score 0 and return
- Once the above timespan has elapsed, entities may not stop sending bits for more than a certain amount of time, or they score 0
- If an entity sends more than the requested number of bits, it scores 0
- A correct answer yields points based on the gate's difficulty, an incorrect answer gets 0
Entities can:
- Try to send an entity data
- Try to listen for signals being sent
- Transfer food to another entity
- Various boolean algebra
- Jump to another code location if the top stack element is negative
- Reproduce
- Read and write from their memory space (one space for both code and data)
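As a toy sketch, an interpreter for an instruction set along these lines might look like the following. The opcode names, the (op, arg) encoding, and the exact semantics are all my own illustrative assumptions, not the actual design; real entities would presumably execute raw bits out of their shared code/data space.

```python
# Toy interpreter sketch for an entity instruction set like the one
# listed above. Opcodes and encoding are illustrative assumptions.

def run(code, memory, max_steps=1000):
    """Execute until the program counter falls off the end or
    max_steps is hit; memory doubles as the writable data space."""
    stack, pc = [], 0
    for _ in range(max_steps):
        if not (0 <= pc < len(code)):
            break
        op, arg = code[pc]
        pc += 1
        if op == "push":
            stack.append(arg)
        elif op == "nand":                 # one bit of boolean algebra
            b, a = stack.pop(), stack.pop()
            stack.append(1 - (a & b))
        elif op == "jneg":                 # jump if top of stack is negative
            if stack and stack[-1] < 0:
                pc = arg
        elif op == "load":                 # read from the memory space
            stack.append(memory[arg])
        elif op == "store":                # write to the memory space
            memory[arg] = stack.pop()
    return memory

# Push -1, then jump over the instructions that would write 99 to cell 1.
program = [("push", -1), ("jneg", 4), ("push", 99), ("store", 1)]
print(run(program, [0, 0]))  # [0, 0] -- the store was skipped
```

The appeal of a machine this small is that random mutations to the code stream still usually mean *something*, which is what an evolution sim needs.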
Friday, July 23, 2010
Voltorb Flip – Move Recommendations
Safety
The first step to recommending a move is to filter by safety; safety overrides all other factors when deciding which move to take. For each tile, I simply check that tile in every solution; the tile(s) that have the fewest solutions giving them voltorbs are the safest.
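As a minimal sketch, that filter might look like this in Python, assuming solutions are stored as dicts mapping tile name to value, with 0 encoding a voltorb (the encoding and helper names are mine, not the released tool's):

```python
# Safety filter sketch: the safest tiles are the ones that are
# voltorbs in the fewest surviving solutions. 0 encodes a voltorb.

def safest_tiles(solutions, unknown_tiles):
    voltorb_count = {t: sum(1 for s in solutions if s[t] == 0)
                     for t in unknown_tiles}
    best = min(voltorb_count.values())
    return [t for t in unknown_tiles if voltorb_count[t] == best]

sols = [{"a": 0, "b": 2}, {"a": 1, "b": 2}, {"a": 0, "b": 3}]
print(safest_tiles(sols, ["a", "b"]))  # ['b'] -- never a voltorb
```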
Value
The next filter is by estimated value; this is done by going through each solution for each tile, and seeing which tile averages the highest value across all the solutions (in the actual code, I don’t use averages, I just use sums, because the relative magnitude of the averages is the same as the relative magnitude of the sums, since they all divide by the same number).
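In sketch form (same assumed representation as above), the value filter ranks candidate tiles by their summed value across all solutions, since sums rank identically to averages here:

```python
# Value filter sketch: rank tiles by the sum of their values across
# all solutions; dividing by the solution count wouldn't change order.

def best_value_tiles(solutions, candidates):
    totals = {t: sum(s[t] for s in solutions) for t in candidates}
    best = max(totals.values())
    return [t for t in candidates if totals[t] == best]

sols = [{"a": 1, "b": 3}, {"a": 2, "b": 1}]
print(best_value_tiles(sols, ["a", "b"]))  # ['b'] (sum 4 vs. 3)
```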
Helpfulness
So far we've found the safest tiles, and within those, found the ones that are likeliest to be high. Now there's a third filter, which hasn't yet been built into the released code. This filter determines how much each tile is likely to help the computer. Basically, it looks at each tile and determines which tiles will most reduce the number of remaining solutions (if revealing a tile takes out half the remaining solutions, then we'll be that much more accurate in future calculations).
So we look through each remaining tile, and find the one that maximizes the average number of reductions by putting it into the formula
avg_reductions = P1 * R1 + P2 * R2 + P3 * R3, where P is the probability of an event, and R is the amount of reduction it gives. P can also be expressed as N/T, where N is the number of solutions that have that specific number, and T is the total number of solutions.
Further Math For Efficiency’s Sake
Since the amount of reduction R for any given value is (T - N), this leads to a formula I prefer:
avg_reductions = (N1/T)(T – N1) + (N2/T)(T – N2) + (N3/T)(T – N3)
or
avg_reductions = (T*sum(N1..N3) - sum(N1^2..N3^2)) / T
Since all the tiles are going through the formula using the same T, and we only care about relative magnitude, we can forget about the division by T altogether, and it ends up being much simpler to calculate overall.
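In code, the helpfulness score might look like the following sketch (same assumed solution representation as the earlier filters; the division by T is dropped, as described):

```python
# Helpfulness sketch: score = T*sum(N1..N3) - sum(N1^2..N3^2), where
# N_v counts the solutions giving the tile value v. Higher means
# revealing this tile cuts down the solution set more, on average.

def helpfulness(solutions, tile):
    T = len(solutions)
    score = 0
    for v in (1, 2, 3):
        n = sum(1 for s in solutions if s[tile] == v)
        score += n * (T - n)   # T*N - N^2, summed over the three values
    return score

sols = [{"a": 0, "b": 1}, {"a": 0, "b": 1},
        {"a": 1, "b": 1}, {"a": 2, "b": 1}]
print(helpfulness(sols, "a"))  # 6 -- splits the solution set
print(helpfulness(sols, "b"))  # 0 -- reveals nothing new
```

Note the degenerate case: a tile with the same value in every solution scores 0, which is exactly right, since flipping it teaches the solver nothing.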
Thursday, July 22, 2010
Voltorb Flip – Optimizations
The simplest optimization to explain is as follows: because empty tiles are filled in order when solving the board, I don't have to recheck the ones that have already been solved. Basically, I just save the row and column every time I recurse into the solve function, so each call starts at the last tile solved rather than searching from the beginning.
The “Move” Optimization
My original method essentially said:
1. Solve the board
2. Get the user's move, and store it in the board
3. Go to step 1
That re-solves the whole board after every move, throwing away everything we already know. The optimization is to keep the list of solutions between moves and just remove the ones that contradict the tile the move revealed. That raises the question of how to store the solutions:
- Using a vector; vectors allow for constant-time random access, and linear-time removal/insertion at any place but the end
- Using a list; lists allow for linear-time random access, but constant-time removal/insertion
- Using a deque, which is similar in efficiency to a vector, but with constant-time insertion/removal at both ends
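As a sketch (the released tool is C++, so take the container question above seriously there; in Python, building a fresh list is the idiomatic equivalent of the removal pass):

```python
# Move optimization sketch: rather than re-solving from scratch, keep
# only the solutions consistent with the tile the move just revealed.

def apply_move(solutions, tile, revealed):
    return [s for s in solutions if s[tile] == revealed]

sols = [{"a": 1, "b": 2}, {"a": 1, "b": 3}, {"a": 2, "b": 2}]
print(len(apply_move(sols, "a", 1)))  # 2 solutions survive
```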
The Solver Optimization
This is a hugely important optimization, despite the fact that it's very few lines of code. First of all, realize that in a 25-tile board, where each tile can be 4 possible things, there are 4^25 permutations. If we generated each one and then checked whether it was valid, it would take an unfathomable amount of time (4^25 is roughly the number of characters you could store in .TXT files on a 1,000,000 GB hard drive).
What can be done, however, is after a tile is modified, no matter which tile it is, recheck the row and column that tile was in. If the row/column is no longer valid, we don’t need to try and solve the rest of the board (none of the solutions will be valid, because they contain some impossible row/column), we can simply move on to the next step without recursing.
Now, how does one check if a row/column is valid? It’s actually quite easy. First of all, we check if the sum of all the tiles in the row/column exceeds the sum the board tells us for that row/column; if it’s too big, it’s invalid. Next, we check if the number of voltorbs exceeds the number the board tells us for that row/column; if it’s too big, it’s invalid.
Now it gets a little trickier. We take the number of unknown tiles left in the row/column, and we subtract the number of voltorbs we have yet to find in that row/column. That gives us the number of tiles that actually have values. We multiply the number of valuable tiles by 3 (the maximum possible value), and add it to whatever sum we already have for that row/column. If it’s less than the sum the board tells us for that row/column, it’s invalid. In other words, this just assumes the rest of the tiles are all 3s, so if you can’t hit the required sum with all 3s, then there’s no way it can be valid.
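Putting those three checks together, the validity test might be sketched like this (a line is a list of tiles; the 0/1-3/None encoding and the function name are my own):

```python
# Row/column validity sketch. A line is a list of tiles:
# 0 = voltorb, 1-3 = values, None = not yet filled in.

def line_plausible(tiles, target_sum, target_voltorbs):
    known = [t for t in tiles if t is not None]
    total, voltorbs = sum(known), known.count(0)
    if total > target_sum or voltorbs > target_voltorbs:
        return False
    unknown = len(tiles) - len(known)
    value_tiles = unknown - (target_voltorbs - voltorbs)
    # Assume every remaining value tile is a 3; if even that can't
    # reach the target sum, the line is a dead end.
    return total + 3 * value_tiles >= target_sum

print(line_plausible([3, None, None, None, None], 5, 2))  # True
print(line_plausible([1, 1, None, None, None], 9, 2))     # False
```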
I lied a little on the Solver blog post; it's not quite as simple as what I wrote; there's that extra validation step every time you change a tile.
Voltorb Flip – Solver
1. Find the first empty tile (left-right, top-down, start at top-left)
2. If none are found, you're done; otherwise, continue to step 3.
3. Pretend the tile's a voltorb
4. Try and solve the rest of the board
5. If the rest of the board was solved, add the solution to our list of possible solutions
6. If the rest of the board was unsolvable, ignore it
7. Repeat step 3 for all possible tile states (voltorb, 1, 2, and 3)
As for steps 5 and 6, when I say the board was solved, I mean it was solved correctly; that is, all the rows and columns added up to what they should have, and contained all the voltorbs they should have.
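A sketch of those steps on a tiny 2x2 board might look like this (the real game is 5x5; 0 encodes a voltorb, the helper names are mine, and validity is only checked once the board is full, without the pruning from the optimizations post):

```python
# Recursive solver sketch on a 2x2 board. 0 encodes a voltorb;
# validity is checked only when the board is full.

def lines(board, n):
    rows = [board[i * n:(i + 1) * n] for i in range(n)]
    cols = [board[i::n] for i in range(n)]
    return rows + cols

def solve(board, targets, pos=0, out=None):
    """targets: (sum, voltorb count) for each row, then each column."""
    n = int(len(board) ** 0.5)
    if out is None:
        out = []
    if pos == len(board):                     # step 2: no empty tiles left
        if all(sum(l) == s and l.count(0) == v
               for l, (s, v) in zip(lines(board, n), targets)):
            out.append(board[:])              # step 5: record the solution
        return out
    for value in (0, 1, 2, 3):                # steps 3 and 7
        board[pos] = value
        solve(board, targets, pos + 1, out)   # step 4: solve the rest
        board[pos] = None
    return out

# Rows sum to 3 and 3 (with 0 and 1 voltorbs); columns to 1 and 5.
print(solve([None] * 4, [(3, 0), (3, 1), (1, 1), (5, 0)]))
# [[1, 2, 0, 3]] -- the only consistent board
```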
So that’s it! I’ll post again shortly, detailing some of the optimizations I made in the above algorithm, and the recommendation algorithm.
Wednesday, July 21, 2010
Voltorb Flip - Update History
Version 2.1 wasn't really published. It was a very temporary update just patching the giant memory leak (and subsequent crashing) of version 2.0
Version 2.2 is up now. I took the speed optimization of 2.0, the non-crashing of 2.1, mixed them together, and got 2.2. For those of you who are interested, the crashing was an issue with recursive member function calls, a vector which was a member of the thing it was copying, and a copy constructor.
Version 2.3 includes a speed fix for 2.2, which may be going slowly for some people. It also includes a title bar-type thing, so now you can be sure of what version you're running. Also made some minor formatting/text changes and cleaned up the code a little.
Version 2.4 includes some minor text/formatting changes and yet another speed fix. This fix only applies to moves after the initial board setup, but it should be a rather noticeable jump. However, it's a little hard to test whether or not it still works; it's a simple change, and it seems to work at a glance and basic use, but if it does strange things, then let me know!
Version 2.4.2 fixed two bugs in version 2.4: it was giving inaccurate data, and 2.4.0 was crashing. If you downloaded 2.4 before 11:30 EST 17/07/2010, then redownload.
Version 2.5.0 has a few formatting changes (XX is used instead of OO for recommended moves, and -- is used for unknown tiles). It also introduces a simple, quick system that will automatically include all the tiles that are guaranteed to be 1 or a voltorb, and it will no longer recommend tiles that are guaranteed 1s.
Version 2.5.1 has a few formatting changes (V for voltorb) and also adds a relatively important patch for 2.5.0. 2.5.0 was having issues with getting new solutions after moves, so it was getting messed up on probability and value predictions.