cbloom rants

9/24/2014

09-24-14 - Smart Phone Advice

Errmmm... I think I might finally get a smart phone.

I don't really want to waste any time researching about this because I fucking hate them and want as little as possible to do with this entire industry.

Definitely not anything Apple. I absolutely don't want to deal with any headaches of doing funny OS flashes or anything nonstandard that will complicate my life.

I'm thinking Google Nexus 5 because I understand it's the most minimal pure android ; I hate dealing with bloatware. I've already spent more time on this than I would like to. I actually prefer something smaller and lighter since I will only use it in emergencies. Galaxy S5 Mini? Not actually significantly smaller. Jesus christ.

It looks like "Straight Talk" is probably the right plan option for someone like me who will rarely use it. (?). I can't use anything T-Mobile because their coverage sucks around Seattle. Too bad because they seem to have the best pay-go plans.

One thing I have no idea about is how much bandwidth I need and whether paying per byte is okay or if I'll be fucked. If I accidentally browse to some web page that spews data at me, will that cost me a fortune? I don't like the idea of having to worry about that.

9/10/2014

09-10-14 - Suffix Trie EOF handling

I realized something.

In a Suffix Trie (or suffix sort, or anything similar), handling the end-of-buffer is a mess.

The typical way it's described in the literature is to treat the end-of-buffer (henceforth EOB) as if it is a special character with value out of range (such as -1 or 256). That way you can just compare strings that go up to EOB with other strings, and the EOB will always mismatch a normal character and cause those strings to sort in a predictable way.

eg. on "banana" when you insert the final "na-EOB" you compare against "nana" and wind up comparing EOB vs. 'n' to find the sort order for that suffix.
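The textbook approach looks roughly like this (just an illustrative sketch, not my actual trie code) :

#include <cstddef>

static const int EOB = 256; // out of range for a byte, so it mismatches every real character

static int SymbolAt(const unsigned char * buf, size_t bufSize, size_t pos)
{
    return ( pos < bufSize ) ? (int) buf[pos] : EOB;
}

// compare the suffixes starting at posA and posB ; <0,0,>0 like memcmp
static int CompareSuffixes(const unsigned char * buf, size_t bufSize, size_t posA, size_t posB)
{
    for(;;)
    {
        int a = SymbolAt(buf,bufSize,posA++);
        int b = SymbolAt(buf,bufSize,posB++);
        if ( a != b ) return a - b;
        if ( a == EOB ) return 0; // both hit EOB : same suffix (only possible when posA == posB)
    }
}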

The problem is that this is just a mess in the code. All over my suffix trie code, everywhere that I do a string lookup, I had to do extra special case checking for "is it EOB" and handle that case.

In addition, when I find a mismatch of "na-EOB" and "nan", the normal path of the code would be to change the prefix "na" into a branch and add children for the two different paths - EOB and "n". But I can't actually do that in my code because the child is selected by an unsigned char (uint8), and EOB is out of bounds for that variable type. So in all the node branch construction paths I have to special case "is it a mismatch just because of EOB, then don't create a branch". Blah blah.

Anyway, I realized that can all go away.

The key point is this :

Once you add a suffix that hits EOB (eg. the first mismatch against the existing suffixes is at EOB), then all future suffixes will also hit EOB, and that will be the deepest match for all of them.

Furthermore, all future suffix nodes can be found immediately using "follows"

eg. in the "banana" case, construction of the trie goes like :


"ban" is in the suffix trie
  (eg. "ban..","an..","n.." are all in the trie)

add "ana" :

we find the existing "an.."

the strings we compare are "anana-EOB" vs "ana-EOB"

so the mismatch hits EOB

That means all future suffixes will also hit EOB, and their placement in the tree can be found
just by using "follows" from the current string.

"ana-EOB" inserts at "anana"
"na-EOB" inserts at "nana"
"a-EOB" inserts at "ana"

That is, at the end of every trie construction when you start hitting EOB you just jump out to this special case of very simple follows addition.

So all the special EOB handling can be pulled out of the normal Trie code and set off to the side, which is lovely for the code clarity.
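That special-case tail loop might look roughly like this (node layout and helper names are hypothetical, not my actual trie code) :

#include <cstddef>

struct TrieNode
{
    TrieNode * follow;   // suffix link : the node for "xS" points at the node for "S"
    // ... children, buffer pointers, etc.
};

void AddLeafAt(TrieNode * node, size_t suffixStartPos); // hypothetical : record a suffix at this node

// insertPoint = deepest node reached by the first suffix that mismatched only because of EOB
void AddTailSuffixes(TrieNode * insertPoint, size_t firstTailPos, size_t bufSize)
{
    for ( size_t pos = firstTailPos; pos < bufSize; pos++ )
    {
        AddLeafAt(insertPoint, pos);        // this suffix's deepest match is right here
        insertPoint = insertPoint->follow;  // the next (shorter) suffix inserts at the follow
    }
}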

In practice it's quite common for files to end with a run of "000000000" which now gets swallowed up neatly by this special case.


ADD : if you don't care about adding every occurrence of each suffix, then it gets even simpler -

If you hit EOB when adding a string - just don't add it. A full match of that suffix already exists, and your suffix that goes to EOB can't match any lookup better than what's in there.

(note that when I say "hit EOB" I mean you are at a branch node, and your current character doesn't match any of the branches because it is EOB. You will still add leaves that go to EOB, but you will never actually walk down those leaves to reach EOB.)

8/31/2014

08-31-14 - DLI Image Compression

I got pinged about DLI so I had a look.

DLI is a closed source image compressor. There's no public information about it. It may be the best lossy image compressor in the world at the moment.

(ASIDE : H265 looks promising but has not yet been developed as a still image compressor; it will need a lot of perceptual work; also as in my previous test of x264 you have to be careful to avoid the terrible YUV and subsamplers that are in the common tools)

I have no idea what the algorithms are in DLI. Based on looking at some of the decompressed images, I can see block transform artifacts, so it has something like an 8x8 DCT in it. I also see certain clues that make me think it uses something like intra prediction. (those clues are good detail and edge preservation, and a tendency to preserve detail even if it's the wrong detail; the same thing that you see in H264 stills)

Anyway, I thought I'd run my own comparo on DLI to see if it really is as good as the author claims.

I tested against JPEG + packJPG + my JPEG decoder. I'm using an unfinished version of my JPEG decoder which uses the "Nosratinia modified" reconstruction method. It could be a lot better. Note that this is still a super braindead simple JPEG. No per-block quantizer. No intra prediction. Only 8x8 transforms. Standard 420 YCbCr. No trellis quantization or rate-distortion. Just a modern entropy coding back end and a modern deblocking decoder.

I test with my perceptual image tester imdiff . The best metric is Combo which is a linear combo of SCIELAB_MyDelta + MS_SSIM_IW_Y + MyDctDelta_YUV.

You can see some previous tests on mysoup or moses or PDI

NOTE : "dlir.exe" is the super-slow optimizing variant. "dli.exe" is reasonably fast. I tested both. I ran dlir with -ov (optimize for visual quality) since my tests are mostly perceptual. I don't notice a huge difference between them.

My impressions :

DLI and jpeg+packjpg+jpegdec are both very good. Both are miles ahead of what is commonly used these days (old JPEG for example).

DLI preserves detail and contrast much better. JPEG tends to smooth and blur things at lower bit rates. Part of this may be something like a SATD heuristic metric + better bit allocation.

DLI does "mangle" the image. That is, it gets the detail *wrong* sometimes, which is something that JPEG really never does. The primary shapes are preserved by jpeg+packjpg+jpegdec, they just lose detail. With DLI, you sometimes get weird lumps appearing that weren't there before. If you just look at the decompressed image it can be hard to spot, because it looks like there's good detail there, but if you A-B test the decompressed image against the original, you'll see that DLI is actually changing the detail. I saw this before when analyzing x264.

DLI is similar looking to x264-still but better.

DLI seems to have a special mode for gradients. It preserves smooth gradients very well. JPEG-unblock creates a stepped look because it's a series of ramps that are flat in the middle.

DLI seems to make edges a bit chunky. Smooth curves get steppy. jpeg+packjpg+jpegdec is very good at preserving a smooth curved edge.

DLI is the only image coder I've seen that I would say is definitely slightly better than jpeg+packjpg+jpegdec. Though it is worse in some ways, I think the overall impression of the decoded image is definitely better. Much better contrast preservation, much better detail energy level preservation.

Despite jpeg often scoring better than DLI on the visual quality metrics I have, DLI usually looks much better to my eyes. This is a failure of the visual quality metrics.


Okay. Time for some charts.

In all cases I will show the "TID Fit" score. This is a 0-10 quality rating with higher better. This removes the issue of SSIM, RMSE, etc. all being on different scales.

NOTE : I am showing RMSE just for information. It tells you something about how the coders are working and why they look different, where the error is coming from. In both cases (DLI and JPEG) the runs are optimized for *visual* quality, not for RMSE, so this is not a comparison of how well they can do on an RMSE contest. (dlir should be run with -or and jpeg should be run with flat quantization matrices at least).

(see previous tests on mysoup or moses or PDI )

mysoup :

moses :

porsche640 :

pdi1200 :


Qualitative Comparison :

I looked at JPEG and DLI encodings at the same bit rate for each image. Generally I try to look around 1 bpp (that's logbpp of 0) which is the "sweet spot" for lossy image compression comparison.

Here are the original, a JPEG, and a DLI of Porsche640.
Download : RAR of Porsche640 comparison images (1 MB)

What I see :

DLI has very obvious DCT ringing artifacts. Look at the lower-right edge of the hood, for example. The sharp line of the hood has ringing ghosts in 8x8 chunks.

DLI preserves contrast overall much better. The most obvious places are in the background - the leaves, the pebbles. JPEG just blurs those and drops a lot of high frequency detail, DLI keeps it much better. DLI preserves a lot more high frequency data.

DLI adds a lot of noise. JPEG basically never adds noise. For example compare the centers of the wheels. The JPEG just looks like a slightly smoothed version of the original. The DLI has got lots of chunkiness and extra variation that isn't in the original.

In a few places DLI really mangles the image. One is the A-pillar of the car, another is the shadow on the hood, also the rear wheel.

Both DLI and JPEG do the same awful thing to the chroma. All the orange in the gravel is completely lost. The entire color of the laurel bush in the background is changed. Both just produce a desaturated image.

Based on the scores and what I see perceptually, my guess is this : DLI uses an 8x8 DCT. It uses a quantization matrix that is much flatter than JPEG's.

8/27/2014

08-27-14 - LZ Match Length Redundancy

A quick note on something that does not work.

I've written before about the redundancy in LZ77 codes. ( for example ). In particular the issue I had a look at was :

Any time you code a match, you know that it must be longer than any possible match at lower offsets.

eg. you won't send a match of length 3 to offset 30514 if you could have sent offset 1073 instead. You always choose the lowest possible offset that gives you a given match length.

The easy way to exploit this is to send match lengths as the delta from the next longest match length at lower offset. You only need to send the excess, and you know the excess is greater than zero. So if you have an ML of 3 at offset 1073, and you find a match of length 4 at offset 30514, then you send {30514,+1}

To implement this in the encoder is straightforward. If you walk your matches in order from lowest offset to highest offset, then you know the current best match length as you go. You only consider a match if it exceeds the previous best, and you record the delta in lengths that you will send.
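In sketch form (struct and field names here are made up) :

#include <algorithm>
#include <cstdint>
#include <vector>

struct Match      { uint32_t offset; uint32_t length; };
struct DeltaMatch { uint32_t offset; uint32_t excess; }; // excess is always >= 1

std::vector<DeltaMatch> MakeDeltaMatches(std::vector<Match> matches, uint32_t minMatchLen)
{
    // walk candidates from lowest offset to highest
    std::sort(matches.begin(), matches.end(),
        [](const Match & a, const Match & b) { return a.offset < b.offset; });

    std::vector<DeltaMatch> out;
    uint32_t bestLen = minMatchLen - 1; // nothing shorter than minMatchLen is sendable
    for ( const Match & m : matches )
    {
        if ( m.length <= bestLen ) continue;               // a lower offset already gives this length
        out.push_back( { m.offset, m.length - bestLen } ); // send only the excess over the previous best
        bestLen = m.length;
    }
    return out;
}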

The same principle applies to the "last offsets" ; you don't send LO2 if you could send LO0 at the same length, so the higher index LO matches must be of greater length. And the same thing applies to ROLZ.

I tried this in all 3 cases (normal LZ matches, LO matches, ROLZ). No win. Not even a tiny one; the gains were all close to zero.

Part of the problem is that match lengths are just not where the bits are; they're small already. But I assume that part of what's happening is that match lengths have patterns that the delta-ing ruins. For example binary files will have patterns of 4 or 8 long matches, or in an LZMA-like coder certain patterns show up, like getting a 3-long match at certain pos&3 intervals after a literal, etc.

I tried some obvious ideas like using the next-lowest-length as part of the context for coding the delta-length. In theory you should be able to recapture something like "a next-lowest of 3 predicts a delta of 1" in places where an ML of 4 is likely. But I couldn't find a win there.

I believe this is a dead end. Even if you could find a small win, it's too slow in the decoder to be worth it.

7/15/2014

07-15-14 - I'm back

Well, the blog took a break, and now it's back. I'm going to try moderated comments for a while and see how that goes.

I also renamed the VR post to break the links from reddit and twitter, but it's still there.

7/14/2014

07-14-14 - Suffix-Trie Coded LZ

Idea : Suffix-Trie Coded LZ :

You are doing LZ77-style coding (eg. matches in the prior stream or literals), but send the matches in a different way.

You have a Suffix Trie built on the prior stream. To find the longest match for a normal LZ77 you would take the current string to code and look it up by walking it down the Trie. When you reach the point of deepest match, you see what string in the prior stream made that node in the Trie, and send the position of that string as an offset.

Essentially what the offset does is encode a position in the tree.

But there are many redundancies in the normal LZ77 scheme. For example if you only encode a match of length 3, then the offsets that point to "abcd.." and "abce.." are equivalent, and shouldn't be distinguished by the encoding. The fact that they both take up space in the numerical offset is a waste of bits. You only want to distinguish offsets that actually point at something different for the current match length.

The idea in a nutshell is that instead of sending an offset, you send the descent into the trie to find that string.

At each node, first send a single bit for does the next byte in the string match any of the children. (This is equivalent to a PPM escape). If not, then you're done matching. If you like, this is like sending the match length with unary : 1 bits as long as you're in a node that has a matching child, then a 0 bit when you run out of matches. (alternatively you could send the entire match length up front with a different scheme).

When one of the children matches, you must encode which one. This is just an encoding of the next character, selected from the previously seen characters in this context. If all offsets are equally likely (they aren't) then the correct thing is just Probability(child) = Trie_Leaf_Count(child) , because the number of leaves under a node is the number of times we've seen this substring in the past.
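A toy sketch of one descent step (the node layout and the coder interface here are hypothetical stand-ins, not a real implementation) :

#include <cstdint>

struct TrieNode;
struct TrieChild { uint8_t symbol; uint32_t leafCount; TrieNode * node; };
struct TrieNode  { TrieChild * children; int numChildren; };

struct ArithCoder // stand-in interface ; plug in a real binary/range coder
{
    void EncodeBit(int bit, uint32_t p1_num, uint32_t p1_den) { /* ... */ }
    void EncodeRange(uint32_t lo, uint32_t hi, uint32_t total) { /* ... */ }
};

// returns the matched child node, or 0 if the match ends here (the "escape")
TrieNode * CodeDescentStep(ArithCoder & ac, const TrieNode * node, uint8_t nextByte)
{
    uint32_t total = 0, lo = 0, hi = 0;
    TrieNode * matched = 0;
    for ( int i = 0; i < node->numChildren; i++ )
    {
        if ( node->children[i].symbol == nextByte )
        {
            matched = node->children[i].node;
            lo = total;
            hi = total + node->children[i].leafCount;
        }
        total += node->children[i].leafCount;
    }

    // one bit for "does the next byte match any child" (the PPM-escape-like event)
    ac.EncodeBit( matched ? 1 : 0, 1, 2 ); // 50/50 placeholder probability ; should be modeled

    if ( ! matched ) return 0;

    // select which child : P(child) = leafCount(child) / leafCount(node)
    ac.EncodeRange( lo, hi, total );
    return matched;
}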

(More generally the probability of offsets is not uniform, so you should scale the probability of each child using some modeling of the offsets. Accumulate P(child) += P(offset) for each offset under a child. Ugly. This is unfortunately very important on binary data where the 4-8-struct offset patterns are very strong.)

Ignoring that aside - the big coding gain is that we are no longer uselessly distinguishing offsets that only differ at higher match length, AND instead of just wasting those bits, we instead use them to make those offsets code smaller.

For example : say we've matched "ab" so far. The previous stream contains "abcd","abce","abcf", and "abq". Pretend that somehow those are the only strings. Normal LZ77 needs 2 bits to select from them - but if our match len is only 3 that's a big waste. This way we would say the next char in the match can either be "c" or "q" and the probabilities are 3/4 and 1/4 respectively. So if the length-3 match is a "c" we send that selection in only log2(4/3) bits = 0.415

And the astute reader will already be thinking - this is just PPM! In fact it is exactly a kind of PPM, in which you start out at low order (min match length, typically 3 or so) and your order gets deeper as you match. When you escape you jump back to order 3 coding, and if that escapes it jumps back to order 0 (literal).

There are several major problems :

1. Decoding is slow because you have to maintain the Suffix Trie for both encode and decode. You lose the simple LZ77 decoder.

2. Modern LZ's benefit a lot from modeling the numerical value of the offset in binary files. That's ugly & hard to do in this framework. This method is a lot simpler on text-like data that doesn't have numerical offset patterns.

3. It's not Pareto. If you're doing all this work you may as well just do PPM.

In any case it's theoretically interesting as an ideal of how you would encode LZ offsets if you could.

(and yes I know there have been many similar ideas in the past; LZFG of course, and Langdon's great LZ-PPM equivalence proof)

7/03/2014

07-03-14 - Oodle 1.41 Comparison Charts

I did some work for Oodle 1.41 on speeding up compressors. Mainly the Fast/VeryFast encoders got faster. I also took a pass at trying to make sure the various options were "Pareto", that is the best possible space/speed tradeoff. I had some options that were off the curve, like much slower than they needed to be, or just worse with no benefit, so it was just a mistake to use them (LZNib Normal was particularly bad).

Oodle 1.40 got the new LZA compressor. LZA is a very high compression arithmetic-coded LZ. The goal of LZA is as much compression as possible while retaining somewhat reasonable (or at least tolerable) decode speeds. My belief is that LZA should be used for internet distribution, but not for runtime loading.

The charts :

compression ratio : (raw/comp ratio; higher is better)

compressor VeryFast Fast Normal Optimal1 Optimal2
LZA 2.362 2.508 2.541 2.645 2.698
LZHLW 2.161 2.299 2.33 2.352 2.432
LZH 1.901 1.979 2.039 2.121 2.134
LZNIB 1.727 1.884 1.853 2.079 2.079
LZBLW 1.636 1.761 1.833 1.873 1.873
LZB16 1.481 1.571 1.654 1.674 1.674
lzmamax  : 2.665 to 1
lzmafast : 2.314 to 1
zlib9 : 1.883 to 1 
zlib5 : 1.871 to 1
lz4hc : 1.667 to 1
lz4fast : 1.464 to 1

encode speed : (mb/s)

compressor VeryFast Fast Normal Optimal1 Optimal2
LZA 23.05 12.7 6.27 1.54 1.07
LZHLW 59.67 19.16 7.21 4.67 1.96
LZH 76.08 17.08 11.7 0.83 0.46
LZNIB 182.14 43.87 10.76 0.51 0.51
LZBLW 246.83 49.67 1.62 1.61 1.61
LZB16 511.36 107.11 36.98 4.02 4.02
lzmamax  : 5.55
lzmafast : 11.08
zlib9 : 4.86
zlib5 : 25.23
lz4hc : 32.32
lz4fast : 606.37

decode speed : (mb/s)

compressor VeryFast Fast Normal Optimal1 Optimal2
LZA 34.93 37.15 37.76 37.48 37.81
LZHLW 363.94 385.85 384.83 391.28 388.4
LZH 357.62 392.35 397.72 387.28 383.38
LZNIB 923.66 987.11 903.21 1195.66 1194.75
LZBLW 2545.35 2495.37 2465.99 2514.48 2515.25
LZB16 2752.65 2598.69 2687.85 2768.34 2765.92
lzmamax  : 42.17
lzmafast : 40.22
zlib9 : 308.93
zlib5 : 302.53
lz4hc : 2363.75
lz4fast : 2288.58

While working on LZA I found some encoder speed wins that I ported back to LZHLW (mainly in Fast and VeryFast). A big one is to early out for last offsets; when I get a last offset match > N long, I just take it and don't even look for non-last-offset matches. This is done in the non-Optimal modes, and surprisingly it hurts compression almost not at all while helping speed a lot.
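In sketch form (the threshold value and helper names here are made up, not the actual Oodle code) :

#include <cstddef>
#include <cstdint>

// hypothetical hooks into the rest of the encoder :
void OutputLastOffsetMatch(size_t len);
size_t EncodeNormalStep(const uint8_t * cur, const uint8_t * end); // match find + heuristics + output

static size_t MatchLengthAt(const uint8_t * cur, const uint8_t * end, const uint8_t * vs)
{
    size_t len = 0;
    while ( cur+len < end && cur[len] == vs[len] ) len++;
    return len;
}

// returns the number of bytes consumed at this position
size_t EncodeStep(const uint8_t * cur, const uint8_t * end, size_t lastOffset)
{
    const size_t kEarlyOutLen = 16; // the "N" ; actual value is a tuning parameter

    size_t loLen = MatchLengthAt(cur, end, cur - lastOffset);
    if ( loLen >= kEarlyOutLen )
    {
        // long enough : just take the last-offset match and skip normal match finding
        OutputLastOffsetMatch(loLen);
        return loLen;
    }
    return EncodeNormalStep(cur, end);
}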

Four of the compressors are now in pretty good shape (LZA,LZHLW,LZNIB, and LZB16). There are a few minor issues to fix someday (someday = never unless the need arises) :

LZA decoder should be a little faster (currently lags LZMA a tiny bit).
LZA Optimal1 would be better with a semi-greedy match finder like MMC (LZMA is much faster to encode than me at the same compression level; perhaps a different optimal parse scheme is needed too).
LZA Optimal2 should seed with multi-parse.
LZHLW Optimal could be faster.
LZNIB Normal needs much better match selection heuristics; the ones I have are really just not right.
LZNIB Optimal should be faster; needs a better way to do threshold-match-finding.
LZB16 Optimal should be faster; needs a better 64k-sliding-window match finder.

The LZH and LZBLW compressors are a bit neglected and you can see they still have some of the anomalies in the space/speed tradeoff curve, like the Normal encode speed for LZBLW is so bad that you may as well just use Optimal. Put aside until there's a reason to fix them.


If another game developer tells me that "zlib is a great compromise and you probably can't beat it by much" I'm going to murder them. For the record :

zlib -9 :
4.86 MB/sec to encode
308.93 MB/sec to decode
1.883 to 1 compression

LZHLW Optimal1 :
4.67 MB/sec to encode
391.28 MB/sec to decode
2.352 to 1 compression
come on! The encoder is slow, the decoder is slow, and it compresses poorly.

LZMA in very high compression settings is a good tradeoff. In its low compression fast modes, it's very poor. zlib has the same flaw - they just don't have good encoders for fast compression modes.

LZ4 I have no issues with; in its designed zone it offers excellent tradeoffs.


In most cases the encoder implementations are :


VeryFast =
cache table match finder
single hash
greedy parse

Fast = 
cache table match finder
hash with ways
second hash
lazy parse
very simple heuristic decisions

Normal =
varies a lot for the different compressors
generally something like a hash-link match finder
or a cache table with more ways
more lazy eval
more careful "is match better" heuristics

Optimal =
exact match finder (SuffixTrie or similar)
cost-based match decision, not heuristic
backward exact parse of LZB16
all others have "last offset" so require an approximate forward parse

I'm mostly ripping out my Hash->Link match finders and replacing them with N-way cache tables. While the cache table is slightly worse for compression, it's a big speed win, which makes it better on the space-speed tradeoff spectrum.
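The cache table idea in sketch form (a toy, not the actual Oodle implementation ; sizes and hash are placeholders) :

#include <cstdint>
#include <cstring>

// Unlike hash->link chains, each hash bucket keeps only NUM_WAYS recent positions,
// so lookup cost is strictly bounded; some matches are missed, but it's much faster.
static const int HASH_BITS = 16;
static const int NUM_WAYS  = 4;

struct CacheTable
{
    uint32_t pos[1 << HASH_BITS][NUM_WAYS]; // candidate positions per bucket
};

static uint32_t Hash4(const uint8_t * p)
{
    uint32_t x; memcpy(&x, p, 4);
    return ( x * 2654435761u ) >> ( 32 - HASH_BITS );
}

// fetch candidates for the 4 bytes at base+curPos, then insert curPos into the bucket
static void LookupAndUpdate(CacheTable & ct, const uint8_t * base, uint32_t curPos,
                            uint32_t candidates[NUM_WAYS])
{
    uint32_t h = Hash4(base + curPos);
    for ( int w = 0; w < NUM_WAYS; w++ )
        candidates[w] = ct.pos[h][w];   // caller verifies & scores these as matches

    // simple FIFO replacement : shift ways down, newest goes in way 0
    for ( int w = NUM_WAYS-1; w > 0; w-- )
        ct.pos[h][w] = ct.pos[h][w-1];
    ct.pos[h][0] = curPos;
}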

I don't have a good solution for windowed optimal parse match finding (such as LZB16-Optimal). I'm currently using overlapped suffix arrays, but that's not awesome. Sliding window SuffixTrie is an engineering nightmare but would probably be good for that. MMC is a pretty good compromise in practice, though it's not exact and does have degenerate case breakdowns.


LZB16's encode speed is very sensitive to the hash table size.


-h12
24,700,820 ->16,944,823 =  5.488 bpb =  1.458 to 1
encode           : 0.045 seconds, 161.75 b/kc, rate= 550.51 mb/s
decode           : 0.009 seconds, 849.04 b/kc, rate= 2889.66 mb/s

-h13
24,700,820 ->16,682,108 =  5.403 bpb =  1.481 to 1
encode           : 0.049 seconds, 148.08 b/kc, rate= 503.97 mb/s
decode           : 0.009 seconds, 827.85 b/kc, rate= 2817.56 mb/s

-h14
24,700,820 ->16,491,675 =  5.341 bpb =  1.498 to 1
encode           : 0.055 seconds, 133.07 b/kc, rate= 452.89 mb/s
decode           : 0.009 seconds, 812.73 b/kc, rate= 2766.10 mb/s

-h15
24,700,820 ->16,409,957 =  5.315 bpb =  1.505 to 1
encode           : 0.064 seconds, 113.23 b/kc, rate= 385.37 mb/s
decode           : 0.009 seconds, 802.46 b/kc, rate= 2731.13 mb/s

If you accidentally set it too big you get a huge drop-off in speed. (The charts above show -h13 ; -h12 is more comparable to lz4fast (which was built with HASH_LOG=12)).

I stole an idea from LZ4 that helped the encoder speed a lot. (lz4fast is very good!) Instead of doing the basic loop like :


while(!eof)
{
  if ( match )
    output match
  else
    output literal
}

instead do :

while(!eof)
{
  while( ! match )
  {
    output literal
  }

  output match
}

This lets you make a tight loop just for outputting literals. It makes it clearer to you as a programmer what's happening in that loop and you can save work and simplify things. It winds up being a lot faster. (I've been doing the same thing in my decoders forever but hadn't done it in the encoder).
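A slightly more concrete version of that loop (a sketch ; the helpers below are hypothetical stand-ins for the real match finder and output code) :

#include <cstddef>
#include <cstdint>

bool FindMatchAt(const uint8_t * ptr, const uint8_t * end, size_t * off, size_t * len);
void OutputLiteralRun(const uint8_t * start, size_t count);
void OutputMatch(size_t off, size_t len);

void EncodeLoop(const uint8_t * ptr, const uint8_t * end)
{
    while ( ptr < end )
    {
        const uint8_t * runStart = ptr;
        size_t off = 0, len = 0;

        // tight inner loop : just step and test for a match
        while ( ptr < end && ! FindMatchAt(ptr, end, &off, &len) )
            ptr++;

        // emit the whole literal run in one call
        OutputLiteralRun(runStart, (size_t)(ptr - runStart));

        if ( ptr >= end )
            break;

        OutputMatch(off, len);
        ptr += len;
    }
}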

My LZB16 is very slightly more complex to encode than LZ4, because I do some things that let me have a faster decoder. For example my normal matches are all no-overlap, and I hide the overlap matches in the excess-match-length branch.

6/26/2014

06-26-14 - VR Impressions

NOTE : changed post title to break the link.

Yesterday I finally went to Valve and saw "The Room". This is a rather rambly post about my thoughts after experiencing it.

For those who have been under a rock (like me), Valve has got this amazing VR demo. It uses unique prototype hardware that provides very good positional head tracking and very low latency graphics. It's in a calibrated room with registration spots all over the walls. It's way way better than any other VR, it's the real thing.

There is this magic thing that happens, it does tickle your brain intuitively. Part of you thinks that you're there. I had the same experiences that I've heard other people recount - your body starts reacting; like when a sphere moves towards you, you flinch and try to dodge it without thinking.

Part of the magic is that it's good enough that you *want* to believe it. It's not actually good enough that it seems real. Even in the carefully calibrated Valve room, it's glitchy and things pop a bit, and you always know you're in a simulation. But you choose to ignore the problems. It felt like when you're watching a good movie, and if you were being rational you would say that this is all illogical and the green screening looks fucking terrible and that is physically impossible what he just did, but if it's good you just choose to ignore all that and go along for the ride. VR felt like that to me.

One of the cool things about VR is that there is an absolute sense of scale, because you are always the size of you. This gives you scale reference in a way that you never have in games. Which is also a problem. It's wonderful if you're making games where you play as a human, but you can't play as a giant (if you just scale down everything else, it feels like you're you in a world where everything else is tiny, not that you're bigger; scale is no longer relative, you are always you). You can't make the characters run at 60 mph the way we usually do in games.

As cool as it is, I don't see how you actually make games with it.

For one thing there are massive short term technical problems. The headset is heavy and uncomfortable. The lenses have to be perfectly aligned to your eyes or you get sick. The registration is very easy to mess up. I'm sure these will be resolved over time. The headset has a cable which is always in danger of tripping or strangling you, which is a major problem and technically hard to get rid of, but perhaps possible some day.

But there are more fundamental inherent problems. When I stepped off the ledge, I wanted to fall. But of course I never actually can. You make my character fall, but not my body? That's weird. Heck if my character steps up on something, I want to step up myself. You can only make games where you basically stand still. In the room with the pipes, I want to climb on the pipes. Nope, you can't - and probably never can. Why would I want to be in a virtual world if I can't do anything there? I don't know how you even walk around a world without it feeling bizarre. All the Valve demos are basically you stuck in a tiny box, which is going to get old.

How do you ever make a game where the player character is moved without their own volition? If an NPC bumps me and pushes my avatar, what happens? You can't push my real human body, so it breaks the illusion. It seems to me that as soon as your viewpoint has a physical reaction with the virtual world and isn't just a viewer with no collision detection, it just doesn't work.

There's this fundamental problem that the software cannot move the player's viewpoint. The player must always get to move their own viewpoint with their head, or the illusion is broken (or worse, you get sick). This is just such a huge problem for games, it means the player can only be a ghost, or an omniscient observer in an RTS game, or other such things. Sure you can make games where you stand over an RTS world map and poke at it. Yay, it's a board game with fancy graphics. I see how it could be great as a sculpting or design tool. I see how it would be great for The Witness and similar games.

For me personally, it's so disappointing that you can't actually physically be in these worlds. The most exciting moments for me were some of the outdoor scenes, or the pipe room, where I just felt viscerally - "I want to run around in this world". What would be amazing for me would be to go in the VR world to alien planets with crazy strange plants and geology, and be able to run around it and climb on it. And I just don't see how that ever works. You can't walk around your living room, you'll trip on things or run into the wall. You can't ever step up or down anything, you have to be on perfectly flat ground all the time. You can't ever touch anything. (It's like a strip club; hey this is so exciting! can I interact with it? No? I have to just sit here and not move or touch anything? How fucking boring and frustrating. I'm leaving.)

At the very minimum you need gloves with amazing force feedback to give you some kind of tactile experience of the VR world, but even then it's just good for VR surgery and VR board games and things where you stand still and touch things. (and we all know the real app is VR fondling).

You could definitely make a killer car racing game. Put me in a seat with force feedback, and that solves all the physical interaction problems. (or, similarly, I'm driving a mech or a space ship or whatever; basically lock the player in a seat so you don't have to address the hard problems for now).

There are also huge huge software problems. Collision detection has to be polygon-perfect ; coarse collision proxies are no longer acceptable. Physics and animation have to be way better. Texture mapping and normal mapping just don't work. Billboard cards just don't work. We basically can't have trees or smoke or anything soft or complex for a long time, it's going to be a lot of super simple rigid objects. Skinned characters and painted on clothing (and just using textures to paint on geometry), none of it works. Flat shaded simple stuff is totally fine, but all the hacks we've used for so long are out the window.

I certainly see the appeal (for a software engineer) of starting from scratch on so many issues and working on the hard problems. Fun.

Socially I find VR rather scary.

One issue is the addictive nature of living in a VR world. Yes yes people are already addicted to their phones and facebook and WoW and whatever, but this is a whole new level. Plus it's even more disengaged from reality; it's one thing for everyone in a coffee shop these days to be staring at their laptops (god I hate you) but when they're all in headsets then interaction in the real world is completely over. I have no doubt that there will be a large class of people that live in the VR world and never leave their living room; Facebook will provide a "deliver pizza" button so that you don't even have to exit the simulation. It will be bad.

Perhaps more disturbing to me is how real and scary it can be. Just having a cube move into me was a kind of real physical fright that I haven't felt in a game. I think that being in a realistic VR world with people shooting each other would be absolutely terrifying and disgusting and really would do bad things to the brains of the players.

And if we wind up with evil overlords like Facebook or Apple or whoever controlling our VR world, that is downright dystopian. We all had our chance to say "no" to the rise of closed platforms when the Apple shit started to take off, and we all fucking dropped the ball (well, you did). Hell we did the same thing with the PATRIOT act. We're all just lying down and getting raped and not doing a damn thing about it and the future of freedom is very bleak indeed. (wow that rant went off the rails)

Anyway, I look forward to trying it again and seeing what people come up with. It's been a long time since I saw anything in games that made me say "yes! I want to play that!" so in that sense VR is a huge win.


Saved comments :

Tom Forsyth said... Playing as a giant is OK - grow the player's height, but don't move their eyes further apart. So the scale is unchanged, but the eyes are higher off the ground. July 3, 2014 at 7:45 PM

brucedawson said... Isn't a giant just somebody who is way taller than everybody else? So yeah, maybe if you 'just' scale down everyone else then you'll still feel normal size. But you'll also feel like you can crush the tiny creatures like bugs! Which is really the essence of being a giant. And yes, I have done the demo. July 3, 2014 at 8:56 PM

Grzegorz Adam Hankiewicz said... I don't understand how you say a steering wheel with force feedback solves any VR problem when the main reason I know I'm driving fast is how forces are being applied to my whole body, not that I'm holding something round instead of a gamepad. You mention it being jarring not being able to climb, wouldn't it be jarring to jump on a terrain bump inside your car and not feel gravity changes? Maybe the point of VR is not replicating dull life but simulating what real life can't possibly give us ever? July 4, 2014 at 3:08 AM

cbloom said... @GAH - not a wheel with force feedback (they all suck right now), but a *seat* like the F1 simulators use. They're quite good at faking short-term forces (terrain bumps and such are easy). I certainly don't mean that that should be the goal of VR. In fact it's quite disappointing that that is the only thing we have any hope of doing a good job of in the short term. July 4, 2014 at 7:33 AM

Stu said... I think you're being a bit defeatist about it, and unimaginative about how it can be used today. Despite being around 30 years old, the tech has only just caught up to the point whereby it can begin to travel down the path towards full immersion, Matrix style brain plugs, holodeck etc. This shit's gotta start somewhere, and can still produce amazing gaming - an obvious killer gaming genre is in any vehicular activity, incl. racing, normal driving, flying, space piloting, etc. Let the other stuff slowly evolve towards your eventual goal - we're in the 'space invaders' and 'pacman' era for VR now, and it works as is for a lot of gaming. July 4, 2014 at 9:11 AM

cbloom said... I'd love to hear any ideas for how game play with physical interaction will ever work. Haven't heard any yet. Obviously the goal should be physical interaction that actually *feels* like physical interaction so that it doesn't break the illusion of being there. That's unattainable for a long time. But even more modest is just how do you do something like walking around a space that has solid objects in it, or there are NPC's walking around. How do you make that work without being super artificial and weird and breaking the magic? In the short term we're going to see games that are basically board games, god games, fucking "experiences" where flower petals fall on you and garbage like that. We're going to see games that are just traditional shitty computer games, where you slide around a fucking lozenge collision proxy using a gamepad, and the VR is just a viewer in that game. That is fucking lame. What I would really hate to see is for the current trend in games to continue into VR - just more of the same shit all the time with better graphics. If people just punt on actually solving VR interaction and just use it as a way to make amazing graphics for fucking Call of Doody or Retro Fucking Mario Indie Bullshit Clone then I will be sad. When the top game is fucking VR Candy Soul-Crush then I will be sad. What is super magical and amazing is the feeling that you actually are somewhere else, and your body gets involved in a way it never has before, you feel like you can actually move around in this VR world. And until games are actually working in that magic it's just bullshit. July 4, 2014 at 9:36 AM

cbloom said... If you like, this is an exhortation to not cop the fuck out on VR the way we have in games for the last 20 years. The hard problems we should be solving in games are AI, animation/motion, physics. But we don't. We just make the same shit and put better graphics on it. Because that sells, and it's easy. Don't do that to VR. Actually work on how people interact with the simulation, and how the simulation responds to them. July 4, 2014 at 10:03 AM

Dillon Robinson said... Son, Bloom, kiddo, you've talking out of your ass again. Think before you speak.

.. and then it really went downhill.

6/21/2014

06-21-14 - The E46 M3

Fuck Yeah.

Oh my god. It's so fucking good.

When I'm working in my little garage office, I can feel her behind me, trying to seduce me. Whispering naughty thoughts to me. "Come on, let's just go for a little spin".

On the road, I love the way you can just pin the throttle on corner exit; the back end gently slides out, just a little wiggle. You actually just straighten the steering early, it's like half way through the corner you go boom throttle and straighten the lock and the car just glides out to finish the turn. Oh god it's like sex. You start the turn with your hands and then finish it with your foot, and it's amazing, it feels so right.

On the track there's a whole other feeling, once she's up to speed at the threshold of grip, on full tilt. My favorite thing so far is the chicane at T5 on the back side of Pacific. She just dances through there in such a sweet way. You can just lift off the throttle to get a little engine braking and set the nose, then back on throttle to make the rear end just lightly step out and help ease you around the corner. The weight transfer and grip front to back just so nicely goes back and forth, it's fucking amazing. She feels so light on her feet, like a dancer, like a boxer, like a cat.

There are a few things I miss about the 911. The brakes certainly, the balance under braking and the control under trail-braking yes, the steering feel, oh god the steering feel was good and it fucking SUCKS in the M3, the head room in the 911 was awesome, the M3 has shit head room and it's really bad with a helmet, the visibility - all that wonderful glass and low door lines, the feeling of space in the cabin. Okay maybe more than a few things.

But oh my god the M3. I don't care that I have to sit slightly twisted (WTF); I don't care that there are various reliability problems. I don't care that it requires super expensive annual valve adjustments. I forgive it all. For that engine, so eager, so creamy, screaming all the way through the 8k rev range with not a single dip in torque, for the quick throttle response and lack of electronic fudging, for the chassis balance, for the way you can trim it with the right foot. Wow.

06-21-14 - Suffix Trie Note

A small note on Suffix Tries for LZ compression.

See previously :

Sketch of Suffix Trie for Last Occurance

So. Reminder to myself : Suffix Tries for optimal parsing is clean and awesome. But *only* for finding the length of the longest match. *not* for finding the lowest offset of that match. And *not* for finding the longest match length and the lowest offset of any other (shorter) matches.

I wrote before about the heuristic I currently use in Oodle to solve this. I find the longest match in the trie, then I walk up to parent nodes and see if they provide lower offset / short length matches, because those may be also interesting to consider in the optimal parser.

(eg. for clarity, the situation you need to consider is something like a match of length 8 at offset 482313 vs. a match of length 6 at offset 4 ; it's important to find that lower-length lower-offset match so that you can consider the cost of it, since it might be much cheaper)
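The parent-gather in sketch form (node fields and names here are hypothetical, not my actual code) :

#include <cstdint>
#include <vector>

struct TrieNode
{
    TrieNode * parent;
    int        depth;    // length of the prefix this node represents
    uint32_t   lastPos;  // most recent buffer position seen at/below this node
};

struct MatchCandidate { int length; uint32_t offset; };

std::vector<MatchCandidate> GatherMatches(const TrieNode * deepest, uint32_t curPos,
                                          int longestLen, int minMatchLen)
{
    std::vector<MatchCandidate> out;
    uint32_t bestOffset = curPos - deepest->lastPos;
    out.push_back( { longestLen, bestOffset } );

    // walk up the parents ; only report ones that give a *lower* offset
    for ( const TrieNode * n = deepest->parent; n && n->depth >= minMatchLen; n = n->parent )
    {
        uint32_t off = curPos - n->lastPos;
        if ( off < bestOffset )
        {
            out.push_back( { n->depth, off } ); // NB: depth is only a *minimum* length (see below)
            bestOffset = off;
        }
    }
    return out;
}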

Now, I tested the heuristic of just doing parent-gathers and limited updates, and it performed well *in my LZH coder*. It does *not* necessarily perform well with other coders.

It can miss out on some very low offset short matches. You may need to supplement the Suffix Trie with an additional short range matcher, like even just a 1024 entry hash-chain matcher. Or maybe a [256*256*256] array of the last occurrence location of a trigram. Even just checking at offset=1 for the RLE match is helpful. Whether or not they are important depends on the back end coder, so you just have to try it.
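The trigram last-occurrence idea in sketch form (a toy, not production code) :

#include <cstdint>
#include <vector>

// a flat [256*256*256] table of the last position each 3-byte string was seen,
// used to catch very low offset short matches the trie heuristic can miss
struct TrigramTable
{
    std::vector<uint32_t> lastPos; // indexed by the 24-bit trigram value

    TrigramTable() : lastPos( 1u<<24, 0xFFFFFFFFu ) { }

    static uint32_t Trigram(const uint8_t * p)
    {
        return ((uint32_t)p[0]<<16) | ((uint32_t)p[1]<<8) | (uint32_t)p[2];
    }

    // returns the last position this trigram occurred (or 0xFFFFFFFF if never), then updates it
    uint32_t LookupAndUpdate(const uint8_t * buf, uint32_t pos)
    {
        uint32_t t = Trigram(buf + pos);
        uint32_t prev = lastPos[t];
        lastPos[t] = pos;
        return prev;
    }
};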

For LZA I ran into another problem :

The Suffix Trie exactly finds the length of the longest match in O(N). That's fine. The problem is when you go up to the parent nodes - the node depth is *not* the longest match length with the pointer there. It's just the *minimum* match length. The true match length might be anywhere up to *and including* the longest match length.

In LZH I was considering those matches with the node depth as the match length. And actually I re-tested it with the correct match length, and it makes very little difference.

Because LZA does LAM exclusion, it's crucial that you actually find what the longest ML is for that offset.

(note that the original LZMA exclude coder is actually just a *statistical* exclude coder; it is still capable of coding the excluded character, it just has very low probability. My modified version that only codes 7 bits instead of 8 is not capable of coding the excluded character, so you must not allow this.)

One bit of ugliness is that extending the match to find its true length is not part of the neat O(N) time query.

In any case, I think this is all a bit of a dead-end for me. I'd rather move my LZA parser to be forward-only and get away from the "find a match at every position" requirement. That allows you to take big steps when you find long matches and makes the whole thing faster.

old rants