08 Jan 2009
For at least 10 years GPU vendors have been talking about “Pixar Quality”
graphics. But what does that mean? Well, according to this lecture on
The Design of RenderMan,
the original goals for the REYES architecture were
- 3000 x 1667 pixels (5 MP)
- 80M Micropolygons (each 1/4th of a pixel in size, depth complexity of 4)
- 16 samples per pixel
- 150K geometric primitives
- 300 shading flops per micropolygon
- 6 textures per primitive
- 100 1MB textures
That’s a shading rate of 80M * 300 * 30 Hz = 720 Gflops. (They were probably
more concerned about 24 Hz, but for games 30 Hz is better.)
In comparison, I think the peak shader flops of high-end 2008-era video cards
are in the 1 TFlop range. (The Xbox 360’s Xenos is 240 Gflops; the PS3’s GPU is
a bit less.) Now, GPU vendors typically quote multiply-accumulate flops, since
counting each multiply-add as two operations doubles the number. So it’s more
realistic to say that 2008-era video cards are in the 500 Gflop range. So we
really are entering the era of “Pixar Quality” graphics.
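Here’s the same arithmetic as a few lines of Python, if you want to plug in
your own frame rate or shading budget:

    # Shading-rate math from the REYES goals listed above.
    micropolygons_per_frame = 80e6   # quarter-pixel micropolygons, depth complexity 4
    flops_per_micropolygon = 300
    frames_per_second = 30           # film cares about 24; games want 30

    shading_flops = micropolygons_per_frame * flops_per_micropolygon * frames_per_second
    print("%.0f Gflops" % (shading_flops / 1e9))   # prints "720 Gflops"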
31 Dec 2008
A very thorough talk describing the Nintendo Wii game console security model
and the bugs and weaknesses that allowed the Wii to be compromised:
Console Hacking 2008: Wii Fail
In a nutshell, security is provided by an embedded ARM CPU that sits between the
main CPU and the IO devices, and handles all the IO. The two main flaws were (a) a
bug in the code that compared security keys, which made it possible to
forge security keys, and (b) secret break-once-run-everywhere information was
stored unencrypted in RAM, where it could be extracted using hardware
modifications.
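The presentation has the full details of flaw (a); as I understand it, the
comparison treated the binary hash like a NUL-terminated C string. Here’s a
rough Python sketch of that class of bug (my own reconstruction, not the
actual code from the talk):

    import hashlib

    def broken_compare(a, b):
        # The flaw, re-created: comparing 20-byte binary hashes as if they
        # were NUL-terminated C strings, so the comparison stops at the
        # first 0x00 byte instead of checking every byte.
        return a.split(b"\x00")[0] == b.split(b"\x00")[0]

    def signature_ok(expected_hash, content):
        return broken_compare(expected_hash, hashlib.sha1(content).digest())

    # If the expected hash starts with a zero byte, an attacker just tweaks
    # some ignored padding in the content until its SHA-1 also starts with
    # 0x00 (roughly a 1-in-256 brute force), and the "signature" check passes.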
There’s a nice table at the end of the presentation showing a
number of recent consumer devices, what their security model was, and how long
it took to break them.
The PS3 is the only current console that remains unbroken.
Its security model seems similar to the Xbox 360’s, but somewhat weaker. That
it’s still unbroken seems to be due to the existence of an official PS3
Linux port, which means most Linux kernel hackers aren’t motivated to attack
the PS3’s security. (Only the ones who want full access to the GPU from Linux
are motivated, and only to the extent needed to reach the GPU.)
26 Dec 2008
…as seen on the Beyond3D GPGPU
forum, here
are the presentations from the recent (December 12th 2008) “Beyond
Programmable Shading” course:
SIGGRAPH Asia 2008: Parallel Computing for Graphics: Beyond Programmable Shading
There
are good presentations from both GPU vendors and academics. My favorites are
the Intel ones on Larrabee, just because I’m so interested
in that architecture:
Parallel Programming on Larrabee -
describes the Larrabee fiber/task programming model.
Next-Generation Graphics on Larrabee
- how Larrabee’s
standard renderer is structured, and how it can be extended / modified.
IBM / Sony missed a bet by not presenting here. That’s too bad, because Cell sits
between the ATI / NVIDIA parts and Larrabee in terms of programmability. And
Cell’s been available for long enough that there should be a number of
interesting results to report.
Note to self: consider buying a PS3 and
learning Cell programming, just to get ready for Larrabee. Heh, yeah, that’s
the ticket. Being able to play PS3-specific games like Little Big Planet and
Flower would be just a coincidental bonus.
08 Dec 2008
This weekend I reorganized my home source code projects. I have a number of
machines, and over the years each one had accumulated several small source-
code projects. (Python scripts, toy games, things like that.) I wanted to put
these projects under source code control. I also wanted to make sure they were
backed up. Most of these little projects are not ready to be published, so I
didn’t want to use one of the many web-based systems for source-code
management.
After some research, I decided to use replicated git repositories.
I created a remote git repository on an Internet-facing machine, and then
created local git repositories on each of my development machines. Now I can
use git push and git pull to keep the repositories synchronized. I use git’s
built-in ssh transport, so the only thing I had to do on the Internet-facing
machine was make sure that the git executables were in the non-interactive
ssh shell’s path. (Which I did by adding them to the path in my .bashrc file.)
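In case it’s useful to anyone else, here’s roughly what the setup looks like,
written up as a little Python script (my scripting language of choice). The
host name and paths are just placeholders for my actual machines:

    import os
    import subprocess

    def git(*args, cwd=None):
        """Run a git command, raising an exception if it fails."""
        subprocess.check_call(("git",) + args, cwd=cwd)

    project = os.path.expanduser("~/src/toy-game")                      # placeholder project
    remote = "ssh://me@my-server.example.com/home/me/git/toy-game.git"  # placeholder bare repo

    # One-time step, done on the Internet-facing machine itself:
    #     git init --bare /home/me/git/toy-game.git

    # On each development machine: put the project under git and point it at
    # the shared repository over git's built-in ssh transport.
    git("init", cwd=project)
    git("add", ".", cwd=project)
    git("commit", "-m", "Initial import", cwd=project)
    git("remote", "add", "origin", remote, cwd=project)
    git("push", "origin", "master", cwd=project)

    # After that it's just "git pull" before working and "git push" when done
    # (or whenever the machine is back on the network).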
Git’s
ability to work off-line came in handy this Sunday, as I was attending an
elementary-school chess tournament with my son. Our local public schools don’t
have open WiFi, so there was no Internet connectivity. But I was able to
happily work away using my local git, and later easily push my changes back to
the shared repository.
19 Nov 2008

I
just tried creating an avatar on Microsoft’s new Xbox dashboard. As you can
see (at least when the Microsoft server isn’t being hammered) on the left,
they provide a URL for displaying your current Avatar on a web page.
The
character creation system is not too bad. In some ways it’s more flexible than
Nintendo’s Mii (for example more hair styles and clothing), but in other ways
it’s more limited (less control over facial feature placement).
My avatar
looks better on the Xbox than it does here – they should consider sharpening
the image. For example, the T-shirt my avatar is wearing has a thin-lined Xbox
symbol.
I think they do a good job of avoiding the Uncanny Valley effect. I
look forward to seeing how avatars end up being used in the Xbox world.
In
other Xbox-related news, I’m enjoying playing Banjo-Kazooie: Nuts & Bolts with my
son. All we have right now is the demo, but it’s great fun for anyone who
likes building things. It’s replaced Cloning Clyde as my son’s favorite Xbox
game.
19 Nov 2008
I’m a big fan of CPU architectures. Here’s a conversation between David Moon,
formerly of Symbolics (the Lisp Machine company), and Cliff Click Jr. of Azul
Systems. They discuss details of both the Lisp Machine architecture and Azul’s
massively multi-core Java machine.
A Brief Conversation with David Moon
The claim (from both Symbolics and Azul)
is that adding just a few instructions to an ordinary RISC instruction set can
make GC much faster. With so much code being run in Java these days I wonder
if we’ll see similar types of instructions added to mainstream architectures.
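To make that concrete, here’s a toy Python sketch (mine, not something from
the conversation) of the kind of per-store bookkeeping a generational
collector does in software. It’s checks like this, executed on every pointer
store or load, that a dedicated instruction could push into the hardware’s
fast path:

    # A software write barrier: every pointer store checks whether an
    # old-generation object now points at a young-generation one, and if so
    # remembers it for the next minor collection.  A few extra operations on
    # every store adds up, which is why handling the common case in a single
    # instruction (trapping only on the rare slow path) can pay off.

    class Obj(object):
        def __init__(self, generation=0):
            self.generation = generation   # 0 = young, 1+ = old

    remembered_set = set()   # old objects that point into the young generation

    def store_ref(obj, field, value):
        if obj.generation > 0 and value is not None and value.generation == 0:
            remembered_set.add(obj)        # rare slow path
        setattr(obj, field, value)         # the actual store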
20 Oct 2008
This one can: XKCD: Someone is Wrong on the Internet
– this comic’s punchline has saved me at least an hour a week since it
came out. That’s more than I’ve saved by learning Python. :-)
30 Sep 2008
The next generation of video game consoles should start in 2011 (give or take
a year). It takes about three years to develop a video game console, so work
should be ramping up at all three video game manufacturers.
Nintendo’s best course of action is pretty clear: do a slightly souped-up Wii. Perhaps with
lots of SD-RAM for downloadable games. Probably with low-end HD resolution
graphics. Definitely with an improved controller (for example, with the recent
gyroscope slice built in).
Sony and Microsoft have to decide whether to aim high or copy Nintendo.
Today a strong rumor has it that Sony is polling
developers to see what they think of a PlayStation 4 that is similar to a
cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price).
Sony PS4 Poll
That makes sense as Sony
has had problems this generation due to the high launch cost of the PS3. The
drawback of this scheme is that it does nothing to make the PS4 easy to
program.
In the last few weeks we’ve seen other rumors that Microsoft’s being
courted by Intel to put the Larrabee GPU in the next gen Xbox. I think that if
Sony aims low, it’s likely that Microsoft will be forced to aim low too,
which would make a Larrabee GPU unlikely. That makes me sad – in my dreams,
I’d love to see an Xbox 4 that used a quad-core x86 CPU and a 16-core Larrabee
GPU.
Well, the great thing is that we’ll know for sure, in about 3 years. :-)
24 Sep 2008
Team Blue Iris (that’s me and my kids!) took 19th place, the top finish for a
Python-based entry! Check out the
ICFP Programming Contest 2008 Video.
The winning team list is given at 41:45.
19 Sep 2008
That’s the question
Dean Kent asks over at Real World Tech’s
forums. I replied briefly there, but thought it would make a good blog post as
well.
I’m an Android developer, so I’m probably biased, but I think most people in
the developed world will have a smart phone eventually, just as most people
already have access to a PC and Internet connectivity.
I think the ratio of phone / PC use will vary greatly depending upon the
person’s lifestyle. If you’re a city-dwelling 20-something student you’re
going to be using your mobile phone a lot more than a 70-something suburban
grandpa.
This isn’t because the grandpa’s old fashioned, it’s because the two people
live in different environments and have different patterns of work and play.
Will people stop using PCs? Of course not. At least, not most people. There
are huge advantages to having a large screen and a decent keyboard and mouse.
But I think people will start to think of their phone and their PC as two
views on the same thing – the Internet. And that will shape what apps they
use on both the phone and the PC.
And this switching will be a strong force
towards having people move their data into the Internet cloud, so that they
can access their data from whatever device they’re using. This tendency will
be strongest with small-sized data that originates in the cloud (like email),
but will probably extend to other forms of data over time.