…as seen on the Beyond3D GPGPU forum
are the presentations from the recent (December 12th 2008) “Beyond
Programmable Shading” course:
SIGGRAPH Asia 2008: Parallel Computing for Graphics: Beyond Programmable Shading
These are good presentations from both GPU vendors and academics. My favorite
presentations are the Intel ones on Larrabee, just because I’m so interested
in that architecture:
Parallel Programming on Larrabee -
describes the Larrabee fiber/task programming model.
Next-Generation Graphics on Larrabee
- how Larrabee’s
standard renderer is structured, and how it can be extended / modified.
IBM / Sony missed a bet by not presenting here. That’s too bad, because Cell sits
between the ATI / NVIDIA parts and Larrabee in terms of programmability. And
Cell’s been available for long enough that there should be a number of
interesting results to report.
Note to self: consider buying a PS3 and
learning Cell programming, just to get ready for Larrabee. Heh, yeah, that’s
the ticket. Being able to play PS3-specific games like Little Big Planet and
Flower would be just a coincidental bonus.
This weekend I reorganized my home source code projects. I have a number of
machines, and over the years each one had accumulated several small source-
code projects. (Python scripts, toy games, things like that.) I wanted to put
these projects under source code control. I also wanted to make sure they were
backed-up. Most of these little projects are not ready to be published, so I
didn’t want to use one of the many web-based systems for source-code hosting.
After some research, I decided to use replicated git repositories.
I created a remote git repository on an Internet-facing machine, and then
created local git repositories on each of my development machines. Now I can
use git push and git pull to keep the repositories synchronized. I use git’s
built-in ssh transport, so the only thing I had to do on the Internet-facing-
machine was make sure that the git executables were in the non-interactive-
ssh-shell’s path. (Which I did by adding them in my .bashrc file.)
Git’s ability to work off-line came in handy this Sunday, as I was attending an
elementary-school chess tournament with my son. Our local public schools don’t
have open WiFi, so there was no Internet connectivity. But I was able to
happily work away using my local git, and later easily push my changes back to
the shared repository.
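The setup above can be sketched end-to-end. In this sketch a local directory stands in for the Internet-facing machine; in real use the shared bare repository would sit behind an ssh URL (the paths and identity below are hypothetical), with the git executables on the non-interactive shell’s PATH.

```shell
# Minimal sketch of replicated git repositories. A temp directory plays
# the role of the Internet-facing machine; real use would substitute an
# ssh URL such as ssh://myhost/~/src/hub.git (hypothetical).
set -e
WORK=$(mktemp -d)

# 1. Create the shared repository (bare: no working tree).
git init --bare "$WORK/hub.git"

# 2. Each development machine clones it.
git clone "$WORK/hub.git" "$WORK/laptop"
git clone "$WORK/hub.git" "$WORK/desktop"

# 3. Work (possibly offline) on one machine, commit, then push.
cd "$WORK/laptop"
git config user.email "me@example.com"      # hypothetical identity
git config user.name  "Me"
git symbolic-ref HEAD refs/heads/master     # pin the unborn branch name
echo 'print("hello")' > toy.py
git add toy.py
git commit -m "Add toy script"
git push origin master

# 4. Pick the change up on another machine.
cd "$WORK/desktop"
git pull origin master
```

Because every clone carries the full history, the commit in step 3 can happen with no network at all; only the final push and pull need connectivity.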
I just tried creating an avatar on Microsoft’s new Xbox dashboard. As you can
see (at least when the Microsoft server isn’t being hammered) on the left,
they provide a URL for displaying your current Avatar on a web page.
The character creation system is not too bad. In some ways it’s more flexible than
Nintendo’s Mii (for example more hair styles and clothing), but in other ways
it’s more limited (less control over facial feature placement).
The avatar looks better on the Xbox than it does here – they should consider sharpening
the image. For example, the T-shirt my avatar is wearing has a thin-lined Xbox logo.
I think they do a good job of avoiding the Uncanny Valley effect. I
look forward to seeing how avatars end up being used in the Xbox world.
In other Xbox-related news, I’m enjoying playing Banjo-Kazooie: Nuts & Bolts with my
son. All we have right now is the demo, but it’s great fun for anyone who
likes building things. It’s replaced Cloning Clyde as my son’s favorite Xbox game.
I’m a big fan of CPU architectures. Here’s a conversation between David Moon,
formerly of Symbolics Lisp Machines, and Cliff Click Jr. of Azul Systems. They
discuss details of both the Lisp Machine architecture and Azul’s massively
multi-core Java machine.
A Brief Conversation with David Moon
The claim (from both Symbolics and Azul)
is that adding just a few instructions to an ordinary RISC instruction set can
make GC much faster. With so much code being run in Java these days I wonder
if we’ll see similar types of instructions added to mainstream architectures.
This one can: XKCD: Someone is Wrong on the Internet
– this comic’s punchline has saved me at least an hour a week since it
came out. That’s more than I’ve saved by learning Python. :-)
The next generation of video game consoles should start in 2011 (give or take
a year). It takes about three years to develop a video game console, so work
should be ramping up at all three video game manufacturers.
Nintendo’s best course-of-action is pretty clear: Do a slightly souped-up Wii. Perhaps with
lots of SD-RAM for downloadable games. Probably with low-end HD resolution
graphics. Definitely with an improved controller (for example, with the recent
gyroscope slice built in.)
Sony and Microsoft have to decide whether to aim high or copy Nintendo.
Today a strong rumor has it that Sony is polling
developers to see what they think of a PlayStation 4 that is similar to a
cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price.)
Sony PS4 Poll
That makes sense as Sony
has had problems this generation due to the high launch cost of the PS3. The
drawback of this scheme is that it does nothing to make the PS4 easier to develop for.
In the last few weeks we’ve seen other rumors that Microsoft’s being
courted by Intel to put the Larrabee GPU in the next gen Xbox. I think that if
Sony aims low, it’s likely that Microsoft will be forced to aim low too,
which would make a Larrabee GPU unlikely. That makes me sad – in my dreams,
I’d love to see an Xbox 4 that used a quad-core x86 CPU and a 16-core Larrabee GPU.
Well, the great thing is that we’ll know for sure, in about 3 years. :-)
Team Blue Iris (that’s me and my kids!) took 19th place, the top finish for a
Python-based entry! Check out the
ICFP Programming Contest 2008 Video.
The winning team list is given at 41:45.
That’s the question
Dean Kent asks over at Real World Tech’s
forums. I replied briefly there, but thought it would make a good blog post as well.
I’m an Android developer, so I’m probably biased, but I think most people in
the developed world will have a smart phone eventually, just as most people
already have access to a PC and Internet connectivity.
I think the ratio of phone / PC use will vary greatly depending upon the
person’s lifestyle. If you’re a city-dwelling 20-something student you’re
going to be using your mobile phone a lot more than a 70-something suburban grandpa.
This isn’t because grandpa is old-fashioned; it’s because the two people
live in different environments and have different patterns of work and play.
Will people stop using PCs? Of course not. At least, not most people. There
are huge advantages to having a large screen and a decent keyboard and mouse.
But I think people will start to think of their phone and their PC as two
views on the same thing – the Internet. And that will shape what apps they
use on both the phone and the PC.
And this switching will be a strong force
towards having people move their data into the Internet cloud, so that they
can access their data from whatever device they’re using. This tendency will
be strongest with small-sized data that originates in the cloud (like email),
but will probably extend to other forms of data over time.
Peter Moore on Xbox
I always liked Peter Moore, and I was sorry when he left Xbox for EA. He’s
given a very good interview on his time at Sega and Microsoft. (He ran the
Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of
insight into the Xbox part of the game industry.
Here he is talking about Rare:
...and you know, Microsoft, we'd had a tough time getting Rare back -
Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark...
but we were trying all kinds of classic Rare stuff and unfortunately I think
the industry had passed Rare by - it's a strong statement but what they were
good at, new consumers didn't care about anymore, and it was tough because
they were trying very hard - Chris and Tim Stamper were still there - to try
and recreate the glory years of Rare, which is the reason Microsoft paid a lot
of money for them and I spent a lot of time getting on a train to Twycross to
meet them. Great people. But their skillsets were from a different time and a
different place and were not applicable in today's market.
Sometimes I need to get a feature into the project I’m working on, but the
developer who owns the feature is too busy to implement it. A trick that seems
to help unblock things is if I hack up an implementation of the feature myself
and work with the owner to refine it.
This is only possible if your engineering culture allows it, but luckily both
Google’s and Microsoft’s do, at least at certain times in the product lifecycle
when the tree isn’t frozen.
By implementing the feature myself, I’m (a) reducing
risk, as we can see the feature sort of works, (b) making it much easier for
the overworked feature owner to help me, as they only have to say “change
these 3 things and you’re good to go”, rather than having to take the time to
educate me on how to implement the feature, (c) getting a chance to implement
the feature exactly the way I want it to work.
Now, I can think of a lot of
situations where this approach won’t work: at the end of the schedule where no
new features are allowed, in projects where the developer is so overloaded
that they can’t spare any cycles to review the code at all, or in projects
where people guard the areas they work on.
But I’ve been surprised how well it
works. And it’s getting easier to do, as distributed version control systems
become more common, and people become more comfortable working with multiple
branches and patches.