19 Nov 2008

I just tried creating an avatar on Microsoft’s new Xbox dashboard. As you can see on the left (at least when the Microsoft server isn’t being hammered), they provide a URL for displaying your current avatar on a web page.
The character creation system is not too bad. In some ways it’s more flexible than Nintendo’s Mii (for example, more hair styles and clothing), but in other ways it’s more limited (less control over facial feature placement).
My avatar looks better on the Xbox than it does here – they should consider sharpening the image. The T-shirt my avatar is wearing, for example, has a thin-lined Xbox symbol.
I think they do a good job of avoiding the Uncanny Valley effect. I look forward to seeing how avatars end up being used in the Xbox world.
In other Xbox-related news, I’m enjoying playing Banjo-Kazooie: Nuts & Bolts with my son. All we have right now is the demo, but it’s great fun for anyone who likes building things. It’s replaced Cloning Clyde as my son’s favorite Xbox game.
19 Nov 2008
I’m a big fan of CPU architectures. Here’s a conversation between David Moon, formerly of Symbolics Lisp Machines, and Cliff Click Jr. of Azul Systems. They discuss details of both the Lisp Machine architecture and Azul’s massively multi-core Java machine.
A Brief Conversation with David Moon
The claim (from both Symbolics and Azul) is that adding just a few instructions to an ordinary RISC instruction set can make GC much faster. With so much code being run in Java these days, I wonder if we’ll see similar types of instructions added to mainstream architectures.
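The conversation doesn’t spell out exactly which instructions those are, but a big chunk of GC overhead on stock hardware is the bookkeeping wrapped around ordinary loads and stores. As a rough illustration (my own sketch, with made-up heap and card sizes – not Symbolics’ or Azul’s actual design), here is a card-marking write barrier in C++; the shift-and-mark sequence after the store is the kind of thing a dedicated instruction could collapse away:

```cpp
// Sketch of a card-marking write barrier: the extra work a GC asks for on
// every pointer store into the heap. Sizes and layout are illustrative only.
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr std::size_t kHeapSize  = 1 << 20;   // 1 MiB toy heap
constexpr std::size_t kCardShift = 9;         // 512-byte cards
constexpr std::size_t kNumCards  = kHeapSize >> kCardShift;

alignas(8) static uint8_t heap[kHeapSize];
static uint8_t card_table[kNumCards];         // one dirty byte per card

// Every pointer store pays for the extra address arithmetic and the extra
// store below; hardware support aims to fold this into the store itself.
inline void write_ref(void** slot, void* value) {
    *slot = value;                                        // the real store
    std::size_t offset =
        reinterpret_cast<uint8_t*>(slot) - heap;          // where in the heap?
    card_table[offset >> kCardShift] = 1;                 // mark card dirty
}

int main() {
    std::memset(card_table, 0, sizeof(card_table));
    void** slot = reinterpret_cast<void**>(heap + 1024);
    write_ref(slot, heap);                                // store + barrier
    return card_table[1024 >> kCardShift] == 1 ? 0 : 1;   // card 2 is dirty
}
```

Azul’s hardware, as I understand it, also helps with read barriers, which are even harder to make cheap in pure software.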
20 Oct 2008
This one can: XKCD: Someone is Wrong on the Internet – this comic’s punchline has saved me at least an hour a week since it came out. That’s more than I’ve saved by learning Python. :-)
30 Sep 2008
The next generation of video game consoles should start in 2011 (give or take a year). It takes about three years to develop a video game console, so work should be ramping up at all three video game manufacturers.
Nintendo’s best course of action is pretty clear: do a slightly souped-up Wii. Perhaps with lots of SD-RAM for downloadable games. Probably with low-end HD resolution graphics. Definitely with an improved controller (for example, with the recent gyroscope slice – the Wii MotionPlus – built in).
Sony and Microsoft have to decide whether to aim high or copy Nintendo.
Today a strong rumor has it that Sony is polling developers to see what they think of a PlayStation 4 that is similar to a cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price):
Sony PS4 Poll
That makes sense, as Sony has had problems this generation due to the high launch cost of the PS3. The drawback of this scheme is that it does nothing to make the PS4 easy to program.
In the last few weeks we’ve seen other rumors that Microsoft is being courted by Intel to put the Larrabee GPU in the next-gen Xbox. I think that if Sony aims low, it’s likely that Microsoft will be forced to aim low too, which would make a Larrabee GPU unlikely. That makes me sad – in my dreams, I’d love to see an Xbox 4 that used a quad-core x86 CPU and a 16-core Larrabee GPU.
Well, the great thing is that we’ll know for sure in about 3 years. :-)
24 Sep 2008
Team Blue Iris (that’s me and my kids!) took 19th place, the top finish for a
Python-based entry! Check out the
ICFP Programming Contest 2008 Video.
The winning team list is given at 41:45.
19 Sep 2008
That’s the question
Dean Kent asks over at Real World Tech’s
forums. I replied briefly there, but thought it would make a good blog post as
well.
I’m an Android developer, so I’m probably biased, but I think most people in
the developed world will have a smart phone eventually, just as most people
already have access to a PC and Internet connectivity.
I think the ratio of phone/PC use will vary greatly depending upon the person’s lifestyle. If you’re a city-dwelling 20-something student, you’re going to be using your mobile phone a lot more than a 70-something suburban grandpa.
This isn’t because the grandpa is old-fashioned; it’s because the two people live in different environments and have different patterns of work and play.
Will people stop using PCs? Of course not. At least, not most people. There
are huge advantages to having a large screen and a decent keyboard and mouse.
But I think people will start to think of their phone and their PC as two
views on the same thing – the Internet. And that will shape what apps they
use on both the phone and the PC.
And this switching between devices will be a strong force pushing people to move their data into the Internet cloud, so that they can access it from whatever device they’re using. This tendency will be strongest with small-sized data that originates in the cloud (like email), but will probably extend to other forms of data over time.
19 Sep 2008
Peter Moore on Xbox
I always liked Peter Moore, and I was sorry when he left Xbox for EA. He’s
given a very good interview on his time at Sega and Microsoft. (He ran the
Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of
insight into the Xbox part of the game industry.
Here he is talking about Rare:
...and you know, Microsoft, we'd had a tough time getting Rare back -
Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark...
but we were trying all kinds of classic Rare stuff and unfortunately I think
the industry had passed Rare by - it's a strong statement but what they were
good at, new consumers didn't care about anymore, and it was tough because
they were trying very hard - Chris and Tim Stamper were still there - to try
and recreate the glory years of Rare, which is the reason Microsoft paid a lot
of money for them and I spent a lot of time getting on a train to Twycross to
meet them. Great people. But their skillsets were from a different time and a
different place and were not applicable in today's market.
16 Sep 2008
Sometimes I need to get a feature into the project I’m working on, but the
developer who owns the feature is too busy to implement it. A trick that seems
to help unblock things is if I hack up an implementation of the feature myself
and work with the owner to refine it.
This is only possible if you have an
engineering culture that allows it, but luckily both Google and Microsoft
cultures allow this, at least at certain times in the product lifecycle when
the tree isn’t frozen.
By implementing the feature myself, I’m (a) reducing risk, since we can see the feature sort of works, (b) making it much easier for the overworked feature owner to help me, as they only have to say “change these 3 things and you’re good to go” rather than having to take the time to educate me on how to implement the feature, and (c) getting a chance to implement the feature exactly the way I want it to work.
Now, I can think of a lot of
situations where this approach won’t work: at the end of the schedule where no
new features are allowed, in projects where the developer is so overloaded
that they can’t spare any cycles to review the code at all, or in projects
where people guard the areas they work on.
But I’ve been surprised how well it
works. And it’s getting easier to do, as distributed version control systems
become more common, and people become more comfortable working with multiple
branches and patches.
15 Sep 2008
Ars Technica published an excellent interview with Tim Sweeney on the
Twilight of the GPU.
As the architect of the Unreal Engine series of game engines, Tim has almost certainly been briefed on all the upcoming GPUs. Curiously, he only talks about NVIDIA and Larrabee. Is ATI out of the race?
Anyway, Tim says a lot of sensible things:
- Graphics APIs at the DX/OpenGL level are much less important than they were in the fixed-function-GPU era.
- DX9 was the last graphics API that really mattered. Now it’s time to go back to software rasterization (a toy sketch of what that looks like follows this list).
- It’s OK if NVIDIA’s next-gen GPU still has fixed-function hardware, as long as it doesn’t get in the way of pure-software rendering. (Fixed-function hardware will be useful for getting high performance on legacy games and benchmarks.)
- Next-gen NVIDIA will be more Larrabee-like than current-gen NVIDIA.
- The next-gen programming language ought to be vectorized C++, for both CPU and GPU.
- Possibly the GPU and CPU will be the same chip on next-gen consoles.
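To make “go back to software rasterization” concrete, here’s a toy half-space triangle fill in plain scalar C++. This is my own illustration, not anything from Unreal or from the interview – just the shape of the inner loop that a vectorizing compiler (or a many-core chip) would be pointed at:

```cpp
// Toy half-space triangle rasterizer: a stand-in for the kind of software
// pipeline being discussed, not code from any real engine.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); the sign tells which side of
// edge a->b the point c is on.
inline float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void fill_triangle(std::vector<uint32_t>& fb, int w, int h,
                   Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color) {
    // Clamp the triangle's bounding box to the framebuffer.
    int minx = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int maxx = std::min(w - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int miny = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int maxy = std::min(h - 1, (int)std::max({v0.y, v1.y, v2.y}));
    for (int y = miny; y <= maxy; ++y) {
        for (int x = minx; x <= maxx; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            // Pixel center is inside if it sits on the same side of all
            // three edges (counter-clockwise winding assumed).
            if (edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0) {
                fb[y * w + x] = color;  // "shading" is just a store here
            }
        }
    }
}

int main() {
    std::vector<uint32_t> fb(64 * 64, 0);
    fill_triangle(fb, 64, 64, {8, 8}, {56, 16}, {24, 56}, 0xFFFFFFFFu);
    return 0;
}
```

The “vectorized C++” bullet is basically about running that inner loop on 4 or 16 pixels at a time, and replacing the constant-color store with an arbitrary shading function – with no driver or shader compiler in between.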
12 Aug 2008
The OpenGL 3.0 spec was released this week, just in time for SIGGRAPH. It turns out to be a fairly minor update to OpenGL, little more than a codification of existing vendor extensions. While this disappoints OpenGL fans, it’s probably the right thing to do. Standards tend to be best when they codify existing practice, rather than when they try to invent new ideas.
What about the future?
The fundamental forces are:
- GPUs and CPUs are going to be on the same die.
- GPUs are becoming general-purpose CPUs.
- CPUs are going massively multicore.
Once a GPU is a general-purpose CPU, there’s little reason to provide a standard all-encompassing rendering API. It’s simpler and easier to give an OS, a C compiler, and a reference rendering pipeline, and then let the application writer customize the pipeline for their application.
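As a rough sketch of what “let the application writer customize the pipeline” could mean (my own toy example, not a proposal for any real API), picture the reference pipeline exposing a stage as an ordinary callable that the application swaps out:

```cpp
// Toy "reference pipeline" where the fragment stage is just a callable the
// application provides. Purely illustrative; no real API is described here.
#include <cstdint>
#include <functional>
#include <vector>

struct Fragment { int x, y; float depth; };

using FragmentShader = std::function<uint32_t(const Fragment&)>;

// The pipeline walks fragments and calls whatever shader was plugged in.
// A real pipeline would add clipping, depth test, blending, and so on.
void shade_fragments(const std::vector<Fragment>& frags,
                     std::vector<uint32_t>& fb, int width,
                     const FragmentShader& shader) {
    for (const Fragment& f : frags) {
        fb[f.y * width + f.x] = shader(f);
    }
}

int main() {
    std::vector<Fragment> frags = {{1, 1, 0.5f}, {2, 1, 0.25f}};
    std::vector<uint32_t> fb(4 * 4, 0);
    // The application's custom stage: encode depth as a gray level.
    shade_fragments(frags, fb, 4, [](const Fragment& f) {
        uint8_t g = static_cast<uint8_t>(f.depth * 255.0f);
        return uint32_t(0xFF000000u | (g << 16) | (g << 8) | g);
    });
    return 0;
}
```

On a CPU-based renderer, swapping a stage is just passing a different function – there’s no driver or shader compiler standing between the application and the pipeline.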
The big unknown is whether any of the next-generation video game consoles will adopt the CPU-based-graphics approach. CPU-based graphics may not be cost-competitive soon enough for this round of consoles.
Sony’s a likely candidate – it’s a natural extension of the current Cell-based PS3. Microsoft would be very comfortable with a Larrabee-based solution, given their OS expertise and their long and profitable relationship with Intel. Nintendo’s pretty unlikely, as they have made an unbelievable amount of money betting on low-end graphics. (But they’d switch to CPU-based graphics in an instant if it provided cost savings. And for what it’s worth, the N64 did have DSP-based graphics.)