30 Sep 2008
The next generation of video game consoles should arrive in 2011, give or take a year. It takes about three years to develop a video game console, so work should be ramping up at all three console manufacturers.
Nintendo’s best course of action is pretty clear: do a slightly souped-up Wii. Perhaps with lots of SDRAM for downloadable games. Probably with low-end HD-resolution graphics. Definitely with an improved controller (for example, with the recent gyroscope add-on built in).
Sony and Microsoft have to decide whether to aim high or copy Nintendo.
Today a strong rumor has it that Sony is polling
developers to see what they think of a PlayStation 4 that is similar to a
cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price).
Sony PS4 Poll
That makes sense, as Sony has had problems this generation due to the high launch cost of the PS3. The
drawback of this scheme is that it does nothing to make the PS4 easy to
program.
In the last few weeks we’ve seen other rumors that Microsoft’s being courted by Intel to put the Larrabee GPU in the next-gen Xbox. I think that if Sony aims low, it’s likely that Microsoft will be forced to aim low too, which would make a Larrabee GPU unlikely. That makes me sad: in my dreams, I’d love to see an Xbox 4 that used a quad-core x86 CPU and a 16-core Larrabee GPU.
Well, the great thing is that we’ll know for sure in about three years. :-)
24 Sep 2008
Team Blue Iris (that’s me and my kids!) took 19th place, the top finish for a
Python-based entry! Check out the
ICFP Programming Contest 2008 Video.
The winning team list is given at 41:45.
19 Sep 2008
That’s the question
Dean Kent asks over at Real World Tech’s
forums. I replied briefly there, but thought it would make a good blog post as
well.
I’m an Android developer, so I’m probably biased, but I think most people in
the developed world will have a smart phone eventually, just as most people
already have access to a PC and Internet connectivity.
I think the ratio of phone to PC use will vary greatly depending upon the person’s lifestyle. If you’re a city-dwelling 20-something student, you’re going to use your mobile phone a lot more than a 70-something suburban grandpa will.
This isn’t because the grandpa is old-fashioned; it’s because the two people live in different environments and have different patterns of work and play.
Will people stop using PCs? Of course not. At least, not most people. There
are huge advantages to having a large screen and a decent keyboard and mouse.
But I think people will start to think of their phone and their PC as two
views on the same thing – the Internet. And that will shape what apps they
use on both the phone and the PC.
This switching between devices will be a strong force pushing people to move their data into the Internet cloud, so that they can access it from whatever device they’re using. This tendency will
be strongest with small-sized data that originates in the cloud (like email),
but will probably extend to other forms of data over time.
19 Sep 2008
Peter Moore on Xbox
I always liked Peter Moore, and I was sorry when he left Xbox for EA. He’s
given a very good interview on his time at Sega and Microsoft. (He ran the
Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of
insight into the Xbox part of the game industry.
Here he is talking about Rare:
...and you know, Microsoft, we'd had a tough time getting Rare back -
Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark...
but we were trying all kinds of classic Rare stuff and unfortunately I think
the industry had passed Rare by - it's a strong statement but what they were
good at, new consumers didn't care about anymore, and it was tough because
they were trying very hard - Chris and Tim Stamper were still there - to try
and recreate the glory years of Rare, which is the reason Microsoft paid a lot
of money for them and I spent a lot of time getting on a train to Twycross to
meet them. Great people. But their skillsets were from a different time and a
different place and were not applicable in today's market.
16 Sep 2008
Sometimes I need to get a feature into the project I’m working on, but the developer who owns the feature is too busy to implement it. A trick that seems to help unblock things is to hack up an implementation of the feature myself and work with the owner to refine it.
This is only possible if you have an engineering culture that allows it, but luckily both Google’s and Microsoft’s cultures do, at least at those times in the product lifecycle when the tree isn’t frozen.
By implementing the feature myself, I’m (a) reducing risk, since we can see that the feature more or less works, (b) making it much easier for the overworked feature owner to help me, as they only have to say “change these 3 things and you’re good to go” rather than take the time to educate me on how to implement the feature, and (c) getting a chance to implement the feature exactly the way I want it to work.
Now, I can think of a lot of
situations where this approach won’t work: at the end of the schedule where no
new features are allowed, in projects where the developer is so overloaded
that they can’t spare any cycles to review the code at all, or in projects
where people guard the areas they work on.
But I’ve been surprised how well it
works. And it’s getting easier to do, as distributed version control systems
become more common, and people become more comfortable working with multiple
branches and patches.
15 Sep 2008
Ars Technica published an excellent interview with Tim Sweeney on the
Twilight of the GPU.
As the architect of the Unreal Engine series of game engines, Tim has almost certainly been briefed on all the upcoming GPUs. Curiously, he talks only about NVIDIA and Larrabee. Is ATI out of the race?
Anyway, Tim says a lot of sensible things:
- Graphics APIs at the DX/OpenGL level are much less important than they were in the fixed-function-GPU era.
- DX9 was the last graphics API that really mattered. Now it’s time to go back to software rasterization. (A sketch of what that means in practice follows this list.)
- It’s OK if NVIDIA’s next-gen GPU still has fixed-function hardware, as long as it doesn’t get in the way of pure-software rendering. (Fixed-function hardware will be useful for getting high performance on legacy games and benchmarks.)
- Next-gen NVIDIA will be more Larrabee-like than current-gen NVIDIA.
- The next-gen programming language ought to be vectorized C++ for both CPU and GPU.
- Possibly the GPU and CPU will be the same chip on next-gen consoles.
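To make “go back to software rasterization” concrete, here’s a minimal sketch of a half-space triangle rasterizer in Python. The names and the ASCII frame buffer are my own illustration, not anything from the interview; the point is just that the inner loop a GPU hard-wires today is ordinary software.

```python
# Minimal half-space triangle rasterizer: the loop that fixed-function GPU
# hardware implements, written as plain software. Illustrative only.

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); the sign tells which
    # side of edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(fb, width, height, v0, v1, v2, color):
    # Walk the triangle's bounding box, clamped to the screen.
    min_x = max(0, int(min(v0[0], v1[0], v2[0])))
    max_x = min(width - 1, int(max(v0[0], v1[0], v2[0])))
    min_y = max(0, int(min(v0[1], v1[1], v2[1])))
    max_y = min(height - 1, int(max(v0[1], v1[1], v2[1])))
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            # A pixel is inside if it's on the same side of all three edges.
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                fb[y * width + x] = color

width, height = 16, 8
fb = [0] * (width * height)
rasterize_triangle(fb, width, height, (1, 1), (14, 2), (4, 7), 1)
for y in range(height):
    print(''.join('#' if fb[y * width + x] else '.' for x in range(width)))
```

An engine that owns this loop can reorder, specialize, or replace it at will, which is exactly the freedom Tim is arguing for.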
12 Aug 2008
The OpenGL 3.0 spec was released this week, just in time for SIGGRAPH. It turns out to be a fairly minor update to OpenGL, little more than a codification of existing vendor extensions. While this disappoints OpenGL fans, it’s probably the right thing to do. Standards tend to be best when they codify existing practice rather than when they try to invent new ideas.
What about the future?
The fundamental forces are:
- GPUs and CPUs are going to be on the same die.
- GPUs are becoming general-purpose CPUs.
- CPUs are going massively multicore.
Once a GPU is a general-purpose CPU, there’s little reason to provide a standard all-encompassing rendering API. It’s simpler and easier to provide an OS, a C compiler, and a reference rendering pipeline, and then let application writers customize the pipeline for their applications.
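Here’s a toy sketch of what that might look like, with the pipeline as plain code that the application rewires. All the stage names and interfaces are hypothetical:

```python
# Toy illustration of shipping a reference pipeline instead of a fixed API:
# each stage is an ordinary function, and the application replaces whichever
# stages it wants. Every name here is hypothetical.

def cull(scene):
    return scene                    # reference stage: keep everything

def rasterize(scene):
    # Stand-in for a real rasterizer: emit one fragment per object.
    return [(obj['x'], obj['y']) for obj in scene]

def shade(fragments):
    return [(x, y, 'gray') for (x, y) in fragments]   # flat shading

class Pipeline:
    def __init__(self):
        # The "reference rendering pipeline": a default stage ordering.
        self.stages = {'cull': cull, 'rasterize': rasterize, 'shade': shade}

    def render(self, scene):
        visible = self.stages['cull'](scene)
        fragments = self.stages['rasterize'](visible)
        return self.stages['shade'](fragments)

# An application swaps in a custom stage without waiting for a standards
# body to bless it.
p = Pipeline()
p.stages['shade'] = lambda frags: [(x, y, 'painterly') for (x, y) in frags]
print(p.render([{'x': 1, 'y': 2}]))
```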
The big unknown is whether any of the next-generation video game consoles will adopt the CPU-based-graphics approach; CPU-based graphics may not be cost-competitive soon enough for that generation of hardware.
Sony’s a
likely candidate - it’s a natural extension to the current Cell-based PS3.
Microsoft would be very comfortable with a Larrabee-based solution, given their OS expertise and their long and profitable relationship with Intel.
Nintendo’s pretty unlikely, as they have made an unbelievable amount of money
betting on low-end graphics. (But they’d switch to CPU-based graphics in an
instant if it provided cost savings. And for what it’s worth, the N64 did have
DSP-based graphics.)
27 Jul 2008
I just bought another Mac Mini to use as a HTPC (home theater PC). I tried
this a year ago, but was not happy with the results. But since then I’ve
become more comfortable with using OS X, so today I thought I’d try again.
Here are my quick setup notes:
- I’m using a Mac Mini 1.83 Core 2 Duo with 1 GB of RAM. This is the cheapest Mac Mini that Apple currently sells. I thought about getting an AppleTV, but I think the Mini is easier to modify, has more CPU power for advanced codecs, and can be used as a kid’s computer in the future, if I don’t like using it as an HTPC. I also have dreams of writing a game for the Mini that uses Wiimotes. I think this would be easier to do on a Mini than an AppleTV, even though the AppleTV has a better GPU.
- I’m using “Plex” for viewing problem movies, and I think it may end up becoming my main movie-viewing program. It’s the OS X version of Xbox Media Center. (Which is a semi-legal program for a hacked original Xbox. The Plex version is legal because it doesn’t use the unlicensed Xbox code.) The UI is a little rough. (Actually, by Mac standards it’s very rough. :-) ) Plex has very good codec support and lots of options for playing buggy or non-standard video files.
- I connected my Mac Mini to my media file server using gigabit ethernet. This made Front Row feel much snappier than when I was using an 802.11g wireless connection.
- I installed the Perian plugin, which adds support for many popular codecs to QuickTime and Front Row.
- I set up my Mac Mini to automatically mount my file server share at startup and when coming out of sleep. Detailed instructions here. Synopsis: create an AppleScript utility to mount the share, put the utility in your Login Items so that it’s run automatically at startup, and finally use SleepWatcher to run the script after a sleep. (A minimal sketch of the mount step appears after this list.)
- I added FrontRow to my Login Items (Apple Menu:System Preferences…:Accounts:Login Items) to start Front Row at startup.
- I administer my Mini HTPC using VNC from a second computer; I don’t normally have a keyboard or mouse hooked up to the HTPC. I disabled the Bluetooth keyboard detection dialog by going to Apple Menu:System Preferences…:Bluetooth:Advanced… and unchecking “Open Bluetooth Setup Assistant at startup when no input device present”.
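For what it’s worth, here’s a minimal sketch of the mount step done from Python rather than as a standalone AppleScript; the server and share names are made-up placeholders:

```python
# Minimal sketch of the mount-at-startup helper. 'mount volume' is the
# AppleScript command the utility uses; osascript lets us run it from any
# script. The server and share names below are hypothetical.
import subprocess

SHARE = 'afp://mediaserver.local/Movies'  # hypothetical file server share

def mount_share():
    subprocess.call(['osascript', '-e', 'mount volume "%s"' % SHARE])

if __name__ == '__main__':
    mount_share()
```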
Things I’m still working on:
- No DVR-MS codec support in Perian, and therefore none in Front Row. I have to use my trusty Xbox 360 or VLC to view my Microsoft Windows Media Center recordings.
14 Jul 2008
This year’s ICFP
contest was a traditional one: Write some code
that solves an optimization problem with finite resources, debug it using
sample data sets, send it in, and the judging team will run it on secret
(presumably more difficult) data sets, and see whose program does the best.
The problem was to create a control program for an idealized Martian rover
that had to drive to home base while avoiding craters, boulders, and moving
enemies.
I read the problem description at noon on Friday, but didn’t have
time to work on the contest until Saturday morning.
The first task was to choose a language. On the one hand, the strict time limit argued for an easy-to-hack “batteries included” language like Python, for which libraries, IDEs, and a cross-platform runtime were all readily available. On the other hand, the requirement for high performance and the ability to correctly handle unknown inputs argued for a type-safe, compiled language like ML or O’Caml.
I spent half an hour trying to set up an O’Caml IDE under Eclipse, but unfortunately was not able to figure out how to get the debugger to work. Then I switched to Python and the PyDev IDE, and never ran into a problem that made me consider switching back.
I realize that the resulting program is much slower than a compiled O’Caml program would be, and it probably has lurking bugs that the O’Caml type system would have found at compile time. But it’s the best I could do in the limited time available for the contest.
It was very pleasant to develop in Python. It’s got a very nice
syntax. I was never at a loss for how to proceed. Either it “just worked”, or
else a quick web search would immediately find a good answer. (Thanks Google!)
The main drawback was that the Python compiler doesn’t catch simple mistakes
like uninitialized variables until run time. Fortunately that wasn’t too much
of a problem for this contest, as the compile-edit-debug cycle was only a few
seconds long, and it only took a few minutes to run a whole test suite.
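A trivial made-up example of the kind of mistake I mean:

```python
# Python compiles this function without complaint; the misspelled name is
# only caught when the buggy branch actually executes.

def steer(obstacle_count):
    if obstacle_count > 10:
        return obstacle_cuont * 2   # typo: 'obstacle_cuont' is undefined
    return obstacle_count

steer(3)    # fine: the buggy branch never runs
steer(11)   # NameError, but only at run time
```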
The initial development went smoothly: first I wrote the code to connect to the simulation server and read simulation data from it. Then I created
classes for the various types of objects in the world, plus a class to model
the world as a whole. I then wrote a method that examined the current state of
the world and decided what the Martian rover should do next. Finally I wrote a
method that compared the current and desired Martian rover control state, and
sent commands back to the simulation server to update the Martian rover
control state.
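In outline, the program looked something like this sketch (the class names and message handling are illustrative placeholders, not the actual contest protocol):

```python
# Skeleton of the rover controller's main loop. Names and message formats
# are placeholders, not the real contest wire protocol.
import socket

class World(object):
    """Model of craters, boulders, enemies, and the rover itself."""
    def update(self, telemetry):
        pass                    # parse one telemetry message, update objects

    def decide(self):
        return 'accelerate'     # placeholder for the real steering logic

def run(host, port):
    conn = socket.create_connection((host, port))
    stream = conn.makefile()
    world = World()
    current = None
    for message in stream:      # assume one telemetry message per line
        world.update(message)
        desired = world.decide()
        if desired != current:  # only send commands when the state changes
            conn.sendall(desired.encode())
            current = desired
```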
The meat of the problem is deciding how to move the rover. The iterative development cycle helped a lot here: by running early tests, I quickly discovered that the presence of fast-moving enemies put a premium on high-speed movement. You couldn’t cautiously analyze the world and proceed safely; you had to drive for the goal as quickly as possible.
My initial approach was to search for the closest object in the path of the rover and steer around it. This worked, but had issues in complicated environments. Then I switched to an idea from Craig Reynolds’ Not Bumping Into Things paper: I rendered the known world into a 1D frame buffer, and examined the buffer to decide which way to go. That worked well enough that I used it in my submission.
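In sketch form, the idea works like this (a simplified reconstruction, not my actual contest code; treat the constants as placeholders):

```python
# Render obstacles into a 1D "frame buffer" of headings, like a depth buffer
# that is one pixel tall, then pick the clear heading nearest the goal.
import math

BUCKETS = 64             # angular resolution of the 1D frame buffer
FIELD_OF_VIEW = math.pi  # consider headings within 90 degrees of the goal

def steer(rover, goal, obstacles):
    buffer = [float('inf')] * BUCKETS
    goal_heading = math.atan2(goal[1] - rover[1], goal[0] - rover[0])
    for (ox, oy, radius) in obstacles:
        dx, dy = ox - rover[0], oy - rover[1]
        dist = math.hypot(dx, dy)
        heading = math.atan2(dy, dx) - goal_heading
        heading = (heading + math.pi) % (2 * math.pi) - math.pi  # wrap
        # Angular half-width the obstacle subtends, as seen from the rover.
        half = math.asin(min(1.0, radius / max(dist, radius)))
        lo = int((heading - half + FIELD_OF_VIEW / 2) / FIELD_OF_VIEW * BUCKETS)
        hi = int((heading + half + FIELD_OF_VIEW / 2) / FIELD_OF_VIEW * BUCKETS)
        for i in range(max(0, lo), min(BUCKETS, hi + 1)):
            buffer[i] = min(buffer[i], dist)   # "render" the obstacle
    # Pick the unblocked bucket closest to driving straight at the goal.
    middle = BUCKETS // 2
    best = min(range(BUCKETS),
               key=lambda i: (buffer[i] < float('inf'), abs(i - middle)))
    return goal_heading + (best - middle) * FIELD_OF_VIEW / BUCKETS

# Example: a crater dead ahead makes the rover veer slightly off-axis.
print(steer((0.0, 0.0), (100.0, 0.0), [(20.0, 0.0, 5.0)]))
```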
I spent about fourteen hours on the contest: Two hours reading the problem and
getting the IDE together, ten hours over two days programming and debugging,
and about two hours testing the program on the Knoppix environment and
figuring out how to package and submit the results.
Things I wish I had had time to do
- My rover is tuned for the sample data sets. The organizers promised to use significantly different data sets in the real competition. Unfortunately, I didn’t have time to adapt the program to these other data sets, beyond some trivial adjustments based on potential differences in top speed or sensor range.
- I model the world at discrete time steps, and don’t account for the paths objects take over time. I can get away with this because I’m typically traveling directly towards or away from important obstacles, so their relative motion is low. But I would have trouble navigating through whirling rings of Martians.
- I don’t take any advantage of knowledge of the world outside the current set of sensor data. The game explicitly allows you to remember the world state from run to run during a trial. This could be a big win for path planning when approaching the goal during the second or later trials.
- I don’t do any sort of global path planning. A simple maze around the goal would completely flummox my rover.
I very much enjoyed the contest this year. I look forward to finding out how
well I did, as well as reading the winning programs. The contest results will
be announced at the actual ICFP conference
in late September.
09 Jul 2008
The rules for this year’s ICFP contest have
just been posted. Although the actual problem won’t be posted until Friday
July 11th, the rules themselves are interesting:
- Your code will be run on a 2 GHz single-processor 32-bit AMD x86 Linux environment with 1 GB of RAM, 4 GB of swap, and no access to the Internet.
- You have to submit source code.
- You may optionally submit an executable as well (useful if, for example, you use a language that isn’t on the short list of languages provided by the contest organizers).
- Teams are limited to 5 members or fewer.
I have mixed feelings about these rules. The good news is:
- It should be possible for most interested parties to recreate the contest environment by using the contest-provided Live CD. A computer capable of running the contest could be purchased new for around $350.
- It seems that the focus will be on writing code in the language of the contestant’s choice, rather than in the language of the contest organizers’ choice. This wasn’t the case in some previous years’ contests.
- It provides a level playing field in terms of CPU resources available to contestants.
- It ensures that the winning entry is documented. (A few years ago the contest winner never wrote up their entry, which was quite disappointing.)
The bad news is:
- It penalizes contestants with low Internet bandwidth. The Live CD image is not yet available for download, and I anticipate some contestants will have difficulty downloading it in time to compete in the contest.
- It penalizes non-Linux users, who are forced to use an alien development environment and operating system.
- It penalizes languages too obscure to make the contest organizers’ list. That goes against the whole “prove your language is the best” premise of the contest.
- The target system is 32-bit and single-core, which is at least five years out of date and does little to advance the state of the art. This penalizes many languages and runtimes. For example, OCaml has a harsh implementation limit on array size in 32-bit runtimes that is relaxed in 64-bit runtimes.
- It seems as if there won’t be any during-the-contest scoring system, so we will have to wait until the ICFP conference to find out how the contestants did.
Still, I’m hopeful that the contest itself will be enjoyable. I look forward to reading the actual programming problem on Friday.