Peter Moore on Xbox

I always liked Peter Moore, and I was sorry when he left Xbox for EA. He’s given a very good interview on his time at Sega and Microsoft. (He ran the Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of insight into the Xbox part of the game industry.

Here he is talking about Rare:

...and you know, Microsoft, we'd had a tough time getting Rare back -
Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark...
but we were trying all kinds of classic Rare stuff and unfortunately I think
the industry had passed Rare by - it's a strong statement but what they were
good at, new consumers didn't care about anymore, and it was tough because
they were trying very hard - Chris and Tim Stamper were still there - to try
and recreate the glory years of Rare, which is the reason Microsoft paid a lot
of money for them and I spent a lot of time getting on a train to Twycross to
meet them. Great people. But their skillsets were from a different time and a
different place and were not applicable in today's market.

Pro tip: Try writing it yourself

Sometimes I need to get a feature into the project I’m working on, but the developer who owns the feature is too busy to implement it. A trick that seems to help unblock things is to hack up an implementation of the feature myself and then work with the owner to refine it.

This is only possible if you have an engineering culture that allows it, but luckily both Google and Microsoft cultures allow this, at least at certain times in the product lifecycle when the tree isn’t frozen.

By implementing the feature myself, I’m (a) reducing risk, as we can see the feature sort of works, (b) making it much easier for the overworked feature owner to help me, as they only have to say “change these 3 things and you’re good to go”, rather than having to take the time to educate me on how to implement the feature, (c) getting a chance to implement the feature exactly the way I want it to work.

Now, I can think of a lot of situations where this approach won’t work: at the end of the schedule where no new features are allowed, in projects where the developer is so overloaded that they can’t spare any cycles to review the code at all, or in projects where people guard the areas they work on.

But I’ve been surprised how well it works. And it’s getting easier to do, as distributed version control systems become more common, and people become more comfortable working with multiple branches and patches.

Tim Sweeney on the Twilight of the GPU

Ars Technica published an excellent interview with Tim Sweeney on the Twilight of the GPU. As the architect of the Unreal Engine series of game engines, Tim has almost certainly been disclosed on all the upcoming GPUs. Curiously he only talks about NVIDIA and Larrabee. Is ATI out of the race?

Anyway, Tim says a lot of sensible things:

  • Graphics APIs at the DX/OpenGL level are much less important than they were in the fixed-function-GPU era.
  • DX9 was the last graphics API that really mattered. Now it’s time to go back to software rasterization.
  • It’s OK if NVIDIA’s next-gen GPU still has fixed-function hardware, as long as it doesn’t get in the way of pure-software rendering. (Fixed-function hardware will be useful for getting high performance on legacy games and benchmarks.)
  • Next-gen NVIDIA will be more Larrabee-like than current-gen NVIDIA.
  • The next-gen programming language ought to be vectorized C++, for both CPU and GPU.
  • Possibly the GPU and CPU will be the same chip on next-gen consoles.

The Future of Graphics APIs

The OpenGL 3.0 spec was released this week, just in time for SIGGRAPH. It turns out to be a fairly minor update to OpenGL, little more than a codification of existing vendor extensions. While this disappoints OpenGL fans, it’s probably the right thing to do. Standards tend to be best when they codify existing practice, rather than when they try to invent new ideas.

What about the future?

The fundamental forces are:

  • GPUs and CPUs are going to be on the same die.
  • GPUs are becoming general-purpose CPUs.
  • CPUs are going massively multicore.

Once a GPU is a general-purpose CPU, there’s little reason to provide a standard all-encompassing rendering API. It’s simpler and easier to provide an OS, a C compiler, and a reference rendering pipeline, and then let the application writer customize the pipeline for their application.
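
To make that concrete, here’s a toy sketch (Python with made-up names, standing in for whatever vectorized systems language actually gets used, and not any real API) of what a reference pipeline that applications customize might look like:

    # Toy sketch only: a "reference pipeline" whose stages are ordinary functions
    # that an application can replace.  Names and structure are hypothetical.
    WIDTH, HEIGHT = 320, 240

    def reference_fragment(x, y, depth):
        """Default fragment stage: shade by depth."""
        shade = max(0, 255 - int(depth))
        return (shade, shade, shade)

    def render(points, fragment_stage=reference_fragment):
        """Reference pipeline: plot points through whatever fragment stage is
        supplied.  (A real pipeline would rasterize triangles; plotting points
        keeps the sketch short.)"""
        framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
        for (x, y, z) in points:
            if 0 <= x < WIDTH and 0 <= y < HEIGHT:
                framebuffer[y][x] = fragment_stage(x, y, z)
        return framebuffer

    # An application customizes the pipeline by swapping stages,
    # not by calling a fixed rendering API:
    def toon_fragment(x, y, depth):
        return (255, 0, 0) if depth < 100 else (64, 0, 0)

    image = render([(10, 10, 5), (20, 10, 150)], fragment_stage=toon_fragment)

The point isn’t the deliberately silly rasterizer; it’s that once rendering is just software, “the API” is whatever interface the platform’s reference pipeline and the application agree on.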

The big unknown is whether any of the next-generation video game consoles will adopt the CPU-based-graphics approach. CPU-based graphics may not be cost competitive soon enough for the next generation of game consoles.

Sony’s a likely candidate - it’s a natural extension to the current Cell-based PS3. Microsoft would be very comfortable with a Larrabee-based solution, given their OS expertise and their long and profitable relationship with Intel. Nintendo’s pretty unlikely, as they have made an unbelievable amount of money betting on low-end graphics. (But they’d switch to CPU-based graphics in an instant if it provided cost savings. And for what it’s worth, the N64 did have DSP-based graphics.)

Mac Mini HTPC take two

I just bought another Mac Mini to use as an HTPC (home theater PC). I tried this a year ago, but was not happy with the results. But since then I’ve become more comfortable with using OS X, so today I thought I’d try again.

Here are my quick setup notes:

  • I’m using a Mac Mini 1.83 Core 2 Duo with 1 GB of RAM. This is the cheapest Mac Mini that Apple currently sells. I thought about getting an AppleTV, but I think the Mini is easier to modify, has more CPU power for advanced codecs, and can be used as a kid’s computer in the future, if I don’t like using it as an HTPC. I also have dreams of writing a game for the Mini that uses Wiimotes. I think this would be easier to do on a Mini than an AppleTV, even though the AppleTV has a better GPU.
  • I’m using “Plex” for viewing problem movies, and I think it may end up becoming my main movie viewing program. It’s the OS X version of Xbox Media Center. (Which is a semi-legal program for a hacked original Xbox. The Plex version is legal because it doesn’t use the unlicensed Xbox code.) The UI is a little rough. (Actually, by Mac standards it’s very rough. :-) ) Plex has very good codec support and lots of options for playing buggy or non-standard video files.
  • I connected my Mac Mini to my media file server using gigabit ethernet. This made Front Row feel much snappier than when I was using an 802.11g wireless connection.
  • I installed the Perian plugin, which adds support for many popular codecs to QuickTime and Front Row.
  • I set up my Mac Mini to automatically mount my file server share at startup and when coming out of sleep. Detailed instructions here. Synopsis: Create an AppleScript utility to mount the share, put the utility in your Login Items so that it’s run automatically at startup, and finally use SleepWatcher to run the script after a sleep. (A rough sketch of such a utility appears after this list.)
  • I added Front Row to my Login Items (Apple Menu:System Preferences…:Accounts:Login Items) to start Front Row at startup.
  • I administer my Mini HTPC using VNC from a second computer. I don’t have a keyboard or mouse hooked up to the HTPC normally. I disabled the Bluetooth keyboard detection dialog using Apple Menu:System Preferences…:Bluetooth:Advanced…, then unchecking “Open Bluetooth Setup Assistant at startup when no input device present”.
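
For what it’s worth, here’s a rough sketch of that mount utility as a Python script (the share URL is a placeholder; a plain AppleScript saved from Script Editor works just as well):

    #!/usr/bin/env python
    # Rough sketch of the "mount my file server share" utility.  The share URL
    # is a placeholder - substitute your own server and share name.  Save it as
    # a Login Item and have SleepWatcher run it after the machine wakes.
    import subprocess

    SHARE_URL = "afp://fileserver.local/media"  # placeholder

    def mount_share(url=SHARE_URL):
        # "mount volume" is plain AppleScript; osascript runs it from a script.
        subprocess.call(["osascript", "-e", 'mount volume "%s"' % url])

    if __name__ == "__main__":
        mount_share()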

Things I’m still working on:

  • No DVR-MS codec support in Perian, and therefore none in Front Row. I have to use my trusty Xbox 360 or VLC to view my Microsoft Windows Media Center recordings.

ICFP 2008 post-mortem

This year’s ICFP contest was a traditional one: write some code that solves an optimization problem with finite resources, debug it using sample data sets, and send it in; the judging team then runs it on secret (presumably more difficult) data sets to see whose program does best. The problem was to create a control program for an idealized Martian rover that had to drive to home base while avoiding craters, boulders, and moving enemies.

I read the problem description at noon on Friday, but didn’t have time to work on the contest until Saturday morning.

The first task was to choose a language. On the one hand, the strict time limit argued for an easy-to-hack “batteries included” language like Python, for which libraries, IDEs, and a cross-platform runtime were all readily available. On the other hand, the requirement for high performance and the ability to correctly handle unknown inputs argued for a type-safe, compiled language like ML or O’Caml.

I spent half an hour trying to set up an O’Caml IDE under Eclipse, but unfortunately was not able to figure out how to get the debugger to work. Then I switched to Python and the PyDev IDE, and never ran into a problem that made me consider switching back.

I realize that the resulting program is much slower than a compiled O’Caml would be, and it probably has lurking bugs that the O’Caml type system would have found at compile time. But it’s the best I could do in the limited time available for the contest.

It was very pleasant to develop in Python. It’s got a very nice syntax. I was never at a loss for how to proceed. Either it “just worked”, or else a quick web search would immediately find a good answer. (Thanks Google!)

The main drawback was that the Python compiler doesn’t catch simple mistakes like uninitialized variables until run time. Fortunately that wasn’t too much of a problem for this contest, as the compile-edit-debug cycle was only a few seconds long, and it only took a few minutes to run a whole test suite.
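
For example (a contrived snippet, not code from my actual entry), Python only notices a misspelled variable when the offending branch executes:

    # Contrived example: the misspelled name is only caught when the
    # "crashed" branch actually runs, not at compile time.
    def clamp_speed(speed, max_speed, crashed):
        if crashed:
            speed = new_sped          # misspelled "new_speed": NameError at run time
        return min(speed, max_speed)

    clamp_speed(10.0, 20.0, False)    # works fine
    clamp_speed(10.0, 20.0, True)     # raises NameError - but only now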

The initial development went smoothly: I wrote the code to connect to the simulation server and read simulation data from the server. Then I created classes for the various types of objects in the world, plus a class to model the world as a whole. I then wrote a method that examined the current state of the world and decided what the Martian rover should do next. Finally, I wrote a method that compared the current and desired Martian rover control state, and sent commands back to the simulation server to update the Martian rover control state.
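
In outline, the structure was something like the sketch below. (This is a reconstruction; the port number, message framing, and command strings are placeholders, not the real contest protocol or my actual entry.)

    # Rough outline of the program's structure.  The port number, message
    # framing, and command strings are placeholders.
    import socket

    class World(object):
        """Models craters, boulders, enemies, home base, and the rover itself."""
        def __init__(self):
            self.objects = []

        def update(self, message):
            """Parse one telemetry message from the server and update object state."""
            pass

        def decide(self):
            """Examine the current state of the world and pick the desired controls."""
            return ("accelerate", "straight")

    def run(host="localhost", port=17676):          # placeholder port
        sock = socket.create_connection((host, port))
        world = World()
        current = None
        for message in sock.makefile():             # placeholder framing: one message per line
            world.update(message)
            desired = world.decide()
            if desired != current:
                # Send only the commands needed to get from the current
                # control state to the desired one.
                sock.sendall((" ".join(desired) + ";").encode("ascii"))
                current = desired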

The meat of the problem is deciding how to move the rover. The iterative development cycle helped a lot here – by being able to run early tests, I quickly discovered that the presence of fast-moving enemies put a premium on high speed movement. You couldn’t cautiously analyze the world and proceed safely, you had to drive for the goal as quickly as possible.

My initial approach was to search for the closest object in the path of the rover, and steer around it. This worked, but had issues in complicated environments. Then I switched to an idea from Craig Reynolds’ Not Bumping Into Things paper: I rendered the known world into a 1D frame buffer, and examined the buffer to decide which way to go. That worked well enough that I used it in my submission.
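
Very roughly, the idea works like the sketch below (my paraphrase, with made-up field names and tuning constants rather than my contest code): render each obstacle’s angular extent and distance into a one-dimensional array of bins, then steer toward the clear bin nearest the goal’s bearing.

    import math

    SAFE_DISTANCE = 30.0   # placeholder tuning constant

    def steer(rover, goal, obstacles, bins=64, fov=math.pi):
        """1-D frame buffer steering, loosely after Reynolds' 'Not Bumping Into
        Things'.  Assumes rover and goal have .x/.y (rover also .heading, in
        radians) and obstacles have .x/.y/.radius.  Returns a heading offset,
        in radians, relative to the rover's current heading."""
        goal_bearing = math.atan2(goal.y - rover.y, goal.x - rover.x) - rover.heading
        depth = [float("inf")] * bins   # nearest obstacle "rendered" into each angular bin

        for obs in obstacles:
            dx, dy = obs.x - rover.x, obs.y - rover.y
            dist = math.hypot(dx, dy)
            bearing = math.atan2(dy, dx) - rover.heading
            half = math.atan2(obs.radius, dist)     # angular half-width of the obstacle
            lo = int((bearing - half + fov / 2) / fov * bins)
            hi = int((bearing + half + fov / 2) / fov * bins)
            for b in range(max(lo, 0), min(hi, bins - 1) + 1):
                depth[b] = min(depth[b], dist)

        # Prefer the bin closest to the goal bearing among those that look clear.
        goal_bin = int((goal_bearing + fov / 2) / fov * bins)
        best = min(range(bins),
                   key=lambda b: (depth[b] < SAFE_DISTANCE, abs(b - goal_bin)))
        return (best + 0.5) / bins * fov - fov / 2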

I spent about fourteen hours on the contest: Two hours reading the problem and getting the IDE together, ten hours over two days programming and debugging, and about two hours testing the program on the Knoppix environment and figuring out how to package and submit the results.

Things I wish I had had time to do

  • My rover is tuned for the sample data sets. The organizers promised to use significantly different data sets in the real competition. Unfortunately, I didn’t have time to adapt the program to these other data sets, beyond some trivial adjustments based on potential differences in top speed or sensor range.
  • I model the world at discrete times, and don’t account for the paths objects take over time. I can get away with this because I’m typically traveling directly towards or away from important obstacles, so their relative motion is low. But I would have trouble navigating through whirling rings of Martians.
  • I don’t take any advantage of knowledge of the world outside the current set of sensor data. The game explicitly allows you to remember the world state from run to run during a trial. This could be a big win for path planning when approaching the goal during the second or later trials.
  • I don’t do any sort of global path planning. A simple maze around the goal would completely flummox my rover.

I very much enjoyed the contest this year. I look forward to finding out how well I did, as well as reading the winning programs. The contest results will be announced at the actual ICFP conference in late September.

Getting ready for ICFP 2008

The rules for this year’s ICFP contest have just been posted. Although the actual problem won’t be posted until Friday July 11th, the rules themselves are interesting:

  • Your code will be run in a Linux environment with 1 GB of RAM, 4 GB of swap, and a single-processor 2 GHz 32-bit AMD x86 CPU, with no access to the Internet.
  • You have to submit source code.
  • You may optionally submit an executable as well (useful if, for example, you use a language that isn’t on the short list of languages provided by the contest organizers).
  • Teams are limited to five members or fewer.

I have mixed feelings about these rules. The good news is:

  • It should be possible for most interested parties to recreate the contest environment by using the contest-provided Live CD. A computer capable of running the contest could be purchased new for around $350.
  • It seems that the focus will be on writing code in the language of the contestant’s choice, rather than in the language of the contest organizers’ choice. This wasn’t the case in some previous years’ contests.
  • It provides a level playing field in terms of CPU resources available to contestants.
  • It ensures that the winning entry is documented. (A few years ago the contest winner never wrote up their entry, which was quite disappointing.)

The bad news is:

  • It penalizes contestants with low Internet bandwidth. The Live CD image is not yet available for download, and I anticipate some contestants will have difficulty downloading it in time to compete in the contest.
  • It penalizes non-Linux users, who are forced to use an alien development environment and operating system.
  • It penalizes languages too obscure to make the contest organizer’s list. That goes against the whole “prove your language is the best” premise of the contest.
  • The target system is 32-bit and single-core, which is at least five years out of date, and does little to advance the state of the art. This penalizes many languages and runtimes. For example, OCaml has a harsh implementation limit on array size in 32-bit runtimes that is relaxed in 64-bit runtimes.
  • It seems as if there won’t be any during-the-contest scoring system, so we will have to wait until the ICFP conference to find out how the contestants did.

Still, I’m hopeful that the contest itself will be enjoyable. I look forward to reading the actual programming problem on Friday.

Network Attached Storage Notes

I just bought a Buffalo LinkStation Mini 500GB Network Attached Storage (NAS) device. It’s a very small fanless Linux file server with two 250 GB hard drives, 128 MB of RAM, a 266 MHz ARM CPU, and a gigabit Ethernet port.

My reasons for buying a NAS

  • I wanted to provide a reliable backup of family photos and documents, and I was getting tired of burning CDs and DVDs.
  • I wanted a small Linux-based server I could play with.

My reason for buying the LinkStation Mini

  • It’s fanless.
  • It’s tiny.
  • Buffalo has a good reputation for NAS quality.
  • There is a decent sized Buffalo NAS hacking community.
  • Fry’s had it on sale. :-)

Setting it up

Setup was very easy – I unpacked the box, plugged everything in, and installed a CD of utility programs. The main feature of the utility program is that it helps find the IP address of the NAS. All the actual administration of the NAS is done via a Web UI.

To RAID or not to RAID

The LinkStation Mini comes with two identical drives, initially set up as RAID0. This means that files are split across the two drives, so if either drive fails all your files are lost. Using the Web UI, I reformatted the drives to RAID1, which means that each file is stored on both drives. This halves the amount of disk space available to store files, but I thought the added security was worth it. The switch was fairly easy to do, but it erases all the data on the drives and takes about 80 minutes.

RAID1 is more secure than RAID0, but it is not perfectly secure. There’s still a chance of losing all the data if the controller goes bad, or if the whole device is stolen or destroyed. So for extra security I will probably end up buying a second NAS (or a USB 2.0 drive) and setting up an automatic backup from the first device to the second. The Mini can be set to perform periodic automatic backups to a second LinkStation for this very reason. Once I do that, I’ll probably reformat my NAS’s drives back to RAID0 to enjoy the extra storage space.

Getting Access to Linux root

There is a program called acp_commander that enables you to remotely log in as root on any Buffalo LinkStation Mini on the same LAN as your PC. Once logged in as root you can read and write any file on the NAS. You can use this power to install software and reconfigure your system.

Yes, this is a security hole – it means anyone with access to your local LAN can bypass all the security on the file server. Very advanced users can patch the security hole by following the instructions at this web forum. I think it’s extremely negligent of Buffalo to configure their NAS devices in this way. Imagine the uproar if Microsoft shipped a product with this kind of security hole.

Playing with Linux

Once I obtained root access to the Mini I was able to install additional software. I installed the Optware package system, which gives access to a wide variety of precompiled utility programs, as well as tools for writing new programs. (Yeah, I know, it’s crazy to run software on a file server that’s supposed to be backing up important data. Right now I’m just having fun playing with my new toy, but eventually I’m going to have to get serious about making it work reliably.)

From looking at what other people have done, I am thinking that I might set up a small web server, or perhaps a media server for streaming music and video.

Thinking of the Future

There’s an active LinkStation hacking community at buffalo.nas-central.org. Unfortunately the Linkstation Mini is so new that nobody in the NAS hacking community knows much about it. Right now it seems to be similar to a LinkStation Pro Duo, but only experience will show if this is true. The Mini comes with a USB 2.0 port, to which you can attach a printer and/or a hard disk. While the hard disk isn’t part of a RAID array, it could be used to back up the RAID array, providing an additional layer of security.

Alternatives

There must be 20 different NAS vendors, although many of them just repackage reference designs made by the SOC (System on a Chip) vendors. Marvell seems to be the dominant player in the NAS SOC market these days. A good overview of available NAS products can be found by visiting Small Net Builder. Some brands, like Revolution, QNAP, and Synology, cater to enthusiasts who are interested in using the NAS as a mini Linux server. The only things that stopped me from buying those brands are that (a) they’re more expensive, and (b) they don’t currently have fanless RAID1 form factors. The Revolution brand is actually owned by Buffalo; they add hardware daughter boards with extra flash chips and I/O connectors to standard Buffalo products. It’s possible that there will be a Revolution “Kuro box” version of the Mini some day.

The venerable (out-of-production, but still available in stores) Linksys NSLU2 is fanless and cheap, and very popular with hackers, but you need to add your own hard drives, and I don’t think its networking performance is very good compared to more recent products.

Another approach is to use a PC, either running a regular OS like Windows XP, Windows Server, OSX or Linux, or a special-purpose stripped-down NAS version. I do have an old PC currently running Windows Media Center that I could use for this purpose, but I didn’t seriously consider this option because I wanted something small, low-power, and quiet. (And I was looking for an excuse to learn how to administer a Linux system anyway.)

Apple makes NAS products too. Their Airport Extreme and Time Capsule products both look OK, but neither one supports RAID1. And there doesn’t seem to be a software hacking community around these products. There is a software hacking community around the AppleTV, which you could turn into a NAS by adding some USB 2.0 hard drives.

Some routers (like the Apple Airport Extreme mentioned above) have USB 2.0 ports, but I think they avoid advertising themselves as NAS products because they don’t have enough RAM (or CPU) to act as both routers and file servers. As a result, these products tend to have relatively low NAS performance.

Some people would laugh at a NAS that has only 240GB of storage. They are more interested in the high-end NASes that use four or five 1TB disks. When formatted in a RAID5 configuration those NASes have 3TB or more of usable space. But they also cost $600 plus the cost of the drives ($160 each), which is much more than I wanted to spend. Besides the cost, another drawback is that these products are nearly as large and noisy as regular PCs. Still, if you’ve got a lot of video (or are anticipating generating a lot of video in the future), the larger NASes are the way to go.

A NAS in Every Garage?

While my friends and I are all setting up file servers to store our families’ videotapes, I’m not sure if the product category will become universally popular. I think it will depend on how people’s secure storage needs evolve. We’re already seeing small files (email, photos, low-res videos) being stored in the cloud. It seems like it’s just a matter of time before everything is. Unless people suddenly come up with compelling new applications that use dramatically more data (holographic TV perhaps?), it seems likely that people’s personal storage needs are going to top out in the next decade. If disk capacity and network bandwidth keep growing at a rapid pace for several decades beyond that, then it seems inevitable that cloud storage will eventually take over.

In any event, by the time this happens my little Mini will long since have been retired. (I remember paying $100 apiece for 1GB Jaz disks back in the day. It’s amazing how far and how fast storage prices have fallen.) If all goes well, my family’s photos and other important documents will still be around!

I saw the original Spacewar! on a PDP-1 today

I went to the Computer History Museum today. I saw the Visual Storage exhibit, which is a collection of famous computers; the Babbage Difference Engine, which is a very elaborate reproduction of a never-actually-built Victorian-era mechanical calculator; and the PDP-1 demo. This last demo was very special to me, because I finally got to play the original Spacewar! game, and meet and chat with Steve Russell, the main developer. (Perusing Wikipedia, I now realize that Steve was also an early Lisp hacker. D’oh! I was going to ask a question about Lisp on the PDP-1, but I got distracted.)

There’s a Java Spacewar! emulator, but it doesn’t properly convey the look of the PDP-1’s radar-scope-based display. The scope displays individual dots, 20,000 of them per second. Each dot starts as a fuzzy bright blue-white spot, but then fades quickly to a dim yellow-green spot, which takes another 10 seconds to fade to black. This means that dim yellow-green trails form behind the ships as they fly around. These trails add a lot to the game’s distinctive look. (In addition, due to time multiplexing, the stars of the starfield are much dimmer than the space ships or the sun.) The fuzziness of the dots means that the spaceships look much smoother on the PDP-1 scope than they do in the Java simulator.
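
If I were writing an emulator, one crude way to approximate that look (with guessed time constants and colors, not how the Java emulator or the museum’s restoration actually works) would be a two-stage phosphor decay:

    # Toy sketch of two-stage phosphor decay for a PDP-1-style scope display.
    # The colors and time constants are guesses chosen to match the
    # description above, not measured values.
    FLASH_COLOR = (170, 200, 255)     # bright blue-white initial strike
    GLOW_COLOR = (120, 150, 60)       # dim yellow-green afterglow

    def phosphor_color(age_seconds):
        """Color of a plotted dot as a function of time since it was struck."""
        if age_seconds < 0.05:                   # brief bright flash
            return FLASH_COLOR
        # Long afterglow: fade to black over roughly 10 seconds.
        fade = max(0.0, 1.0 - age_seconds / 10.0)
        return tuple(int(c * fade) for c in GLOW_COLOR)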

According to Steve Russell and the other docents, the Java version also runs faster than a real PDP-1.

I also got to see several other cool PDP-1 hacks, including the original Munching Squares, four-voice square-wave computer-synthesized music, and the famed Minskytron. The author of the music synthesis program, Peter Samson, was present, and explained how he carefully patched into four of the console lights to make a four-voice D/A converter to get music out of the machine.

They keep all the hacks loaded into the PDP-1’s core memory at the same time, and just use the front panel to decide which one to jump to. The core memory is non-volatile. The PDP-1 even booted in a few seconds – just the time it took the power supply to come up to speed.

The PDP-1 demo is given twice a month, on the second and fourth Saturdays. I highly recommend it for adults and children over 12. (It’s 45 minutes long, so younger kids might get bored.)

Thoughts on In-Flight Entertainment systems

I recently spent a lot of time using two different in-flight entertainment systems: one on Eva Air, and another on Virgin Atlantic. For people who haven’t flown recently, I should explain that these systems consist of a touch-sensitive TV monitor combined with a remote-control-sized controller. The systems typically offer music, TV, movies, flight status, and video games.

I believe both systems were based on Linux. I saw the Eva system crash and reboot, and the Virgin system has a number of Linux freeware games.

The GUI frameworks were pretty weak – both systems made poor use of the touch screen and had obvious graphical polish issues. The Virgin system had a much higher resolution display with a 16:9 aspect ratio. I expect it was running on slightly higher-spec hardware.

Both systems worked pretty well for playing music and watching TV or movies. The media controls were pretty limited - neither system allowed seeking to a particular point in a movie, or even reliably fast forwarding. Both systems provided enough media to entertain your average customer for the duration of the flight.

One cool feature of the EVA system was a backwards-compatibility mode with the older “channel” music system from the 70’s. The controller came with the traditional “channel” UI. If you used the channel buttons, the system simply acted like the old system, cycling through a limited number of preset channels. One nice difference from the old channel system is that these new virtual channels always started from the beginning when you switched to them, rather than making you join the looping presentation at whatever point it happened to be at.

The game portions of both systems were very weak. None of the games were very good. Perhaps the best was a port of shareware Doom on the Virgin Atlantic system. (I used an in-flight entertainment system on Singapore Air many years ago that had Nintendo games. It was more fun.)

The Virgin system allowed you to order food and drink, which was nice. Both systems had credit card swipers, and offered some for-pay options.

Both systems allowed you to make in-flight phone calls. EVA allowed you to send SMS messages and emails. Both systems allowed you to create “play lists” of music tracks that would then be played while you did other tasks. I enjoyed this, but I suspect it’s not used much, as anyone with the sophistication and interest to use this UI would probably have their own MP3 player.

The Virgin system had two other very nice features: 1) laptop power in most seats (although only two plugs for every three seats), and 2) Ethernet connections. Unfortunately the ethernet connections were not yet active.

Virgin allowed you to “chat” between seats. I didn’t try this, but it seems like it would be fun for some situations (e.g. when a high school class takes a trip.) I expect that the Doom game can play between seats as well, but didn’t investigate.

Virgin also had normal mini stereo headphone plugs, which I think was a good idea. Eva had two kinds of audio plug, but neither one was the normal mini stereo plug. I tried using Skullcandy noise-canceling headphones with the Virgin system, and while they helped suppress the airplane noise, they didn’t eliminate it completely.

It will be interesting to see how these systems evolve over time. I think that once in-plane Internet access becomes practical, people will prefer surfing the Internet to using most of the other services (besides movie watching). And with the in-seat power, I think many people will prefer using their own laptop to the in-seat system. On the other hand, the in-seat system is very space efficient. There’s a chance people will use it as a remote display for their own laptop or mobile phone, which could then remain tucked away in the carry-on luggage.