Michael Abrash on Quake: "Finish the product and you’ll be a hero." (bluesnews.com)
177 points by gregschlom on June 22, 2011 | 99 comments


Fun to read the comments. Thanks to the people who said nice things. My daughter is 25 now, btw :)

YMMV, but I've never done anything worthwhile without going through a difficult period near the end. It's just the nature of the beast, and it was as true on Windows NT as on Quake.

I never said anything about death marches, or at least I don't think I did. I worked hard and steadily throughout Quake, and John worked incredibly hard and steadily always, but that didn't change significantly toward the end. It's just more difficult at the end, because fixing bugs and improving performance and putting in less interesting features (like menus) isn't as much fun as experimenting with new renderers, because there's pressure to get it exactly right, and because you've just seen too much of the same game for too long. It's not hellish or life-destroying, it's just hard. I know some very talented people who just aren't willing to push themselves through it, and that's their right, but they also don't get finished projects out the door, and in the end, that's what it's all about.

As for companies that routinely put their people through death marches - I have nothing good to say about that. John and I volunteered to do what it took to ship Quake. It makes a huge difference when it's internally motivated rather than demanded by management.


Thanks for this and all your writing - the Zen of Code Optimization was a hugely inspirational book for me when I was younger, and is one of the main reasons I'm a programmer (even if it has instilled a slight paranoia about performance in everything I do).

I'd love to see more writing from you!


Um, holy shit!

Part of the reason I'm a programmer today is I read Zen of Graphics Programming when I was 16 and I just had to get my own bsp tree algorithm working. Thank you.


I got the hero speech too, once. If anyone ever mentions the word "heroic" again and there isn't a burning building involved, I will start looking for new employment immediately. It seems that in our industry it is universally a code word for "We're about to exploit you because the project is understaffed and under budgeted for time and that is exactly as we planned it so you'd better cowboy up."

Maybe it is different if you're writing Quake, but I guarantee you the 43rd best selling game that year also had programmers "encouraged onwards" by tales of the glory that awaited after the death march.


3 guys wrote the world's first game that rendered a truly three-dimensional world, and they did it in software, and it ran at 20+ frames per second on my Pentium 75. It's not quite correct to talk about Quake the way you would talk about almost any other software project.

BTW, if you're curious, Abrash goes into greater depth at the end of his "Black Book of Graphics Programming", which I think is now freely available online.


Abrash's Graphics Programming Black Book was my favorite programming book in high school. I read it everywhere I went, even when camping. I highly recommend it to anyone who wants to learn how the lowest-level graphics primitives were drawn before GPUs. The first half of the book also makes for an excellent guide to optimizing algorithms by reducing overhead and converting to assembly language.

My only complaint about the book was that the publisher (Coriolis?) included a notice at the front saying that no code samples, algorithms, etc. could be used in any project without their express approval, but most of the algorithms and code were previously published elsewhere.


I used to write graphics demos in Pascal, in 286 mode, with hand-coded assembler procedures for the graphics parts, using Mode X for buffering the animations: 320x240 resolution, 256 colors, and only one virtual screen (though more were possible).

Doing 3D was pretty cool too - I couldn't do fancy stuff, but I could display and rotate 3D objects, drawing only visible polygons, draw textures and even have simple lighting.
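
Roughly the idea, in modern terms (a toy Python sketch of my own, nothing like the old Pascal/assembler): rotate the object's vertices each frame and skip any polygon whose screen-space winding says it faces away from you.

    import math

    def rotate_y(p, angle):
        # Rotate a point (x, y, z) around the Y axis.
        x, y, z = p
        c, s = math.cos(angle), math.sin(angle)
        return (c * x + s * z, y, -s * x + c * z)

    def is_front_facing(a, b, c):
        # Backface cull: keep the polygon only if its projected winding is
        # counter-clockwise (z component of the 2-D cross product > 0).
        ux, uy = b[0] - a[0], b[1] - a[1]
        vx, vy = c[0] - a[0], c[1] - a[1]
        return ux * vy - uy * vx > 0

    # One triangle, rotated and tested each frame.
    triangle = [(-1.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]
    for frame in range(4):
        rotated = [rotate_y(p, frame * 0.2) for p in triangle]
        if is_front_facing(*rotated):
            print("frame", frame, "draw", [(round(x, 2), round(y, 2)) for x, y, _ in rotated])
        else:
            print("frame", frame, "culled")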

To me it seems weird how kids work these days. I don't know what a pixel shader is for instance.


> To me it seems weird how kids work these days. I don't know what a pixel shader is for instance.

Thanks for bringing up some nostalgia. A pixel shader is roughly analogous to a hand-optimized texturing routine, except it runs on the GPU.
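
Roughly what that analogy looks like in code (a toy Python sketch, not real shader code): on the CPU you write the whole per-pixel loop yourself; a pixel shader is essentially just the body of that loop, run by the GPU for every pixel it rasterizes.

    WIDTH, HEIGHT = 8, 4

    def sample_texture(u, v):
        # Stand-in for a texture lookup: a 2x2 checkerboard.
        return 255 if (int(u * 2) + int(v * 2)) % 2 == 0 else 0

    def shade_pixel(u, v):
        # This per-pixel function is the part a pixel shader replaces.
        return sample_texture(u, v)

    framebuffer = [
        [shade_pixel(x / WIDTH, y / HEIGHT) for x in range(WIDTH)]
        for y in range(HEIGHT)
    ]
    for row in framebuffer:
        print("".join("#" if value else "." for value in row))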


Don't forget Wolfenstein 3D. While not directly germane to the topic at hand, this makes Quake second, and probably makes it more likely that its development was handled like any other software project.


I think you misunderstood the parent post. Wolf3D was not really 3D, nor was Doom after it, and many other FPSs that came around at that time. Quake was the first really 3D FPS (e.g. you could walk under a bridge without "tricks").

I don't know what development process they had at ID at the time, but given that it was a Carmack project, I very much doubt it was anything like "any other" software project.


Descent came out over a year before Quake and was full 3D (except the annoying hostage sprites). It may have taken advantage of being set in a mine to have twisty passages that obstructed views, but it did have its fair share of wide open spaces (most reactor rooms, for instance). I think the reason Quake gets all the credit is due to its popularity, not for being the first.


Descent used affine texture projection, which makes texels slide around as your viewpoint changes, unless extremely small polygons are used (they weren't). It also had extremely simple lighting (intensities set at vertices, interpolated in a simple gradient in screen space like the texture projection). Neither of these things means you're wrong in calling it "full 3D", of course. I don't recall if the topology allowed room-internal objects other than the robots; I think it did.
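
To make the sliding-texel point concrete, here's a toy numeric sketch (my own illustration, not Descent's code): affine mapping interpolates the texture coordinate u linearly in screen space, while perspective-correct mapping interpolates u/z and 1/z and divides per pixel.

    def affine_u(t, u0, u1):
        # Linear interpolation of u in screen space (what affine mapping does).
        return u0 + t * (u1 - u0)

    def perspective_u(t, u0, z0, u1, z1):
        # Interpolate u/z and 1/z, then divide -- the perspective-correct way.
        uoz = (u0 / z0) + t * (u1 / z1 - u0 / z0)
        ooz = (1 / z0) + t * (1 / z1 - 1 / z0)
        return uoz / ooz

    # One edge of a polygon: near end at z=1, far end at z=4, u going from 0 to 1.
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"t={t:.2f}  affine u={affine_u(t, 0.0, 1.0):.3f}  "
              f"correct u={perspective_u(t, 0.0, 1.0, 1.0, 4.0):.3f}")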


There was also Driller in 1987 which had full 3D representations though you could rarely take advantage of it. It was based on the Freescape technology which I played with in 3D Construction Kit in 1991 and that was definitely full 3D in terms of movement (though not a game in itself).


Was this similar to Oblivion? Where you had to plant some gas valves to relieve the pressure on the planet? Awesome, frustrating little game :)


> Quake was the first really 3D FPS (e.g. you could walk under a bridge without "tricks").

Sometimes the tricks were cool.

In Bungie's Marathon, for instance, the map format allowed you to have multiple rooms occupying the same physical 3-D space. Imagine a spiral staircase that doesn't go up or down: you just run around and around this long corridor, like a Möbius-loop manifold. Quite fun in multiplayer maps with radar. Also worth noting that Marathon was the first game that allowed rocket jumping, due to being able to look up and down slightly.

Obviously Marathon had nothing on Quake, but it was really a great multiplayer FPS for its day.


I'm pretty sure a few of the LookingGlass games came out before Quake and would qualify as 3D.

Ultima Underworld came out in 1992: http://en.wikipedia.org/wiki/Ultima_Underworld


I also remember Mechwarrior 2 came out before Quake and was mostly 3D-ish. Granted most of the levels were flat but you could jump (with jump jets) and climb on top of some terrain.


I think games are fundamentally different than web apps in a couple of ways.

(1) There tends to be times where having the product ready is much more profitable than other times (holiday season)

(2) It's a lot harder to rev games.

The end result is you often end up on death march slogs. And unfortunately, given the state of project management, they're surprisingly hard to avoid.

Part of the reason is human nature. If we honestly schedule everything out and see it will take 12 months, it will take longer, because at month 8 it will look like we have three weeks of work left -- to everyone but the project manager and a senior dev or two, who realize that there is still an honest-to-goodness four months of work left. On paper it doesn't look like four months of work, but it is (especially because a lot of it is fixing bugs in things that you don't know yet are broken).

So what happens is the pace of dev starts to slow down. People start taking vacation, or three-day weekends. Before you know it, it's month 11 and you're further behind than you were at month 8! WTF!?

You can't just ship the game at this quality. Characters disappearing, levels not loading every other time you play them, etc... A web app you could, and just fix it over the next three months. With a game, while you can service it, it's a lot harder, and day 1 reviews are super important -- a 70% Metacritic score means no one will even care that you serviced it.

I'm not saying death marches are a good thing. But I've seen that in certain industries they do seem harder to avoid.


I know where you're coming from, and what you're saying is currently true, I do however fundamentally believe that the death march is not a given. It is a choice that the games industry has made because it:

a) Doesn't know any better

b) Doesn't care enough to change (there are always hungry young fresh graduates to replace those old guys who keep complaining about not seeing their children grow up)

It's partly a process thing, and it's partly a technology thing. I'm researching game testing [1]. Perhaps if we can get game testing to be better, we might end up with better processes, such as how TDD helps other software domains. Same goes for other technologies leading to other better processes.

Lots of really complicated software gets made every day, but only the games industry really uses terms like "crunch time" and "hero" and "oh thanks for finishing Red Dead Redemption after months of crunch time, here's your pink slip [2]." It has to mature, at some point, I just don't know when that will be.

[1] http://www.zenetproject.com [2] http://www.next-gen.biz/news/rockstar-san-diego-confirms-lay...


"I'm not saying death marches are a good thing. But I've seen that in certain industries they do seem harder to avoid."

How hard they are to avoid depends entirely on how good the management is at coming up with excuses for them and at using carrots and sticks, and on how gullible and desperate their developers are.


I'm not sure that's true. But if you look at certain industries, like web startups, death marches are almost the complete culture.

Try to be employee #1 of a YC startup while being upfront that you work 40 hours a week, period. Not going to happen in many cases. The death march starts on day one in this industry.


"in many cases" being the operative term there.

How many YC startups would kill to get Steve Jobs to work even 1 hour a week for them as employee #1?

It's all about perceived value and negotiation. If your company values you enough, they will not make you work unreasonable hours. And if a non-workaholic employee values himself enough, he will not agree to work unreasonable hours.

Death marches are definitely a pathology in the computer industry, as they are in the medical field, where residents are forced to work insanely long shifts without sleep. In both cases, people's mental and physical well being is put at risk, the chance of burnout increases, and the quality of the results suffers.

It does not have to be this way. What will it take for management to stop understaffing and overworking their employees? And how long will employees consent to being worked in to an early grave?

Companies pulling this sort of crap is what really makes me wish the computer field had some effective unions that could collectively bargain for reasonable hours for reasonable pay. I know I'd join in a heartbeat.


> It does not have to be this way.

It doesn't, but it will be. :-)

I worked on a bug, almost full time, for about a month, about a decade ago. A single bug. I hadn't anticipated the bug, and didn't plan for it. I've seen teams spend months trying to hit perf targets. It's the type of thing that isn't uncommon in our industry. I'm not sure how you plan for it.

> What will it take for management to stop understaffing and overworking their employees?

Many times it's not understaffing. Again, I've seen teams struggle to hit perf targets. If you doubled the size of the team, it wouldn't make their lives much easier. It may even make it worse.

Our industry is one where you are fundamentally solving new problems (because if someone else solved it, you'd use their code).

Now maybe you're saying we need to toss schedules altogether. Things are done when they're done -- and no crunch time. You do it with 40 hours/week for however long it takes. I'd love to see that dream happen too, but I honestly don't think it ever will.


I get your point, but I think that's really not what Michael Abrash meant here when referring to "heroes". His point is simply to show how hard it can be to finish a product, no matter how well or ill managed the team is.


> Exciting as it was, we hit the same rough patches toward the end as any other software project. I am quite serious when I say that a month before shipping, we were sick to death of working on Quake.

Finishing projects should not require this and anyone who tells you different is your enemy. I mean that.


> Finishing projects should not require this and anyone who tells you different is your enemy. I mean that.

Enemy? How melodramatic.

Finishing complex multi-year retail projects is never easy. Someone will have to fix those nasty, hard to reproduce crash bugs, find ways to free up another few megs of memory, smooth out those frame rate spikes, etc. It doesn't matter how many politically correct agile practices you follow; you will always be left with critical last minute tasks that are anything but fun and relaxing to complete. You can choose to work on the kinds of projects where this doesn't crop up as much, but games with blockbuster aspirations were never that kind of project.


My theory is that un-released software is like physical inventory. Keeping inventory is very expensive because the money spent could have been earning interest elsewhere and product sitting in the warehouse becomes gradually obsolete. As you accumulate a great deal of software inventory towards the end of a big project the cost of keeping the inventory, and therefore the pressure to complete the project, naturally increases. Shipping small increments frequently and not keeping much software inventory is the most efficient process if the product design/market allows. (I'm not saying this is always possible or would have been appropriate in this case, just making a comparison.)


In the past I would agree with you but nowadays I've come to the opposite conclusion.

Shipping crappy, trivial, mundane, or even good software shouldn't require this.

But shipping something that is timeless is absolutely mentally, physically, and emotionally draining.

I think around eleven men died building the Sydney Harbor Bridge. I'm sure many men will die in the next few decades as we start to commercially explore space. Few programmers will die shipping a breakthrough product, but to say it won't require a heroic effort I think is to say it's not a breakthrough product.


The production of Apocalypse Now was by any account HELL. Far worse than any game development example I can think of. (Well, DNF jokes spring to mind, but at least they essentially built multiple games there. Apocalypse Now wasn't released until two years after it finished shooting.)

It didn't chase Francis Ford Coppola away from film. It was an isolated example of how bad things can get, a living worst-case-scenario. It wasn't "Tuesday at the office."

Are people dying during game production? No. But they're not in experimental spacecraft, and they're not balancing on I-beams hundreds of feet in the air, either. Given the sleep deprivation you hear about in game production, if they were in either of those scenarios? Yeah. Games would probably take more than eleven lives a year.

/edit: Also, do you consider the eleven men who died building the Sydney Harbor Bridge heroes? Or guys that got stuck with shitty, dangerous jobs in search of a paycheck, for whose deaths I feel more pity than respect? I'm more with the latter. Astronauts, well, that I can kinda get as a "hero". The gang that made Guitar Hero: Van Halen? Not on that level.


> Also, do you consider the eleven men who died during the Sydney Harbor Bridge heroes?

Kind of off topic but to answer your question:

If you define hero as someone who puts forth a heroic effort, then sure I would. Does them dying make them heroes? No. Maybe someone was hungover and slipped. But the type of work they were doing I respect.

If you define hero as a role model, I have no idea of course (I didn't know any of them).


"Few programmers will die shipping a breakthrough product,"

Not right away, but abusing your health takes its toll.


Extremely difficult technologies can be developed safely and correctly. The key example is the US Nuclear Naval Reactor Program.

No accidents. No injuries. No deaths. Thousands of years of reactor-time.


I don't think that's true. Shipping is really hard - basically the end result of execution. A perfectly run project can still suck to work on at the end. The problem is that for most people the fun part of working on a project is the discovery phase, when you're building something new. The last 20% of squashing bugs and going from "the code is working" to "the code is shippable" is generally not a good time.


I speak from a point of ignorance, I've never worked in the field. That said, if a project requires months of unplanned crunch time, it wasn't perfectly run. And I don't think anyone would say that crunch isn't commonplace.

(At least, I think patio11 is referring to the culture of crunch time. Correct me if I'm wrong!) You probably know that, but, if not, (or for those who don't) here's some reading:

http://en.wikipedia.org/wiki/EA_Spouse

http://arstechnica.com/gaming/news/2011/05/the-death-march-t...


I read those articles when they were written, and have even read the book Death March. Those situations are obviously not how things should be.

Patio11 responded to Abrash saying near the end of the project there were rough patches and that eventually they all were tired of working on it. No matter how well run a project might be, there will always be rough patches and you'll likely be sick of working on it at the end. Quake is a great game, but imagine running the same level, or even worse, the same portion of a level, all day long looking for obscure bugs. Even if you planned for this time, doing it for weeks is going to suck at some point. It's just the nature of working on a long-term project and taking it from working to shippable.


This article resonated with me and it has nothing to do with poor management or exploitation or enemies: for some of us starting projects (personal or otherwise) is always a lot easier and more fun than finishing them. Finishing projects can just be hard. And this isn't just within the realm of programming, although there I find it especially true.


I couldn't disagree more. Doing the grungy work of getting a big project ship-ready doesn't automatically mean there's a death march underway. It means you're a team that's disciplined enough, grown-up enough, to focus on the good of the project, as opposed to what's fun.

There's no reason that kind of work can't also be rewarding. Running a marathon is hard work, not pure fun, yet people find the endeavor rewarding.


They may be your enemy in the sense of an employer / worker bargaining situation, but they may also be right.

Even in a well-run project, there are many factors that are outside the control or reasonable knowledge of the project workers (the unknown unknowns).


Okay, this is a bit sad. How would you respond to your employer if he used this "heroic talk"?


While yes, Abrash's point was that you are a hero because you finished your project on-time, patio11 has a great point.

There have been many a horror story posted on here of long, long, LONG hours, from Rockstar (lost the link) and many others, all because of a looming deadline, with no benefits for working those extra hours.


As far as I could see the article had nothing to do with exploiting programmers with visions of glory, but rather urging developers to finish what they start - a surprisingly difficult goal in our field.


This is true, but there's the question of what you do once you're in the situation.

I've been on fucked projects and projects that shipped industry-leading products. Success doesn't seem to correlate with a well-planned project. Instead it seems to correlate with how bloody-minded the participants are about shipping and moving forward from where they are.

This annoys me, as I used to be quite the process/method evangelist, and of course cynical project managers exploit this.


You don't get to be Carmack or Abrash by working 8-5. They didn't have a slave driver boss standing there making them work; they did it because it's what they love doing.


A corollary of your point is that you don't get to be them by working 7-7 six days a week. You actually have to love what you're doing, which can be hard for the best of us after a six month death march.


I think you're thinking in the wrong context. You should be applying this to your own startup or weekend project.


While I have worked at a heroics shop and learned not to do that again, I got a completely different read of that. It's a PG "real artists ship" speech, not a to-the-death or for-queen-and-country speech.


I liked the discussion of transmitting game state for network play. Quick summary: Doom sent differences of state, which had to be received and therefore acknowledged. Quake sent the whole game state each time, but compressed, so it wouldn't matter if a transmission was lost.

Neither approach seems optimal from an information theory point of view. The Doom approach needs feedback, but communicating at the optimal rate of a noisy channel doesn't need feedback. The Quake approach sends the game state redundantly, but across time it simply repeats information, and repetition codes aren't optimal.

It could turn out that in this application the Quake approach is best, because for latency reasons it might not be possible to send long enough blocks for Shannon's theory to apply. The Quake approach is also nice and simple. However, here's the approach I have in mind: send the stream of deltas protected by a code that can cope with some of them being erased, such as a Digital Fountain Code [1]. Each message would contain deltas stored slightly redundantly and XORed with previous deltas. If we have all previous deltas then we are set, otherwise we'll have to wait for another packet or two before we can infer the deltas, but we don't need to bother telling the receiver that we lost a packet.

[1] http://en.wikipedia.org/wiki/Fountain_code — but a much better resource is chapter 50 of http://www.inference.phy.cam.ac.uk/mackay/itila/book.html
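
To give a flavor of what I mean, here's a toy sketch of a crude XOR-parity scheme in that spirit (much simpler than a real fountain code, and obviously not Quake's actual netcode): each packet carries the current delta plus the XOR of the two previous deltas, so a single lost packet can often be reconstructed from a later one without any acknowledgement.

    DELTA_LEN = 8   # pad every delta to a fixed length so XOR lines up

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_packet(seq, deltas):
        # Packet = (sequence number, this frame's delta, parity of the two previous deltas).
        current = deltas[seq]
        prev1 = deltas[seq - 1] if seq >= 1 else bytes(DELTA_LEN)
        prev2 = deltas[seq - 2] if seq >= 2 else bytes(DELTA_LEN)
        return seq, current, xor_bytes(prev1, prev2)

    def recover_missing(missing_seq, later_packet, known_deltas):
        # Reconstruct delta[missing_seq] from the parity carried by packet missing_seq + 1.
        seq, _, parity = later_packet
        assert seq == missing_seq + 1
        return xor_bytes(parity, known_deltas[missing_seq - 1])

    deltas = [bytes([i]) * DELTA_LEN for i in range(4)]   # pretend game-state deltas
    packets = [make_packet(i, deltas) for i in range(4)]

    # Suppose packet 2 is lost: recover delta 2 from packet 3's parity and delta 1.
    recovered = recover_missing(2, packets[3], {1: deltas[1]})
    print(recovered == deltas[2])   # True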

EDIT: In response to replies by VMG and JoachimSchipper: I didn't mean to suggest they should have done anything differently. It worked well enough, and they got it out the door; agreed! I just think it's an interesting puzzle to think about.


Note that fountain codes are very, very patent-protected by Luby et al. Five years ago I came across literature on rateless codes, which are digital fountain codes that allow practically infinite encoding, or a practically infinite stream of data from which the original could be reconstructed. One really promising type of rateless code was online codes, which required O(1) time to generate a single block and O(n) time to decode a message of length n, which was much better than the LT rateless codes that came before it. (The paper can be found at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12....)

I wrote up a Java implementation in my free time, and it worked well and was quite fast. I pinged the authors of the paper, asking if I could open-source it, because I came across a web site with their names on it that seemed like they were going to commercialize the idea. They replied that they had abandoned the idea of online codes because, even though their approach was faster than all other coding schemes, it still violated patents held by Luby and his company Digital Fountain, which has since been acquired by Qualcomm. That was a bummer, because I thought the online codes paper was very elegant, and the implementation was quite simple.


You'll probably like another interesting article about a similar problem, the network code of X-Wing vs. TIE-Fighter and how they handled extremely high latency and packet drop rates: http://www.gamasutra.com/view/feature/3374/the_internet_suck...


I did. Thanks!

Their arguments suggest UDP with no acknowledgements. If they had needed to send more data they might have needed slightly more sophisticated error correcting codes than repetition codes. Doing that and keeping the latency low could be a challenge however.


You might like RakNet: http://www.jenkinssoftware.com/

Whenever I think "I should roll my own semi-reliable protocol..." I look there first.


You probably can get lost in choosing and implementing the optimal algorithm in every aspect of the game. Sometimes you just have to go with the sub-optimal to get it out of the door.


I'm not an expert, but don't you have the timeline wrong? E.g. Luby's "LT codes" is from 2002, well after Quake's 1996 launch date.

Honestly, though, I think the Quake guys were going for - and achieved - "good enough".

(Also, coding obviously doesn't work for client -> server communication, since the server has to keep going even if packets are lost.)


I agree Luby's codes came after Quake. Although other codes for the erasure channel did exist, and the important ideas for the class of scheme I'm talking about were known. That wasn't meant to be my point though. I was just thinking about the puzzle, rather than what "should" have been done.

I don't follow the final parenthetical comment: the point of the coding is precisely so that the server can recover all information regardless of packet loss so that it can keep going. If a client disappears then the server acts as if it disappeared at the last successfully decoded delta (it has to have a means to deal with clients disappearing regardless).


Regarding my parenthetical comment: the server can ignore input that was not received in time, block while waiting for input, or try to "rewind" the world. The first is lossy, the second leads to the same hiccups as reliable transmission protocols (although the effect may be smaller), and the third leads to "paradox" (and lots of complexity). There are no really good solutions here.

I agree that such codes are interesting, though - I really should know more coding theory...


Thanks for the pointer to fountain codes. Though, since I'm working on a networked game that runs over websockets, this is basically something I wouldn't bother with: websockets are TCP, not UDP, and as such they guarantee delivery and correct order of arrival. I realize TCP vs UDP is something of a debate in game networking, but it's a simple solution here. Let the protocol do the work and just send (compressed) deltas. I suppose this is equivalent to the Doom way.
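
In spirit it ends up looking something like this (a toy sketch assuming a JSON-friendly state dictionary, not any particular engine's format): diff the state, compress the delta, and let TCP worry about delivery and ordering.

    import json
    import zlib

    def make_delta(old_state, new_state):
        # Keys that changed or were added; None marks a removed key.
        delta = {k: v for k, v in new_state.items() if old_state.get(k) != v}
        delta.update({k: None for k in old_state if k not in new_state})
        return delta

    def encode(delta):
        return zlib.compress(json.dumps(delta).encode("utf-8"))

    def apply_delta(state, payload):
        delta = json.loads(zlib.decompress(payload).decode("utf-8"))
        for k, v in delta.items():
            if v is None:
                state.pop(k, None)
            else:
                state[k] = v
        return state

    old = {"player_x": 10, "player_y": 4, "ammo": 50}
    new = {"player_x": 11, "player_y": 4, "ammo": 49}
    payload = encode(make_delta(old, new))
    print(apply_delta(dict(old), payload) == new)   # True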


Yes, it's for the protocol to worry about. imurray basically described a protocol to build on top of UDP that has slightly weaker guarantees than TCP.

(Or does the construction that imurray described benefit from domain knowledge about the underlying data?)


A comparison of transport protocols to consider if anyone reading this ever decides to try their hand at implementing such a quasi-reliable UDP-based protocol:

https://secure.wikimedia.org/wikipedia/en/wiki/Transport_lay...

In addition to that list, there are µTP and rpp, plus others no doubt.


eru makes a really good point, and I suddenly feel much less confident. My intention was to sketch the idea for a protocol that was reliable (with very high probability, just like anything in the real world). So, to repeat the argument: if there's some information we need reliably, whatever cunning thing we layer on top of UDP, we're just reimplementing what TCP does. It seems unlikely we can do better than TCP unless we're doing something very problem specific.

I guess the argument is that a scheme based on UDP can have very low latency a lot of the time, only occasionally stalling while waiting for more data. Whereas TCP, which needs acknowledgements and to keep packets in order, has a uniformly high latency. For this particular type of application, the tradeoffs are different from typical internet use?


TCP is kind of the full-featured kid on the block. You might do better making up a protocol that has what you want and skipping the rest. There are real latency costs for parts of TCP that a game will never use, e.g. Nagle's algorithm (sure, you can turn that one off, but everything has its cost, and you can't turn off everything you don't need in TCP).
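
For what it's worth, turning Nagle off is just a socket option; e.g. in Python:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle so small, frequent game packets go out immediately
    # instead of being coalesced.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # ... connect and send as usual; everything else about TCP
    # (acks, ordering, congestion control) still applies.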

I am surprised how infrequently RTP comes up in these conversations. Something close to RTP seems like it would be ideal for games.


> For this particular type of application, the tradeoffs are different from typical internet use?

Absolutely. The value of any one packet is fairly low, you can interpolate the player's path, hell, maybe no one else was looking at him at the time. But with TCP, you basically stop the world for a couple roundtrips to make sure you have that packet, so all the packets you received after the original drop and before the retransmission are no longer useful, and you have to drop them too.

It's also worth noting that TCP does /not/ have a uniformly high latency, it does the sawtooth thing. In other words, if Level3 decides they're going to shovel a bunch of your ISP's traffic into the bitbucket for a second, your packets have been dropped, the server is going to wait until it has a complete set of packets, and your TCP implementation has decided to throttle the number of packets you can send to catch up.


> So, to repeat the argument: if there's some information we need reliably, whatever cunning thing we layer on top of UDP, we're just reimplementing what TCP does. It seems unlikely we can do better than TCP unless we're doing something very problem specific.

We can do much better than TCP. We just have to weaken the reliability requirement. And like you say, in the real world, that can go quite a long way, without actually being much weaker than TCP.


Apart from the timelines, my impression is that implementing fountain codes has some IP issues. Also, there is some code complexity in getting them implemented correctly (as hard to get right as Paxos). Anyone know a game that used these? Anybody have first-hand experience?


If considering using fountain codes, I would definitely want to check out the IP issues; there are certainly some patents in that space. For the application being discussed, fountain codes wouldn't be an exact fit anyway. I pointed to them because the ideas used in fountain codes are fantastic, and it might be possible to do something with that flavor.

(I too would love to hear first-hand experience of anyone who's used this type of code.)


Too bad I only ever proved theorems about codes and didn't actually use them. But many of these codes are really beautiful.


Haven't the newer games had advances in this area?


Quake really wasn't playable over the internet until the QuakeWorld "extension" came along.

It used prediction and delta codes, and actually ran the server and client (and still runs) at 72 fps; no other online multiplayer game that I know of has done that since. This means if you have a ping of 14 milliseconds, you really have a ping of 14 milliseconds.

If you've played Quakeworld with solid 72 fps (this only really became possible many many years after its release) and a good mouse, monitor and decent internet connection, all other games after that feel like clunky puppet shows where you can only vaguely affect the end result. That's the reason why it's still being played. It really feels like you're on the server.

The game also enables amazing moves that can be pulled off by sufficiently skilled players - there is no ceiling on domination abilities; there can be tens of levels of players that can crush anyone below them easily. This also means that it can be pretty newbie-hostile at the same time. But it's pure bliss when dueling with someone your own level.

The game companies assumed (probably correctly) that the average consumer never pays attention to good control or stratospheric skill potential, and rather pays for the resolution, level of detail and nice colors as well as newbie entry easiness.

So, Quakeworld was and probably always will be a one-off with no peers, a raw Formula One car in a world of soft SUVs.


Quakeworld was such a revolution.

The gameplay is brutal. You have to put things in context. At the time Quake was designed rocket jumping was an unknown concept. Using the mouse was a controversial and new idea (you had to enable +mlook!). As such the game doesn't fit the mold of a refined and balanced FPS. Everything is raw and frenetic. You have to master rockets and lightning gun, you have to exercise strong map control, you have to aim and fire faster than any other FPS before or since. There's no weapon balance to allow for specialists, or forgiving maps to let you stage a comeback. They'll never make another game like it.


Fully agreed. I was hooked on expert mod ctf (with the offhand grappling hook) for quite a while. I've never found another game that approaches that pace of gameplay.


Valve's Source engine (and GoldSrc, HL1), which itself is heavily based on the Quake 3 engine, uses delta compression and prediction as well. This is part of what made Counter-Strike such a popular game, and still is today. Especially in the competitive gaming world.

http://developer.valvesoftware.com/wiki/Source_Multiplayer_N...


Valve licensed Quake code from id, and GoldSrc/Source are descended from that. There is no Quake2 or Quake3 code, the improvements over Quake are pure Valve.


This.

QW is the Starcraft of first-person shooters. It's so much fun to duel (or 2on2 or 4on4) that pro players for other games still enjoy coming back to qw, where many of them started. I would say it's a beautiful game.

It also has a ton of fanatical players/mappers/coders hacking on weekends, making their own custom additions to the game.


Quake 3 sent the game state update to clients as a delta relative to the last state they had acknowledged receiving: http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Ne...
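
The general idea looks something like this (a toy Python sketch, not the actual Quake 3 code): the server remembers the snapshots it has sent and diffs each new one against the latest snapshot the client has acknowledged, so a lost update never forces anyone to stall.

    class SnapshotServer:
        def __init__(self):
            self.history = {0: {}}   # seq -> full snapshot sent at that seq
            self.last_acked = 0      # newest seq the client has confirmed
            self.seq = 0

        def ack(self, seq):
            if seq in self.history:
                self.last_acked = max(self.last_acked, seq)

        def send_update(self, current_state):
            # Return (seq, baseline_seq, delta) for the client.
            self.seq += 1
            baseline = self.history[self.last_acked]
            delta = {k: v for k, v in current_state.items() if baseline.get(k) != v}
            self.history[self.seq] = dict(current_state)
            return self.seq, self.last_acked, delta

    server = SnapshotServer()
    print(server.send_update({"hp": 100, "x": 0}))   # delta against the empty baseline
    server.ack(1)
    print(server.send_update({"hp": 90, "x": 0}))    # only "hp" changed vs. acked seq 1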

On a different end of the spectrum there is Unreal's replication model, still used today by all UE3-based games: http://udn.epicgames.com/Three/NetworkingOverview.html


I remember that MW2 used a different model where one of the players was the host.

That was great for folks away from servers, like me - until the guy decided to disconnect, when a new host had to be chosen.


That's still a client-server model, only one of the clients happens to be the server as well. Quake 1 worked that way, too. The Halo games use a client-server model but support transparent host migration if the original host quits mid-game. That's really tricky to get working even halfway decently!

The Call of Duty games all ultimately descend from the Quake 3 code base, so I wouldn't be too surprised if MW2 and BLOPS were still using a recognizable variation of the networking model described in the article by Brian Hook in my link.


I'm a huge fan of Michael Abrash and his writing style: he always starts with a totally unrelated story, and somehow manages to relate it to the topic of his article.

Check out the other chapters on Quake here: http://www.bluesnews.com/abrash/

The story on the first one is truly inspiring as well.


How old's his daughter these days? He used to start his Doctor Dobb's Journal (DDJ) articles with anecdotes about his (toddler) daughter.

I'm guessing she's now older than I was when I was reading those articles.


Quake was a huge part of my formative nerd years. I remember fondly the first time I ran the BSP calculation for a level I built, fired it up and crapped my pants at how smooth the rendering was. The game itself was pretty bizarre, but so many nerdgasms were had over the insanely beautiful shit they were able to render in real time.

And then if you got a 3dfx Voodoo Graphics card you would get translucent water and bilinear filtering on the textures....

Those were the days.

If anyone wants to really travel back in time, the original QuakeTalk newsletter is still available online: http://www.gamers.org/pub/idgames/docs/faqs/qtalk400.txt. I remember vividly reading through John Romero's gameplay concepts and being ridiculously excited. Of course there was no way they could've achieved what my naive 14 year old self imagined from those interviews. But it sure was nice to think about.


> And then if you got a 3dfx Voodoo Graphics card you would get translucent water and bilinear filtering on the textures....

It's hard to explain to those who were not around the revolutionary change that occurred when GLQuake + the Voodoo card came out.

It was one of the few 'holy shit' moments in my technological life. I knew then and there that everything changed going forward.


Seeing DOOM for the first time in my buddy's residence room had that effect on me.

The thing is I knew the math for 3D CG, but more from an academic angle. I'd also been playing some true 3D games like LHX. I "knew" that what I was seeing was just impossible -- there was no way it could be that fast on that hardware.

So, simplifying the problem from 6DOF 3D to 2.5D made a world of difference.

I love the idea that I might be able to turn an impossible problem into something solvable if I just constrain myself to solving a subset.

Something to keep in mind when stakeholders want everything all at once, with no compromises. Compromise / prioritization is what makes cutting-edge technology possible.


http://www.quaddicted.com/software-vs-glquake/software-vs-gl... has some pictures of the texture filtering. But being able to run it at a "high" resolution (like 640x480) surely was a big part of the wow factor too.


As a computer games development student, this feels very relevant to me (as, I'm sure, it does to other software developers). Even a project I'd like to do, on my own, for a week, is very hard to finish. There never really seems to be a point to polishing after I've learned what I wanted to learn. I'm battling this literally as I write this comment - this is only another way to procrastinate. I find keeping rigid office hours helps, as does keeping in mind that this project or that will look good in a portfolio, or might bring in some cash.

Really, though, it's just damned tough to continue something that you view as finished. The next games project I start will be rather boring to code, but hopefully fun aesthetically. That way, I'll concentrate more on the second 90%, but getting past the first 90% might be harder than usual.

If you've got any tips for surviving the second 90% of a project, I'd love to hear them. My short attention span coupled with my low focus really makes it difficult to finish anything, and it's getting me down.


As a computer games development teacher ... :)

There are ups and downs, and one gets used to dealing with them. It's just practice.

One of the most motivating things is to get positive feedback -- both from shipping features, and from human contact.

I always have times where I'm in a rut ... I don't want to work on the project ... but someone sees it and they think it's cool. Or maybe I get away from it for a few days, then look at it with fresh eyes and get re-motivated to fix just one more thing...

Also, you can use procrastination to your advantage. Don't play games or surf to procrastinate. Instead if you're burnt out on project A, switch to project B, or set up a server, or write a script, or exercise, or play music, or read a math book (I'm giving away my preferences).

Anything to recharge or to make progress on something else that needs to be done.


Have you tried releasing early? As soon as you get some people who are actually using your stuff, and if only trying an early demo, you probably have more motivation to finish it.


The 90% + 90% is soooooo true. Everyone wants to do new shiny stuff, nobody wants to fix bugs so the product can be pushed out of the door.


Sounds like a market for making bug fixing compelling and fun.


That's the difference between FOSS and commercial software: the latter gives developers a reason to do what is otherwise uncompelling and unfun.

Some things are just not fun.

Some things are just not compelling.

Some things are a big obnoxious hassle which nobody will enjoy doing and nobody will appreciate.

Doing those things differentiates the professional from the amateur, and is why the professional gets paid.


I think this is a great point, and helps explain why much FOSS software is clunkier than its commercial counterpart.

Although I don't think this is an issue of good, professional developers vs bad, amateur ones. Sometimes the same developer works on commercial and FOSS projects. It's just about the nature of work.

Work is hard. That's why you have to be paid to do it. If you're doing a project for fun, you'll probably stop when it stops being fun.

For many of us developers, the "first 90%" is fun and intrinsically motivating; the "last 90%" sucks and we do it because it's what we're paid to do.


Not to be snarky, but that's like saying "sounds like a market for making teeth-pulling compelling and fun". Some things are just hard and sometimes boring too. There's certainly a market for easing the pain, but I'm not sure you can make it fun.


Some things, but not all things. Consider Farmville. Then consider this from the Simpsons in the 90s: http://farm4.static.flickr.com/3022/2461504842_de9b7ef09f.jp...


I like fixing bugs. I hate that you have to replicate them beforehand. That's usually digging out certain versions of the environment setup. The actual theory-validation cycle of bug fixing is fun, because it is just like science.


I like fixing bugs too. The more difficult and obscure the better. I am often called in to fix a bug that nobody else can find, partly because I'm good at it, and partly because I enjoy it.

My dream job is freelance bug hunter .. I'll track 'em down and squish 'em when you can't, or don't want to :)


You should talk to John Robbins about that freelance profession. That's what he used to do and maybe still does. If you become really good at jumping into an unknown environment and fixing the hard bugs that no one else can fix, I suspect you can make a really healthy living.


Shipping is like a project in itself - and I'm not even just talking about the process of finishing the product, but the act of preparing something for public consumption once you're finished is like a whole other dimension. It's enough to leave you cowering in the cupboard. And you'll never be happy.

Ship in fragile terror my friends. It's the only way.


Unless you're in tools ;)


This comes at a great moment for me. I have just gotten into the second, not fun 90% part of the project and it's really frustrating. Mr. Abrash has articulated why.

I'm going to put my head down and work.

And ship it.


Today marks the 15th anniversary of the release of the Quake shareware.


I've been working on a Doom/Build style game engine for the browser so I've been reading Abrash's Graphics Programming Black Book[1] which has been very insightful. It covers key aspects of the development of Doom and Quake, and I highly recommend it.

[1] Full text: http://webcache.googleusercontent.com/search?q=cache:d-lFmnF...


> true 3-D objects. However, this raised a new concern; entities can contain hundreds of polygons, and there can be a dozen of them visible at once, so drawing them fast was one of our major challenges.

How quaint.



