The American Who Designed the PlayStation 4 and Remade Sony (wired.com)
99 points by freshrap6 on Nov 7, 2013 | hide | past | favorite | 121 comments


I'm seeing a pattern, a cycle between hubris and humility.

When Nintendo ruled, Sony broke in by courting game developers. Then they became arrogant and made the best hardware they could, which was difficult to use. Thus, Microsoft won, simply because it was easier for developers. It was then amazing to see how arrogant they had become with the Xbox One, trying to squeeze everyone in every way - similar to how they removed the ability to cancel Xbox Live online.

What Microsoft didn't realize is that they didn't win - Sony lost, by shooting themselves in the foot. But in this generation, if PS4 is easy to develop for, cheaper, more powerful, and not trying to squeeze every cent and control every aspect... things will be different.

Of course, Microsoft will see their mistake and adjust (as they've done many times). But while they can drop prices and relax controls, it may be difficult to make their console more powerful. Also, next year, mobile devices should reach GPU parity with last-gen consoles... and with their faster iteration cycle, match the PS4/Xbox One three years after that. (Unlike CPUs, GPUs scale really well.)


I highly doubt that assertion; it's a handwavy extrapolation. Current GPUs have significantly higher TDPs: three years ago, state-of-the-art GPUs had 3 billion transistors and power consumption over 150W.

The PlayStation 3 is based on 2004-era chip designs (a cut-down G70), so you're talking about mobile devices next year catching up with a chip that's almost ten years old, built on a 90nm process node. Today's PS4 is based on a 28nm node; there's no way in 3 years you're going to shrink the PS4 GPU and 8GB of GDDR5 to fit into an iPhone-like device without it heating up like a welding torch and draining the battery like a black hole.

It's a common but incorrect assertion that mobile devices are going to replace game consoles. They might replace them for casual gamers, but not hard-core gamers. I don't know anyone who wants to play CoD or Half-Life 2 on a mobile device. They want to play them on a high-end PC, console, or SteamBox.


Every single time I see someone say "Things in the future aren't going to see huge performance increases like that," I smile, because the beauty of the future is that we don't know. Maybe, maybe not. Maybe in ten years we'll all have crazy brain implants made possible by some new awesome technology, and we'll be watching Pixar movies as these devices render them in real time.

Probably not likely, but I'm not going to make any absolute claims about anything.


To be fair, the person made no assertions; they said "I highly doubt," and couched other limits in terms of thermodynamics. I'm pretty comfortable saying that thermodynamics is not going to change in a year; at least as comfortable as saying that the sun will rise in the East next year.

More importantly to my way of thinking, the parent to both your posts is talking about company strategy. You have to base future strategy on reasonable projections. Some projections are more likely than others, and you have an n-dimensional space of Gaussian and non-Gaussian probability distributions to optimize over. Fortunately, most of the likely scenarios lie near a low-dimensional manifold, and we can thus say things like: a cell phone won't perform like today's Kepler without running at 200C (because thermodynamics). On a website we can smile knowingly and speculate about possible futures, but company directors need to be far more prosaic and pragmatic.


That "we can't know anything about the future" attitude is way more absolute than what the parent is saying and it doesn't foster any interesting conversations about what could happen, why, how, when and how that could relates to our present and past.


I get what you're saying, but Microsoft never won any console race: http://www.geekwire.com/2013/xbox-360-wii-ps3-won-console-ge...


Naive interpretations of the current console generation will lead to incorrect conclusions.

Microsoft unequivocally won the 360/PS3/Wii console generation. Not because they shipped more consoles; no console maker actually makes the majority of their profit from raw console shipments, so it's a very misleading figure. What matters more are sales of games and DLC and subscriptions to services. By those measures the 360 has been trouncing every other console maker. People who own 360s spend more time playing the console, they spend more money buying physical games, and they spend more money buying DLC and subscribing to Xbox Live Gold. Microsoft does more business on higher-margin items than other console makers.

The Wii made OK money for Nintendo, but sales dropped off really fast, and people didn't end up playing it much or buying many games for it. The PS3 took a long time to reach a state of maturity where there was a sufficient stock of good games on the console to actually justify owning one, and eventually it managed to get to a state where it was actually doing well. But from a business perspective, no matter how much better the PS3 was doing, the 360 continued to outpace it (in game sales especially). These are some of the major reasons why there even is a new console generation this year: Nintendo and Sony need to put themselves on a new footing in order to have a chance of growing their market share.


Why do sales matter and not profits? If I go and buy Ferrari F360s for $200K, then turn around and sell them for $10K, I can probably get a lot of sales, yet from a business perspective I am going to be in a pretty bad state.

Has MS turned any profit from their whole Xbox affair? There have been various estimates around, but I've never seen ones claiming significant profits; the best ones I'd seen claimed it broke even. They had shown some minor profit in their entertainment division in some quarters, but the RROD alone cost them $1B, not counting minor stuff like purchasing and running Rare or paying for "exclusivity".


If profit figures were available we wouldn't be having this discussion, because the answer would be obvious. But they're not. Nintendo's figures are the most available (since gaming is the majority of their business and they are a public company), but the least relevant, because we know they've been struggling. Sony has had reduced profitability and losses over the last few years. The interesting thing is how difficult it is to get numbers out of Microsoft. They fold the Xbox and console gaming part of the company in with web-based gaming and the Windows Phone (and formerly Zune) part of the company, which muddies the waters considerably. Overall, though, there are many indications that console gaming has been profitable for Microsoft. For example, in 2012 alone Xbox Live made $1.2 billion in revenue. This is a very high-margin business for MS, and that represents about 10% of the entire worldwide video game revenue.

MS has dumped a lot of money into console gaming, but they've built a business that is now generating a lot of revenue in a lot of high margin areas.


I think your numbers are way off; the entire worldwide video game revenue is not anywhere near $12B [1].

Also, margins are not as high as you seem to imagine; look at any third-party publisher's reports (they are all public corps).

[1] http://in.reuters.com/article/2013/06/10/gameshow-e-idINDEE9...


If it's such a successful business for Microsoft, how come the potential next CEO of Microsoft is thinking of ditching that business to someone else? Is he insane, or is that business a burden to Microsoft?


From the original article:

"When the PS3 launched, according to most estimates, Sony controlled about 70 percent of the console market. Seven years later, it’s on even terms with Microsoft, whose Xbox 360 outsold the PS3 in the U.S. for 32 consecutive months."

Going from 70% to "even" with Microsoft seems like a non-win?


That graph is incredible. I had no idea holidays mattered so much. They may as well be the reason the industry exists.


First I will note that Sony hardware has reputedly never been easy to develop for, so the PS3 wasn't anything new there.

Second, you state "while they can drop prices and relax controls, it may be difficult to make their console more powerful." We still don't know how the dice will land in this regard. Microsoft managed higher-than-expected frequency yield, and nobody knows where Sony's chip sits. It also remains to be seen how the eSRAM vs GDDR5 tradeoff plays out.


It was a different world then, but the original PSX was pretty straightforward: a commodity MIPS processor and relatively straightforward vector and image co-processors. In particular, Sony made a bet that RAM prices would fall enough to make the console profitable long term, and that bet paid off. To remember how much things have changed: the CD-ROM format was controversial then, but was a huge win for developers.

Bluntly, Sony got it right with the PSX in making it very approachable for developers. They messed that up with the PS2, and they should have learned their lesson then. Instead they doubled down on the same misguided approach with the PS3, with predictable results. I'm glad to see they're not going to make the same mistake again.

Source: I was a subcontractor on a PSX title years ago.


And the PS2 is officially the best-selling console in the world [1]. But I agree - other than the PS1 (which was easier than the Saturn), Sony hardware was never easier than the competition's.

[1] http://en.wikipedia.org/wiki/List_of_best-selling_game_conso...


There is no debate about the hardware, really. The PS4 is the more powerful console in every single way.

What makes the Xbox One hardware interesting is the Kinect.


How so? I thought on specs they pretty much trade evenly blow-for-blow, aside from Sony's unknown chip frequency and the GDDR5 vs eSRAM?


The PS4's GPU is somewhat more powerful (in absolute terms), and the Xbox One's eSRAM is much too small to make up for the really slow DDR3 (it can barely hold one frame, I believe).


The notion that mobile GPUs will soon overtake console GPUs is nonsense, yet I've heard it over and over since the iPad 2 days. Even the latest iPad Air is still quite far off an Xbox 360; they might catch up during the next 2 years, but then we'll have a completely new console generation anyway.

If mobile GPUs reach a point where they are good enough, that's an entirely different story, but that still leaves you with subpar controls and very casual games.

I also don't think it matters much if the Xbox One is a tiny bit slower than the PS4; console-exclusive titles will get even more rare in the future, and the difference in visual quality between the two is negligible, as in hardly visible to the human eye.


They're more "different" than "weak" nowadays. Most tablets have way more memory and much better SIMD than a 360, with a (now) similarly powerful GPU, but with much lower bandwidth. While the end results don't look as good as on a 360 overall, some aspects do indeed look better.

The one huge difference is the input method, which will always hold tablets back.


That's true, but my point still stands: in terms of raw processing power for games they are still far off. I believe an Xbox 360 manages around 10 GTexels/sec while the iPad Air is around 3 GTexels/sec. For comparison, a modern desktop GPU is at about 50-80 GT/s, which is where the new console generation is as well.

Same story for GFLOPS and most other relevant metrics.


But the tablets have much higher resolution and often no aliasing. It's not quite so clear-cut when comparing only the end-result.


The Cerny Method sounds remarkably like the Lean Startup Method, but apparently pre-dates it quite significantly.


The product of this style of production is most often called a "vertical slice." A slice of cake has just a thin amount of cake, but it contains every layer. In this analogy the cake is the whole game, and the vertical slice is a tiny snippet of the game (i.e. a short level) where all features (every layer of cake) of the core functionality of the game are present.


That's exactly what I was going to say. Even the terminology, "publishable first playable" echoes the "minimum viable product" terminology that's in vogue with lean startup practitioners.


Agreed, and to their credit both are designed to solve the same problem, which is to maximize the likelihood that you're spending the money on the right thing. Long gone are the days when someone decides to plop down a million dollars to make some random game that will first be seen by customers when it is in the retail box.


I think this approach is evident in Metal Gear Solid: Ground Zeroes (http://www.eurogamer.net/articles/2013-11-04-metal-gear-soli...).

It's a level of the main game, but they are actually releasing it for $29. Taking this approach to the next level!


I had the exact same thoughts while reading the article. Cerny created MVPs of games, got feedback, and would pivot/change based on user feedback. And he did this in the mid '90s!!


It's not an MVP if you have to build the whole game engine, but it's similar, I guess.


This was a good read, and the narrative between Sony's launch and Microsoft's launch has the feel of some pretty classic hero's journey stuff in it.

The interesting upstart this time seems to be the SteamBox.


The PS4 may well make for a better video game machine and more profit for Sony, but it's too bad the very interesting and powerful Cell processor concept will be relegated to the dustbin.


The Cell design was interesting, but it was obvious that chip multiprocessors would win. The argument is similar to why x86 continues to prevail vs RISC and VLIW: supporting x86 is only a modest constant factor in transistor count but preserves access to the existing software ecosystem. The latter trumps the pennies per part the x86 decoders add.

Likewise with chip multiprocessors vs systolic arrays: making a core fully general, with proper cache-coherent memory access, isn't that expensive in transistors, and allows you to leverage existing multi-threaded code.

Systolic arrays still have a future in true embedded devices: radar processing is a good example.

Oh, and before you point to the AMD architecture as being Cell-like, it's not at all. Unified virtual memory instead of Cell's complex two-level store. Cell requires the CPU to manage DMAs to the SPUs; on Fusion you can just pass a pointer. CPU and GPU instruction sets are entirely decoupled on Fusion. The GPU supports thread pre-emption.
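
To make the contrast concrete, here's a minimal sketch; dma_get, dma_wait, shared_alloc, and launch_gpu_kernel are hypothetical stand-ins for illustration, not the real Cell MFC or Fusion APIs:

  // Cell-style: the SPU must explicitly DMA data into its small local
  // store before it can touch it (dma_get/dma_wait are hypothetical
  // stand-ins for the MFC DMA intrinsics).
  float local_buf[1024];                          // lives in SPU local store
  dma_get(local_buf, main_mem_addr, sizeof(local_buf), tag);
  dma_wait(tag);                                  // stall until the DMA lands
  process(local_buf, 1024);

  // Fusion-style: CPU and GPU share one virtual address space, so the
  // "transfer" is just handing over a pointer (shared_alloc is a
  // hypothetical allocator).
  float* buf = shared_alloc(1024 * sizeof(float));
  fill(buf, 1024);                                // CPU writes...
  launch_gpu_kernel(process_kernel, buf, 1024);   // ...GPU reads it directly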

In any case, the lesson here is obvious: design hardware in opposition to the existing software ecosystem in your industry, and you'll make something that is academically interesting but struggles to deliver value to end products.


The x86 vs RISC debate is a perfect example of how technological advances can confound seemingly simple tradeoffs. There's simply no such thing as a CISC processor anymore. All "x86" processors have RISC internals and use micro-op translation to present an x86 ISA at a software level. As you point out, the same sort of dynamics play out at every level. There's only so much advantage that simplified core designs get you, since the overhead for "legacy" support tends to be small.

It'll be interesting to see how Intel's x86 SoC development pans out. With the Atom they weren't really aiming for the SoC market, and only recently have they actually bothered to try. Even with their first-generation stuff they've produced results pretty competitive with ARM; given Intel's research muscle and their fab capacity, it wouldn't be surprising to see a lot more x86-SoC-based smartphones and tablets hitting the market in the coming years.


The APU in the PS4 picks up many ideas from the Cell processor. You have some heavy cores for program logic and many small ones for graphics and computation tasks. Both on the same piece of silicon and attached to the same memory space.


Maybe, but it does not seem to be very powerful at all. I've seen most of the upcoming launch games (and played about half of them) and I was far from impressed. It feels like mid-range gaming PC power at best, and many games apparently struggled to run in full HD, with framerates low enough to make you feel you were back in the PS2 days. If they are planning to sell these consoles solely on the graphics, they'll have a tough time.


A mid-range gaming PC that's ready to play out of the box for $400 is nothing to sneeze at.

Also, keep in mind that the original launch titles for the Xbox 360 looked like this:

http://ve3dmedia.ign.com/ve3d/image/article/655/655564/perfe...

Launch titles are generally carry-over concepts from earlier consoles and/or drastically rushed to hit a very hard deadline; the titles hitting in the next couple of years will up the ante significantly.

That aside, I'm personally looking forward to next Friday quite a bit, as my preordered PS4 from Amazon is supposed to arrive that day, and having played around with some of the games already, I'm really looking forward to more time with Killzone, Knack and Assassin's Creed 4.

FWIW, I'm one of those gamers who was a huge fan of the Xbox, mostly due to Live as a multiplayer platform, but I'm looking forward to the PS4 much more than the Xbox One. In fact I have no immediate plans to buy a One, though I'm sure I'll probably buy one eventually (much like I bought a PS3 eventually, but that wasn't until this year).


Thanks for posting at least one pic of what launch games looked like last generation. I urge people to go look at some others on the web.

Anyone arguing that games don't look that much better than current gen needs to remember how much progress we've made this generation, and then look at how good games like Killzone: Shadow Fall already run and look at launch.


That's another topic. It's still a lot more powerful than the original Cell in the PS3 (and a lot easier to program, to boot).

Sony originally intended to have the Cell do all the rasterization work in the PS3. They had to add an additional NVIDIA GPU at the last moment once the Cell proved too slow to do that at competitive speed. That was one of the reasons why the PS3 was so expensive at launch.


I suspect the $400 PS4 contains roughly $400 worth of hardware, so it's not surprising that it would be similar to a mid-range PC. This generation isn't as "generous" as the $600 PS3 that cost $800 to make.


Honestly, you'd spend $600-$650 to build a comparable computer. And that's not counting the fact that the PS4's software, both OS and games, is built for a single target. And that's assuming you'll get cheap peripherals and a cheap case and stuff. And it probably doesn't include the cost of a Windows license either.

Specifically video cards don't start passing the PS4 until the $200 range, at least when comparing via synthetic benchmarks.

I'm also unaware of how to get memory like GDDR5 for PC RAM.


Yeah, this is not true at all. You can get video cards that are better than the PS4's (by enough to counter any initial optimization, if not end-of-life optimization) for $150. Specifically, 7870 GHz editions.


The only thing you can't get in a PC is the high-bandwidth shared memory. You can probably make up for it by having tons of RAM on your PC's video card, but I still think the shared memory architecture will be the most interesting part of the current generation of consoles.


That, and the usual low-level hardware access that consoles have always had. That counts for a lot, especially later in the console's life.


> You can probably make up for it by having tons of RAM on your PC's video card.

You can't "make up" for having shared memory unless the developer is just using it to pretend that they have a machine with like 6 GB of vram.

Of particular relevance is the fact that the PS4 has cache coherent shared memory. Which is a separate ball game entirely.


By "make up" for it I mean you might get the same performance by moving more of your processing to the GPU and avoiding the CPU altogether. Obviously you can't do the same kind of close interaction between CPU and GPU like you could with shared, cache-coherent memory.

I can't wait until there's a PC that has that kind of memory architecture, though, if we ever get one.


AMD has been working on it with their APUs... That's why it's in the PS4 now.


Haha sorry buddy, the 7870 editions that beat the PS4 are $200. You're right that they are better than the PS4 GPU, but wrong on the price. The 7850's get down to $150, but they're also benchmarking lower than the PS4 in synthetic benchmarks.

Also, a $50 price difference doesn't invalidate my answer of $600-$650; in fact you'll notice that it lands exactly in my stated error range of $50.

EDIT: My bad, I found a super off-brand cheap bad-warranty version of the 7870 for $170.

But speaking from experience, don't buy a shit card from a shit company with a shit warranty.


You don't put an experimental design into production on day 1. This is similar to the classic "let's rewrite the whole thing" problem. Maturity of a platform is an important factor, and it's often difficult to pinpoint all of the particular aspects behind the maturity of a given platform.

Intel made the same mistake with the IA-64/EPIC/Itanium architecture. They were banking on that to be the future of high-performance computing, and then AMD kicked their ass by using an iterative approach with x86-64. The irony is that today the Itanium architecture is actually pretty good, finally, but it's been relegated to a tiny market niche.


I wish more people would care about Sony's record.

They initially officially supported Linux on the PlayStation 3, but later decided not only to remove the feature from new models, but also to reach out through the cloud to remove the feature from existing consoles [1] [2].

This is also the company that put rootkits on CDs [3].

[1] https://news.ycombinator.com/item?id=5886509

[2] http://en.wikipedia.org/wiki/Otheros

[3] http://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootki...


  It makes a lot of sense to just a build a piece of it first — but that piece needs
  to be representative. You can’t just hack something together.” He calls this
  prototype a “publishable first playable,”
A better explanation of MVP than most.


So I'm clueless, as I haven't been paying attention to this market since the PS3/Xbox 360 were released - does the PS4 not run PS3 titles? If it does, does it have some emulation, or is it new hardware sitting on top of the old Cell design?


Neither the PS4 nor the Xbox One is backwards compatible with its previous-generation counterpart. Both of them went x86 this time around, with no emulation layer or additional legacy hardware to maintain any sort of backwards compatibility.

A little bit of a bummer, but we'll all forget about it in a year or two :)


I don't get this desire for backwards compatibility. Why would you want to play a really old game on a shiny new console?

If you really did, then pick up a used PS3 or 360 for about £50 and get the PS4 later!


Says a guy who obviously doesn't have children and who also doesn't own console games :)

I bought my kid the new Lego Batman 2 for PS3 in March for his birthday. We decided to upgrade to the PS4. He can't play this new game - one that was recently released - on the PS4. It's a weird thing to not support backward compatibility, IMO. It's a win for the developers/Sony but a loss for consumers. I now have to buy all new games, and Lego Star Wars isn't going to be any better on PS4 than on PS3. Your question acts as though there should always be this "evolution" going on, but I disagree. I like watching Daffy Duck from the 1940s - I don't like watching the Daffy Duck from the 1980s. Sometimes a game is "really good" and doesn't need revving - it just needs to be made available for the current-gen console.


Firstly, why are you getting rid of your PS3? Secondly, I have a 360, and had a PS2 before that, a PSX before that, etc. I also have an XBone and PS4 preordered.

I trade in all my old games, as I never replay them; if I do, it's normally after such a long time that I can download them on my phone or run them emulated, as I'm sure will eventually be possible on the PS4.

Supporting backwards compatibility could result in developers not bothering with the new hardware, something that has happened a bit with the Game Boy Advance and DS platforms.


Again, "Says the guy who obviously doesn't have children" haha


You don't see what difference it makes because you have no context. There are a lot of things I thought/did as a young man with no wife or children that I don't do today now that I have both. I can't tell whether you are being willfully dense (or "captious" as yesterday's word of the day taught us) or you just haven't taken the time to think from someone else's perspective.


Try making a valid point rather than saying I'm not considering your point of view.

Or I could just say, "Says a man who's insecure" or some other vague and assumed statement.


I don't see what difference that makes, you could get maybe $50 on ebay for a ps3 slimline.


Why get rid of them? I wish I still had my NES and SNES from childhood. Even if I couldn't play them, I'd love to just see them, for the memories and nostalgia.


Because maybe I've already got $1000 invested in games I'd like to keep playing?

Well, I understand the clean break was necessary, since their architecture strategy sort of requires it, but it's still a nuisance that we're confronted with yet more consumerist ideals being marketed as "features".


You have two options: keep your old console and keep playing them, or sell the lot second-hand, and then IF you want to play them in the future, download them on your phone or whatever device can now emulate them in JavaScript ;)


Unified memory architecture! There should be some cool hybrid CPU/GPU algorithm possibilities. Currently the overhead of copying stuff between the two memories in CUDA prevents any fine-grained cooperation (see the sketch below). I wonder how much GPGPU ability is exposed.
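
For example, here's a rough sketch of the round trip that dominates today; the calls are standard CUDA runtime API, but the kernel and sizes are just illustrative:

  // Compiled as a .cu file with nvcc.
  __global__ void scale(float* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 2.0f;
  }

  void run(float* host_data, int n) {
    float* dev;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dev, bytes);
    // These two copies are the overhead: every CPU<->GPU handoff pays
    // for a full transfer across the PCIe bus.
    cudaMemcpy(dev, host_data, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host_data, dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);
  }

On a unified, cache-coherent memory like the PS4's, the CPU and GPU can work on the same allocation, so that ping-pong (and the coarse-grained algorithm structure it forces) goes away.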


They should have mentioned the number of titles at launch for the Xbox One. If ease of making games is such a big deal, why aren't there any facts on how far behind their main competitor is?


Xbox One only launches with a couple more retail titles. PS4 launches with a dozen or so more PSN titles.


Because that will be forgotten in a matter of weeks as more titles begin to roll out for both platforms.


I want to know how much Sony paid Wired for that advertism... article.


PS4 is just a PC stuck in one configuration, linked to a popular delivery system.

I don't get what the hype is about.

Yes, it's a pretty good configuration at a reasonable price. But that will only be the case for a few months. PCs will catch up very quickly and will become cheaper, and then much cheaper.

As somebody who loves games, I don't see a reason to buy PS4. I'd rather spend a bit more on a PC, and then upgrade it as I go.

The positive thing I see is the PS4 <-> PC game portability. Developers no longer need to have two very different engines and codebases. That's why the PS4 is a much better option than the Xbox One in my eyes.


> PS4 is just a PC stuck in one configuration, linked to a popular delivery system.

You've belied the true strength of video game consoles. A single configuration means less testing work and a streamlined experience for consumers. Not everyone is willing to deal with graphics drivers and endless installers.


Which also leads to crazy optimization against the stable hardware. Just look at what PS3 and Xbox 360 are pushing out with ~512 MB RAM and 7+ year old hardware. It's pretty incredible.


>Not everyone is willing to deal with graphics drivers and endless installers.

To be fair this is much simpler now than it was 5-6 years ago.


Why would I get a Mac when I could get a PC for much cheaper that will be faster and give me more control over the hardware? Why would I get an iPhone when I can get a cheaper Android and have more control over how I use it? Why would I buy Windows when I can get Linux, which is more powerful, customizable, and free?

Not everything can be reduced down to specs and numbers. Everyone has different use-cases.


Funny, this is why I use a PC running Linux and an Android phone... and wait- so does my Grandma... And we have very different use cases.


Your Grandma uses a Linux PC? I'd love to hear about her experience with it.


For her, my mother, my mother-in-law and my step-mother-in-law alike, it is great. From their perspective, they click on Firefox, a Thunderbird/Gmail link, OpenOffice, Skype, etc., and they get what they expect.

Only unlike when they were running Windows, they don't have to call me over every 6 months to de-worm their sick machines. They all use Ubuntu in gnome-fallback mode (so it looks like GNOME 2.x). Nowadays I only have to deal with rare hardware issues on my family's machines (and desktop PCs are easy as hell to service).

P.S. Reflecting on it, I think Linux is actually ideal for very technical users (programmers/sysadmins/etc.) and very basic users (icon clickers). I think "power users" are actually the people who have the hardest time: they have a use case that requires more learning than the basic users', and they can't just figure it out intuitively because they don't understand how everything actually works like the technical users do.


I have the same story by getting them a Mac, although now I just send them to the Apple Store.

Enjoy your support calls ;)


Helping out a couple hours a year is a small price to pay for giving them a rock-solid experience AND not supporting a company like Apple (at least in my book ;) )

P.S. The nearest Apple stores are also 150-200+ miles away for them...


Without being snide, I have actually found FaceTime is great for support calls, as they normally call me on it when there is a problem. I can just say, "point it at the screen..."


For what it's worth, my brother and I set our mom up with a Slackware machine running Dropline GNOME (this was years ago). It was a good Thunderbird+Firefox combo, very stable; I remember uptimes past 90 days. She needed some software for work that only worked in IE, so it had to go eventually.

When it does work, it works really well. The only requirement she had was that none of her icons changed: there were several links to specific websites she wanted on the desktop (bookmarks be damned), arranged in clusters. It's no trouble at all for Gnome, with launchers, of course.


Linux can be great for a lot of people, provided you set it up for them.

My computer illiterate wife uses Linux PCs all the time with no issue (albeit for basic tasks, but then again that's all she ever used Windows for either).


And what are you trying to say exactly?


The poster I responded to mapped out why, logically, PC, Linux and Android make the best combined platform, but then suggested that this is not really true for intangible reasons ("Not everything can be reduced down to specs and numbers").

I was expressing that I follow his logic and think this logic is valid. And not just for techies like me, but lay people as well.


So people who bought iphones and macs just didn't understand the options available to them through linux? I seriously doubt that premise.


I think that many tech people use Apple because it's what they think hip techies use, and that is what they want to be. It's a brand, an image. Apple also has an aesthetic that many people like. I'm not saying people don't have a right to make up their own mind, but I have a right to think they are choosing wrongly.


So you are saying those intangibles don't exist? Maybe you and your mother just don't value them as much.


I never said that they don't exist, and I would agree I must value them less; that is implied by the choices I have made. There are also intangibles that push in the opposite direction, such as Apple being a comparatively evil tech company in regards to open-source interactions, market tactics, patents, and labor/human rights.


This is beautiful.


Well you get a few benefits.

> stuck in one configuration

This is a massive benefit to development everywhere, in addition to really being able to push the hardware to the limit. It's a lot easier to learn how to properly "abuse" a single configuration than thousands of different pieces of hardware in millions of different configurations.

And just in reliability and testing: you can be sure that your exact hardware, with a nearly identical software load, has been tested countless times. This is why you can often find games running much smoother on a 5-year-old console than on a modern PC.

The APIs are significant too: the fact that I can allocate memory directly on the PS4 and know that it won't ever be paged to disk is pretty significant (see the sketch below). Additionally, Sony's rendering APIs are much closer to the actual hardware, which gives you much more flexibility in usage.

Then there are a few differences in the particular set of hardware that give it good advantages over current PCs, notably that all 8 GB of memory is GDDR5, which is very fast. Additionally, it's all shared and cache-coherent between the GPU and the CPU, which is really quite significant.
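
On the paging point, a general-purpose OS can only approximate that guarantee. A minimal sketch using POSIX mlock(), which pins pages in RAM but can still fail against limits like RLIMIT_MEMLOCK, unlike a console allocator that simply never pages:

  #include <sys/mman.h>
  #include <cstdlib>

  // Approximating "never paged to disk" on a PC: pin the buffer with
  // mlock(). On a console the allocator can guarantee this outright.
  void* alloc_pinned(std::size_t bytes) {
    void* p = std::malloc(bytes);
    if (p && mlock(p, bytes) != 0) {  // may fail (RLIMIT_MEMLOCK, etc.)
      std::free(p);
      return nullptr;
    }
    return p;
  }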


> This is a massive benefit to development everywhere, in addition to really being able to push the hardware to the limit. It's a lot easier to learn how to properly "abuse" a single configuration than thousands of different pieces of hardware in millions of different configurations.

This is most obvious near the end of a console's life after developers master the hardware. Compare launch games and final releases for any console.


Indeed, you rarely see the same improvements on the same hardware for PCs.


Huh? Xbox One and PS4 are very close architecturally, beyond both being x86-based. The Xbox One is closer to a Windows 8 PC on the software side, assuming you have a DX11-based game that runs on WinRT, which could make lazy ports easier. In practice, anyone who would benefit from a lazy port is probably an indie developer using Unity, Monogame or another higher-level framework (or OpenGL and C++ for a weird minority), and AAA developers can't do lazy ports because the performance just won't be competitive.

PS4 isn't really close to an existing set of APIs, although it's possible that Mantle (AMD's low-level graphics API) is the same as or very close to the PS4 graphics API. There are low-level specifics like unified memory, asynchronous compute, and the Xbox One's eSRAM that make neither system look like current PCs, although future PCs might adopt them through new API specs or hardware configurations.

Steam Machine fits your description more accurately, but even it's a little weird in that a lot of games don't support Linux.


>I don't get what the hype is about.

It's a video game console. Genuine question: what's confusing here?


> I don't get what the hype is about.

Video game consoles are entirely about experience. PC gaming and console gaming are entirely different. It's not just about graphics, it's about the controller, the immersive experience of playing on a TV screen in your living room (I know PCs can do that too, but for the average person it can be a hassle to set everything up).

For developers it's a single target that never moves.

Peoples' love for consoles is shown by the fact that many people still play on consoles that are 10-25 years old. I still play games on a PS1 occasionally.


The hype is that games will be much easier to program, since it's much closer to PC programming. The Cell processor was very hard to use, and that was a big drag because it slowed down developers considerably.

The good thing is that games will be made faster and with less difficulty. That will obviously improve game quality, since it will reduce game-making costs; I guess it will take much less time to make an engine and maintain it.

As for your comparison with a PC, you shouldn't compare those, because all consoles are the same, which allows programmers to target a single configuration and optimize as much as possible. On PC, programmers have to adapt their code for different configurations and drivers, and also have to offer multiple options to suit each configuration, because PCs never perform the same.

To sum it up, there is no "minimum configuration" when releasing a game on a console, and consoles have a better performance/price ratio. The PS4 should really make an interesting platform now that it's easier to make games for it, especially since it's dedicated to games, unlike a PC, where the OS can suck up a lot of resources you might want to allocate to your game.


The XBox One is an x86 architecture as well. Pretty much everything that you said about the PS4 also applies to XBox One. Now, there are differences between the consoles, but fundamentally both of the next-gen consoles are x86 CPUs tied to memory architectures that are designed to stream HD content to the screen as fast as possible.


Why does processor architecture even matter to a consumer? Is the Xbox 360 a pre-Intel Mac because it uses a PowerPC architecture? I don't see why so many people mention this. Consoles have always been about custom software, not unique hardware architecture.


It absolutely doesn't matter to the consumer of {XBox One | PS4} games. However, as a PC gamer, I'm excited by the fact that the consoles have x86 architectures, because this removes one of the obstacles preventing PCs from getting console titles.


The boring nature of the architecture is notable as it puts the focus back on software, which is what people care about. That devs this generation can just make shit without discovering the complexities of custom hardware is a good thing.


That's probably true for the PlayStation, but I think developing for the Xbox was already pretty much like developing for Windows, even with different hardware architectures. Most game devs don't look too deeply at the hardware; they just use the tools they're given. The ones who want to understand what's going on in hardware will still study the details of the CPU/GPU.


The endianness of the Xbox 360 made "already pretty much like developing for Windows" not quite accurate. It caused a lot of problems relating to I/O.
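
A minimal sketch of the kind of I/O fixup this forces, assuming the common case of little-endian file formats (real engines bury this in a serialization layer):

  #include <cstdint>

  // Reading a little-endian uint32 from a file/network buffer on a
  // big-endian machine like the 360's PowerPC: reassemble byte by byte.
  uint32_t read_le32(const unsigned char* p) {
    return  (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
  }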


Doesn't the 360 run little-endian?


Nope, the 360 runs big-endian.


It does both.


Incorrect; the PowerPC may be capable of both, but not the Xbox 360 in practice.


Yes, this is correct. Although it is capable of switching, doing so would cause problems that require a motherboard fix which is not implemented, so the machine runs big-endian in practice. I included a Wikipedia link to explain.


What, does MS let game developers choose?



No, the machine always runs big-endian.


All current software does this, but the switching capability means the motherboard could be modified to allow it to run little-endian. This means that if someone wanted to repurpose old Xbox hardware to run a little-endian operating system, it would be entirely possible. Certainly not a practical endeavor considering the cheapness of CPUs, but it could make for a fun experiment.


It's possible, but also keep in mind that the 360 chip was a custom PowerPC manufactured in high volumes. It's entirely possible that the little-endian functionality was somehow broken to save money or otherwise.

But the point was originally about what the Xbox360 runs, and before that, how it affected development. Not about what the silicon in a particular chip in the 360 has the capability of doing.


It does matter this time around because there's no direct backwards compatibility for older games.


Sony will start streaming eventually. It would be fantastic if they implemented a disc matching policy, allowing you to stream a game with the disc in the drive. Perfect backward compatibility with no software support necessary.


This is not going to work anywhere except the continental US due to latency, unless they manage to provision servers everywhere (to the cloud!)


US speeds are not great compared to some places. But let's see where things are in 5 years. Could be much better. Doesn't matter for me. I stopped buying anything from Sony when they removed OtherOS.


It matters to developers, and in turn to the customer in that it affects the amount and type of games released on the console.


A PC's GPU is hidden behind layers of APIs. That doesn't matter when you compare different hardware on PCs, since all hardware there is bogged down by the same APIs, but it makes no sense to compare the same hardware on a PC to a game console that exposes the GPU directly.

It's like comparing JavaScript to native apps. Yes, sometimes JS can come pretty close, and JS on powerful hardware can even beat native code on a weaker platform in some tests. Yet it would be very naive to think that JS nowadays runs faster than any native code on 10-year-old hardware. Similarly, it will be a long while before DirectX/OpenGL apps are unquestionably faster than native GPU code.


Why yes... it is the same old x86 we run in our PC configurations, but that's just the instruction set. Look at the hardware and chip configuration: it's not a regular PC setup, and there's some serious innovation going on there. You've got 8GB of GDDR5 memory, plus a very modern, custom-designed GPU embedded on the same die as the CPU. Sony has always launched systems that stand the test of time, and I wouldn't be surprised if the PS4 stayed around for the next 4-5 years to come.


> Yes, it's a pretty good configuration at a reasonable price

Not even. Actually, for a repackaged PC (which is more or less what it is), I find the launch price relatively expensive. I have also heard that both Sony and Microsoft took the strategy this generation of making money on the hardware itself from day 1, so they are not selling anything at a loss, apparently.


Over twenty years of this, the only reason I have ever bought any console is for exclusive games.


good article...



