It's a bit of a roundabout article, given that it's ostensibly talking about the Web version of Photoshop. Basically, Adobe already solved this problem, decades ago, in the traditional non-web application by using the filesystem. The new bit here is that Chrome now offers a filesystem for web apps to use [1], so from the web app's point of view it can apply essentially the same solution the native app would.
It doesn't really go into the guts of how PS actually manages all that data in much detail. Just "it goes to the filesystem in a smart way."
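For what it's worth, here's a minimal sketch of what "going to the filesystem" looks like from a web app via the origin private file system [1]. The file name, sizes, and data are made up, and the synchronous access-handle API is only available inside a worker:

```ts
// Inside a Web Worker: the synchronous access-handle API is worker-only.
// Hypothetical file name and data; this only shows the OPFS read/write shape.
async function scratchDemo(): Promise<void> {
  const root = await navigator.storage.getDirectory();           // OPFS root
  const file = await root.getFileHandle("scratch.bin", { create: true });
  const handle = await file.createSyncAccessHandle();            // sync, worker-only

  const block = new Uint8Array(64 * 1024);                       // one 64 KiB chunk
  block.fill(0xab);
  handle.write(block, { at: 0 });                                // write at byte offset 0
  handle.flush();

  const readBack = new Uint8Array(64 * 1024);
  handle.read(readBack, { at: 0 });                              // read it back
  handle.close();
}
```

The synchronous read/write-at-offset handle is what makes scratch-file-style random access practical, compared with re-reading whole Blobs.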
Photoshop can't rely on mmap, because its scratch-file implementation is older than most modern OSes, and on some systems it exceeds not just the physical memory limits but the virtual memory limits too. Though I'm not sure the browser even has an mmap capability.
It's unfortunate that there's still no mmap equivalent for I/O in the browser, so you can't do demand paging or anything like that. Nothing approaching zero-copy either; you're typically looking at two copies or more. At least the OS page cache will probably help you out.
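To make the copy count concrete, a rough sketch of the staged path from a file into wasm linear memory (the handle shape and names are assumptions, borrowing the OPFS handle from the sketch above):

```ts
// Assumes an open FileSystemSyncAccessHandle-style `handle` and the app's
// WebAssembly.Memory. The staged path below is two visible copies; even reading
// directly into a view over memory.buffer still copies out of the page cache,
// since nothing can map file pages into wasm memory.
function loadIntoWasm(
  handle: { read(buf: Uint8Array, opts: { at: number }): number },
  memory: WebAssembly.Memory,
  fileOffset: number,
  wasmDest: number,
  length: number
): void {
  const staging = new Uint8Array(length);
  handle.read(staging, { at: fileOffset });              // copy 1: file -> staging buffer
  new Uint8Array(memory.buffer).set(staging, wasmDest);  // copy 2: staging -> wasm heap
}
```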
If the OS can't see the RAM, then the OS-provided ramdisk can't use it either.
But in this case, the use case for pointing a PS scratch disk at a ramdisk came from running an A23-aware patched version of System 5.x/6.x with a version of Photoshop that was not A23-aware. AFAIK this was a very tiny window around PS 1.0 and possibly PS 2.0; by the time of PowerPC and System 7.5+ (which this user had), there was definitely no advantage to partitioning RAM away from programs.
At a guess, someone had told them to do it this way in the era of the early Mac IIs and such, but they hadn't grasped the 'why'.
Linux is full of stuff that writes to disk all the time.
Browsers do it too. On some versions, with extensions, it's possible to rack up multiple GB an hour while doing nothing, just from writing and rewriting the same file every minute, probably for no better reason than that a dev hated complexity too much to check for unnecessary writes and spare the SSD.
I worked on some memory stuff for Figma, a similar app, though it has surely changed since then.
One interesting thing about Photoshop's perspective is that it's fundamentally about "files": a single bytestream that contains the whole picture and has to be transferred to the browser before you can work on it. A Figma document can also refer to images, but in a different way. The sum of the document data (like layout) and the image pixels can be extremely large, gigabytes even, but those two things can be transferred separately from server to client.
Might be off-topic, but I've always wondered why Figma limits files to 2 GB, even for paid users. I remember once I had to present my work to stakeholders at a large company, and when I opened Figma I was met with the dreaded red banner saying I had exceeded the 2 GB limit; it wouldn't open the file even for viewing. I was frantically posting on the forums and emailing customer support for a solution. Luckily they replied quickly and suggested I create a new file, paste the important bits into it, and leave out the old stuff.
Yes! I am a daily user, plus the dev seems like a great guy (he did an awesome AMA back in the day). One of the few places I make it a point to whitelist in my ad blocker. So glad to see this shout-out.
I wonder how big the intersection is between people who don't know what RAM and virtual memory are and people interested in this article.
“The amount of memory also varies from device to device, as you know when you order a new computer or device and specify the amount of Random-Access Memory (RAM) desired. Many of these platforms also support virtual memory, which allows an application to use more memory than is physically available.”
Backing RAM and/or arrays with files was invented long before Adobe. The tricky part is probably optimizing the image-processing algorithms to fit the filing method(s). For example, if images are split into "tile files" and a given algorithm references too many files/tiles at a time, creating too much file I/O, then it probably needs to be reworked into a tile-friendly algorithm.
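As a rough, hedged illustration of the tiling idea (the tile size, key format, and eviction policy here are all invented):

```ts
// Illustrative only: a bounded in-memory tile cache. Evicted tiles are handed to
// `spill`, which stands in for "write the tile file out to the scratch disk".
type TileKey = string;                       // e.g. "12,7" for tile column 12, row 7
const TILE = 256;                            // 256x256 px, 4 bytes per pixel (assumed)

class TileCache {
  private tiles = new Map<TileKey, Uint8Array>();  // Map preserves insertion order
  constructor(
    private maxTiles: number,
    private spill: (key: TileKey, data: Uint8Array) => void,
    private load: (key: TileKey) => Uint8Array | undefined
  ) {}

  get(tx: number, ty: number): Uint8Array {
    const key = `${tx},${ty}`;
    let tile = this.tiles.get(key);
    if (!tile) {
      tile = this.load(key) ?? new Uint8Array(TILE * TILE * 4);  // miss: load or blank tile
    } else {
      this.tiles.delete(key);                // re-insert to mark as most recently used
    }
    this.tiles.set(key, tile);
    while (this.tiles.size > this.maxTiles) {
      const [oldKey, oldTile] = this.tiles.entries().next().value!; // least recently used
      this.tiles.delete(oldKey);
      this.spill(oldKey, oldTile);
    }
    return tile;
  }
}
```

An algorithm that walks the image tile by tile touches each tile once or twice; one that hops randomly across the whole image keeps evicting and reloading tiles, which is the "too much file I/O" case described above.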
Donald Knuth used a pre-release version of Photoshop (at Adobe headquarters, after the employees had gone home for the day) to process the images for his book 3:16. He was using multiple Macs to parallelize his process, and there were frequent crashes. This was back in the days of Macs with 512K of RAM, which makes your 128MB Mac seem positively capacious by comparison.
And the original of this page showed how it was possible to zoom into an apparent bitmap image to an incredible degree (demonstrating the vector resolution-independence of Satori): https://web.archive.org/web/20070804225012/http://www.animat...
[As far as I recall, it zoomed into the writing on the clipboard on the right, to show a single period!]
This isn't even Adobe's first attempt at solving this problem. Old, old versions of Photoshop in the early '90s (I believe it was introduced in Photoshop 2.5) had an ancient hack called "Quick Edit", which let you select a file and then open a "slice" of it: a specified bounding box within the file (and not only x,y coordinates; it could also select a subset of the color channels). It would then present that chunk of the file as its own standalone document window, which you could manipulate to your heart's content; when you saved it, the slice was written back out to the original document. It only supported a subset of graphics formats, because each one had to have custom code to read and write the slice data.
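For a raw, uncompressed format the slice trick is mostly offset arithmetic. A hedged sketch of the read-a-slice / write-it-back idea (headerless packed RGB and the function names are my assumptions, not how Quick Edit was actually written):

```ts
// Sketch of the "open a slice" idea for a raw, uncompressed, interleaved RGB file.
// Real Quick Edit had per-format code; the stride math below assumes no header.
import { openSync, readSync, writeSync, closeSync } from "node:fs";

const BPP = 3; // bytes per pixel for packed RGB (assumed)

function readSlice(path: string, imgWidth: number,
                   x: number, y: number, w: number, h: number): Uint8Array {
  const fd = openSync(path, "r");
  const out = new Uint8Array(w * h * BPP);
  for (let row = 0; row < h; row++) {
    const filePos = ((y + row) * imgWidth + x) * BPP;     // start of this row's span
    readSync(fd, out, row * w * BPP, w * BPP, filePos);
  }
  closeSync(fd);
  return out;
}

function writeSlice(path: string, imgWidth: number,
                    x: number, y: number, w: number, h: number,
                    slice: Uint8Array): void {
  const fd = openSync(path, "r+");
  for (let row = 0; row < h; row++) {
    const filePos = ((y + row) * imgWidth + x) * BPP;     // same offsets, written back
    writeSync(fd, slice, row * w * BPP, w * BPP, filePos);
  }
  closeSync(fd);
}
```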
Starting with 32-bit wasm had some performance advantages, because 64-bit runtimes can use virtual-memory shenanigans to implement bounds checking with zero overhead. In wasm64 they'll have to do explicit bounds checking instead.
No, I mean running 32-bit wasm code on a 64-bit runtime. The trick is that a 32-bit wasm instance can only address up to 4 GB of memory, and 64-bit hosts give you terabytes of address space to play with, so you can reserve all 4 GB of address space up front and incrementally commit the pages as the wasm memory grows. If the wasm code tries to access a page beyond what's been committed, it triggers a hardware exception, so there's no need to perform explicit bounds checking. This only works because the wasm address space is significantly smaller than the host's address space.
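You can see the 4 GB ceiling from the host side; a small sketch (the up-front reservation itself is an engine implementation detail, not something this code controls):

```ts
// wasm32 memory is measured in 64 KiB pages, so 65536 pages is the 4 GiB ceiling.
// On a 64-bit host the engine can reserve the whole `maximum` as address space up
// front and commit pages lazily as the memory grows; an out-of-bounds access then
// lands on an unmapped page and traps via a hardware fault, with no explicit check.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 65536 });

memory.grow(15);                        // commit 15 more pages (16 total, 1 MiB)
console.log(memory.buffer.byteLength);  // 1048576
```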
That's one way they could do it. It's not quite free, since they'd have to mask off the high address bits at each load/store, but that's probably cheaper than adding a full branch at every load/store.
I think the wasm spec would have to be amended for that to be legal, though: currently, out-of-bounds memory accesses must throw an exception one way or another, and silently dropping the high bits of an address may turn an out-of-bounds access into an in-bounds one that doesn't throw.
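A toy, JS-level stand-in for the two strategies being discussed (nothing like what a JIT actually emits; it only shows the trade-off):

```ts
// `heap` stands in for the instance's linear memory, assumed to span the full
// 4 GiB guarded region in the masked case.
function loadMasked(heap: Uint8Array, addr: number): number {
  // Keep only the low 32 bits (addr mod 2^32): cheap, but it can silently wrap an
  // out-of-bounds address back into bounds, which is the spec problem noted above.
  return heap[addr >>> 0];
}

function loadChecked(heap: Uint8Array, addr: number): number {
  // Explicit bounds check: a compare and branch on every access.
  if (addr >= heap.byteLength) throw new RangeError("out-of-bounds memory access");
  return heap[addr];
}
```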
Not sure why. They could just guarantee the allocator never hands out invalid addresses, no? If you're trying to access outside the valid range, that's an out-of-bounds access, because the memory range isn't mapped. I'm sure I'm missing some nuance, though.
The allocator never hands out invalid addresses, but the wasm code can then try to access outside the bounds of the allocated memory (e.g. a huge array index).
The wasm runtime and other browser code run in the same address space but must stay out of reach of the wasm code.
On the one hand I like 64 bits from a technical perspective, but on the other hand I'm not so sure I want to live in a world where a single web app needs to access more than 4 GB of address space.
But if the need occurs, you'd rather be forced to download and install an app than use a web app? That seems short-sighted, especially for occasional tasks that come up where I'd like to just bang it out once with a web app rather than install something.
I thought this was pretty well known. Adobe has been running its own VM engine for a long time now. I'm not sure which Photoshop first shipped with it, but given how pretty much everyone in prepress used it, it's likely been there since at least Photoshop 3.
The PS.temp file is a special file created by the Adobe Photoshop program to implement the program's virtual memory scheme. Virtual memory allows you to work on images of nearly any size, by using a hard disk instead of RAM (random-access memory) to hold information. The PS.temp file contains the program's virtual memory information. The file is normally deleted when you quit the program.
For implementation details, see the files UVMemory.p and UVMemory.inc1.p in the source distribution[2].
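A heavily simplified, invented sketch of the idea behind a PS.temp-style scratch file: fixed-size slots in a temp file that image buffers can be paged out to and back from (the block size, names, and API are mine; see UVMemory.p for the real implementation):

```ts
// Invented sketch: back image buffers with a scratch file, handing out "slots"
// that can be paged to and from disk by offset.
import { openSync, readSync, writeSync, closeSync, unlinkSync } from "node:fs";

const BLOCK = 1 << 20;                              // 1 MiB per slot (assumed)

class ScratchFile {
  private fd: number;
  private nextSlot = 0;
  constructor(private path = "PS.temp") {
    this.fd = openSync(this.path, "w+");
  }
  alloc(): number { return this.nextSlot++; }       // returns a slot index
  store(slot: number, data: Uint8Array): void {     // RAM -> disk (data is BLOCK bytes)
    writeSync(this.fd, data, 0, BLOCK, slot * BLOCK);
  }
  fetch(slot: number): Uint8Array {                 // disk -> RAM
    const buf = new Uint8Array(BLOCK);
    readSync(this.fd, buf, 0, BLOCK, slot * BLOCK);
    return buf;
  }
  close(): void {                                   // "deleted when you quit"
    closeSync(this.fd);
    unlinkSync(this.path);
  }
}
```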
No, we've (collectively) discovered that we have a lot of mature technologies (like mmap), mature platforms and extremely fast hardware, so we created a crippled software development platform on a bloated layout engine so people can get creative again hacking around the arbitrary constraints and design flaws of the platform to solve problems that had been solved 40 years ago.
Web browsers did drastically simplify delivery and deliverability, though.
Not that there wouldn't have been alternatives had web apps not become a thing. It's just more user-friendly than most current open alternatives (e.g. apt-get), and more open than most current user-friendly alternatives (e.g. app stores).
That's ironic, given that it's a Chrome blog post: recently someone here said that Google / the Chrome team used their market power to kill JPEG XL (a proposed JPEG successor) in favor of AVIF, which has weird size restrictions.
[1] The "origin private file system": https://developer.chrome.com/articles/file-system-access/#ac...