Pixelmator Pro gets a magical, AI‑powered Deband feature (pixelmator.com)
278 points by ingve on Dec 21, 2022 | 118 comments


I haven't tried this one yet, but in the last few months they finally released the ability to open up animated GIF files, which was one of the last reasons I was still using Photoshop.

Pixelmator has been my go-to graphic design application on my M1 Mac for the past year now, and it feels good to cut the Adobe cord.


Yep, I finally chucked my Adobe sub this year as well. It was hard getting off Lightroom, but I found that the Apple Photos app is good enough on the Mac.


Really? Apple Photos has been a dumpster fire for my wife. It randomly freezes, she can’t get her photos back out to upload them, etc. It’s almost to the point that she’s done with macOS because of it.


Is your photo library on a spinning hard drive or USB drive? I had this problem before with that setup, and it went away when I moved my library to my internal SSD.


The good old "if it's not working it's because you're not using enough of it" ;)


Moving it becomes necessary when you acquire lots of photos and videos.


A decade-plus library of photos & videos here, probably at the 50k mark now; it’s been pretty flawless for me other than the iPhoto to Photos upgrade. Never really had any issues with imports and exports (which is how I ended up upgrading). I’ve even used the iCloud sharing more often than not, which has been a bit clunky but works once you figure out how to get a link out to people.


You should try Capture One instead of Lightroom; it's not just "good enough", it's better. You can pay once and keep that version forever, too.


I do most of my processing before hitting the shutter. Thus it’s not necessary for me. But I appreciate the suggestion.

Edit: I don’t shoot raw now. I’ve found it unnecessary. I’m not a professional :)


I think most of the Aperture features got rolled into Photos at some point. Not all of them, but it’s pretty good.

(I haven’t done serious event photography since before Aperture was deprecated, though.)


There is a Photos alternative made by the ex-lead Aperture developer called RAW Power [1] that uses the same Apple raw engine built into macOS but with its own library management and with more fine-grained control of the processing. This may be of interest to anyone looking for a Lightroom alternative but who finds Photos lacking.

[1] https://www.gentlemencoders.com/


> It was hard getting off Lightroom, but I found that the Apple Photos app is good enough on the Mac.

Same, after spending many weeks evaluating commercial and open-source alternatives. (For Lightroom users who can't imagine Apple Photos replacing it, for advanced editing you'll be using Photos in concert with other apps via Photos' extensions mechanism and/or "Edit With".)


I don't see how that's a Lightroom alternative. It would take forever to process a batch of RAW photos that way if it is more than a couple of photos. I'm also confused how Apple Photos would know what adjustments you made to the RAW in an external program, or how you would go about exporting non-RAW files in this scenario.

Something like Capture One is much better suited as a replacement for Lightroom. I wish that Apple had a Lightroom alternative, but they got rid of Aperture, so... they're probably not interested in competing in this space again any time soon.


> It would take forever to process a batch of RAW photos that way if it is more than a couple of photos.

Apple Photos has great built-in RAW support, so no processing is needed. If I want to use a different RAW processor, that's straightforward as well.

> I wish that Apple had a Lightroom alternative, but they got rid of Aperture, so…

Check out RAW Power¹ (macOS and iOS), created by the former lead of the Aperture and iPhoto engineering teams. It's able to use your Apple Photos library directly.

¹ https://www.gentlemencoders.com/


Are you editing RAWs from non-phone cameras? I keep trying to get off LR, and use Apple Photos for DAM/editing. But, I keep running into odd edges. Editing in external applications also isn't very seamless. Just wondering if I'm missing something b/c I'd love to use Apple Photos for everything b/c of all the other benefits.


As a heavy Fujifilm user I can recommend Capture One. It’s much cheaper than Lightroom, and there’s actually a free version for Fuji’s raws. I’ve tried many other apps and they all seemed really weak compared to Lightroom. This one is still not as good UX-wise, but it often gives better output than Lightroom itself.

Never heard of people using Photos for RAWs, would love that too.


> It’s much cheaper than Lightroom

No. There are no more brand-specific versions at half the price. You now pay the full price, even with a Fuji or Nikon.

The subscription is roughly $19/month (if you prepay annually; otherwise it's $29/month), which is more than Adobe's offering. And the perpetual licenses got shitcanned recently (I bought one on Black Friday): when you buy a perpetual license, you won't get any minor updates for that version anymore. You buy C1 23.1 and they release 23.2? Well, bad luck. But there's a loyalty program where you can buy a paid update. Hehe.

Oh, and their iPad version is an extra subscription at $5/mo.

As a company I detest C1 more than Adobe. But their processing is so great. When I import my RAWs into C1 I don't have to do much with them. They look astounding out of the box.


" It’s much cheaper than Lightroom "

That's not true for the full version.


Capture One does sell perpetual licenses, so I guess it depends on how often you upgrade.

Regardless, I tried Capture One and Lightroom Classic side by side, and I liked Capture One a lot better. Even though it is more expensive for people who intend to upgrade regularly, it is worth every penny in my opinion, but I also think a lot of people will be able to get along fine on the freeware license. Capture One handled importing new batches of photos surprisingly better than Lightroom Classic, which actually locked up for long periods of time, and all sorts of other operations seemed faster, as if the underlying engine was not ancient. I don't remember all the details as I did this comparison about a year ago. I did not consider Lightroom CC to be a valid option for various reasons.

I also strongly appreciate being able to have a perpetual license, instead of being forced to subscribe for the rest of my life if I want to be able to continue accessing all of my RAWs with the adjustments I make, and not just the exported photos.

The one thing I disliked about both Capture One and Lightroom at the time is that neither supported proper, end-to-end HDR workflows. I have HDR displays. The RAW files contain a stunning amount of dynamic range. Even the common iPhone captures HDR photos (and I don't just mean HDR processing that gets stuffed into an 8-bit JPEG, I mean an actual HDR photo in 10-bit HEIF with an HDR curve). I want to be able to edit and view my RAW photos with extended dynamic range, not be limited to viewing an 8-bit SDR image while editing, and not be limited to exporting crappy 8-bit JPEGs that are missing so much dynamic range (or gargantuan TIFF files that have limited use cases). Adobe has started adding some HDR stuff this year, and that's the only thing Adobe offers that Capture One lacks that I'm even remotely interested in.

Incidentally, I'm also upset that Chrome removed JPEG XL, which promises to be a common format with proper HDR support.


> Even the common iPhone captures HDR photos

That’s not an “even” thing; the iPhone has the /only/ end-to-end HDR still photo pipeline.

Other cameras can do HDR video but not stills. The newest Sony and Canon cameras do support an HDR still colorspace, but not sure what image formats work with it.


My Sony A7IV also captures HEIFs instead of JPEGs (in addition to RAWs), but I haven’t looked closely enough at the Sony HEIF files to say for sure whether it is producing HDR photos or not.

Regardless, the RAW files have plenty of range that I would expect modern RAW processing software to be able to emit HDR photos from them.

My use of the word “even” was really meant to be a reprimand of the camera industry. If even an iPhone can do this, a “proper” camera should be able to do it even better. Extended dynamic range is incredibly important to make images photorealistic. Being limited to SDR should be pretty embarrassing for any photo-related company in 2022. But, that’s just like my opinion.


HEIF isn’t really related to HDR. iPhone HDR is a proprietary extension to HEIF; it doesn’t come for free.


It’s useless to upgrade with every new version (if you are not upgrading your camera); unfortunately, all the companies are slapping together minor upgrades and shipping them as numbered releases.

I suggest Lightroom 6 for those who have older cameras, otherwise the discounted standalone version of Capture One (it often goes on sale). The DAM feature is great in Lightroom, but that can be covered by some other app. Can’t wait to ditch Lightroom though. C1 is great for tethered sessions.

Rethinking DAM, I think I just need a good file manager; I am already organizing my folders myself.

If you are working in a professional environment as a photographer, Capture One is pretty much the gold standard. In any case, retouching is done in Photoshop.


> Are you editing RAWs from non-phone cameras?

Yes, I have many years' worth of (over 10,000) RAW photos from DSLRs and compact cameras.

> Editing in external applications also isn't very seamless. Just wondering if I'm missing something…

I don't think you are — external editing is just not as elegant as an all-Lightroom workflow. That said, RAW Power (created by the former lead of the Aperture and iPhoto engineering teams) is quite nice and supports working directly with the Apple Photos library.


Been using RAW Power for that, and I'm quite happy. It integrates with Photos: you can open your photo library in RAW Power and save edits back.


Lightroom makes it so easy to apply filters in batch to all the photos from a certain room or environment, making it much faster to process event photos.

I'd love to know if there is a good solution for saving time when you have hundreds of photos to process.


If you have photos in Apple Photos, I know you can edit one photo inside Apple Photos, then go to Image -> Copy Edits, and then you can select a batch of photos and paste those edits.

Maybe this works for RAW photos too, but I don't know how/if Apple Photos handles RAW files in this scenario. It definitely works for non-RAW photos.


Pixelmator is amazing - it’s replaced Photoshop for me as one of the essential pieces of software all computers should have installed.

I’m consistently impressed with the range and quality of features they add - it got video support only the other day. Won’t be long before it’s a motion design tool as well.

AI upscaling may not be as good as something like Topaz Gigapixel or RealESRGAN, but it’s damn handy to be able to enlarge a layer without worrying about image quality.


I remember using Topaz on MS-DOS; it was ahead of its time with Targa graphics. I'll have to take a closer look at Gigapixel.


Slightly OT, but the one thing I hate about watching films on streaming services is the banding on the images. Any relatively dark scene with gradations of lighting in the background is a mess of shifting lines. I'd love to see something like this built in to a native Netflix client... although I realise hell will freeze over before that's a reality.


I don't see why clever video post-processing couldn't help with this. Why do you say hell will freeze over? This seems like a much easier solution for them than increasing bitrates and thus bandwidth.


The issue is that Netflix would have to do it, and they'd have to get permission from the content owners to modify the stream. I suspect hell would freeze over before anyone could agree on the relevant licensing arrangement.


FYI, they already do this by synthesizing film grain! https://waveletbeam.com/index.php/news/48-netflix-film-grain...


Interesting!

For each source, they must develop a "grain profile" (or, likely, multiple profiles) so they're applying the right grain "effect" when re-graining?

And on the re-graining: is this being done on the client side??? Like, is my Roku adding back the grain to my Paul Thomas Anderson films?

(Note: I'm only 5 min through the video, so these may be answered)


TVs can do that, Netflix doesn’t have to. Our new top of the line Sony (A95K) has a debanding image setting. (But I think pretty much all Sony TVs have that setting.)

However, it doesn’t seem extremely great and tends to smooth out things it shouldn’t. I have it turned off. It seems like debanding in realtime is still too big of a task even for the most expensive TVs.


Which is why Netflix are the ones who should do it. Even on low-powered devices, they could stream some pre-computed debanding data for each pixel in the video (e.g. as a separate channel encoded with the video).


They might agree to a compression format, and the format could define post-processing.

Third party desktop software could also do this were it not for HDCP. I'm pretty sure any attempt to front-run the stream with a full-screen image processor will blacken the display.


Every good Chinese $10 HDMI splitter or capture card will just ignore HDCP entirely. It's actually refreshing: you just plug it in and it works fine.


You can do it in Chrome on macOS without the screen going black, if you disable hardware acceleration. I dunno how long that will last, though.


No, that happens in the decoder step, which is up to whatever shared library your system uses for it, likely making a hardware call to your graphics card, which is the perfect place to do this stuff.

The solution is a drop-in decoder replacement. Heck, it might already exist; it's a pretty obvious, easy, and direct application.


Even the Apple TV 4K’s UI suffers from banding, such as when they generate blurred backgrounds in the Music app to match album artwork. They need to bring back their 1980s dithering algorithms or something.


Interesting. I don't see this with my Dolby Vision-capable setup, presumably because it stays in 10-bit mode for both UI rendering and video playback.


This may be due to how Netflix’s VMAF metric judges frame quality and prefers images with more edges, even when these are false edges caused by banding:

https://mobile.twitter.com/jonsneyers/status/157337162413241...


Moving to bit depths higher than 8-bit will also help a bit. Aggressive compression doesn't help matters, but even with lossless compression (PNG), 8-bit gradients like skies still suffer from the problem...


AI-based compression is an active area of research: https://www.youtube.com/watch?v=dVa1xRaHTA0

Netflix has a strong history of optimizing their streams. I bet they will be the first to deploy it.


You don’t need AI for this. The problem is that dark areas are fundamentally harder to encode (they need more bits), and how visible the artifacts are depends strongly on how your TV and viewing environment are set up.

It’s possible they’re viewing in a too-bright setup and seeing things they weren’t supposed to.


Netflix has the originals though. They would sooner just start streaming better quality.


Pixelmator is amazing, and features like this constantly astound me. Where other art tools are using AI to generate content, Pixelmator is using it to make annoying things convenient -- like debanding, rescaling and selecting background images. This, combined with a zippy UI and one-off cost, mean I've not really thought about Photoshop for years.

I love it so much I recently decided to make pxdlib [0] to automate fiddly bits like data entry or repositioning layers. It's surprisingly fun to reverse-engineer [1]; the `.pxd` format is a bizarre maze, with common properties some 6 encodings deep!

[0]: https://github.com/yunruse/pxdlib

[1]: https://github.com/yunruse/pxdlib/tree/production/docs/pxd


I wonder if it will ever get to the point where we use this as a lossy compression algorithm on purpose, generating images that can be enhanced into what we want. "What is the smallest image that <popular AI of the day> will blow up into the image I want?" might be an interesting question.


I've seen some prototype compression algorithms that work like:

1. Heavily compress image

2. Use neural net upscaler to upscale back to original size/quality

3. Record the diff between the real uncompressed image, and the upscaled image. This should be small if your neural net is good.

4. Store the compressed image + the diff.

This is a lossless image compression algorithm. Part of the trick is that the neural net size is not included in the compressed file size, because that is amortized across many different files you'll be compressing/decompressing.

It's not super portable, though, because whoever is opening your file needs identical neural net weights to get the identical output to apply the diff to. It wouldn't surprise me if eventually one of these neural nets gets baked into a standard somewhere, and every browser ships with that network baked in. Call it NeuralPNG or whatever.
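
For the curious, here's a rough Python sketch of that scheme (PIL + numpy). The bicubic resize is just a stand-in for the neural upscaler, and the little header format is made up for illustration; the point is that the round trip is bit-exact as long as encoder and decoder share the exact same predictor:

    import io
    import zlib
    import numpy as np
    from PIL import Image

    def compress(img, factor=4):
        # img: an RGB PIL image.
        # 1. Heavily compress: downscale and JPEG-encode a thumbnail.
        small = img.resize((img.width // factor, img.height // factor))
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=50)
        thumb = buf.getvalue()
        # 2. "Upscale" back to full size (stand-in for the neural net).
        restored = Image.open(io.BytesIO(thumb)).resize(img.size, Image.BICUBIC)
        # 3. Record the residual; it stays small if the predictor is good.
        diff = np.asarray(img, np.int16) - np.asarray(restored, np.int16)
        residual = zlib.compress(diff.tobytes(), 9)
        # 4. Store thumbnail + residual behind a toy header of sizes.
        header = (len(thumb).to_bytes(4, "big")
                  + img.width.to_bytes(4, "big") + img.height.to_bytes(4, "big"))
        return header + thumb + residual

    def decompress(blob):
        n = int.from_bytes(blob[:4], "big")
        w = int.from_bytes(blob[4:8], "big")
        h = int.from_bytes(blob[8:12], "big")
        thumb, residual = blob[12:12 + n], blob[12 + n:]
        # Redo the prediction, then add the stored residual back: bit-exact.
        restored = Image.open(io.BytesIO(thumb)).resize((w, h), Image.BICUBIC)
        diff = np.frombuffer(zlib.decompress(residual), np.int16).reshape(h, w, 3)
        return Image.fromarray((np.asarray(restored, np.int16) + diff).astype(np.uint8))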


That's a natural extension of prediction-based encodings, and I believe there are already some experimental ones for text. E.g.: predict the rest of the sentence using GPT-3 and then record the errors in its prediction.


Oh, that’s pretty clever


At one point the holy grail of compression algorithms was fractal encoding, which, if I understand correctly, is basically asking the question: what fractals do I need to reproduce my image? Theoretical compression ratios were off the chart; the problem was that the only method found to solve it was the "grad student algorithm", that is, lock a grad student in a room with plenty of pizza and coffee until they figure out which fractals are needed.

Perhaps these modern-day generative programs (sigh, OK OK, "AI") are our generation's solution to fractal encoding and the problem of scaling the grad student algorithm.

"Well, we don't exactly know how to solve it, but we took 60 GB of linear algebra and hit it with a stick until it figured it out. No idea how it works, but it appears to do the job."


Fractal compression worked automatically (grad student not required) but it typically took hours on the hardware of the late 80s/early 90s. And ISTR it worked well on particular images, but not generally.

It's likely it would work much better on modern hardware, but it was heavily IP-encumbered back in the day which caused almost everybody working on image compression algorithms to ignore it.

The patents have probably all expired by now so it might be worth another look.


> it was heavily IP-encumbered back in the day

It was also horrifically expensive to license, from what I remember, which meant that almost no one used it.


I at least wish for an algorithm that handles this all too common banding issue. We already have formats supporting synthetic film grain (as in the format itself), so how about this? Banding can be taken care of by dithering — a very old technique.
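
To make that concrete, here's a tiny numpy sketch of dithered quantization (the level count is arbitrary): adding noise of about one quantization step before rounding trades hard bands for uniform grain at the same bit budget.

    import numpy as np

    rng = np.random.default_rng(0)

    # A smooth horizontal gradient, stored at high precision.
    grad = np.tile(np.linspace(0.0, 1.0, 1024), (256, 1))

    def quantize(img, levels=32, dither=False):
        if dither:
            # Spread each sample across the two nearest levels in
            # proportion to its fractional part (simple random dithering).
            img = img + rng.uniform(-0.5, 0.5, img.shape) / (levels - 1)
        return np.round(np.clip(img, 0.0, 1.0) * (levels - 1)) / (levels - 1)

    banded = quantize(grad)               # 32 hard steps: visible bands
    grainy = quantize(grad, dither=True)  # same 32 levels, bands hidden as grain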


Finally "enhance enhance enhance" won't just be a meme from CSI.


These AI enhancers just make the image look like whatever the training set looked like. My favorite example of how flawed this is: https://video-images.vice.com/test-uploads/articles/5ef216e9...

There is really no solution to this, because even we couldn't tell it's wrong without prior knowledge of the left side.


There was one issue I remember reading about where, when someone upscaled a picture, the upscaler randomly stuck Ryan Reynolds’s face on a building.


What's the source of that image? It looks like a suspicious result; for example, the white shirt is lost in the "upscaled" image. Was it trained on a specific set of images to prove this point?


It's from an algorithm called PULSE. It seems to search the latent space of StyleGAN to find a face that compresses to something similar to the query image.

https://arxiv.org/pdf/2003.03808.pdf

https://www.theverge.com/21298762/face-depixelizer-ai-machin...


Thanks for the link!


It'll be used to convict the wrong person.

Compression/decompression shouldn't be used for subtle details.

We've seen problems with it before, with scanners[0].

[0] https://www.pcworld.com/article/453236/xerox-scanners-found-...


I seem to recall a project someone did where they encoded an image's position in the latent space of a generative algorithm as the file and used the algorithm to recreate the image later, which wound up being a very efficient compression technique, albeit lossy in weirder ways than usual.


I thought Nvidia had done research into a video codec that works like this.

It'd learn what a scene looked like and then switch to sending a really low-quality stream for the other side to reconstruct, or something like that.


If you want to remove banding from JPEG images, check out https://github.com/google/knusperli. When JPEG quantizes coefficients, there is an entire interval that quantizes to the same number. A traditional decoder uses the quantized value, the center of the interval. When there is a subtle gradient, the low-order coefficients quantize to zero, and you get banding because the gradient disappears. Knusperli does not use the center of the interval; it picks the value in the interval that makes the block boundaries the least discontinuous. This means that it can recover subtle gradients.
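
Here's a toy 1-D version of that principle (not knusperli's actual DCT-domain math): every quantized sample could have come from anywhere in its quantization interval, so repeatedly smooth the signal and clamp each sample back into its own interval. The staircase relaxes into the original gradient without ever contradicting the coded data.

    import numpy as np

    q = 16                                 # quantization step
    true = np.linspace(0.0, 64.0, 256)     # a subtle gradient
    coded = np.round(true / q) * q         # naive decode: a staircase

    lo, hi = coded - q / 2, coded + q / 2  # interval each sample must stay in
    x = coded.copy()
    for _ in range(500):
        smooth = x.copy()
        smooth[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
        x = np.clip(smooth, lo, hi)        # smooth, but stay consistent

    print(np.abs(coded - true).max())      # staircase error: ~8
    print(np.abs(x - true).max())          # recovered error: much smaller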


How do Pixelmator Pro vs Affinity Photo compare these days?


Both are quite mature products these days, with subtly different emphases.

As with this release, Pixelmator Pro has prioritized ML-powered tools and has been an early adopter of new Mac technologies like utilizing the neural cores on M1/M2 processors.

Affinity Photo is in some ways a more traditional photo editor, and its biggest selling point may be its integration with the other members of the Affinity suite, Affinity Designer and Affinity Publisher.


I quit Affinity Photo for Pixelmator Pro a while back because Affinity Photo is quite a bit slower, and because it mostly just copies Photoshop’s UI, while Pixelmator Pro rethinks and improves the bad parts of Photoshop’s UI while copying the good. In the past Pixelmator (before the Pro) was lacking features, but Pro largely addressed that, and they continue to ship significant improvements regularly. Affinity Photo recently came out with a 2.0 across their suite, which may have addressed the pain points, and I’d like to try it.


Affinity Photo has too many quirks and (small) glitches for me: one of the most annoying is that ALL tools/inputs reset to default values/state after a single use. It also becomes imprecise when working on smaller images (not photos): it’s not well suited as a pixel editor. It allowed me to abandon Photoshop a few years ago, but I’m not entirely satisfied.


I use Affinity Photo because when I wanted to try Pixelmator I discovered they only sell the product on the Mac App Store and I strongly dislike the App Store. Affinity lets me buy direct from their website.


Pixelmator Pro is great. I was just an occasional Photoshop user, so its subscription model was a poor fit. So I hopped off Photoshop some years ago with much relief.


I wish they'd make a Linux version or even Windows. Since I gave up on Mac this is the one app I miss. Gimp and Krita are not in the same league :(


Photopea is a surprisingly comprehensive free Photoshop clone that runs in the browser. I've used Photoshop for more than two decades, but these days I only use it a couple of times a month, so learning these new tools and their completely different workflows hasn't been worth the time yet. I have yet to find anything that I couldn't do in Photopea. You can pay for a few extra features, or to support the developers, but the free version is complete.


Same. I'd love to ditch my old-ass copy of Photoshop CS6. It doesn't really support high-DPI scaling anymore, so it's really a pain to use sometimes.


Yeah indeed. Their homepage is pretty hypocritical.

“Professional image editing tools that anyone can use.”

“Designed and built exclusively for Mac”


Debanding filters aren’t particularly difficult. You don’t need to train an AI to do them.

It’s more likely that people who know image processing are so rare that even an image editor can’t find someone to write a filter using actual math. As usual, all the actual good people are amateurs posting on doom9.
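
For reference, one conventional recipe (in the spirit of gradfun/f3kdb from the doom9 world) fits in a dozen lines of numpy/scipy; the radius and threshold here are arbitrary defaults, not anyone's production values:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def deband(img, radius=8.0, threshold=3.0):
        # img: array in 0..255, shape (H, W) or (H, W, C).
        img = np.asarray(img, np.float32)
        # Blur along the spatial axes only, never across color channels.
        sigma = (radius, radius) + (0,) * (img.ndim - 2)
        smooth = gaussian_filter(img, sigma)
        # Small source/blur differences mean a banded gradient: take the
        # blur. Large differences mean a real edge: keep the source.
        out = np.where(np.abs(img - smooth) < threshold, smooth, img)
        # A little noise so the gradient survives 8-bit requantization.
        out += np.random.default_rng(0).uniform(-0.5, 0.5, out.shape)
        return np.clip(np.round(out), 0, 255).astype(np.uint8)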


I've been staring at this post for a few minutes now, trying to figure out whether this was posted here to make fun of it, or as a genuine appreciation.

Debanding has been a thing for many years now, for frameservers (Avisynth, Vapoursynth), video players (madvr, mpv) and games (reshade), and it certainly didn't require AI to work.

Now in comes this company and introduces such a basic feature, while boasting about it as if it's the second coming of Einstein or something. And it uses AI, because of course it does, it's 2022 after all.

LOL.


Oh neat. I have not used Pixelmator since my company agreed to pay for Creative Cloud, I’ll have to check this out. I am personally super excited about uses of AI for restoring less than perfect old media.

I played with a model in a similar vein for deinterlacing old TV and DVDs several months ago. The results were mixed. Some things came out looking fantastic, but some came out looking worse than traditional methods like yadif. I need an AI that can tell me which method is going to work better so I don’t have to do both and manually make a subjective decision.


Looks like a great app; unfortunately, it's Apple-only. And you have to dig pretty deep into their website to confirm this suspicion; it's not stated explicitly anywhere. OK, the "Buy" link goes directly to the Mac App Store, but the text itself is written with the underlying assumption that they're only talking about the Apple universe. It would have been nice to clarify this right from the first header, e.g. "Professional image editing tools that anyone with a Mac or iPad can use."


Oh man. I don’t get this complaint. There’s a quadrillion pieces of Windows-only software; basically 99% of games are Windows-only. Most GNOME and KDE apps are Linux/BSD-only. Yet they don’t scream “Watch out! It’s Windows-only!” on the home page.

Yet everyone assumes that Mac users dream of using their apps on some other OS. Why? I have no idea.


It's not Mac users dreaming of using their apps on some other OS; it's non-Mac users stumbling over articles like this one on HN and wasting time before eventually learning that this app is not for them. "99% of games are Windows only" is no longer true, thanks to Valve's Proton - but finding out if a game is Windows only is usually as easy as looking at the system requirements.


> but finding out if a game is Windows only is usually as easy as looking at the system requirements

It's exactly the same with Pixelmator Pro. You click on the "Tech Specs" menu item, scroll to (or search for) the "System Requirements" section, and get:

"Pixelmator Pro requires macOS 11 Big Sur or later and is fully optimized for Macs with Apple silicon."


I can see what they’re doing but the original picture looks very artificial. Unnatural.


It has only 64 shades (6-bit) instead of the more typical 8-bit, so it has more limited colors than you'd expect.

Banding is more typically seen in monochromatic gradients: black and white, or shades of sky blue. But this image is a yellow/blue gradient, and it's super unnatural to have the RGB channels transition at the same time; typically they transition at different locations, and that helps smooth things out.


I agree, almost looks like it’s a CGI image generated specifically to illustrate banding.


Never saw such awfully banded pictures myself. Poor example.


I'm curious what they specifically do besides handwaving away the fact that they use machine learning for debanding. In particular, are they using the exact same bit depth?

The traditional way to handle this with old Photoshop workflows was to accept that if you were working with low-contrast gradients over large areas and getting banding, it was probably because you were operating at too low a bit depth.

So, what you'd do is simply move up to a higher bit depth for larger color ranges. There's no "machine learning" required.

What I'm getting at is, is this marketing wank? Because it sounds like marketing wank, unless they can do it without increasing file size, and staying in the same bit depth, by using some perceptually lossless dithering technique. Thus, handwavy "machine learning."

Edit: color space/bit depth
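
The bit-depth point is easy to demonstrate with numpy: a low-contrast gradient spanning just 5% of the value range has only about a dozen representable steps at 8 bits per channel, which is exactly where the bands come from.

    import numpy as np

    ramp = np.linspace(0.50, 0.55, 4096)  # subtle gradient, 5% of full range

    as8 = np.round(ramp * 255).astype(np.uint8)      # 8 bits/channel
    as16 = np.round(ramp * 65535).astype(np.uint16)  # 16 bits/channel

    print(np.unique(as8).size)   # 13 distinct values -> 13 visible bands
    print(np.unique(as16).size)  # ~3300 distinct values -> smooth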


If I gave you a heavily compressed JPEG, and asked you to make it look better, could you remove the compression artifacts without much manual effort? You don't have access to the original uncompressed image.


> is this marketing wank?

I don’t think so. There are lots of ways to end up with banding even if you try to keep everything at high bit depth. Photoshop quietly dithers (usually invisibly on a monitor) when going from 16 or 32 bits per channel to 8. Last time I checked, ImageMagick didn’t have that option, nor do other non-Photoshop tools. I feel like I vaguely recall ImageMagick doing blur in 8 bits, even when working in 16 bits? Cameras can sometimes spit out banded images as well. Certainly, if you didn’t control the image, someone else might have added the banding. There’s a lot of 8-bit imagery out there, and no reason to assume that everyone in the world is working in 16 bits and is banding-aware.

On top of that, removing banding is obviously going to be difficult to do well, since it has to differentiate between which edges to blur and which to leave alone. This will definitely be useful and save time for working design/photography professionals.


When I move the slider, I see the banding disappear from the sky (as expected), but then I also see banding appear in the sand below it. Safari on iPhone 13 Pro, FWIW. Is anyone else seeing it?


Interesting. On a MacBook Pro M1, the banding disappears from both.


Are you sure you're not somehow mixing up the Before and After pictures? I see definite banding on the sand in the before picture but not the after one.


Same, on an iPhone 13 Pro in Safari. It’s very pronounced in both sky and sand in the “before” image.


I wish Netflix et al would deband their video streams. Yes, I know, the problem wouldn't exist in the first place if they hadn't over-compressed the videos.


How is Pixelmator vs. Affinity? Trying to ditch Adobe for good.


The same question was asked in another thread, and there are some good responses.

https://news.ycombinator.com/item?id=34087462


On mobile there is no visible difference in the interactive example on the web. Before/After looks exactly the same (iPhone 13).

Maybe use a zoomed example on mobile?


That's odd - I am seeing a pretty clear difference on iPhone 13, iOS 16.1.


Are you looking at the example at low brightness? At normal / high brightness, both the sky and dunes have strong banding in the left side image. The right side image does not.


It was low brightness. Thx.


They should also add Real-ESRGAN for upscaling. I plan to use this to turn old photos into poster prints.


It already has its own ML-based upscaling, which is not as good. It has an ML-based 'remove background' too which is inferior to macOS's new in-built one IME. Haven't been too impressed with their ML features so far.


Where does macOS have a "remove background" tool?


It was added in macOS 13. You can access it in Preview (Tools menu->Remove Background) or as a Service/Quick Action in Finder (right click an image, Quick Actions->Remove Background, or click Customise... to enable it if it's hidden)

You can also do it on iOS by long pressing the foreground subject of an image in the photos app.


Draw Things just added upscaler support (Real-ESRGAN and its ESRGAN derivatives) :)


A lot of wallpaper just got better


very cool application


Is there any free software out there that can accomplish the same thing, i.e. remove banding from images using AI? I'd like to try it.



It's a cheesy marketing tactic to refer to anything as magical. Apple started this trend. There's nothing magical going on. Clever maybe but certainly not magic.


It’s not as cheesy as nitpicking flavor adjectives with the assertion that they’re not literally accurate.


Next you'll tell us that Lucky Charms aren't in fact 'magically delicious'... I think it's safe to say that marketers have been tossing around magic for quite a while; I'd be more willing to believe photo editing could feel magical than marshmallows...


I disagree. I'm sure "magic" is often used as a marketing tactic, but I would strongly caution against believing all people using the word "magic" are doing so insincerely.


"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke

Of course it's not "magic", obviously. But it is impressive technology, and it does delight users. It's taking something that was difficult, time-consuming, and complicated and making it painless.

Yes, it's marketing. The point is to get people's attention and sell the product... or at least get them interested enough to read a white-paper for more details. Marketing departments have committed much more egregious sins over the years. The use of "magical" as an adjective isn't even on the radar.


This technology is not sufficiently advanced.


Apple most definitely did not start this trend. By so, so many decades.


When product marketers describe some app UX as beautiful, my first thought is, when is the last time this person has been to something like a national park? This kind of language does not appeal to me at all, but clearly it's successful on a certain segment of the market.



