
Isn't that the classic problem of software engineers also being responsible for UX? There's a reason UX is a whole field of its own.


UX was a field in the 1990s when it was at its height. We still have designers, but most software houses closed their academic UI/UX research groups and just hire people who make things that look attractive.

If you've recently tried to teach a computer-illiterate person to do something, you'll know what I mean. No consistency, no internal logic or rules, just pretty stuff that you simply have to know.


Windows 95, of all things, was actually a good example of a company doing 'proper' research driving UI work.

Btw, I loathe the term UX, because 'interface' (in UI) should already be a big enough term to not just mean pretty graphics, but the whole _interface_ between user and program. But such is the euphemism treadmill.


I remember from studying this that the difference was not only about interaction but about the overall impact on the user.

I always found MacOS Finder's spatial file placement a good example (for non-MacOS users: Finder has this thing where it remembers window positions and the placement of files within a window, so one can arrange files as they please and they stick). If that feature were removed, the UI would stay the same (there are file icons, some windows, the same layout), but the feature itself does remove some of the cognitive load.

UX is impacted by many non-UI things: load times, responsiveness to input, reliability (hello, dreaded printer dialogs that promise to print but never will).

An anti-pattern I hate with a passion is the MacOS update progress bar. I want to do some work in the morning, I open my computer, and it's friggin' updating. This sucks, but it happens; we got forced into this. And then there's this progress bar that jumps around: 20%, 80%, 50%, 30%, 90%. A colleague asks when I'm going to be online - "oh, 10% left, probably soon" - ding - the progress bar drops back to 30%.

The UI is the same from an observer's point of view (it shows the progress, which I suppose is technically correct and takes multiple update phases into consideration), but the UX is dropping the ball here.
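To illustrate that guess (purely a sketch, not Apple's actual updater logic): if the overall bar is a weighted sum of per-phase progress and the installer re-estimates the phase weights mid-update, the displayed number can legitimately go backwards.

    # Hypothetical multi-phase progress aggregation
    def overall_progress(phase_progress, phase_weights):
        # phase_progress: per-phase completion in [0, 1]
        # phase_weights: estimated share of total work per phase
        total = sum(phase_weights)
        return sum(p * w for p, w in zip(phase_progress, phase_weights)) / total

    # Phase 1 done, phase 2 assumed small: bar shows 80%.
    print(overall_progress([1.0, 0.0], [4, 1]))   # 0.8
    # Installer discovers phase 2 is much bigger and re-weights it:
    # the same real progress is suddenly reported as 40%.
    print(overall_progress([1.0, 0.0], [4, 6]))   # 0.4

Clamping the displayed value so it never decreases would at least hide the jump, at the cost of the bar stalling instead.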


'Interface' used to encompass all these things.

See https://news.ycombinator.com/item?id=43396140


OS X has had the striped progress bar for hard-to-estimate processes for as long as I can remember. Did they do away with it?

There are situations where I don't exactly care how far something has progressed, but I want to see that it at least hasn't hung. Fedora's dnf doing SELinux autorelabeling for two hours without any indication of progress is one of those things I hate with a passion.


There's still a progress bar during the update, and there's a timer (but not always).

The timer also jumps. Once I had a ~40-minute update that kept feeding me hope with "2 minutes left" for most of that time.

My guess is that it's not worth optimizing, but nowadays I shy away from updates if I don't have a 2h buffer (not because I'm afraid something will break, but because I know I'll be locked out).


Their update timer always starts from 29 minutes remaining and goes from there, IIRC, and I find it's way more accurate than Windows' 99% of the time.

Funnily, Linux (KDE) has been very good at its estimates for some time now. Better-behaved storage also plays a role, I presume.


The only real indication of progress being made is a log output of steps completed. All a spinner or similar indicator tells you is that the UI thread is not hung but that isn't really useful information.


In the '90s I had this idea that, in the future, completed steps would be reported back to the server so that progress could be calculated for other users. Like: on system A, downloading step 1 takes 1 minute and step 2 takes 3 minutes. If on system B step 1 takes 1.5 minutes, step 2 should take 4.5. Do the same for stuff that requires processing.

But we apparently chose to make things complicated in other ways.
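A minimal sketch of that idea (step names and timings are hypothetical): scale the reference timings by how much faster or slower this machine ran the steps it has already completed.

    # Reference timings reported by other systems, in seconds (hypothetical)
    reference = {"download": 60, "unpack": 180, "install": 240}

    def estimate_remaining(reference, completed):
        # completed: {step: seconds actually measured on this machine}
        ratio = sum(completed.values()) / sum(reference[s] for s in completed)
        return {step: t * ratio for step, t in reference.items() if step not in completed}

    # This machine took 90 s for a download that takes 60 s on the reference
    # system (ratio 1.5), so the remaining steps are scaled by 1.5x.
    print(estimate_remaining(reference, {"download": 90}))
    # {'unpack': 270.0, 'install': 360.0}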


Obviously, in the ideal case the indicator animation would be tied to something real, like the background process sending output, and there would be a textual description next to it.
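Something like this, as a rough sketch (long_task.sh is a made-up stand-in for whatever the background process is): the spinner only advances when the process actually emits a line, and that line doubles as the textual description.

    import itertools
    import subprocess
    import sys

    spinner = itertools.cycle("|/-\\")
    # Stand-in for the real background process
    proc = subprocess.Popen(["./long_task.sh"], stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:          # blocks until the process reports something
        status = line.strip()[:60]
        sys.stdout.write(f"\r{next(spinner)} {status:<60}")
        sys.stdout.flush()

    proc.wait()
    print("\ndone")

If the process hangs, the spinner freezes too, which is exactly the signal you want.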


Scrolling logs scare common people. I don't know why.


You can turn off auto update. I update my Mac when I can, not when it decides.


Bonus: if you do that you don't have to deal with disabling new Apple Intelligence "features" every time.


Spatial file placement is probably not even good on the desktop. In a file manager it's definitely an anti-pattern.


I think on earlier Windows (95, maybe?) opening a folder would also always open a new Explorer window, so you had the impression that the window actually is the folder you are opening. Today we're more used to the browsing metaphor, where the window "chrome" is separate from the content. I also don't think the spatial metaphor is useful today, but it probably made more sense back then.


People get way into this desktop metaphor.

A window is a program window, not an actual window. A folder is not the same as a folder in a filing cabinet, and a save icon is a save icon, not a floppy disk. They don't have to stand for or emulate physical things.


Historically, it's both, which is how we got here.

The Xerox demo was definitely trying to make near-as-possible 1-to-1 correspondences because their entire approach was "discovery is easier if the UI abstractions are couched in known physical abstractions." UIs that hewed very closely to the original Xerox demo did things like pop open one window per folder (because when you open a folder and there's a folder inside, you still have the original folder).

As time went on and users became more comfortable with computerized abstractions in general, much of that fluff fell by the wayside. Mac OS System 7, for instance, would open one window per double-click by default; modern desktop macOS opens a folder into the current window, with the option to command-double-click it to open it into its own... tab? (Thanks, browsers; you made users comfortable enough with tabbed windows that this is a metaphor the file system browser can adopt!)


I had my folders themed on Win 95. It is kinda hard to explain, but the color schemes and images trigger a lot of mental background processes related to the stuff in the folder. Just seeing a green grid on a black background would load a cached version of the folder in my head and alt-tab into a linked mental process that would continue where I left off.


I think we need more visual cues for common operations, to give more assurance and reinforce the action. For example, I was recently trying to back up some photos from an Android phone by plugging it into a Windows machine and copying files over. I already had an older version copied from before, and I was surprised that the copy action resulted in the same number of files after I selected "skip" in the dialog. What probably happened was that I tried to copy from Windows to Android by mistake. With everything looking the same, it's easy to miss things and have the wrong mental model of what is about to happen.

It would be great to have more feedback for actions like this: maybe show the full paths, or show the disk/device icons with a big fat arrow for the copying direction. Basically the copy/move dialog is the same for 10 files and 10,000 files, the same for copying between devices and within a folder... and it will happily overwrite files if you click the wrong option by mistake. And unlike trashing files, I'm not sure the action can be undone.


"Experience" is more than just "interface". E.g. which actions are lightning-fast, and which are painfully slow is an important part of user experience, even if the UI is exactly the same. Performance and space limitations, things like limited / unlimited undo, interoperability with other software / supported data formats, etc are all important parts of UX that are not UI.


UI, where the I stands for "interface" just like in HCI, used to mean all those things.

But in the industry the focus turned to aesthetics, so a new term was invented to differentiate between focusing on the entire interface ("experience") vs just the look.

Just like "design" encompasses all of it, but we add qualifiers to ensure it's not misunderstood for "pretty".


And that has happened again. Changing the colours is "improving UX".


Thing is: changing the colours _could_ be improving the UX.

Eg I'm colourblind, and a careful revision of a colourscheme can make my life easier. (Though I would suggest also using other attributes to differentiate like size, placement, texture, saturation, brightness etc.)


> "Experience" is more than just "interface".

UX has become synonymous with crap. Give me back the GUI.


To make a simile with books, to me the UI is the writing and the UX is the story plus how it's ingested via the writing.


That’s a good example for showing how “UI” and “UX” are essentially the same thing. At least in a practical context.

We can call an excellent story teller a “writer”. A good story can be described as “good writing”. A great story, let’s say a film being adapted as a book, can become a terrible book if it is “let down by the writing”.

In the context of books and storytelling, “writing” is the all-encompassing word that experts use to describe the whole thing. Just like “UI” used to mean the whole thing.


But UX is bigger than UI. Good UX might simplify some use case so that user might not need to see any UI at all.


The thing with not well-defined names is that they're open to interpretation. To me, the difference between UX and UI is on a completely different axis.

When I was at university, I attended a UI class which - although in the CS department - was taught by a senior psychologist. Here, the premise was very much on how to design interfaces in such a way that the user can intuitively operate a system with minimal error. That is, the design should enable the user to work with the system optimally.

I only heard the term UX much later, and when I first became aware of it, it seemed to be much less about designing for use and more about designing for feel. That is, the user should walk away from a system saying "that was quite enjoyable".

And these two concepts are, of course, not entirely orthogonal. For instance, you can hardly enjoy using a system when you just don't seem to get the damn thing to do what you want. But they still have different focuses.

If I had to put in a nutshell how I conceptualize the two disciplines, it would be "UI: psychology; UX: graphics design".

And of course such a simplification will create an outcry if your conceptualization is completely different. But that just takes us back to my very first sentence: not-well-defined names are open to interpretation.


Thanks for sharing!

> Here, the premise was very much on how to design interfaces in such a way that the user can intuitively operate a system with minimal error.

Yes, that's a good default goal for most software, but not always appropriate.

Eg for safety critical equipment to be used only by trained professionals (think airplane controls or nuclear power plant controls) you'd put a lot more emphasis on 'minimal error' than on 'intuitive'.

We can also learn a lot from how games interact with their users. Most games want their interface to be a joy to use and easy to learn, so they are a good example of what you normally want to do!

But for some select few having a clunky interface is part of the point. 'Her Story' might be an interesting example of that: the game has you searching through a video database, and it's only a game, because that search feature is horribly broken.


That is still the man-machine interface

UX is just a weaselly sales term, "Our product is not some mere (sneers) interface, no, over here it is a whole experience, you want an experience don't you?"


I wouldn't be so harsh.

It's just the euphemism treadmill. Just like people perennially come up with new technical terms for the not-so-smart that are meant to be purely technical and inoffensive, and over time they always become offensive, so someone has to come up with new technical terms again.

See eg https://en.wikipedia.org/wiki/Idiot

> 'Idiot' was formerly a technical term in legal and psychiatric contexts for some kinds of profound intellectual disability where the mental age is two years or less, and the person cannot guard themself against common physical dangers. The term was gradually replaced by 'profound mental retardation', which has since been replaced by other terms.[1] Along with terms like moron, imbecile, retard and cretin, its use to describe people with mental disabilities is considered archaic and offensive.[2]


I once upon a time coined the term "scientific physics". UX is not progress; it is the astrology of UI design. The UI exists between the silicon and the wetware computer as a means to interface the two. UX aims to modify the human and invade their state of mind. Doom scrolling is an example of great UX. Interact vs. subdue. I want to experience the meaning of the email, not the email application.


I don't think it's weaselly: it's not the first term that has lost its original meaning (like "hacker" or, ahem, "cloud") and required introducing specifiers to go back to the original meaning.


For fun, I did a search for "user interface" before:1996-06-01 .

I found a paper that was definitely taking the perspective that the "user interface" encompasses all the ways in which the user can accomplish something via the software. It rated the effectiveness of a user interface in terms of the time taken to complete various specific tasks. (While remarking that other metrics matter to the concept too, and also measuring user satisfaction and error rates.)

But that paper also suggested how the term might have specialized - four pieces of software were studied, and they are presented in a table that gives their "interface technology", in two cases a "character-based interface" and in the other two a "graphical user interface".

Enough usage like that and you can see how "interface" might come to mean "what the user interacts with" as opposed to "how tasks are performed".

( https://www.nngroup.com/articles/iterative-design/ . It really is dated 1993, which I made a point of checking because Google assigns the "date" of a search result based on textual analysis, and it is frequently very badly wrong. I can't really slam the approach, which I assume was necessary to get the right answer here, but the implementation isn't working.)


See my above comment: UI used to mean all of those and then it became just "pretty", so a new term was invented.


UX includes the possibility that the software will be actively influencing the user, rather than merely acting as a tool to be used. (websites selling you stuff versus a utilitarian desktop app).


> Good UX might simplify some use case so that user might not need to see any UI at all.

Yeah, just look at Windows {10,11} and Android. They simplified so much that it's unusable.


UX, as a term, didn't really exist in the 1990s: https://books.google.com/ngrams/graph?content=user+interface...

That's consistent with your timeline of the decline of UI/UX though. My sense is that the birth of the term UX marked the beginning of the decline because it meant redefining the term UI as being purely about aesthetics, implying that no one was paying attention to all of the non-aesthetic work that had previously been done in the field.


The term didn't really exist, but user experience was a thing. I took a human computer interface class in college about designing good UI. My first job out of college in 1996 I got permission from my boss and the boss of the corporate trust folks to go sit with a few of my users for 1/2 a day and see how they used the software I was going to fix bugs in and add features to. Apparently, no one had done that before. The users were so happy when I suggested and implemented a few things that would shave 20 minutes of busy work off their work each day that weren't on their request list because they hadn't thought it was something that could be done.


UX was ergonomics back then, but the current term also implies some "desire" to return to the application, a tint of marketing maybe?


UX was Human Factors Engineering, Usability research, and library science. UX was the rebranded label after the visual designers took over everything.


I remember it as "human-machine interaction" and "HMI design" or "interaction design". It was mostly about positioning interface elements, clear iconography, and workflows with as few surprises and opportunities for error as possible. In industrial design, esp. for SCADA, it is often still called HMI.


Yeah, if you wanted to study usability (or what we call UX today), you'd take the ergonomics course, and there'd be usability classes. So you'd learn about how to sit at a desk, how to design a remote control, and where to put the buttons in an application.

It does seem a bit weird, but I feel like this bigger picture is what a lot of today's design lacks.


I have a guy at work who does most of our UI/UX design, and recently one of the screens we needed to implement involved a list where the user needs to select one option and then click "Save". He designed it with checkboxes... some people just have no idea that UX conventions exist.


> some people just have no idea that UX conventions exist.

Because those were (G)UI conventions.

The new "UX" is in the line of "Fuck ICCCM or Style Guide, i'll implement my own".


He committed a clear, factual design mistake - a [Basic Engineering Defect]. It can't merely be called not following "convention".

Now if the question was between radio buttons and a drop-down list - that is a designer's choice.


The fundamental problem with UI/UX is that it’s so heavily dependent on your audience, and most software caters too disproportionately to one audience.

New users want pop ups, pretty colors, lots of white space, and stuff hidden. Experienced users want to throw the computer through a window when their tab is eaten because of a “did you know?” popup.

Enterprise, professional software is used a lot. Sometimes decades. You need dense UI with a UX that’s almost comically long-lived. Experienced users don’t want to figure out where a new button is, they’ve already optimized their workflow to the max.


My impression was that at some point they went too far with the scientific approach. As in: round up the last few people who had never touched a computer, put them in an experiment, and make their success rate the only metric that counts. Established conventions? "Science says they don't work."

This attack on convention then paved the way for the "just make it pretty" we see today.


The last two companies I worked for had UI/UX teams with knowledgeable directors. It is not dead; it is just that some people don't see the ROI in it.


Exactly this thread. I use Adobe's suite of software, and I really don't even know what GEGL means. Nor should I need to know. Organize filters by function: Blur -> radial/Gaussian/linear/etc., Noise -> add/remove/etc.

Designing the UI based on how the code of a filter operates might be neat for reflecting where the .cpp files live, but it is not how the users think. Then again, a user who picks GIMP over other apps probably is filtered into the more techy side rather than the artistic side, so I'm probably eating a bowl of crow soon.

Seems like maybe time for FOSS UIs to start a Fiverr account looking for UI/UX peeps.


That's exactly how most filters are grouped.

And then there's the GEGL stuff that's leaking implementation details to the user: obviously it should be fixed, but I am certain you can find similar stuff in Adobe's products.

I, for one, having recently been pushed to online MS stuff, certainly see plenty of that in their tools (too many, really, even worse than GNOME ever was when I was active there).


>I am certain you can find similar stuff in Adobe's products

It's just not comparable, and I'm sorry, but with the history of GIMP it's all just indefensible. Let's not forget that in the two decades it took them to implement adjustment layers, Blender started focusing on the user rather than the developer and became a huge contender against non-open software; it's hard to find 3D artists under 25 who didn't learn via Blender and use it professionally.

An opportunity completely squandered by a poor culture.


GIMP itself has been going on for around 30 years now. I think it proves that the approach to development and design is "defensible".

Blender entered where there was no other good competitor in the market, with a company behind it that built a business around it, and set the standards for UI.

GIMP always kinda had to fight against the incumbents that are too ingrained into customer muscle memory to accept any change. Really, are you saying that the location for GEGL filters in the menu is what stops you from using Gimp?

So the GIMP team wisely chose not to fight, and to build their own thing that serves (hundreds of?) thousands of happy users worldwide (I am one of them: I don't do image editing professionally, but people have complimented me on what I've achieved with Gimp; similarly, moving away from Gimp shortcuts would be expensive for me and would make me really hate any big change of the sort).


I think we're revising history here. Blender was completely shunned until they started doing the exact opposite of GIMP and building for their users instead of building for the sake of building. It certainly had no first-mover advantage; even poor students spent thousands just to avoid having to learn it, and no professional studios used it.

This all changed within a few years of them fixing the UI and focusing on users.

>GEGL filters in the menu is what stops you from using Gimp?

It's one example, but my point is already proven by you calling them "GEGL filters". Step back from your biases for a second and really think about what you wrote and the wider implications for the rest of the application and its users.

People just instinctively feel they have to defend GIMP because it was one of the early, larger real desktop Linux open-source successes, but to me it represents one thing: a completely wasted opportunity, and a lesson in how the culture and ideology of a team can squander something that could have been amazing.

"Oh, people would never have switched from Photoshop, the workflow and keys are different" is pure cope; we know this because Figma decimated Photoshop and Illustrator as web/app design tools in about two years just by offering a better tool.

GIMP could have done this 20+ years ago with the right ideology.


No it could not have done it.

Gimp was only ever an enthusiast, developer-driven project, and the 1-2 engineers it actively had working on it could not have pulled that off.

It has nothing to do with ideology, just the sheer complexity of the effort: the GNOME HIG in the 2.0 days was very much focused on a good, consistent UI that caters to users (mostly driven by Sun Microsystems' contributions).

But bringing individual examples of bad UI (I can do so for MacOS, the poster child of usability too) does not mean it's like that on purpose — it's mostly just that, bad instances.

A program is usable based on the whole experience with it, and the results one can achieve. Gimp is not perfect (far from it, really), but for a set of usecases, it is perfectly adequate.

The success or lack of it is not only driven by usability: there are perfectly good tools that simply bit the dust for who knows what reason.


And looking at https://www.blender.org/about/history/, I think Blender mostly owes its success to exactly that marketing approach and not to any of its technical properties (its parent companies actually died twice before it was made free software and a base for community competitions).

Which is exactly how I remember it as an outsider (I wasn't interested in 3D at the time).


If you use Adobe, you’re not the prime design persona.


I don't get the implication. I never claimed I was a designer. I'm a get-shit-done type of person. An operator, if you will. The scariest thing you can show me is a Cmd-N blank sheet of paper. Give me content and a task, and it'll get done. Your assumption that I'm a designer is just that, and I'm perfectly fine being an ass on my own; I do not need your help.


I think they might have been implying that you're not the person they should be designing for. It's arguable that they should be designing for people who have never used an image editor before. If they were optimizing for Adobe users, they should just copy Adobe as much as possible.


I'd argue they're doing neither of these things.


A design persona is an imaginary person that designers use when thinking about users at a medium level of abstraction…somewhere between actual users and demographics.

https://www.interaction-design.org/literature/article/person...

I didn’t assume anything about you. I took your words at face value. To the degree I said anything about you it was that perhaps Gimp is not for you (because everything isn’t for everybody).


A "design persona" is a stereotype that designers use to justify their decision to discriminate against some subset of users. It seems innocent at first, but it inevitably devolves into the developers crudely binning individual users who submit but reports of feature requests into those stereotypes and then, very often, disregarding their feedback without thoughtful consideration.

"Our Personas document says this software isn't for engineers who taught themselves to use Linux in highschool. This guy who submitted a request to add tabs to the interface looks like a nerd, we don't need to take his suggestion seriously even though he's suggesting a normal thing people who fit into other personas would also find useful."


I think a better UX for the average consumer would be a side-swiping filter menu similar to those of social media mobile apps, with non-mathy names: "default blur", "circle blur", etc. Especially as more and more people don't use desktop computers today.

Also maybe LLM integration so you can just explain what you want done, then it does it, instead of needing to follow some tutorial to learn the software


> Also maybe LLM integration so you can just explain what you want done, then it does it, instead of needing to follow some tutorial to learn the software

I like how this counts as a reasonable side remark today but would have been utterly delusional just a few years ago.


Yes, why bother learning how to use things?

(or invest in UX/user research)


> LLM integration

you mean diffusion-model integration?


I remember writing the documentation for the payment processor app used at Iron Mountain and the flow for dealing with a check deposit was incredibly convoluted. The (Windows desktop) application was designed by a team from one of the big 5 consulting agencies and they clearly had never thought about how the application would be used when they designed it.


That's the classic stereotype. What we often find in open-source media applications is intentional and pompous obscurity. "Engineers" use the same words end-users do. Choosing meaningless jargon is just douchey.


That's not it at all. Everyone implementing an image editor knows what a Gaussian blur is, but the average person doesn't. It requires active effort to forget what you know and empathize with someone who is seeing these concepts for the first time. In my opinion, it's an effort that the volunteers working on GIMP aren't obligated to put in if they don't feel like it.


Using "GEGL" IS it. That is not an industry-standard term familiar to users of image-editing software. "Gaussian blur" is.


The GIMP team actively changed their software to better support user workflows, like when they moved from "save as" (with image formats as options) to "export". So there definitely is intent to do the work necessary to make the software usable for their target group.

Problem was: the change was explained in terms of user personas and their workflows, but there was no mention of user tests...


Honestly, I found that one of the most user-hostile workflows they implemented to date. It's really obnoxious.

The number of times I've wanted to save in their native XCF file format is... zero. But I always want to save in a standard image format, and I don't really consider that to be exporting, just saving.

I understand why they wanted this, but I don't think many of their actual users did.


They do that to preserve data. If you're making a complex image with all sorts of layers and masks and then you save to a JPEG, you lose all that information as the image is flattened and compressed. Saving in the native format lets you open the file again later and resume working without losing any data.

Users would be seriously upset if they made JPEG the default and the native format a buried option. People would be losing data left and right.


Saving as XCF still loses the undo history, so it's really a question of which and how much information is lost. Meanwhile, if you have a single-layer image and export it to PNG, which preserves as much relevant information as saving it as XCF would, GIMP will still complain about unsaved data if you try to close it. Absolutely infuriating behavior that no real user ever asked for.


Affinity does the same thing; I don't remember about Photoshop.

The obnoxious thing is separating "save" and "export" into different menu items. Much (most?) software lets you choose "save as" (including saving as a different format) from the regular File/Save dialog. But Affinity Photo (and apparently GIMP) forces you to cancel out of the Save dialog for the millionth time and go back to the File menu and choose "Export." It's annoying and unnecessary.


I don’t know, pretty much all production software I’ve ever used has made a distinction between export and save. Because export takes compute and can change the output, not all formats are created equal.

Saving in the internal format is probably rare if you’re just a user, but if this is a 40 hour a week job, then the compute time savings and potential disk space saving from doing that might be worth it.


The problem is not being able to make the save/export decision from the same dialog. A lot of software lets you do "save as" and pick a different format AFTER you go down the File/Save path.

Having to cancel out of File/Save and go back to the File menu and choose File/Export, over and over and over in software that defies this convention, is incredibly irritating.


That's only true if the engineers are not allowed to copy/steal from existing designs. There are plenty that are better than GIMP (e.g. Photoshop, Krita, ...). If nothing else, make it easy to build a layer on top so that Photoshop can be replicated nearly exactly.


Jesus christ, have you looked at the state of UI over the last ~10 years? Not really a great portfolio for "UX as a whole field".



