I enjoyed this talk, and I want to learn more about the concept of “learning loops” for interface design.
Personally, I wish there were a champion of desktop usability like Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and Google lost the plot in the 2010s due to two factors: (1) the rise of mobile and Web computing, and (2) the realization that software platforms are excellent platforms for milking users for cash by pushing ads and services upon a captive audience. To elaborate on the first point, UI elements from mobile and Web computing have been applied to desktops even when they are not effective, probably to save development costs, and probably because mobile and Web UI elements are seen as “modern” compared to an “old-fashioned” desktop. The result is a degraded desktop experience in 2025 compared to 2009, when Windows 7 and Snow Leopard were released. It’s hamburger menus, title bars becoming toolbars (making it harder to identify areas to drag windows), hidden scroll bars, and memory-hungry Electron apps galore, plus pushy notifications, nag screens, and ads for services.
I don’t foresee any innovation from Microsoft, Apple, or Google in desktop computing that doesn’t have strings attached for monetization purposes.
The open-source world is better positioned to make productive desktops, but without coordinated efforts, it seems like herding cats, and it seems that one must cobble together a system instead of having a system that works as coherently as the Mac or Windows.
With that said, I won’t be too negative. KDE and GNOME are consistent when sticking to Qt/GTK applications, respectively, and there are good desktop Linux distributions out there.
I've given dozens of talks, but this one seems to have struck a chord, as it's my most popular video in quite a while. It's got over 14k views in less than a day.
I'm excited so many people are interested in desktop UX!
Why didn't Star Trek ever tackle the big issues, like the crew constantly updating the LCARS interface every few episodes to make it better, or Geordi La Forge rewriting the warp core controllers in Rust?
I would say that it is the term "UX" that is the confusing part of "UX/UI".
By Don Norman's original definition [0], it is not merely another term for "UI"; it applies specifically when you have a wider scope and are not working with a user interface in particular.
So, the term "UX/UI" would refer to being able to both work with the wider scope, and to go deeper to work with user interface design.
The keyboard and screen UX was established in the 1970s. I've been using a keyboard and screen to work with computers since the 1980s. I am quite sure I'll be using a keyboard and screen until I retire. And probably 50 years from now, we'll still be using keyboards and screens. Some things just work.
Touch screens, voice commands, and other specialized interfaces have and will continue to make sense for some use cases. But for sitting down and working, same as it ever was.
I felt rage-baited when he crossed out Jakob Nielsen and promoted Ed Zitron (https://youtu.be/1fZTOjd_bOQ?t=1852). Bad AI is not good UI, but "not ethically trained" and "burning the planet" aren't great grounds for the objection.
Really interesting. Going to have to watch in detail.
I’m in the process of designing an OS interface that tries to move beyond the current desktop metaphor or the mobile grid of apps.
Instead it’s going to use ‘frames’ of content that are acted on by capabilities that provide functionality. Very much inspired by Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.
A Newton-like content soup combined with a persistent LLM intelligence layer, RAG, and knowledge graphs could provide a powerful way to create, connect, and manage content that breaks out of the standard document model.
No, we’re not. Niri + Dank Material Shell is a different and mostly excellent approach.
Great talk about the future of desktop user-interfaces.
“…Scott Jenson gives examples of how focusing on UX -- instead of UI -- frees us to think bigger. This is especially true for the desktop, where the user experience has so much potential to grow well beyond its current interaction models. The desktop UX is certainly not dead, and this talk suggests some future directions we could take.”
“Scott Jenson has been a leader in UX design and strategic planning for over 35 years. He was the first member of Apple’s Human Interface group in the late '80s, and has since held key roles at several major tech companies. He served as Director of Product Design for Symbian in London, managed Mobile UX design at Google, and was Creative Director at frog design in San Francisco. He returned to Google to do UX research for Android and is now a UX strategist in the open-source community for Mastodon and Home Assistant.”
What an excellent talk, thank you. Most refreshing of all, it is about UX where the X stands for eXperience, rather than eXploitation.
Desktop is dead. Gamers will move to consoles and Valve-like platforms. The rest of productivity is done in a single browser window anyway. LLMs will accelerate this.
Coders are the only ones who should still be interested in desktop UX, but even in that segment many just need a terminal window.
Unpopular take: Windows 95 was the peak of Desktop UX.
GUI elements were easily distinguishable from content, and there was 100% consistency down to the last little detail (e.g. right-click always gave you a meaningful context menu). The innovations after that are tiny in comparison and more opinionated (things like macOS making the taskbar obsolete with the introduction of Exposé).
The problem is pushing a UX at users and enforcing that model when the user has changed it to something comfortable. You should be looking at what users are throwing away, and what they are replacing it with.
MS is a prime example: don't do what MS has been doing. Remember whose hardware it actually is, and remain aware that what a developer and a board room understand as improvement is not experienced the same way by average retail consumers.
This is a (very) rambling comment since I added things to it as I watched the video.
I think the state of the current Desktop UX is great. Maybe it's a local maximum we've reached, but I love it. I mostly use XFCE and there are just a few small things I'd like changed or fixed. Nothing that I even notice frequently.
I've used tiling window managers before and they were fine, but it was a bit of a hassle to get used to them. And I didn't feel they gave me something I couldn't do with a stacking window manager. I can arrange windows to the sides or corners of the monitor easily with the mouse or the keyboard. On XFCE holding down alt before moving a window lets me select any part of the window, not just the title bar, so it's just "hold down ALT, point somewhere inside the window and flick the window into a corner or a side with the mouse". If I really needed to view 10 windows at the same time, I'd consider a tiling window manager, but virtual desktops on XFCE are enough for me. I have a desktop for my mails, shopping, several for various browsers, several for work, for media, and so on. And I instantly go to the ones I want either with Meta+<number> (for example, Meta+3 for emails), or by scrolling with my middle mouse on the far right on my taskbar where I see a visual representation of my virtual desktops - just white outlines of the windows relative to the monitors.
Another thing I've noticed about desktop UX is that application UX seems to follow the trends of website UX, where everything is so dumbed down that even a drunken caveman who's never seen a computer can use it. Tools and options are hidden behind menus. Even the menus are hidden behind a hamburger icon. There's a lot of unnecessary white space everywhere. Sometimes there's even a linear progression through a set of steps, one step at a time, instead of having everything in view all the time - similar to how some registration forms work where you first enter your e-mail, then you click next to enter a password, then click next again, and so on. I always use "compact view" or "details view" where possible and hide thumbnails unless I need them. I wish more sites and apps were more like HN in design. If you're looking to convert (into money or into long-term users) as many people as possible, then it might make sense to target the technological toddlers, but then you might lose, or at least annoy, your power users.
At the beginning of the video I thought we'd likely only see foundational changes when we stop interacting with the computer mainly via monitors, keyboards, and mice. Maybe when we start plugging USB ports into our heads directly, or something like that. Just like I don't expect any foundational changes or improvements to static books, whether paper or PDF. Sure, interactive tutorials are fundamentally different in UX, but they're also a fundamentally different medium. But at 28:00, his example of a combination of window manager + file manager + clipboard made me rethink my position. I used clipboard visualizers long ago, but the integration between apps and being able to drag and otherwise interact with it would be really interesting.
Some more thoughts I jotted down while watching the video:
~~~~ 01:33 This UX of dragging files between windows is new to me. I just grab a file and ALT+TAB to wherever I want to drop it if I can't see it. I think this behavior, raising windows only on mouse-up, would annoy me. What if I have a split view of my file manager in one window, and another window above it? I want to drag a file from the left side of the split-view window to the right one, but the mouse-down won't be enough to show me the right side if the window above it covers it. Or what if, in the lower window, I want to drag the file into a folder that's also in the lower window, but obscured by the upper window? It may be a specific scenario, though.
~~~~ 05:15 I'd forgotten the "What's a computer?" ad. It really grinds my gears when people don't understand that mobile "devices" are computers. I've had non-techies look surprised when I mention it, usually in a sentence like "Well, smartphones are really just computers, so, of course, it should be possible to do X with them." It's such a basic category.
Similarly, I remember Apple not using the word "tablet" to describe their iPad years ago. Not sure if that has changed. Even many third-party online stores had a separate section for the iPad.
I guess it's good marketing to make people think your product is so unique and different from everything else. That's why many people refer to their iPhone as "my iPhone" instead of "my phone" or "my smartphone". People usually don't say "my Samsung" or "my $brand" for other brands, unless they want to specify it for clarity. Great marketing to make people do this.
~~~~ 24:50 I'm a bit surprised that someone acknowledges that the UX for typing and editing on mobile is awful. But I think that no matter how many improvements happen, using a keyboard will always be much, much faster and more pleasant. It's interesting to me that even programmers or other people who've used desktops professionally for years don't know basic things like SHIFT+left_arrow or SHIFT+right_arrow to select, or CTRL+left_arrow or CTRL+right_arrow to move between words, or combining them to select words - CTRL+SHIFT+left_arrow or CTRL+SHIFT+right_arrow. Or that they can hold the mouse button after double-clicking a word and drag to select several words. Watching them try to select some text in a normal app (such as HN's comment field or a standard notepad app) using only arrow keys without modifiers, or tapping backspace 30 times (not even holding it down), or trying to precisely hit the word boundary with the mouse... it's like watching someone right-click and then select "Paste" instead of pressing CTRL+V. I guess some users just don't learn. Maybe they don't care or are preoccupied with more important things, but it's weird to me. On the other hand, I never learned vi/vim or Emacs to the point where it would make me X times more productive. So maybe I look to someone well-versed in those tools the way those users look to me.
~~~~ Forgot the timestamp, it was near the end, but the projects Ink & Switch is working on seem interesting. Looking at their site now.
For desktops, basically, yes. And that's OK.
Take any other praxis that's reached the 'appliance' stage that you use in your daily life: washing machines, ovens, coffee makers, cars, smartphones, flip phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy" and then everything afterwards consists of edge-case features, personalization, or regulatory compliance.
Desktop Computers are no exception.
Golan Levin quotes Joy Mountford in his "TED Talk, 2009: Art that looks back at you":
>A lot of my work is about trying to get away from this. This is a photograph of the desktop of a student of mine. And when I say desktop, I don't just mean the actual desk where his mouse has worn away the surface of the desk. If you look carefully, you can even see a hint of the Apple menu, up here in the upper left, where the virtual world has literally punched through to the physical. So this is, as Joy Mountford once said, "The mouse is probably the narrowest straw you could try to suck all of human expression through." (Laughter)
https://flong.com/archive/texts/lectures/lecture_ted_09/inde...
https://en.wikipedia.org/wiki/Golan_Levin
You know, sometimes things just work. They get whittled away at until we end up with a very refined endpoint. Just look at cell phones. Black rectangles as far as the eye can see. For good reason. I'm not saying don't explore new avenues (foldables, etc.), but it's perfectly fine to settle into a metaphor that just works.
I don't want to see what any of today's companies would come up with to replace the desktop. Microsoft has tried a few times and they all sucked.
The computer form factor hasn’t changed since the mainframe: look at a screen for where to give input, select visual icons via a pointer, type text via keyboard into a text entry box, hit an action button, receive the result, repeat.
it’s just all gotten miniaturized
Humans have outright rejected all other possible computer form factors presented to them to date including:
Purely NLP with no screen
Head-worn augmented reality
Contact lenses
Head-worn virtual reality
Implanted touch sensors
etc.
Every other possible form factor gets shit on, on this website and in every other technology newspaper.
This is despite almost a century of attempts at all of these, with zero progress in sustained consumer penetration.
Had people liked those form factors, they would have invested in them early on, such that they would have developed the same way laptops, iPads, iPhones, and desktops have evolved.
However, nobody was interested at any kind of scale in the early days of AR, for example.
I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but they are seen as straight-up dogshit from the consumer perspective.
Like everything, it’s not a machine problem; it’s a human and societal problem.
For the same reason we don't reinvent the wheel. Or perhaps, the same reason we don't constantly change things like a vehicle. It works well, and introducing something new means a learning curve that 99% of folks won't want to deal with, so at that point, you are designing something new for the other 1% of folks willing to tackle it. Unless it's an amazing concept, it won't take off.
A) I'm not going to watch the video because it's hosted by goggle, and I'm not interested in being goggled.
B) However, even without watching the video, it must be describing corporate product UI, because in the free software world, there is a huge variety of selections for desktop (and phone) UI choices.
C) The big question I continue to come back to in HN comments: why does any technically astute person continue to run these monopolistic, and therefore beige, boring, bland, corporate UIs?
You can have free software with free choice, or you can have whatever goggle tells you...
Scrubbed the talk, saw “M$” in a slide, flipped the bozo bit
I understand the desire to fix user pain points. There are plenty to choose from. I think the problem is that most UI changes don't seem to fix any particular issue I have. They are just different, and when some changes do create even more problems, there's never any configuration to disable them. Designers are trying to create a perfect, coherent system for everyone, absent the ability to configure it to our liking. He even mentioned how unpopular making things configurable is in the UI community.
A perfect pain-point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even from the same developer. Google Messages doesn't allow any text selection finer-grained than an entire message. Some other apps have opted into a 'smart' text select which, when you select text, will guess and randomly group-select adjacent words. And lastly, some apps will only ever select a single word when you double-tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize that I can't select the word at all (Google Messages), or the system 'smartly' selected 4 words instead, or that it did what I wanted and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.