The ‘desktop’ graphical user interface, historically known as the ‘GUI’ or the ‘desktop GUI’, remains every bit as foundational to today’s work as it was before the advent of the web, the smartphone, cloud computing, and the large language model. Easy to use and familiar to everyone, the desktop represents a comforting constant in our ever-evolving world of technology.
Complete with its icons, launch bar, and overlapping windows, the desktop’s GUI is so timeless and classic that even at 50 years old, it remains the centerpiece of every desktop operating system today—despite looking and operating much like it did in its original state. Whether your computer runs Windows, macOS, Chrome OS, or some flavor of Linux, it features the same classic desktop GUI, front and center.
Today, the desktop GUI is ubiquitous. If you’re a PC user, you use the Taskbar to bring windows into view; if you’re on a Mac, you use the Dock for the same. Your PC’s desktop icons and widgets look, feel, and act the same way they do on your friend’s Mac. Application windows are moved and resized in the same way, their title bars spanning the top, each with the same maximize and minimize buttons in its corners, no matter the system.
- The Launch Bar.
- The Desktop Icon.
- The Overlapping Window.
These are the three essential elements that comprise the computing interface used by over a billion people each day to carry out modern web-based work. And each predates the web, with roots dating as far back as the 1960s.
1960s : The Window and the Mouse
Born on December 9, 1968, at the site of San Francisco's Bill Graham Civic Auditorium.
The Joint Computer Conference was a summit held twice a year to share the latest advances in computer hardware, programming, and human-computer interaction. 1968’s Fall edition featured a technology demonstration that came to be known as the ‘Mother of all Demos’, because it introduced not one, but a handful of groundbreaking technologies that define how we still work today, 56 years later.
Doug Engelbart was a computer scientist, engineer, and inventor who was presenting years of progress he and his team at the Stanford Research Institute (SRI) had made on what they called a conceptual framework for augmenting human intellect.
The concept was that, by using computers, people could enhance the way they processed information, made decisions, and collaborated with others.
The framework was a set of tools that enabled a video display to be used as a communications device, and as a medium for storing and working with traditionally paper-based documents—but in digital form. It all represented uncharted territory: a mission guided by a vision of a futuristic, interactive display experience for the evolving computer.
The audience was a blend of visionary thinkers, tech enthusiasts, advocates, and practical engineers—all part of the broader counterculture movement of the time, united by a shared curiosity about the potential for the evolving computer…and for technology to change the world.
The Mother of all Demos came to do for technology what the following year’s ‘Woodstock’ festival did for music and the arts. Where Woodstock demonstrated the power of music as a unifying force that could transcend cultural boundaries, the Mother of all Demos showed the power of the networked computer as a force that could transcend physical ones. Both were seminal events that defined entire generations by embracing the era’s countercultural mindset of imagining what was possible.
The Mother of all Demos turned out to be an eerie prophecy of what a typical working session would come to look like. Mere concepts at the time, the technologies revealed by Engelbart that day in 1968 make up the most foundational elements of modern desktop computing today.
- The mouse.
- The application window.
- The hotkey.
- Word processing and Hypertext.
- The networked computer.
- The video call.
- Remote collaboration on the same document.
Engelbart drove the entire 90-minute demo with a five-finger keyboard and a curious new pointing device called a ‘mouse’, which flanked a traditional (QWERTY) keyboard. The mouse was used to select, highlight, move, and draw content on the display (like screen regions, text, and diagram lines). And dragging it around the right side of the ‘lapboard’ hovering over Engelbart’s lap was as intuitive as dragging a stapler across a desk. The five-finger keyboard on the other end of the lapboard was used for cutting and pasting text selected by the mouse, much like how Ctrl+C/Ctrl+V (or Command+C/Command+V) are used today.
Each technology was an innovation in its own right, and each complemented a broader computing experience that itself was entirely new, enabled by a computer display that could do so much more than any TV or monitor had done before it.
A standard TV set, modified to work as a computer monitor.
In 1968 the personal computer didn’t exist yet, much less the concept of a display for one. CRT technology was the dominant display technology of the time, used in both black-and-white and color televisions.
As the standard of that era’s home entertainment, the television set was the most viable option for a computer display. It had been used a few times prior for purposes other than television: once in 1958 by an American physicist who created one of the earliest video games, and again in 1962 by Ivan Sutherland (a.k.a. the pioneer of computer graphics), who created a program called ‘Sketchpad’ that could be used to draw objects directly onto a CRT display with a ‘light pen’.
But Engelbart’s display was different from the monitors that came before it, in that the content it displayed was graphical, more similar in nature to content that might be shown on a TV than to the green text typically shown on a computer monitor’s black background.
Engelbart’s modified TV was purpose-built for multitasking. And it was an improvement on the TV in that its screen could be split and segmented into regions, each of which could show separate content all at once, whether related or not.
In the context of computing, splitting the screen enabled two or more distinct tasks at once. In the context of TV-watching, split-screen didn’t yet exist—at least not outside of broadcast studios and a few motion pictures where it was used as a film and video technique.
By splitting the computer screen into rectangular regions, Engelbart’s team had found a solution for an age-old problem that would become the premise for every major functional improvement to the desktop that followed:
The ‘limited screen real estate’ of a computer’s monitor and the constrained nature of the workspace it provided, relative to that of a traditional desk.
1970s : Xerox PARC and the Desktop Metaphor
Within six months of Engelbart’s demo, Xerox began forming its legendary Palo Alto Research Center (PARC), where the ‘overlapping window’ and the ‘desktop icon’ joined the mouse to make up the ‘desktop metaphor’ that characterizes every desktop operating system today.
The ‘Window’
Engelbart’s concept of using a computer-powered display as a multitasking environment gave the computer a purpose at a time when its purpose was still up for debate. And the fact that he, alone, operated the whole experience showed the true potential of the computer as an extension of the individual; an augmentation that could store information on the user’s behalf; even a solution for some of office work’s biggest problems at the time, like having to re-type a document because of a typo or a change of mind about the order of its content. Much of it—certainly the demo itself—would not have been possible without the screen being segmented as it was.
Following the event, Howard Rheingold, who went on to become a pioneer of the concept of online communities (social networks), wrote in his review of the demo: “The screen could be divided into a number of windows.”
And with that, the window was born.
Soon after, a computer scientist named Alan Kay from the University of Utah expanded Engelbart’s construct of the window in his 1969 dissertation, describing a graphical ‘object-oriented system’ with ‘viewports’ and ‘windows’, or visual portholes onto a screen’s displayable area. Kay’s research laid the foundation for ‘Smalltalk’, the pioneering object-oriented programming language and the technology that enabled the graphical user interface.
By the early seventies, Kay had joined PARC, along with several members of Engelbart’s SRI team, forming a group of the most highly regarded computer scientists of the time; the mandate being to build the ‘office of the future’. PARC soon became a treasure trove of the most advanced concepts, methods, and apparatuses in computing. The pace and significance of PARC’s R&D remain unprecedented, producing numerous truly groundbreaking inventions, like Ethernet, object-oriented programming, the laser printer, and the bitmapped display—to name just a few.
The ‘Overlapping Window’
While PARC’s work extended beyond the desktop GUI, its advancements in computer graphics were some of its most significant. BitBlting (or ‘Bit Block Transfer’) was one of many novel software methods that we take for granted today. Together with ‘clipping’, BitBlting enabled the overlapping window. By copying rectangular blocks of screen pixels to and from memory, BitBlting made resizing and dragging windows appear smooth and responsive on the desktop. And when combined with ‘clipping’ (limiting drawing to the visible portion of an overlapping window, or of the windows beneath it), it ensured that only the visible parts of any window on the desktop were redrawn.
These very windowing techniques made it possible to move windows around freely on the desktop, stacking them atop one another, like paper on a desk. And doing so became technologically practical, as an endless number of windows could suddenly be open on the desktop without consuming any more graphical resources than what the few windows in view would require.
And with that, the ‘overlapping window’ was born.
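For the technically curious, here is a minimal, illustrative sketch of the two ideas just described: a bit-block transfer that copies a rectangular region of pixels, and a clipping step that limits redraw to the visible part of a window. The function names and data structures are hypothetical; PARC’s actual implementations were far more involved.

```python
# Illustrative sketch only: a "bit block transfer" that copies a rectangular
# block of pixels, and a clipping step that intersects a window with a "dirty"
# region so only the visible part gets redrawn. Framebuffers are modeled as
# simple lists of rows; all names here are hypothetical.

def bitblt(src, dst, src_x, src_y, width, height, dst_x, dst_y):
    """Copy a width x height block of pixels from src to dst."""
    for row in range(height):
        for col in range(width):
            dst[dst_y + row][dst_x + col] = src[src_y + row][src_x + col]

def clip(window_rect, dirty_rect):
    """Return the intersection of a window's rect and a dirty region.

    Rects are (x, y, w, h). The result is the only area that needs redrawing;
    None means the window is entirely covered or outside the dirty region.
    """
    x1 = max(window_rect[0], dirty_rect[0])
    y1 = max(window_rect[1], dirty_rect[1])
    x2 = min(window_rect[0] + window_rect[2], dirty_rect[0] + dirty_rect[2])
    y2 = min(window_rect[1] + window_rect[3], dirty_rect[1] + dirty_rect[3])
    if x2 <= x1 or y2 <= y1:
        return None  # nothing visible to redraw
    return (x1, y1, x2 - x1, y2 - y1)
```

The design payoff is the one described above: moving a window only requires copying its pixel block and redrawing the clipped slivers it uncovers, rather than repainting the entire screen.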
The ability to overlap windows was a big step forward for the GUI in that it provided a second option (alongside Engelbart’s tiled window construct) for displaying numerous windows within the confines of the monitor’s limited screen space. And it completed the desktop metaphor, representing the digital equivalent of a sheet of paper that could be moved around one’s desk and stacked atop others.
The ‘Alto’
The first wave of PARC’s research culminated in 1973 with the production of the ‘Alto.’
Though it was never commercialized, the Alto embodied the most cutting-edge technology ever developed for the desktop. It was an innovation in software and hardware, having a desktop GUI and a display that was the ultimate complement to it, with an 8.5 x 11-inch, portrait-oriented form factor that nodded to the printed documents of 1970s office work.
The Alto was the first stand-alone computer built for a single user, the first to feature a high-performance, high-resolution display, and the first to have a graphical user interface.
As the first computer to even remotely look or feel like the real-world desk experience, the Alto was a sensation. Complete with overlapping windows and a mouse, the Alto served as Xerox PARC’s proof of concept for the ‘office of the future’ heading into the 1980s.
1980s : The Bundled System
The formative years of the 'GUI', the 'desktop workstation', and the 'digital office system'.
The ‘Star’
At 1981’s National Computer Conference, held at Chicago’s McCormick Place, Xerox introduced its first commercial desktop workstation, the ‘Star’. Formally known as the ‘Xerox 8010 Information System’, the Star showcased PARC’s progress on the desktop metaphor, its underlying Smalltalk programming language, and ‘Viewpoint’: the graphical environment that became generally known as the ‘graphical user interface’ and, metaphorically, as ‘the desktop’.
The Star had an immediate, indisputable influence on the design of modern computer systems. It was the first commercially available computer to feature a graphical user interface and the first to ‘integrate’ the essential elements of the digital workstation into a single user-friendly package; advertised as an integrated computer system and consisting of a monitor, a keyboard, and a mouse—all complementary to the GUI that was central to the whole experience.
Its desktop GUI was much evolved from the Alto’s, featuring visually detailed ‘desktop icons’ that represented single or grouped tasks. It introduced the ‘popup menu,’ which gave users more control over the GUI and its programs and file folders. And it established the first ever (UX) ‘design system’, by giving different windows and icons a uniform look and feel and standardizing controls throughout the interface.
Soon after its release, the Star inspired an industry-wide convergence on the desktop graphical user interface as the PC’s single dominant user interface.
The IBM ‘PC’
A few months following the Star’s release, IBM introduced its ‘IBM PC’— short for ‘personal computer’. PC was a term IBM popularized to communicate the then-nascent concept of a ‘microcomputer’ built for desktops and intended for a single user (as opposed to its ‘mainframe’ computers, which were built for the floor and used by many).
The IBM PC represented IBM’s entry into the rapidly evolving personal computer market. It quickly became the industry-standard personal computer, thanks to its open architecture, robust software ecosystem, and a 16-bit processor better able to run more than one program at once (compared to earlier successes like the Apple II, with its 8-bit processor). IBM’s PC also featured Microsoft’s ‘MS-DOS’ operating system, establishing it and its ‘BASIC’ programming environment as the standards that nearly every application developer knew and used, and making Microsoft’s brand synonymous with the term ‘PC’.
While IBM’s PC was a huge commercial success, selling over 750,000 units by 1983, it represented something of a last breath of the text-based user interface of personal computers before the GUI.
The Apple ‘Lisa’
1983’s Apple ‘Lisa’ was the next system after the Star to popularize PARC’s graphical user interface and desktop metaphor. The Lisa’s desktop icons could be dragged and dropped (onto another spot on the desktop, say, or into a wastebasket). And it added the now-ubiquitous ‘maximize’ button to its application windows, enabling users to focus on single tasks without the distraction of other open windows.
Priced at $9,995.00, the Lisa targeted the midmarket between the exorbitantly priced Xerox Star ($16,595.00) and IBM’s affordable, market-dominant ‘IBM PC’ (~$3,000.00 base price, before add-ons).
Meanwhile, other industry participants pushed the digital workstation forward. Sun Microsystems created high-performance workstations that had faster processors, more memory, and more storage—all while featuring the latest networking capabilities.
Apple, Commodore, Lisp, and Microsoft added their own renditions of Star’s desktop GUI, applying their own theming to the desktop’s overlapping windows, icons, and popup menus.
By 1984, the Lisa had evolved into the ‘Macintosh’. The ‘Mac’ carried on many of the software and hardware qualities of the Star, but at the accessible price of $2,495.00. The Mac became the next integrated (hardware + software) solution to come after the Star, adopting Xerox’s closed (proprietary) business model and differentiating itself from the open systems of competitors like IBM and Sun.
The Commodore ‘Amiga’
By 1985 PC ownership had grown 30-fold from the decade’s start. Systems like Commodore’s Amiga 1000 pushed graphics forward with custom chipsets and drivers. Its keyboard driver could handle key events on behalf of its OS, enabling sophisticated keyboard shortcuts and custom key mapping: capabilities that were far ahead of their time, yet became a fixture of today’s keyboard-driven desktop experience.
Windows 1.0 and ‘Alt+Tab’
1985 brought the inaugural release of the ‘Windows’ operating system. ‘Windows 1.0’ expanded on the Amiga’s keyboard-driven navigation with ‘Alt+Tab.’ Alt+Tab was particularly clever in that it leveraged the user’s muscle memory of the keyboard to enable ‘window switching’ (a new concept at the time).
In 1992, ‘Windows 3.1’ evolved Alt+Tab by adding each window’s icon alongside its name and moving the feedback from the bottom-left to the center of the screen.
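To make the switching model concrete, here is a minimal sketch of the idea behind Alt+Tab: open windows are kept in most-recently-used order, and each additional press of Tab (while Alt is held) reaches further down the list. The class and names are hypothetical, intended only as an illustration, not as how Windows actually implements it.

```python
# A toy most-recently-used (MRU) window switcher, in the spirit of Alt+Tab.
# Index 0 of the list is always the currently active window.

class WindowSwitcher:
    def __init__(self, windows):
        self.mru = list(windows)

    def activate(self, window):
        """Bring a window to the front and move it to the head of the MRU list."""
        self.mru.remove(window)
        self.mru.insert(0, window)

    def alt_tab(self, presses=1):
        """Select the Nth most-recently-used window and activate it."""
        target = self.mru[presses % len(self.mru)]
        self.activate(target)
        return target

switcher = WindowSwitcher(["Editor", "Browser", "Mail"])
print(switcher.alt_tab())      # one press returns to "Browser", the previous window
print(switcher.alt_tab(2))     # two presses reach further back, to "Mail"
```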
By the decade’s end, the once-brilliant Xerox Star had flickered out under the weight of market forces brought on by the successors it inspired. Despite its limited commercial success, the ideas and technologies the Star introduced scattered across the tech landscape like the remnants of a supernova, finding new life in the hands of its designers and engineers, who moved on to other companies, carried its torch forward, and seeded innovation within developing companies that were much better positioned to lead the commercial landscape of the 90s, like Apple and Microsoft.
1990s : The Taskbar and the Dock
In the 1990s the desktop gained the 'launch bar' and the 'overview', its next two fundamental advancements, and its last to date.
In 1992, Microsoft began developing a hybrid 16/32-bit system codenamed ‘Chicago’ that would showcase the significant progress it had made toward integrated networking and file management (with Windows NT) and toward 3D graphics (with DirectX). DirectX provided a standardized set of APIs (Application Programming Interfaces) that allowed developers to access low-level hardware functions (such as graphics, sound, and input) directly from Windows, transforming Windows into a serious gaming platform and leaving a lasting impact on the development of PC hardware and the gaming industry as a whole.
Three years later, the system was released as ‘Windows 95’. Complete with the newly minted internet browser and the biggest update to the Office tools in years, Windows 95 was an instant success. Within just two years of its release, it had captured over half of the PC operating system market, and by the end of the decade it had helped drive a six-fold increase in PC ownership. To this day, Windows 95 is considered one of the most successful operating system releases of all time, if not the most successful.
Aside from providing access to the internet, updated productivity tools, and advancing gaming, Windows 95 contributed to the evolution of the desktop GUI with its ‘Taskbar’, which featured an all-new ‘Start Menu’ that slid out to serve up app icons on a platter of sorts. But in the construct of the GUI, the Taskbar’s crowning achievement wasn’t its Start Menu. It was its ‘Taskbar buttons’ that enabled users to summon minimized or overlapping windows to the top of the desktop with a single click.
Years later, Apple’s keystone ‘Mac OS X’ release featured a ‘Dock’ that was positioned in the same location and served the same purpose as the Windows Taskbar. And where Windows employed ‘Alt+Tab’ to summon an overview of open windows in the form of thumbnail images, Mac OS X followed suit with ‘Command+Tab’, to access a similarly-arranged desktop overview.
By the turn of the millennium, the construct of a launch bar spanning the bottom edge of the desktop and a keystroke-driven desktop overview had officially joined the ranks of the ‘overlapping window’ and the ‘desktop icon’ as ubiquitous elements of the desktop.
Early 2000s : On the Verge of Progress
Exploring what was next for the desktop. Research converged on the concept of 'task-centric' computing as a possible next step, before focus shifted to new frontiers altogether with the emergence of mobile and cloud computing.
The early 2000s represented the last concerted push toward progressing the desktop workstation, with significant research and development conducted into screen management and information (task) management, in an effort to streamline work involving an ever-growing amount of information.
Screen Management
Much of the research into screen management aimed to quantify the benefits of using more than one monitor, or even of using prototype large-format displays that bore a closer resemblance to office desks. Researchers extended existing methods like PARC’s Keystroke-Level Model (KLM) in an effort to measure time-on-task across various windowing strategies, display arrangements, and ergonomic environments. And theoretical concepts explored all-new input modalities for window management, like the gesture-based system John Underkoffler imagined for Minority Report.
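As a rough illustration of how a KLM estimate works, the sketch below sums the commonly cited operator times from Card, Moran, and Newell for a hypothetical window-switching task. The task, its operator sequence, and the function are assumptions made for illustration; real KLM analyses are much more careful about where the mental-preparation (M) operators are placed.

```python
# Commonly cited KLM operator times (seconds), per Card, Moran, and Newell.
KLM_TIMES = {
    "K": 0.28,   # one keystroke (average typist)
    "P": 1.10,   # point with the mouse to a target on screen
    "H": 0.40,   # "home" the hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(operators):
    """Sum the standard operator times for a sequence like 'M H P K'."""
    return sum(KLM_TIMES[op] for op in operators.split())

# Hypothetical task: switch to another window via the Taskbar, then type a 5-character word.
switch_and_type = "M H P K " + "H " + "K K K K K"
print(f"Estimated time on task: {klm_estimate(switch_and_type):.2f} s")
```

Estimates like this are what let researchers compare windowing strategies and display arrangements on a common, time-on-task footing.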
Task Management
Much of the research of that time drew a clear distinction between application-based and task-based navigation of the user experience. The concepts of ‘task-centric computing’ and ‘application-centric computing’ took hold, as much of the research proposed a user experience centered on tasks or activities as a silver bullet: one that could reduce cognitive load and streamline the intensive window management required by application-centric systems like Windows and macOS as the digital information landscape evolved.
Upwards of fifty different proprietary systems were developed through research into the concept of task/activity/process-centric computing (TAP-centric, for the sake of simplicity). Among them, common features included grouping windows and documents by task, and displaying visual cues for a window’s last activity—all in the name of improving the organization and display of an ever-growing volume of information.
The philosophy was that instead of centering navigation on individual apps/files/documents/tabs, navigation could be centered on groups of windows contained in separate ‘desktops’ or ‘workspaces’. The central idea was that navigating between collections of resources (rather than between individual resources) would drastically reduce the amount of time and cognition spent on managing information.
And the reasoning was sound. Computing was, by nature, objective-based, not ‘application-based’. Launching a program or logging into some single web app was never the end goal; it was only a means to the end goal, which was always to complete some routine task or more extensive business process. Research at the time found that completing a single task required the simultaneous use of seven different applications, on average. Taken together, the research highlighted the multitasking demands and cognitive load placed on users in digital work environments, emphasizing the need for streamlined environments to support efficient task completion.
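To make the navigational model concrete, here is a minimal sketch of what a task-centric grouping might look like as a data structure: windows grouped into named workspaces, with a single navigation step restoring the whole group. The classes and names are hypothetical, intended only to illustrate the concept the research proposed.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    windows: list = field(default_factory=list)   # apps, documents, browser tabs, etc.

class TaskCentricDesktop:
    def __init__(self):
        self.workspaces = {}
        self.active = None

    def add(self, task_name, window):
        """File a window under the task it belongs to."""
        ws = self.workspaces.setdefault(task_name, Workspace(task_name))
        ws.windows.append(window)

    def switch_to(self, task_name):
        """One navigation step restores the whole group of windows for a task."""
        self.active = self.workspaces[task_name]
        return self.active.windows

desktop = TaskCentricDesktop()
desktop.add("Quarterly report", "Spreadsheet: Q3 figures")
desktop.add("Quarterly report", "Doc: report draft")
desktop.add("Quarterly report", "Browser: competitor filings")
desktop.add("Hiring", "Browser: candidate profiles")
print(desktop.switch_to("Quarterly report"))   # all three windows come back at once
```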
Looking back, two common threads underlay much of the research findings: task-based computing, and more screen space. And yet, the finished commercial operating systems that followed featured concepts that were shinier, but less practical, than reorganizing the navigational experience or finding a new way to expand the desktop.
Animated icons were intended to make the desktop feel more dynamic. And animated windows made clicking on an app in the Dock feel like opening a bag of goodies.
Linux’s ‘Compiz’ window manager featured a fascinating desktop cube that presented the desktop in a rotatable 3D form. But like many of today’s spatial computing concepts, it was too big of a leap (and too big of an ask) for users whose muscle memory was long ingrained in the 2D interface, going back to the years of flat, paper-based office work carried out on traditional office desks.
Windows XP introduced a more polished, intuitive, and visually appealing user interface, featuring a distinctive green Start button and blue Taskbar. It made navigation easier, and even explored the potential of semi-transparent windows as a way to make overlapping windows more intuitive.
Window Snapping & Screen Management
One new feature that was widely adopted was the ability to ‘snap’ windows to the edge of the screen to split the desktop into halves, thirds, or quarters. Windows 7 popularized window snapping, following up on earlier implementations from tiling window managers like ‘xmonad’ (April 2007), ‘Awesome’ (September 2007), and ‘i3’ (March 2009).
The ability to snap windows to split the screen into a form that resembled a cut-up sheet of brownies was, in essence, a modernized version of Engelbart’s tiled windows. For good reason, it became a ubiquitous feature that is still well in place today across systems and workflows.
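A small sketch of the geometry behind window snapping: given the screen dimensions and a named snap zone, compute the rectangle the window should occupy. The zone names and function are hypothetical; real implementations also account for taskbars, gaps between windows, and display scaling.

```python
def snap_rect(screen_w, screen_h, zone):
    """Return (x, y, width, height) for a handful of common snap zones."""
    half_w, half_h = screen_w // 2, screen_h // 2
    zones = {
        "left-half":     (0,      0,      half_w, screen_h),
        "right-half":    (half_w, 0,      half_w, screen_h),
        "top-left":      (0,      0,      half_w, half_h),
        "top-right":     (half_w, 0,      half_w, half_h),
        "bottom-left":   (0,      half_h, half_w, half_h),
        "bottom-right":  (half_w, half_h, half_w, half_h),
    }
    return zones[zone]

print(snap_rect(1920, 1080, "left-half"))   # (0, 0, 960, 1080)
print(snap_rect(1920, 1080, "top-right"))   # (960, 0, 960, 540)
```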
While the pace had clearly slowed since the early days, it still represented progress on the desktop GUI.
But then, sometime around 2009, everything changed, as the industry’s attention shifted from the workstation to something new and exciting: the emergence of mobile computing.
The Age of Apps
The iPhone had crossed the chasm. Incumbents, and not just Apple, converged on the potential of its ‘Home’ screen and the opportunity for its ‘App Store’ to become the digital equivalent of the Swiss Army knife. Soon there was a separate app for every task, activity, and flavor of coffee one could imagine. Then there were 5, 10, 20 apps for each. By 2012, over 500,000 ‘apps’ were available via the iPhone, and Apple was offering one lucky customer the chance to win a $10,000 App Store gift card by making the 25 billionth app download.
Google’s Android Market followed closely with more than ten billion downloads. The act of using separate apps specialized for each task or activity eventually made its way onto the desktop through the web browser, where web apps had already gained traction with the rise of Gmail, Google Maps, and social media platforms.
In 2013, Microsoft released Office 365, with web-based versions of the Office apps, as an alternative to downloadable versions. Other apps followed suit, the common vision being of the web browser as a gateway to millions of single-purpose apps and services.
In no time, users were tasked with managing duplicate versions of single apps, along with a new abundance of windows and tabs introduced by the web browser. Decades of progress toward advancing productivity at the desktop had gotten off course, and just like that, through no fault of its own, the desktop became ‘application-centric’.
The State of the Art Since 2010
By 2010 innovation on the desktop GUI had run its course, with industry focus turned toward newer innovations like mobile computing, cloud computing, artificial intelligence, and the re-emergence of spatial computing.
Sometime after the release of Windows 7, the desktop seemed to have reached the moment Alan Kay referred to years earlier when he famously asked, “What will Silicon Valley do once it runs out of Doug’s (Engelbart’s) ideas?”
Today, that question can be answered by recounting the last fifteen years of the desktop. As it turned out, the industry ran out of ideas for the desktop, lost interest in it, or some combination of the two.
In the time since, major releases have featured changes more cosmetic than functional in nature. Home screens and pop-up menus got reorganized. Design systems got polished, a few times over (e.g., the look and feel of GUI controls and ‘window chrome’). Windows and controls look much better today than they did back then, but function as they always have.
Featured Applications
Today, new applications—some revolutionary in their own right—are presented as defining features of major operating system releases, featured as central elements of the system and embedded in the Taskbar, or even in the desktop itself.
‘Snap Groups’ on Windows 11 extended the functionality of window snapping by enabling users to open apps in clustered, brownie-sheet layouts from the Taskbar.
Apple made big strides in ‘continuity’, extending macOS’s design system and some of its elements (like widgets) for use across ecosystem devices like iPads, iPhones, and even corded VR goggles.
Immersive Spatial Computing
In recent years, augmented reality (AR), mixed reality (MR), and virtual reality (VR) have introduced us to all-new ways of experiencing the classic desktop.
Today’s glasses and headsets feature the biggest advancements in control and display technology in decades, if not ever (at least in the context of personal computing). The ability to focus on an icon or window just by gazing at it (then to select it with the pinch of a finger) is a truly impressive technological feat, but also bittersweet, in that it would all seem so much more natural had the mouse not existed first.
In theory, the idea of a purely natural, gesture-based computing experience (popularly known as ‘spatial computing’) is fascinating. The concept itself has been rebranded many times over, from the science fiction movies that got us thinking about how physical controls and displays might be replaced someday, to the cutting-edge headset products we have today.
While the immersive, gesture-based computing experience works well with applications that are inherently 3D (e.g., with modeling software), users are no closer to adopting it as a compelling alternative for two-dimensional work than they were when VR headsets first came on the scene years ago.
Emerging research is beginning to shed light on why users haven’t yet adopted AR or VR for work, showing that users perform significantly better with a standard desktop setup (a real monitor controlled with a tactile mouse and keyboard) than with any immersive/3D implementation of it (glasses or headsets controlled with non-tactile wands and gestures).
The Latest Findings, from 2024’s ACM Conference on Human Factors in Computing Systems.
This year’s SIGCHI conference (hosted by the ACM’s Special Interest Group on Computer-Human Interaction) featured various studies validating earlier findings that users reacted significantly faster (and fixated on objects for less time) on the classic desktop than on the virtual/3D one.
Interestingly, one study compared the effects of the 3D immersive desktop and the classic 2D desktop across three different fields of view, finding that users took twice as long to complete the overall task in the immersive VR environment as with a real monitor and mouse.
Specifically, users completed tasks significantly faster and with fewer errors on the standard mouse-driven desktop setup. Users took significantly longer to recall information (from human memory) while in the immersive experience of the head-mounted display (HMD). They also fixated (or ‘dwelled’) on objects for a significantly longer duration with the HMD, disrupting task flow and causing significantly longer selection times (Kargut, Gutwin, and Cockburn 2024).
Other research presented at the same conference found more of the same. One study found that desktop-based environments often required less cognitive effort than VR-based ones, offering a simpler, more focused interface that was beneficial for tasks requiring precision and minimal distractions (F. Steinicke 2024).
Another study found that AR (glasses-based) environments performed no better (and sometimes worse) than VR ones. It found that participants in AR environments reported being more affected by real-world distractions, while VR environments offered a more focused, enclosed experience, leading to different user engagement levels (Yan 2024).
A different study offered a potential solution for the performance problem in immersive environments. Ironically, it proposed a way for users to quickly ‘peek’ at the traditional 2D desktop view without fully exiting their VR environment. It concluded that peeking at the real desktop while immersed in VR was, in fact, useful in professional or multi-tasking scenarios, where quick access to non-immersive (2D) content (like documents or emails) was necessary while continuing to operate in a VR space (Wentzel, et al., 2024).
As fascinating as the idea is, using head-worn devices and natural gesturing for inherently two-dimensional work is not a compelling alternative to what we have today with the classic desktop.
Those users interested in exploring head-mounted devices as an alternative to the existing desktop experience will first have to overcome the performance problems highlighted by the emerging research, or accept them as a cost of day-to-day work, where the stakes are high to begin with. Then, new hardware must be purchased, the look and feel of a whole new operating system must be learned and adopted, and a headset (at worst) or a pair of glasses (at best) must be worn.
Taking into account the SIGCHI research presented in May, and the lack of adoption of various new, cutting-edge control methods, today we can say with near certainty that the mouse, the keyboard, and the two-dimensional desktop still reign supreme at the desktop.