What if a Computer Was a Walk in the Woods? Why Should an AI Computer Be Like a Desktop Computer? And More on Mark Weiser.
Recent advances in AI have given rise to "power users" who push the boundaries of human-computer interaction. Moving beyond simple prompts, they develop personalized workflows that treat AI as a cognitive partner. Their practices hint at a future where we question what a computer even is.
At a recent informal tech meetup in Toronto on the topic of "Power using AI", it became clear that the challenge of using AI isn't just finding the right buttons to click and settings to configure; rather, traditional human-computer interaction itself has the potential to undergo a major change.
This event brought together practitioners who are pushing the boundaries of AI use. The session highlighted how Large Language Models (LLMs) may be drawing all of us, whether we know it or not, whether we like it or not, into the vision of ubiquitous computing that Mark Weiser articulated in the early 1990s.
These "Power AI" users weren't just typing questions into ChatGPT and copying answers. They were exploring and experimenting with entirely new paradigms of interaction with computers and agentic AI: not only how they used it, but what they told it about themselves, and how it changed the way they thought and worked. Their practices suggest that Weiser's vision of invisible computing might be realised through AI rather than hardware. [1]
Who is Mark Weiser?
Mark D. Weiser (July 23, 1952 – April 27, 1999) was an American computer scientist and chief technology officer (CTO) at Xerox PARC. Weiser is widely considered to be the father of ubiquitous computing, a term he coined in 1988. Within Silicon Valley, Weiser was broadly viewed as a visionary and computer pioneer, and his ideas have influenced many of the world's leading computer scientists. - Wikipedia
Weiser wrote two papers that are especially important to me: "The Computer for the 21st Century" (1991) [1] and "The World is Not a Desktop" (1993) [2]. In them, Weiser describes his vision of ubiquitous computing. Thanks to AI, and to these industrious users of it, that vision may finally be coming to reality...but with an AI twist he couldn't have anticipated.
I've written about Weiser previously, so I won't cover him in much more depth in this article; please do take a look at those earlier entries, though.
Why Should an AI Computer Look Like a Desktop Computer at All?
Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? - Mark Weiser, The World is Not a Desktop
Traditional computer interfaces grew out of the technical constraints of their time, not because they represented optimal human-computer interaction. Even now, mobile phones still echo the old pattern of (virtual) keyboards, monitors and pointing devices. If we reflexively carry these patterns forward, designing AI chatbots or digital assistants that behave like desktop applications, we limit the potential of these new technologies.
Just as Weiser recognised that computers don't need to mimic human assistants, we should ask whether AI needs to mimic conventional computers and application use. [1] The real opportunity lies in developing entirely new interaction patterns that fit seamlessly into human cognitive processes, rather than forcing users to think in terms of files, applications and clicks; or, as we've already seen with AI agents, simply connecting them together in simplistic chains and workflows, or handing them a computer with a mouse, keyboard and monitor to operate.
Computer Invisibility
Today's AI assistants may be beginning to fulfil Weiser's dream of a technology that "weaves itself into the fabric of everyday life until it is indistinguishable from it." [1] Rather than just physically embedding computers in our environment, AI makes them cognitively ambient.
Consider voice interfaces. While early versions represented what Weiser criticised as the "intelligent agent" model, modern AI voice interaction is moving towards what he called "invisible tools". [2] Instead of talking to an anthropomorphised agent, we're moving towards contextual voice commands that feel more natural–what Weiser described as tools that "don't intrude on your consciousness". [2]
That said, a computer you talk to might be invisible, but certainly still has cognitive weight. Where will we go from here?
An Agent in Every Teapot
One emerging pattern is the proliferation of specialised AI agents that handle discrete tasks. Instead of a single master AI assistant, we're seeing "agent cells": small, focused AI programs that live within, for example, spreadsheet cells, text documents and application components. These agents work together invisibly to help users achieve their goals without requiring explicit management.
This mirrors Weiser's vision of "hundreds of computers in every room" [1], but at a software level. Each agent handles a specific domain–checking calculations, suggesting edits, managing references–while coordinating through a shared context.
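The "agent cells" idea can be sketched in a few lines of code. This is a hypothetical illustration, not any particular product's architecture: each cell owns one narrow domain (checking calculations, suggesting edits) and contributes to a shared context only when it has something relevant to add, so no one has to manage the agents explicitly. All names here (`AgentCell`, `SharedContext`, the handler functions) are assumptions invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Hypothetical shared context the agent cells coordinate through."""
    facts: dict = field(default_factory=dict)

class AgentCell:
    """A small, single-domain agent that observes shared context."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # domain-specific logic for this cell

    def observe(self, ctx: SharedContext):
        # Each cell decides for itself whether it has anything to contribute.
        result = self.handler(ctx.facts)
        if result is not None:
            ctx.facts[self.name] = result

def check_sum(facts):
    # A "calculation checker" cell: only fires if spreadsheet data is present.
    cells = facts.get("spreadsheet_cells")
    return sum(cells) if cells else None

def suggest_edit(facts):
    # An "editor" cell: only fires on long document text.
    text = facts.get("document_text", "")
    return "consider a shorter opening" if len(text) > 80 else None

ctx = SharedContext()
ctx.facts["spreadsheet_cells"] = [2, 3, 5]
for cell in [AgentCell("calculator", check_sum), AgentCell("editor", suggest_edit)]:
    cell.observe(ctx)

print(ctx.facts["calculator"])  # 10; the editor cell stays silent here
```

The point of the design is that coordination happens through the shared context rather than through a master controller: the editor cell contributes nothing in this run because there is no document text, and nothing has to tell it so.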
Context is King
How should AI agents be implemented? How should they be used? One challenge that is emerging is context management. As AI becomes ambient, maintaining a coherent context across interactions becomes critical. Modern systems are exploring different approaches:
- Document linking: referencing documents with @ tags in new IDE tools to retrieve relevant context
- Workflow mapping: building directed graphs of user activity
- Active memory: preserving relevant context in dedicated UI spaces
- Dynamic prompting: adapting AI behaviour based on usage patterns
These echo Weiser's emphasis on computers that "know where they are" and adapt accordingly. [1] But rather than just physical location, modern systems track cognitive location: where the user's attention and intent are focused.
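Document linking, the first approach above, is simple enough to sketch. The following is a minimal, hypothetical illustration, loosely modelled on the @-tag referencing that IDE tools offer; the document store and tag syntax are assumptions made for the example:

```python
import re

# Stand-in for an indexed workspace of documents (hypothetical content).
DOCS = {
    "design_notes": "The agent coordinates via a shared context object.",
    "todo": "Refactor the context manager before Friday.",
}

def expand_at_tags(prompt: str, docs: dict) -> str:
    """Replace each @name reference with the named document's text,
    so the referenced material lands in the model's context window."""
    def lookup(match):
        name = match.group(1)
        return docs.get(name, f"[unknown document: {name}]")
    return re.sub(r"@(\w+)", lookup, prompt)

expanded = expand_at_tags("Summarise @design_notes and @todo", DOCS)
print(expanded)
```

A real system would of course do more (chunking, ranking, token budgeting), but the core move is the same: the user points at a document by name, and the tool quietly carries the relevant context along.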
Human Intuition and Creativity
Darkness at the break of noon / Shadows even the silver spoon / The handmade blade, the child's balloon / Eclipses both the sun and moon / To understand you know too soon / There is no sense in trying.
Try to sit down and write something like that. There is a magic to that. And it's not a Siegfried & Roy kind of magic, it's a different kind of penetrating magic. - Bob Dylan, 60 Minutes interview with Ed Bradley, 2004 [4]
"I find myself talking to the AI all the time, like a stream of consciousness," said one attendee at the Toronto LLM Power Users Meetup (I'm paraphrasing, but that's the gist). This comment points to a potential shift in human-computer interaction: away from the structured commands, organised interfaces and endless clicking of buttons that have defined computing for decades, towards something that more closely mirrors our natural stream-of-consciousness processes.
This pattern is reminiscent of Bob Dylan's descriptions of his creative process and the mysterious "muse" that seemed to channel creativity through him. Dylan spoke of his inability to write songs like "It's Alright, Ma (I'm Only Bleeding)" any more; he had simply "lost the ability to do it" [3]. The point is that at some point he had a muse, some kind of "magic" intuition that flowed out of him. It was there, and then it wasn't. But it was there.
Some people, including attendees at the event, believe that current AI tools provide a new kind of cognitive mirror that can reflect, reshape and crystallise our stream of consciousness, our muse, in real time, and help us harness that magic in potentially new ways. Frightening for some, amazing for others.
(With all this in mind, it should be noted that Weiser was against "magick" computers.)
Take magic. The idea, as near as I can tell, is to grant wishes: I wish I was the person I am now, but richer; I wish my boyfriend were smarter and more attractive; I wish my computer would only show me what I am interested in. But magic is about psychology and salesmanship, and I believe a dangerous model for good design and productive technology. The proof is in the details; magic ignores them. Furthermore, magic continues to glorify itself, as Robin Williams' attention-grabbing genie in Aladdin amply illustrates. - Mark Weiser, The World is not a Desktop
Conclusion
We're still in the early stages of this transition. Current AI systems often require too much attention and trial and error to be truly invisible. Context management is still fragile. Voice interfaces work best for simple commands rather than fluid interactions, and still retain some cognitive load. Multimodality is a work in progress.
But the pattern is clear–at least some of us are moving towards Weiser's vision of a technology that "helps overcome the problem of information overload" by blending into human environments rather than requiring us to adapt to machines. [1] The addition of AI accelerates this shift, making computers not only physically ubiquitous but cognitively invisible, or perhaps even a kind of strange partner with whom we can collaborate. There are many potential pitfalls, of course, but maybe, just maybe, we can find a new computing paradigm and escape the shackles of the desktop, laptop and smartphone.
References
[1] Weiser, M. (1991). "The Computer for the 21st Century." Scientific American, 265(3), 94-104. https://www.markw.org/papers/1991/computer.pdf
[2] Weiser, M. (1993). "The World is Not a Desktop." ACM Interactions. https://www.markw.org/papers/1993/desktop.pdf
[3] Taylor, T. (2023, November 29). "The song that Bob Dylan says was 'magically written'". Far Out Magazine. https://faroutmagazine.co.uk/the-song-that-bob-dylan-says-was-magically-written/
[4] 60 Minutes interview with Bob Dylan (Ed Bradley, CBS). https://www.youtube.com/watch?v=hOas0d-fFK8