Turn your laptops into extra monitors for your Linux desktop — and get remote control of each machine from a single keyboard and mouse. No special hardware. No proprietary software. Just your existing computers.
The insight: NVIDIA GPUs expose disconnected DisplayPort outputs that can be force-enabled via Xorg config. Chain that with x0vncserver clipping, WebSocket proxy, and a browser in kiosk mode — and a MacBook becomes your second monitor.
Zero custom rendering. The entire system is an orchestration layer over existing open-source tools. Knowing which tools to chain together, and how — that was the actual engineering.
Click the ScreenLink icon in the taskbar, then press ▶ to connect
Multi-monitor setups are great until you travel, work from home with different machines, or simply don't want to buy dedicated monitors. Most people already own a laptop or two collecting dust on the desk. ScreenLink makes them useful.
Existing solutions either cost money (Duet Display, Luna Display), require specific hardware (DisplayLink adapters), only work within one OS ecosystem (Sidecar for Mac-only, Miracast for Windows-only), or are clunky research projects that never quite work. Nothing existed that was free, cross-platform, and actually let you extend a Linux desktop to both macOS and Windows machines simultaneously.
The architecture is deceptively simple — it chains together proven open-source tools rather than reinventing screen capture and streaming from scratch.
One of the core insights came from a 10-year-old forum post: NVIDIA GPUs expose disconnected DisplayPort
outputs that can be force-enabled via nvidia-settings. By setting a
ConnectedMonitor option in the Xorg config and using ModeValidation
to bypass EDID checks, Linux treats a phantom DP output as a real monitor.
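A minimal sketch of the relevant Xorg config, assuming an unused DP-2 connector: ConnectedMonitor and ModeValidation are real NVIDIA driver options, but the connector name and validation tokens here are placeholders for whatever your own setup needs.

```
Section "Device"
    Identifier "nvidia-gpu"
    Driver     "nvidia"
    # Pretend a monitor is attached to DP-2 even though nothing is plugged in.
    Option "ConnectedMonitor" "DP-2"
    # Relax the EDID-based checks that would otherwise reject the phantom output.
    Option "ModeValidation" "DP-2: NoEdidModes, AllowNonEdidModes, NoMaxPClkCheck"
EndSection
```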
The content of this virtual display is captured by x0vncserver (from TigerVNC),
which serves only the clipped region corresponding to the virtual monitor. A noVNC WebSocket
proxy bridges this to the browser, and the client machine simply opens a full-screen browser
tab pointing to the noVNC endpoint over HTTPS.
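The capture chain can be sketched as two commands plus a client URL. The geometry, ports, and paths below are assumptions for a 2560x1440 primary screen with a 1440x900 virtual monitor to its right, not ScreenLink's actual values:

```shell
# Serve only the virtual monitor's slice of the X screen (TigerVNC).
x0vncserver -display :0 -Geometry 1440x900+2560+0 -rfbport 5901 \
            -PasswordFile ~/.vnc/passwd

# Bridge the VNC stream to WebSockets and serve the noVNC pages.
websockify --web /usr/share/novnc 6080 localhost:5901

# On the laptop, open the endpoint in a full-screen browser tab, e.g.
#   https://linux-host:6080/vnc.html?autoconnect=true
```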
The result: the MacBook's browser becomes a second monitor. Drag a terminal window to the right edge of your Linux screen and it appears on the Mac. Fullscreen a video there and it plays.
When you click "Remote Desktop" in the control widget, the system flips the direction. The Mac's browser closes, and a new browser instance launches on the Linux machine's virtual display. This browser connects to the Mac's built-in VNC server through another noVNC proxy. Since this browser window lives on the virtual display, the Mac now displays its own desktop through the VNC chain. The Linux keyboard and mouse drive the VNC session.
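The flip can be sketched as a second proxy plus a kiosk browser parked on the virtual display's region of the X screen. Hostnames, ports, and window coordinates here are assumptions:

```shell
# Bridge the Mac's built-in VNC server (Screen Sharing listens on 5900).
websockify --web /usr/share/novnc 6081 macbook.local:5900 &

# Launch a kiosk browser positioned over the virtual monitor's region,
# so the Mac ends up showing its own desktop through the VNC chain.
chromium --kiosk --window-position=2560,0 --window-size=1440,900 \
         "http://localhost:6081/vnc.html?autoconnect=true"
```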
Switching back is instant: the remote browser closes, the Mac's extended-screen browser reopens, and you're back to using the Mac as a monitor.
Creating a virtual monitor on Linux sounds like it should be simple — it isn't. It took six attempts to get it working. The first attempt loaded a network dummy module. The second replaced the GPU driver and caused a black screen. The third created dead space the compositor ignored. The fourth and fifth failed against the NVIDIA proprietary driver's quirks.
The sixth attempt — combining NVIDIA's ConnectedMonitor option with
ModeValidation to bypass EDID checks — finally worked. After a re-login,
KDE recognized it as a real second monitor. Windows could be dragged there. It just worked.
The final system has zero custom screen capture code, zero custom streaming protocols, and zero custom rendering. It's entirely an orchestration layer over existing tools. But knowing which tools to chain together, and how to make them cooperate — that was the actual engineering.
Docxology is a documentation app born as an artefact of ScreenLink. While reading Xorg documentation that is not only hard to comprehend but also delivered in the worst possible formats, I decided to build something better.
The idea: ask Claude Code to parse the documentation and turn it into an easy-to-read, searchable format with pinnable articles.
While building ScreenLink, I needed to understand Xorg's internals — virtual displays, GPU driver configuration, display protocols. The documentation exists, but it's scattered across man pages, mailing list archives, and decade-old wiki pages in formats that make your eyes bleed.
I spent more time finding the right docs than reading them. That frustration became the seed for Docxology.
Instead of building a documentation platform from scratch, I used Claude Code and Codex to parse existing documentation sources — man pages, HTML docs, plain text specs — and transform them into a clean, searchable, readable format.
Python parsers handle the ingestion. The frontend is pure JavaScript — fast search, pinnable articles, and a reading experience that doesn't feel like it was designed in 1998.
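A toy sketch of the ingestion step, under the assumption that each document becomes a Markdown file plus an entry in a manifest. The function and field names are my own illustration, not Docxology's actual API:

```python
import json

def manpage_to_markdown(text: str) -> str:
    """Convert the ALL-CAPS section headers of a man page into Markdown headings."""
    lines = []
    for line in text.splitlines():
        # Man pages mark sections with unindented all-caps lines.
        if line and not line.startswith(" ") and line.isupper():
            lines.append(f"## {line.title()}")
        else:
            lines.append(line.strip())
    return "\n".join(lines)

def manifest_entry(slug: str, title: str, category: str) -> dict:
    """One manifest record pointing at a generated Markdown file."""
    return {"slug": slug, "title": title, "category": category,
            "path": f"docs/{slug}.md"}

raw = "NAME\n    xorg.conf - configuration file\nDESCRIPTION\n    Xorg uses this file."
print(manpage_to_markdown(raw).splitlines()[0])   # "## Name"
print(json.dumps(manifest_entry("xorg-conf", "xorg.conf", "configuration")))
```

The frontend then only ever reads the manifest and the Markdown files, which keeps the ingestion and reading sides fully decoupled.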
Docxology started as a tool for myself, but it proved so helpful that I figured I might as well release it for the next person. As I grew to enjoy the tool, I archived the X.org version and simplified the data format, which is just Markdown plus a manifest file, so other documentation sources can be parsed into it. And why stop there? If I can read the documents, I want to edit them; and if I can edit them, I want to create them. As a developer I don't enjoy taking my hands off the keyboard for mouse clicks; Confluence really pushes my buttons. I enjoy writing Markdown, but not reading it. Hence, Docxology.
This is not a note-taking app and not a full CMS. It is a focused documentation workspace for technical projects that want lightweight structure and a polished reading experience.
I'll finally reveal my last trick for this endeavour: the Xorg docs. Maybe you'll notice this project's artefact opening that link.
When I had finished Docxology, a thought came to mind: "Wouldn't it be extremely helpful to have an AI assistant sitting on top of this specific data, ready to give explanations, examples, or context?" The answer was yes, it would be very helpful. It started with data harvesting, cleaning, and annotation.
Method: harvest, clean, and annotate the data, then feed it to a knowledge base in Bedrock. But that's just how it starts.
Docxology solved the reading problem. But I kept catching myself searching for the same concepts, cross-referencing between pages, trying to piece together how different Xorg options interact. What I really wanted was to ask the documentation questions.
So I built an AI assistant that knows Xorg inside and out — not from a generic training corpus, but from the actual documentation I'd already curated in Docxology.
The first step was harvesting. I extracted every document from Docxology's structured format — clean markdown with metadata, categories, and cross-references. Then came cleaning: stripping formatting artifacts, normalizing code blocks, splitting long documents into semantically meaningful chunks.
Annotation was the hardest part. Each chunk needed context: which section it belonged to, what concepts it covered, what related topics existed. This metadata is what makes the difference between a chatbot that regurgitates text and one that actually understands the relationships between configuration options.
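The chunking and annotation steps might look roughly like this: a sketch that splits Markdown on headings and attaches the section metadata described above. The field names are illustrative, not the real pipeline's schema:

```python
def chunk_markdown(doc_id: str, text: str) -> list[dict]:
    """Split a Markdown document into heading-delimited chunks with metadata."""
    chunks, section, buf = [], "Intro", []

    def flush():
        if buf:
            chunks.append({
                "doc_id": doc_id,
                "section": section,              # which section the chunk belongs to
                "text": "\n".join(buf).strip(),
            })

    for line in text.splitlines():
        if line.startswith("#"):
            flush()                              # close the previous section
            section, buf = line.lstrip("# ").strip(), []
        else:
            buf.append(line)
    flush()
    return chunks

doc = "# ModeValidation\nBypasses EDID checks.\n# ConnectedMonitor\nForces an output on."
for c in chunk_markdown("xorg-conf", doc):
    print(c["section"], "->", c["text"])
```

In the real pipeline each chunk would also carry concept tags and cross-references, which is exactly the metadata that lets retrieval distinguish related options instead of treating the corpus as one undifferentiated blob.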
The annotated data feeds into an AWS Bedrock Knowledge Base. When you ask a question, the system retrieves the most relevant document chunks, assembles them as context, and generates an answer with proper source citations. You're never getting hallucinated config options — every answer traces back to real documentation.
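Bedrock handles retrieval for me, but the core idea can be sketched locally: score the chunks against the question and hand the top hits, with their sources, to the model. This toy scorer uses word overlap in place of real embeddings, and the data is made up for illustration:

```python
def retrieve(question: str, chunks: list[dict], k: int = 2) -> list[dict]:
    """Rank chunks by naive word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

chunks = [
    {"source": "xorg.conf(5)", "text": "ModeValidation bypasses EDID checks"},
    {"source": "xrandr(1)",    "text": "xrandr lists outputs and modes"},
]
top = retrieve("how do I bypass EDID checks", chunks, k=1)
print(top[0]["source"])  # the generated answer would cite this chunk
```

The citation step falls out for free: because every retrieved chunk carries its source, the answer can point back to the exact document it came from.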
To cut costs even further, I took it off AWS and set it up locally with Ollama and AnythingLLM. It's definitely not as powerful as the cloud setup, but it's sufficient for many tasks. And this was the spark that ignited the idea of making it a desktop assistant, one that can read and answer questions about any documentation, not just Xorg's.
As mentioned earlier in this saga, I don't like taking my hands off the keyboard without good reason, so I made it a desktop assistant that can be triggered with a global shortcut and answer questions about any documentation I've set up in it. It's still a work in progress, but the idea is a companion for developers, engineers, and tech enthusiasts who want quick access to their documentation without searching for it or opening a browser. I kept the three modes and put them behind "/" for the menu, added eight workspaces, each holding the documentation for a different framework or application and completely secluded from the others, and added a "general" workspace that searches the web and uses the model's own knowledge to answer the question at hand.
It started with me testing screen-extender apps and finding that none of them did what I wanted, so I built my own. That turned into a documentation app because I needed to read the documentation, which led to the documentation-helper AI chat, which in turn brought me to the desktop AI assistant I now use all the time.
The story continues. Teleport back to learn where I've been — and where I'm going.
Developer. Creator. Relentless tinkerer.
This isn't a CV — it's the story of what I've built, broken, and rebuilt.
I was 14. The family computer and I just had to find out whether I could turn it into a Hackintosh. I didn't need a Macintosh, and neither did anyone else; I just saw the challenge and couldn't resist. Two days straight of troubleshooting, forum diving, and blind command-line tinkering later, I got it working. The first thing I did was reinstall Windows. My passion and drive come from the thrill of the chase, not the destination.
The good thing about being self-taught is that you become very brave: very early in my career I was deploying traffic management systems on site. The bad thing about being self-taught is that I don't think I even realized that was a big deal.
"I don't build software because it's my job.
I build it because I can't not."
Not just what I know — what I wield. These are the technologies I reach for when the problem matters.
Crafting interfaces that feel alive — fast, accessible, and delightful.
APIs, databases, the plumbing no one sees but everyone depends on.
Full-stack used to be where the smart people went, but these days most of its complexity lives in the frontend. Backend developers just still want the glory.
Building intelligent systems — from LLM integrations to custom agents.
Infrastructure as code, CI/CD pipelines, making deploys boring.
One codebase, every screen. Native feel without native pain (except for all the constant pain that comes with it).
The meta-skill: knowing how to ship well, collaborate, and iterate.
Every role shaped a new dimension. Every team taught something that no tutorial ever could.
The best projects start with a conversation. Whether it's a wild idea, a gnarly bug, or a "what if we..." — I'm listening.