Launching
Project 01 — Featured

ScreenLink

Turn your laptops into extra monitors for your Linux desktop — and get remote control of each machine from a single keyboard and mouse. No special hardware. No proprietary software. Just your existing computers.

Python WebSocket TigerVNC noVNC X11 KDE Plasma

The insight: NVIDIA GPUs expose disconnected DisplayPort outputs that can be force-enabled via Xorg config. Chain that with x0vncserver clipping, WebSocket proxy, and a browser in kiosk mode — and a MacBook becomes your second monitor.

Zero custom rendering. The entire system is an orchestration layer over existing open-source tools. Knowing which tools to chain together, and how — that was the actual engineering.

Windows 11 14:46
Windows Laptop
Files
Terminal
14:46
Terminal
marcus@desktop:~$ _
Linux Desktop
ScreenLink Offline
🖥
Desktop PC
Linux — 1920x1080 — Ethernet
💻
Disconnected
MacBook Air — 1440x900
💻
Disconnected
Windows Laptop — 1920x1080
Finder 14:46
📁
🌐
🎵
MacBook Air

Click the ScreenLink icon in the taskbar, then press ▶ to connect

The problem it solves

Multi-monitor setups are great until you travel, work from home with different machines, or simply don't want to buy dedicated monitors. Most people already own a laptop or two collecting dust on the desk. ScreenLink makes them useful.

Existing solutions either cost money (Duet Display, Luna Display), require specific hardware (DisplayLink adapters), only work within one OS ecosystem (Sidecar is Mac-only, Miracast Windows-only), or are clunky research projects that never quite work. Nothing existed that was free, cross-platform, and actually let you extend a Linux desktop to both macOS and Windows machines simultaneously.

How it works

The architecture is deceptively simple — it chains together proven open-source tools rather than reinventing screen capture and streaming from scratch.

One of the core insights came from a 10-year-old forum post: NVIDIA GPUs expose disconnected DisplayPort outputs that can be force-enabled through the driver's Xorg options. By setting a ConnectedMonitor option in the Xorg config and using ModeValidation to bypass EDID checks, Linux treats a phantom DP output as a real monitor.
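A minimal sketch of what those two options look like in an xorg.conf Device section, based on the NVIDIA driver's documented option names rather than ScreenLink's actual config (the output name DFP-3 is an illustrative placeholder; the real name depends on your GPU):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    # Pretend a monitor is attached to the disconnected output
    Option "ConnectedMonitor" "DFP-3"
    # Skip EDID-based checks so the phantom output accepts modes anyway
    Option "ModeValidation" "DFP-3: NoEdidModes, AllowNonEdidModes"
EndSection
```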

The content of this virtual display is captured by x0vncserver (from TigerVNC), which serves only the clipped region corresponding to the virtual monitor. A noVNC WebSocket proxy bridges this to the browser, and the client machine simply opens a full-screen browser tab pointing to the noVNC endpoint over HTTPS.
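The capture and proxy layers boil down to two long-running commands. Here is a minimal Python sketch that assembles them; the display number, ports, and geometry are placeholders, while the `-Geometry` clipping flag and websockify's `port host:port` bridging come from the tools' own interfaces:

```python
# Sketch: assemble the capture and proxy command lines for one
# virtual display. Values here are illustrative, not ScreenLink's.

def capture_cmd(width, height, x_off, y_off, rfb_port=5901):
    """x0vncserver invocation serving only the virtual monitor's region."""
    return [
        "x0vncserver",
        "-display", ":0",
        "-rfbport", str(rfb_port),
        # Clip the served framebuffer to WxH at offset +X+Y
        "-Geometry", f"{width}x{height}+{x_off}+{y_off}",
    ]

def proxy_cmd(ws_port=6080, rfb_port=5901, web_root="/usr/share/novnc"):
    """websockify invocation bridging the VNC stream to browsers."""
    return ["websockify", "--web", web_root,
            str(ws_port), f"localhost:{rfb_port}"]
```

The client machine then just opens the noVNC page served on the WebSocket port in a full-screen browser tab.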

The result: the MacBook's browser becomes a second monitor. Drag a terminal window to the right edge of your Linux screen and it appears on the Mac. Fullscreen a video there and it plays.

Remote Desktop mode

When you click "Remote Desktop" in the control widget, the system flips the direction. The Mac's browser closes, and a new browser instance launches on the Linux machine's virtual display. This browser connects to the Mac's built-in VNC server through another noVNC proxy. Since this browser window lives on the virtual display, the Mac now displays its own desktop through the VNC chain. The Linux keyboard and mouse drive the VNC session.
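The flip can be sketched as a tiny decision function: each mode determines which machine runs the browser and which noVNC endpoint it opens. Hostnames and ports below are placeholders, not ScreenLink's actual values:

```python
# Sketch of the two ScreenLink modes; hosts/ports are illustrative.

def browser_target(mode, linux_host="desktop.local", mac_host="macbook.local"):
    """Return (machine that runs the browser, noVNC URL it opens)."""
    if mode == "extend":
        # The Mac's browser shows the Linux virtual display
        return ("mac", f"https://{linux_host}:6080/vnc.html")
    if mode == "remote":
        # A browser on the Linux virtual display shows the Mac's desktop
        return ("linux-virtual-display", f"https://{mac_host}:6081/vnc.html")
    raise ValueError(f"unknown mode: {mode}")
```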

Switching back is instant: the remote browser closes, the Mac's extended-screen browser reopens, and you're back to using the Mac as a monitor.

The struggles

Creating a virtual monitor on Linux sounds like it should be simple — it isn't. It took six attempts to get it working. The first attempt loaded a network dummy module. The second replaced the GPU driver and caused a black screen. The third created dead space the compositor ignored. The fourth and fifth failed against the NVIDIA proprietary driver's quirks.

The sixth attempt — combining NVIDIA's ConnectedMonitor option with ModeValidation to bypass EDID checks — finally worked. After a re-login, KDE recognized it as a real second monitor. Windows could be dragged there. It just worked.

What I learned

The final system has zero custom screen capture code, zero custom streaming protocols, and zero custom rendering. It's entirely an orchestration layer over existing tools. But knowing which tools to chain together, and how to make them cooperate — that was the actual engineering.

GitHub Live Demo
Project 02 — Born from Frustration

Docxology is a documentation app that grew out of ScreenLink. While reading the Xorg documentation, which is not only hard to comprehend but also in the worst possible format, I decided to build something better.

Claude Code Codex JavaScript Python Parsers

The idea: ask Claude Code to parse the documentation and turn it into an easy-to-read, searchable format with pinnable articles.

Docxology Docs
v1.0
Search docs... Ctrl K
Getting Started 2
Using the App 2
Project & Community 2
Legal & Release 2
Docxology
MIT License

Docxology Documentation

Built-in product documentation for using, extending, and publishing with Docxology.

Browse sections in the sidebar or press Ctrl + K to search

Getting Started
What the app is, why it exists, and how to get productive quickly.
2 documents
Using the App
How to structure categories, write documents, import files, and publish.
2 documents
Project & Community
Why the X.Org archive exists, how the project stays open.
2 documents
Legal & Release
Licensing notes and release expectations.
2 documents
4 categories 8 documents 8 in published index

How it started

While building ScreenLink, I needed to understand Xorg's internals — virtual displays, GPU driver configuration, display protocols. The documentation exists, but it's scattered across man pages, mailing list archives, and decade-old wiki pages in formats that make your eyes bleed.

I spent more time finding the right docs than reading them. That frustration became the seed for Docxology.

The approach

Instead of building a documentation platform from scratch, I used Claude Code and Codex to parse existing documentation sources — man pages, HTML docs, plain text specs — and transform them into a clean, searchable, readable format.

Python parsers handle the ingestion. The frontend is pure JavaScript — fast search, pinnable articles, and a reading experience that doesn't feel like it was designed in 1998.
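A toy version of that ingestion step, emitting a Markdown document plus a manifest entry; the field names are my assumptions for illustration, not Docxology's actual schema:

```python
# Sketch: turn a raw man-page-style text into Markdown + manifest entry.
# The manifest keys (slug/title/category) are illustrative placeholders.
import re

def ingest(raw: str, category: str):
    lines = raw.strip().splitlines()
    title = lines[0].strip()
    body = "\n".join(lines[1:]).strip()
    # Derive a URL-safe slug from the title
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    markdown = f"# {title}\n\n{body}\n"
    manifest_entry = {"slug": slug, "title": title, "category": category}
    return markdown, manifest_entry
```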

And Docxology was born

Docxology started as a tool for myself, but since it turned out to be very helpful I thought, "I might as well release it for the next guy." I grew to enjoy the tool, so I archived the X.org version and made the data format easier to parse, just Markdown and a manifest file, so you can parse other documentation formats into it too. And why stop there? If I can read the documents, I want to be able to edit them, and if I can edit them, I want to be able to create them. As a developer I don't enjoy taking my hands off the keyboard for mouse clicks; Confluence really pushes my buttons. I enjoy writing Markdown, but not reading it. Hence, Docxology.

What makes it useful

  • Import common text-based formats into one editable model
  • Write and maintain docs in the same interface you browse them in
  • Keep categories, ordering, and in-page navigation coherent
  • Publish docs without exposing editing controls

Best way to think about it

This is not a note-taking app and not a full CMS. It is a focused documentation workspace for technical projects that want lightweight structure and a polished reading experience.

Xorg Docs

I'll finally reveal my last trick for this endeavour: the Xorg docs. Maybe you'll notice this project's artefact opening that link.

GitHub Documentation Xorg Docs (Live)
Project 03 — The Saga Unfolds

AI Desktop Assistant

When I had finished Docxology, a thought came to mind: "Wouldn't it be extremely helpful to have an AI assistant, fed this specific data, to give explanations, examples, or context?" The answer was "yes, it would be very helpful". It started with harvesting, cleaning, and annotating data.

Python AWS Bedrock RAG Knowledge Base Data Pipeline

Method: Harvest, clean and annotate data to feed it to a knowledge base in Bedrock. But that's just how it starts.

Xorg Assistant
Explain docs, options, examples, and config
Explain more about xrandr command
Example-first
xrandr Command Examples
Here are a few examples of using the xrandr command:
Bash
# List all available displays and modes
xrandr -q
# Show which outputs are currently connected
xrandr -q | grep " connected"
# Enable the HDMI-1 output at 1080p, 60 Hz
xrandr --output HDMI-1 --mode 1920x1080 --rate 60
Plain English
Technical
Example-first
Ask about xorg.conf, a section, or a config example...

From documentation to conversation

Docxology solved the reading problem. But I kept catching myself searching for the same concepts, cross-referencing between pages, trying to piece together how different Xorg options interact. What I really wanted was to ask the documentation questions.

So I built an AI assistant that knows Xorg inside and out — not from a generic training corpus, but from the actual documentation I'd already curated in Docxology.

The data pipeline

The first step was harvesting. I extracted every document from Docxology's structured format — clean markdown with metadata, categories, and cross-references. Then came cleaning: stripping formatting artifacts, normalizing code blocks, splitting long documents into semantically meaningful chunks.

Annotation was the hardest part. Each chunk needed context: which section it belonged to, what concepts it covered, what related topics existed. This metadata is what makes the difference between a chatbot that regurgitates text and one that actually understands the relationships between configuration options.
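The chunking and annotation steps might look roughly like this: split a Markdown document at headings and tag each chunk with its source and section. The key names are illustrative, not the pipeline's actual schema:

```python
# Sketch: split Markdown into heading-delimited chunks with metadata.
import re

def chunk(markdown: str, source: str):
    chunks, section, buf = [], "Introduction", []
    for line in markdown.splitlines():
        m = re.match(r"#{1,3}\s+(.*)", line)
        if m:
            # A new heading closes the previous chunk
            if buf:
                chunks.append({"source": source, "section": section,
                               "text": "\n".join(buf).strip()})
            section, buf = m.group(1), []
        else:
            buf.append(line)
    if buf:
        chunks.append({"source": source, "section": section,
                       "text": "\n".join(buf).strip()})
    return chunks
```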

RAG with AWS Bedrock

The annotated data feeds into an AWS Bedrock Knowledge Base. When you ask a question, the system retrieves the most relevant document chunks, assembles them as context, and generates an answer with proper source citations. You're never getting hallucinated config options — every answer traces back to real documentation.
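A sketch of the assembly step that sits downstream of retrieval: given ranked chunks (in the shape a knowledge base might return them), build the grounded prompt and the citation list. The Bedrock call itself is omitted and the prompt wording is my own:

```python
# Sketch: assemble retrieved chunks into a citation-grounded prompt.
# The chunk shape ({"text", "source"}) is an illustrative assumption.

def build_prompt(question, retrieved):
    """retrieved: list of {"text": ..., "source": ...}, ranked by relevance."""
    context = "\n\n".join(
        f"[{i + 1}] {c['text']}" for i, c in enumerate(retrieved)
    )
    citations = [c["source"] for c in retrieved]
    prompt = (
        "Answer using ONLY the numbered excerpts below and cite them.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return prompt, citations
```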

Backend

To save costs I came up with an evaluation system based on a set of questions and their approved answers. When a POST request comes in, the question is evaluated and looked up in the database; if a question with sufficient similarity has been asked before, the stored answer is returned straight from the database and no AI model is needed. If not, a three-model cascade takes over: a cheap, a medium, and an expensive model. It starts with the cheap one; if that answer is satisfactory according to the evaluation it is returned, otherwise the request escalates to the medium and then the expensive model. This saves a lot of money by never using the expensive model for questions the cheap one can answer.
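The escalation logic can be sketched in a few lines. The similarity metric (difflib) and threshold below are stand-ins for whatever the real system uses, and the model/evaluator interfaces are illustrative:

```python
# Sketch: cached-answer lookup, then cheap -> medium -> expensive cascade.
import difflib

def answer(question, cache, models, evaluate, threshold=0.9):
    """cache: {question: approved_answer}; models: [(name, fn)], cheapest first."""
    # 1) Cache hit: a previously approved answer to a similar question
    for cached_q, cached_a in cache.items():
        if difflib.SequenceMatcher(None, question, cached_q).ratio() >= threshold:
            return cached_a, "cache"
    # 2) Escalate through the models until the evaluator approves
    candidate, name = None, "none"
    for name, model in models:
        candidate = model(question)
        if evaluate(question, candidate):
            cache[question] = candidate  # future hits skip the models
            return candidate, name
    # 3) Nothing passed: fall back to the most expensive model's answer
    return candidate, name
```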

Frontend

I made the interface chat-like but less traditional, because from a space perspective the user's messages aren't as important as the AI's answers. The user's message is mostly a trigger for the AI to respond, so I made it smaller and less prominent; the AI's answers are the main focus, so they're bigger and more visually distinct. I aimed for a modern, simple design and deployed it as a widget with a customizable config to make it easy to reuse in other projects. I also added a couple of extra features: three preset prompts, "Plain English", "Technical", and "Example-first" (defined in the config), and the ability to select part of a response and right-click for a quick option to explain further.
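A hypothetical widget config illustrating the setup described above; the field names are mine for illustration, not the project's actual schema:

```json
{
  "endpoint": "/api/ask",
  "presetPrompts": ["Plain English", "Technical", "Example-first"],
  "placeholder": "Ask about xorg.conf, a section, or a config example...",
  "theme": "dark"
}
```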

Moving on to a local setup

To save even more cost I took it off AWS and set it up locally with Ollama and AnythingLLM. It's definitely not as powerful as the cloud setup, but it's sufficient for many tasks. And this was the spark that ignited the idea of making it a desktop assistant, able to read and answer questions about any documentation, not just Xorg's.

How it ends (for now)

As mentioned earlier in this saga, I don't like to take my hands off the keyboard unless it's for a good reason, so I decided to make it a desktop assistant that can be triggered with a global shortcut and answer questions about any documentation I have set up in it. It's still a work in progress, but the idea is a companion for developers, engineers, and tech enthusiasts who want quick access to their documentation without having to search for it or open a browser. I kept the three modes and put them behind the "/" menu, added 8 workspaces, each holding a different framework's or application's documentation and completely secluded from the others, and added a "general" workspace that searches the web and uses the model's own knowledge to answer the question at hand.

End of transmission

Conclusion

It started with me testing screen-extender apps, none of which gave me what I wanted, so I developed my own. That turned into a documentation app because I needed to read the documentation, which led to the documentation-helper AI chat, which in turn brought me to the Desktop AI Assistant that I use all the time.

The story continues. Teleport back to learn where I've been — and where I'm going.

Scroll to explore →
Swipe to explore ← →
The Chronicle of

Marcus Builds Things

Developer. Creator. Relentless tinkerer.
This isn't a CV — it's the story of what I've built, broken, and rebuilt.

See the work Get in touch
Scroll to begin
Chapter II — Origin

It started with a
broken computer

I was 14. We had a family computer, and I just had to see if I could turn it into a Hackintosh. I didn't need a Mac, nor did anyone else; I just saw the challenge and couldn't resist. Two days straight of troubleshooting, forum diving, and blind command-line tinkering later, I got it working. The first thing I did was install Windows again. My passion and drive come from the thrill of the chase, not the destination.

What's good about being self-taught is that you become very brave: I was deploying traffic management systems on site very early in my career. What's bad about being self-taught is that I don't think I even realized that was a big deal.

"I don't build software because it's my job.
I build it because I can't not."
Chapter III — Arsenal

Tools of the trade

Not just what I know — what I wield. These are the technologies I reach for when the problem matters.

Frontend Architecture

Crafting interfaces that feel alive — fast, accessible, and delightful.

React · Angular · Next.js · Vanilla JS

Backend Systems

APIs, databases, the plumbing no one sees but everyone depends on.

Backend used to be where the smart people went, but these days most of the complexity in full-stack lies in the frontend. Backend developers just still want the glory.

Node.js · Python · PostgreSQL · REST

AI & Machine Learning

Building intelligent systems — from LLM integrations to custom agents.

Claude API · Bedrock · RAG · Embeddings

DevOps & Cloud

Infrastructure as code, CI/CD pipelines, making deploys boring.

Docker · AWS · GitHub Actions · Jenkins

Mobile & Cross-Platform

One codebase, every screen. Native feel without native pain (except for all the constant pain that comes with it).

React Native · Expo · PWA

Workflow & Craft

The meta-skill: knowing how to ship well, collaborate, and iterate.

Git · Agile · Code Review · Prestigelessness
Chapter IV — Journey

The path so far

Every role shaped a new dimension. Every team taught something that no tutorial ever could.

2024 — Present
Tech lead
Building the future, one commit at a time
Leading architecture decisions, mentoring devs, and shipping features that move real metrics. Full autonomy, full responsibility.
2020 — 2024
Founder and Full-Stack Developer
Scaling from zero to thousands of users
Wore every hat. Built APIs, designed UIs, wrangled databases, debugged production at 2am. The crucible that forged real skills.
2018 — 2020
Junior Developer
Where it all began professionally
Learned that code in production is a different animal than code on localhost. Embraced pull requests, tests, and the art of asking good questions.
2016 — 2018
Self-Taught Era
The internet was my university
Hundreds of hours of courses, docs, Stack Overflow rabbit holes, and side projects that nobody asked for but everyone learned from.
Hover models to explore  •  Drag to orbit  •  Scroll to zoom  •  Right-drag to pan
W A S D move Space jump
Jump
Let's run towards the future
together