Post by Muhammad Ahmed Cheema
Software Engineer at Sendoso
I don't post on LinkedIn much, but this one's worth sharing. I've been a Claude and Claude Code user since day one, and I'd pick it over OpenAI any day. I've spent the last few months not just using AI tools but thinking about what's still missing from them.

Here's what I realized: your AI agent can write code, run tests, and search files, but it can't click a button, send an email, or fill out a form. It lives inside a terminal. You've probably done this yourself: go to an app, take a screenshot, paste it into Claude Code, and ask "what do you see?" Over and over.

Ghost OS removes that entirely. Claude Code can now see your screen and interact with any app on its own, on demand, whenever it needs to. That's the first thing it gives you out of the box.

But here's where it gets interesting. Ghost OS reads the macOS accessibility tree instead of taking screenshots: structured data about every button, text field, and element on screen. It can click, type, scroll, press keys, and manage windows in any app on your Mac, not just browsers.

And when the agent figures out a workflow, it saves it as a recipe: plain JSON, readable, auditable. A frontier model learns it once; a small model runs it forever. Train with Opus, run with Haiku.

If you've been following the OpenClaw hype and wanted to try computer use but haven't taken the jump, this is the easiest way to start. Two commands, no Docker, no sandboxes. I use OpenClaw myself, but Ghost OS is simpler, local, and you can read every step it takes before it runs.

The video below? That's Claude Code using Ghost OS to navigate to LinkedIn and post this. No screenshots. No scripting.

If you're building AI agents that do real things, check it out: https://lnkd.in/g67E3Vju

Two commands to install:

brew install ghostwright/ghost-os/ghost-os && ghost setup

Open source, MIT licensed. Would love your feedback.
[Video: Claude Code using Ghost OS to navigate to LinkedIn and post this]
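For a sense of what a saved recipe might look like, here's a minimal sketch in Python. The schema below (the `name`/`steps` keys, the `action` names, the `replay` helper) is my own guess for illustration, not Ghost OS's actual format:

```python
import json

# Hypothetical recipe format -- the real Ghost OS schema may differ.
# The idea: a frontier model records structured steps once; a smaller
# model (or a plain runner) can replay them deterministically later.
recipe_json = """
{
  "name": "post_to_linkedin",
  "steps": [
    {"action": "focus_app", "target": "Safari"},
    {"action": "click",     "target": "Start a post"},
    {"action": "type",      "text": "Hello from Ghost OS"},
    {"action": "click",     "target": "Post"}
  ]
}
"""

def replay(recipe: dict) -> list[str]:
    """Walk the steps and describe each one. A real runner would
    resolve targets against the accessibility tree and act on them."""
    log = []
    for step in recipe["steps"]:
        detail = step.get("target") or step.get("text")
        log.append(f"{step['action']}: {detail}")
    return log

recipe = json.loads(recipe_json)
for line in replay(recipe):
    print(line)
```

Because the whole thing is plain JSON, you can read and audit every step before it runs, which is exactly the point.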