OpenClaw: After Two Weeks of Heavy Use, This AI Tool Has Completely Reshaped My Workflow

Over the past month, you have probably come across an AI tool called OpenClaw again and again. According to social media posts and early community discussions, the tool went through several name changes before finally settling on OpenClaw. During the renaming period, its social media accounts were even maliciously squatted and used to launch a meme coin called $CLAWD to “harvest” users, adding a dramatic twist to the birth of OpenClaw.

For a typical mobile-internet-era user like me, my instinctive reaction when encountering a problem has always been to look for yet another app to solve it. After more than two weeks of using OpenClaw, however, I noticed that my habits had quietly changed. Step by step, as I rebuilt my workflow around OpenClaw, it gradually became my first choice for handling digital-life needs.

In this article, I’ll start by explaining what OpenClaw is and how to install it, then share some of my own usage scenarios and tips, hoping to offer some practical help to anyone interested in trying it out.

What Is OpenClaw?

OpenClaw is neither a simple large language model nor just a coding CLI. Instead, it is a locally running “digital life hub.” It can act as a personal AI assistant, and it can also take on parts of certain roles within a company. Its capabilities can be extended through Skills and tool integrations (including MCP scenarios), and by running a Gateway on a Mac mini or server, it enables cross-platform access and asynchronous scheduling. No matter where you are, you can interact with it in real time via various instant messaging tools.

At the same time, all memories, file indexes, and personal habits generated within OpenClaw are stored in your own local workspace (such as Memory.md files, index databases, and Skills scripts). In a local deployment scenario, this gives you much greater control over your data and makes permission management and backups easier.

How Do You Install OpenClaw?

There are many options for hosting OpenClaw. At the moment, the most popular choice is the Mac mini, mainly for three reasons. First, it supports 24/7 continuous operation with low power consumption. Second, within the Apple ecosystem it can easily integrate Skills tied to Apple Reminders, Apple Notes, Apple Calendar, and more. Third, the hardware itself offers strong value for money. The 16GB + 256GB configuration often fluctuates in price on e-commerce platforms, but at the time of writing, many channels are approaching the 3,000 RMB range. Combined with the M4 chip and a minimum of 16GB unified memory, performance is more than sufficient.

Beyond the Mac mini, OpenClaw can also be installed on devices running Windows, Linux, and other operating systems. If you don’t want to deploy OpenClaw locally, major cloud providers—such as Tencent Cloud, Alibaba Cloud, Cloudflare, and DigitalOcean—offer dedicated server images that support quick OpenClaw deployment as well.

Overall, OpenClaw provides four installation methods: One-liner, npm, Hackable, and a macOS client. The One-liner supports macOS, Linux, and Windows; npm can be used via either npm or pnpm; Hackable comes in installer and pnpm variants. Each distribution method corresponds to different terminal commands, and you can choose whichever best suits your needs.

After installation, OpenClaw automatically runs the openclaw onboard --install-daemon command for initial setup. In most cases, the default options are sufficient. There are four key configuration areas you’ll need to pay attention to:

The first is selecting an AI service provider and the specific model you want to use. Most providers offer both OAuth authorization and direct API key input, corresponding to different billing methods.

Taking Google Gemini as an example: if you choose OAuth authentication, you can use services like Google Gemini 3 Pro, Google Gemini 3 Flash, Claude Opus 4.5, and Claude Sonnet 4.5 via Antigravity, with quotas refreshed every five hours. If you choose the Google API instead, you’ll need to create an API key in Google AI Studio or Vertex, and billing will be based on token and prompt usage. Once provider verification is complete, you can select your preferred model from the available list.

The second step is configuring the communication Channels used to interact with OpenClaw. If you choose certain overseas messaging tools, issues may arise during setup, potentially causing the Gateway to fail to start. In that case, you have three options: let it fail and complete the setup first, then ask the AI to help fix it later; skip Channel configuration and move on; or choose a domestic messaging tool such as Feishu.

The third step is selecting and installing Skills. Use the arrow keys to navigate, the spacebar to select or deselect, and press Enter to install. If you don’t want to spend too much time here, you can skip this step and install Skills later.

The final step is launching the Gateway and choosing a control interface, such as TUI or Web UI. At this point, the initial OpenClaw configuration is complete. If you’re a macOS user, you can also install the official Companion App. This menu-bar utility makes it easy to adjust OpenClaw settings, monitor Gateway status, and even chat directly with OpenClaw.

How Should You Choose an OpenClaw Model?

As of the time of writing, OpenAI has just released GPT-5.3 Codex, while Anthropic has launched Claude Opus 4.6; these are widely regarded as the two models at the very top tier right now.

If budget isn’t a concern and you have reliable access to official subscriptions, then ChatGPT Pro and Claude Max are naturally the best choices. If you lack a suitable payment method or worry about account bans, you can also use aggregation services such as OpenRouter to access these top models on a token-based pricing scheme—though the value for money is significantly lower.

I believe many people have recently taken advantage of Google’s promotions to subscribe to Google AI Pro or Ultra family plans. In that case, you have two options. One is to use Google Antigravity for OAuth authorization; once authenticated, you can access Gemini and Claude models with quotas refreshed periodically. From my own experience, however, this approach is prone to rate limits—especially with Claude—resulting in a subpar experience. The second option is to apply for an API key directly through Google AI Studio and pay per token via the API. Recently, Google has also given Pro and Ultra subscribers a one-time USD 300 credit plus USD 10 in monthly credits, which should last you quite a while.

If you don’t want to spend heavily on ChatGPT or Claude models and can’t manage Google’s paid subscriptions either, it’s worth trying domestic models. To test OpenClaw, I specifically subscribed to MiniMax and Kimi’s Coding Plans. After using them for a while, I was pleasantly surprised. Their latest models, MiniMax M2.1 and Kimi K2.5, have both been recommended by OpenClaw’s developer Peter himself.

Based on my personal experience and feedback from users on X, Kimi K2.5 has slightly stronger engineering capabilities than MiniMax M2.1, with most projects running smoothly. MiniMax M2.1, however, has its own advantages: it’s cheaper, with a top-tier plan priced at just RMB 119, compared to RMB 199 for Kimi. In terms of responsiveness, I find MiniMax M2.1 a bit faster than Kimi K2.5, making it more suitable for lighter tasks. In my own usage, Kimi Code’s RMB 99 Moderato plan was exhausted in under five days, so if you plan to use it as a main driver, you’ll likely need the Allegretto plan.

Overall, I think the best value paid option right now is the ChatGPT Plus subscription. For USD 20 per month, you get access to GPT-5.3 Codex, a true T0-level model. OpenAI is also currently running a promotion that doubles your Codex App quota for two months.

After comparing multiple primary models, I find GPT-5.3 Codex delivers the best overall experience, with a more balanced combination of speed and quality. It’s particularly well-suited for code audits and refactoring of legacy projects. Many other models can technically complete projects and run them successfully, but they often hide subtle issues that only surface later on.

If you want to use another T0-level model—Claude Opus 4.6—but can’t manage a Claude subscription or are worried about account bans, you can access it via Google Antigravity (currently requires the AI Ultra plan; the Pro plan does not yet include Claude Opus 4.6). While Claude quotas are relatively limited, it’s still viable for light exploration or as one of several sub-agents handling critical but low-token tasks. If you can’t sort out subscriptions or credit purchases for overseas model services at all, then using domestic models like Kimi, MiniMax, or Qwen is also a solid option—at the very least, they’re more than sufficient for getting OpenClaw up and running and trying it out.

If you place a high priority on personal privacy or have powerful hardware, running local models is another good choice. Through services like Ollama, you can install local large language models such as DeepSeek V3.2, qwen3-coder-next, or gemma3, then configure OpenClaw to call these local models—saving a substantial amount of money.

Finally, if you simply want to try OpenClaw without spending extra money on AI subscriptions or credits, there are plenty of free models available. For example, NVIDIA has launched a trial program for Kimi K2.5, allowing you to apply for an API key and use it directly in OpenClaw. OpenRouter has also recently introduced a model called Pony Alpha, which can be used for free in OpenClaw—but as a trade-off, all requests and outputs are uploaded for model training purposes.

What Can OpenClaw Do?

With enough groundwork laid, it’s time to show some real-world use cases of OpenClaw to answer the question most people have in mind: what is OpenClaw actually good for? I’ll divide these scenarios into basic and advanced categories, so you can quickly scan according to your needs.

Note: Most of the capabilities below are based on my own configuration and custom scripts. Results may vary depending on models, Skills, permissions, and network conditions. In addition, due to personal usage scenarios and privacy considerations, the final two advanced use cases are drawn from online sharing—please refer to them at your discretion.

Basic

Chatbot

Yes—interacting with OpenClaw looks no different from chatting with a familiar chatbot. Just like what you do in the Gemini or ChatGPT web or desktop apps, you can ask questions or issue commands in OpenClaw and have a large language model respond or generate results.

After connecting Feishu, you can chat with OpenClaw across platforms including iPhone, iPad, Android phones, Mac, and Windows. All chat records are synchronized and stored in real time. Based on your configuration, OpenClaw can also distill conversations into memories and recall them in future interactions.

I’ve now grown accustomed to using OpenClaw as my primary way to gather information or conduct research. On one hand, it can aggregate multiple AI model services and switch between them at will; on the other, IM tools like Feishu offer a more comfortable conversational interface than most AI apps. For example, I discuss investment thoughts with the bot, look up ZIP codes, summarize documents, scrape posts from X, and more—there never seems to be a shortage of things to talk about.

A Coding Sidekick

The evolution of large language models by 2025 has fully proven that even people with zero programming background can easily accomplish tasks that once seemed out of reach—this is the once-viral concept of Vibe Coding. With tools like Claude Code, Codex, and Antigravity, and with Xcode now supporting ChatGPT and Claude for AI-assisted coding, ordinary users like me have gained the ability to build products firsthand.

Imagine yourself as the product manager, with these top-tier AI models acting as full-stack engineers. All you need to do is continuously describe your ideas, and they’ll handle the coding and deliver the results.

However, tools like Claude Code, Codex, and Antigravity still require working at a computer. Unless you remote into your machine via tools like Tailscale or Sunlogin to operate terminal-based workflows, the experience isn’t great. OpenClaw changes this completely. Wherever I am in the world, I can issue instructions in natural language through Feishu. The Feishu channel relays the command to the Gateway on my computer, where OpenClaw executes the programming tasks in the terminal and then sends the results back to Feishu.

For example, I recently switched my primary input method on iPhone to “Cang Input Method,” but wasn’t satisfied with the default skin. I simply dropped an existing .hskin skin file to the bot and asked it to precisely adjust the skin code according to my aesthetic preferences. Once done, I imported the generated skin file directly on my phone—never touching my computer keyboard throughout the entire process.

A Personalized News Editor

Today, our information intake is largely controlled by big platforms and algorithms, stripping us of control over our sources. That’s why the revival of RSS has become a hot topic. With OpenClaw, you can bypass expensive RSS subscription services and build a reading system that truly fits you.

Step one: customize your own RSS list. You can send your frequently used RSS feeds directly to OpenClaw, ask it to discover feeds based on keywords, or—like me—simply drop an existing OPML file.

Step two: build an RSS fetching mechanism. I had OpenClaw create a script that fetches RSS content, keeps only the text, stores it in Markdown format, retains articles from the past two days, and automatically clears older files.

Step three: create an article filtering mechanism. If you have too many feeds or articles coming in too frequently, OpenClaw can automatically filter them based on criteria like topic, author, or online popularity—forming a screening system tailored just for you.

Step four: set up article delivery. After filtering, I had OpenClaw create a cron task that pushes the day’s selected articles to me at a fixed time every evening, following a consistent template. OpenClaw can even summarize each article in the push, letting me quickly scan and decide which ones to read in depth.
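Steps two and four above can be sketched in a few dozen lines. The script below is a minimal, self-contained illustration of the fetch-store-prune idea: it parses an RSS feed, keeps only the text as Markdown, and deletes files older than two days. The directory name, sample feed, and helper names are my own assumptions; in practice, OpenClaw writes and schedules its own version.

```python
import re
import time
from pathlib import Path
from xml.etree import ElementTree

ARCHIVE = Path("rss_archive")    # hypothetical storage directory
RETAIN_SECONDS = 2 * 24 * 3600   # keep articles from the past two days

def strip_html(text):
    """Keep only the text: drop tags, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"<[^>]+>", "", text or "")).strip()

def save_feed(xml_text):
    """Parse one RSS feed and store each item as a Markdown file."""
    ARCHIVE.mkdir(exist_ok=True)
    saved = []
    for item in ElementTree.fromstring(xml_text).iter("item"):
        title = strip_html(item.findtext("title", ""))
        body = strip_html(item.findtext("description", ""))
        slug = re.sub(r"[^\w-]+", "-", title)[:60] or "untitled"
        path = ARCHIVE / f"{slug}.md"
        path.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
        saved.append(path)
    return saved

def prune_old(now=None):
    """Delete archived files older than the retention window."""
    now = time.time() if now is None else now
    removed = 0
    for path in ARCHIVE.glob("*.md"):
        if now - path.stat().st_mtime > RETAIN_SECONDS:
            path.unlink()
            removed += 1
    return removed

sample = """<rss><channel>
<item><title>Hello RSS</title>
<description>&lt;p&gt;First post.&lt;/p&gt;</description></item>
</channel></rss>"""
print([p.name for p in save_feed(sample)])  # ['Hello-RSS.md']
```

Hooking this into a cron task, as in step four, is then just a matter of scheduling.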

If you don’t like reading text, you can go a step further and have OpenClaw automatically turn the day’s content into a podcast episode or video and send it to you. It’s like running your own TV channel or media company—where you decide exactly what gets broadcast.

A Writing Assistant

The arrival of OpenClaw has effectively reassembled my entire writing workflow. While I’d already been using AI since last year for research, fact-checking, proofreading, and polishing, constant platform switching, model changes, and feature updates made the process fragmented.

In OpenClaw, I created a dedicated writing group to handle all AI collaboration in my writing workflow. When writing, I place the Feishu “Writing Master” group on the left side of the screen and the iA Writer editor on the right.

During the preparation phase, I ask Claw to gather information related to my topic, repeatedly verify its reliability, then organize it into an outline with sources attached to each point for manual verification.

While writing, if I need to look something up, I simply ask Claw and get an immediate response—without switching tools or windows, and without breaking my writing flow.

After finishing a draft, I let Claw handle proofreading and polishing. I created a Skill that allows me to say something like “Help me proofread/polish the article: XXX,” prompting it to read my local iA Writer library, locate the matching article, and begin reviewing it. OpenClaw checks for typos and grammatical issues, flags imprecise expressions, and provides concrete revision suggestions.

Advanced

Notion as an External Brain

I originally subscribed to Notion’s Business Plan, which allowed me to operate content across my entire Notion workspace using the built-in Notion AI. The biggest advantage of Notion AI is that, at least in theory, it offers unlimited access to top-tier models such as Gemini 3 Pro, Claude Opus 4.6, Claude Sonnet 4.5, and GPT-5.2. Users who upgraded before January this year could even keep the subscription at USD 10 per month. The downside, however, is that these AI capabilities are confined strictly within Notion, which significantly limits usage scenarios.

Later on, I discovered that OpenClaw natively supports Notion Skills. Even the free version of Notion can be controlled by an external AI agent via Connections. As a result, I canceled my Notion subscription and switched to using OpenClaw to operate my Notion content. I created a cron task in OpenClaw that, every day, selects five words from my Notion vocabulary database based on a memory algorithm and pushes them to me using a fixed template. After reviewing them, I rate my familiarity with each word, and OpenClaw records both the score and review count back into the Notion database before moving on to the next round. If I come across new words in daily life, I can simply send them to OpenClaw, which will add them to the Notion vocabulary database and automatically fill in fields such as phonetics, part of speech, definitions, mnemonic roots, and memory aids—ensuring a continuous supply of words to study.
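For a sense of what the “memory algorithm” part might look like, here is a toy sketch of selecting five due words and recording a review. The field names, weights, and in-memory word list are illustrative assumptions; the real setup reads and writes the Notion database through OpenClaw’s Notion Skill.

```python
import time

# Hypothetical local mirror of the Notion vocabulary database.
# `score` is a 1-5 familiarity rating, `reviews` counts past reviews,
# `last_seen` is a Unix timestamp of the last review.
WORDS = [
    {"word": "ephemeral", "score": 2, "reviews": 1, "last_seen": 0},
    {"word": "lucid",     "score": 5, "reviews": 9, "last_seen": 0},
    {"word": "placate",   "score": 1, "reviews": 0, "last_seen": 0},
    {"word": "quixotic",  "score": 3, "reviews": 2, "last_seen": 0},
    {"word": "tacit",     "score": 4, "reviews": 5, "last_seen": 0},
    {"word": "zealous",   "score": 2, "reviews": 3, "last_seen": 0},
]

def due_priority(entry, now):
    """Lower familiarity and longer idle time raise priority.
    The exact weighting is an illustrative assumption."""
    days_idle = (now - entry["last_seen"]) / 86400
    return (5 - entry["score"]) * 10 + days_idle - entry["reviews"]

def pick_daily(words, now=None, k=5):
    """Select the k words most in need of review."""
    now = time.time() if now is None else now
    return sorted(words, key=lambda w: due_priority(w, now), reverse=True)[:k]

def record_review(entry, new_score, now=None):
    """Write the rating back, as OpenClaw does into the Notion database."""
    entry["score"] = new_score
    entry["reviews"] += 1
    entry["last_seen"] = time.time() if now is None else now

today = pick_daily(WORDS, now=86400)  # pretend it's day one
print([w["word"] for w in today])
```

With this shape, the daily cron task only needs to call pick_daily, push the result through the fixed template, and call record_review once the ratings come back.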

Controlling the Browser

OpenClaw can act as an AI agent to control the browser on your computer, helping automate UI-level interactions.

Before getting started, you need to install the Chrome browser extension using the terminal command openclaw browser extension install. Once installed, click the extension icon in the top-right corner of the browser and make sure it’s enabled on the current page (the icon will display “ON”). From there, you can start directing OpenClaw to work inside the browser.

For example, I once discovered a great content creator on Xiaohongshu and wanted to scrape all of their posts for study. As a domestic social platform, Xiaohongshu obviously doesn’t provide APIs for this kind of access, nor can it be queried, posted to, or searched via Skills like X. So I had OpenClaw directly control the browser to “manually” scrape those posts. It’s slower, but at least I don’t have to constantly click the mouse myself. That said, Xiaohongshu’s anti-scraping mechanisms can be quite annoying—once you scrape too many posts, it forcibly redirects you back to the homepage.

Of course, if you’re comfortable with it, you could also let OpenClaw help you clean up your inbox, reply to emails, or even book a flight to Paris. If you’re not comfortable with that level of access, then simply don’t install the browser extension.

AI Phone

Some time ago, the Doubao phone sparked a lot of discussion. Users could control the Doubao model via voice to directly operate apps on their phones—ordering a milk tea, scrolling short videos, claiming red packets, and so on. Since OpenClaw also has agent capabilities, it naturally caught the attention of curious users.

I saw a post on X where a foreign user hacked together a USD 25 Android phone. They installed OpenClaw via Termux and were then able to use it to control the flashlight, recognize objects through the camera, read sensor data, and more.

How the Developer Uses It

OpenClaw’s creator, Peter Steinberger, also shared some of his personal use cases in an interview. These include adjusting mattress temperature, playing music, controlling lights, viewing camera feeds, and checking package delivery status. For specific examples, you can refer to the interview video shared in Fu Sheng’s post on X.

OpenClaw Usage Tips

OpenClaw is a relatively young open-source project with a high update cadence. Beyond adding new features, updates frequently focus on bug fixes, which is my roundabout way of saying that OpenClaw is not yet a mature product. You’ll inevitably run into issues during use. Whether or not you’re an experienced developer, I recommend handing all debugging tasks over to AI agents. If you don’t have to do it yourself, why would you?

To make OpenClaw more usable and stable, I’ll share some general usage strategies here. For the actual implementation, just let an AI agent handle it. You can even copy and paste this entire section directly to an AI agent and ask it to propose solutions following these ideas, then execute them after your approval.

Creating Groups

Once OpenClaw is set up, you’ll initially be chatting with your bot in a private conversation. All interactions happen in this private chat, and everything is stored in memory. Over time, the bot will continuously pull related information from its memory files to respond. Eventually, those memory files become long and messy.

To avoid this—and to keep cleaner, more focused timelines for different scenarios—I recommend creating multiple groups, channels, or topics. Add your Claw bot to each one, give it a different name and avatar, and use each group for a dedicated purpose when interacting with OpenClaw.

By default, when chatting with Claw in a group, you need to prefix messages with @Claw (for example, if my bot is named adawinterbot, I must include @adawinterbot for OpenClaw to receive the message). Once OpenClaw receives it, it adds a 👀 emoji reaction to your message.

If you’re tired of typing @Claw every time, have your local AI agent—or OpenClaw itself—modify the group configuration to support both @Claw mentions and direct messages.
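The gating logic involved is simple. Here is a toy sketch of what “support both @Claw mentions and direct messages” amounts to; the bot handle, group names, and config keys are made up for illustration:

```python
# Toy sketch of group-message gating: by default the bot only handles
# messages that @-mention it; a per-group flag can relax this so plain
# messages are accepted too. All names here are illustrative.

BOT_HANDLE = "@adawinterbot"
GROUP_CONFIG = {
    "writing-master": {"require_mention": True},
    "news-digest": {"require_mention": False},
}

def should_handle(group, text):
    """Decide whether the bot should react to a group message."""
    cfg = GROUP_CONFIG.get(group, {"require_mention": True})
    if BOT_HANDLE in text:
        return True
    return not cfg["require_mention"]

def to_task(text):
    """Strip the mention so the model sees a clean instruction."""
    return text.replace(BOT_HANDLE, "").strip()

print(should_handle("news-digest", "push today's digest early"))  # True
```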

Also, remember to propagate configurations you’ve completed in private chats—such as model selection or Skill usage—into your group chats with the help of an AI agent.

Reducing Token Costs and Improving Efficiency

If you’re using paid AI subscriptions or API-based billing, you’ll inevitably feel the pain of rate limits or rapidly growing bills. This is where optimizing OpenClaw’s token usage becomes important—both to slow down token consumption and to improve output efficiency by trimming context length.

The first method is to frequently use the /new and /compact commands. /new starts a completely fresh session with a cleared context—essentially “starting over”—which is useful when switching topics or avoiding interference from old context. /compact compresses the current session’s context, preserving key information while reducing token usage, allowing the conversation to continue more efficiently. If you want a fresh session but still retain some continuity, you can send /new followed by something like “Continue the previous task: XXX.” I’ve had OpenClaw set up a script that automatically runs /new every day at 4 a.m., which significantly reduces context buildup.

The second method is enabling OpenClaw’s built-in QMD. This acts as a memory-retrieval middleware layer. Before each conversation turn, it selects the most relevant fragments from historical memory and injects them into the model, allowing OpenClaw to retain long-term context while controlling token costs. You can ask OpenClaw to enable QMD and tune parameters such as the maximum number of memory entries per turn, truncation length, and retrieval timeout—choosing between balanced, cost-saving, or performance-oriented presets.
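I haven’t inspected QMD’s implementation, but a memory-retrieval layer like this can be pictured as follows: score stored fragments against the incoming message and inject only the best few under a budget. Everything below (the scoring, the limits, the sample memories) is an illustrative assumption, not QMD’s actual code.

```python
# Toy illustration of retrieval-style memory injection: rank fragments by
# crude relevance, then keep at most `max_entries` within `max_chars`.

MEMORY = [
    "User prefers answers in Chinese with English technical terms.",
    "The RSS digest is pushed every evening at 21:00.",
    "Vocabulary reviews pull five words per day from Notion.",
    "The Mac mini gateway restarts automatically after three failed pings.",
]

def score(fragment, message):
    """Crude relevance: count overlapping lowercase words."""
    return len(set(fragment.lower().split()) & set(message.lower().split()))

def retrieve(message, memory=MEMORY, max_entries=2, max_chars=200):
    """Pick the most relevant fragments within both limits."""
    ranked = sorted(memory, key=lambda f: score(f, message), reverse=True)
    picked, used = [], 0
    for frag in ranked:
        if len(picked) >= max_entries or score(frag, message) == 0:
            break
        if used + len(frag) > max_chars:
            continue
        picked.append(frag)
        used += len(frag)
    return picked

context = retrieve("when is the rss digest pushed every evening?")
print(context[0])
```

The tunable parameters the text mentions (entries per turn, truncation length) map directly onto max_entries and max_chars here.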

Subagents

You may have noticed a new feature released alongside Claude Opus 4.6 called Agents Team. It enables a “main agent + multiple parallel sub-agents” architecture, where the main agent decomposes, assigns, and aggregates tasks, while sub-agents operate with relatively independent contexts to handle frontend, backend, testing, auditing, and more in parallel.

OpenClaw also natively supports sub-agent collaboration through session orchestration, though its implementation differs from Claude’s official feature. You can ask Claw to build an advanced Subagents system based on the models you’ve configured, similar in spirit to Agents Team.

Unlike Claude’s kernel-level sub-agent coordination, OpenClaw’s Subagents rely on custom scripts orchestrated by an overarching controller. As for how to design this orchestration, simply explain your idea to OpenClaw and let it build the system. If you have no clear plan, just ask OpenClaw to implement its own recommended approach.
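As a sketch of the “controller + parallel sub-agents” shape, the toy below decomposes a plan, runs role-specific workers in parallel, and aggregates the results. Real OpenClaw sub-agents would each be a separate model session with its own context; here they are plain functions standing in.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in sub-agents: in a real setup each would be a model session
# with an isolated context, not a local function.
def frontend_agent(task):
    return f"[frontend] done: {task}"

def backend_agent(task):
    return f"[backend] done: {task}"

def audit_agent(task):
    return f"[audit] done: {task}"

SUBAGENTS = {
    "frontend": frontend_agent,
    "backend": backend_agent,
    "audit": audit_agent,
}

def controller(plan):
    """Decompose: the plan maps roles to tasks. Assign: run sub-agents
    in parallel. Aggregate: collect every result by role."""
    with ThreadPoolExecutor(max_workers=len(plan)) as pool:
        futures = {role: pool.submit(SUBAGENTS[role], task)
                   for role, task in plan.items()}
        return {role: fut.result() for role, fut in futures.items()}

results = controller({
    "frontend": "build the settings page",
    "backend": "add the /export endpoint",
    "audit": "review auth middleware",
})
print(results["audit"])
```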

Voice Input and Output

On mobile devices, voice has become an indispensable input and output method for interacting with AI.

On the input side, you can have OpenClaw install the “Whisper without API” Skill and create a script that automatically transcribes incoming voice messages using this Skill. This allows you to send voice messages directly in Feishu, with OpenClaw handling transcription and executing tasks accordingly. While many IM tools already offer voice transcription, OpenAI’s Whisper supports more languages and mixed-language input, generally delivering better results.

On the output side, you can install the edge-tts Skill, which automatically converts text responses into audio and sends them to you. Currently, edge-tts offers both male and female voices. You can configure OpenClaw to generate audio for all text responses, or only in specific scenarios.

Backup and Recovery

It’s easy to run into issues when modifying OpenClaw’s configuration. Once something breaks, OpenClaw may freeze or the Gateway may disconnect. If this happens while you’re traveling or away from home, recovering via IM command menus alone can be extremely difficult and stressful.

There are currently two main approaches to address this.

The first is self-healing via automated scripts. Start by having OpenClaw create a daily automatic backup script. Then create a heartbeat monitoring script that pings the Gateway every five minutes; if three consecutive checks fail, it automatically restarts the Gateway. Finally, add a configuration rollback script so that if repeated restarts still result in errors, OpenClaw automatically restores the last known good backup.
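The heartbeat and rollback logic can be captured in a few lines. In this sketch the probe and the restart/restore actions are stand-ins (the real versions would ping the Gateway and shell out to restart it or restore a backup); what it shows is the escalation policy: three consecutive failures trigger a restart, and repeated restarts trigger a rollback.

```python
# Escalation policy sketch: restart after FAIL_LIMIT consecutive failed
# probes; after RESTART_LIMIT restarts, restore the last good backup.

FAIL_LIMIT = 3        # consecutive failures before restarting
RESTART_LIMIT = 2     # restarts before restoring a backup

def heal_loop(probe, restart, restore, checks):
    """Run `checks` heartbeat rounds; return a log of actions taken."""
    log, fails, restarts = [], 0, 0
    for _ in range(checks):
        if probe():
            fails = 0          # healthy: reset the failure streak
            continue
        fails += 1
        if fails < FAIL_LIMIT:
            continue
        fails = 0
        if restarts < RESTART_LIMIT:
            restarts += 1
            restart()          # real version: restart the Gateway
            log.append("restart")
        else:
            restore()          # real version: restore last good backup
            log.append("restore")
    return log

# Simulate a gateway that stays down: expect restarts, then a rollback.
actions = heal_loop(
    probe=lambda: False,
    restart=lambda: None,
    restore=lambda: None,
    checks=9,
)
print(actions)  # ['restart', 'restart', 'restore']
```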

The second approach is repairing via remote SSH access to your local machine. OpenClaw officially supports connecting to the Gateway via Tailscale, but I personally don’t like this option. First, Tailscale can conflict with other networking tools; second, my own programming skills are limited, so even with remote access I’d still struggle to fix bugs.

Instead, I prefer the “AI fixes AI” approach. There are many ways to remotely control a computer—the most brute-force being desktop control software like Sunlogin or ToDesk—but that feels like overkill. Remote SSH is lighter and easier to manage.

Beyond Tailscale, you can use Cloudflare Tunnel or VPS reverse proxies to connect to your home machine. My setup uses a VPS reverse proxy: I create a VM on GCP, connect my Mac mini to it, and then use Termius on my iPhone to SSH into the Mac mini. For a smoother experience, I recommend installing tmux on the Mac mini, so SSH sessions remain persistent instead of restarting every time.

So when OpenClaw crashes while I’m away, I open Termius, SSH into the Mac mini, and use the Codex CLI to diagnose the issue, propose a solution, and execute the fix in one continuous flow. For specific configurations and steps, consult your own AI agent.

Conclusion

As mentioned at the beginning, OpenClaw’s core innovation lies more in engineering integration and usability than in breakthroughs of individual model capabilities. Still, it successfully extends the concept of AI agents—once confined to desktops—across platforms, allowing us to access AI agent capabilities on virtually any device. As an open-source project, OpenClaw has also encouraged collective creativity, giving rise to a wide range of playful experiments and productivity tools, creating strong word-of-mouth momentum.

That said, from a cautious perspective, OpenClaw does pose certain privacy and security risks. Avoid exposing API keys online, never upload sensitive information such as financial data or home addresses, and closely monitor browser automation features—complacency is not an option.

Of course, every technological shift and real-world deployment comes with a period of growing pains. There’s no denying that personal AI assistants are entering our lives at an accelerating pace. If you’re uneasy about a personal developer’s project like OpenClaw, waiting for companies like Apple or Google to enter the field may offer stronger privacy guarantees—providing reassurance through trust in established giants.
