Stop Writing in ChatGPT: Why Non-Developers Need an AI Workspace
If you regularly create professional texts or documents with AI — not just ask quick questions or chat about everyday topics — it’s worth choosing the right place to do that work. Maybe it’s Cursor?
- Using chat platforms like ChatGPT, Gemini, Claude, or Perplexity as your primary workspace is unnatural. A workspace lets you easily reuse your results (artifacts) instead of revolving around chats with AI. Google Workspace and Microsoft 365 are true workspaces; ChatGPT is moving in that direction but is still catching up.
- Alternatively, you can hope that your familiar workspace (Evernote, Google Docs, Notion, etc.) will gain all the AI features you need. That hope is reasonable — general-purpose workspaces like Notion already come close to ChatGPT’s capabilities.
Notion would be my AI workspace of choice — if not for one big “but”…
The AI tools market changes beyond recognition every year. Can we afford vendor lock-in to just one product (e.g., Notion)?
I prefer not to lock myself in, based on some hard-won experience:
- No software product remains the optimal choice for more than a few years. With AI, that time frame is even shorter. Even if functionality stays strong, the price can become far less competitive.
- If that product is my primary workspace, switching to another one will be painful. My productivity will drop because I can’t quickly reuse artifacts accumulated in the previous space. With AI chatbots, migration is even worse: you can only export chats in bulk, not artifacts (see Section 2.4).
So I chose a non-traditional path: do most of my work with AI in an IDE (Integrated Development Environment).
- I picked Trae IDE, though Cursor is much more popular. For the last six months, I’ve created various texts in Trae, for both professional tasks and this blog.
- Maybe I’ll switch to Google Antigravity soon, as it can spawn multiple AI agent threads simultaneously, and its Gemini 3 Pro model is currently the best fit for me.
In this article, I’ll share several advantages of a Cursor-like IDE as a workspace and a “ChatGPT alternative.”
If you don’t need convincing about WHY, jump straight to my Substack post focused on a concrete use case and HOW to do this effectively:
Contents
1. When Is an IDE the Right Choice?
1.1. Search With AI Overview or a Voice Assistant
1.2. Engage in Conversations With AI Chatbots
1.3. Create Artifacts in an AI IDE
1.4. Use Traditional Workspaces for Predefined AI Workflows
2. Why Is an IDE Better Than ChatGPT?
2.1. Context Engineering for Dummies
2.2. Context Generation and Improvement Over Time
2.3. Version Comparison and AI Edits
2.4. The “No Vendor Lock-In” Approach
3. The Price to Pay for IDE Advantages
Conclusion
1. When Is an IDE the Right Choice?
To understand the role of IDEs in the broader AI platform landscape, let’s clarify when it makes sense to move away from familiar chatbots like ChatGPT. We’ll skip niche applications (like transcription or marketing content generators) and focus on general-purpose tools — the domain where ChatGPT and similar chatbots shine.
My approach to choosing the right AI tool for the job is based on the value of the results; most importantly — how long their value persists.
1.1. Search With AI Overview or a Voice Assistant
AI is often used for quick questions and simple tasks that typically don’t require much context, including your previous work.
- For such tasks, many turn to ChatGPT or Perplexity out of habit, but Google Search with AI Overview is often faster and more convenient.
- If you work in a room by yourself, voice input is ideal. Google is integrating Gemini as a voice assistant on Google Nest devices. ChatGPT Voice is also a solid choice, for instance, as a replacement for Google Assistant on Android.
- I try to keep my primary ChatGPT account history free of trivial queries that can be easily answered by Google Search, the AI Mode tab, or the voice options above. Otherwise, I’d run into the issues described in the next section.
1.2. Engage in Conversations With AI Chatbots
ChatGPT and similar platforms excel at multi-turn conversations with AI, especially for personal topics and tasks.
- Unlike quick requests, these discussions can become important later, and you may want to revisit them. ChatGPT’s search feature is helpful here — particularly if you don’t clutter your history with one-off questions.
- However, if the value lies not in the conversation itself but in the artifact it produces (e.g., a document or a plan), a chatbot isn’t the best tool. In such cases, you have to save the result from the chatbot to a place where you can easily find and reuse it.
- If you rely on AI personalization, it should reflect you in general rather than your specific professional needs. While ChatGPT’s two memory types and Projects can help, using the same account for specialized work will “pollute” the memory and degrade the quality of its answers.
1.3. Create Artifacts in an AI IDE
Cursor or another IDE is great for regularly producing complex artifacts in your domain. These are typically work-related tasks, not personal ones.
- If you’re a blogger, you create posts or video scripts. If you’re a manager, you regularly prepare presentations and reports. If you’re an IT analyst, you write user stories and other requirements. All of these are complex artifacts. Standard tools rarely fit such expert tasks because they don’t reflect your unique context and professional constraints.
- Why is an IDE especially good for recurring work? Its power lies in customizing the process of creating anything with AI. You can save and reuse AI instructions instead of rewriting them in a chat each time. However, if you only create artifacts occasionally, you don’t need to formalize the process or separate instructions from the chat.
- If you have multiple roles or a multifunctional role (e.g., a product manager), you’ll need multiple workspaces in your IDE. One workspace — one specialization. Switching between IDE workspaces is more convenient than in ChatGPT or even Notion.
1.4. Use Traditional Workspaces for Predefined AI Workflows
For standardized work tasks such as writing emails, creating routine documents, or managing projects, it’s best to use ready-made tools with AI features, like Asana AI.
- These tools usually integrate smoothly with the outside world (e.g., calendars and email).
- Unlike complex artifact creation, routine tasks require far less manual context management because the necessary mechanisms are already built into the tool.
Thus, distributing tasks correctly across different tools helps you get the most out of each one.
2. Why Is an IDE Better Than ChatGPT?
Every user of Cursor (and similar IDEs) will have a different answer. For example, if AI improves your texts for publication, the key feature may be “Diff” to compare versions before and after AI edits. If that’s your case, see Section 2.3 for how to turn this into a game-changer.
I’ll start with a less obvious but most important aspect for those who seek high text quality and steady improvement over time.
An AI-powered IDE can replace ChatGPT and other chatbots — but only for complex artifact creation where a lot of context is required.
- For general questions, web search context is often enough; for personal questions, “memory” context helps. In these cases, there’s no reason to switch from ChatGPT to Cursor.
- Large amounts of specific context are vital for the expert tasks mentioned in the diagram above 👆. Let’s unpack how Cursor helps with context and why it can beat ChatGPT here.
2.1. Context Engineering for Dummies
A key goal of moving to an IDE is to increase the quality of AI results by letting end-users manage context, applying basic context engineering principles.
Context engineering is currently practiced mainly by two groups:
- software developers who can set up and fine-tune RAG, web search, and custom tools;
- advanced users who work with no-code tools like n8n, Make, or ChatGPT Agent Builder, and can thoughtfully connect MCP servers.
However, non-developers who actively use AI also intuitively know what context their task needs. Much of that context already resides in their AI platform. But ChatGPT doesn’t offer a simple, universal way to add this context to a request.
Thus, the goal — effectively using context in complex tasks — still isn’t achievable in mass-market AI tools like ChatGPT.
- On one hand, ChatGPT is among the best platforms for context engineering. OpenAI’s RAG and tools are top-tier. GPTs and Projects are good ways to set general context for many similar tasks.
- On the other hand, ChatGPT’s context control isn’t sufficient, even for relatively simple professional tasks like writing regular reports. You can use @mentions for GPTs or similar Space mentions in Perplexity. But far more often you need to add a specific artifact or a specific part of it rather than an entire GPT or Space.
In Cursor-like IDEs, adding context is much more convenient, and not only via @mentions.
- Selected text can be added to the AI chat in one click; it appears as a link rather than the text, which keeps the chat readable.
- A file or folder can be added via drag-and-drop from Explorer or by mentioning it with @ or #.
- Implicit context loading: the built-in AI agent can find relevant context itself based on words in your prompt. For example, it can find applicable rules in AGENTS.md or .cursor/rules.
These context management methods in IDEs don’t require the technical skills needed to set up RAG, tools, or MCP. When it comes to AI copilots rather than agents, context engineering becomes intuitive for non-developers in an IDE.
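To make the implicit-loading idea above concrete, here is a minimal sketch of a reusable rules file. The frontmatter fields follow Cursor’s rules format; the file name, glob, and rule text are hypothetical examples of what a writer (not a programmer) might save in `.cursor/rules/`:

```
---
description: Style guide applied when editing blog drafts
globs: drafts/**/*.md
alwaysApply: false
---

- Write in plain, direct language; avoid filler phrases.
- Keep paragraphs under four sentences.
- Preserve the terminology defined in glossary.md.
```

Because the `globs` pattern limits the rule to draft files, the agent pulls it into context only when you work on a draft, not on every request.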
2.2. Context Generation and Improvement Over Time
The context management described above is still manual. We’re essentially “micromanaging” AI — explicitly telling it which instructions and past artifact fragments to use as context. That creates at least two problems:
- Added fragments may be too large, degrading answer quality.
- It may feel like all reusable AI instructions must be written manually, which would be time-consuming.
The solution is AI-generated context instead of hand-crafted context.
For problem 1 (quality), ask AI to create short summaries, outlines, or style guides from past artifacts. For problem 2 (instruction writing effort), Cursor even has a special command /Generate Cursor Rules.
By continuously updating guides and rules with AI, you’ll steadily improve the quality of your artifacts over time.
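As an illustration of AI-generated context, a one-off request like the following (the wording and file names are mine, not a built-in command) turns past artifacts into a compact, reusable instruction file:

```
Read my five most recent posts in the posts/ folder and distill a
one-page style guide (tone, typical sentence length, recurring
phrases to keep or avoid) into rules/style-guide.md.
```

The resulting short guide can then be referenced in future prompts instead of attaching the full posts, which addresses both the context-size and the writing-effort problems at once.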
How is this better than ChatGPT?
It may seem hard to beat ChatGPT for improving results over time:
- Memory: ChatGPT was among the first to implement memory, and it works better than most competitors.
- Reference Chat History: This feature lets you reference previous chats. Such memory can sometimes improve user experience but is a “black box” and makes context management harder for users.
- Project-only Memory: Recently added project-level memory ensures unrelated memories don’t leak into a given project’s context.
In many IDEs, the equivalent of project instructions is rules files. At this level, Cursor already provides more flexible instruction management, as different sets of rules are automatically applied to different file types.
Even more advanced context management is achieved at the folder level. A project can contain multiple folders, and they can form a hierarchy. This allows for the creation of hierarchical instructions in AGENTS.md files. The Cursor documentation says, “You can place AGENTS.md files in any subdirectory of your project, and they will be automatically applied when working with files in that directory or its children.”
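Under the behavior the documentation describes, a multi-role writing project might be laid out like this (the folder and file names are hypothetical):

```
my-writing/
├── AGENTS.md          # root rules: overall tone, terminology
├── blog/
│   ├── AGENTS.md      # blog-specific rules: post structure, length
│   └── post-draft.md  # inherits both the root and blog/ rules
└── reports/
    ├── AGENTS.md      # report-specific rules: format, audience
    └── q3-report.md   # inherits the root and reports/ rules
```

Editing `post-draft.md` automatically applies the root rules plus the blog-specific ones, with no manual context selection.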
Thus, Cursor-like IDEs provide far more options for manual context management. And it doesn’t consume much time because rules files (rules, AGENTS.md) are often generated with AI itself. However, IDEs do lack automatic context pulled from past chats, akin to ChatGPT’s Memory.
2.3. Version Comparison and AI Edits
When collaborating with AI to create artifacts, focus your cognitive effort on creative work. That’s how you get high quality without spending ages on the more tedious technical part.
This means that you should minimize your effort in the final “technical” stages:
Reviewing and fixing inaccuracies introduced by AI at the final stages is tedious, especially when checking multiple AI versions.
To make this easier, you must clearly see what AI changed, added, or removed — without manually comparing every word in two versions.
Modern AI IDEs highlight changes introduced by AI, at least at the paragraph level. That’s fine for code, where lines are short, but insufficient for natural language. In the example below, Cursor altered only two words, yet you still have to read the whole paragraph to be sure:
Trae does better here by highlighting changes at the word level, but it has its own drawback:
I suspect some IDE extension could satisfy both requirements: word-level change highlighting and full visibility of the original paragraph. For now, I use functionality available “out of the box” in Cursor and Trae:
- My workspace is under Git, just like a software project. (GitHub hosting is free.)
- I commit (save changes) before each AI request, such as “Translate …..md” or “Enhance ….md according to …_rules.md”.
- By opening the file from the Source Control tab, I review and correct AI changes in the side-by-side mode shown below.
By the way, Git also lets you create branches for experiments and then merge them. This helps in some creative workflows: you can give AI two different prompts, get two versions, and combine the best parts. But this is NOT for beginners.
As a beginner, just click Commit before each AI enhancement, and you’ll drastically cut time spent on tedious work.
You can also commit right after AI enhancements, adding the AI model name to the commit message (e.g., fix(Gemini-2.5-Pro): <artifact name>). Later, you can compare the AI result with the final version and measure the size of manual edits. This way, you’ll accumulate Git data about model quality and pick the best model for your process.
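The commit-per-AI-request routine above can be sketched as plain Git commands. This is an illustrative sandbox, not my exact setup: the file name, commit messages, and model tag are placeholders, and the AI edit is simulated with a simple file rewrite.

```shell
# Sketch of the "commit before and after each AI request" workflow.
set -e
cd "$(mktemp -d)"                    # throwaway sandbox for the demo
git init -q
git config user.email "you@example.com"
git config user.name "Example Author"

printf 'First draft.\n' > article.md
git add article.md
git commit -qm "Draft before AI enhancement"      # commit BEFORE the AI request

# ...in the IDE you would now ask, e.g., "Enhance article.md according to rules"...
printf 'First draft, enhanced by AI.\n' > article.md

git add article.md
git commit -qm "fix(Gemini-2.5-Pro): article.md"  # tag the model in the message

# Later: measure how large the AI's changes (or your manual fixes) were
git diff HEAD~1 HEAD --stat
```

Opening the file from the Source Control tab then shows exactly this diff side by side; the `--stat` output is the command-line way to eyeball the size of the edits.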
ChatGPT Canvas also has a similar feature, but it’s less convenient than side-by-side comparison in IDEs and requires too much pairwise sentence comparison:
2.4. The “No Vendor Lock-In” Approach
Let’s clarify that we’re talking about local AI-powered IDEs, not cloud AI agents for developers and vibe-coders like Codex Cloud or Replit. Cursor is the leader here, but cheaper Windsurf and Trae also fit.
The main reason to move from platforms like ChatGPT, Gemini, or Perplexity to a local IDE is usually avoiding vendor lock-in.
These popular AI platforms lock us in because we don’t want to lose:
- our AI chats — especially non-trivial instructions in those chats;
- artifacts created in those chats.
“Losing” these means being unable to find and reuse them quickly. Fear of that loss keeps us paying a vendor (e.g., OpenAI) even when better deals appear.
In theory, you could export/import conversations when switching platforms. For example, moving from ChatGPT to Notion is possible with limitations. But old chats become an archive you search separately — not part of a new chat system.
Unfortunately, the lack of chat portability (point 1 of 2) is also the case for local AI workspaces like IDEs.
However, for complex tasks, artifacts (texts, tables, images, and documents) matter more than chats. Those are what you usually want to reuse as AI context.
Searching for artifacts in long threads is inconvenient, especially since the final version is often refined outside the chatbot. ChatGPT Canvas and Claude Artifacts only partially solve this. Moreover, bulk export of artifacts isn’t supported.
Thus, an AI chatbot is not a full-fledged workspace for documents. Google Workspace and Microsoft 365 are, but they create even stronger vendor lock-in.
An IDE is a local AI workspace that stores your artifacts as text files (e.g., Markdown) in your folder structure. You can switch IDEs at any time, in minutes. No vendor lock-in!
3. The Price to Pay for IDE Advantages
As shown in Sections 2.1–2.2, an IDE helps manage context so that you steadily get the desired results, rather than occasionally getting a “magic” result. Among AI experts, this property is called steerability. But you have to pay for this steerability.
The price of high steerability is increased cognitive load and higher demands on the user’s experience, both with AI and with creating such artifacts manually.
Unlike an IDE, ChatGPT is aimed at beginners and takes context control upon itself using opaque algorithms. That may create a “magical” effect, but outcome quality remains unstable because there is almost no manual context control.
The price we pay for avoiding vendor lock-in (Section 2.4) is also high:
- Discipline in managing instructions. For recurring tasks, AI instructions should be saved in files so they aren’t lost when switching IDEs. While an IDE makes it easy to reference these files — and they can be pulled into context via rules — this approach requires more discipline compared to ordinary chat and may feel unfamiliar or unnatural.
- The IDE is still incomplete as a workspace. It’s convenient for “source” artifacts, but some final artifacts require switching to other apps. In particular, not all artifacts are easy to create from Markdown. For tables, presentations, or images, you may need ChatGPT, NotebookLM, or specialized image generators.
However, there are IDE extensions for non-programmers that partly address these issues: for example, AsciiDoc Slides for creating presentations. The AsciiDoc format is more powerful than Markdown; examples are available here.
As for the popular Markdown format, which has become a de facto standard for AI, it can be easily pasted into many rich text editors. For example, use the “Paste from Markdown” menu item in Google Docs (enable it in Tools → Preferences if missing).
Conclusion
ChatGPT, Gemini, Perplexity, and similar platforms are universal helpers that excel as chatbots for multi-turn conversations.
However, for creating complex artifacts — documents, articles, or knowledge-base elements — you need a product of a different class: a workspace. Working in a chatbot for such tasks leads to frustration, even if you’re not worried about vendor lock-in.
For such tasks, use an IDE’s powerful context-management features, and your artifact-creation speed and overall quality will grow over time. In addition, clear diff highlighting of AI edits boosts review productivity and reduces errors.
How exactly to do this? Using my writing workflow as an example, I described working in an AI IDE in this Substack post.
What’s the place of an AI IDE in the broader picture of AI workspaces? At minimum, look at AI workspaces from two angles — flexibility and collaboration:
Please comment or clap if you’d like me to write a Medium article about AI workspaces with explanations of this diagram and more.