Version 3.0 of the Gab AI Dashboard

Today we’re thrilled to release Gab AI Dashboard Version 3.0, the biggest upgrade since we first flipped the switch on our platform a year and a half ago. After months of relentless work and thousands of user suggestions, we have rebuilt, refined, and radically expanded what a human–AI workspace can be. If you liked the freedom and clarity we stood for in prior versions, Version 3.0 is going to feel like your entire desktop just moved into a single streamlined tab. No hype, no buzzwords, just the capabilities you told us you wanted, delivered exactly as requested.

First up is the new Gab AI File Management System. Upload a document once, whether it’s a PDF, a spreadsheet of production data, or the latest chapter of your novel, and from that moment on it lives in your personal library inside Gab AI. Need to reference the same warranty rider in three different chats tomorrow? Just open a new window and summon the file. Starting a project that requires cross-referencing a dozen blueprints and a set of meeting transcripts? Drag all the files into the composer and begin a multi-file anchored conversation without the friction of hunting down what you “might have uploaded last week.”

File uploading itself finally matches the elegance of the rest of the experience. Drop a file into any active thread from your local folder, or pick one from your existing Library; both paths coexist seamlessly in the same drag-and-drop lane.

Version 3.0 also introduces Multi-Model Conversations, quietly the most user-demanded feature in our backlog. Once you’re in a discussion with Arya, you can swap on the fly to GPT-4o, Claude, or any specialty model you have activated, then watch the thread continue as though a new expert just walked into the room. The message chain remains intact, context is preserved, and footnotes are appended seamlessly, so you never lose the narrative thread. Comparative reasoning across AI models is finally friction-free: drift between models as effortlessly as switching tabs in a browser.

Which brings us to Agents: special-purpose assistants you can summon exactly when they’re needed and stash away when they’re not. Each agent is a personality-profiled model, reinforcement-trained on a narrow domain. Use our premade agents or create a custom one that meets your needs.

Agent Tagging is what makes all of this feel like real-time magic. Hit the @ symbol in any chat window, and the dashboard surfaces every agent you own plus the ones we created by default. Tag one inline, just like you would a coworker in Slack, and the agent parachutes into that exact moment, responding contextually from the viewpoint you trained it on.

Organization now moves beyond folders into Collections. Think of Collections as self-contained workspaces. Collections travel with you across devices and keep all shared chats synchronized so nothing slips. You can also set custom instructions for every chat inside a collection; for example, if you are writing a book, the instructions could be something like “You are an expert editor and will help me refine copy for my book.”

Persistent Memory is the attention span extender we have all waited for. The feature is disabled by default, but when activated it grants every model access to your private knowledge vault so you stop re-introducing yourself in every new window. Memories can be pruned granularly and can be fully deleted at any moment. Early next quarter we’ll add an optional automatic memory system that suggests facts to store and lets you approve or reject them one by one.

Appearance Customization finally removes the remaining cold-industrial vibe. Toggle between modes, remix accent colors with exact hex precision, or select chat presentation styles that range from black to warm parchment. All preferences sync across mobile and desktop, so the environment you spend eight hours inside feels like a personally tailored cockpit instead of rented real estate.

Our new Credit System introduces radical transparency. Unlimited simultaneous conversations were and remain our gift to users, but every single prompt uses compute—some more, some less—so hiding those costs behind hand-waving was unsustainable. We removed the ambiguity.

Arya, our signature model, is still completely free of credits. The meter starts only when you dial up GPT-4o, Claude, or any paid third-party AI. The number of credits burned by each invocation is displayed right on the model label before you decide to send, so there is never sticker shock at month-end. If you run the workflow most of our users run—primarily Arya, occasional side detours—you will deplete credits so rarely that you may forget we even instituted the change. Soon we will allow power users to purchase additional credits as needed.

Credits are calculated from the cost of the model you are using combined with the size of the conversation’s context window. It’s good practice to start new conversations often rather than letting hundreds of messages pile up in a single chat window, since a longer context drives up credit and compute costs.
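To make that advice concrete, here is a minimal sketch of how a context-sensitive charge could behave. The estimate_credits function, the per-token rates, and the exact formula are illustrative assumptions for this example, not Gab AI’s actual billing logic.

```python
# Illustrative sketch only: these rates and this formula are assumptions,
# not Gab AI's actual billing logic.

# Hypothetical credit rates per 1,000 tokens for each model.
MODEL_RATES = {
    "arya": 0.0,     # Arya remains credit-free
    "gpt-4o": 1.0,   # assumed rate, for illustration
    "claude": 1.2,   # assumed rate, for illustration
}

def estimate_credits(model: str, context_tokens: int, prompt_tokens: int) -> float:
    """Estimate the credits one message might burn. Because the existing
    conversation context is processed along with each new prompt, longer
    chats cost more per message."""
    rate = MODEL_RATES[model]
    billable_tokens = context_tokens + prompt_tokens
    return rate * billable_tokens / 1000

# The same 200-token prompt in a fresh chat versus a long-running one:
print(estimate_credits("gpt-4o", context_tokens=500, prompt_tokens=200))     # 0.7
print(estimate_credits("gpt-4o", context_tokens=50_000, prompt_tokens=200))  # 50.2
```

Under these assumed numbers, the identical prompt costs roughly seventy times more in the long chat, which is exactly why starting fresh conversations keeps the meter low.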

Everything above is live this instant. Version 3.0 is not merely an update; it is the culmination of every feature request you filed, every edge case you troubleshot, and every byte of feedback you sent us at two a.m. Thank you for building this with us; thank you for choosing Gab AI. We cannot wait to see what you create next.