Stop asking AI questions. Start having it do things for you.
Over the last few months, I’ve interviewed somewhere between 50 and 60 software engineers, a few dozen digital marketing specialists, and around 20 industrial designers. Different roles, different industries.
At the end of every conversation, I’ve been asking the same question: “How has your AI journey been so far?”
About 90% of them tell me some version of the same answer. They’ve used ChatGPT, or Claude, or Gemini. They open it when they hit something they want help thinking through. They type a question. They read the answer. They close the tab. On average, maybe once a week.
Quick aside before I get to the actual point of this article, because I don’t want it to land wrong. If that sounds like you, you’re already ahead of most people. Pew Research found that only about a third of US adults have even tried ChatGPT, which means just by using it once a week, you’re past roughly two-thirds of the country. The tool is helpful. The answers are useful. You’re getting real value from it. Nothing in the rest of this piece is meant to suggest you’re behind, or that you’ve been using AI wrong. You haven’t.
What I do want to invite you to consider is this: what you’re doing is one room in a much bigger house, and most people I talk to don’t yet know the other rooms exist.
Where most people actually are
The “occasional question” pattern isn’t just what I see in interviews. It shows up in the data too.
Pew Research found in March 2026 that 31% of Americans interact with AI at least several times a day, up from 22% in February 2024. Adoption is real, and it’s accelerating. But of the slice who use AI regularly, almost everyone is still in what I’d call helper mode: ask a question, read an answer, move on.
The Anthropic Economic Index, in its March 2026 report on data through February, splits AI usage into two patterns. AI as helper, where the human stays in the driver’s seat and the model assists. And AI as operator, where the model plans, asks its own questions, and executes a multi-step task. Helper mode is where most people live. Operator mode is rare, and concentrated. The same report notes that automated API workflows for business sales and for trading both more than doubled between November and February. The shift is happening, but it’s happening in narrow lanes (sales, trading, coding), and almost everywhere else, people are still using AI the same way they were a year ago.
That gap between helper and operator is where the real leverage lives. It’s also where almost nobody is yet.
The two things you can do today
I’m not writing this from a research desk. I’m writing it from the middle of my own experiment.
For the last 90 days, I’ve been running three businesses (Qandaba, Soláyae, and Derezd) with no IT team and a small marketing crew in the Philippines. Almost everything I get done in a day, AI is doing some part of. And most of the time it isn’t the “help me write this email” kind of help. It’s “here’s the outcome, go run it, come back when you need a decision.”
The difference between those two modes is not subtle. It’s the difference between using a tool and operating a leverage system. Two specific shifts get you there. Neither is hard. Both feel a little weird the first time.
First, let the AI ask you the questions
Most people prompt AI like this: “Write me a strategy doc for X.” Thin instruction in, thin result out. They blame the model.
The shift is to flip who’s asking. “I’m trying to figure out X. Don’t answer yet. Ask me whatever you need to know to give me a strong result.” Then you sit with it for three to five rounds. Most people have no idea what context the AI actually needs to do good work. The AI does. Letting it interview you produces a result that’s two or three times better with the same effort on your end.
Try it with something you actually need to do. Say you’re planning a family vacation in Italy this summer. The default move is to type “give me a 7-day Italy itinerary” and copy whatever comes back. But you didn’t tell it that you’re traveling with two teenagers and a partner who gets motion sick on trains. You didn’t tell it your budget, your food preferences, how many cathedrals you can stomach in a day, or that one of you has bad knees and can’t do a four-hour walking tour of Rome. So you get a generic itinerary that fits nobody.
Now flip it. Tell the AI: “I want to plan a family vacation in Italy this summer. Don’t give me an itinerary yet. Ask me everything you need to know to build one I’ll actually love.” It will ask. About the kids, the budget, the pace, what you’ve already seen, what you’re trying to avoid. Five minutes of back-and-forth, and the itinerary it produces is custom-built for your family, not for the average reader of a travel blog.
The same trick works almost anywhere context matters more than information. Drafting a hard email to a boss or a landlord. Preparing questions for a doctor’s appointment. Writing a performance review for someone who reports to you. Negotiating with a contractor over a renovation quote. Planning meals around a dietary restriction and a tight schedule. Helping a kid through a tough homework assignment. Picking a car. Writing a wedding toast.
Anywhere a generic answer would feel hollow, this move turns it sharp.
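If you drive models through an API rather than a chat window, the same interview pattern can be scripted. This is a minimal sketch under stated assumptions: `ask_model` and `answer_fn` are hypothetical callables standing in for whatever SDK and input method you use, not any specific vendor’s API.

```python
def interview_then_answer(goal, ask_model, answer_fn, rounds=3):
    """Have the model interview you before it answers.

    ask_model(messages) -> str : sends the chat history to your model (hypothetical).
    answer_fn(questions) -> str : collects your answers (e.g. the built-in input).
    """
    messages = [{
        "role": "user",
        "content": (
            f"I'm trying to accomplish this: {goal}\n"
            "Don't answer yet. Ask me whatever you need to know "
            "to give me a strong result, a few questions at a time."
        ),
    }]
    for _ in range(rounds):
        questions = ask_model(messages)  # the model asks, you answer
        messages.append({"role": "assistant", "content": questions})
        messages.append({"role": "user", "content": answer_fn(questions)})
    messages.append({
        "role": "user",
        "content": "That's all the context I have. Now give me the result.",
    })
    return ask_model(messages)
```

In practice, `ask_model` would wrap your provider’s chat-completion call, and `answer_fn` could simply be `input`. The structure is the whole trick: the model spends its first few turns pulling context out of you before it produces anything.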
Second, hand it the goal, not the steps
The default behavior is to ask AI a question, get an answer, then go do the work yourself.
The shift is: “Here’s the outcome I want. Plan it, ask me only when you need a decision, then go do it.”
This is where agentic AI lives. The tools that read your inbox, edit your files, hit your servers, browse the web, generate slide decks, pre-draft your email responses, design entire workflows, build prototype applications, spin up landing pages, write production code, run recurring jobs while you sleep.
Sit with this for a moment: pretty much anything you can do on a computer, you can either ask an agent to do for you or build one to do it. That’s where we are in April 2026. Not theoretically. Right now.
The skill is no longer prompt writing. The skill is delegation. What to hand off, what to keep for yourself, and how to review the work fast enough to keep the loop moving.
What this actually looks like
Let me show you what I mean. A few examples from the last 90 days.
This article you’re reading right now. I gave AI the rough outline (the thesis, the audience, what I wanted readers to take from it) and pointed it at the instructions I’d already written for how I publish on 9 Tunnels, on LinkedIn, and on the Qandaba page. From that, it produced this article and the LinkedIn variants in parallel. My job became reviewing, pushing back, asking for rewrites in my voice, and shaping the final result. Total time on my end: somewhere between 30 minutes and an hour. The same piece, written by hand the way I used to do it, even with ChatGPT helping me research, was a five- or six-hour day. The work didn’t disappear. I just stopped doing the parts AI does better than I do.
A virtual machine crashed on a Saturday morning. I was eating breakfast when the alert came in. Old me would have spent the rest of the day in logs. New me handed it to my AI agent, kept eating, and came back to a recovered VM and a one-paragraph summary of what had happened.
A 15-second ad for Soláyae. Soláyae is the Filipino artisan handbag company my wife Elisa and I run. I wanted a professional video for one of our handbags, and I gave AI a few cues for inspiration: an Intramuros backdrop, a Filipina model, the bag as the hero. I didn’t know how to make it. The first thing AI did was tell me exactly what to do. Which tools to use, what to feed each one, how to stitch it all together. The second thing it did was actually do most of the work itself. It wrote the prompts straight into Gemini Veo, coordinated between tools, and where it couldn’t act directly, it gave me clear instructions for just that one step. When I got stuck on a transition between two clips I was splicing together, I gave it control of the app and it fixed the transition itself. Six hours of work. Three months ago that would have been a four-figure shoot day.
Email triage that scores by what I value, not by what a generic system values. This one is small, and it changed my life. Not “important versus unread.” A scoring system tuned to my actual priorities (client work, family, the three businesses, signal over noise), running five times a day. Once, when the scoring logic had an inverted flag, the agent found the bug and fixed it. I haven’t seriously triaged my inbox in two months.
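The scoring idea itself is simple enough to sketch. The categories, keywords, and weights below are invented placeholders, and a real agent would classify with a model rather than keyword matching; this just shows the shape of “score by what I value”:

```python
# Toy priority scorer: made-up categories, keywords, and weights.
WEIGHTS = {"client": 50, "family": 40, "business": 30, "newsletter": -20}

KEYWORDS = {
    "client": ["invoice", "contract", "deadline"],
    "family": ["school", "pediatrician", "trip"],
    "business": ["qandaba", "solayae", "derezd"],
    "newsletter": ["unsubscribe", "digest"],
}

def score_email(subject, body):
    """Add a category's weight once if any of its keywords appear."""
    text = (subject + " " + body).lower()
    score = 0
    for category, words in KEYWORDS.items():
        if any(w in text for w in words):
            score += WEIGHTS[category]
    return score

def triage(emails, threshold=30):
    """Return emails above the threshold, highest score first."""
    scored = [(score_email(e["subject"], e["body"]), e) for e in emails]
    return [e for s, e in sorted(scored, key=lambda t: -t[0]) if s >= threshold]
```

The point isn’t the keyword matching, which a real setup would replace with a model-based classifier; it’s that the weights encode your priorities instead of a mail provider’s defaults.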
Recruiting for a Qandaba client. I needed Node.js engineers for a project. Sixty-four LinkedIn resumes downloaded, candidates contacted, interviews booked on my Calendly. I didn’t run any of it. I just showed up to the interviews.
A shorter list of the rest, less storytelling, more inventory:
- VPN rebuild from scratch with fresh certificates, the kind of task that eats half a day when done by hand.
- DNS migration for Derezd to Cloudflare, GPU transcoding, NAS configuration, AWS and homelab security hardening. None of it on my keyboard.
- This site. The design system, the publishing workflow, the patterns. Built and iterated with AI doing most of the building.
- Working prototype apps, roughly 80% of the way to functional, that we put in front of potential clients instead of a pitch deck. They get to use the product, not imagine it from a slide.
- Landing pages for proposals. When a Qandaba pitch needs a custom one-pager or an interactive demo, AI builds it the same day.
- Production code for Derezd, the stream-to-print platform, written and reviewed under a verification framework I’ve been testing.
- Recurring jobs. A daily top-five AI news brief in my inbox by 6am. Flight-price monitoring for an upcoming Philippines trip. Weekly homelab health reports.
- Wispr Flow for dictation. I almost never type prompts to AI anymore. I dictate them through Wispr Flow and let it transcribe. A 30-second voice brief replaces five minutes of keyboarding, and I end up giving a fuller, more natural brief because talking is faster than typing.
None of these are the same task. The pattern across all of them is identical: I’m not asking. I’m assigning outcomes.
This isn’t only happening in my office
The same pattern shows up across the public reporting and case studies (a few of them are linked at the bottom of this piece). Engineers are running code agents that ship features end to end: open the pull request, run the tests, iterate, hand back something reviewable. Founders and operators are delegating inbox triage, meeting prep, weekly board updates assembled from raw data, customer-support first-touch responses. Researchers are using deep-research style tools to assemble sourced briefings overnight that used to take a week. Support and operations teams are routing routine work through agents and keeping humans for the escalations.
The companies getting outsized results from AI right now aren’t ahead because they bought better tools. They’re ahead because their people stopped asking and started delegating.
The shift, in one line
The people who figure this out first will look unreasonably productive to everyone else. They’re not smarter. They just stopped using AI like a search bar.
So here’s what I’m asking you to do this week. Pick one task you do regularly. For the first move, don’t change the tool. Change the prompt. Tell the AI not to answer yet. Make it interview you. Then hand it the whole outcome and see what comes back.
For the second move, you’ll likely need to change the tool. Most chatbots have an agentic sibling built to actually do work, not just answer questions:
- If you use ChatGPT, try Codex (or Workspace Agents if you’re on a team plan).
- If you use Claude, try Claude Code (or Cowork for non-coding tasks).
- If you use Gemini, try Antigravity.
Pick the one that matches what you already use. Hand it a small outcome, not a question, and see what happens.
That’s the cheapest way to find out whether your current AI habits are leaving most of the value on the table.
I think you’ll be surprised by how much you’ve been missing.
Sources and case studies
- Klarna’s AI customer service assistant handled two-thirds of customer service chats in its first month, doing the work of 700 full-time agents and dropping resolution time from 11 minutes to under 2, with $40M of projected profit impact. (Klarna press release, OpenAI case study)
- Shopify Sidekick crossed 750,000 shops in a single quarter; merchants delegate inventory forecasting, app installation, and live site iteration to it. (Shopify Engineering)
- Anthropic engineers self-report using Claude in 60% of their work for a 2-3x productivity gain, with the share of work where Claude is implementing new features rising from 14% to 37% in six months. The majority of Anthropic’s code is now written by Claude Code. (How AI is transforming work at Anthropic)
- The helper-versus-operator framing maps to the augmentation-versus-automation split in the Anthropic Economic Index, March 2026 report.