Two parallel tracks — tools & fluency and the Signal Stack — that move research from individual experimentation to shared, trustworthy systems.
Every researcher is using AI. Several have pushed into Builder territory. What we lack is standardization, shared systems, and a way for experiments to compound.
Note: This data covers ChatGPT/Gemini only, not Claude Code, Cowork, or skills. An AI Pulse Survey (week of April 27) will give a fuller picture.
Ending April 17. Setup friction → collaboration friction → handoff gaps. AI comfort converged to 4/5 by Week 3. Effectiveness tracked with process maturity, not tool fluency.
April 8 team working session. Seven components proposed. Interest gathered, healthy skepticism expressed — especially around synthetic users and governance.
Research, product design, and content design developing separate-but-laddered enablement plans. The May 6 design offsite is the shared milestone.
AI-first in consequential work. "Match rigor to risk." Stay at the craft frontier. Share openly — individual experiments must become team knowledge.
Two parallel tracks that reinforce each other. Track 1 creates the conditions for Track 2. Track 2 gives Track 1 a purpose beyond individual fluency.
Get every researcher to a comfortable Operator baseline, with Builders supported to go deeper. Tool access, skillshares, dedicated build time, and a path to the May 6 offsite.
Build the shared systems that define how research uses AI at scale. Evaluation frameworks, behavioral systems, synthetic user investigation, knowledge infrastructure, and the operating model.
Every researcher confident and capable with AI tools. Shared workflows for common tasks. Nobody feeling pressured — just enabled.
Builders: give them visibility, surface their work to leadership, connect experiments to the Signal Stack, and protect their time.
Operators: help them standardize one pattern. Granola recipe + Claude Project is a good starting point. Encourage sharing.
Not yet at the Operator baseline: one real task under deadline pressure, paired with an Operator buddy. The offsite is a natural forcing function.
| Tool | Purpose | Access |
|---|---|---|
| Claude Enterprise | Research Projects with standing context, Ask Thumbtack | Pending enterprise contract (mid-April) |
| Claude Code / Cowork | Skills, automation, behavioral analysis pipelines | IT request template in #research-team-all |
| Granola + Recipes | Structured meeting outputs — "User Interview" and "Research Readout" recipes | Launched; private team folder set up |
| NotebookLM | Cross-corpus querying — "Ask Research" prototype | Available via Okta |
| Prototyping Playground | Research stimulus, prototype testing with users | Broader access planned for offsite |
| ChatGPT Enterprise | General-purpose AI assistant, GPTs, Projects for persistent context | Available via Okta |
| Gemini + Gems | Custom Gems for synthetic users, structured workflows, multimodal analysis | Available via Okta |
| Gemini AI Studio | Advanced prompting, multimodal analysis, prototype evaluation | Available via Okta |
Morning session (10:00–12:30, 2.5 hours). Full Cap D org (~30 people). Format: kickoff → show & tell → problem-first sprints → demo share-out.
Researchers self-select into one Signal Stack area — a research repository skill, a concept evaluation prompt, a behavioral signal prototype — and build a working first draft.
Some researchers join cross-functional groups to build tools that bridge research ↔ design ↔ content: a discovery kickoff skill, a prototype feedback tool.
Every researcher leaves with at least one artifact they can use the following Monday — a skill, a prompt template, a research site, or a prototype.
Show & tell candidates: Synthetic user gem + research site · Behavioral conversation audit at scale · Human evaluation framework for AI-generated content
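As one concrete illustration of what a sprint's "working first draft" could look like, here is a hypothetical concept-evaluation prompt builder. The criteria names follow the shared evaluation criteria (clarity, relevance, trust, usefulness); the function name, wording, and structure are illustrative, not an existing team artifact:

```python
# Hypothetical sprint artifact: a reusable concept-evaluation prompt.
# Criteria mirror the Signal Stack's shared evaluation criteria;
# everything else is an illustrative sketch.
CRITERIA = ["clarity", "relevance", "trust", "usefulness"]

def concept_eval_prompt(concept: str, audience: str) -> str:
    """Build a structured first-pass evaluation prompt for a product concept."""
    rubric = "\n".join(
        f"- {c}: score 1-5 with a one-sentence rationale" for c in CRITERIA
    )
    return (
        "You are giving structured first-pass feedback on a product concept.\n"
        f"Concept: {concept}\n"
        f"Audience: {audience}\n"
        f"Rate the concept on each criterion:\n{rubric}\n"
        # First-pass tools screen and narrow; they don't decide.
        "Do not recommend ship/no-ship; flag open questions for a researcher instead."
    )
```

Note the last line of the template: it encodes the "screen and narrow, don't decide" principle directly in the prompt, so the output stays a first pass rather than a verdict.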
A system of interconnected tools and processes that define how research generates, evaluates, and scales insight in an AI-augmented world.
Note: This is an early snapshot. Components, sequencing, and ownership will evolve as the team works through the strategy and starts building.
Define how we evaluate outputs and experiences. Shared criteria: clarity, relevance, trust, usefulness.
Structured first-pass feedback on outputs and prototypes. Not a replacement for human judgment, and not a way to set product direction.
Predicts how users might respond. Fast, early evaluation before building. Team may decide not to pursue — valid outcome.
Shows what actually happens in use. Paths, drop-offs, hesitation, breakdowns. Ground-truth signal from real behavior.
Speeds up synthesis, study design, and pattern identification. Reduces operational overhead. Not for replacing research.
Makes past research accessible, contextualized, reusable. Captures recurring behaviors and connects research to decisions.
Ensures systems are used and maintained correctly. Confidence levels, escalation paths, when to trust vs. when to verify.
Every component must earn the team's belief.
AI predicts → behavior shows → the gaps between them improve the system.
First-pass tools screen and narrow. They don't decide.
Systems show confidence level and suggest next steps.
Don't build what we won't maintain and calibrate.
No one builds something the team doesn't believe in.
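The principles above can be sketched in code. This is a purely illustrative Python sketch, assuming nothing about the team's actual tooling: `Signal`, `next_step`, and `recalibrate` are hypothetical names, and the confidence thresholds are placeholders.

```python
# Illustrative sketch of the Signal Stack principles: signals carry a
# confidence level, suggest a next step rather than deciding, and synthetic
# predictions are recalibrated against observed behavior. All names and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Signal:
    claim: str          # e.g. "users will skip the intro step"
    source: str         # "synthetic" (predicted) or "behavioral" (observed)
    confidence: float   # 0.0-1.0, set by the component that produced it

def next_step(signal: Signal) -> str:
    """First-pass tools screen and narrow; they don't decide."""
    if signal.source == "behavioral":
        return "treat as ground truth; feed into synthesis"
    if signal.confidence >= 0.8:
        return "spot-check with a researcher, then use"
    if signal.confidence >= 0.5:
        return "verify with a small live study"
    return "escalate: do not act without human research"

def recalibrate(predictions: list[Signal], outcomes: dict[str, bool]) -> float:
    """Close the loop: AI predicts -> behavior shows -> gaps improve the system.
    Returns the share of synthetic predictions that observed behavior confirmed."""
    checked = [p for p in predictions if p.claim in outcomes]
    if not checked:
        return 0.0
    return sum(outcomes[p.claim] for p in checked) / len(checked)
```

For example, `next_step(Signal("users will skip the intro step", "synthetic", 0.6))` returns "verify with a small live study": a mid-confidence synthetic prediction routes to a small human check rather than to a decision, which is the governance principle in miniature.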
Connected roles drive the Signal Stack forward.
April 15 – June 30, 2026. Concrete actions organized into four phases.
How we sustain momentum without adding more meetings or pressure.
Rotating share in team meetings: what was tried → what happened → what was learned. No polish.
Show, Don't Tell. One person goes deep on a build. Actual tool, actual output.
Signal Stack check-ins. Component leads: where we are, what's next, blockers.
Use case review. What's worth formalizing? What goes in the playbook? What do we stop?
Research isn't just a participant in AI-assisted product development. Research is the function that determines whether it produces trustworthy outputs.
The Prototype-First Pilot's core open questions — "Where does quality degrade? What guardrails are required?" — are Research questions. The Week 3 finding that prototypes expose product gaps that mocks wouldn't have surfaced is a Research finding. The async feedback gap is a Research problem. The "match rigor to risk" framework is becoming the operating principle for the whole org.
The Signal Stack — if developed with the rigor and skepticism this team brings — becomes the continuous learning loop between people and AI systems that the company needs.
Make sure leadership knows what's being built here. Make sure it's in the playbook. And make sure the team believes in it — because systems no one trusts are worse than no systems at all.