AI Copilot for Hedge Fund Analysts

Designing a high-frequency, data-dense research interface where information hierarchy, real-time accuracy, and analyst trust are non-negotiable constraints.

Role

Lead Product Designer

Team

Product Manager
AI Engineer

Scope

0 → 1 Internal Tool

Overview

A 0→1 internal tool for a $1B hedge fund in NYC — replacing a fragmented, manual research workflow with a structured, AI-assisted interface built for speed, accuracy, and trust.

The tool shipped as a core part of the fund's pre-meeting workflow. Analysts stopped treating it as optional after the first week.

85%

Reduction in earnings call prep time, from 60 minutes to 5

80%

AI-generated questions adopted in live analyst meetings

4x

Faster KPI modeling and post-call turnaround

The Problem

Analysts are drowning in context-switching, not a lack of intelligence.

The tools exist. The data exists. The AI exists. But none of it talks to each other — and the analyst pays the cost every time.

Problem 1 — Too many tools, zero continuity

Analysts juggle AI assistants, OneNote, research docs, and manually fetched data files, all open, none connected. Every insight requires a context switch. Every data point requires manual retrieval. The workflow was never designed as a system.

Opportunity
Unify the research workflow into a single surface, eliminating manual data fetching and context switching without disrupting existing analytical habits.

Problem 2 — AI exists in the workflow but can't be trusted in the room

Existing AI tools produce shallow, unverifiable outputs with no source traceability. Bringing an AI-generated insight into a live meeting is a credibility risk, so analysts revert to manual verification every time, defeating the purpose entirely.

Opportunity
Design an AI interface that earns trust through transparency: cited outputs, editable results, and analyst control at every step.

Key Decisions

The decisions that shaped how the product thinks, not just how it looks.

Decision 1 — Information Architecture Was the Product

What I did: Designed the entire IA and user flows from scratch, including multi-source data fetching, OneNote sync, source organization, and a sidebar research memory system showing live research status.
Why: With data coming from multiple disconnected sources, the architecture itself was the core UX problem. Without a clear organizational model, every other design decision would collapse.
Result: Analysts had a single, organized workspace where all sources, statuses, and outputs were visible and connected without manual reconciliation.

Decision 2 — Citations Were Non-Negotiable

What I did: Designed every AI output to be traceable to its exact source, down to the specific line and page of the original document.
Why: Discovered through stakeholder calls that analysts could not use any insight they couldn't verify. Trust wasn't a nice-to-have; it was the adoption barrier.
Result: Analysts brought AI-generated insights into live meetings with confidence for the first time.
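
The traceability requirement implies a simple invariant in the underlying data model: an insight without a citation is not presentable. A minimal sketch of that idea (illustrative names only, not the fund's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    """Pointer from an AI claim back to its exact source location."""
    document: str   # e.g. a transcript filename
    page: int
    line: int
    quote: str      # verbatim text the claim is grounded in

@dataclass
class Insight:
    """An AI-generated claim, unusable in a meeting without citations."""
    text: str
    citations: list[Citation]

    def is_verifiable(self) -> bool:
        # The UI only surfaces insights that pass this check.
        return len(self.citations) > 0
```

Making the citation mandatory at the data layer, rather than a UI afterthought, is what lets every rendered insight link back to a specific line and page.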

Decision 3 — Simpler Interaction Over Richer Features

What I did: Of the two versions I designed, shipped the one with the simpler interaction model, reducing the number of actions required per research session.
Why: Observed that analysts weren't making frequent discrete calls to the tool. A lighter interaction model matched actual usage patterns better than a feature-rich one.
Result: Lower cognitive load during high-pressure prep without sacrificing output quality.

Decision 4 — Multi-Model Support

What I did: Designed the tool to support multiple AI models, allowing analysts to switch models and regenerate results within the same research session.
Why: Different analysts trusted different models. Locking to one model would have created adoption resistance and excluded valid working preferences.
Result: Broader team adoption and flexibility to evaluate outputs across models, with comparison capability on the roadmap.
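
The key interaction here is that the analyst's prompt stays fixed while the model underneath is swapped and the output regenerated. A rough sketch of that session shape (the registry names and echo-style stubs are hypothetical, not the fund's real providers):

```python
from typing import Callable

# Hypothetical registry: model name -> callable from prompt to output text.
MODELS: dict[str, Callable[[str], str]] = {
    "model-a": lambda prompt: f"[model-a] {prompt}",
    "model-b": lambda prompt: f"[model-b] {prompt}",
}

class ResearchSession:
    """Keeps the research prompt stable while the analyst swaps models."""

    def __init__(self, prompt: str, model: str = "model-a"):
        self.prompt = prompt
        self.model = model

    def switch_model(self, model: str) -> None:
        if model not in MODELS:
            raise ValueError(f"unknown model: {model}")
        self.model = model

    def regenerate(self) -> str:
        # Same prompt, different engine: the basis for side-by-side evaluation.
        return MODELS[self.model](self.prompt)
```

Because the session, not the model, owns the prompt, a future comparison view only needs to call `regenerate()` once per registered model.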

Decision 5 — Research Intent Before Output

What I did: Designed a research intent layer where analysts select and define their intent before the AI generates anything. Prompts are stored, editable, and reusable across sessions.
Why: Without a defined intent, AI outputs were generic. Giving analysts control over the prompt layer made outputs more precise and reduced post-generation editing significantly.
Result: Analysts fine-tuned prompts over time, creating a personalized research workflow that improved with use.
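
The "stored, editable, reusable" behavior amounts to a small named-prompt library that persists between sessions. A minimal sketch, assuming a simple key-value store (class and method names are illustrative):

```python
import json

class PromptLibrary:
    """Stores named research intents; analysts edit and reuse them across sessions."""

    def __init__(self):
        self._prompts: dict[str, str] = {}

    def save(self, name: str, prompt: str) -> None:
        # Saving under an existing name overwrites it: editing is just re-saving.
        self._prompts[name] = prompt

    def load(self, name: str) -> str:
        return self._prompts[name]

    def export(self) -> str:
        # Serialize so a tuned library can persist between sessions.
        return json.dumps(self._prompts)
```

The overwrite-on-save semantics are what let a prompt improve with use: each refinement replaces the last, and the library compounds into a personalized workflow.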

The Solution

An AI-powered analyst copilot that transforms transcripts into structured insight, enabling faster, smarter, and more confident investment decisions.

A modular web tool where hedge fund analysts can upload earnings calls, instantly extract KPIs, surface key quotes, and generate custom follow-up questions — all with traceable sources and editable outputs. The system fits directly into existing workflows, reducing prep time from hours to minutes and standardizing analysis across teams.

Smart Question Generation That Mirrors Analyst Thinking

AI-generated follow-up questions based on KPI movement, soft guidance, or missing context. Each question is editable and backed by the exact quote or metric that triggered it.

Win: Analysts adopted 80% of AI-generated questions into live meetings — cutting prep time while maintaining precision.
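
The "triggered by KPI movement" behavior can be pictured as a threshold rule: any metric that moved more than some cutoff quarter-over-quarter drafts a question carrying its trigger. A simplified sketch (the 5% threshold and function name are assumptions, not the production logic):

```python
def draft_questions(kpis: dict[str, tuple[float, float]],
                    threshold: float = 0.05) -> list[dict]:
    """Draft editable follow-up questions for KPIs whose (prev, curr) values
    moved more than `threshold` quarter-over-quarter."""
    questions = []
    for name, (prev, curr) in kpis.items():
        if prev == 0:
            continue  # avoid dividing by zero on new or missing metrics
        change = (curr - prev) / abs(prev)
        if abs(change) >= threshold:
            direction = "rose" if change > 0 else "fell"
            questions.append({
                "question": f"What drove the {abs(change):.0%} move in {name}?",
                "trigger": f"{name} {direction} from {prev} to {curr}",
                "editable": True,  # analysts rewrite before the meeting
            })
    return questions
```

Attaching the trigger to each question mirrors the citation principle above: the analyst always sees why the question was asked.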

Verified Summaries Built for Speed and Trust

The tool produces concise summaries of earnings and corporate access calls, segmented by topic and supported with direct citations. Every insight links back to a quote or metric — so nothing is taken at face value.

Win: Analysts skipped reading 60-plus-page transcripts and trusted the assistant as a starting point for post-call debriefs.

KPI Modeling With Color-Coded Precision

Extracted financials auto-populate a table that shows quarter-over-quarter changes with visual highlights. Analysts can edit values and trace them back to original statements instantly.

Win: Updating models took minutes instead of hours — with full control, context, and exportability.
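
The quarter-over-quarter view with visual highlights reduces to a small transform: compute each metric's delta against the prior quarter and map its sign to a color. A sketch under assumed conventions (green up, red down, grey flat; not the tool's actual palette or schema):

```python
def qoq_table(rows: dict[str, list[float]]) -> list[dict]:
    """Build a quarter-over-quarter view from per-metric quarterly values
    (each list needs at least two quarters); 'flag' mimics the color coding."""
    table = []
    for metric, quarters in rows.items():
        prev, curr = quarters[-2], quarters[-1]
        delta = curr - prev
        flag = "green" if delta > 0 else "red" if delta < 0 else "grey"
        table.append({"metric": metric, "prev": prev, "curr": curr,
                      "delta": round(delta, 4), "flag": flag})
    return table
```

Keeping `prev` and `curr` in each row is what makes the UI's edit-and-trace behavior possible: an analyst can override `curr` and still see what it replaced.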

Impact

Through this AI copilot, hedge fund analysts were able to accelerate their research workflow, increase confidence in their prep, and reduce the time spent parsing dense earnings and corporate access calls, all without sacrificing precision or control.

85%

Reduced call prep time by 85% by automating transcript parsing, KPI extraction, and question generation.

4×

Improved research velocity across analyst teams with 4× faster turnaround on KPI modeling and follow-up notes.

80%

80% of AI-generated questions were adopted directly into live meetings — enhancing insight quality and team alignment.