
Report Assistant
Reducing report-writing burden for law enforcement officers without compromising accuracy or trust
Overview
Report Assistant originated as a 0 → 1 effort to see how AI could accelerate police reporting without compromising the legal integrity of the document. Since 'hallucinations' aren't an option in a court of law, I led the design of a human-in-the-loop system that keeps the officer in total control of the narrative.
Strategically, we prioritized a browser extension to multiply our market reach beyond the native Axon ecosystem. I translated the high-level requirements and ethical risks into a functional architecture that bypassed the technical debt of legacy systems. This enabled any agency to adopt our AI tools without a total software overhaul.
Project summary
My role
Lead designer: Partnered with Product, Engineering, and agency development partners to architect the end-to-end experience.
Design Mentorship: Led the project’s design strategy in collaboration with a partner designer, ensuring a cohesive experience across overlapping workstreams.
Product Strategy: Co-defined the MVP and long-term vision with Product and Engineering partners, balancing speed-to-market with the high usability standards required for field operations.
Details
Platform: Native Web (Axon Records product) and a cross-platform browser extension.
Systems: Leveraged and extended the Axon design system for generative AI requirements.
Scope: 0 → 1 discovery, product definition, rapid prototyping, user testing and pilot delivery.
Business impact: Launched a customer pilot, supported agency testing and demos, and identified new product opportunities through field visits.
User problem: a fragmented, high-stakes workflow
Report writing rarely happens in ideal conditions. Officers typically complete reports on small in-car laptops after a call has ended, typing in short bursts between interruptions. They are expected to complete 8–12 separate forms per shift, many of which require redundant data entry.
When their shift ends, the work spills over. Officers finish reports later at home or return to them the next day, relying on memory. As time passes, details fade and inconsistencies can emerge, introducing legal risk.
Many officers expressed frustration that having to complete paperwork reduces time spent in the community. At the agency level, report quality varies significantly across officers, creating downstream challenges for records management, investigations, and compliance workflows.
The goal was not to eliminate writing, but to meaningfully reduce the time and cognitive load. We set an internal goal of reducing report-writing time by roughly 50%, while preserving accuracy and human judgment.
Design framing
Constraints: the hard realities
Accountability: Officers are legally responsible for every word.
System Agnostic: Must work across any form, legacy system, or regulation.
Lightweight Integration: Immediate utility without deep platform integrations.
Verifiable Accuracy: Content must be traceable to prevent legal risk from hallucinations.
Design principles
Documentation, Not Decision-Making: AI supports documentation, never the investigation.
Human-in-the-loop: Oversight is prioritized over "one-click" automation.
Risk mitigation over speed: Accuracy and integrity are non-negotiable.
Design for Skepticism: Encourage critical review rather than blind adoption.
Solution overview
Report Assistant explored a human-in-the-loop approach to report completion, focused on reducing time-consuming report writing while preserving officer accountability.
Workflow
1. Context Selection: When starting a report, officers select relevant body-worn camera data via timestamps or summaries to instantly ground the documentation in evidence.
2. Smart Suggestions: Using the video as the source of truth, Report Assistant generates field-level suggestions mapped directly to the report form, appearing as optional, inline ghost text.
3. Review and Refine: Officers review, edit, or selectively insert suggestions. They can also use dictation to quickly add nuanced details or observations not captured by video footage.
4. Multi-Form Sync: Once verified, data auto-populates across redundant forms and internal databases, ensuring consistency and eliminating repetitive manual entry.
Concept walkthrough: defining the future reporting experience
I visualized how a browser-based extension could provide AI assistance directly within the officer's existing workflow. The concept served as a communication tool, helping the product team align on the roadmap and allowing us to present the direction to leadership.
In addition to contributing to the short-term roadmap, I defined a longer-term vision that incorporated existing system data, such as officer profile information, dispatch records (call times and locations), and law-enforcement database results, to pre-fill factual fields. While this work was not built during the pilot, it influenced architectural decisions to ensure the product could evolve without rework.
Responsible AI: designing for accountability
One of my most critical design decisions was defining what not to automate.
We intentionally excluded fields requiring legal judgment, such as arrest charges and individual roles, from AI suggestions. Automating these fields risked introducing systemic bias and encouraging 'automation bias' (where officers defer judgment to the machine).
The Boundary: AI assists with documentation; the officer remains the sole author of judgment.

Intentional Friction: We explicitly avoided auto-filling high-consequence fields where human verification is legally and ethically paramount.
Deep dive: designing for high-stakes review
How can we ensure 100% accuracy without destroying the efficiency gains?
Another design decision I gave particular care and time to was how AI-generated form-field suggestions should be reviewed and inserted into reports. Given the consequences of errors, review could not be treated as a lightweight confirmation step.
The problem: the "bulk approval" trap
Our initial engineering prototype used a bulk-review sidebar. This approach minimized engineering effort and let us test the concept quickly. In the field, however, I observed a dangerous pattern: officers were skimming rather than reviewing with rigor, catching errors only after they had approved suggestions into the form. The separate UI created a cognitive gap that led to passive approval.

The exploration: calibrating friction
Based on these observations, I explored two alternative review models:
Section-by-section approval
Inline, field-by-field approval within the form
Both increased rigor, but differed significantly in how they related to the officer’s mental model.
I mapped out the spectrum of friction to find the "sweet spot" between officer efficiency and cognitive oversight. Backed by user research, this was used to align leadership on the appropriate level of friction.
Final direction: inline suggestions via ghost text
To match existing officer workflows, I advocated for a field-by-field review model where suggestions appear directly within the form.
Explicit action: Press Tab to accept a suggestion.
Effortless rejection: If a suggestion is incorrect, the officer simply ignores it and keeps typing. There are no "X" buttons to click or boxes to clear, ensuring the UI never gets in the way of their train of thought.
Matching mental model: By showing suggestions directly within the form, officers can review content in its natural context, eliminating the cognitive strain of jumping between separate pages.

By using inline ghost text, I minimized workflow disruption. Officers can evaluate suggestions within the context of the form and easily ignore them, avoiding the cognitive overhead of a separate UI.
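The accept/ignore contract described above can be expressed as a small state reducer. This is a minimal sketch under assumed names (`GhostState`, `onKey` are hypothetical); a real extension would wire this logic to keydown events on the form field:

```typescript
// Hypothetical sketch of the Tab-to-accept / type-to-ignore contract.

interface GhostState {
  committed: string;   // text the officer has authored or accepted
  ghost: string;       // pending AI suggestion rendered as ghost text
}

function onKey(state: GhostState, key: string): GhostState {
  if (key === "Tab" && state.ghost) {
    // Explicit action: Tab accepts the suggestion into the field.
    return { committed: state.committed + state.ghost, ghost: "" };
  }
  // Effortless rejection: any other keystroke simply continues the
  // officer's own text; the suggestion is dropped with no dismiss UI.
  return { committed: state.committed + key, ghost: "" };
}
```

The key design property is that rejection requires no dedicated gesture: the officer's own typing is the rejection, so the UI never interrupts their train of thought.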
Validating the interaction
Outcomes (pilot signals)
The pilot is currently live using the initial sidebar interface. While we continue to aggregate formal metrics, the real-world usage has provided critical signals for the product's evolution:
Efficiency gains: Even with the v1 interface, officers have self-reported reductions in report completion time, moving us closer to the 50% efficiency target.
Overcoming trepidation: Officers who initially felt skeptical of AI reported that the explicit controls gave them the confidence to use the tool.
Engagement: Daily usage has remained consistent, showing that the core value proposition holds even in high-pressure field conditions.
Comprehensive documentation: We’ve seen an increased willingness to complete historically underutilized forms, as the assistant reduces cognitive load for the "annoying" forms.
The most significant outcome of the pilot was the validation of the inline review model. By observing how officers interacted with the sidebar v1, I was able to demonstrate that the future of the product must be inline to maximize rigor.
The result: I aligned engineering and product leadership around the inline ghost-text interaction as the definitive long-term direction, de-risking the next phase of development and positioning the product for maximum adoption.
Why this matters
This project exposed a fundamental requirement for AI adoption in the field: technology must be integrated into existing workflows, not added as a parallel one. Especially in environments like law enforcement, where users are often skeptical of new tech and operating under high pressure, working within established mental models is the only way to ensure adoption. By "sprinkling" AI assistance directly into current reporting habits, rather than forcing users into a separate interface, we replaced technical friction with intuitive support, ensuring the tool felt like an assistant rather than an obstacle.
