Chat RFP AI tool shown on a MacBook
Chat RFP

Enhancing RFP Efficiency: Bridging the AI Trust Gap

The Challenge

An internal AI tool, Lex Chat RFP, was built to save bid managers hours of manual searching. On paper, it was a perfect solution. In reality, adoption was near zero. I was brought in to figure out why a "time-saving" tool was being ignored by the very people it was meant to help.

The Solution

I partnered with the AI Solutions team to improve Chat RFP, an LLM-driven chatbot designed to help bid managers and proposal teams quickly generate boilerplate responses, summarize internal content, and answer Lexmark-specific questions. The tool aims to reduce time spent searching through documents and improve efficiency in responding to RFPs.


The Full Story

Why is this not working?

On the surface, using a Large Language Model (LLM) to sort through bulky internal documents is a "slam dunk" use case. But after months of the tool being live, the silence from the user base was deafening.

The AI team saw a technical success; I saw a human disconnect. I needed to find out if the problem was the model's performance, the users' comfort with AI, or a fundamental UI failure.

Illustration of Nora surrounded by question marks

Research

What is going wrong, how can we fix it, and what can we add to make it even better?

With a specialized pool of only eight bid managers across three global regions, I couldn't rely on broad analytics. I needed to understand the "why" behind their hesitation. I designed a two-pronged approach to capture the immediate friction and the long-term workflow gaps.

1

Moderated Usability Testing

In a 90-minute session, each user completed four tasks while thinking aloud. I needed to see exactly where a bid manager's mental model diverged from the AI's logic. I collected both qualitative and quantitative metrics.

The Focus

I tracked the findability of key features and, more importantly, the real-time mindset of users as they constructed an answer.

Goal

I wanted to witness their visceral reaction to the AI's output. Did they trust it? Did they know how to edit it? By watching them work through a response from start to finish, I identified that the friction wasn't just in the UI—it was in the uncertainty of how to validate the AI's claims.

2

Diary Study

Users were given an overview of the tool, then committed to using it throughout their day, submitting survey responses about their experience at daily and weekly intervals over two weeks (10 work days). I needed to see how the tool lived (or died) in the wild.

The Focus

I tracked long-term applications and the "death by a thousand cuts" technical issues that don't always show up in a controlled lab setting.

Goal

I wanted to see the evolution of their workflow. Did the tool actually make them more efficient over two weeks, or was it just another tab they eventually stopped opening?

Takeaways

Creative Nuance

The AI was too robotic. It could find facts, but it couldn't mirror the persuasive "voice" required for winning bids.

Reliability

Technical latency and VPN issues made the tool feel "broken" even when it worked.

The "Black Box" Effect

Users didn't know where the information was coming from. Without citations, the AI was just a "hallucination machine" in their eyes.

Design Solutions

Early Concepts

While the engineers worked on latency, I pivoted the UI strategy toward verifiability.

I designed a "Source-to-Answer" interface. Instead of just giving an answer, the UI now explicitly connected AI-generated snippets to their source documents. This allowed bid managers to "teleport" to the exact page of a manual to verify facts instantly. We shifted the tool from being an "Author" to being an "Assistant."


Chat RFP - New prompt from suggestion


Chat RFP - dark mode


Chat RFP - light mode

The Result

By shifting the focus from "AI Magic" to "User Empowerment," we saw a significant lift in adoption.

Reduced Verification Time

The "Source-to-Answer" UI cut the time spent cross-referencing documents.

Increased Reliability

Identifying the VPN bottlenecks led to a more stable environment, removing the primary barrier to daily use.

Cultural Shift

The bid management team moved from viewing the tool as a threat or a chore to seeing it as a reliable starting point for their drafts.

What I learned

Problems have layers

I went into this looking for a clean UI fix, but I found a fragmented mess of technical, psychological, and workflow issues. It reminded me that a great product isn't built on one big idea — it's built on 100 small, intentional interactions.

Adoption is everything

AI is an undeniably powerful tool, but companies hoping to leverage it will only move as fast as individual users feel comfortable and confident folding it into their daily work.

The End
