Getting Answers

Enhancing the book reading experience with contextual answers using artificial intelligence (AI)

At the end of the third quarter of 2023, members of our Data Science team—experts in artificial intelligence and machine learning—were mobilized by an executive product stakeholder to prototype a feature that would enable learners to obtain real-time answers to technical questions while reading a book on the learning platform.

Sometime after a proof of concept was presented to the executive team, I was engaged on the project to alleviate “launch blockers,” which amounted to a list of suspiciously prescriptive, unaddressed visual design feedback.

I worked with the AI team to improve the layout, interaction design, readability, and feedback mechanisms related to “Getting Answers.”

A screenshot of the book reader with the answers panel active, before the redesign

While a heuristic review isn’t normally my first choice for evaluating a design’s effectiveness (it carries assumptions about which heuristics are relevant to evaluate against, especially without a formalized definition of the goals, user needs, and tasks the design is meant to facilitate), it was still the most reasonable approach for identifying areas of potential design improvement.

I evaluated the live product prototype against the following heuristics, based on its digital nature and my understanding of its use cases:

#8: Aesthetic and minimalist design

I had determined from prior projects that stakeholders have a strong interest in the aesthetic quality of user interfaces and often prioritize representing the company’s visual brand identity over user needs.

Before; Annotated screen of initial design

Despite this, to maximize the likelihood of any design’s success, it’s important that the visual elements support users’ primary goals. For this design, that meant:

  • Reducing the default width of the panel from 1/2 to 1/3 of the screen width, giving the main content area appropriate prominence
  • Swapping the current typeface (Gilroy) for a more legible one (Guardian Sans) designed and optimized specifically for reading large blocks of text
  • Applying spacing to create visual rhythm and to improve the layout’s balance, and breaking large blocks of text into smaller paragraphs to improve readability
After: Redesign of layout structure with side-panel occupying 1/3 of screen width

#1: Visibility of system status

A staple requirement for digital tools and products. In our case, because it takes time for the system to “generate” an answer, satisfying this heuristic means ensuring users can tell when the process is ongoing (e.g., they shouldn’t see a blank screen during an indeterminate process).

Before: Text-based loading indicator communicating in-progress states

While the AI team did implement a feedback mechanism, it didn’t quite meet the minimalist standard from a cognitive perspective (it exposed the granular technicalities of the step-by-step process). Solving this meant:

  • Replacing the step-by-step, text-based feedback pattern with a simpler visual: an indeterminate skeleton placeholder.
After: Design of loading state visual indication; How we communicate to a user that the system is “working”

#4: Consistency and standards

The platform had gone through a major design system update, but because of the book reader’s high-stakes posture, its user interface had not yet been updated to reflect it. UI elements carried a mix of styles from both the old and new design systems.

Such inconsistency of visual styles can negatively impact customers’ impressions of the product and the learning platform overall. To mitigate this, I redesigned the UI elements in the answers panel so that they are visually consistent with the book reader’s overall look and feel.

A side-by-side comparison of the answers panel before and after; the updated panel (right) has its width adjusted to accommodate the book’s contents. Other updates include subtle adjustments to layout, fonts, colors, spacing, and positioning.

An annotated screen showing a side-by-side comparison of the design of referenced sources before (top) and after (bottom) in a vertical layout.

Annotated screen showing auto-suggested follow-up questions before (top) and after (bottom); Design is updated with adjustments to font and inner spacing.

Because significant investment (and promise) had already been made prior to my engagement, after speaking at length with the Product Manager—to source goals, user needs, and historical context—we determined that satisfying stakeholders’ visual design concerns was the most reasonable approach to ensure a successful initial release.


Convincing stakeholders by advocating for an iterative approach to design

For any given problem, there can be many solutions with the potential to positively impact an observed metric. However, the more specific the problem’s context and constraints, the more focused a design—like a well-tailored suit—must be to maximize its likelihood of actualizing desired outcomes.

While some may be drawn to generating multiple design solutions to evaluate in collaboration with stakeholders, such an approach often leads to over-indexing on what becomes a matter of personal taste or preference, especially with visual design.

The following covers the “stakeholder management” component of my work, and my approach to addressing stakeholders’ concerns while optimizing for a positive end-user experience.

Alleviating primary stakeholders’ concerns

When I designed the interactions for getting answers, I incorporated the affordance (the button you click) into the existing design framework as-is, representing it as a “speech bubble” icon.

I received feedback from the Product Manager—who was initially hesitant to demonstrate it for exec review—with concerns that it was “too subtle.” For some designers, the first attempt at addressing such a concern may be to wireframe or prototype additional design concepts based on their (likely inaccurate) interpretation of “subtle.”

However, in my experience, presumptive feedback is usually a symptom of a deep-rooted concern about the design’s likelihood of success. Furthermore, without a solid understanding of the feedback’s origin and the stakeholder’s perspective on “subtlety,” one can easily find oneself in a perpetual loop of iteration.

When I discussed the feedback with stakeholders, I discovered that they were mainly looking to drive usage by optimizing for discoverability, and they believed that an affordance that “stands out” (i.e., a big button on the right side) was the best strategy to ensure it.

After a clarifying discussion in which I demonstrated how that approach could cause a negative experience (due to its impact on reading flow and immersion), I was able to convince stakeholders to support a different strategy that balanced their goals (get more people to use the feature) with users’ goals (achieve learning outcomes without distraction). In essence:

  • To maximize discoverability, the panel would be visible by default to a segment of users (defined by criteria to be determined later: users who would benefit most from this feature and who we determined—through observed behavior patterns—were most likely to “try it out”).
  • Users would be able to dismiss/hide the panel from view and intuitively determine how to re-enable it (based on the interaction pattern).
Requirements I wrote and shared with the engineering team during implementation

Getting the Product Design team’s Input

With the primary stakeholders (Product Manager, VP of Product) on board, I elicited feedback from the Product Design team on the overall design.

Process-wise, most user-facing features at O’Reilly follow a chain of approvals and stakeholder reviews that includes the Product Design team, which typically evaluates design-specific aspects (visual appeal, usability, accessibility, etc.).

In my case, aside from capturing their design feedback, it was important to make them aware of this new feature, as design decisions across different parts of the platform impact the reader one way or another (we have shared ownership).


Putting it all together and ready for implementation

Once the design had gone through its formal approval process, I created a representative click-through Figma prototype which served as a source of truth for the AI software engineers to reference during implementation.

As is customary in the design process, I was invited to the engineering team’s recurring stand-up meeting, which I attended throughout development—listening for questions, providing answers, and discussing specific technical details while performing quality assurance testing.


What I learned

When it comes to designing in a change-sensitive environment, it’s important to deeply understand stakeholder feedback. Misinterpreting feedback can be a cause for a negative experience for both stakeholders and designers.

Misunderstandings can lead to frustration, especially when a design’s intent is missed. In the case of a complex digital product that serves many context-dependent goals for different user types, responding to feedback without establishing shared understanding can lead to a perpetual loop of endless iterations.

You’ll notice that the final “approved” design was not that different from the initial wireframes. This was due to my and the Product Manager’s collaborative effort to guide stakeholders in disambiguating their goals (the product outcomes they want to achieve) from the user interface.


If you are a Hiring Manager reading this

Let me first thank you for going through this case study and checking out my work. I’m always excited to connect with people in the design world, even if you’re not considering adding to your team at this time.

Now…from a holistic, user-centered design point of view, ideally any user experience decision would be informed by methodologically rigorous generative or evaluative research. While the AI team’s efforts confirmed the feature’s technical feasibility to some extent, there remain many potentially risky and costly assumptions about learners, their experience, and their motivations when reading books, such as:

Usefulness and Desirability

There’s the overall assumption that learners often have questions while reading a book—that there’s a natural inclination for users to seek additional information or clarification as they read.

  • This assumes that users would find it convenient, or would prefer, to access answers within the same environment (the learning platform) where they are reading, rather than using an external source (like Google, Quora, or Stack Exchange—sources many users already use and trust).
  • This also assumes that learners are motivated to seek answers to their questions as part of their learning process while reading a book, indicating a strong motivation for self-directed learning.

Technological Feasibility

There’s an assumption that the current state of artificial intelligence and machine learning technology makes it feasible to provide real-time answers to a wide range of questions across various books and topics.

  • This presupposes that the real-time answers provided by such a system would be accurate, reliable, and relevant to users’ questions. (Users may have expectations regarding the quality of answers.)
  • It also presupposes that there exists a source of information that the system can reference (or be trained with) to generate real-time answers.