Optimized In-Content Views

How I optimized the in-content views for usability while designing for scale

The tl;dr version 

I gradually took ownership of the “in-content learning experience” on the O’Reilly learning platform after having led the release of many of its feature enhancements since 2022. 

I redesigned how its core features are represented, accessed, and operated, and defined rules to govern how the UI (user interface) adapts to various usage and context scenarios.

I successfully released many of these updates to all users at the beginning of August 2024 and have seen noticeable improvements in user satisfaction and engagement as a result.

This has allowed us to better accommodate the growing number of sub-features planned for release in Q2 and Q3 of 2024. I continue to release usability improvements on a rolling basis to this day.

The following tells the story of how these changes were initiated, the reasoning behind them as well as some of the challenges I faced.


The user interface could not support the growing number of features planned for 2024

Screenshot of the UI before the changes, showing the in-content view of a video course with the icon toolbar highlighted.

Lacking logical places to surface new features and capabilities, we had added more and more controls to the same UI region over time, which compromised the integrity of the interface and its information architecture.

Stakeholders were concerned that adding additional features would cause negative customer reactions and reflect poorly on the quality of the platform.

From our perspective, this problem, if left unaddressed, would eventually degrade the user experience, as users would struggle to understand the relationships between the controls.

I developed a conceptual framework to govern where and how features would be represented in the UI

I led a group of product stakeholders in creating a conceptual model of the user interface to convey the relationships between its controls and features, mapping them by usage frequency, functionality, and impact scope.

Those attributes were most relevant because they directly inform where and how the controls should be represented.

A conceptual model is a diagram that visualizes the relationship between the concepts in a system or information environment.


Collaboratively developing a Conceptual Model to define the relationship between tasks, objects and attributes

I first introduced concept models to the Product Design team at O’Reilly back in 2022, when we had challenges aligning on requirements for some of our more complex and technical product features. This project was an opportunity to reintroduce them to create shared understanding and achieve alignment with my product counterparts.

Designing layout options based on the conceptual map while aligning with stakeholder expectations.

Success is often tied to meeting the expectations of stakeholders, such as product managers, executives, and developers. In this case, however, our success was tied to developing a logical and coherent UI framework that would inform where and how new features are surfaced on screen.

This meant that testing with users was crucial, as they would ultimately be the ones directly impacted. Our Product Officer agreed and provided me with the budget to conduct usability testing on the new designs. The following shows the key screens for video-based and text-based content.

Badge-eligible video-based courses

I added the content’s title to the bottom navigation bar so that I could present the badge icon next to it. The icon indicates the course’s badge-eligibility status; users click it to accept their badge once they’ve met the criteria.

A screen showing video content with the table of contents expanded and an affordance for accepting badges visible — for badge eligible courses only.


Books with a sandbox coding environment available

In the updated layout design, I represent the sandbox with an icon (a terminal symbol in standby mode) grouped with the other learning tools in the top right of the UI.

Screenshot of the book view with the control for invoking a sandbox coding environment highlighted. Learning tools are represented as icons, grouped and placed in the top-right corner of the UI so that they don't impede visibility of the content.

Testing the updates and changes for discoverability and usability ahead of engineering “hand-off.”

After a few rounds of stakeholder reviews, I formalized the design direction and held an unmoderated study to test the changes. I recruited eight participants and had them perform several tasks to evaluate discoverability and usability, such as:

Customizing Preferences

Task-Scenario: “You’re reading a book on Python, and want to make the text bigger so that you can read it better. How would you do this?”

Findings: Users were able to easily and intuitively find the control to trigger the settings menu—represented as a gear symbol in the lower right-hand corner of the UI.

Accessing a sandbox to practice writing code

Task-Scenario: “You’re reading a book on Python programming and would like to try some of the code examples yourself. Use this interface to practice writing and executing code.”

Findings: Four of the participants struggled to identify the appropriate control to perform “hands-on coding.” I believed this was likely because they were unfamiliar with the graphical representation of coding sandboxes.

I recommended that we research alternatives to the terminal symbol and favor a control whose representation is analogous (or synonymous) to “hands-on coding” and intrinsically clear to a broader audience, including non-software engineers.

Accessing the AI-powered assistant

Task-Scenario: “The book comes with an AI assistant that can provide real-time answers to your questions. Use the interface to access it and ask a question.”

Findings: Participants reacted positively to the availability of an “AI-powered” assistant, although they expected it to be represented by a star symbol (like many emergent AI-powered tools at the time of this study).

Creating a sense of functional stability by removing problematic effects to improve perceived performance

I discovered several problems while auditing the user interface. Poorly implemented animation effects created input and response lag, causing elements to trail behind as the UI layout adjusted on browser resize.

I removed many of these animation effects and other poorly implemented “micro-interactions,” which led to a noticeable improvement in perceived responsiveness and functional stability.

Screenshot of CSS code blocks showing problematic effects disabled
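The disabled rules themselves aren't reproduced here, but a minimal sketch of the kind of change involved might look like the following, assuming the effects were driven by CSS transitions on layout-affecting properties (the selectors are hypothetical, not the platform's actual code):

```css
/* Hypothetical "before" state: layout-affecting properties were
   transitioned, so elements visibly trailed the layout while the
   browser was being resized. */
.toolbar,
.side-panel {
  transition: width 300ms ease, margin 300ms ease;
}

/* Fix: remove transitions on layout-affecting properties so elements
   track layout changes immediately. */
.toolbar,
.side-panel {
  transition: none;
}
```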

Designing more salient cues to help learners identify the table of contents’ active content module

I applied distinct styling to the active module so that users can keep track of what they are reading, with respect to a whole book, or watching, with respect to a video-based course, at any time.

The active content module is contained in a rectangle with a light background that adapts to the UI’s active color mode, with a vertical bar to its left set in the UI’s active accent color, all of which distinguishes it from the other content modules.
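As a rough illustration of how this styling could be expressed in CSS, with hypothetical token names and values standing in for the platform's actual design tokens:

```css
/* Hypothetical tokens; the platform's real custom properties differ. */
:root {
  --surface-highlight: #eef2f7; /* light-mode highlight background */
  --accent: #d3002d;            /* active accent color */
}

[data-theme="dark"] {
  --surface-highlight: #2a2f36; /* dark-mode equivalent */
}

/* The active module gets a highlighted background and a vertical
   accent bar on its left edge, distinguishing it from its siblings. */
.toc-item.is-active {
  background: var(--surface-highlight);
  border-left: 3px solid var(--accent);
}
```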

Navigating cultural challenges around design implementation rooted in engineering workflows

Engineering prefers to work from specifications explicitly covered in a Jira ticket. Most of the time, those tickets include attachments and links to design files, prototypes, and other helpful deliverables.

A Jira ticket is basically a digital representation of work activity, used to track status and progress over time.

Although each ticket includes the necessary assets, engineers do not always refer to them, which causes critical details to be missed during design implementation. Furthermore, while the ACs (Acceptance Criteria) describe desired behaviors and states, they do not typically cover common states such as hover and focus. For instance, I did not initially specify that interactive elements must be focusable and show focus indication, as I expected this to be an established general rule for the user interface; this led to instances where interactive elements were inaccessible by keyboard and lacked focus indicators.
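As an illustration, a baseline rule of this kind could be expressed in a few lines of CSS (the selector list is illustrative, not the platform's actual code):

```css
/* Baseline accessibility rule: any interactive element shows a
   visible indicator when it receives keyboard focus, without
   affecting mouse users. */
a:focus-visible,
button:focus-visible,
[role="button"]:focus-visible,
[tabindex="0"]:focus-visible {
  outline: 2px solid currentColor;
  outline-offset: 2px;
}
```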

A spacing system I developed for engineering to reference when implementing designs after we lost access to “dev mode,” a feature that allowed developers to inspect design files directly and substantially reduced the likelihood of inconsistency between implementation and design.

It made it easier for them to determine how much space to apply between elements in a layout to create the desired visual relationships.
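A minimal sketch of how such a spacing system might be expressed as CSS custom properties, assuming a base-8 scale and hypothetical token names:

```css
/* Hypothetical spacing tokens on a base-8 scale; the actual system's
   names and values may differ. Closer spacing implies a stronger
   visual relationship between elements. */
:root {
  --space-xs: 4px;  /* tight grouping, e.g., icon to label */
  --space-sm: 8px;  /* related controls within a group */
  --space-md: 16px; /* separate groups within a region */
  --space-lg: 24px; /* distinct regions, e.g., toolbar to content */
}

/* Usage: named steps applied via gap in a flex container keep the
   relationships consistent across the UI. */
.toolbar {
  display: flex;
  gap: var(--space-md);
}
.toolbar .icon-group {
  display: flex;
  gap: var(--space-sm);
}
```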


I would uncover these issues during the QA (Quality Assurance) process and point them out to the engineers; however, this inevitably led to frustration, as they’d have to “rework” the code to meet criteria they expected to be fully and explicitly described in their ticket’s ACs.

If you’re a hiring manager, and have read the previous paragraph and would like to understand how I managed this situation, let’s chat 😄

The graph below illustrates the impact of design-spec comprehensiveness on implementation success, based on my experience working with various development teams over three years.

Once the design specifications had been satisfactorily delivered to engineering, I monitored implementation progress and provided clarification and supporting deliverables as needed until we were ready to release.

These changes were fully released to 100% of our audience at the end of August 2024 and have improved overall user satisfaction with the interface.

More importantly, I formalized the conceptual model and UI framework that stakeholders can refer to when deciding where and how to surface new features and enhancements in the future.

Key Screens

Here’s a view of the in-content views for books in light mode and dark mode. These screens represent the initial state of the UI for books for 96% of our users.

In dark mode, foreground and background colors are inverted.
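A common way to implement this kind of inversion is to swap color tokens per theme rather than restyle each component; a minimal sketch, with hypothetical token names:

```css
/* Hypothetical theme tokens: dark mode swaps the foreground and
   background roles instead of redefining every component's colors. */
:root {
  --fg: #1a1a1a; /* near-black text on... */
  --bg: #ffffff; /* ...a white surface in light mode */
}

[data-theme="dark"] {
  --fg: #ffffff; /* roles inverted in dark mode */
  --bg: #1a1a1a;
}

body {
  color: var(--fg);
  background: var(--bg);
}
```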

Since this work was initiated as a preventative measure to preserve the UI’s integrity, we did not expect changes to our KPIs or metrics; we expected them to remain stable, and they did.

The project was a success 👍