Skyler Tse

It’s Skye!  I’m a senior product designer 
that builds durable design systems 
and research-led experiences 
for complex products.



Solutions Workspace
Tiptap Features


A friction-free post editing experience that lets Solutions Engineers create and manage content seamlessly—removing barriers to focus on sharing, not navigating.



7 redesigned Tiptap features, 500+ interaction states documented, 2 design system layers adhered to (SLDS + QDesign), 1 source of truth for annotations & best practices
Salesforce Specialized Tech & Programs (STP)
New York City ID# 0924-0225
Web Product 6 months

Project Lead & Designer
Figma
Tiptap
User editing Space
User Testing / Analysis

23m Read






Here, let me catch you up!

Solutions Workspace, with over 24k users, had a flagship community forum catering to Salesforce Solution Engineers—one that had lost its luster over time.

While the Community feature enabled basic post creation, it never evolved into the vibrant hub users expected—and eventually, they began to drift away.

The design assets created in this project were developed in tandem with Solutions Workspace’s Community Profile. For this side of the product, I stepped in to lead the discovery and design of new components. That meant auditing existing interaction models, pinpointing friction points, and strategically expanding the feature set—all while leaning into the underlying Salesforce architecture. The mission was simple: transform “Create Post” from a basic editing tool into a dynamic space where engineers could actually connect, share, and collaborate.
 



The Established System: Flagship Feature Within a Mature Design System
The product lived within Salesforce's ecosystem, which meant every design decision had to align with the Salesforce Lightning Design System. Lightning is not merely a UI kit—it is a mature, deeply considered language shaped over years by experienced designers and engineers. It carries technical constraints, baked-in accessibility standards, and implementation patterns engineered to work seamlessly with the Salesforce backend.

A second version of SLDS was published with refined components and variants.


However, for internal products like Communities, we operate within an additional layer: Q Design. Think of Q Design as a thoughtful sheath over SLDS—a complementary system that adapts the foundational language for internal audiences. While SLDS serves external-facing products with broad, public-facing considerations, Q Design is tailored specifically for tools used by Salesforce employees and internal teams. It respects the core patterns of Lightning but introduces opinionated adjustments that account for internal workflows, tighter integration needs, and the nuanced expectations of users who live inside Salesforce every day.

A unique Figma flow that might appear in an editing space for the user, who may attach a zoomable Figma preview in addition to their post.



Given this layered foundation, the work was never about reinvention. It was about thoughtful extension within and across both systems. Every component needed to feel native to the ecosystem—honoring both SLDS standards and Q Design nuances—while still pushing the product forward.

My role was to study existing assets from both systems, understand their underlying logic, and then carefully design new patterns that expanded functionality without breaking coherence. This required balancing ambition with discipline: honoring what worked across both layers while introducing principled UX improvements that felt both fresh and familiar. The result was not disruption, but evolution—expanding what was possible within a multilayered system without sacrificing consistency, accessibility, or engineering feasibility.

Component Design: Matrix
A significant portion of the work lived at the component level. Each new or extended element was evaluated across its full interaction lifecycle—default, hover, active, focus, and disabled states—to ensure behavioral consistency and compliance with accessibility standards rooted in Q Design. Components were treated not as static visuals but as dynamic actors within a user's workflow.
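To give a sense of the documentation scale behind “500+ interaction states,” a state matrix like this can be enumerated programmatically. The sketch below is purely illustrative—the component, state, size, and theme names are stand-ins, not the actual SLDS/Q Design inventory:

```python
from itertools import product

# Illustrative names only -- not the real SLDS/Q Design component inventory.
components = ["accordion", "carousel", "link-button-menu",
              "embed-iframe", "image", "table"]
interaction_states = ["default", "hover", "active", "focus", "disabled"]
sizes = ["small", "medium"]
layers = ["slds", "qdesign"]

# Each documented artifact is one (component, state, size, layer) combination.
matrix = [
    {"component": c, "state": s, "size": z, "layer": l}
    for c, s, z, l in product(components, interaction_states, sizes, layers)
]

print(len(matrix))  # 6 * 5 * 2 * 2 = 120 combinations in this toy inventory
```

Even a toy inventory of six components multiplies quickly once every state, size, and system layer is covered—which is why the real documentation effort ran into the hundreds of states.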




Design decisions were grounded in foundational UX heuristics: 

① Clarity of affordance, feedback visibility

② Predictable state transitions, and

③ Cognitive load reduction. 

Even within accelerated timelines, lightweight validation loops were introduced. Rapid prototypes were built and tested internally with cross-functional teammates to compare interaction models and identify which patterns felt most intuitive within the Q Design layer.

These prototypes were also used during user interviews to observe real-time reactions. Rather than debating opinions in a vacuum, the team watched users navigate alternative flows and articulate which experiences aligned with their expectations. 



This iterative, behaviorally-informed approach ensured that final design choices were not arbitrary—they were grounded in observed user needs and respectful of the multilayered design infrastructure beneath them.

Worked Features
There were 7 features in the Tiptap editing space I oversaw; I personally worked on the following highlights:
① Accordion 
② Carousel
③ Link Button Menu
④ Embed iframe

⑤ Image / Image Text
⑥ Tables
⑦ Carousel

Link Button Menu
For the Link Button feature, the previous version lacked transparency. Users had no clear feedback on whether a link had been successfully attached to text or an image, and error states were either absent or ambiguous. This eroded confidence and created friction in what should have been a simple interaction.

Matrix: Possible variation of Link Button Menus on different design idea combinations

As a solution, I approached the challenge with a context-aware link menu that surfaces precisely when and where users need it—directly adjacent to their selection. Rather than treating this as an isolated component, I designed it as a system-level addition that needed to account for multiple states (neutral, active, error, success) and flexible configurations (with or without descriptions, small or medium sizing, required or read-only fields). Every decision was made with an eye toward scalability: this component would need to work across contexts, not just within the current feature set.




It is a lightweight, context-aware link menu that surfaces precisely where users need it. Every interaction state was accounted for: 

① visual feedback for successful link attachment, 

② clear error messaging when something went wrong, and 

③ flexible sizing to accommodate different input needs. 

Throughout the design process, I kept engineering engaged in ongoing critiques to pressure-test feasibility. These conversations weren't about handing off specs—they were collaborative problem-solving sessions that shaped the final component architecture. The result was a link menu that felt intuitive to users and was technically sound to build.


Users gained transparent control over link attachments, with immediate visual feedback that eliminated guesswork. More importantly, the component was designed to scale—ready to be repurposed across the product without reinvention.

Validation: Error and success indicators for the user

Embed iFrame
Next was the video embed.

For users, video embedding was a black box. They could drop a link, but had no control over playback behavior—autoplay, muting, looping, or defining start and end times. This lack of control diminished the quality of posts and forced users to accept platform defaults that didn't always serve their intent. And although veteran users had found ways to navigate the friction, newcomers were left to struggle without that institutional knowledge. So the challenge became clear: level the experience so that every SE—whether ten years in or ten days—could engage with confidence from day one.

A lightweight Vidyard configuration flow that allows users to set and control video viewing restrictions.


For videos, I designed a focused, lightweight control panel that surfaces when users embed a video. The challenge was balancing depth of control with cognitive simplicity—giving users powerful options without overwhelming them. I grouped controls intuitively (playback behavior, timing, audio) and ensured every toggle and input field carried clear affordance.

One obstacle emerged early: the visual alignment of labels with their corresponding toggles. After exploring options, the team landed on right alignment—a deliberate choice to counter the negative space and create cleaner visual rhythm. It was a small but intentional decision, a reminder that sometimes design is about trusting what looks right, not overthinking what feels safe. Throughout, the video solution never shifted: give users control, not complexity.



This feature required particularly close engineering partnership. Video playback controls touch platform-level behavior, and not every desired interaction was technically straightforward. Through iterative critiques and feasibility reviews, we landed on an implementation that delivered the control users needed while respecting engineering constraints. The back-and-forth wasn't a hurdle—it was essential to shaping a realistic, shippable solution.

Users finally had granular control over video behavior. Autoplay, muting, looping, and custom timestamps were no longer hidden—they were explicit choices users, themselves, could now make with confidence. The feature transformed video embedding from a passive act into an intentional part of post authorship.

Image Text
And lastly, images.

Image handling in the original editing space was functional but rigid. It worked, so it had never been prioritized for improvement. Stepping in to improve this feature meant evolving the "good enough" standard up to "what it's meant to be."

In addition, users had minimal control over sizing and no streamlined way to pair images with captions or accompanying text. This constrained visual storytelling and forced users into layouts that didn't match their intent.

  
Size variations on what images could be like
This feature was the most straightforward of the three—but straightforward doesn't mean unimportant. I defined a flexible yet simple system for image display: clear size variants (small, medium, large) and optional caption treatments that integrated cleanly with the existing content hierarchy. The goal was to refine what was already there and, while at it, give Solution Engineers slightly more control over how post readers take in their content.

After quick validation loops confirmed the feasibility and simplicity of the image solution, the team moved from design to backend implementation.

Users gained the ability to size images intentionally, or pair them with text in ways that supported their storytelling. The feature proved that giving users control doesn't require elaborate solutions—sometimes the most impactful work is also the most restrained.

Feature Key Lessons
Across all three features, I maintained a consistent leadership stance: design with system-level thinking, collaborate early and often with engineering, and never lose sight of the user's need for control. The mini critiques and feasibility reviews weren't checkpoints—they were integral to shaping work that was both thoughtfully designed and realistically shippable.

The result was not just three new features. It was a renewed post editing experience that put users back in the driver's seat, restored their confidence, and laid the groundwork for future expansion.

Annotation & Best Practices
 
Example: Strong and concise documentation for dev and design handoffs

Strong systems work demands strong documentation. While early exploration involved the usual messiness—whiteboards, sticky notes, personal mapping sessions—the final deliverables were intentionally structured for longevity and cross-system clarity.

Every screen was built with clean hierarchy, labeled components, and clearly defined spacing rules. Margin systems, border radii, color application standards, and state-based behaviors (like hover stroke interactions) were all documented to ensure continuity—with explicit callouts for where Q Design diverged from, or extended, base SLDS patterns.

If another designer or engineer needed to inherit the work, they could do so without reverse-engineering decisions or guessing which system governed which behavior.


Example: What part of the documentation may look like
But annotation went deeper than the "what." I made a point to capture the "why" as well. Design intent was recorded alongside implementation notes to support engineering collaboration and future iteration.

Where Q Design introduced a specific adjustment, the rationale was documented: internal user behavior patterns, workflow efficiency gains, or technical considerations unique to the internal tooling space. The goal was never just visual polish—it was operational clarity across both design layers. This practice transforms design from a collection of screens into a transferable and scalable product asset that lives comfortably within a complex ecosystem.

User Interviews, Analysis, 
Data Visualization, and Validation
With the Community Profiles initiative targeting an end-of-quarter launch, the design and dev teams moved decisively into validation mode. Rather than relying on assumptions or internal intuition, we committed to direct engagement with the people who would actually use the product: Salesforce Solution Engineers. A small but carefully selected group of participants was recruited from within our core audience. Each session was structured as a 1–1.5 hour moderated interview, designed to balance depth with focus. Preparation was deliberately organized: a structured script was written to guide conversations consistently across sessions. Backup materials were prepped in advance, anticipating potential technical hiccups, and team roles were clearly defined during a full dry run:

  • An interviewer to guide users through tasks verbally and visually,

  • A notetaker to capture real-time observations and manage logistics like recording and link-sharing,

  • And an observer to absorb behavioral nuance without interrupting the flow.

This level of preparation ensured that when users showed up, I would be ready to listen—not scramble.

Research Planning & Execution
The study itself was structured around three discrete tasks, each isolating a critical phase of the user journey. I facilitated a neutral, observatory environment where participants were recorded and asked to verbalize their thought process. Post-session, my team and I conducted a systematic analysis, pairing user actions with their commentary to build a rich understanding of where the design aligned with—or diverged from—mental models. This synthesis of session recordings, observer notes, and error logs provided the actionable intelligence needed to refine the product with confidence.

Two Rounds, Fourteen Voices
In total, two rounds of testing—14 interviews—were completed. The output was substantial: 26 pages of raw annotations capturing everything from verbatim user quotes to observed friction points. Across all sessions, we accumulated over 900 minutes of recorded footage.
Some of the data points the team captured were:

  • How long did it take for the task to be completed?

  • Was the task completed?

  • Were there any errors, or pathway divergence?

  • And what was the difficulty rank for the task? 

And yes—sifting through 900 minutes is no small task. But the richness of the data made it worthwhile.
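The four data points above reduce to simple aggregates once each session is logged. As a minimal sketch—with entirely hypothetical session values, not the study's raw data—the per-task roll-up might look like this:

```python
# Hypothetical per-participant session logs: times in seconds, difficulty on a
# 1-5 scale (5 = easiest). Values are illustrative, not the study's raw data.
sessions = [
    {"time_s": 84,  "completed": True,  "errors": 2, "difficulty": 4},
    {"time_s": 97,  "completed": True,  "errors": 3, "difficulty": 3},
    {"time_s": 120, "completed": False, "errors": 4, "difficulty": 2},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n
avg_time_s = sum(s["time_s"] for s in sessions) / n
avg_errors = sum(s["errors"] for s in sessions) / n
avg_difficulty = sum(s["difficulty"] for s in sessions) / n

print(f"completion: {completion_rate:.0%}")       # completion: 67%
print(f"avg time: {avg_time_s // 60:.0f}m {avg_time_s % 60:.0f}s")
print(f"avg errors: {avg_errors:.1f}")            # avg errors: 3.0
print(f"avg difficulty: {avg_difficulty:.1f}/5")  # avg difficulty: 3.0/5
```

Keeping the roll-up this simple meant the same four numbers could be reported consistently for every task and round.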




Synthesis: Patterns Over Opinions
Analysis was structured around several core activities:

  • Card sorting exercises to understand how users expected information to be grouped and prioritized.

  • Quote extraction to capture authentic user sentiment and pain points in their own words.

  • Behavioral pattern mapping to identify recurring navigation logic and decision-making habits.

  • Ranking exercises where users evaluated feature clarity and surfaced what mattered most to them.

This wasn't about collecting surface-level feedback. It was about uncovering directional insight—how users thought, where their confidence broke down, and what mental models they brought into the experience.

Task 1: The Demo Creation Discoverability
The first task was designed to test the discoverability of the Demo Guide creation entry point within a dense, personalized information environment. Participants began on the main Solutions Workspace Communities home—a sea of content tailored to their role. The core challenge was to observe whether they could intuitively navigate from this broad, high-level view down to the specific action of “creating a new Demo Guide,” ultimately landing on the prototype screen where that flow begins.

Before any recording started, I made it a point to get explicit verbal consent from each participant. This wasn't just a formality—it was about building trust. They knew the footage would remain private to the research team and would only ever be used to make the product better.

After placing the participant on the personalized Communities home screen, their sole directive was to locate the entry point and initiate the creation of a Demo Guide. From that moment forward, I assumed the role of a silent observer. My team and I meticulously documented their path: every click, every menu expansion, every moment of hesitation. The goal was to map their mental model against our information architecture.

  • Did they look for a "Create" button in the header? 

  • Did they navigate into a specific community first? 

  • Did they use global navigation or contextual menus?

I only intervened with a neutral hint if a user became completely stuck, ensuring the session remained on track without introducing bias.

Top: Data consolidation into digestible form to present findings to engineering teams and upper management; Bottom: Task 1 results and analyses.


The two testing rounds for this task revealed both strengths and friction points:

  • 57% of participants successfully completed the requested tasks in the first round, and 100% in the second.

  • Average completion time: 1m 24s (first), and 1m 3s (second).

  • Average errors per participant: 2.4 (first), and 0.4 (second).

  • Perceived ease of use (where 1 = most difficult, 5 = easiest): 3.5 (first) and 5 (second).

By tracing the unique routes each user took, from the chaotic breadth of the homepage to the narrow focus of the creation screen, I was able to assess and note the clarity of our navigation and the placement of key entry points. A direct, confident path validated our IA decisions. A meandering route or a failure to locate the entry point signaled a breakdown in discoverability. In the end, this analysis provided a clear, evidence-based directive on whether users could find the feature at all—a foundational test of our design's intuitiveness.

Task 2: Probing the Nuances of the Editor
Next, the second task shifted focus from the macro-journey to the micro-interactions within the editing environment. The goal was to evaluate the discoverability of formatting tools and the overall cognitive load of manipulating content, independent of the creative process of writing.

To isolate the editor's usability, I provided participants with a pre-written script hosted in a Quip document. Their directive was to execute a series of subtasks, for example: create a header, manipulate table menus, copy and paste text into a table, and insert a carousel component. Again, I stepped back into a purely observational role, documenting their ability to locate and use features. As with the first task, I only intervened with neutral prompts when a user was stuck, prioritizing the flow of the session over rigid adherence to a script.

Participants were asked to complete a series of specific interactions (listed below). As long as all tasks were finished within the session timeframe—a constraint known only to the research team—the task was marked successful. Other interactions were sorted under “further discussion,” as they might be important in future product sprints.

① Column builder
② Buttons
③ Accordions
④ Images
⑤ Video
⑥ Table
⑦ Carousel

Sorting user task 2a: Creating a carousel and, in addition, displaying sentiments on Carousel A or B.

In the testing rounds for this second task, only the second round's averages were considered—due to a bug in the first round (specifically in the carousel feature, which left users unable to complete that subtask):

  • 100% of participants successfully completed the requested tasks for both versions of the carousel (A/B testing).

  • Average completion time: 2m 15s (Carousel A), and 1m 37s (Carousel B).

  • Average errors per participant: 1.2 in both Carousels.

  • Perceived ease of use (where 1 = most difficult, 5 = easiest): 4.4 (Carousel A) and 5 (Carousel B).

  • Overall preference: Carousel B.

Although there were a few hiccups with backend bugs, going over the session recordings and notes revealed a clear map of the editor's strengths and weaknesses. Tracking successful subtask completion, along with any errors or confusion, provided granular data on whether the tools were where users expected them to be and if they behaved as anticipated. This analysis directly informed refinements to the editor's layout and interaction design.

Task 3
Lastly, the third task addressed a critical dependency tied to our backend architecture: the distinction between draft and published states, as detailed in Solutions Workspace’s Community Profile, Section ⑥ Drafts & Publication Statuses. Here, I needed to validate that this complex logic could be translated into a user experience that felt simple and confidence-inspiring.

I placed participants in a prototype environment containing a pre-existing Demo Guide and challenged them to publish it. This was a focused test of a specific workflow: saving, exiting, and successfully publishing. The task concluded when the user triggered the final publish action and was rerouted to the Posts home screen, where a "Success Toast" provided clear confirmation. As with the previous two tasks, I observed whether the user understood how to manage their work without instruction, and noted any confusion around navigating away from a draft or interpreting the final success state.

The successfully published post is now live on the Communities platform.


The two testing rounds for this last task highlighted that:

  • 63% of participants successfully completed the requested tasks in the first, and 100% in the second.

  • Average completion time: 4m 12s (first), and 3m 27s (second).

  • Average errors per participant: 3.1 (first), and 1 / 11.6 (second; without outlier / with outlier).

  • Perceived ease of use (where 1 = most difficult, 5 = easiest): 3.6 (first) and 4.6 (second).

Sorting user task 3: Creating a Demo Guide post


Context Matters: Internal Users, Internal Systems
Because our participants were internal Salesforce employees—Solution Engineers with deep platform fluency—their feedback carried additional weight. They didn't just react to the interface; they brought expectations shaped by years of working within Salesforce ecosystems. This meant their input helped us refine not only Communities but also our understanding of how Q Design should serve internal audiences differently than SLDS serves external ones.

One of the findings reports, organized to be easily digested by other designers and management. Its counterpart was done in Quip (Salesforce’s internal document app), the devs’ preferred choice for QA.

Although a sample of 14 participants limits statistical generalizability, the rich directional insight and pattern saturation across both rounds gave us confidence in the findings.

With user testing complete, I walked away with a clear understanding of where our design instincts were right—and where they needed refinement. More importantly, I had the evidence needed to advocate for those changes with confidence. Once the designs had been QA'd with the engineering team, I formally handed them off and pivoted to supporting development through to the launch.

Sustainability & Governance
While the initial implementation phase is complete, the system was intentionally designed as a living product—one that must remain aligned across both SLDS and Q Design layers as both systems continue to evolve. Ongoing maintenance, iteration, and alignment require structured stewardship, and that responsibility ultimately passed to me.

My role now includes maintaining Q Design system components, bridging new designs with established patterns, and ensuring the system remains coherent as it scales. This work happens in close partnership with my engineering counterpart, who owns the technical side of the components. Together, we keep the system up and running—addressing updates, fielding feedback from product teams, and making thoughtful decisions about when to extend versus when to refine.

This governance model protects the system’s integrity while enabling growth. It ensures that new features, components, and extensions integrate seamlessly into the established ecosystem—preserving coherence without stifling innovation, and honoring the relationship between the foundational SLDS and the internal Q Design layer that sheathes it.

Takeaways
Leading this work was quite rewarding. It meant balancing a few familiar tensions: innovation versus consistency, speed versus rigor, and new patterns versus established ones. The outcome—seven validated features, twenty-six pages of documentation, and a friction-free posting experience—reinforced something I've come to believe about design leadership. It is not about making noise. It is about creating clarity within complexity, empowering engineers with precision, and leaving the system better than you found it.

Positive Sentiments on Tiptap Features
“Nice, I can see the image better!”
- Vamsi, Lead Product Manager, Data Cloud

“I love the tooltips on the corner of these elements, learning curve type of stuff!”
- Randy, Solution Engineer, Lead

“I love the emojis [in videos] it really helps!”
- Tyler, Product Manager

“I’m more comfortable with the selection [in the editing space] now.“
-  Tom, RFP Manager

End of Project Preview
Oh dang, you’ve hit the end of the Solutions Workspace Tiptap Features!

But don’t worry, there’s so much more content waiting to be discovered!