Big Wins

Bar chart and table showing monthly cost distribution across categories, including containers, security, DevOps, and storage, from September 2023 to February 2024. Labels include services like Secrets Manager and Watson Discovery. The table summarizes costs for each category by month and provides a total.

FinOps & Cloud Cost Management

strategic - generative - evaluative - mixed methods

This is the story of how I influenced over 30 cross-functional collaborators, created a shared understanding around business and user needs, and drove key product improvements in IBM Cloud’s cloud cost management space.

Impact

  • Time-to-complete core user task decreased by 84%

  • Enabled core user tasks previously absent from the product

  • $1.5-$2.5 million/year saved in CSM time

  • Increased Forrester Wave rating

  • Over the past decade, as cloud adoption has skyrocketed, so has the need to manage cloud spend. Old ways of managing on-premises spend break down in the cloud model. This has given rise to the practice of FinOps (a portmanteau of finance and DevOps), which emphasizes collaboration and shared accountability between engineering and finance teams in order to manage and govern cloud spend, a top priority for enterprises alongside security.

    While other cloud vendors have moved in lockstep with these shifting industry needs, IBM has largely stagnated. Every source of user insights underscored this point. As a result, client-facing teams from sales to support to CSMs had to spend their time manually helping clients understand their cloud spend. CSMs alone reported spending between 20% and 40% of their time on this, an estimated $1.5-$2.5 million/year lost in time alone, since that is not what IBM hires CSMs to do.

    Altogether, this lack of maturity in the FinOps space indirectly hindered client growth and undermined retention.
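The loss estimate above can be sketched as a simple back-of-the-envelope model. The headcount and fully loaded cost below are illustrative assumptions, not figures from this work:

```python
# Hypothetical model of annual CSM time lost to manual cost explanations.
# CSM_COUNT and LOADED_COST are illustrative assumptions, not IBM figures.
CSM_COUNT = 50            # assumed number of CSMs
LOADED_COST = 150_000     # assumed fully loaded annual cost per CSM (USD)

low_estimate = CSM_COUNT * LOADED_COST * 0.20   # 20% of time on cost questions
high_estimate = CSM_COUNT * LOADED_COST * 0.40  # 40% of time on cost questions
print(f"${low_estimate:,.0f} - ${high_estimate:,.0f} per year")
```

Different assumed headcounts or loaded costs shift the bracket, but any plausible inputs land in the millions-per-year range cited above.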

  • When I joined this project, it became clear that there were disparate opinions on what FinOps was, where capabilities should be built in the Cloud platform, and for whom we were building them.

    I felt this was a ripe opportunity to establish a shared understanding, guide the FinOps roadmap, and move the project forward, escaping the cycle of pontification.

    I used secondary research, participatory observation, and primary research to gain a holistic understanding of FinOps.

    This included

    • reviewing external and internal reports and data and conducting a competitive analysis;

    • infiltrating the FinOps community and earning a FinOps Practitioner badge, eventually hosting FinOps community meetups at the IBM Cloud office;

    • and running interviews with users and SMEs.

    I identified 8 key FinOps capabilities IBM Cloud should build and where we should build them in the platform.

    In tandem with the research, I built rapport and worked closely with PM, development, and design. Through this collaboration, I nudged them toward the route the research suggested was best.

    I was able to get 2 of the 8 capabilities built in the IBM Cloud platform. These were the 2 capabilities that spoke to core business and user needs:

    1. Cost Analysis (above image): a data visualization tool that allowed users to see spend over time and slice-and-dice that data using expanded filtering.

    2. Sharing: the ability for FinOps users to share that data with other relevant parties.

  • Key deliverables:

    • FinOps Research Hub

      • This is a still image of a domain-specific research hub that I created for collaborators. If I wasn't in the room, I wanted them to be able to easily access the research. It contains all relevant findings, observations, and insights from my FinOps research: summaries of the problem and domain, product analytics, the competitive analysis work with screenshots, a breakdown of the 8 capabilities I felt should be built, links to source material, and more. It is a one-stop shop for all things FinOps research.

    • FinOps persona

      • Before my work, this persona was called "Finance Manager." Cloud's understanding of it was outdated. Based on my learnings, I revamped this persona to match industry trends.

    There were more deliverables than just these two, as the Research Hub attests. For example, prior to production launch, I usability tested the Cost Analysis tool.

    • 84% decrease in time-to-complete core user tasks.

    • Enabled completion of core user jobs within the product itself.

    • The Cost Analysis tool freed CSMs from having to manually help clients understand their spend, a task that took 20%-40% of their working time and amounted to $1.5-$2.5 million/year in wasted time, time CSMs could now spend on growth opportunities.

    • Forrester Wave rating increase in cloud cost management category.

    • Laid foundations to upsell and build upon the recent Apptio acquisition, the leading third-party FinOps software, in which IBM invested heavily.

    • Built lasting partnerships with Apptio product and consulting teams, connecting their side with the IBM Cloud FinOps team to share knowledge and build a go-to-market strategy benefiting both sides.

Screenshot of an AI-powered assistant interface from IBM Cloud, featuring a welcome message and an invitation to ask questions about IBM Cloud, labeled "Experimental," with a link to AI assistant documentation.

AI Assistant & Human-AI Interaction (HAX)

generative - evaluative - mixed methods

This is the story of how I ensured that IBM Cloud’s first AI solution was useful, user-friendly, and, perhaps most importantly, trustworthy to users.

Impact

  • Uncovered 10 usability issues, 2 of which were critical (impeded usage)

  • Time-to-complete core user tasks decreased 90%+ (docs/content space)

  • Influenced AI user testing across IBM

  • Increased Forrester Wave rating

  • IBM Cloud, like nearly all platforms, services and products, is leveraging gen AI to grow business and augment user experiences. The first foray into this new world for IBM Cloud was the AI Assistant (AIA).

    Previous work by the Content team allowed users to search through Cloud documentation via a side modal without leaving the context of use. Now the AIA would do the searching and finding for them with a natural language prompt.

    Even though the use case wasn't novel, the newness of the technology and this being Cloud's first AI experience underscored the need to "get it right."

    That's why I was brought into the workstream by the project lead.

  • Not only was this Cloud's first foray into gen AI, it was also mine. As such, I needed to establish a baseline understanding of human-AI interaction (HAX).

    I felt this would not only establish a shared understanding of HAX for collaborators, but also allow me to think more clearly about the usability test I'd soon be running on the AI Assistant (AIA).

    I pored over internal and external sources on HAX and gen AI and summarized them in an infographic for collaborators.

    Most of the usability test for the AIA was straightforward. However, one thing I learned from the above research was the importance of user trust in AI solutions: a usable AI solution must also be a trustworthy one. As such, I wanted a way to measure trust the way we measure usability.

    Unfortunately, there wasn't an existing measurement of trust that was lightweight enough to administer during a usability test alongside UMUX-Lite.

    So I created one.

    From my previous life in academic research, I knew of existing literature on trust and began diving into this research again, repurposing various trust scales for user testing AI solutions.

    My trust measurement is a 14-item questionnaire that attempts to capture different kinds of trust — situational, dispositional, and learned — as well as different aspects of trust — ethical and non-ethical. These 14 items are averaged into a single-number score, similar to UMUX-Lite.
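As a sketch of how such an averaged score can be computed (the 1-7 scale and the reverse-scored item below are assumptions for illustration, not details of the actual questionnaire):

```python
# Hypothetical sketch of a single-number trust score from 14 Likert items.
# Scale range and reverse-scored items are illustrative assumptions.
def trust_score(responses, scale_max=7, reverse_items=()):
    """Average 14 Likert-scale responses into one score, like UMUX-Lite."""
    if len(responses) != 14:
        raise ValueError("expected 14 item responses")
    adjusted = [
        (scale_max + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# Example participant: mostly high trust, with item index 3 reverse-scored.
responses = [6, 6, 5, 2, 6, 7, 6, 5, 6, 6, 7, 5, 6, 6]
score = trust_score(responses, reverse_items={3})
```

Because the result is a single number on the same footing as UMUX-Lite, it can sit alongside usability metrics in the same test session.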

    When it came time to user test the AIA, I was armed with pointed tasks meant to test intended usage and questions related to User-AI trust and AI mental models. I also now had a way to quantitatively measure user trust like usability.

  • The initial research for the AIA was presented and shared via a one-page report or infographic, alongside links to the 34 sources I used.

    The usability test results were presented alongside this executive summary one-pager.

  • Secondary Research

    I presented the initial secondary research in February 2024. The infographic/one-pager I created continues to be leveraged by stakeholders in Cloud and beyond, having been viewed by 112 individuals and downloaded 44 times. In June 2024, a Principal PM Slacked me, linking the artifact, saying "I appreciate this. I've slowly been taking bites of it as I find a few minutes here and there." The research is helping beyond the confines of the project itself and even the org it originated in.

    Usability Test

    The suggestions and results of my usability test were directly translated into epics for the development team.

    • 10 usability issues uncovered

      • 2 critical (impeded usage/adoption)

    • Time-to-complete decreased by 90%+ for "How-To" and "What is" questions.

    • Increased Forrester Wave ratings in cloud AI category

    Additional Impact

    • Tapped to be the AI user testing SME for the business unit.

    • Wrote internal IBM-wide blog helping teams user test AI solutions. Viewed 686+ times.

    • My trust measurement sparked tons of interest across IBM, allowing me to share my work widely and democratize its use in user testing AI solutions.

Illustration with sections labeled 'Build,' 'Run and use,' and 'Maintain' showing 5 archetypes with icons: cityscape, hand pointing, and abstract eye design.

User Personas

generative - evaluative - mixed methods

This is the story of how I helped the Cloud UXR team (and IBM UXR) rethink the way personas are made and used.

Impact

  • Increased user-centered thinking/design (personas used in Figma/workshops)

  • Increased user-centered development (personas used in epics)

  • IBM UXR wide personas thought leader

  • Helped UXR teams beyond Cloud build personas

  • User personas have fallen from grace since 1999, the year Alan Cooper coined the term and popularized user personas as a tool for facilitating user-centered thinking.

    As Travis and Hodgson put it in Think Like a UX Researcher, "Personas get a mixed reception from development teams, with some questioning their value."

    And this is exactly what I started to experience when I began poking around IBM Cloud's personas in early 2022. As I interviewed collaborators about existing personas and started reading externally about this tool, I noticed at least 7 common pain points with personas. I call these the 7 Sins of Problematic Personas:

    1. Fluffy - look nice but don't contain relevant insights for teams.

    2. Idiosyncratic - no standardization across personas that are part of a set.

    3. Innumerable - too many to make sense of.

    4. Concrete - one-and-done artifacts that become outdated.

    5. Hidden - hard to even find if they do exist.

    6. Opaque - black-boxes without any reference to source material/research.

    7. Specific - not generalizable, break in scenarios they should lend clarity to.

    In addition to these pain points, the Cloud persona landscape itself was disjointed: 8 archetypes, 66 associated job titles, and 23+ cloud-specific personas scattered across Box folders and local drives. And many exhibited the 7 sins.

    It was clear in early 2022 that a revamping — and a rethinking — of user personas was needed.

  • I approached personas like any other tool or service UXRs help create every day: I sought to understand user pain points and find solutions to those pain points in order to make something more useful and usable. I was not beholden to any one method or approach (personas, jobs-to-be-done, mindsets, etc.).

    During my initial probing, I began presenting my thinking to the Cloud UXR team for feedback. I called out the common pain points and how the current way of thinking about personas — i.e., as fictional characters, using out-of-the-box templates — was broken.

    I argued that personas should be thought of not as fictional characters but as scientific theories. A good persona possesses the same qualities a good scientific theory possesses. Said another way, the 7 Sins of Problematic Personas result from lacking these qualities. These are the Tenets of Good Personas:

    • Empirically based

    • Flexible (generalizable)

    • Updated over time

    • Consistent (with each other and other knowledge)

    • Lean (Occam's razor)

    • Predictive (of user behavior/attitudes/feelings)

    • Documented (sourced)

    • Discoverable

    • And of course, built into these tenets are the concepts of being Testable and Falsifiable.

    Through these presentations, I was able to convince the Cloud UXR leadership to focus on revamping our personas. A small cadre, or guild, was created with this goal in mind.

    Overall, this work took a year to complete, requiring enormous amounts of collaboration and coordination between UXR, design, engineering, and PM.

    The guild approached this project with the mindset that personas were theories. We also wanted to detach our understanding of personas from job titles. Job titles mean different things at different companies and don't lend clarity to what someone is doing. We opted instead to see our users through a behavioral lens, similar to jobs-to-be-done.

    Armed with the above mindset shift, we went through five phases of persona development:

    1. Corralled collective UXR knowledge about users, pulling out insights related to user behavior (jobs and responsibilities).

    2. Created draft personas.

    3. Socialized these drafts with cross-functional collaborators for feedback and critique. This was a crucial step. It not only ensured we were creating something of actual value for the users of personas but also fostered a sense of shared ownership: these weren't the UXR team's personas; these were Cloud's personas.

    4. Incorporated feedback and published new personas.

    5. Updated the personas as new knowledge was gained.

  • The main deliverable for this work was something called the Persona Framework. This framework is similar to the image on a puzzle box: it shows how all of our cloud platform users relate to one another.

    Within the framework, there are individual, product/service agnostic Archetypes for each domain on the platform. These Archetypes describe what these user types are trying to do on the platform and their core responsibilities (e.g., FinOps Manager: understand cloud cost and manage that spend, etc.).

    Each Archetype has tabs on it that contain the research it was based on and any notes related to it (e.g., updates that have occurred to it).

    Here are (blurred) images of these deliverables:

    Persona Framework and Archetypes

  • In Think Like a UX Researcher, Travis and Hodgson say:

    "Here's a secret many people don't know: You don't need to create personas to be user-centered. Creating personas should never be your goal — understanding users' needs, goals and motivations should be your goal."

    To the extent that user personas do that — i.e., increase user empathy and user-centered development — that is the measurement of their success.

    The Cloud team did a pre-/post-survey of stakeholder attitudes about users and their usage of personas. This allowed us to measure attitudes related to user understanding/personas, albeit in an imprecise way.

    We were also able to see usage metrics for the persona artifacts via site traffic data, Figma insights, and how often they were attached to epics in PM tools.

    Anecdotally as well, we began seeing these new personas used as the basis of all workshops and ideation sessions.

    Overall Impact

    • Increased user-centered thinking/design (personas used in Figma/workshops)

    • Increased user-centered development (personas used in epics)

    • IBM UXR wide personas thought leader

    • Helped UXR teams beyond Cloud build personas