
The problem space: Energy Industry

Managing outages in the energy sector, especially with transformers, meters, and Distribution Line Outage Controllers (DLOCs), is complex. Aging infrastructure and fragmented data can delay fault detection and response, leading to unnecessary downtime and higher costs.

By integrating data from these systems into a real-time platform, utilities can quickly identify issues, prioritize repairs, and streamline dispatch. This reduces manual work, improves response times, and enhances grid reliability.

Image sources: https://www.entergy.com/brightfuturela/jwl-station/ and https://www.bizjournals.com/southflorida/news/2024/10/24/nextera-energy-duane-arnold-nuclear-plant.html

Project Preface

As the UX Lead, I was tasked with designing a Transformer Outage Management Tool for Entergy, a large energy company facing significant inefficiencies in dispatching field crews (truck rolls) and understanding transformer performance. Entergy struggled to determine when to send a truck after a transformer spike exceeded normal operating thresholds. The problem was exacerbated by the complexity of meter-to-transformer relationships: some meters were connected to multiple transformers, making it difficult to accurately pinpoint where an issue was occurring. Additionally, DLOCs (Distribution Line Outage Controllers) further complicated the decision-making process because multiple transformers were often linked to the same DLOC.

The goal of the project was to create a tool that would reduce unnecessary truck rolls, improve decision-making, and provide real-time insights into transformer health, allowing Entergy to dispatch crews more efficiently.

Understanding the Problem:

1. Inefficient Truck Rolls: Dispatching field crews without clear, actionable data, leading to unnecessary trips.

2. Complex Meter-Transformer Mapping: Meters connected to multiple transformers made it difficult to track down the source of the issue.

3. Lack of Real-Time Insights: Limited real-time visibility into transformer and meter health made it difficult for operators to respond swiftly to issues.

To kick off the project, I conducted a series of contextual interviews with key stakeholders within Entergy. These interviews helped me better understand the workflows, pain points, and data needs of the system's primary users: field technicians, control center operators, plant managers, and executives.


Tools Used:

  • Zoom for remote interviews and Miro for collaborative real-time note-taking and journey mapping.
  • Dovetail for qualitative research analysis (coding user interviews and tagging pain points).
  • Google Sheets to track insights from interviews and develop user personas.

The full scope of this project is under NDA.

This case study provides an overview of my impact, contributions, and learnings as a designer. For additional information on the project, please contact me directly.

Who are we building for?

Technicians

Who I Am:
I’m a field technician at Entergy, responsible for inspecting and repairing transformers and ensuring the stability of the electrical grid. My job requires me to respond to outage alerts, diagnose transformer issues, and decide whether to roll a truck to the site. I work with complex systems and must act quickly to resolve any issues.

My Goals:

  • Ensure that all transformer maintenance and repairs are done safely and efficiently.
  • Accurately diagnose the cause of outages to determine if a truck roll is necessary.
  • Minimize downtime and ensure that transformers are brought back online as quickly as possible.
  • Communicate effectively with dispatchers and operators to ensure timely response to outages.

What I’m Thinking:

  • “Which transformer is the issue? I have multiple meters connected to a single transformer, and I can’t pinpoint which one is malfunctioning. This means I might end up checking several transformers or meters, wasting time and fuel.”
  • “I need clearer data or alerts that tell me exactly which transformer or component is faulty. I don’t have time to waste on unnecessary checks.”
  • “The data I have access to isn’t detailed enough to help me quickly diagnose the problem. I’m relying on my instincts and what I can physically see, but I’d prefer a system that tells me more.”

My Pain Points:

  • It’s difficult to determine which transformer is causing an issue when multiple meters are connected to different transformers, leading to confusion about whether a truck roll is needed.
  • There’s not enough real-time data on which transformers are likely to cause issues, leading to unnecessary trips or missed problems.
  • The lack of clear data connections between meters and transformers makes it harder to diagnose issues efficiently.

Areas of Opportunity to Make My Life Easier:

  • A tool that provides real-time updates and clear alerts indicating which transformer is most likely responsible for an outage or spike.
  • An interface that shows the relationship between transformers and meters, so I can quickly understand which transformer to investigate.
  • A recommendation engine that suggests whether a truck roll is needed based on historical data and current trends in transformer health.

Operators

Who I Am:
I’m a control center operator at Entergy, monitoring the performance of transformers and ensuring the stability of the power grid. I track outage alerts and work to prioritize issues based on their severity, while coordinating with field technicians and dispatchers. I rely on accurate data to make informed decisions about which transformer needs attention.

My Goals:

  • Monitor transformers in real-time to detect any potential issues that might require immediate attention.
  • Correlate data from various systems to quickly identify the root cause of outages or spikes.
  • Provide clear, actionable insights to field technicians and dispatchers to minimize downtime.
  • Ensure all outages are addressed in a timely and efficient manner.

What I’m Thinking:

  • “I need to identify the exact issue quickly. But the system doesn’t make it easy to connect the data points between transformers and meters. It’s frustrating to have to manually cross-reference several systems to get the full picture.”
  • “I need to make quick decisions, but the data I have is scattered and not immediately actionable. I feel like I’m always playing catch-up instead of being proactive.”
  • “There’s too much manual work involved in diagnosing and dispatching crews. I need a better way to correlate all this information and make decisions faster.”

My Pain Points:

  • The system often presents raw data that is difficult to interpret, making it hard to determine which transformers are causing issues.
  • There’s no clear visualization of how meters and transformers are connected, so I have to manually track down this information.
  • Alerts can be too broad, leading to uncertainty about which issue is the highest priority.

Areas of Opportunity to Make My Life Easier:

  • A real-time dashboard that provides an overview of transformer health, with clear indicators for issues like spikes, outages, and connections to meters.
  • Visual representations of meter-to-transformer relationships, so I can quickly understand the scope of the issue.
  • An alert system that notifies me of high-priority outages with clear actionable recommendations.

Resourcing Managers

Who I Am:
I’m a resourcing manager at Entergy, responsible for ensuring that the right personnel and equipment are available to respond to outages and maintenance needs. My role involves managing staffing schedules, allocating resources efficiently, and ensuring that the field technicians and support staff are properly deployed to handle transformer issues as they arise. I work closely with dispatchers and field managers to ensure there are no gaps in coverage.

My Goals:

  • Ensure that the right resources (personnel, equipment, and vehicles) are available at the right time and place to respond to outages.
  • Optimize staffing schedules to avoid overstaffing or understaffing while maintaining operational efficiency.
  • Track resource utilization to ensure that personnel and equipment are being deployed effectively and are not under or overused.
  • Minimize downtime and disruption by ensuring that sufficient resources are available for each incident.

What I’m Thinking:

  • “There are so many alerts coming in at once, and I have no way to prioritize them. How do I decide which technician should go where?”
  • “I have to check multiple systems to figure out which crew is available and whether they have the right equipment for the job. It’s a bit of a guessing game.”
  • “I wish there was a way to better visualize which transformers are the highest priority so I can optimize the technician deployment and reduce unnecessary trips.”
  • “I need to make sure the technicians get to the right site with the right information. I can’t afford any errors or delays, but the current system makes it harder than it should be.”

My Pain Points:

  • It’s difficult to predict exactly when and where resources will be needed, which often leads to either underutilizing or overburdening staff.
  • The scheduling system is manual and doesn’t provide real-time visibility into crew availability or workload, making it challenging to make adjustments on the fly.
  • There’s no clear way to track which resources (e.g., specific field technicians or specialized equipment) are best suited for particular tasks, which can lead to inefficiency.
  • Coordinating between various teams (dispatch, field technicians, and operations) to ensure resource availability often feels disjointed and reactive.

Areas of Opportunity to Make My Life Easier:

  • A real-time resource management system that shows current availability and workload for all personnel and equipment, enabling me to allocate resources efficiently.
  • A predictive scheduling tool that uses historical data to forecast when and where outages are likely to occur, allowing for better proactive planning of resources.
  • A centralized dashboard that integrates with dispatch and field teams, showing not only current resources but also upcoming needs, so I can make adjustments in advance.
  • Automated prioritized alerts and notifications for when staffing levels are too high or low in certain areas, helping me balance resources more effectively.

Plant Managers

Who I Am:
I’m a plant manager at Entergy, responsible for overseeing the day-to-day operations of the power plant, including the safe and efficient operation of transformers, generators, and other critical infrastructure. I coordinate with control room operators, technicians, and other departments to ensure that plant operations run smoothly and that any outages are quickly addressed. My job also involves ensuring that the plant complies with regulatory standards and that safety protocols are followed.

My Goals:

  • Maintain the safe and efficient operation of all plant systems and equipment.
  • Ensure that outages or equipment failures are detected and addressed as quickly as possible to minimize downtime and maintain grid stability.
  • Optimize plant performance to meet both short-term production targets and long-term sustainability goals.
  • Train and manage plant personnel to ensure that safety standards and operational protocols are followed.

What I’m Thinking:

  • “I need real-time data from the field, but I don’t have visibility into what’s actually happening at the transformer sites. How can I ensure everything is running smoothly without the full picture?”
  • “The current systems give me limited data on what’s happening out in the field, and I don’t have the tools to proactively address potential issues before they escalate.”
  • “I’m under pressure to reduce costs, but without more data, I can’t optimize resources effectively. How can I make decisions without all the information I need?”
  • “Predictive maintenance would help us get ahead of issues before they become outages, but I don’t have that capability right now. It’s all reactive.”

My Pain Points:

  • It’s challenging to get a real-time view of the plant’s operational status, especially when multiple transformers or systems are experiencing issues simultaneously.
  • The current system doesn’t provide an integrated view of meter and transformer performance, making it difficult to diagnose issues quickly and accurately.
  • Coordinating maintenance schedules with the control center and field teams can be complex and time-consuming.
  • Managing plant operations during an outage or crisis is stressful when I don’t have real-time insights into which systems or personnel are available.

Areas of Opportunity to Make My Life Easier:

  • A comprehensive plant operations dashboard that integrates data from transformers, meters, and DLOCs, offering real-time status updates and forecasted performance insights.
  • Real-time alerts that notify me of any system anomalies or potential failures before they escalate into critical outages.
  • A collaborative platform that connects plant operations, control room, and field teams, enabling more efficient communication and faster response times.
  • Automated maintenance scheduling and tracking that integrates with plant operations, so I can better plan and manage resources, maintenance, and repairs without disrupting plant efficiency.

Executives

Who I Am:
I’m an executive at Entergy, responsible for overseeing the strategic direction of the company and ensuring operational efficiency. I focus on high-level decision-making, risk management, and ensuring that the company meets its financial and sustainability goals. My role involves reviewing performance reports, evaluating new technologies, and ensuring compliance with regulatory standards.

My Goals:

  • Ensure that Entergy operates efficiently, meets regulatory requirements, and maximizes profitability.
  • Maintain a strong reputation for safety, reliability, and customer satisfaction.
  • Drive the adoption of new technologies and innovations that improve operational efficiency and reduce costs.
  • Align resources and strategies to meet long-term business and sustainability goals, including transitioning to cleaner energy sources.

What I’m Thinking:

  • “How do I get a clear, high-level view of the company’s transformer health and maintenance performance? Right now, I’m getting fragmented reports from different departments.”
  • “We need to lower operational costs, especially the costs associated with unnecessary truck rolls. How can I optimize the maintenance process if I don’t have clear insights into the data?”
  • “The field teams are doing their best, but I need a system that can provide us with a clearer picture of what’s happening on the ground and let us predict problems before they escalate.”
  • “I need data that can help me make decisions on resource allocation. Without better insights into transformer performance and maintenance needs, I’m not confident that I’m making the most cost-effective decisions.”

My Pain Points:

  • Difficulty accessing real-time data that gives a clear view of operational performance, including outage management and resource utilization.
  • Limited visibility into how quickly and effectively the organization is responding to outages, which affects decision-making during crisis situations.
  • Pressure to minimize costs while balancing the need for high performance and reliability.
  • Slow or fragmented decision-making due to disjointed information systems, which could impact the company’s ability to respond swiftly to external market and regulatory pressures.

Areas of Opportunity to Make My Life Easier:

  • A real-time executive dashboard that consolidates key performance indicators (KPIs), such as outage resolution time, resource utilization, and system reliability.
  • Predictive analytics that help forecast potential risks and outages, allowing for more proactive planning and decision-making.
  • Mobile access to real-time data so I can monitor the company’s performance while on the go.
  • A comprehensive reporting system that consolidates financial, operational, and safety data in a user-friendly format for quick decision-making.

Progressive Disclosure Strategy:

Objective: In building a Transformer Outage Management Tool, I implemented a Progressive Disclosure strategy to ensure that users are presented with the right level of information at the right time. This approach was designed to keep the interface simple and focused, minimizing cognitive overload, while providing deeper, more detailed data as users need it. By tailoring the content based on the user’s role and task, I was able to ensure that users could access relevant information efficiently, without feeling overwhelmed.

Key Principles of Progressive Disclosure

  1. Role-Based Content Display
    Each persona has a unique view based on their responsibilities. While everyone starts from the same dashboard, the tool gradually reveals more specific data depending on the user’s role. For example, a technician will see different data compared to an executive or dispatcher.
  2. Context-Sensitive Information
    Information is only revealed based on the user’s current context. For instance, a technician will see detailed transformer diagnostics once they select a specific unit to work on. For a control center operator, the tool might show a high-level system overview first, with the option to click into individual outages for more information.
  3. Levels of Information Detail
    I designed the tool so that each layer of data builds on the previous one. The more the user interacts with the system, the more detailed the information becomes. For example, a user can click on an alert to expand it into a detailed outage report, providing only what’s necessary at that moment, rather than overwhelming them upfront.
  4. Clear Pathways to Deeper Insights
    The tool includes intuitive navigation options to allow users to dive deeper into the data when needed. These links or buttons are only visible when they make sense for the task at hand, ensuring that users don’t get distracted by irrelevant data points.

Detailed Breakdown of Progressive Disclosure in the Tool

  1. Dashboard View (Universal Starting Point)
    Every user begins with the System Health Dashboard, which shows a high-level summary of transformer health, current outages, and key performance indicators (KPIs). At this stage, the dashboard is simplified to prevent information overload:
    • Color-coded indicators show whether transformers are operational (green), in warning (yellow), or in failure (red).
    • Overview of Active Outages displays the number of customers impacted, sorted by severity.
    This universal starting point keeps things simple. Each persona sees only the most relevant, high-level data.
  2. Role-Specific Adjustments
    • Field Technicians:
      When a Field Technician selects a specific transformer or meter, the tool reveals detailed information, including:
      • Transformer specifications
      • Fault history and past repair issues
      • Health metrics such as temperature, load, and voltage
      • Troubleshooting guides based on recent fault codes
      • Repair status options like “In Progress” or “Completed”
    • I made sure that technicians only see detailed, task-specific data once they select the transformer they’re working on. This keeps them focused on the relevant details without cluttering their workspace with unnecessary information.
    • Control Center Operators:
      Operators get a real-time monitoring dashboard with a broad system overview. When they click on any alerts or specific transformers, more detailed data becomes available:
      • Outage details such as location, severity, and affected customers
      • Predictive Analytics that highlight potential failures based on trends (e.g., temperature spikes, load imbalances)
    • This allows operators to monitor system health at a glance but gives them the option to delve deeper when a particular issue arises.
    • Dispatchers:
      The dispatcher’s starting view includes a map of active outages and technician availability. When they click on an outage or technician, the following information is revealed:
      • Outage details, including the severity, location, and customer impact
      • Technician assignments based on proximity and availability
      • Resource allocation, including the availability of tools and vehicles
    • By revealing this information only when necessary, dispatchers can efficiently assign crews and resources without being distracted by too much data at once.
    • Plant Managers:
      Plant managers start with a System Performance Overview, which provides a high-level look at the grid’s health and predictive maintenance alerts. When they click on specific alerts or KPIs, more granular information is revealed:
      • Asset-level data, showing performance trends and fault histories
      • Predictive maintenance insights based on AI-generated alerts, helping them schedule preventative actions
      • Operational KPIs, such as Mean Time to Repair (MTTR) or technician efficiency
    • This progressive approach allows plant managers to stay on top of the grid’s performance while drilling down into specifics as needed.
    • Executives:
      Executives start with an Executive Dashboard, which displays summary data on grid performance, outage metrics, and financial impact. When they click into a report, more detailed financial and performance breakdowns are revealed:
      • Financial impact reports that show cost breakdowns by outage, repair time, and resource usage
      • Strategic insights on grid health and long-term performance forecasts
    • The tool provides high-level strategic insights at the beginning, and executives can access deeper reports as needed to guide decision-making.
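The color-coded status indicators at the top of this flow could be driven by a simple classification rule like the one below. The threshold values here are illustrative assumptions, not Entergy's actual operating limits.

```python
# Toy sketch of the green/yellow/red indicator logic on the System Health
# Dashboard. Thresholds are invented for illustration.
def transformer_status(load_pct: float, temp_c: float) -> str:
    """Map raw transformer readings to a dashboard color indicator."""
    if load_pct >= 100 or temp_c >= 105:
        return "red"     # failure: immediate attention
    if load_pct >= 85 or temp_c >= 90:
        return "yellow"  # warning: trending toward a fault
    return "green"       # operational
```

Keeping the rule this explicit is what makes the top-level dashboard scannable: the detailed readings only appear once a user drills into a specific unit.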

Benefits of Progressive Disclosure in This Tool

  1. Avoids Overload:
    By presenting data in layers, I ensured that each persona only sees the most relevant information at the right time. This prevents users from feeling overwhelmed and helps them focus on their current task.
  2. Increases Efficiency:
    The tool helps users stay focused by showing only what’s necessary for their current workflow. They can easily access more detailed information when needed, without having to sift through unrelated data.
  3. Tailored Experience:
    Each persona receives a tailored experience that aligns with their role and responsibilities. This ensures that the tool supports their specific tasks and decision-making processes without introducing irrelevant information.
  4. Improves Decision-Making:
    By gradually revealing detailed information as needed, I helped ensure that users could make informed decisions without feeling rushed or overwhelmed. They have all the data they need, but only when it’s relevant.
  5. Future-Proof: The strategy allows the tool to scale. As new features or data are added, they can be progressively revealed to users in a way that maintains clarity and focus. The tool grows with the user’s needs, without introducing unnecessary complexity.

Visual Examples of Progressive Disclosure

  • Initial Screen: A simple, high-level dashboard with essential KPIs and a quick view of outages and system health.
  • Expanded View: Clicking on an alert or asset brings up more detailed data like fault history, health metrics, and repair instructions.
  • Deep Dive: For advanced users (like plant managers or control center operators), additional layers of predictive maintenance insights, KPIs, and historical trends are available through intuitive click-through options.

By using Progressive Disclosure, I was able to design a tool that helps each user access exactly what they need at the right time, while keeping the interface clean and intuitive. This approach enhances usability and ensures that the tool supports quick decision-making without overwhelming users with too much information.

My Contribution and Outcome (Sprint Breakdown):

1. Discovery & Research Phase (Sprints 1-6)

User Interviews & Stakeholder Alignment

Tools Used:

  • Zoom for remote interviews and Miro for collaborative real-time note-taking and journey mapping.
  • Dovetail for qualitative research analysis (coding user interviews and tagging pain points).
  • Google Sheets to track insights from interviews and develop user personas.

Methods Adopted:

  • I led a series of contextual interviews and in-situ observations with field technicians, operators, and managers. This allowed me to understand how they handled outage situations in the field, their decision-making process, and the tools they currently use.
  • Persona Creation: I synthesized the data into user personas using Google Sheets to categorize each user’s goals, pain points, and work contexts. We had personas like “Operator Olivia,” “Technician Tom,” and “Resourcing Manager Rachel,” each with unique workflows that informed feature prioritization.
  • Journey Mapping: Using Miro, I created detailed journey maps for each persona, highlighting pain points, emotional states, and opportunities for improvement. This was a crucial method to understand user interactions, especially the disconnects in current outage management systems.

Outcome:

  • Developed clear, actionable personas and journey maps that directly informed feature prioritization.
  • Stakeholder alignment around user needs, establishing a shared understanding of what the tool should solve.

Competitive Analysis & Market Research

Tools Used:

  • Miro for competitive analysis boards, cataloging features, strengths, and weaknesses of competitor tools.
  • Google Scholar and JSTOR to review research papers and case studies about outage management in the energy sector.
  • Figma for exploring similar tools’ user interfaces and interaction design.

Methods Adopted:

  • Conducted a competitive landscape analysis to review industry leaders in outage management tools, such as PowerGrid, GridPoint, and GE Digital. I reviewed their UI/UX, usability flaws, and common pain points.
  • Analyzed each tool’s alert prioritization systems, resource allocation features, and how real-time data syncs were handled.
  • Used Heuristic Evaluation methods to identify key usability flaws in competitors’ tools, focusing on issues like information overload and lack of mobile-first design.

Outcome:

  • I identified several opportunities for differentiation, especially around creating a more intuitive mobile interface for field technicians and an improved alert system for operators.
  • Recommendations on design features like color-coded priority alerts, modular dashboards, and real-time status updates for efficient decision-making.

 

2. Design Phase (Sprints 7-18)

Wireframing & Prototyping

Tools Used:

  • Figma for wireframing and high-fidelity design; InVision for interactive prototypes.

Methods Adopted:

  • Developed low-fidelity wireframes in Figma using a mobile-first approach to ensure that field technicians’ mobile experiences were optimized. We knew mobile performance was a pain point for field workers, so this was an area of special focus.
  • For the dashboard, I adopted the Modular Design method, ensuring that the operator dashboard could be easily customized depending on what data was most critical to the user.
  • I used atomic design principles in Figma, ensuring components (buttons, alerts, tables, etc.) were reusable across different screens.
  • Prototyping: I built interactive prototypes using Figma to simulate real-time interaction, specifically focusing on the alert system, technician dispatching, and real-time data updates. These prototypes were shared with stakeholders for early feedback.

Outcome:

  • The wireframes and prototypes received early-stage feedback, especially around the alert prioritization and resource allocation screens, which I refined further based on user needs.
  • I ensured accessibility was a focus—contrast ratios, text sizes, and interactive elements were tailored to be highly legible for users working in difficult environments.

User Testing and Iterative Refinement

Tools Used:

  • Lookback.io for conducting remote usability testing, recording user interactions with the prototype.
  • Hotjar to track user behavior (click heatmaps, session recordings) once the tool was in a beta test phase.

Methods Adopted:

  • Conducted moderated user testing with a small group of field technicians and operators to validate our prototypes. I used Lookback.io to observe how they interacted with the system in real-time and where they struggled.
  • Based on insights from testing, I iterated on features like the alert system (to reduce cognitive load) and data visualizations (making them clearer and more actionable).
  • Applied card sorting techniques in Miro to determine the most intuitive information hierarchy for the dashboard, ensuring operators could easily navigate alerts, transformer statuses, and resource allocation features.

Outcome:

  • The testing highlighted a need for a more visual and dynamic alert system, which led to the decision to implement a color-coded priority system and a filterable alert list for operators to prioritize actions effectively.
  • Feedback from technicians emphasized the importance of offline functionality, which led us to prioritize a mobile version of the tool that could function with intermittent connectivity.

 

3. Development Phase (Sprints 7-18)

Design Handoff and Collaboration with Development

Tools Used:

  • Figma for precise design handoff, ensuring developers had access to specs, measurements, and assets.
  • Microsoft Teams for real-time communication with the development team and Jira for sprint management and issue tracking.

Methods Adopted:

  • I held weekly design critique sessions with the development team to ensure that the designs were feasible and that we were aligned on user stories.
  • Created detailed design specifications in Figma to ensure that developers had access to all necessary design assets and measurements (e.g., padding, margin sizes, color codes).
  • Maintained close collaboration with the backend team to ensure real-time data (e.g., transformer health and outage status) could be represented correctly on the front end without latency or data mismatches.

Outcome:

  • This close collaboration ensured smooth design-to-development transitions, particularly in complex areas like real-time data sync and alert prioritization.
  • Developers implemented a dynamic alert system where operators could see real-time updates of transformer health status and prioritize alerts based on severity.

User Testing & Feedback Integration

Tools Used:

  • UsabilityHub for remote usability testing, where users could rank different design iterations of features like alert systems or dashboards.
  • Google Forms for feedback surveys post-user testing.

Methods Adopted:

  • Conducted multiple rounds of A/B testing for alert system designs, including testing push notifications for high-priority outages vs. in-app alerts.
  • I implemented feedback loops, iterating on designs based on responses from both internal testers (field engineers) and external users (technicians in the field).
  • Utilized card sorting to validate that information was being presented in a logical flow and that operators could quickly understand the transformer status without confusion.

Outcome:

  • Insights gathered led to refinements such as clearer labeling of alert categories (e.g., critical, low, maintenance) and user interface tweaks to reduce decision fatigue during high-stress outage situations.

4. Finalization & Delivery (Sprints 19-30)

Design Handoff and Final Iterations

Task:

  • Preparing the final design for production and handing it off to the development team.

Action:

  • I organized a comprehensive design handoff that included high-fidelity designs, interactive prototypes, and design specifications (e.g., color codes, spacing, typography).
  • I reviewed all designs with the development team to ensure every design element was implemented accurately.
  • During this phase, I also provided final revisions based on the results of user acceptance testing (UAT), ensuring all feedback was addressed and the product was aligned with user expectations.

Outcome:

  • The designs were finalized and handed over to the development team, which allowed the tool to enter the deployment phase.

Post-Launch Monitoring & Enhancements

Task:

  • Overseeing the post-launch monitoring phase and suggesting improvements based on user feedback and performance data.

Action:

  • I continued to monitor user feedback post-launch, analyzing both qualitative (user comments, support tickets) and quantitative (usage metrics) data to identify areas for enhancement.
  • I worked with the product team to define new features for future iterations based on feedback, such as the introduction of AI-driven predictive analytics for outage forecasting.

Outcome:

  • We made refinements to the tool based on real-world usage, ensuring that users had an increasingly streamlined experience and the tool evolved in response to user needs.

 

5. Post-Launch & Continuous Improvement (Sprints 31-36)

Post-Launch Monitoring & User Feedback

Tools Used:

  • Hotjar for user behavior analytics (heatmaps, session recordings).

Methods Adopted:

  • Post-launch, I monitored real-time user interactions to identify pain points users encountered when they began using the tool in actual outage scenarios.
  • Based on feedback, I suggested new features like AI-powered predictive maintenance alerts for transformers, allowing operators to proactively address issues before failures occurred.

Outcome:

  • Continuous improvements were made, including refining the resource allocation feature, adding predictive analytics to anticipate transformer failures, and making the mobile app even more offline-friendly.

6. Leadership and Team Growth

Team Velocity & Collaboration

Task:

  • Managing team velocity and encouraging best practices in collaboration between design, development, and product teams.

Action:

  • I ensured the design and development teams were aligned on goals and shared ownership of the product vision.
  • I encouraged practices like pair programming and design reviews to ensure continuous feedback and collaboration.
  • We held regular sprint retrospectives where the team reflected on what went well, where we could improve, and how we could work more efficiently.

Outcome:

  • The team’s velocity improved over time as we refined our processes and implemented lean design principles—we were able to deliver features faster without compromising quality.
  • The collaboration between design and development teams created a shared sense of ownership and a stronger team dynamic.

Testing Procedures and Findings:

As part of the design process for the Transformer Outage Platform, I began by creating low-fidelity wireframes to quickly visualize and test my initial hypotheses about user needs and the platform’s functionality. These wireframes served as a cost-effective way to gather feedback early, ensuring we were on the right track before investing time and resources into high-fidelity designs.

Testing Focus:

  • Data Visualization: I hypothesized that integrating transformer, meter, and DLOC data into a single, easily digestible view would help users identify and prioritize issues more effectively.
  • User Flow and Information Architecture: I wanted to validate whether the proposed layout and navigation would allow users (such as dispatchers and operators) to quickly access critical data without feeling overwhelmed.
  • Interaction Simplicity: Ensuring that key actions—like assigning a technician or viewing system health—could be performed with minimal clicks was a top priority.

Operator Test

Test Objective:
Sarah was asked to use the Operator Dashboard to diagnose an outage scenario and respond to an alert. Specifically, she needed to check transformer statuses, view connected meters, and take action on a critical alert.

Test Procedure:

  • Sarah logged into the system and was presented with a list of ongoing outages, each with different severity levels.
  • She selected an outage triggered by a transformer failure and was asked to drill down into the relevant data: transformer status, connected meters, and DLOC information.
  • The task was to determine which meters were affected, prioritize which transformer to check, and then mark the issue as resolved.

Findings:

  • Pain Point: Sarah found it difficult to quickly understand which transformer was causing the most widespread outage because the alerts were not prioritized well. She was overwhelmed by too much information on the main dashboard, making it hard to focus on critical issues first.
  • User Frustration: The visual map of transformers was not interactive enough. Sarah wanted to click on specific transformers to see connected meters or DLOCs in real time.
  • Solution Needed: She also mentioned that being able to filter by outage severity would help her focus only on high-priority issues.

Design Decision:

  • The dashboard was revised to include filtering capabilities and saved searches for prioritizing outages based on severity (e.g., critical, high, medium, low).
  • The interactive map feature was enhanced to allow Sarah to click on individual transformers and meters to see real-time diagnostic data (transformer health, affected meters).
  • Alert prioritization logic was refined so that high-severity outages were always listed at the top, and more granular, actionable information was displayed for each alert.
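The refined prioritization can be sketched as a severity-ranked sort. This is a minimal illustration only; the severity labels, field names, and tie-breaking rule below are assumptions, not the production schema or Entergy's actual logic.

```python
# Hypothetical sketch: order alerts so critical outages surface first,
# with newer alerts breaking ties within the same severity level.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize_alerts(alerts):
    """Sort alerts severity-first; newest first within a severity."""
    return sorted(
        alerts,
        key=lambda a: (SEVERITY_RANK[a["severity"]], -a["reported_at"]),
    )

alerts = [
    {"id": "T-102", "severity": "low", "reported_at": 1710000300},
    {"id": "T-045", "severity": "critical", "reported_at": 1710000100},
    {"id": "T-077", "severity": "medium", "reported_at": 1710000200},
]
print([a["id"] for a in prioritize_alerts(alerts)])
# → ['T-045', 'T-077', 'T-102']
```

Sorting on a tuple keeps the rule explicit and easy to extend (e.g., adding affected-meter count as a third tie-breaker).
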

Technician Test

Test Objective:
Tom was asked to use the Technician view in the Mobile App to receive a dispatch job for a malfunctioning transformer. He was to read the job details, perform basic troubleshooting, and mark the task as resolved.

Test Procedure:

  • Tom was dispatched to inspect a malfunctioning transformer that was connected to multiple meters and DLOCs.
  • The app displayed the transformer’s basic health data, including any past outage history, and also provided a map to the job site.
  • The task was for Tom to view the transformer’s issue and complete basic diagnostic steps in the app, then update the job status to “Resolved” once finished.

Findings:

  • Pain Point: Tom felt that the transformer health data provided in the app was too basic, and he was unable to perform in-depth troubleshooting in the field. He often had to contact the control center for more specific information.
  • Frustration with Mobile Interface: The interface was not intuitive for adding diagnostic notes or updating job status quickly. There were too many steps involved in closing a task, which could lead to delays in completing work.

  • Solution Needed: Tom wanted a quick-access view to diagnostic details, including data from previous visits, and an easier way to communicate directly with the control center for further troubleshooting help.


Resourcing Test

Test Objective:
Rachel was asked to use the Resource Management Module to assign technicians to an outage scenario and track their progress in real-time.

Test Procedure:

  • Rachel received a list of ongoing outages and was asked to allocate technicians to various locations based on real-time availability, technician skills, and proximity.
  • She was also tasked with adjusting the deployment of resources in response to delays or new incoming alerts.
  • The task was to review technician performance after the assignments were made and adjust schedules accordingly.

Findings:

  • Pain Point: Rachel found it difficult to track technicians' progress once they were dispatched. The platform showed technician availability, but it didn't update their current locations or how far along they were in resolving an issue in real time.
  • Resource Allocation: She also struggled to prioritize resources effectively during overlapping outages when multiple high-priority issues arose simultaneously.
  • Solution Needed: Rachel needed better visibility into field progress and a more intuitive system for managing overlapping deployments.

Design Decision:

  • The resource management platform was enhanced to include a real-time tracking feature, showing technicians’ current locations, progress status on jobs, and expected arrival times at their next destination.
  • A drag-and-drop scheduling tool was added, allowing Rachel to easily reassign technicians based on live updates from the field or incoming alerts.
  • Overlapping outage priority was introduced, where the system would automatically suggest the best technician deployments based on factors like distance, skillset, and outage severity.
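The deployment suggestion described above can be approximated with a weighted score; the weights, field names, and scoring formula below are illustrative assumptions for the sketch, not Entergy's actual model.

```python
# Hypothetical scoring sketch: nearer, better-matched technicians on more
# severe outages score higher; the best-scoring available technician wins.
SEVERITY_WEIGHT = {"critical": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5}

def deployment_score(distance_km, has_skill, severity):
    """Higher score = better candidate for this outage."""
    proximity = 1.0 / (1.0 + distance_km)   # nearer is better
    skill = 1.0 if has_skill else 0.2       # penalize a skill mismatch
    return SEVERITY_WEIGHT[severity] * proximity * skill

def suggest_technician(technicians, outage):
    """Pick the available technician with the best score for the outage."""
    return max(
        (t for t in technicians if t["available"]),
        key=lambda t: deployment_score(
            t["distance_km"],
            outage["required_skill"] in t["skills"],
            outage["severity"],
        ),
    )

techs = [
    {"name": "A", "available": True, "distance_km": 5, "skills": {"transformer"}},
    {"name": "B", "available": True, "distance_km": 2, "skills": {"meter"}},
    {"name": "C", "available": False, "distance_km": 1, "skills": {"transformer"}},
]
outage = {"required_skill": "transformer", "severity": "critical"}
print(suggest_technician(techs, outage)["name"])  # → A
```

In this toy example the skilled-but-farther technician outranks the nearer one with the wrong skill, which is the trade-off the suggestion feature had to make visible.
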

Plant Manager Test

Test Objective:
Pam was asked to use the Dashboard to monitor real-time performance data from both the field and the plant, and to anticipate issues based on transformer performance and resource allocation.

Test Procedure:

  • Pam logged into the system and was presented with a visual map showing the current status of transformers and meters, as well as real-time performance data from the field.
  • She was tasked with analyzing transformer performance data to anticipate potential failures and proactively deploy resources to address issues.

Findings:

  • Pain Point: Pam found it difficult to integrate data from the field (e.g., transformer status, outage alerts) with plant-level data (e.g., energy flow, grid performance). This made it harder to anticipate plant-wide problems.
  • Data Silos: There was no way for Pam to see how field issues (outages, transformer failures) were directly impacting the overall grid performance in real-time.
  • Solution Needed: Pam needed a centralized view that showed both field and plant-level data in a way that could help her anticipate potential disruptions across the entire system.

Design Decision:

  • The Dashboard was redesigned to integrate both field data and plant performance metrics in a single view, allowing Pam to quickly assess how an issue in the field might impact plant performance.
  • The visual map was updated to show real-time energy flow, and alerts were enhanced to include warnings about potential grid-wide disruptions based on field issues.

Executive Test

Test Objective:
Greg was asked to use the Dashboard to review key performance metrics, including outage response times, downtime costs, and technician efficiency. He was also tasked with identifying trends and making decisions based on the data.

Test Procedure:

  • Greg was presented with an executive overview of performance, showing key metrics like average response time, technician efficiency, cost per outage, and overall system health.
  • He was asked to identify problem areas, evaluate how resources were being allocated, and make recommendations for improving operational efficiency.

Findings:

  • Pain Point: Greg felt the KPIs were not actionable. While he could see high-level performance data, he was unable to drill down into specific metrics to understand why certain KPIs were underperforming.
  • Lack of Drill-Down Capabilities: He wanted the ability to easily click into any metric (e.g., outage response time) and view specific data points, such as individual team performance or causes of delays.
  • Solution Needed: Greg wanted the dashboard to allow him to drill down into the data and provide contextual insights, like why certain teams were performing poorly or where delays were happening in the process.

Design Decision:

  • The dashboard was enhanced with progressive disclosure, allowing Greg to click on any KPI and view a more detailed breakdown, such as the root causes of delays, the performance of individual teams, and historical trends.
  • Contextual insights were added so Greg could see why performance dipped in certain areas, helping him make more informed, strategic decisions.

What the Data is Saying:

The needs of each persona varied greatly, especially when comparing the field roles (operator, technician) with leadership roles (resourcing manager, plant manager, executive). I learned that the system must be customizable and flexible to address the specific workflows of each user. For example, a dashboard for an executive should highlight high-level KPIs and financial metrics, whereas a technician needs detailed job assignments and real-time data on transformer performance. The ability to tailor the tool for each user role is critical for adoption and user satisfaction.

Work In Progress...

The full scope of this project is under NDA.

This case study highlights my impact, contributions, and learnings as a designer. For additional information on the project, please contact me directly.

Field studies

Definition

Observing users in their natural environment to understand real-life interactions with a product or service.

When to Use

Conduct field studies to observe users in their real environment to capture authentic behaviors.

How to Perform

1. Define goals and decide on the context to study.
2. Prepare observation guidelines.
3. Observe and take notes without interrupting.
4. Analyze data for insights.

Template Sources

Nielsen Norman Group and UX Design Institute provide downloadable field study templates.

Customer feedback

Definition

Collecting feedback directly from users about their experience with your product.

When to Use

Use customer feedback to understand user satisfaction and identify areas needing improvement.

How to Perform

1. Use surveys, reviews, or customer service channels to gather feedback.
2. Analyze feedback for recurring themes.
3. Make design adjustments based on user suggestions.

Template Sources

Intercom, Qualtrics, and Zendesk can help collect customer feedback.

Desirability studies

Definition

Testing that focuses on the emotional appeal of a design to determine how desirable users find it.

When to Use

Use desirability studies to ensure your design evokes the desired emotional response.

How to Perform

1. Present users with the design.
2. Ask for feedback on aesthetics, appeal, and preferences.
3. Use results to improve the design’s emotional resonance.

Template Sources

Tools like UsabilityHub and Qualtrics can be used for desirability studies.

Session recording

Definition

Recording user sessions to observe behaviors and understand where they encounter issues.

When to Use

Use session recordings to gather qualitative insights on user interactions.

How to Perform

1. Use software to record interactions on the screen.
2. Review recordings for insights on usability issues.
3. Use findings to improve problem areas in the design.

Template Sources

Hotjar, FullStory, and Smartlook offer session recording tools.

Analytics reviews

Definition

Analyzing website or app analytics data to understand user behavior and make informed design decisions.

When to Use

Use analytics reviews for data-driven insights into user behavior.

How to Perform

1. Review metrics (e.g., bounce rate, time on page).
2. Identify trends and areas needing improvement.
3. Apply insights to refine design or content.

Template Sources

Google Analytics, Mixpanel, and Heap Analytics provide robust tracking and analysis tools.

Click tracking

Definition

Recording where users click within a webpage or app to see which elements attract attention.

When to Use

Use click tracking to identify popular and underutilized areas in your interface.

How to Perform

1. Install click-tracking software.
2. Collect data on click patterns and frequency.
3. Use insights to improve layout and interaction design.

Template Sources

Crazy Egg, Hotjar, and FullStory offer click-tracking features.

Eye tracking

Definition

A technique that measures where users look on a screen to understand attention and visual focus.

When to Use

Use eye tracking to optimize layouts and visual hierarchy.

How to Perform

1. Use eye-tracking software or hardware to track users’ gaze.
2. Analyze heatmaps to see where users focus most.
3. Make design changes to emphasize key areas.

Template Sources

Tobii, Lookback, and Gaze Recorder support eye-tracking tests.

A/B testing

Definition

A method that compares two versions of a design to see which performs better.

When to Use

Use A/B testing to test small changes and choose the most effective design option.

How to Perform

1. Create two versions of a design element (A and B).
2. Randomly assign users to each version.
3. Measure performance metrics (e.g., clicks, conversions) to determine the winner.
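Step 3 usually means a statistical comparison, not just eyeballing the two conversion rates. A minimal sketch using a two-proportion z-test follows; the sample numbers are made up for illustration.

```python
# Hypothetical sketch: compare conversion rates of versions A and B with a
# two-proportion z-test (stdlib only).
from math import sqrt
from statistics import NormalDist

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up results: B converts at 12% vs. A's 10%, 2,000 users per arm.
z, p = ab_z_test(200, 2000, 240, 2000)
print(round(z, 2), round(p, 3))
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be noise; dedicated tools like Optimizely run this kind of test for you.
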

Template Sources

Optimizely, Google Optimize, and Adobe Target provide A/B testing tools.

Surveys

Definition

A set of questions sent to users to gather quantitative and qualitative data about their experience and preferences.

When to Use

Use surveys to gather feedback from a larger audience about user needs, preferences, or pain points.

How to Perform

1. Draft concise, clear questions aligned with your goals.
2. Distribute the survey to target users.
3. Analyze responses for trends and actionable insights.

Template Sources

Google Forms, Typeform, and SurveyMonkey offer survey templates.

Usability Benchmarking

Definition

Setting measurable usability standards to compare a product’s performance over time or against similar products.

When to Use

Use usability benchmarking to evaluate progress in improving user experience.

How to Perform

1. Define metrics (e.g., task completion rate, time on task).
2. Conduct initial tests to set baseline data.
3. Use benchmarks to track improvements over time or against competitors.

Template Sources

Tools like Google Analytics, Crazy Egg, and Hotjar can assist in gathering benchmarking data.

Cognitive walkthrough

Definition

A usability evaluation where designers or experts walk through tasks to anticipate potential user challenges and errors.

When to Use

Use cognitive walkthroughs early in the design process to identify usability issues before user testing.

How to Perform

1. Define tasks from a new user’s perspective.
2. Step through each task, considering how easily a user could understand it.
3. Identify areas of confusion or obstacles and note suggestions for improvement.

Template Sources

UX tools like Lucidchart and Figma can help structure task flows for cognitive walkthroughs.

Unmoderated testing

Definition

Testing without a facilitator, where users complete tasks independently.

When to Use

Use unmoderated testing to reach a larger audience remotely.

How to Perform

1. Set up tasks and questions for users.
2. Let users complete tasks without guidance.
3. Analyze data to improve UX.

Template Sources

Maze, UserZoom, and UserTesting support unmoderated testing.

Participatory design

Definition

Involving users directly in the design process.

When to Use

Use participatory design for user-centered solutions.

How to Perform

1. Include users in brainstorming or prototyping.
2. Gather ideas and feedback firsthand.
3. Refine design based on user input.

Template Sources

Miro and Figma for collaborative workshops.

Heuristic evaluation

Definition

An expert review of a product’s usability based on predefined heuristics.

When to Use

Use heuristic evaluations to identify usability issues in a structured way.

How to Perform

1. Define heuristics (e.g., Nielsen’s 10 usability heuristics).
2. Evaluate the design for compliance.
3. Document issues for improvement.

Template Sources

Nielsen Norman Group provides guidelines on heuristics.

Moderated testing

Definition

Testing with a facilitator present to guide users through tasks.

When to Use

Use moderated testing for in-depth feedback with direct user interaction.

How to Perform

1. Prepare tasks and questions.
2. Guide users and observe their interactions.
3. Collect detailed feedback for improvement.

Template Sources

Lookback, UserTesting, and Zoom for remote moderated sessions.

Paper prototypes

Definition

Basic, low-fidelity sketches used to test concepts quickly.

When to Use

Use early in the design process for rapid testing and feedback.

How to Perform

1. Sketch screens on paper.
2. Have users “navigate” the paper prototype.
3. Note feedback and refine.

Template Sources

No software needed—only pen and paper.

Guerrilla testing

Definition

Quick, informal testing conducted in public spaces.

When to Use

Use guerrilla testing for quick feedback with minimal setup.

How to Perform

1. Approach people and ask them to complete a task.
2. Observe and take notes on behavior.
3. Use feedback to make rapid adjustments.

Template Sources

Any prototyping tool (Figma, Adobe XD) can be used for guerrilla testing.

5-Second test

Definition

A quick test to capture users’ first impressions of a design.

When to Use

Use early in design to ensure key messages are clear.

How to Perform

1. Show the design for 5 seconds.
2. Ask users what they remember or thought about it.
3. Use feedback to evaluate clarity.

Template Sources

UsabilityHub and Lookback have 5-second test features.

First click test

Definition

A usability test to see where users click first when trying to complete a task.

When to Use

Use to evaluate the intuitiveness of the design's clickable elements.

How to Perform

1. Present users with a screen or layout.
2. Ask them to complete a task.
3. Track the first click to determine usability.

Template Sources

UsabilityHub and Maze support first click testing.

Wizard of Oz

Definition

A method where a human secretly simulates the product’s functionality to test it with users.

When to Use

Use for testing complex ideas before building backend systems.

How to Perform

1. Define the tasks and functionality.
2. Use a human “operator” to simulate responses.
3. Gather insights without building complex tech.

Template Sources

Can be facilitated with basic prototyping tools like Figma and Miro.

Service blueprint

Definition

A visual map that outlines the full process of service delivery, both user-facing and behind-the-scenes.

When to Use

Use service blueprints to design or optimize complex, service-based experiences.

How to Perform

1. Define user actions and touchpoints.
2. Map supporting activities and systems.
3. Identify opportunities to improve service.

Template Sources

Miro, UXPressia, and Lucidchart offer templates for service blueprints.

Concept testing

Definition

Testing early-stage ideas or concepts to get user feedback before investing in development.

When to Use

Use concept testing before committing to new ideas to ensure they resonate with users.

How to Perform

1. Present users with the concept.
2. Gather feedback on feasibility and value.
3. Refine based on insights.

Template Sources

UsabilityHub and Google Forms for simple concept testing surveys.

Cognitive map

Definition

A mental model that visualizes how users understand and relate concepts within a system.

When to Use

Use cognitive maps to design information architecture that aligns with user expectations.

How to Perform

1. Identify related concepts and tasks.
2. Map connections based on user mental models.
3. Use to align your design with users’ understanding.

Template Sources

Miro, Lucidchart, and Coggle offer templates for cognitive mapping.

Scenario map

Definition

A map that outlines hypothetical user situations and their paths to achieving goals.

When to Use

Use scenario maps during ideation to visualize how users might interact with your product.

How to Perform

1. Define a user scenario and goal.
2. Map steps and interactions to reach the goal.
3. Use it to refine paths and highlight improvements.

Template Sources

Miro, Mural, and Figma support scenario mapping.

User journey

Definition

A visual representation of the user’s experience across different stages of interaction.

When to Use

Use a user journey map to understand and improve the overall experience.

How to Perform

1. Define key stages from awareness to post-use.
2. Outline user actions, feelings, and pain points.
3. Identify areas to improve or optimize.

Template Sources

UXPressia, Miro, and Adobe XD offer user journey templates.

Card sorting

Definition

A technique where users organize items into categories that make sense to them.

When to Use

Use card sorting to inform information architecture decisions.

How to Perform

1. Create a list of content or features.
2. Have users group items and label categories.
3. Use insights to structure your site or app.

Template Sources

Optimal Workshop and UXPressia provide card sorting templates.

Tree testing

Definition

A usability technique to evaluate how well users can find information within a website’s hierarchy.

When to Use

Use tree testing when designing or validating site navigation.

How to Perform

1. Present users with a simplified menu structure.
2. Give them tasks to locate specific items.
3. Analyze success rates and adjust structure as needed.

Template Sources

Optimal Workshop and Maze offer tools for tree testing.

User flows

Definition

A diagram that shows the steps users take to complete a task in a product.

When to Use

Use user flows during the design phase to visualize and optimize paths.

How to Perform

1. Identify the starting point and end goal.
2. Map out each step and decision point along the way.
3. Use it to streamline user paths and remove obstacles.

Template Sources

Figma, Adobe XD, and Lucidchart have templates for user flows.

Mind map

Definition

A visual tool that organizes ideas and concepts around a central topic.

When to Use

Use mind maps during brainstorming sessions to organize thoughts and ideas.

How to Perform

1. Start with a main idea in the center.
2. Branch out related ideas and subtopics.
3. Use it to explore all related aspects of a concept.

Template Sources

Miro, Lucidchart, and MindMeister offer mind-mapping templates.

Customer journey map

Definition

A visualization of the user’s journey through each stage of interaction with the product.

When to Use

Use journey maps to outline the user’s interactions, emotions, and pain points across the journey.

How to Perform

1. Define the key stages and touchpoints.
2. Outline user goals, actions, and emotions at each stage.
3. Identify pain points to improve.

Template Sources

UXPressia, Miro, and Adobe XD offer journey mapping templates.

Problem statement

Definition

A clear definition of the problem being solved.

When to Use

Use a problem statement to clarify the main challenge your design aims to address.

How to Perform

1. Identify the user, need, and problem.
2. Craft a statement that outlines the core issue.
3. Use the problem statement as a focal point.

Template Sources

Design Thinking and IDEO websites provide templates for crafting problem statements.

Assumption map

Definition

A tool to organize and prioritize assumptions that need validation.

When to Use

Use assumption maps to prioritize assumptions that need validation.

How to Perform

1. List assumptions about users and product success.
2. Map assumptions on a grid of importance and certainty.
3. Test high-impact, uncertain assumptions first.

Template Sources

Miro and UXPin have assumption mapping templates.

Experience map

Definition

A visual that outlines the end-to-end user journey and touchpoints.

When to Use

Use experience maps to visualize the user journey and touchpoints across a product.

How to Perform

1. Identify all touchpoints from start to finish.
2. Map emotions and actions for each stage.
3. Analyze for opportunities to improve experience.

Template Sources

Smaply, UXPressia, and Miro offer experience map templates.

POV statement

Definition

A focused statement articulating the user’s problem.

When to Use

Create POV statements to distill a user’s core problem and need into a focused insight.

How to Perform

1. Define the user, need, and insight.
2. Create a statement: “User needs a way to… because…”
3. Use it to guide ideation.

Template Sources

IDEO and Stanford d.school provide POV statement frameworks.

Empathy map

Definition

A visual tool to understand user emotions, thoughts, and needs.

When to Use

Use empathy maps to capture users’ thoughts, feelings, and behaviors.

How to Perform

1. Draw a map with sections: Think, Feel, Say, Do.
2. Populate each section with insights about users.
3. Use it to design with empathy.

Template Sources

Miro, Mural, and UXPressia have empathy map templates.

Task analysis

Definition

Breaking down tasks to understand user actions and goals.

When to Use

Conduct task analysis when you need to understand specific actions users take to accomplish goals.

How to Perform

1. Identify key tasks users perform.
2. Break tasks down into individual steps.
3. Analyze steps for usability improvements.

Template Sources

Nielsen Norman Group and Lucidchart offer task analysis templates.

Storyboards

Definition

Visual narratives showing a user’s journey through a product.

When to Use

Use storyboards to visually map out the user’s journey through a product.

How to Perform

1. Define key moments in the user journey.
2. Create a series of images representing each step.
3. Use the storyboard to visualize and improve user flow.

Template Sources

Canva and Adobe XD provide storyboard templates.

Affinity map

Definition

Grouping research findings into categories to identify themes.

When to Use

Use affinity mapping to organize data and identify patterns after user research.

How to Perform

1. Gather research notes.
2. Cluster related ideas into groups.
3. Label groups with themes to identify patterns.

Template Sources

Miro and Mural have templates specifically for affinity mapping exercises.

User stories

Definition

Short statements from the user’s perspective describing a need or task.

When to Use

Use user stories to outline user needs from a development perspective, especially in agile projects.

How to Perform

1. Identify a user need or task.
2. Write a user story using “As a [user], I want to [do something] so that [goal].”
3. Use stories to prioritize features.

Template Sources

Jira and Trello include templates for writing user stories.

Narratives

Definition

Storytelling techniques to describe user experiences and emotional journeys.

When to Use

Use narratives to create relatable stories that convey user experiences.

How to Perform

1. Use insights from research to create a story.
2. Describe a user's journey with emotions and actions.
3. Use the narrative to empathize and guide designs.

Template Sources

Miro and Milanote have narrative storytelling templates for UX.

Persona

Definition

A fictional representation of a user archetype based on research to inform design decisions.

When to Use

Create personas to represent user archetypes, guiding design decisions and empathizing with the target audience.

How to Perform

1. Analyze user data to find common patterns.
2. Develop a fictional character that embodies these traits.
3. Use the persona to guide design decisions.

Template Sources

UXPressia, Figma, and Adobe XD offer customizable persona templates.

Stakeholder interviews

Definition

Talking to stakeholders to gather requirements, expectations, and constraints for the project.

When to Use

Conduct stakeholder interviews at the start of a project to understand business goals and priorities.

How to Perform

1. Define objectives and prepare questions.
2. Schedule and conduct interviews with stakeholders.
3. Summarize key insights and prioritize them in the design process.

Template Sources

Miro and Lucidchart provide templates for stakeholder interview frameworks.

Customer feedback

Definition

Collecting input from users about their experiences to understand their satisfaction and pain points.

When to Use

Use customer feedback to gain insight into users' likes, dislikes, and improvement suggestions.

How to Perform

1. Decide which feedback channels to draw from (e.g., reviews, support tickets, in-app prompts).
2. Collect feedback over a defined period.
3. Categorize comments by topic and sentiment.
4. Identify recurring pain points and improvement suggestions.
5. Share findings with the team to prioritize changes.

Template Sources

Customer feedback platforms like Qualtrics and SurveyMonkey have customizable templates.

Contextual inquiry

Definition

Observing users and asking questions in their environment to understand their workflows and challenges.

When to Use

Use contextual inquiry when you need to see how users actually work in their real environment, rather than how they say they work.

How to Perform

1. Schedule a session in the user’s environment.
2. Observe them using the product, asking questions as needed.
3. Document findings and analyze for improvement areas.

Template Sources

Nielsen Norman Group and UserTesting.com offer templates for contextual inquiries.

Analytics reviews

Definition

Evaluating data from tools like Google Analytics to learn about user behavior on digital platforms.

When to Use

Conduct an analytics review when you want to understand user behaviors on your platform.

How to Perform

1. Identify the key metrics to review.
2. Access your analytics platform and collect data.
3. Interpret data insights to inform decisions.

Template Sources

Built-in templates in platforms like Google Analytics and Amplitude.

Context mapping

Definition

Visualizing the user’s context, including factors that influence their experience, to understand their ecosystem.

When to Use

Use context mapping to visualize external factors impacting the user experience.

How to Perform

1. Identify the key elements affecting the user.
2. Create a map that places users in the center with factors around them.
3. Use the map to identify influential factors for product design.

Template Sources

Miro and Mural offer context mapping templates.

Metrics analysis

Definition

Reviewing quantitative data (e.g., KPIs) to understand product performance and user behavior.

When to Use

Use metrics analysis when you need quantitative evidence of product performance or want to track changes in user behavior over time.

How to Perform

1. Identify relevant metrics (e.g., conversion rate).
2. Use analytics tools to gather data.
3. Analyze data trends and correlate findings to user experience.
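As a minimal illustration of step 3, a metric like conversion rate can be computed from raw event counts and compared week over week. The figures below are invented for illustration:

```python
# Hypothetical weekly visit/signup counts, e.g. pulled from an analytics export.
weekly_data = [
    {"week": "W1", "visits": 1200, "signups": 48},
    {"week": "W2", "visits": 1350, "signups": 67},
    {"week": "W3", "visits": 1280, "signups": 77},
]

def conversion_rate(visits: int, signups: int) -> float:
    """Conversion rate: percentage of visits that led to a signup."""
    return round(100 * signups / visits, 1)

# Annotate each week with its conversion rate, then scan for a trend.
for row in weekly_data:
    row["cvr"] = conversion_rate(row["visits"], row["signups"])

print([(r["week"], r["cvr"]) for r in weekly_data])
```

A rising or falling trend in a metric like this is what you would then correlate back to design or product changes in the same period.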

Template Sources

Analytics platforms such as Google Analytics and Amplitude include built-in reporting templates.

User interviews

Definition

One-on-one conversations with users to understand their needs, behaviors, and preferences.

When to Use

Use interviews to gather deep insights into user motivations, preferences, and pain points.

How to Perform

1. Define interview goals and prepare open-ended questions.
2. Recruit participants who represent your target audience.
3. Conduct interviews, asking follow-up questions for clarity.
4. Analyze responses for themes.

Template Sources

Airtable and HubSpot offer user interview templates.

Diary studies

Definition

Users document their experiences over time, revealing insights into their habits and interactions with a product.

When to Use

Diary studies work well for understanding long-term or habitual user behaviors.

How to Perform

1. Decide the duration and frequency for users to record entries.
2. Give users a structured format to log their experiences.
3. Collect entries and analyze them to find patterns.

Template Sources

UX Templates and Dovetail offer templates and tools for diary studies.

Surveys

Definition

Surveys gather quantitative or qualitative data from a large group of users via a structured set of questions.

When to Use

Use surveys when you need quantitative data or quick feedback from a large group.

How to Perform

1. Define the objective and target audience.
2. Write clear, unbiased questions.
3. Choose a survey platform.
4. Distribute the survey to the audience.
5. Analyze responses to identify trends.
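To make step 5 concrete, here is a rough sketch of tallying closed-ended responses; the question and answer data are invented for illustration:

```python
from collections import Counter

# Invented responses to a 1-5 Likert question ("How satisfied are you?").
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)          # how many respondents picked each score
total = len(responses)
average = sum(responses) / total

# Share of respondents answering 4 or 5 ("satisfied").
satisfied_pct = 100 * sum(counts[s] for s in (4, 5)) / total

print(f"average score: {average:.1f}, satisfied: {satisfied_pct:.0f}%")
```

Survey platforms compute summaries like this automatically, but the same aggregation logic applies when analyzing exported raw responses.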

Template Sources

Google Forms, Typeform, and SurveyMonkey offer templates for various survey types.