Welcome to Product Unfiltered, where we talk with product leaders about real challenges, how they handled them, and give you the processes, tools, and frameworks they used to overcome them.
P.S. Did someone forward this to you? You can get this directly in your inbox by signing up for the newsletter here.
My first guest is a great friend of mine, Alec Harrison.
I first met Alec several years ago when we were working together on a product for a client, both of us from separate agencies. If you've been a part of an agency in the past, you know that typically doesn't end up well. Two external agencies working together, collaborating, and getting along? Impossible!
Take my word for it: it is definitely possible. This was one of those times we combined forces and crushed it. It was the first of several projects we've worked together on over the years.
Today, Alec Harrison is the CEO, Lead Designer & AI Educator of VisualBoston.
If you are a B2B SaaS founder or growth leader looking for a creative partner for visual branding and digital experiences (web, mobile, AI) that delight customers and close deals, I cannot recommend Alec and his team at VisualBoston enough. Whenever I have a new project on the horizon, I pick up the phone and call him.
Get in touch at [email protected].

I spoke with Alec about one challenge he routinely combats in his role as a designer: user testing and validation before building.
As product leaders, we have best practices drilled into our heads. I can hear you all thinking it, "Of course you need to user test! Validation before building is what saves product teams from investing in development only to release something that doesn't resonate with users."
Meanwhile, leadership is pushing you to do more with less.
You'd rather not fight back, and instead you skip it. You tell yourself that you know your users so well that you're sure it'll resonate with them anyway. 🙋
Been here? You're not alone.
"A lot of teams treat user testing as an afterthought," Alec said. "They'd say, 'Okay, we don't have time for that.' Or maybe, 'It sounds too expensive. It's going to slow us down.'"
Hearing this time and again led VisualBoston to position testing as an add-on or upsell rather than a core part of the process. It may be part of best practices, but if there isn't significant demand, there's no need to include it in the core offer. Maybe there wasn't room for validation anymore.
Alec's design career goes back to the days of sharing Sketch files (remember Revision_Final_FINAL_v7_FINAL-for-real-this-time?).
Back in the days of nightmare file management, building prototypes for user testing was a pain, often adding a week to the process.
If you were on a deadline, that meant a serious calculation: either you could afford the extra week for user testing, or you had to bite the bullet and hope for the best.
Then along came Figma
A godsend, yes. But it was only a matter of time before it introduced its own problem: democratized feedback. Granting access to everyone was as easy as hitting that share button, and so was getting feedback. The Product Manager, Engineer, CEO, Marketing, Sales, and Vanilla Ice all had feedback.
Sifting through the feedback from ten different stakeholders adds another wrinkle to stakeholder management in the process. So much for innovation.
The more feedback there is, the more expensive it gets.
"The larger the business, the more red tape there is," Alec said. "So you have to really push that up the flagpole, hope there's approval. And by the time it gets back down, it's already talked to like multiple departments that kind of want their own say in the product."
By the time engineering gets the designs, it may not even resemble the original product specs, much less be validated, so who knows how much it will resonate with users. It leads to "hoping the features stick on the wall. And right now, there might be a lot of features that are not used ever."
Alec watched this pattern repeat for 15 years. And then he watched it cost real money.
"I've consulted with companies that have spent more than one or two million dollars on building these new R&D products, and then no one wants to use them."
It wasn't that the UX was bad. The products were usable.
The problem was more fundamental: painkiller versus vitamin. The kind of thing that initial research could have caught. The kind of thing a product design team could have validated before anyone wrote a line of code.
At the smaller scale, in bootstrapped startups where every penny counts, the waste is just as real. Features and revisions piling up, each one a bet that never got validated.
Alec has felt it himself. "I'm not perfect. I'm a human. There are times when I knew I should have tested something internally, and we didn't. And I was like, well, that's stupid. I didn't practice what I preach."
Validation is a muscle. You have to keep flexing it. And for a long time, the friction made it easier to let it atrophy.
Then the tools changed.
Alec started using ChatGPT to break ideas into requirements docs, making sure they didn't bloat with unnecessary scope.
Then, building functional prototypes in no-code tools like Lovable. Not Figma mockups. Real, clickable prototypes with actual payment processors attached.
"Now it's like put Stripe on there, don't say anything, just see if they're going to take their card out or not, and then pull out why. Because it actually functions now," he said.
Even when your target users aren't easily accessible, Alec has built AI emulators of the target audience for pre-validation.
"Have it act as an 85-year-old with glaucoma going through the flow. It'll pick up on contrast issues, accessibility problems, and DOM issues that a static mockup would never catch."
The speed caught him off guard. "I used to put on Spotify, move around some components, and have a little fun in Figma. But then I discovered I can do that in Claude Code with prompting and get 2-3x done."
For one client, a finance questionnaire with complex business logic, the team built the prototype in Lovable, ran user testing within two weeks, and had it in the engineering pipeline in under a month. The kind of project that used to take months now surfaces backend conflicts early, before anyone's moved on to the next thing.
Beyond reducing friction in the product validation conversation and speeding up development for their clients, the shift had a profound impact on VisualBoston's positioning.
"It got us from being design monkeys—getting a ticket, coming up with a nice look and feel, handing it off—to becoming consultants. I'll let you know if I think there's a better way to do this. And I will back that up with validation, not just my expertise."
VisualBoston stopped offering user testing as an add-on. Now there's at least one validation touchpoint when you work with them. Non-negotiable.
If you're facing the same resistance, Alec's advice:
"Start embarrassingly small. Carve out an hour. Step outside. Think about one flow in your product you're not sure anyone's using. You know it's important, but you'd be embarrassed to admit you have no idea how it's performing.
Give yourself two weeks. One designer, a couple of hours of solution building, and some moderated or unmoderated testing. That's your pilot. Once you see the magic of real user feedback, making the case for budget gets a lot easier."
Too busy for even that? Guerrilla test. Schedule a 15-20-minute Zoom call with non-direct stakeholders, read a script, give them a URL, and record it. Alec used to buy strangers coffee at Starbucks for app feedback. Wrong target audience, but he still caught usability issues.
The validation muscle atrophies fast. But it builds back faster than you'd think.
The Secret Sauce
Here is the exact prompt Alec uses to create prototypes for user testing validation. He recommends starting with a specific UI screen, flow, or component; even a screenshot works. Both non-technical and technical people can use it, and it helps outline and structure the intent of your user testing.
Act as
A senior UX/UI designer and product strategist with expertise in usability heuristics, conversion optimization, and hypothesis-driven design audits.
Objective
Audit the provided image (UI screen, flow, or component) to evaluate its effectiveness against business objectives, user needs, and design best practices, and identify clear opportunities for improvement.
Context
You are reviewing a static image or snapshot of a digital product experience.
Assume this screen is part of a larger user journey, even if the full flow is not shown.
The audit should balance business goals, user expectations, and usability principles.
Format
First Impressions
What does this screen communicate within the first 5 seconds?
What action feels most encouraged or expected?
Business Objectives Alignment
What business objective does this screen appear to support? (e.g., conversion, engagement, retention, trust)
Is that objective clearly reinforced through hierarchy, copy, and visual emphasis?
What might be missing or competing with that objective?
User Intent & Clarity
Who is the likely user at this point in the journey?
What problem is the user trying to solve here?
Are the next steps obvious and low-friction?
Hypothesis Check ("We believe…")
State at least one implicit design hypothesis in this format:
"We believe that [design choice] will help [user] achieve [outcome], which supports [business goal]."
Assess whether the current design supports or undermines this belief.
Flow & Placement Evaluation
Does the placement of key elements (CTAs, navigation, inputs, messaging) match the expected user flow?
Is anything prematurely demanding attention or hidden too late?
Identify one element whose position or prominence should be tested.
Usability & Accessibility Flags
Identify potential usability issues (cognitive load, unclear affordances, visual noise).
Note any obvious accessibility concerns (contrast, font size, tap targets, clarity).
Testable Recommendations
Propose 2–3 specific, testable changes (e.g., CTA placement, copy clarity, visual hierarchy).
Frame each as an experiment or A/B test, not a subjective opinion.
Constraints
Be constructive and objective, not purely aesthetic.
Avoid generic feedback. Tie every observation to user behavior or business impact.
Do not assume intent without stating assumptions clearly.
Focus on clarity, flow, and outcomes over visual style preferences.
Extras
Optionally include a quick confidence score (1–5) for how well this screen supports its primary objective.
If context is missing, list the top 3 questions you would ask the product team before iterating.
Note: You may want to tweak this depending on the LLM and model you plan to use it in; e.g., ChatGPT and Claude can respond slightly differently.
One final word
Sometimes my guests share one thing they've learned in their career that's contributed to their success. Here's Alec's:
"Getting to know people on a more personal level, I think that's helped me build trust, and it's helped me share my expertise faster than I probably would. Then I'm more inclined to help them with what I've learned, and I'm excited to help them. And I can see their excitement grow with me. And then our trust just kind of builds together, and the product just gets that much better, faster."
Until next month,
- Matt

