Experience Design Resources & Insights | We Are Affective

How Do I Build Features Users Will Want in Three Years?

Written by Simon Lee | Feb 9, 2026 8:37:01 PM

What if the features you're designing right now are completely irrelevant by the time they launch? I mean, it's a question worth asking, because I've seen it happen more times than I'd like to admit. A client spends six months crafting something they think users will love, only to discover that needs have shifted, technology has moved on, or—and this is the kicker—users never actually wanted it in the first place. The mobile experience world moves at a pace that still catches me off guard sometimes, even after nearly a decade in this business. User expectations change. Platform capabilities evolve. What seemed like a brilliant idea in January can feel dated by December.

Designing experiences that will still matter in three years isn't about predicting the future—it's about understanding patterns. During my time working on healthcare experiences, I noticed something interesting: the features that stood the test of time weren't the flashy ones we thought would impress users. They were the ones that solved fundamental problems in flexible ways. A booking system we designed could adapt to telemedicine when that suddenly became necessary. An e-commerce search experience we crafted with broad parameters could handle new product categories without requiring a complete rebuild. The difference? We'd designed in flexibility from day one, not because we knew exactly what was coming, but because we accepted we didn't know.

The best feature roadmaps aren't rigid three-year plans; they're frameworks that let you respond to change without starting from scratch every time something shifts.

This guide draws from real projects across fintech, retail, education and health tech—all sectors where I've watched features succeed or fail based on how well teams planned for uncertainty. We'll look at what actually works when you're trying to design something that lasts, and honestly, what doesn't work at all.

Understanding What Future Planning Really Means

Most people think future planning means predicting what users will want in three years' time and designing those features now. But here's the thing—that approach is basically setting money on fire. I've watched clients spend tens of thousands crafting "future-proof" features that never got used because by the time users actually needed them, the market had shifted completely. Future planning isn't about crystal ball gazing; it's about designing systems that can evolve without needing to be rebuilt from scratch every time something changes.

When I'm working with clients on healthcare experiences, for instance, I never try to guess what regulations will look like three years down the line. Instead, we design the data handling layer in a way that makes it relatively straightforward to adjust compliance rules without touching the core functionality. Same goes for payment systems in fintech experiences—we separate the payment logic from the user interface so when new payment methods emerge (and they always do), we're not redesigning the entire checkout flow. This separation of concerns might sound technical, but it's really just about not painting yourself into a corner.

The Real Purpose of Forward Thinking

Future planning means identifying what parts of your experience are likely to change and what parts will stay stable. In my experience across e-commerce, education and entertainment experiences, certain things remain constant... users always want fast load times, they always want their data to be secure, they always want intuitive navigation. Those are your foundations. Everything else? That's where you need flexibility built in from day one.

What Actually Changes Over Time

The mistake I see repeatedly is teams trying to anticipate specific features users might want. But what changes isn't usually the core problem your experience solves—it's how users expect to solve it. Three years ago, nobody expected voice commands in shopping experiences; now it's becoming standard. You can't predict that specific trend, but you can design your experience so adding new interaction methods doesn't require rebuilding your entire product catalogue system.

Here's what does change predictably enough to plan for:

  • User interface expectations (what looks modern shifts every 18-24 months)
  • Integration requirements (APIs you need to connect with multiply constantly)
  • Device capabilities (new sensors, better cameras, faster processors)
  • Privacy and security standards (regulations get stricter, not looser)
  • Platform requirements from Apple and Google (they change the rules regularly)
  • User acquisition channels (what worked last year might not work next year)

Notice none of these are about specific features? That's deliberate. When clients come to me wanting to design "the next big thing," I always bring them back to this fundamental truth—you're not planning for what users will want, you're planning for how quickly you can respond when user needs inevitably shift. And they will shift, probably faster than you think they will.

Why Most Feature Roadmaps Fail

I've reviewed hundreds of feature roadmaps over the years and honestly? Most of them are dead on arrival. Not because the teams aren't smart or the ideas aren't good—they fail because they're built on assumptions that don't match reality. The biggest mistake I see is what I call "feature stuffing", where companies try to plan every single thing they'll design for the next 18 months. It looks impressive in a boardroom but it's completely disconnected from how users actually behave.

Here's what happens in the real world: you launch your fintech experience with a beautiful roadmap showing cryptocurrency integration in quarter three, but by quarter two your support tickets reveal that 60% of users can't figure out how to link their bank accounts. I've watched this exact scenario play out with three different clients. The roadmap said "advanced features" but users were still struggling with basic functionality. That disconnect is what kills most feature planning—we design what we think users will want instead of watching what they're actually doing right now.

The other major failure point? Treating roadmaps like contracts instead of hypotheses. I worked with an e-commerce client who spent six months designing a complex social shopping feature because it was "on the roadmap". When we finally launched it, engagement was terrible because customer behaviour had shifted during those six months; people wanted faster checkout, not more social features. We'd invested heavily in something that was already outdated before it launched.

Your roadmap should be a living document that changes every quarter based on actual user data, not a fixed plan you're too proud to abandon when the evidence says otherwise.

The Planning Paradox

There's a weird tension in feature planning that I haven't fully resolved even after all these years. You need enough structure to guide design and secure resources, but too much structure makes you blind to opportunities. I've found that the sweet spot is planning detailed features for the next three months, rough themes for months four to six, and just directional goals beyond that. Anything more specific than that is probably fiction dressed up as strategy—and expensive fiction at that.

Learning From User Behaviour Patterns

The thing about user behaviour is that it tells you everything—and I mean everything—about what features will matter in the future. I've watched experiences succeed and fail based entirely on whether the team was actually paying attention to what users were doing versus what they said they wanted. There's a massive difference between those two things, trust me.

When we designed a fitness tracking experience for a healthcare client a few years back, users kept telling us in surveys they wanted more detailed calorie counting features. Made sense, right? But when we looked at the actual usage data, we discovered something completely different. People were spending most of their time in the social feed, comparing their progress with friends and leaving encouraging comments. The calorie tracking? Barely touched after the first week. We shifted our roadmap to focus on community features instead, and retention jumped by 47% over the next quarter. That's the power of watching what users actually do rather than listening to what they think they want.

Here's what I look at when analysing behaviour patterns: session length, drop-off points, feature adoption rates, and the sequence of actions users take. That last one is crucial because it shows you the natural flow of how people use your experience. If you're seeing users constantly switching between two features, that's telling you something about how those features should be connected or combined in future updates.
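
To make that concrete, here's a minimal Python sketch of the kind of analysis described above: grouping a raw event log per user, counting where sessions end (the drop-off points), and counting screen-to-screen transitions to surface the natural flow. The event log and screen names are invented purely for illustration.

```python
from collections import Counter

# Hypothetical event log: one (user_id, screen) pair per recorded action,
# in the order each user performed them.
events = [
    ("u1", "home"), ("u1", "search"), ("u1", "product"), ("u1", "checkout"),
    ("u2", "home"), ("u2", "search"), ("u2", "product"),
    ("u3", "home"), ("u3", "search"),
]

def sessions_by_user(events):
    """Group the flat event log into an ordered screen list per user."""
    sessions = {}
    for user, screen in events:
        sessions.setdefault(user, []).append(screen)
    return sessions

def drop_off_points(sessions):
    """Count the last screen each user saw, i.e. where sessions end."""
    return Counter(screens[-1] for screens in sessions.values())

def common_transitions(sessions):
    """Count screen-to-screen transitions to reveal the natural flow."""
    pairs = Counter()
    for screens in sessions.values():
        for a, b in zip(screens, screens[1:]):
            pairs[(a, b)] += 1
    return pairs

sessions = sessions_by_user(events)
print(drop_off_points(sessions))             # where are people stalling?
print(common_transitions(sessions).most_common(2))  # the dominant flows
```

The same transition counts are what tell you two features keep being used back to back and probably belong closer together.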

The mistake most teams make? They focus on aggregate data instead of user segments. A fintech experience we worked on had terrible engagement with its investment tracking feature—only 12% of users touched it. But when we segmented by age group, we found that users over 45 were using it religiously. That insight shaped our entire three-year roadmap; we knew that as our user base aged, investment features would become more central to the product.

Setting Up the Right Tracking

You can't learn from behaviour you aren't measuring. Sounds obvious, but you'd be surprised how many experiences launch with barely any analytics in place. I always set up event tracking for every significant action a user can take—not just the big ones like purchases or sign-ups, but the small ones too. How often do they open a specific screen? How long do they spend there? What do they do immediately after?

The tools have got better over the years. Mixpanel, Amplitude, Firebase Analytics—they all let you track user paths and create cohort analyses without needing a data science degree. But here's the thing: you need to know what questions you're trying to answer before you start collecting data. Otherwise you'll drown in numbers that don't tell you anything useful.
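
As a rough illustration of "know your questions first", here's a sketch of an event-tracking wrapper that only accepts events from an agreed registry, each tied to a question you want answered. The event names are hypothetical, and a real product would forward these to Mixpanel, Amplitude, or Firebase Analytics rather than keep a local list.

```python
import time

# Each registered event exists to answer a specific question; anything
# else is noise you'll drown in later. Names here are illustrative.
REGISTERED_EVENTS = {
    "screen_view",        # which screens do people actually open?
    "search_performed",   # what are people trying to find?
    "checkout_started",   # where does the purchase funnel begin?
    "checkout_completed", # and where does it end?
}

event_log = []

def track(event_name, **properties):
    """Record a named event; reject anything not in the agreed registry."""
    if event_name not in REGISTERED_EVENTS:
        raise ValueError(f"Unregistered event: {event_name}")
    event_log.append({"event": event_name, "ts": time.time(), **properties})

track("screen_view", screen="product_detail")
track("search_performed", query="running shoes")
print(len(event_log))  # 2
```

Forcing every new event through a registry is a cheap way to make someone say, out loud, which question the data will answer.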

Spotting Patterns That Predict Future Needs

Some behaviour patterns are obvious indicators of where your experience needs to go. If you're seeing users repeatedly performing workarounds to achieve something your experience doesn't directly support, that's a feature waiting to be designed. We had an e-commerce client whose users kept screenshotting products and sharing them outside the experience because there wasn't a native wishlist sharing feature. That told us exactly what needed to be on the roadmap.

But the really interesting patterns are the subtle ones. Seasonal usage spikes, for example. If you notice your experience gets heavy use during specific times of year, that's valuable information for planning features that capitalise on those periods. An education experience we crafted saw massive engagement jumps in September and January—no surprise there. But we also noticed a smaller spike in May when students were revising for exams. That led us to design a quick-revision feature specifically for that use case, and it became one of the most-loved parts of the experience.

Another pattern worth watching: feature abandonment over time. Just because something was popular six months ago doesn't mean it'll matter in three years. I've seen plenty of experiences waste resources improving features that users have already moved on from. The key is spotting the decline early enough to pivot your plans.

Building Flexibility Into Your Experience Architecture

When we designed a healthcare booking experience a few years back, the client wanted something simple—just appointment scheduling and reminders. But here's the thing: I knew they'd eventually need video consultations, prescription management, and payment processing. I'd seen it happen too many times before. So instead of designing a rigid system that only did what they asked for, we created an architecture that could expand without needing a complete rebuild. Sure enough, six months later they came back asking for telehealth features, and because we'd planned ahead, it took weeks instead of months to implement.

The technical side of this isn't complicated, but it does require thinking differently about how you structure your systems. I always use a modular approach where different features live in their own isolated sections—like separate rooms in a house rather than one big open space. This means when you need to update your payment system or add a new feature, you're not risking breaking everything else. It's about creating clean boundaries between different parts of your experience so they can evolve independently.

The best architecture decisions are the ones that give you options later, not the ones that lock you into a single path.

One pattern that's saved my clients thousands is keeping the business logic separate from the interface. I worked on a fintech experience where we needed to completely redesign the UI after user testing showed people were confused by the original design. Because we'd separated the "brains" from the "face" of the experience, we could rebuild the entire interface without touching any of the core financial calculations or security features. That separation also meant we could launch different platform versions much faster than expected, since we could reuse most of the logic...

APIs are your best friend here too. By designing your experience to communicate with your backend through well-designed APIs, you can swap out entire systems without users noticing. Need to change payment providers? Switch analytics platforms? Add a new third-party service? All possible without rebuilding your experience from scratch. I've seen too many teams skip this step to save time upfront, then regret it massively when they need to make changes later.
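
The provider-swapping idea can be sketched in a few lines: put the payment provider behind a small interface so the checkout flow never knows which one is live. The gateway names and methods below are placeholders for illustration, not any real payment SDK.

```python
from abc import ABC, abstractmethod

class PaymentProvider(ABC):
    """The only contract the checkout flow ever sees."""
    @abstractmethod
    def charge(self, amount_pence: int, token: str) -> str:
        """Charge the customer; return a provider transaction id."""

class LegacyGateway(PaymentProvider):
    def charge(self, amount_pence, token):
        return f"legacy-{token}-{amount_pence}"

class NewGateway(PaymentProvider):
    def charge(self, amount_pence, token):
        return f"new-{token}-{amount_pence}"

def checkout(provider: PaymentProvider, basket_total_pence: int,
             card_token: str) -> str:
    # The UI layer calls this; swapping providers is a one-line change
    # at the call site, with no redesign of the checkout flow itself.
    return provider.charge(basket_total_pence, card_token)

print(checkout(LegacyGateway(), 1999, "tok123"))
print(checkout(NewGateway(), 1999, "tok123"))  # same flow, new provider
```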

Balancing Current Needs With Future Vision

The hardest part of designing experiences—and I mean this genuinely—is keeping one eye on today whilst planning for tomorrow. I've seen projects fail because teams got so caught up in designing for the future that they forgot to solve the problems users had right now. And I've seen just as many experiences become obsolete because they only focused on immediate needs and painted themselves into a corner they couldn't escape from.

Here's what works in my experience: you need to solve today's problems completely, but you need to solve them in a way that doesn't block tomorrow's possibilities. When I designed a healthcare experience a few years back, the client wanted a simple appointment booking system. Fair enough. But instead of hardcoding everything around their current workflow, we designed it so the data structure could handle different appointment types, multiple providers, and various scheduling rules. Cost them maybe 15% more upfront? Sure. But when they wanted to add telemedicine appointments later, it took us days instead of months to implement.
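
A rough sketch of what "don't hardcode the current workflow" can look like in the data structure: the appointment type becomes plain data, so adding telemedicine later is a new entry rather than a schema rewrite. The field names here are illustrative, not the client's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AppointmentType:
    """Appointment behaviour as data, not as hardcoded branches."""
    name: str
    duration_minutes: int
    requires_room: bool = True
    extra_rules: dict = field(default_factory=dict)

@dataclass
class Appointment:
    patient_id: str
    provider_id: str
    kind: AppointmentType
    start_iso: str

APPOINTMENT_TYPES = {
    "in_person": AppointmentType("in_person", 30),
    # Added months later without touching the booking logic:
    "telemedicine": AppointmentType(
        "telemedicine", 20, requires_room=False,
        extra_rules={"video_link_required": True},
    ),
}

booking = Appointment("p42", "dr7", APPOINTMENT_TYPES["telemedicine"],
                      "2026-03-01T09:00")
print(booking.kind.requires_room)  # False, so no room allocation needed
```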

The trick is knowing where to invest that extra flexibility and where not to. You can't make everything future-proof—that's just analysis paralysis with a fancy name. I usually apply what I call the "two-step rule": if a feature might need to expand in one obvious direction within the next product cycle, design in that flexibility now. If it's three steps away or purely speculative, design the simplest thing that works today. You'll get it wrong sometimes, and honestly that's fine, but this approach has saved my clients thousands in redevelopment costs whilst keeping initial projects lean and focused on actual user needs.

Testing Ideas Before Committing Resources

Look, I've seen companies blow six figures on features that nobody wanted. It's painful to watch, especially when there are ways to test your ideas for a fraction of that cost before you commit your entire development budget.

The cheapest way to test a feature idea? Just ask people. But here's where most teams get it wrong—they ask "would you use this?" and people always say yes because they want to be helpful. Instead, I run what I call behaviour tests. For a fintech experience we worked on, the client wanted to add cryptocurrency trading. Rather than designing it straight away, we added a simple "Coming Soon" button where the feature would live and tracked how many users clicked it. Less than 2% showed interest, which saved the client about £80k in development costs they nearly committed to.
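
Reading the result of a behaviour test like that is simple arithmetic; here's a sketch with made-up numbers, using a 2% bar to mirror the result described above.

```python
def interest_rate(clicks: int, users_shown: int) -> float:
    """Share of users who tapped the 'Coming Soon' placeholder."""
    return clicks / users_shown if users_shown else 0.0

rate = interest_rate(clicks=180, users_shown=10_000)
print(f"{rate:.1%}")  # 1.8%, below a 2% bar, so probably not worth building yet
```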

Prototyping Without Code

Design prototypes are brilliant for testing user flows before writing any actual code. I use tools like Figma or InVision to create clickable mockups that look and feel like the real thing. For an e-commerce client, we prototyped three different checkout flows and had real users try them out. The winner was actually our simplest option, not the fancy multi-step process the stakeholders were pushing for.

Another method that works well is the wizard of oz test—where you manually perform actions behind the scenes that would eventually be automated. We did this for a healthcare experience that needed prescription reminders; instead of designing complex notification logic, we had someone manually send messages to a small test group for two weeks. Response rates told us exactly which reminder frequency worked best.

Setting Your Testing Budget

A good rule of thumb? Spend about 5-10% of your estimated build cost on validation first. If a feature's going to cost £20k to implement properly, invest £1-2k in testing whether it's actually worth designing at all.
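
That rule of thumb as a tiny helper:

```python
def validation_budget(build_cost: float) -> tuple[float, float]:
    """Suggested validation spend range: 5-10% of the estimated build cost."""
    return (0.05 * build_cost, 0.10 * build_cost)

low, high = validation_budget(20_000)
print(f"£{low:,.0f}-£{high:,.0f}")  # £1,000-£2,000 for a £20k feature
```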

Create a "feature graveyard" document where you list all the ideas you tested and rejected, along with the data that informed each decision. It's incredibly useful when stakeholders bring up the same ideas six months later.

Here are the testing methods I use most often, ranked by cost and reliability:

  • Landing page tests with email signup—cheapest option, measures genuine interest through commitment
  • User interviews with specific task scenarios—more expensive but gives qualitative insights you can't get from data alone
  • Clickable prototypes with real users—moderate cost, excellent for testing complex flows and interactions
  • Beta features with limited rollout—higher cost but tests with actual usage in production environment
  • A/B tests of different approaches—most reliable but requires existing user base to be meaningful

The key thing to remember is that testing isn't about proving your idea is good; it's about finding out if it's good. I've had plenty of features I was personally excited about that flopped in testing, and that's fine. Better to find out early when you can pivot cheaply than after you've spent months designing something nobody uses.

Creating a Practical Feature Prioritisation System

The biggest mistake I see with feature prioritisation? Teams trying to rank everything on a single scale. I've worked on healthcare experiences where we'd spend hours debating whether notification improvements were more important than data export functionality, when really they served completely different user needs and couldn't be meaningfully compared. What actually works is a system that acknowledges features have different purposes—some are table stakes, some drive growth, and some future-proof your platform.

I use what I call the three-bucket method, though it's not particularly fancy or original. Every feature request goes into one of three categories: Foundation (the experience doesn't work without these), Growth (these bring in new users or revenue), and Evolution (these position you for future market shifts). A fintech experience I worked on needed proper security authentication—that's Foundation, non-negotiable. Their referral programme? Growth bucket. Support for open banking APIs that weren't widely adopted yet? Evolution. See how different those needs are?

The Scoring Framework That Actually Gets Used

Within each bucket, I score features on just three factors: user impact (how many people benefit), technical complexity (be honest here), and strategic alignment (does this fit where you're heading). Each gets a simple 1-3 rating. Nothing more complex than that because if your system needs a spreadsheet with formulas, people won't use it consistently.

Here's what this looks like in practice for a typical sprint planning session:

Feature               Bucket       User Impact   Complexity   Strategic Fit   Priority
Password reset flow   Foundation   3             1            3               High
Social sharing        Growth       2             1            2               Medium
AI recommendations    Evolution    2             3            3               Low
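
One way to turn those ratings into a ranking programmatically; note the bucket weights and score thresholds below are my own assumptions, chosen so the sketch reproduces the example table, not fixed rules.

```python
# Assumed weights: Foundation work is structurally more urgent, so it
# gets a head start; complexity counts against a feature.
BUCKET_WEIGHT = {"Foundation": 2, "Growth": 1, "Evolution": 0}

def priority(bucket: str, impact: int, complexity: int, fit: int) -> str:
    """Combine bucket plus three 1-3 ratings into High/Medium/Low."""
    score = BUCKET_WEIGHT[bucket] + impact + fit - complexity
    if score >= 6:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

print(priority("Foundation", 3, 1, 3))  # High
print(priority("Growth", 2, 1, 2))      # Medium
print(priority("Evolution", 2, 3, 3))   # Low
```

Keeping the formula this small matters more than getting the weights "right": if the system needs a spreadsheet to run, people stop using it.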

When to Ignore Your Own System

The thing about any prioritisation framework is knowing when to override it. I've pushed low-scoring features up the queue because a major client needed them for a contract renewal, or because competitive pressure demanded it. That's fine—the system exists to inform decisions, not make them for you. What matters is that you're making those exceptions consciously, not just designing whatever the loudest stakeholder asks for.

One e-commerce client had a feature that scored poorly across the board: allowing customers to schedule future purchases. Low user impact, medium complexity, didn't fit their immediate strategy. But their biggest competitor had just launched it, and three enterprise clients specifically asked about it in sales calls. We designed it. Sometimes the market tells you what matters more clearly than any scoring system can.

Adapting Your Roadmap Without Losing Direction

I've seen so many teams fall into the same trap—they either stick rigidly to their original roadmap even when users are screaming for something different, or they pivot so frequently that they end up with a Frankenstein experience that does twenty things poorly. Finding the balance between flexibility and focus is genuinely one of the hardest parts of long-term feature planning, and honestly, it's where most experiences either nail their strategy or completely lose their way.

The key thing I've learned working on experiences that have lasted years (not months) is that you need what I call "anchor features" that define your core value, and then "satellite features" that can shift and change based on what you learn. When we designed a healthcare experience for a major NHS trust, we knew patient appointment booking was an anchor—that couldn't change. But the way patients wanted to communicate with clinicians? That evolved from simple messaging to video calls to asynchronous updates over about eighteen months. We adapted the satellites whilst keeping the anchor fixed.

Your roadmap should be written in pencil for everything except the core problem you're solving—that bit stays in permanent marker.

Here's what actually works: review your roadmap quarterly, not annually. Look at your usage data, talk to your actual users (not just the loud ones on social media), and identify which features are driving retention versus which ones seemed like good ideas but nobody uses. I had a fintech client who was convinced they needed cryptocurrency integration because everyone was talking about it, but their data showed users just wanted faster bank transfers. We deprioritised crypto and focused on payment speed—retention jumped by 23% in the next quarter. Sometimes adapting means saying no to shiny new things and doubling down on what's actually working, even if it's not exciting.

Conclusion

Look, designing features that last isn't about predicting the future—it's about creating experiences that can evolve without falling apart. I've redesigned too many experiences from scratch because they were crafted for yesterday's problems with yesterday's thinking. The truth? Nobody knows exactly what users will want in three years, but you can make sure your experience is ready to adapt when those needs emerge.

What I've learned after years of doing this is that flexibility beats perfection every time. When we designed that healthcare experience I mentioned earlier, we didn't try to predict every feature doctors would need; we crafted a solid foundation that could accommodate new workflows without requiring a complete rebuild. That's served them well through multiple regulatory changes and two major feature expansions. Sure, we had to refactor some things along the way, but the core architecture held up because we planned for change from day one.

The experiences that succeed long-term aren't the ones with the most features right now—they're the ones that stay relevant by listening to users and adapting quickly. Your feature roadmap should be a living document that changes as you learn more about what people actually need, not a rigid plan you follow blindly... Start with user problems, design flexibility into your technical foundation, test ideas before committing serious resources, and don't be afraid to kill features that aren't working. The experiences still thriving after three or five years? They all did those things well. The ones that disappeared did not.

Before any development team writes their first line of code—whether that's a freelancer, in-house team, agency, or AI—they need the experience design, user research, and technical roadmap that transforms user psychology into reality. We craft the emotional experiences, design the user flows, conduct the research, and create the strategic foundation that makes great digital products possible. Let's design your experience foundation.

Frequently Asked Questions

How do I know if a feature will still be relevant in three years?

You can't predict specific features, but you can identify stable user needs versus changing preferences. In my experience, core problems like fast load times and secure data remain constant, whilst interface expectations and integration requirements shift regularly. Focus on designing flexible solutions to fundamental problems rather than trying to guess future trends.

What's the biggest mistake teams make when planning feature roadmaps?

The biggest mistake I see is "feature stuffing"—trying to plan every detail for 18+ months ahead instead of responding to actual user behaviour. I've watched clients spend six months designing complex features that users never wanted, whilst ignoring basic functionality problems shown in support tickets. Plan detailed features for three months max, themes for six months, and just directional goals beyond that.

How much should I spend testing ideas before designing them?

I recommend spending 5-10% of your estimated implementation cost on validation first—so if a feature costs £20k to implement, invest £1-2k testing whether it's worth designing at all. Simple tests like "coming soon" buttons or clickable prototypes can save you tens of thousands by revealing lack of user interest before you commit design resources.

Should I design features users say they want or focus on what they actually do?

Always prioritise what users actually do over what they say they want in surveys. I've seen this repeatedly—users told us they wanted detailed calorie counting in a fitness experience, but usage data showed they spent time in social features instead. When we shifted focus to community features based on actual behaviour, retention jumped 47%.

How do I balance current user needs with future-proofing my experience?

Solve today's problems completely, but solve them in ways that don't block tomorrow's possibilities. I use the "two-step rule"—if a feature might expand in one obvious direction within the next product cycle, design in flexibility now. If it's speculative or three steps away, design the simplest thing that works today.

What's the best way to structure my experience so it can evolve without complete rebuilds?

Use modular architecture where different features live in isolated sections, like separate rooms in a house. Keep business logic separate from the interface, and design everything to communicate through well-designed APIs. This separation means you can update systems, redesign interfaces, or add new features without risking the entire experience.

How often should I review and update my feature roadmap?

Review quarterly, not annually—the mobile world moves too fast for yearly planning cycles. Look at actual usage data, talk to real users, and identify which features drive retention versus which ones seemed good but nobody uses. I've seen experiences waste months improving features users had already abandoned because they weren't reviewing frequently enough.

What should I do if my roadmap conflicts with what stakeholders want to design?

Ground decisions in user data and be willing to override your prioritisation system when market conditions demand it. I've pushed low-scoring features up the queue due to competitive pressure or client contracts, but the key is making those exceptions consciously rather than just designing whatever the loudest stakeholder requests.