Product Rebels
AI as a Force Multiplier for Product Discipline
In this special episode of Product Rebels, Vidya Dinamani and Heather Samarin revisit highlights from their recent webinar, AI as a Force Multiplier for Product Discipline, featuring product and AI leader Elena Luneva. After the live session sparked a wave of thoughtful audience questions, they sit down to tackle some of the most pressing ones—from how AI is reshaping product work to why strong product fundamentals matter more than ever.
Hey Product Rebels, I'm Vidya Dinamani, and you're listening to the Product Rebels podcast. Today is a special episode. We recently hosted a webinar called AI as a Force Multiplier for Product Discipline with Elena Luneva, a product and AI practitioner with over 20 years of leadership experience at companies like GoFundMe, Nextdoor, OpenTable, and BlackRock. We dug into how AI can amplify product discipline rather than replace it. And the conversation clearly struck a nerve, because we got so many great questions from the audience that we couldn't possibly answer them all live. So we sat down to tackle them here. If you're a product leader or a PM working with AI right now, there's something in here for you. Let's get into it.

Hi everyone, this is your little bonus section of all the questions that we didn't get to in the webinar. We promised you that we would give you a quick take on the answers, so here goes. And thank you for asking such great questions. We're going to kick it off with a question from Lisa. Heather, I'm going to put you on the hot seat to answer this one. We all know that AI can be a great tool for market and competitive research. When scrubbing for deep-level feature comparisons across multiple competitors, how do you compensate for nomenclature differences and nuances in your prompts to get the most out of the competitive intelligence response? Interesting.
SPEAKER_01: Yeah, it's a great question. We just did a competitive analysis here a few weeks ago, and I learned a couple of things from that. One, you need to seed it with your business and product context first. What's the business strategy? What's the product strategy? What are the outcomes you are solving for? Right. And maybe even a little product overview. Then I started broad: hey, define my top 10 competitors in order of threat, both now and three years from now, and provide me clear rationale. So you start with this view of what your overall competitive set is. From there, you look at: does this make sense? Who's missing? Are there substitutes we're missing here? Then develop your criteria for comparison. Now that you've seen the competitive set and agree with it, what are the criteria that matter most to you? Is it really just the features, or is it service model? Is it pricing? What is it? Define those very clearly and what they mean. I call it defining the columns of your comparison table. And then you get that data back and you iterate. It's not going to be great on the first output, but I like to start broad and then narrow in once I understand the landscape. And that really helped us a lot.
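For listeners who want to operationalize this, here is one way the sequence described here (seed business context, ask for a ranked competitive set, then define the comparison columns, with a "don't assume, ask me" rule for ambiguous nomenclature) could be turned into a reusable prompt template. This is a sketch only; the function name and example fields are made up for illustration, not drawn from any specific tool:

```python
def build_competitive_prompt(business_context: str, criteria: list[str]) -> str:
    """Assemble a competitive-analysis prompt following the pattern:
    seed context -> broad ranked scan -> comparison against fixed criteria."""
    # The criteria become the 'columns' of the comparison table.
    columns = "\n".join(f"- {c}" for c in criteria)
    return (
        "CONTEXT (business strategy, product strategy, target outcomes):\n"
        f"{business_context}\n\n"
        "STEP 1: List my top 10 competitors in order of threat, both today "
        "and three years from now, with clear rationale for each.\n\n"
        "STEP 2: Compare the agreed competitive set against exactly these "
        "criteria, as the columns of a comparison table:\n"
        f"{columns}\n\n"
        "RULES: If competitor nomenclature is ambiguous, or two features "
        "look identical under different names, do not assume. Ask me "
        "to confirm before scoring."
    )

# Hypothetical example inputs:
prompt = build_competitive_prompt(
    "B2B scheduling SaaS; outcome: reduce no-show rates for SMB clinics.",
    ["feature depth", "service model", "pricing"],
)
```

From here you would paste the assembled prompt into whichever assistant you use, then iterate on the criteria once you have seen and sanity-checked the first competitive set.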
SPEAKER_00: It did. And the one other thing that I would add to this is I always ask: if you're confused about nomenclature, or if something looks like it might be the same thing, don't assume. Ask me. Make sure that you check in with me. And then what you end up getting, and I use Claude a lot so it's really nice for this, is the model saying: okay, I'm pretty certain about these things, but have me confirm them. In that way, it gets rid of some of that compensating for differences. Great question.

Okay, let's move to the next one, from Matali. They ask: now there are AI tools to build Figma wireframes and PRDs with just one click. Any advice for beginners getting into product on how to use these tools effectively, and which ones to use for maximum impact? I'm sorry, I didn't mean to laugh, but, Heather, why am I laughing?
SPEAKER_01: Um, you don't believe in one click, right? We've seen some of these tools, we've seen the output of these tools, and I'm not gonna pick on Atlassian, but I am gonna pick on them right now. We've been working with Jiravo, I think it's called, and it's sort of the instant PRD. They ask you some questions, which is interesting, but I think at the end of the day they kind of miss the point. It's "oh, what do you want to build?" as opposed to: what's the problem we're trying to solve? What are the needs of the customer? And who are we solving that for? So my point is, all of those tools are fantastic, but you have to seed them with your customer foundations. You have to seed them with the outcomes you're solving for, so that the work you're doing with AI at an accelerated pace stays connected back to the outcomes and the customers you're solving for.
SPEAKER_00: Yeah. Anything that tells you it can automatically do something, just have major mistrust around that. Okay. Let's move to Jason's question: more and more, with AI coding tools, if the product manager defines specs and expected results like usability tests, AI can do all the development in near real time. Curious how you see that impacting the product role. Obviously good product decisions are critical because of the lack of friction. So, Jason, there's a couple of things going on here. One of them is you can have good product decisions, and we can spend a lot of time on alignment, but I've never had a lack of friction. So if you do, I want to hear how you got there. As for how this impacts the product role, gosh, if we get on a soapbox, right, Heather, we could talk about this all day long. Yes, we define the specs, but a huge part of the role is driving the decision: making sure you are checking in, knowing what those checks and balances are, making sure you are staying true to your north star and your outcomes, and constantly connecting with the customer and with what's truly important. So that jump from defining a spec straight to output is probably not how we would define how development is done these days. Heather, would you add anything there?
SPEAKER_01: No, I totally agree. I feel like we are less about spec writing now and much more about judging and facilitating getting to the right decision. Decision making is becoming way more important as speed becomes the norm. Our jobs in terms of prototyping and building products are accelerating faster than we have ever imagined, and it's only going to get faster. The job of a product manager becomes the judge, the jury, and the facilitator of the right decisions. And that is all about staying close to the customer, close to the business strategy, and close to the market trends, to make sure that what we are accelerating in terms of building is the right thing.
SPEAKER_00: Okay, moving on to a question from Jessica. As a product leader, how do I monitor my team's use of AI to ensure they're using best practices? I find it challenging to coach them in working with AI because the chats are private by default and the volume of text in a typical chat-based interaction is really high. Gosh, I wish we had done this live, because I'm so curious as to why you would want to monitor them.
SPEAKER_01: Because I think it's not about monitoring the usage.
SPEAKER_00: It's not about monitoring. Yeah, exactly. I guess there's probably a deeper question here, but we're going to take it at face value and try to answer it. In terms of using best practices: what have you documented? What have you stated are the rules by which to play? What's important to your business and to your perspective as a product leader? And then maybe, Heather, you take the coaching piece: how do you actually coach a team on this?
SPEAKER_01: Up until recently, AI has been experimental for most people on these teams, right? Most product managers are experimenting with different prototyping tools, vibe-coding tools, different sorts of agentic AI to help them do their jobs. So it's been experimentation up until now, and I think we need to move from experimentation to a consistent approach with sanctioned tools. And I love experimentation, don't get me wrong, that's not where I'm going here. But it's about which tools matter most to your organization, and how we leverage them to their fullest extent while still getting the outcomes.

What we should be monitoring is the outcomes. Are we still building the stuff that solves customer problems? Are we still developing stuff that drives value at the end of the day? Our metrics don't change. It's about establishing the infrastructure and the frameworks for using AI in a way that still gets us to those outcomes, fast. A couple of things we've seen that are super important: making sure product managers have the infrastructure to access the latest customer research, transcripts, and documents like the business strategy and the outcomes for the businesses they're in, all as AI-ready artifacts they can feed into any tool they're using. That way, the work they're doing with AI stays connected to the context that matters: the customer foundations and what I'm calling the business foundations, the outcomes and the strategy. Facilitating that connection between tool usage, the outcomes we're looking for, and the customer-backed decisions we're making is where I think product leaders need to sit in establishing good AI practice.
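As a concrete sketch of what "AI-ready artifacts" could look like in practice, here is a minimal context assembler that packages strategy and research documents into one block a PM can paste ahead of any prompt, in any tool. The artifact names and the character budget are assumptions for illustration, not a prescribed format:

```python
def assemble_context(artifacts: dict[str, str], max_chars: int = 8000) -> str:
    """Join AI-ready artifacts (business strategy, outcomes, customer
    research) into one context block to prepend to any AI tool session."""
    sections = []
    for name in sorted(artifacts):  # stable order so outputs are reproducible
        body = artifacts[name][:max_chars]  # keep each artifact within budget
        sections.append(f"## {name}\n{body}")
    return "\n\n".join(sections)

# Hypothetical artifacts a product team might maintain:
context = assemble_context({
    "business-strategy": "Grow the SMB segment 20% year over year.",
    "customer-research": "Clinics cite no-shows as their top pain point.",
    "target-outcomes": "Reduce no-show rate from 18% to 10%.",
})
```

The point is less the code than the practice: whatever the team's sanctioned tools are, the same curated foundations get fed in every time, so outputs can be judged against shared outcomes rather than whatever context each PM happened to type.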
SPEAKER_00: Cool. Two more questions. A quick one from Nick: what are the most corporate-friendly alternatives to Copilot? Easiest to sell?
SPEAKER_01: Okay, this is the Microsoft world. Enterprise, I should say, really.
SPEAKER_00: I gotta tell you, we've seen our clients and companies use alternatives and introduce them. From our perspective, there's no clear close second. You gotta build a business case, you gotta talk about why you can't use Copilot, you gotta talk about what else you want to do. So much like a lot of the answers in this section: what are you trying to solve for? Why is Copilot lacking? And we can give you a dozen reasons there. I hope no one from Microsoft is listening to this. But then make your case. At the very least, you can talk about having balance, having something else to use, having a voice in there, not being too reliant on Copilot. That could be the argument you start with. It's probably quite a compelling one, and you probably have a lot of evidence to support it.
SPEAKER_01: I think the evidence thing is a good one too, right? Demonstrate the difference between the work you're doing in Copilot and, say, Claude: just show the output you're getting, how you're using it, and the difference between the two. I mean, it is clear as day in my mind between Copilot and others. So you can probably bring that to your product leader and your CTO and go: look at the difference in the outcome I'm getting. This isn't enabling me to be faster, it's actually making me slower.
SPEAKER_00: Okay, last question, from Miguel. There is a transition between vibe coding and prototyping and actual integration into the company's code. How do we ensure a smooth transition, and what are the main mistakes you've seen so far with this transition? Wow.
SPEAKER_01: That's a deep question. I'm gonna say this a little tongue-in-cheek: it depends, right? Vibe coding is getting better and better. So if you're starting from scratch in a startup and you want to develop an application all the way through in some vibe-coding tool that allows for production environments and the like, you've got a much greater chance of producing some good stuff now than you would have six months ago. So let's start there.

If you're in an enterprise, it's very different. I think one of the co-founders of Anthropic said: look, vibe coding is your first draft of code, not the full production-level code. Again, I think it's getting better, and it depends on the organization you're in and where you're starting from. For a major enterprise with legacy systems and coding standards and the like, you've got a bigger leap. The way we've seen successes is treating prototyping and vibe coding as a really great set of tools for exploration and testing, then getting to a point where you have some standards or a stage gate that hands off to an engineering review. They go through it from an infrastructure review, they go through it from a security review. They might do some refactoring. And then from there, it goes to, I can't remember what they called it, I want to say it was production, right? The production implementation. Then they do the last check on code standards: are we there? So there are some stage gates that enable us to go from vibe-coding exploration and iteration to a good production-ready set of code. I just don't think it's one fell swoop where you develop it in a vibe-coding tool, go straight to production, and voila.
Again, for the larger enterprise legacy systems, I think you've got some layers, but still possible, and certainly an accelerated approach to development. But I think there's just some checks and balances that have to happen along the way.
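As a rough illustration, the stage gates described above (engineering standards, infrastructure review, security review, refactoring, production code standards) could be made explicit so a vibe-coded prototype can't jump straight to production. The gate names and the class below are hypothetical, paraphrased from this conversation rather than any official process:

```python
from dataclasses import dataclass, field

# Paraphrased from the discussion; adapt the gate list to your organization.
STAGE_GATES = [
    "engineering standards review",
    "infrastructure review",
    "security review",
    "refactoring pass",
    "production code-standards check",
]

@dataclass
class PrototypePromotion:
    """Track a vibe-coded prototype's path from exploration to production."""
    name: str
    passed: set = field(default_factory=set)

    def approve(self, gate: str) -> None:
        """Record sign-off on one gate; reject gates not in the process."""
        if gate not in STAGE_GATES:
            raise ValueError(f"unknown gate: {gate}")
        self.passed.add(gate)

    def production_ready(self) -> bool:
        # No one-fell-swoop promotion: every gate must be signed off.
        return self.passed == set(STAGE_GATES)
```

The value of encoding it, even this crudely, is that the checks and balances become visible: anyone can see which reviews a prototype has cleared and which are still outstanding before it touches the company codebase.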
SPEAKER_00: Cool. Well, thank you again. We've never done this before: had so many questions that we've had to do a little addendum. So I really appreciate it. Love the engagement. Please look out for more about AI. This is something we're working on 24/7 with our clients, and we're doing some really fun things. So please reach out, take that 15-minute call. You'll see all the information in the email. Bye everyone. Bye-bye.

Thanks so much for listening. We hope you enjoyed it. If you want to go deeper on this topic, we're hosting a series of AI roundtables: exclusive, small-format gatherings for CPOs and VPs of Product to have candid conversations about how AI is impacting the product function, what pioneering leaders are doing, and how to lead through this transformation. They're invite-only and hosted by Heather and me, along with a guest host working at the intersection of AI and product. If you're interested, come apply for a spot. You'll find all the details in the show notes. Thanks so much for being part of the Product Rebels community. See you next time.