Your buyer doesn’t care about your cool technology. They care about saving money, making money, or looking good to their boss.
When I started selling AI-powered products to enterprises, I thought the tech would sell itself. Cutting-edge language models, sleek interfaces, and mind-blowing demos? I was sure that was enough.
The problem? I was pitching what I thought was amazing, not what the buyer actually wanted.
Enterprise buyers don’t wake up thinking, “I need the latest AI product today.” They think about three things: saving money, making money, and looking good to their boss.
Here’s what I’d go back and tell my 2015 self:
1. Focus on the buyer, not the user.
Users love features that make their work easier. But buyers care about results: budgets, revenue, and their reputation. Connect the dots between features and outcomes.
Example: A client loved how our AI streamlined customer service, but we failed to tie it to call center savings. We lost the deal.
2. Not all revenue is created equal.
Selling to cost centers (IT, support) is tough. Budgets are tight and focused on savings. Revenue centers (sales, marketing) have more room to spend.
Example: For support, focus on cost savings: “reduce ticket resolution times by 60%.” For sales, focus on growth: “increase pipeline by $1M.”
3. Pilots aren’t wins.
Celebrate pilots, but know they can be traps: they absorb time and don’t convert without clear metrics.
Pro Tip: Structure pilots to succeed. Define measurable goals upfront, set a timeline, and tie success to a full deployment decision.
When you stop selling what you love about your product, and start selling what your buyer loves, everything changes.
Ask yourself: Am I pitching my tech, or solving their problem? Rethink your pitch — and let their priorities lead.
--
If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.
“An AI agent that can be shut down or quietly rewritten isn’t autonomous. It’s just pretending to be.”
In 2021, a major AWS outage disrupted countless businesses and tools. Now imagine relying on an AI assistant during that outage — only to find it offline because a server halfway across the world went down or was suddenly banned in your country. This isn’t a rare inconvenience. It’s a design flaw baked into centralized systems that prioritize efficiency over trust.
If we want AI agents that are truly autonomous and dependable, we need to rethink their foundations. Whether you’re building AI tools or choosing them as a user, four principles define real autonomy: resilience, transparency, adaptability, and accountability.
Centralized systems can’t deliver on these principles. They rely on fragile infrastructure and treat user needs as secondary. But a better path is emerging: decentralized AI systems. These systems combine the best of both worlds — handling sensitive tasks locally while still benefiting from shared insights.
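To make the “handle sensitive tasks locally, fall back gracefully” idea concrete, here’s a minimal sketch. The functions are hypothetical stand-ins, not a real API: `run_local` represents an on-device model and `run_remote` a centralized service that may be down.

```python
def run_local(prompt: str) -> str:
    # Sensitive work stays on-device; this stub stands in for a local model.
    return f"[local] {prompt}"

def run_remote(prompt: str) -> str:
    # Shared, centralized capability -- may be unreachable during an outage.
    raise ConnectionError("remote service unreachable")

def answer(prompt: str, sensitive: bool = True) -> str:
    """Route sensitive tasks locally; degrade gracefully if the remote fails."""
    if sensitive:
        return run_local(prompt)
    try:
        return run_remote(prompt)
    except ConnectionError:
        # Resilience: the agent keeps working instead of going offline.
        return run_local(prompt)

print(answer("summarize my notes"))              # handled locally
print(answer("general query", sensitive=False))  # remote fails, falls back
```

The design choice is the point: availability and privacy are routing decisions made by the agent, not properties inherited from whichever data center happens to be up.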
If you’re building AI tools, ask yourself: Are you designing for resilience, transparency, adaptability, and accountability? These principles aren’t just good design. They’re the future of autonomous AI.
--
Static product specs are obsolete. They’re relics of a slower, less iterative world, disconnected from the realities of modern product building.
Developers figured this out years ago. Instead of writing static documentation, they generate it directly from their code, ensuring accuracy and alignment with the product.
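As a tiny illustration of docs-from-code, Python’s standard `inspect` module can pull a function’s signature and docstring straight from the source; tools like Sphinx autodoc do the same at scale. The function here is hypothetical, used only for demonstration:

```python
import inspect

def resize_image(path: str, width: int, height: int) -> str:
    """Resize the image at `path` to width x height and return the new path."""
    ...

# The documentation is derived from the code itself, so it can't drift:
doc = inspect.getdoc(resize_image)
sig = inspect.signature(resize_image)
print(f"resize_image{sig}")
print(doc)
```

Because the spec lives in the artifact, updating the code updates the documentation; there is no second copy to fall out of sync.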
So why are product managers still stuck writing specs that can’t keep up?
Creator-PMs are changing this. Armed with AI-powered tools, they’re turning specs into dynamic, evolving prototypes that remain in sync with the product and foster true collaboration.
Imagine this: product feedback isn’t a comment buried in the margins of a doc. It becomes a fork of the prototype itself. A suggestion like “improve this flow” triggers an alternative version of the user experience — updated navigation, adjusted copy, or restructured interactions — all ready for immediate testing. Teams can compare it to the original, gather real-time feedback, and eliminate the guesswork of static documentation.
The future of Figma isn’t just better collaboration. It’s a Replit-like environment where design, code, and collaboration merge seamlessly.
Prototypes become living, interactive, testable artifacts that serve as the single source of truth for teams. Instead of debating static specs, teams align around what they can see, test, and experience.
For Creator-PMs, this is transformative. Prototypes aren’t just tools to explore ideas. They’re the product itself, albeit in its earliest form. Feedback becomes immediate action. Iteration happens fluidly. Collaboration flows as fast as we can think.
Prototypes won't just describe the product anymore — they'll lead it. And the teams embracing this shift aren’t just building faster. They’re building smarter.
--
The best product managers (PMs) aren’t defined by how much they create, but by how fast they discard wrong hypotheses.
AI has fundamentally reshaped product management. Tools like Replit and V0 turn concepts into reality in minutes, removing the barriers to experimentation. No waiting on engineers. No holding for designers. Just test, iterate, and refine — all before the first meeting starts.
This is the rise of the Creator-PM. The role of PMs is evolving from planning and delegating to actively building, experimenting, and iterating in real time, using AI tools to validate ideas and discard what doesn’t work.
These tools empower Creator-PMs to treat prototypes not as pitches to sell ideas, but as instruments to refine and sharpen their thinking.
Failure used to burn time, money, and credibility. But AI makes it cheap — every discarded idea, flawed prototype, or wrong assumption sharpens focus and pulls you closer to the right solution.
Prototypes are the new specs. And experiments are the new roadmap.
The Creator-PM isn’t some mythical genius blending technical brilliance with product sense. It’s anyone bold enough to discard fear, embrace failure, and build the future — one experiment at a time.
--
The US has imposed new restrictions on AI exports, claiming they’ll “protect innovation.” Instead, they’re setting the stage for a global arms race. Blocking advanced AI chips might slow rivals like China down temporarily, but history shows it’s more likely to spark competition than prevent it.
Take nuclear technology. After World War II, the US attempted to control nuclear proliferation through export restrictions and treaties. The goal was to maintain dominance and prevent others from developing similar capabilities. Instead, countries like India developed their own nuclear programs. Decades later, the US had to negotiate civilian nuclear cooperation deals with them to rebuild relationships and regain influence.
The same pattern could play out with AI. Cutting off tools like Nvidia GPUs won’t stop innovation — it will accelerate it elsewhere. China is scaling its AI capabilities, India is investing heavily in semiconductor independence, and the EU is implementing the AI Act to assert its regulatory and technological leadership. The result? A fragmented AI ecosystem where nations compete rather than collaborate.
This shift undermines a key principle: technology should connect us, not divide us. AI has the potential to predict pandemics, optimize renewable energy systems, and improve education globally. Treating it as a pawn in a geopolitical game risks turning progress into conflict.
So the question we need to ask is: are these policies truly protecting innovation, or are they steering us toward conflict instead of collaboration?
--