Melissa Stone is a Tech Lead at Maybern. Her team builds the engine that powers every Maybern workflow, making complex allocation logic feel simple and intuitive. Prior to joining Maybern, Melissa spent five years in healthtech at Zocdoc, where she built and scaled new product verticals in the provider and pre-appointment spaces.
One of Maybern’s values is Be an Owner. To me, that means that as engineers, there’s more to our jobs than pushing code or completing our Linear tickets. This doc spells out what ownership means as an engineer at Maybern. Not every example applies to every piece of work; scale them to the size of the ticket or project.
1. Pre-Development
Understanding the Why
- Dig into the business context and user problem:
- What are we actually trying to solve here? What’s the user problem?
- What existing functionality is there in the app around this feature?
- Are there any similar patterns across the app we can leverage or expand on?
- What’s the impact of solving this problem? Increasing addressable market, easier development, fixing a client’s bug?
- Who are my stakeholders? What are their needs?
- Helps us gauge urgency and prioritize accordingly
- What is a must vs what is a nice-to-have (or can be a follow-on)?
Scope Definition
- What assumptions are we making about the feature? What explicitly are the requirements?
- Consider: is it worth building something new that we plan to maintain long term, or will a hack do in the short term?
- Worth mentioning alternatives and discussing with the team / stakeholders
- Can we push back on any of the requirements? (effort vs impact tradeoffs)
Planning
- Break down the work realistically
- Does it warrant an ERD? Have I discussed my approach with the right stakeholder? Subtasks and definitive milestones are good for larger projects
- Identify dependencies and risks early
- Call out blockers, identify what other projects are happening in that domain
- Communicate timeline concerns proactively
- Call out when things are getting de-prioritized, whether because of incoming bugs or anything else
- Build in a little buffer for the unknown unknowns (but not too much buffer!)
2. During Development
Technical Decision Making
- Make decisions as if you'll maintain this code for years
- Optimize for readability / understandability - our code is complex, largely due to the business logic. We don’t need to add complexity for complexity’s sake
- Don’t be afraid to update existing code to make something simpler. Don’t assume existing code is always 100% correct either.
- Ask yourself: would a dev without context understand this code?
- Balance perfect vs. good enough for now
- But document those tradeoffs or future work (perhaps in Linear, or just as comments / TODOs in the code!)
Communication & Collaboration
- Over-communicate progress and blockers
- Linear updates
- Post in #implementation (or the relevant Slack thread) when a new feature becomes available
- Pull in help before things become critical - people are usually willing to help!
- Share context with teammates
- Helps with code-review and team ownership
- Update stakeholders without being asked
- Ex: updates on bug tickets when things are being investigated
- Collaborating with product - feel empowered to make product decisions
Quality & Testing
- Test like a user, not just a developer
- Idiot-proof it - our users may not know which buttons they should or shouldn’t click. Don’t assume they understand the system the way we do
- Consider edge cases and failure messages - are we using language only an engineer would understand? Did we give users enough information to unblock themselves?
- Think about monitoring and observability - how will we know if this thing breaks?
- Plan for rollback scenarios
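As a concrete sketch of the "language only an engineer would understand" point, here is one way to map internal error codes to actionable user-facing messages. All of the error codes and copy below are hypothetical, not from our actual codebase:

```python
# Hypothetical sketch: translating internal error codes into messages a
# user can act on. Every name and message here is illustrative.

def user_facing_message(error_code: str) -> str:
    """Map internal error codes to messages that tell users what to do next."""
    messages = {
        # Bad: "FK constraint violation on allocation_id"
        "ALLOCATION_NOT_FOUND": (
            "We couldn't find that allocation. It may have been deleted; "
            "refresh the page and try again."
        ),
        "PERIOD_LOCKED": (
            "This period is locked and can't be edited. Unlock it from the "
            "period settings, or contact your admin."
        ),
    }
    # Fall back to something honest that still gives a next step.
    return messages.get(
        error_code,
        "Something went wrong on our end. Please try again, and contact "
        "support if the problem persists.",
    )
```

The point isn’t the dictionary; it’s that every branch, including the fallback, answers "what should the user do next?" rather than describing our internals.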
3. Delivery & Beyond
- Own the rollout plan
- Feature flags (and eventual deprecation of said feature flag!)
- Loop test cases (collaborate with PM)
- Monitor actively post-deployment
- Can leverage Datadog custom metrics
- #bug-tracker is a good place to watch inbound reports and check whether anything is related
- Communicate and test with stakeholders - did this actually meet their requirements?
- Document follow-ups or nice-to-haves that we didn't get to
Addendum: Ownership in the Age of AI (March 2026)
The core thesis: AI has dramatically lowered the cost of doing things right. Many of the shortcuts we used to take — skipping docs, deferring small fixes, leaving TODOs to rot — were rational when the effort to do them properly was high relative to their value. That calculus has changed. Ownership now means raising your bar for what's "worth doing" because the effort threshold has dropped.
The Cost of Experimentation Has Collapsed
- Before: Spiking an alternative approach or prototyping a different architecture meant hours or days of work, so we rationally limited how many options we explored. "Good enough" was often the only option we had time to evaluate.
- Now:
- You can spike 2-3 approaches in the time it used to take to do one. This means scope definition and planning should include more experimentation, not less deliberation. "I tried X and Y, here's why I chose Z" is now a realistic thing to say in a PR description.
- The bar for pushing back on requirements goes up — "that would take too long" is less often true. But the bar for strategic pushback ("should we build this at all?") stays the same or goes higher. AI makes building cheaper, not deciding cheaper.
- Prototyping to validate an approach before committing is now almost always worth it. "I wasn't sure if this would work" is less acceptable as a reason for not trying.
Small Fixes and "While You're Here" Work
- Before: Fixing a nearby issue, improving a confusing variable name, or cleaning up a test you noticed was flaky — these were nice-to-haves that often got deprioritized because context-switching cost was real.
- Now:
- The marginal cost of fixing small things you encounter is near-zero. "Don't be afraid to update existing code to make something simpler" should become the default, not the aspiration.
- The boy scout rule ("leave the campsite cleaner than you found it") becomes a genuine expectation, not just a platitude. If AI can fix it in 30 seconds, leaving it broken is a choice.
- This changes how we think about code review too — reviewers can (and should) suggest more improvements, because the cost to the author of addressing feedback is much lower.
Follow-Through: Documentation, Tech Debt, and True Completion
- Before: "Document follow-ups or nice-to-haves" often meant creating a Linear ticket that sat in the backlog forever. Updating docs, writing missing tests, cleaning up feature flags — these were the first things to get cut when time ran short
- Now:
- Documentation is no longer optional — and it's now an investment, not just a chore. If AI can draft a doc update in minutes, there's no excuse for shipping a feature without updating the relevant docs. But more importantly: every piece of documentation you write — why we made a decision, how a system works, what the business context is — becomes context that AI agents can use to make better decisions about Maybern's codebase going forward. Documenting the "why" behind a tradeoff isn't just helping the next human engineer; it's training the next AI-assisted workflow to understand our domain. The ROI on documentation has fundamentally changed: it compounds in ways it never did before.
- Tech debt tickets should get smaller and get done faster. Many tech debt items are exactly the kind of well-scoped, mechanical work AI excels at. "We'll get to it eventually" should become "we'll get to it this sprint."
- Feature flag cleanup, test coverage gaps, TODO resolution — all of these "finishing touches" that used to feel like separate projects are now part of the original scope. Ownership means the feature is actually complete, not just functionally shipped.
- Following up on things proactively — checking Sentry after a deploy, updating a runbook, notifying stakeholders — AI can help draft those communications and monitor those dashboards. The cost of follow-through has dropped.
What Hasn't Changed (and What Gets Harder)
- Understanding the "why" is more important than ever. AI makes it easy to generate code without understanding the business context — but ownership still means knowing why you're building what you're building. The risk of building the wrong thing faster is real.
- Judgment and taste become the differentiator. When the cost of building is low, the value shifts to deciding what to build and how it should work. Product thinking, architectural taste, and knowing when to say no — these are the ownership skills that matter more, not less.
- Communication doesn't get automated away. Over-communicating progress, pulling in help, updating stakeholders — AI can help you draft the message, but the instinct to communicate is still a human responsibility.
- Review and critical thinking become more important when more code is being produced faster. Owning the quality of AI-generated code is still your ownership.
A New Mental Model
The old mental model: "I would do this if I had time." The new mental model: "Where my time goes has changed — am I spending it on the right things?" Ask yourself: am I leveraging AI tools appropriately, so that my brain is spent thinking through the things that matter?
Parallelization is also much easier with these tools. The expectations have changed; being “heads down” on one thing may not be the way an engineer is most impactful. It’s more likely that every engineer should have multiple workstreams or projects they are nudging along over the course of any given week.
Writing code is now (mostly) cheap. But reading, verifying, and understanding code is not — and AI is generating more of it than ever. We are still very much limited by time, and there will always be 100x more things worth doing than we can actually do. That hasn't changed, and we still need to be ruthless about what we decide to spend time on.
What has changed is the leverage calculation. The mechanical parts — drafting code, writing tests, updating docs — are faster. That means more of your time should shift toward the high-judgment work: understanding the problem, verifying the solution, deciding what's worth building at all. Ownership in the age of AI isn't about doing more things because they're cheaper. It's about recognizing that the bottleneck has moved from producing code to thinking critically about it — and spending your time accordingly.
But here's the practical shift: when you're about to create a follow-up ticket, write a TODO, or punt on something small — ask yourself: could I get this done in the time it takes me to document that I'm not doing it? If leveraging AI means the fix takes less effort than the ticket, just do it. The threshold for "not worth doing right now" has moved — not disappeared, but moved.