The Intuition Interface


What does a product engineer even do?

I’ve recently been asked what product engineering means to me, and while I knew the answer, I had considerable trouble articulating it. The term gets thrown around a lot but is rarely examined, and when it is examined, it’s usually reduced to a handful of platitudes like “cares about the result, not the tools,” as if software engineers didn’t care about the result.[1]

[1] This framing also has a second problem: it implies that tool choice doesn’t matter. It does. In software, most results can be achieved with most tools, but we reach for the familiar in order to make our estimates reliable. When this phrase appears in job listings, it usually means “we want an orchestra, but can at most pay for a soloist.”

When we talk about the different technical roles on a software project, we typically place them on a spectrum from high-level to low-level, depending on how close to the user or how close to the machine someone works. At the high end we might have front-end developers, web designers and UX/UI designers. A bit further down we find back-end developers, database administrators, infrastructure and site reliability engineers, embedded software developers, and so on.

Sometimes these roles are lumped together under a new label, for instance, one who combines the skill set of a back-end and front-end engineer and perhaps some database administration might be called a “full-stack engineer,” or one who deals with infrastructure, databases and back-end comes to be defined as a “DevOps engineer.”

Likewise, when we consider non-technical roles, we tend to place them on a spectrum from those closest to the product to those closest to the stakeholders: business analysts, project managers and product owners are some of the labels employed here. Here too, roles are frequently merged: one who takes on the responsibilities and skills of both a project manager and a product owner might come to be called a “product manager.”

The more complicated software becomes, the more roles are created and the more responsibilities are split out. That said, we very rarely see the two worlds, technical and non-technical, merge, and they are rarely considered as a whole. There is a good reason for it: we tend to think of the non-technical side as the people who set the goals, and of the technical side as the people who execute them.[2]

[2] It’s quite a natural choice when you consider it from the outside, but it seems that many of us rarely consider what the software development field looks like from the outside. I’ve been in this field for a while now, and I only realised this while listening to a podcast about investing.

But in my view, that separation, like many, does not need to be there. We can lay every role, from the users themselves down to the most heads-down technical engineers, on a continuous spectrum from man to machine. We draw a line across it, assign different people to each side, and call one side the “development team” and the other the “domain experts.” But that line is an organisational choice, not an essential property of the process.

The filtering problem

One of the more serious downsides of this divide is that goals are input upstream and evaluated by the upstream team first, before they are handed off downstream for implementation.

The way this typically works is that first, the non-technical stakeholders, who have good knowledge of and intuition for the needs of the users, decide which ones should be prioritised. The goals with sub-par product value are discarded. Then the technical team is asked to estimate the implementation of the remaining goals, and finally the team as a whole agree on what to do and in what order.

When goals are evaluated on product merit alone, good options may be discarded before the technical cost is ever considered. Presented with four options, a domain expert picks A, discarding B, C, and D. But what if goal B were 90% as good as A and took 10% of the effort? That option is never considered, because technical effort is evaluated only after the search space has been narrowed.

Go get coffee

Consider the following (real-world) scenario. A user complains that a particular long-running process she has to go through every month is too slow. The product owner prods the user for a bit more information. Here’s what the user has to say:

“Well, I need to generate the accounting report on the last day of every month. It takes about two hours to complete before I can download the finished Excel file and I can’t do anything else while it’s running. That’s way too long. I just sit there, refreshing the page, because sometimes it errors out halfway through due to invalid data. I wish it were faster so I could just get it done and move on with my day.”

What happens to this feedback upstream?

Well, the product owner does his job: he identifies the user’s pain and translates it into a goal. The user said the process is too slow, so the goal is obvious: make it faster. The goal goes into a Jira ticket, and the technical team is invited to brainstorm ways to make the report generation faster. They draft a few possible approaches, estimate them, and everyone agrees on the best one. They start working. A month later, the report completes in twelve minutes. Success.

Except—is it really?

Making the report faster is not a bad idea. It’s better for things to be fast and not slow. But perceived speed has sharply diminishing returns: anything beyond a few seconds means that the user becomes distracted and wants to go do something else.[3] If something takes more than ten seconds, the user treats it as a cue to switch to a different task. Two hours and twelve minutes are very different numbers, but from the user’s perspective they are the same experience.

[3] This relationship is well documented, with Nielsen Norman Group research putting the upper threshold for how long an individual interaction should take at ten seconds.

The user didn’t really complain about the report generating in two hours. She complained about being held hostage for two hours. The pain is the uncertainty, rather than the duration.

If a product-minded engineer had been listening carefully to the user’s problem, the entire situation might have turned out differently. Perhaps the engineer knows that “generating the report” means processing hundreds of thousands of rows one by one and writing them out to a spreadsheet, bailing out on the first error. It would be a trivial refactor to sanity-check the data first, and only then do the actual file conversion.

The resulting run might even take slightly longer than the original two hours, but a pre-flight check informing the user that the data is correct and she can go get coffee, reasonably assured that the process will succeed, might be the better solution. Indeed, two hours of justified slacking off might even leave the user more delighted than speeding up the generation to twelve minutes.
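To make the shape of that refactor concrete, here is a minimal sketch in Python. The row structure, field names and CSV export are purely illustrative stand-ins for the real pipeline; the point is the split: a cheap validation pass that reports every problem up front, followed by the slow export that is now very unlikely to abort halfway through.

    import csv
    from dataclasses import dataclass


    @dataclass
    class Row:
        account: str
        amount: str  # raw input, possibly malformed


    def validate(rows: list[Row]) -> list[str]:
        """Cheap pre-flight pass: collect every problem instead of failing on the first."""
        errors = []
        for i, row in enumerate(rows, start=1):
            if not row.account:
                errors.append(f"row {i}: missing account")
            try:
                float(row.amount)
            except ValueError:
                errors.append(f"row {i}: invalid amount {row.amount!r}")
        return errors


    def generate_report(rows: list[Row], path: str) -> None:
        errors = validate(rows)
        if errors:
            # Fail within seconds, before the hours-long export even starts.
            raise ValueError("Fix these rows first:\n" + "\n".join(errors))
        # The slow part: the user can walk away knowing it will not die on bad data.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["account", "amount"])
            for row in rows:
                writer.writerow([row.account, float(row.amount)])

The user’s experience changes from “watch it for two hours in case it crashes” to “wait a few seconds for the all-clear, then go get coffee.”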

But that option was never on the table, because the goal was framed as a speed problem before anyone with technical knowledge had a chance to say anything.

This is not the same as saying “put engineers in meetings.” The presence of a technical person is not enough. A back-end engineer sitting in on a user interview is still listening like a back-end engineer. Hearing “the report is slow,” he starts thinking about query optimisation, caching layers, file formats and rewriting it in Rust. It’s the same conclusion the product owner reached, just from a different angle. The insight required someone who could hear the product problem and see the technical shortcut at the same time.

Dual vision

An experienced, purely technical software engineer has good intuition for technical solutions. He builds things that are technically beautiful and elegantly implemented. He knows which requests that sound simple are architectural nightmares, and which requests that sound complex are trivial given existing infrastructure. This sense is built over years of being surprised—features that seemed easy, but weren’t, and shortcuts that seemed risky, but worked. It’s an exceptionally valuable and rare skill.

On the other hand, an experienced product owner or manager has good product judgement in his preferred domain. He knows how to talk to users and understand what they actually need. This, too, is built from experience: cycles of observation, hypothesis, design and intervention. Intuition for what is valuable to users is difficult to transfer across scales: knowing what thousands of e-commerce shoppers need teaches you little about what tens of surgeons need. Good product judgement is also a valuable and rare skill.[4]

[4] Some claim that product judgement is not transferable between domains. I am inclined to disagree: I think scale is the real divide. Building for thousands of consumers is fundamentally different from building for tens of professionals, and domain doesn’t have much to do with it. Two pieces of B2B software serving different professions likely have more in common than one B2B and one B2C product in the same domain.

The value proposition of a product engineer is that he ventures into both of these areas at once, becoming fluent enough in each to serve as an interface between the two worlds, and as a result is able to gain novel insights and turn them into business value.

This is not just additive. You might have one technical person and one product person in a room and still not get to these kinds of insights. There is a class of solutions that only becomes visible when both kinds of pattern recognition fire in the same head. The dual intuition is qualitatively different from two specialists collaborating.

This dual understanding, too, comes from intuition, not from magic, innate talent, meetings, or metrics. Intuition is compressed experience, and that’s why the role requires seniority. There are no junior product engineers. You need to have lived through making decisions and seeing their consequences down the line. You can’t fake having been there.

Intuition is a dirty word, though. Intuition doesn’t get you funding; data does. Data is essential for validation, but it answers questions; it doesn’t ask them. No dashboard would surface “add a sanity check” as the solution to the report problem. The hypothesis came from someone who listened to the user, was able to listen past the suggested solution, identified the real problem, and knew the shape of the technical implementation well enough to see a fix.

Counsel, not command

But the product engineer does not decide the ultimate goal and is not held responsible for the choice. That belongs to the stakeholder, the one who decides what the team pursues and accepts the consequences of that decision.[5]

[5] “…Let the Abbot call together the whole community and state the matter to be acted upon. Then, having heard the brethren’s advice, let him turn the matter over in his own mind and do what he shall judge to be most expedient. (…) At the same time, the Abbot himself should do all things in the fear of God…, knowing that beyond a doubt he will have to render an account of all his decisions…” Rule of St. Benedict, Chapter III.

What the product engineer owns is the integrity of the options he puts on the table. If he says that a pre-flight check will solve the user’s problem, he is accountable for that assessment. His responsibility is to ensure that whoever makes the final call sees the real trade-offs on both the product and the engineering sides of the equation. A product engineer can be honest about trade-offs because he doesn’t carry the weight of that final call.

I suppose that might be what was so difficult to articulate originally. A product engineer is not a product owner who codes, or a developer who attends product meetings. He is an interface between two domains. The domains communicate in different ways, with their own definitions of value, their own languages and their own intuitions. A good product engineer has spent enough time in both worlds to develop both intuitions to a working degree and to see connections that neither side sees alone. His job is to make those connections visible to the people who need them.