
Don't confuse signals with answers

Is 100% test coverage an answer that your code has no bugs? No. It's a signal that certain parts of the system behave in a way that is not broken. It is not a guarantee that the system as a whole works, even if you have some integration tests; there are always edge cases left. But good test coverage is still preferable, because it is usually a strong signal that the codebase is well cared for.
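A contrived sketch of why coverage is only a signal: the function and test below are hypothetical, but the test executes every line of the function, so a coverage tool would report 100%, and the bug survives anyway.

```python
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

def test_average():
    # This single test executes every line of average(),
    # so line coverage is 100%.
    assert average([2, 4, 6]) == 4

# Yet average([]) raises ZeroDivisionError: full coverage was
# a signal about the tested paths, not an answer about all inputs.
```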

Here is another example. How much code did you ship in the last month? If you didn't write much, is that an answer that you didn't work? No. But it's a signal that you might need more attention: maybe you're blocked, or maybe you're working on a big project and most of your contributions are going into RFCs.

When you ask for advice, you're not asking for a definitive answer. For example, you might ask a friend, "Do I look good in this jacket?" Their answer is a signal. You might agree or disagree with it. One friend might say the jacket doesn't suit you while five others say it fits perfectly. It's up to you to decide for yourself.

Same with AI answers: they can provide a signal (an opinion) on the question asked, but they might be missing some important context, or they might have more context than needed. So it's again up to you how you use that signal, whether you agree or disagree with it.

Similar with performance metrics for a website. If we improve performance, will it result in better sales? Was it only the performance, or also the marketing campaign with 50% discounts that ran at the same time? Or maybe our competitor's website was down for 10 hours? Some metrics are too abstract to give an answer. Having metrics at multiple levels helps to identify the stronger signal, and A/B testing helps to improve the signal strength as well.
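A minimal sketch (with made-up numbers) of how an A/B test strengthens the signal: the discount campaign and the competitor's outage hit both groups equally, so comparing conversion rates between the control and the faster variant isolates the performance change. This uses a standard two-proportion z-score; the traffic and conversion figures are purely illustrative.

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: conv_* conversions out of n_* visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_b - p_a) / se

# control: 200 conversions out of 10,000 visitors
# fast variant: 260 conversions out of 10,000 visitors
z = z_score(200, 10_000, 260, 10_000)
print(round(z, 2))  # |z| > 1.96 is roughly "significant at the 5% level"
```

Even then the result is a stronger signal, not an answer: a significant z-score says the difference is unlikely to be noise, not that performance alone caused it.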

For a lot of questions we only have signals. Some strong, some weak, but rarely a definitive answer. It takes time to shift to this mindset, but it helps you understand the world better and stay calm in unexpected situations.