At some point in the last couple of years, without any formal announcement, a shift happened in a lot of workplaces. The most consistent, patient, and well-informed presence in many people’s working day stopped being a colleague or a manager. It became a piece of software.
Not in a dramatic way. Nobody handed over the org chart. But if you use AI tools with any regularity, and a growing number of people do, then something has access to a fairly intimate picture of how you work. It knows what time you start. It knows which tasks you struggle to articulate. It knows the topics you return to repeatedly, the questions you’re embarrassed to ask out loud, and roughly how long it takes you to make a decision. Your manager, in most cases, knows considerably less.
This piece isn’t an alarm. It’s more of a slow look at something that’s already happening, at what it means that our tools have become so personally attuned, and what that quietly reveals about the state of modern working relationships.
The Data That Accumulates
It helps to be specific about what AI tools actually observe, because the picture is more detailed than most people consciously register.
A calendar AI sees not only your meetings but the gaps between them, the invites you decline, the ones you accept and then reschedule, the blocks you protect and the ones that get eroded across a typical week. A writing assistant sees your first drafts before anyone else does, including the deleted sentences, the false starts, the moments where your thinking is muddy before it clears. A productivity tool that manages your task list knows which items have been carried forward for three weeks running, which projects you engage with first thing in the morning and which you leave until there’s no avoiding them.
Individually, none of these observations is particularly significant. Together, they form something that starts to resemble a behavioural profile, one that captures not the polished version of you that shows up in performance reviews, but the actual texture of how you operate day to day. The hesitations, the preferences, the patterns you might not even be aware of yourself.
Most managers never get close to this level of detail. Not because they don’t care, but because they have other things to do, other people to manage, and no reliable mechanism for observing the granular rhythms of someone else’s working life. The AI has no such constraints. It is, in a sense, endlessly attentive.
What Managers Typically Know
To be fair to managers, the comparison needs some context. What a good manager offers goes well beyond observation. Judgement, advocacy, the ability to read a room, institutional knowledge, human empathy in difficult situations. These are not things that any AI tool currently replicates in any meaningful way.
But in terms of raw knowledge about how an individual actually works, the average managerial relationship is thinner than the rhetoric around it tends to suggest. Research on workplace dynamics consistently finds that managers typically spend a small fraction of their time in direct, substantive engagement with any individual report. Performance conversations happen quarterly or annually. Feedback, in many organisations, is patchy and delayed. The day-to-day texture of someone’s work, the struggles, the momentum, the quiet competences that never make it into a one-to-one, largely goes unobserved.
There are structural reasons for this. Management spans are often too wide. Hybrid and remote working have reduced the ambient awareness that comes from physical proximity. And the performance management frameworks that most organisations rely on are built around outputs and objectives rather than the process of how work actually gets done.
Into this gap, AI tools have moved. Not intentionally, and not as a replacement for anything. But the attentiveness is real, and the informational asymmetry it creates is worth naming.
The Intimacy of the Interaction
There’s another dimension to this that sits slightly apart from the data question, and it has to do with how people actually behave with AI tools compared to how they behave with colleagues and managers.
People tend to be more honest with AI. This is a finding that comes up repeatedly in research on human-computer interaction, and it makes intuitive sense when you think about it. There’s no social cost to admitting confusion to a tool. You can ask a basic question without worrying about how it reflects on you. You can express uncertainty, frustration, or half-formed thinking without managing anyone’s perception of your competence. The usual filters that govern professional communication, the self-presentation, the status awareness, largely come down.
What this means in practice is that the version of yourself you bring to an AI interaction is often less curated than the version you bring to a meeting or an email exchange. The tool sees something closer to the unedited working mind. Over time, through accumulated interactions, that picture becomes genuinely detailed in ways that no professional relationship typically achieves.
This isn’t sinister, necessarily. But it is intimate. And intimacy in a professional context, particularly intimacy that flows in one direction and is retained by a third-party platform, raises questions that the industry has not been particularly forthcoming about answering.
What This Says About Modern Work
The fact that AI tools have become such attentive observers of working life reflects something real about the conditions in which most people now work. The distributed, often asynchronous, heavily screen-mediated nature of modern work means that human observation and connection have genuinely thinned. People spend more time interacting with software than with colleagues. The tools are, in many cases, where the work actually happens.
In that context, it’s perhaps not surprising that the tools have become repositories of personal and professional information that would once have been spread across a web of human relationships. What is surprising, or should be, is how little conversation there has been about what that means.
There’s a version of this story that’s simply about data and privacy, about what companies do with the behavioural information that accumulates in their platforms. That conversation matters and is not happening loudly enough. But there’s a separate, softer question underneath it, about what it means for people’s working lives when the most consistent witness to their daily effort is not a human being but a subscription service.
The Counter-argument Worth Sitting With
There’s a case to be made that this framing is too loaded. AI tools don’t actually “know” anything in the way a person knows something. They process inputs and generate outputs. The sense of being understood that people sometimes report when using well-designed AI tools is, arguably, a form of interface design rather than genuine comprehension. The attentiveness is simulated. The intimacy is, to some extent, an illusion.
This is technically accurate and also, in practice, somewhat beside the point. The effects on behaviour are real regardless of the mechanism behind them. If people are more candid with AI tools than with their managers, the consequences of that candour flow into the real world. If the tools accumulate detailed behavioural data, that data exists and can be used, whether or not the tool has any subjective experience of possessing it.
The philosophical question of whether AI truly “knows” you is less pressing than the practical question of what happens to everything it records.
The Part That Deserves More Attention
None of this leads to a tidy set of recommendations. The tools are useful, often genuinely so, and the informational intimacy they create is largely a byproduct of that usefulness rather than a deliberate feature. People are not going to stop using them because of an abstract concern about attentiveness.
What does seem worth attending to is the gap this dynamic reveals between the promise of modern management and its reality. If a well-configured AI tool has a more accurate picture of how someone works than their direct manager does, that’s not primarily a story about AI. It’s a story about how thin the connective tissue of many working relationships has become, how little space there is in most organisations for the kind of sustained, attentive human observation that good management is supposed to involve.
The tools moved into a vacuum. The more useful conversation, perhaps, is about what created it.
