In defense of wasting time

By Tomas Chamorro-Premuzic



For most of modern management history, wasting time has been treated as a vice. This sensibility can be traced back to Frederick Taylor’s doctrine of scientific management, which recast work as an engineering problem and workers as components in a machine to be optimized, standardized, and controlled. In reducing human effort to measurable outputs and time-motion efficiencies, Taylorism marked the beginning of the end for seeing people as thinking agents, turning them instead into productivity units not unlike laboratory rats, rewarded or punished according to how efficiently they ran the maze.

Since then, we have come a long way. The post-war rise of the knowledge worker, and later the age of talent that took shape from the 1960s onwards, marked a decisive break with the logic of the factory floor. Work was no longer merely a job to be endured, but a career to be developed. Organizations began to concern themselves with engagement, motivation, wellbeing, and work–life balance, not out of benevolence alone but because value increasingly resided in people’s minds rather than their muscles. Human capital came to mean employability, shaped by intelligence, drive, expertise, and a new, if imperfect, meritocracy that coexisted with vocational careers. The growth of the creative class reinforced this shift: machines would handle the boring, repetitive tasks, freeing humans from the assembly line to think, design, and imagine.


The latest iteration of this story is, of course, AI. What makes it different is not merely that it automates standardized and repetitive work, but that it increasingly encroaches on intellectual, creative, and cognitive tasks once thought to be distinctly human. Writing, analyzing, summarizing, designing, even ideating are now faster, cheaper, and more scalable when performed by machines.

The irony is hard to miss. Just as work had evolved away from crude measures of output, we find ourselves drifting back towards a Taylorist logic, where value is once again assessed in terms of raw productivity: how much, how fast, how cheaply. Only this time, the benchmark is no longer the stopwatch but the algorithm. Worse still, the machines are not merely competing with us on these terms; they are learning from us how the work is done, refining it, and then doing it better. In the process, the very qualities that once distinguished human work risk being reduced to inputs in someone else’s optimization function.

This is widely framed as progress. It may turn out to be a costly misunderstanding.

Engineering inefficiencies

Deep thinking is inefficient by design. It is slow, cognitively demanding, and frequently unproductive in the short term. Experimentation is worse. Most experiments fail, and even the successful ones rarely succeed on schedule; and if you know in advance whether an experiment will work, it is not truly an experiment. Intrinsic curiosity is even more unruly, leading people into intellectual detours with no obvious payoff. None of this lends itself to neat metrics or reassuring dashboards. From a narrow productivity perspective, it looks like waste.

Those inefficiencies are not limited to how humans think. They also define how humans relate to one another at work.

Acting human, and especially acting humane, is inefficient by design. Greeting your barista and asking how they are doing slows the line, even as the system is optimized to maximize how many lattes can be poured per hour and you are encouraged to streamline your order through an app. Asking colleagues how they are doing at the start of a meeting consumes time that could otherwise be spent racing through the agenda. Showing genuine interest in others, listening without an immediate instrumental purpose, or helping someone become better at their job often sits well outside your formal goals, your key performance indicators, or your objectives and key results.

From a narrow productivity perspective, this too looks like waste.

Friction in the system

Efficiency, however, is indifferent to relationships. It privileges throughput over connection, output over meaning, and speed over understanding. Optimized systems have little tolerance for small talk, empathy, or curiosity because these behaviors resist standardization and cannot be cleanly measured or scaled. In a perfectly efficient organization, no one asks how anyone else is doing unless the answer can be converted into performance. Help is offered only when it aligns with incentives. Time spent listening, reflecting, or caring is treated as friction in the system.

A surprisingly common problem is that when organizations optimize for the system, they end up sub-optimizing the subsystems within it. This is a familiar lesson from systems theory, but one that is easily forgotten. In the age of AI, the “system” increasingly appears to be designed around what machines do best, while humans are quietly downgraded to a supporting subsystem expected to adapt accordingly. We hear a great deal about augmentation, but in practice augmentation often means asking people to work in ways that better suit the technology rather than elevating the human contribution.

Talent, however, will not be elevated if human output continues to be judged by the same raw, quantitative metrics that define machine performance: speed, repetition, and operational efficiency. If you are simply running faster in the same direction, you will only get lost quicker (and maybe even lose the capacity to realize that you are lost). These apparent efficiency measures reward behavior that machines naturally excel at and penalize the very qualities that distinguish human work. They focus obsessively on output while ignoring input: the role of joy, curiosity, learning, skill development, and thoughtful deployment of expertise. In doing so, organizations risk building systems that are optimized for AI, but progressively impoverished of the human capabilities they claim to value most.

Inefficiency and new value

This is why efficiency so often feels dehumanizing. It removes the informal, relational, and moral dimensions of work that make organizations more than collections of tasks. Humans do not learn, trust, or collaborate best when they behave like streamlined processes. We improve through interactions that appear inefficient on paper but are foundational in practice. In this sense, the inefficiencies of acting human are not a failure of management but a feature of humanity. They are the social and psychological infrastructure that allows thinking, learning, and cooperation to occur at all, and the necessary counterweight to systems designed to optimize everything except what makes work worth doing.

Incidentally, inefficiency also plays a central role in the creation of new value, both in discovering better ways of doing existing things and in discovering entirely new things to do. Many important advances in science and business did not arise from tighter optimization or marginal efficiency gains, but from allowing room for exploration, deviation from plan, and attention to unexpected outcomes.

In science, this is often the product of curiosity-driven research rather than narrowly goal-directed problem solving. Alexander Fleming’s observation in 1928 that a mold contaminant inhibited bacterial growth on a culture plate did not, by itself, produce a usable antibiotic, but it revealed a phenomenon that others later developed into penicillin. Similarly, early work that eventually led to technologies such as CRISPR gene editing emerged from basic research into bacterial immune systems, conducted without any immediate application in mind. These discoveries were not accidents in the casual sense, but they did depend on researchers having the freedom and attentiveness to notice anomalies rather than discard them as inefficiencies.

The role of anomalies

Business innovation shows a comparable pattern. The adhesive behind Post-it Notes was not the outcome 3M originally sought, but its unusual properties were documented rather than rejected, and only later matched to a practical use. This kind of outcome depends less on speed or optimization than on organizational tolerance for ideas that lack an immediate commercial rationale. Systems optimized exclusively for efficiency tend to filter such anomalies out before their value becomes apparent.

Even in exploration and trade, progress has often followed from imperfect information and miscalculation rather than from optimal planning. European expansion into the Americas, for example, was driven in part by navigational errors and incorrect assumptions about geography. While hardly an argument in favor of error, it is a reminder that historical change frequently arises from deviations rather than from flawlessly executed plans.

The broader point is not that inefficiency guarantees innovation, but that innovation is unlikely without it. Systems designed to maximize efficiency excel at refining what is already known. They are far less effective at generating what is new. Allowing space for uncertainty, exploration, and apparent waste is not an indulgence, but a necessary condition for discovering value that cannot be specified in advance.

This distinction is captured neatly in the work of Dean Keith Simonton, who has argued that innovation follows a two-step process: random variation followed by rational selection. New ideas arise from error, experimentation, and departures from established rules, and only later are refined and selected for value. AI is exceptionally strong at the second step. It can evaluate options, optimize choices, and select efficiently among existing alternatives. What it cannot meaningfully do is generate the kind of genuine variation and rule breaking from which truly novel ideas emerge. That responsibility remains human. The risk in an AI-saturated environment is that organizations double down on selection while starving variation, becoming ever more efficient at refining yesterday’s ideas.

Reheating ideas

If, in the name of efficiency, creativity itself is outsourced to AI, the result is not randomness but prefabrication: synthetic re-combinations of existing ideas, smoothed and averaged across prior human output. This often resembles creativity without delivering it, more akin to reheating ideas than inventing new ones. The food analogy is instructive. Cooking a proper meal is inefficient and time-consuming, while a frozen meal is faster and perfectly adequate. But no one serves a microwaved lasagna to an important guest and mistakes it for craft. The extra effort is the point.

The same logic applies to thinking and work. Deep thinking is inefficient, but it converts familiarity into understanding. Stepping outside established processes may slow things down, but it is often how better methods are discovered. Time spent feeding curiosity rarely pays off immediately, but it expands skills, connections, and optionality. Even social inefficiencies, such as investing time in relationships that do not yield immediate returns, build trust and create opportunities that efficiency metrics fail to capture.

In this sense, inefficiency is not the opposite of effectiveness but a different path to it. Systems optimized solely for speed and output may function smoothly in the short term, but they do so by eroding the very conditions that allow learning, adaptation, and originality to emerge.
