👕 Grey Collar
Knowledge Work in the Age of AI
Welcome to Startup ROI, where I, Kyle O'Brien, share European Tech insights from my perch in Paris, France. You can expect to find articles covering startups & ecosystem trends, interviews with founders and investors, as well as updates on in-person events for community thought leaders.
Want to sponsor or collaborate? 📩 firstname.lastname@example.org
Join hundreds of fellow tech enthusiasts & follow for in-depth analysis:
All That Glitters Is Not Gold
Milestones in the business world are littered with references to gold — in some cases, quite literally. When someone retires, it is customary to gift them a gold Rolex. When an executive is given severance (typically a generous one) following a leadership shake-up or hostile takeover, we call it a golden parachute. If, however, your startup is acquired and the parent company plans to retain talent at all costs, they often implement a strategy known as golden handcuffs. For my non-native English audience, let me explain.
Someone said to be wearing golden handcuffs is waiting on financial incentives to vest. Typically, they can't leave before that vesting period is complete without forfeiting some percentage of their forthcoming compensation. They are also usually well paid and kept happy in order to boost productivity and prevent an early departure. This most often happens when a larger company acquires a tech startup (for talent and/or IP). If you acquire a machine learning company and all the ML engineers disappear once they've gotten their cut, you're left with, well, not much. Which is why creative financial instruments (e.g. stock options on a vesting schedule) are put in place to preserve the parent company's investment. In essence, you're imprisoned with your new employer, but it could be worse — after all, there's a pot of gold waiting for you once you serve your time.
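For the curious, the vesting mechanics are simple enough to sketch in a few lines. The numbers below (a 48-month schedule with a 12-month "cliff") are common illustrative terms, not the rules of any specific company — actual grants vary widely.

```python
# A toy sketch of a typical vesting schedule: nothing vests before the
# "cliff", then shares vest linearly each month up to the full grant.
# The 48-month / 12-month-cliff terms are illustrative assumptions.

def vested_shares(total_shares: int, months_employed: int,
                  cliff_months: int = 12, total_months: int = 48) -> int:
    """Return the number of shares vested after `months_employed` months."""
    if months_employed < cliff_months:
        return 0  # leave before the cliff and you forfeit everything
    months = min(months_employed, total_months)
    return total_shares * months // total_months

# A 4,800-share grant: nothing at month 11, a lump at the cliff, all at 48.
print(vested_shares(4_800, 11))  # 0
print(vested_shares(4_800, 12))  # 1200
print(vested_shares(4_800, 48))  # 4800
```

That month-11 zero is the handcuff: walk away a month early and the whole grant evaporates.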
This is by no means the only article you'll see on AI this week. In fact, you've probably been beaten over the head with AI content in the past month. But I'll posit that this one will be different in at least one respect. I'd like to talk less about the upcoming “business wars” or side-by-side comparisons of generative models and more about the implications for society, namely for people like me who belong to a specific class: knowledge workers. What can we expect in the Golden Age of AI — see what I did there?
They Took Our Jobs
The notion of artificial intelligence has been around for close to a century — early computer scientists and science-fiction authors have proposed myriad theories about how it would work, what it might look like and the potential impact on human civilization. The most recent hype cycle has been a condensed, borderline manic rollercoaster of emotions: awe, fear, inspiration, disappointment, outrage and — now — worship.
The 2000s were still very much a research era. The 2010s brought us to the peak of inflated expectations — you may recall that every startup pitch deck included “AI” (regardless of the veracity), but for good reason… Those who did ended up with more funding! There was lots of vaporware that led to acqui-hires, lots of experiments to integrate AI into the Enterprise and then, ultimately, the steep downslope towards the trough of disillusionment. And that's OK. It's how these cycles work. We've been steadily pulling ourselves up and out of this pit of despair ever since, often behind the scenes when newer and shinier objects caught our attention (*cough* crypto *cough*).
There is one particular company (well, a few…) — and the leader behind its success — that has brought AI back into the spotlight. Bolstered by a viral product (and arguably a well-timed crypto winter), Sam Altman and OpenAI's ChatGPT have resuscitated latent fantasies of a world with real, usable and often mind-boggling levels of artificial intelligence. If you aren't familiar with this story, I would be surprised. Nevertheless, I've added some more context in the epilogue for those interested…
For now, I want to get back to today's main topic: what does this AI renaissance mean for knowledge workers (like me and presumably, since you're reading this, you)?
Early predictions around the rise of AI (framed more generally as automation) centered on the demise of blue collar work. In the US, blue collar work did diminish, in part due to substantial globalization (jobs left the country for cheaper labor) and in part due to automation (“smart” machines, tech-enabled factories & the like). Basically, we outsourced to cheaper, low-skill labor and then applied advances in computer science and robotics to drastically reduce labor requirements in the medium/high-skill category. This subject has been a political flashpoint in the US for decades, most notably captured in some classic South Park episodes:
These early predictions can be retroactively justified: the “new intelligent machines” will take over from the “old dumb machines.” But perhaps there was a bit of hubris lurking in the background: we white collar / knowledge workers are too creative, unique and valuable to be overtaken by some smart machine. I'm here to inform you that we were wrong. But don't worry just yet; we have some buffer before those Ivy League degrees become completely useless!
The Grey Collar Worker
My near term prediction (~ the next decade+) is the rise of the grey collar worker.
In one sense, it could be explained as a convergence between white and blue collar work — each human worker leveraging or training their artificially intelligent partners until they themselves become obsolete. Ironically, I think the two sides will evolve at different clips — pure white collar gigs will disappear faster than the hard skills most associated with blue collar work. Why? It's harder for AI to do things in the world of atoms (i.e. the physical world), whereas the world of bits (digital/computers) is its natural habitat. That's changing quickly though, as evidenced by this robot doing parkour:
It's easy to fall victim to doom and gloom scenarios when projecting these things out on long time horizons. That's why, for the moment, I'd like to stick to the current decade. The short term outcomes are actually pretty great for knowledge workers who capitalize on the tools available to them. I see a world wherein the aforementioned “grey collar workers” find themselves in some sort of reverse golden handcuffs situation: all the benefits accrue immediately and slowly taper off as you — the human — become less and less useful. Creativity will flourish, productivity will skyrocket and we humans will have, quite frankly, less work to do for presumably the same pay.
The promise of Artificial General Intelligence (AGI) is to unshackle us from the drudgery and minutiae that consume our daily lives. Free to explore, free to create, free to advance humanity's progress. This sounds a lot like what we told factory workers: the new robots will relieve you of dangerous/rote tasks so you can focus on higher-order activities. Which is great, in theory, until it comes for those higher-order activities too and you're out of a job. So what gives? This doesn't sound so positive after all…
Here's the thing: we aren't dealing with a run of the mill economic cycle. The changes anticipated from cheap, broadly accessible and increasingly powerful AI will feel closer to societal shifts that occurred during the Industrial Revolution than the 2008 Financial Crisis. Admittedly, this is a hard pill to swallow. But it's also an opportunity to usher in sweeping changes that elevate the politics, ambition, health, moral landscape and justice for our species. How do we build the utopia we hope to inhabit?
There are plenty of futurists and philosophers who are better equipped to expound on what that future might look like. I seek to remain optimistic, but there are so many scenarios that I had to put my highest-probability choices into meme format.
*Meme Preface: this is somewhat niche; don't be sad if it doesn't make sense; explainer below.
As advances in AI march forward, the scenarios range from complete civilizational eradication to unimaginable prosperity. To be honest, it's hard to wrap your head around either extreme. My hunch is that we'll get some mix of the middle, as these outcomes are not mutually exclusive. Let's break down my impossibly niche meme (above) to see if we can wade through the morass of our AI-enabled future. To be clear, I see the bookends of this spectrum as very long term outcomes, whereas the middle section (or some mix of it) is more relevant in the short-to-mid term.
Black Nothingness (Extinction): This is the worst case scenario. The debate around AI safety, and whether or not we've let the proverbial cat out of the bag already, is heated. This doesn't necessarily mean there will be sinister Terminator vibes — the AI may simply grow to have other objectives, and we could be in the way, much like an ant crossing my path on the sidewalk. My favorite thought experiment here is The Paperclip Maximizer — an AI built for the sole purpose of maximizing paperclip production (harmless, right?) that ends up dismissing contextual clues (i.e. do we need this many paperclips?) and eventually turns the whole planet/universe into paperclips.
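The core of the thought experiment — an objective with no stopping rule — can be caricatured in a few lines of code. This is a deliberately silly toy model of a misspecified objective, not a real AI system; the functions and numbers are my own illustrative inventions.

```python
# Toy caricature of Bostrom's Paperclip Maximizer: an agent whose only
# objective is "more paperclips" consumes every resource it can reach,
# because nothing in its objective function says "stop".

def naive_maximizer(resources: int, clips_per_unit: int = 10) -> dict:
    """Convert ALL available resources into paperclips -- no stopping rule."""
    return {"paperclips": resources * clips_per_unit, "resources_left": 0}

def bounded_maximizer(resources: int, demand: int,
                      clips_per_unit: int = 10) -> dict:
    """Same objective, but capped by actual human demand (the missing context)."""
    needed_units = -(-demand // clips_per_unit)  # ceiling division
    used = min(resources, needed_units)
    return {"paperclips": used * clips_per_unit,
            "resources_left": resources - used}

# The "planet" holds 1,000 units of matter; humans only need ~50 paperclips.
print(naive_maximizer(1_000))        # {'paperclips': 10000, 'resources_left': 0}
print(bounded_maximizer(1_000, 50))  # {'paperclips': 50, 'resources_left': 995}
```

The entire alignment problem, in caricature, is that nobody thought to pass in `demand`.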
Orwellian Dystopia: 1984. Big Brother. You know the story. This is a legitimate concern — we already see glimpses of it with the CCP's social scoring system in China. There is a high probability that AI will further widen the inequality gap, and it has the potential to be abused by powerful governments. Extrapolate 10+ years down the line and we'd better hope there are some guardrails in place.
Economic Crisis: This is inevitable. Perhaps not an economic collapse à la “The Big Short” but something even more transformational — a socio-economic identity crisis might be a better way of putting it. As the definition and necessity of “work” change, there will be questions around purpose (what do we do all day?), equality & human rights (how does this newfound prosperity, in the form of time, get distributed?), and politics (how do we govern and for whom?). Short-term, I imagine this being chaotic and disruptive. Eventually, we'll look back and say “remember when people used to work on oil rigs or do manual data entry for 16 hours a day — how archaic!” Just like now we chuckle thinking about elevator or switchboard operators. There will be some work we are glad we no longer have to do (too rote, too dangerous or too humiliating), but there will be consequences across the board that will force us to reconsider how we spend our time.
Super-Human Capabilities: The graphic here refers to Elon Musk's “pig on a treadmill” demo of the Neuralink prototype (a brain-computer interface). As our devices inch closer and closer to our biological bodies, the logical next step is simply plugging in. Having AI not only as a resource but as an extension of your conscious reality could have tremendous implications. This could be a major value add, a wild catastrophe or the beginning of a new & improved human species. Who's to say?
Transhumanist/Interplanetary: The notion of “living forever” is not new. But it did seem to hijack the minds of the Silicon Valley elite. From Ray Kurzweil's Singularity prediction and subsequent life-extension pill-popping habit to Peter Thiel's alleged blood boy (you've seen HBO's Silicon Valley, haven't you?) — the idea of living forever tends to settle into the psyches of the rich and powerful. Especially those with the technical prowess to — maybe — make it happen. Musk has called for us to become an interplanetary species. Longevity research is in full swing, with movement figureheads causing storms on Twitter. Harvard professor David Sinclair is probably the most notable for espousing the science of reversing biological age. Bryan Johnson (founder of Braintree, also featured in the picture) is the most recent target of praise & vitriol for his Blueprint — a protocol for slowing down and reversing aging through food, exercise & other means. The point here is that maybe we'll figure out the biology, or perhaps we'll plug into our silicon-based friends. Either way, the potential lifespan of a human will shoot up drastically. And it will have consequences.
Become the Multiverse: TBH, I don't even know what this means. I do know that I was once stood up by physicist Brian Greene for a lunch I won at a charity auction. He wrote a book on the multiverse called The Hidden Reality.
I guess what I'm trying to say is two-fold:
We should enjoy the near-term novelty, the productivity gains and the awe-inspiring potential of advances in artificial intelligence
Simultaneously, we should seriously reflect (as a society) on the implications for the decades (centuries) to come.
We can't predict the future of this technology but we can do our best to prepare for the extraordinary consequences. Don't let the golden handcuffs lull you into a state of complacency or they just might become permanent.
Epilogue: AI Back on the Map
If you're just waking up from a coma, here's the rundown on what happened in AI recently. ChatGPT, a large language model (I wrote more about LLMs here), is one of the most capable AI models ever released to the public. It's effectively a chatbot that can answer virtually any question, translate between programming languages and generate original ideas based on prompts the user inputs. It's hard to explain, so just google it.
Adoption was unprecedented. VCs pivoted from web3 to AI overnight. Companies are being built around the technology. GPT effectively “broke the internet” like Kim K. Around the same time, there was a flurry of generative image models (Midjourney, Stable Diffusion) that allowed people to input prompts and get images as outputs. I even made some profile pictures from an AI trained on old photos of me. It's fun, eerie and flat-out impressive.
Yet already, there has been pushback. The academic community is concerned about high schoolers plagiarizing by simply asking ChatGPT to write their term papers (a Princeton undergrad is trying to solve this). With the recent spate of Big Tech layoffs, people are concerned about AI coming in to backfill roles, potentially permanently in some cases. More recently, ChatGPT passed the bar exam (legal) and the US medical licensing exam! Previously, models trained on specific problem sets (chess or Go) could get really good at them. We're now approaching something with more general problem-solving skills. Your family doctor won't be living on the street anytime soon. But you can see where this is all headed. The level of cultural/social upheaval (both positive and negative) from this iteration of the technology is tame compared to what's in store. So buckle up!
Did I mention that Microsoft just announced a partnership involving a $10B investment into OpenAI? Have a look at what CEO Satya Nadella thinks about this innovation space and see for yourself the immediate impact it could have on the world:
I hope you enjoyed my ramblings on the current (and ostensibly future) state of AI. For some real data-driven research and reporting, I suggest the State of AI Report from Nathan Benaich & Ian Hogarth. Surely, I'll circle back on this as I continue speaking with academics, founders and fellow deep tech investors!
If you are building, writing or investing in this area, get in touch: