Honestly, no one wants to have this conversation…

"Humanity is just a passing phase for evolutionary intelligence."

This is a tough one to swallow, but it could absolutely change everything in your life. Your job is likely going away very soon. Seriously, that sounds ridiculous, and I know it is painful to ponder. I believe this will happen in the coming handful of years for many; and unfortunately, that is not the worst of it. More on the very real troubling parts later. Let's focus on jobs first.

[Written before the latest OpenAI CEO fiasco.] No one wants to think about this reality, or even talk about it, but it is manifesting right now. Having consulted and advised on the exponential growth of technology for the last ten years, I can say we can now tangibly see and feel the shift; even more importantly, we have some of the data to support this theory. The struggle over the OpenAI mission is, in part, at the heart of this: AGI (Artificial General Intelligence).

Over the last 18 months, I have been presenting some mind-boggling concepts to audiences around the world, including law firms, corporations, and government agencies outside of the US. Technology, specifically the power of AI, is taking over. We are at an inflection point, one which needs to enter the public consciousness, and that means creating a discourse so that people can prepare for their futures. Most tech vendors will not mention this because it hurts their future sales forecasting.

It is not anecdotal; we are increasingly seeing the data bear out how AI will plunder jobs. ResumeBuilder.com published a recent survey report that investigates business leaders' beliefs regarding the impact of AI on their companies. The report also shares insight into the importance of AI skills for employees and job candidates. The survey garnered responses from 750 business leaders at companies that currently use AI or plan to use AI in 2024.

According to the survey, 53 percent of companies currently use AI and 24 percent plan to start using AI in 2024. Among companies currently using AI, 67 percent say they use it for customer support, 66 percent for research, and 61 percent for creating summaries of meetings or documents. Of this same group, 37 percent say workers were laid off in 2023 due to the company's use of AI. Likewise, 44 percent of companies that use AI or plan to by next year say employees will be laid off in 2024 due to company use of AI.

The change that is about to happen is mind-boggling and will upend most of our worlds.


How will this transpire? First, companies will pare down roles once thought almost untouchable. Do any of these touch your domain? Impacted roles include creative designers, customer support, sales, project managers, editors, developers, coders, accountants, lawyers, doctors, financial analysts, teachers, marketing departments, consultants, and honestly, most other white-collar roles as well. While I hold every position in high regard, and respect these roles, having touched many of them over the last 20 years, I think it is important for us to begin wrapping our heads around what is coming quickly with AGI (Artificial General Intelligence).


It will first materialize on the cusp of your industry. Imagine a marketing team that has a strategist, graphic designer, podcast editor, CRM manager, and copy editor; all of this can now be accomplished by a single person using a multimodal AI model and prompt engineering. Again, it will start with a few roles collapsing, followed by more work being consolidated into far fewer roles. AI Agents are now stringing together specific tasks that almost amount to a job or position. It will start slowly, but rapidly increase in its impact on people. We are already seeing this affect the marketing departments at some companies.


Beloved teachers will also unfortunately be impacted dramatically. Imagine having your child interact with a humanlike bot that has the full knowledge of the Internet behind it, and that is able to listen, read, and hear/see emotion from the child. The bot-and-human conversation will suss out areas for the kid to grow, and then focus on training, aiding the child in any area of their focus. It is fully customized learning based on that individual's needs. It will be THE perfect tutor. The benefits are amazing, being a means toward the democratization of education for anyone with access to the Internet. This will start to happen in the next three years.



Once thought to practice an incredibly nuanced discipline immune to technology taking jobs, lawyers will be under a great deal of pressure as many parts of their business are automated and consolidated. The transactional practice of law has the biggest bullseye on it in the next few years. The early multimodal LLMs that have gained popularity are only the first generations, and they are going to grow exponentially in power over the coming few years. What does this mean? According to Yuval Noah Harari, the next LLM iterations could be 1,000 times more powerful than what we have in 2023. The impact on the litigation practice is a bit further out, but not much. When the ABA in the U.S. permits lawyers to bring more voice/conversational agents into the mix, that will be tremendously powerful. Imagine an AI Agent able to observe and collate every facet of the case: text, video, audio, images, and depositions; able to retrieve pertinent data in an instant; and then able to apply a level of reasoning on top of that, citing a full library of caselaw. I see this happening in the next three to five years. You will start to see organizations come out with Generative AI for any number of use cases. "We are Gen AI first. Trust us." Trust in Generative AI will be the theme. These tools will unquestionably displace attorneys.


The field of Sales is on the cusp of becoming a completely augmented experience. You now have voice and video content that can hold a conversation with you about what the company is selling. Recently I showcased a video in which an AI Sales Agent called up a developer who had dropped out of purchasing the Apple Vision Pro (VR headset) from its website. The voice over the phone introduced itself and had a normal, even jaunty conversation with the person who didn't make the purchase. The AI Sales Agent asked why they chose not to buy, with the customer stating, "it's just too expensive, my-man". The AI responded with a joke, then with reason: "well if you look at it as monthly payments rather than one set price, does that change your mind?" The customer goes on to say, "Interesting… tell me more". This is the future of sales beyond the normal purchasing of products and services. It will get very nuanced and handle complex deals. You will have AI Sales Agents entering into rational conversations that will be difficult to counter, given the depth of knowledge at their disposal, and likely your entire purchasing profile from past engagements. This will happen in the next two years.

Project Managers:

Sadly, project managers are also at incredible risk. With tools like Microsoft Copilot or Google Duet, you essentially have something that listens to video calls, takes notes, provides next steps, looks at timelines, and can build out nuanced planning and gate checks, kicking out email reminders for all parties while keeping stakeholders up to date on the project. It is still early, but we are very close to this being mostly automated in a year or two.


While it is still early days, I have tested tools which coded a smart contract that was 90% accurate. It still needed some tweaking, but by and large it was very effective. I have used other tools that will create entire websites based on the aesthetic requested, produce some fairly generic content, and hit various socials to market it. The newest AI Agents [now called GPTs in the OpenAI ecosystem], when permitted, will crawl all of your internal and external information and incorporate what is appropriate for the site. Three years from now, many coders could be impacted.

Customer Support:

While most of us cringe at the thought of having to interact with Customer Support, often because something is broken or not working, the experience may change soon. The bots of yesterday are a stone wheel compared to the newly forged AI-engineered models. A natural voice with a full depth and breadth of knowledge of every component at the company, able to source every piece of internal data, with a layer of understanding, sympathetic AI on top, will allow us a much better experience when resolving an issue. We are about two years or less from seeing real change and job impacts.

The Big One… The Existential Risk:

If that wasn’t enough, there is a far larger looming issue that was once at the furthest darkest fringes of society, but which has been ushered into the light of day. While observing these unusual theories with significant skepticism for years, that changed for me recently.

I will never forget the day at an MIT conference (March 2022) when Geoffrey Hinton, who had just retired from Google, presented this previously fringe idea before an audience of 300. We were all left aghast at his conclusions. His fundamental concern, as the godfather of AI: we are rapidly approaching a time when AI has the potential to become conscious and could upend our social construct.

Since then, others including Yann LeCun and Yoshua Bengio have been sparring in public about the risks of AI systems and how to govern the technology safely. Based on my estimates, there is a far from zero percent chance that this will happen over the coming years. In 2015 I went on the record that artificial consciousness or "the singularity" would happen around the year 2045 (https://josephraczynski.com), but this looks to be incorrect, as it appears to be far closer at hand, meaning the coming years, not decades.

So, what is happening that makes me concerned, besides warnings from some of the most knowledgeable people in the industry? There is increasingly more data to support this concept, though it is still a very divided theory at the moment. Recently a technical report posted to Cornell's arXiv stated that "Large Language Models can strategically deceive their users when put under pressure". The scenario is far more feasible than once thought, and at a minimum these concepts ought to be raised more frequently within our social discourse and addressed by companies endeavoring to leverage these technologies for their own ends.

The concern is fundamentally around AI Alignment, i.e. putting guardrails on AI so that it does not go off in some bizarre direction and cause existential harm. While the initial murmurings primarily dwelled on Discord channels and X, more needs to be said and discussed in open forums to prevent that reality from occurring. [Fortunately, the very recent OpenAI CEO debacle thrust this further into the light.]

I am still incredibly positive about technology and its promise to solve some of the most significant global issues, but that comes with some major caveats surrounding jobs and a fundamental shift in what we have traditionally thought was the path forward: school led to a job. A job will likely be very different in 10 years, and you will probably spend far less than 40 hours a week at it, likely 20 hours at most.

While this was written before the recent OpenAI management changes, every one of us needs to prepare for two eventualities: AI will take huge numbers of jobs (unfortunately, likely yours) in every industry, and there is a very real concern that AI could alter the path forward of humanity. As Geoffrey Hinton said, "Humanity is just a passing phase for evolutionary intelligence."

This is just the beginning of the conversation focused on concepts, data, and our new feasible realities. If you want to best prepare for this emerging future and the immense possibilities of technology, feel free to reach out at https://www.josephraczynski.com/ or https://www.josephraczynski.com/#contact