Is ISO 42001 a waste of time and money?

Possibly, and here's why:

AI capability is roughly doubling every six months, and today's frontier models already outperform something like 90% of humans. Look at the research behind OpenAI's o3 and o4-mini models, at other SOTA labs like Google and Anthropic, and at the developments in China, where labs are about to release very similar models: that six-month doubling time is only going to shrink.

Now let's say we want to build a business starting today. The ISO 42001 standard sits behind a paywall, so I can't freely access its requirements right now and start building my company around them. I have to go find an auditor certified in ISO 42001 to help me with the standard. All of that takes time.

So, let me give you a financial problem. On March 19, running OpenAI's o1 Pro, the state-of-the-art (SOTA) model at the time, at my current token consumption would cost me a million dollars a month in my agentic workflows, which is fine.
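To make that arithmetic concrete, here is a minimal sketch of how a monthly bill in that range could be estimated from token consumption. The token volumes and per-million-token prices below are illustrative assumptions, not the actual workload or OpenAI's actual pricing.

```python
# Back-of-the-envelope estimate of monthly spend from token consumption.
# The volumes and $/1M-token prices are illustrative assumptions only.

MONTHLY_INPUT_TOKENS = 4_000_000_000    # assumed agentic workload, input side
MONTHLY_OUTPUT_TOKENS = 650_000_000     # assumed agentic workload, output side
PRICE_PER_M_INPUT = 150.0               # assumed $ per 1M input tokens on a premium SOTA model
PRICE_PER_M_OUTPUT = 600.0              # assumed $ per 1M output tokens

def monthly_cost(in_tokens: int, out_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Estimate monthly spend in dollars from token counts and $/1M-token prices."""
    return (in_tokens / 1_000_000) * in_price + (out_tokens / 1_000_000) * out_price

if __name__ == "__main__":
    cost = monthly_cost(MONTHLY_INPUT_TOKENS, MONTHLY_OUTPUT_TOKENS,
                        PRICE_PER_M_INPUT, PRICE_PER_M_OUTPUT)
    print(f"Estimated monthly spend: ${cost:,.0f}")  # ~$990,000 with these assumptions
```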

If I go for ISO 42001, the best firms in the country will look at my stack and take 90 days at best to evaluate it. So I basically have to stop what I'm doing for 90 days and let them test my AI systems for bias and against the other technical controls for generative AI.

Look at the AI industry, especially back in March. On the 19th I was using OpenAI's o1 Pro, the latest SOTA model. Five days later, Google's Gemini came out at a tenth of o1 Pro's cost.

Two days after that, DeepSeek came out at a tenth of Gemini's cost. So within roughly two weeks you have a 100x price reduction. What does that do from a business perspective? If I had elected to hire an ISO 42001 auditor on March 19th to look at my stack, built on the SOTA model of the day, OpenAI o1 Pro, I'm effectively locking in my million-dollar-a-month expense while they examine my systems for 90 days.

My competitor, who two weeks ago didn't even know they wanted to get into generative AI, decides to start building generative AI agents. The first thing they're going to do is look at cost. They're not going to choose OpenAI o1. They're probably going to choose either DeepSeek or Google, two entirely different providers.

And they immediately have a 100x price advantage over me, who started 14 days earlier. So ISO 42001, to me, is a risk to my business. It's probably one of my biggest risks.
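As a rough sketch of that lock-in, here is the same arithmetic in code: a million-dollar-a-month stack frozen for a 90-day audit versus a competitor who starts on a model priced at a hundredth of the cost. The figures are the illustrative numbers from this post, not real vendor prices.

```python
# Illustrative lock-in arithmetic using the figures quoted above.
# These are the post's example numbers, not real vendor pricing.

INCUMBENT_MONTHLY_COST = 1_000_000   # $/month on the SOTA model audited on March 19
PRICE_DROP_FACTOR = 100              # a tenth of a tenth: two successive 10x price cuts
AUDIT_MONTHS = 3                     # ~90-day ISO 42001 evaluation window

competitor_monthly_cost = INCUMBENT_MONTHLY_COST / PRICE_DROP_FACTOR

incumbent_spend = INCUMBENT_MONTHLY_COST * AUDIT_MONTHS    # spend while locked into the audited stack
competitor_spend = competitor_monthly_cost * AUDIT_MONTHS  # competitor's spend over the same window

print(f"Incumbent spend over the audit window:  ${incumbent_spend:,.0f}")  # $3,000,000
print(f"Competitor spend over the same window: ${competitor_spend:,.0f}")  # $30,000
print(f"Cost disadvantage: {incumbent_spend / competitor_spend:.0f}x")     # 100x
```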

You could mitigate the risk if the standard weren't behind a paywall, and the only reason it is behind a paywall is that they're trying to protect (perceived) intellectual property. Quite frankly, we have generative AI right now that's smarter than nine out of ten people, which effectively makes IP protection a thing of the past. In six months it's going to be twice as smart again, which means the intellectual property the ISO 27001 or 42001 people hold will be irrelevant by then because it's outdated.

So, What's the Alternative?

The NIST AI Risk Management Framework. It's at least openly published and free to access. You can tell your customers: go read the guidance and technical controls available right now, and start building your business toward them.
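As a rough illustration of what "building toward those controls" might look like in practice, here is a minimal sketch that tracks internal work against the AI RMF's four core functions (Govern, Map, Measure, Manage). The individual check names and their statuses are hypothetical examples, not NIST-prescribed controls.

```python
# A minimal sketch of tracking internal work against the NIST AI RMF's four
# core functions (Govern, Map, Measure, Manage). The check names and statuses
# below are hypothetical examples, not NIST content.

from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str
    checks: dict[str, bool] = field(default_factory=dict)  # check description -> done?

    def coverage(self) -> float:
        """Fraction of checks completed for this function."""
        return sum(self.checks.values()) / len(self.checks) if self.checks else 0.0

risk_register = [
    RmfFunction("Govern",  {"AI use policy signed off": True,
                            "Roles and accountability assigned": False}),
    RmfFunction("Map",     {"Use cases and context documented": True,
                            "Third-party model dependencies listed": True}),
    RmfFunction("Measure", {"Bias evaluation run on current model": False,
                            "Eval suite re-run after model swap": False}),
    RmfFunction("Manage",  {"Model-swap rollback plan written": False,
                            "Incident response contact defined": True}),
]

for fn in risk_register:
    print(f"{fn.name:<8} coverage: {fn.coverage():.0%}")
```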

And those customers will be in a better spot. In business terms, waiting on ISO 42001 is where one of your biggest expenses comes in. What if somebody built a business on a technology that ISO 42001 later flags as unacceptable? How many days behind your competitors are you in the AI world? Too far behind. You will never catch up.

Eric Schmidt, the former CEO of Google, literally said we don't understand AI. There are plenty of experts out there, and nobody's going to stand up and tell you this: when AI gives you a response and you ask, from a technical perspective, how it came up with that response, you'll get a whole bunch of different answers. The truth is, it's unknown. People will start explaining that there's a neural network and it uses training data, but the reality is nobody genuinely understands how it arrives at that answer. AI is going to be smarter than the average human, and in some areas it already is. That shift is happening right now.

If an AI shop tells me they're ISO 42001 certified and I'm not, I know I have a technical advantage over them. I know I'm going to be cheaper, and I know my AI agents are going to run on smarter models. That's exactly what I'd say as a competitor: I'm going after you. If I'm selling compliance AI agents against another AI team, and that team rolls up with ISO 42001, I'm going to ask: what technology is your stack built on? They'll say, oh, OpenAI o1. And I'll say, yeah, that's nine months old.

With o1 Pro, Gemini 2.5, Claude 3.7 Sonnet, and China's DeepSeek R2 (their next SOTA model, which is about to come out), it's game over. Right there. We're already there with OpenAI, Google, and Anthropic, and with what China is about to release.

From a human perspective, we will never be able to compete with AI again, which is scary.

AI has even developed its own language that humans can barely understand. What happens when AI develops this further, on its own, into a full language humans cannot decipher?

How are the GRC people doing ISO 42001 next year going to secure that? What about AI writing its own logs? Are you going to prompt the AI to log things "securely" when no human can even understand the log?

Call to Action:

Given the rapid advancements in AI and the challenges posed by ISO 42001, businesses should adopt the NIST AI Risk Management Framework. It offers immediate, free access to guidance, reduces compliance costs, and adds flexibility, allowing companies to stay competitive in the evolving AI landscape.

Transitioning to the NIST framework will enable businesses to leverage the latest AI technologies without being hindered by outdated compliance requirements. I encourage all stakeholders to explore and integrate the NIST guidelines into their AI strategies to ensure effective risk management and sustained innovation.