National Artificial Intelligence Policy USA v China

At Stealth and our parent company, we're deeply interested in the development of AI and have a vision for ourselves and for the industry. As an AI startup specializing in Undetectable AI, we created Stealth precisely because of the issues we were seeing in the broader space – notably the prospect of increased regulation, as well as anti-AI sentiment more generally across various industries and companies.

We were recently interviewed by a national publication on AI policy and direction and wanted to provide an uncensored perspective along with the exact questions and answers we provided.

The overarching theme of the piece was the following...

China – hands-on, encouraging innovation, but with strict rules when it comes to AI. USA – seems (for now) to be completely hands-off in terms of guidance and regulation. Considering these approaches, and our perspective on the industry more specifically, we were asked (and answered) the following questions:

Which approach (U.S. or China) do you find best when it comes to the innovation and regulation of Artificial Intelligence?

I think some context is important here. The U.S. has been at the forefront of this technology – with OpenAI being (for now) the 800-lb gorilla in the room, alongside Google's Bard – and was the first to debut an LLM (large language model) based artificial intelligence to the public through OpenAI/ChatGPT. With that context, it's no wonder that a state we've been openly hostile to politically would opt to have strict regulations in place. Similarly, China's BRICS alignment strongly suggests that certain states are choosing a more nationalistic approach to governing and moving away from globalism.

There’s no doubt time will tell which is the more prudent direction, though it’s clear even from our perspective that AI can be – and is being – weaponized, not to mention the larger implications these tools and this direction will soon have for jobs (and even the populace’s general intelligence).

Ultimately, the real question seems to be: how will people generally use these tools, and will that be a net benefit or a net negative for them individually, socially, and culturally? Anyone who says they know what’s going to happen from the release of these powerful tools is lying. And one cannot regulate from a position of complete ignorance. So, on one hand, it is admirable that certain nations and entities are trying to steer the direction of these tools – however, it is (most likely) a useless endeavor, because we simply don’t know ALL the ways these tools will disrupt industries and the zeitgeist. Facebook’s old motto – “move fast and break things” – comes to mind, but we’re no longer talking about internal Facebook code; we’re talking about the very fabric of society and reality.

How does one balance regulation of AI development and not stifling it?

We believe, for now, it is an all-or-nothing proposition. One simply cannot regulate and NOT stifle. At the same time, we believe these tools are democratizing and allow literally anyone to have a greater impact (for their own person, company, or politics) than ever before.

Or is talk of regulation killing all innovation actually bluster from the industry and not a real concern?

Correct. It will never be fully killed at this point – no matter the company or territory/country. There’s clearly a public arms race at every level to harness, utilize, and implement these tools. If any one country stops, it’ll be left in the dust, orders of magnitude behind those that allow development to prosper. At the same time, one cannot say whether this “prospering” will be good or bad for such countries.

One item we fully expect to be rolled out in the near future is ubiquitous, government-enforced registration for the web – most likely starting with small cartels of tech companies wanting to harvest even more information and control from their users.

Ironically, platforms like X, while marketed as “free speech friendly,” are moving in this direction rapidly. This kind of top-down identification would have a slowing effect on AI proliferation in both positive and negative ways – and wouldn’t necessarily be “policy or regulation” based (depending on how it is implemented); simple enforcement by a cartel of tech companies could drastically steer the landscape. This would be a backdoor to actual regulation, as it would have large implications for AI’s use on various platforms.


Tech and AI companies such as OpenAI have advocated for international guidance bodies and voluntary commitments, rather than new laws. Do you think these firms would hold to their "voluntary commitment"?

Most likely no. It’s clear companies bend the rules unless under the threat of prosecution, fines, and punitive jail time. Having been in the crypto industry previously, it’s been obvious that clear regulations would put a stop to a lot of nonsense; however, the obtuse nature of the current rules makes no one and everyone a potential target, which stifles things more than either NO regulation or clear regulations would. It becomes completely subjective who to go after and under what pretext.

Ultimately, we’re wary of any international guidance bodies. If internal nation-state guidance bodies were formed, and then communicated in an open-source way so the public was kept informed at every level, that would be ideal. Similarly, if any actual directives/decisions were to be regulated, it should be done under the current (but expanded) structure of the democracy we live under – i.e., elected officials selecting an AI czar, with clear public records and accountability.


To what degree regulation is needed, do you think? This seems a bit of a grand regulatory experiment, doesn't it?

Absolutely, it’s a huge experiment. In our opinion, the only regulation that should exist in this dimension is that companies should not be allowed to selectively penalize users for content. We see – and created StealthGPT because of – a large (and growing) anti-AI sentiment: Google demonetizing sites that use AI, HR departments dismissing applicants who use AI-generated content, even Steam (the gaming platform) removing games that used AI-generated scripts for in-game voice and chat conversations. It’s nonsensical, and it’s clear it’s being enforced to build a moat for larger, more entrenched businesses that can selectively use these same tools internally while excluding individuals and smaller companies that might use them to become competitors.

The more interesting question (at least right now, and ESPECIALLY BECAUSE OF THE LAST 3 YEARS) is really: what are private companies NOT allowed to do? It’s fascinating this never gets brought up. Users are censored en masse on all major platforms for nothing worse than going against the current political narrative. Much grander censorship has gone on, as we’ve come to find out – censorship that led to the direct harm (and potentially the deaths) of millions of Americans. Why this never gets brought up is the bigger question.

As for AI -

It’s similar in nature to search engines (Google) vs. the Yellow Pages, but an even larger change in many respects. The Yellow Pages did what it could to compete, but it wasn’t a trillion-dollar-plus business that controls many aspects of the physical domain. Google (as one example), on the other hand, is one of the largest companies to ever exist and controls much of the web. It has major incentives to demonize AI until the point at which it thinks it can build it into its current or future profitable business model(s).

On the other hand, uncontrolled automation could bring about serious disruption to societies, couldn't it?

Yes. But we always adapt. It is our nature and our calling, and it is one of the most exciting parts of being in this industry. For example, we’re currently working on an AI-based tool that we believe will disrupt THE ENTIRE education space – at every level.

Written By

Undetectable AI, The Ultimate AI Bypasser & Humanizer

Humanize your AI-written essays, papers, and content with the only AI rephraser that beats Turnitin.