Anthropic, the US artificial intelligence company backed by Amazon, has announced that it will bar Chinese-controlled companies and organizations from using its AI services. The move is part of the company’s effort to tighten restrictions on what it calls “authoritarian regions.”
Anthropic, known for its Claude chatbot, has positioned itself as a leader in AI safety and responsible development. The company already bars companies based in China, Russia, North Korea, and Iran from accessing its commercial services, citing legal and security concerns. Similar restrictions apply to products from US rival OpenAI, such as ChatGPT, which are likewise unavailable in China. As a result, Chinese companies like Alibaba and Baidu have developed their own AI models.
In a recent statement, Anthropic said it would update its terms of service to prohibit access by companies or organizations whose ownership structures subject them to control from jurisdictions where its products are not permitted. The change applies to entities that are more than 50% owned, directly or indirectly, by companies in unsupported regions. According to Nicholas Cook, a lawyer with 15 years of experience in the AI industry, this is the first time a major US AI company has imposed a formal, public prohibition of this kind.
The commercial effect is likely to be modest, since US AI providers already face barriers in the Chinese market, but the decision raises the question of whether other companies will take a similar approach. An Anthropic executive estimated that it would reduce revenues by an amount in the “low hundreds of millions of dollars.”
Anthropic was founded in 2021 by former OpenAI executives and recently raised $13 billion in its latest funding round. The company serves more than 300,000 business customers, and the number of accounts generating over $100,000 a year has grown nearly sevenfold in the past year. Despite the restrictions, some users in China access US generative AI chatbots such as ChatGPT and Claude through VPN services. Competition is also intensifying, with Chinese start-up DeepSeek unveiling a chatbot that matches top American systems at a lower cost.
Anthropic’s restriction reinforces its stated commitment to AI safety and responsible development. As the AI sector continues to grow, companies like Anthropic are taking steps to ensure that their products are used in line with their values. The move also highlights the complexities of the global AI market, where companies must navigate differing regulatory environments and geopolitical tensions.