DeepSeek’s Rise: Opportunity or Threat for Enterprises?

How DeepSeek became the ‘Next Big Thing’

As we get to grips with a still-nascent 2025, DeepSeek has emerged as the biggest buzzword in tech circles. To fully grasp the disruptive effect it is having, let's rewind to late 2022, when OpenAI unveiled another such disruptor: ChatGPT. ChatGPT's large language model (LLM) transformed the way many people worked and even lived, and it wasn't long before Chinese firms set out to build their own AI models. Regional giant Baidu built the first Chinese ChatGPT equivalent, but the model's inferiority to more advanced Western AI models only highlighted the gap in AI capabilities between US and Chinese firms.

That narrative was flipped on its head when DeepSeek, a Chinese AI company founded by entrepreneur Liang Wenfeng and headquartered in Hangzhou, released its V3 model late last year and its R1 model at the end of January 2025. What set these models apart from the company's earlier releases was the revelation that they were reportedly developed in just two months using Nvidia H800 chips on a budget of $5.6 million, a fraction of the estimated $100 million spent to develop OpenAI's GPT-4. This revelation sent the entire tech world into a tailspin.

As DeepSeek's AI assistant overtook ChatGPT to become the top-rated free app on Apple's App Store, its emergence cast doubt on US dominance of the AI space and on the sky-high valuations of Western tech giants. As a result, Nvidia saw more than $589 billion erased from its market cap, a drop characterised by Bloomberg as the biggest in US stock market history. Over the same period, the NASDAQ declined by 3%.

Why Your Enterprise Should Be Excited About DeepSeek

For enterprises in India, the emergence of DeepSeek should be treated as both exciting and concerning. Exciting, because game-changing models like DeepSeek pave the way for AI democratisation that could change the trajectory of your business.

  • The open-source nature of its models makes advanced AI accessible to a broad range of enterprises and challenges the proprietary AI development model of certain Western tech giants. Recently, Arli Charles Mujkic, CEO of Swedish tech company Ooda AI, claimed that DeepSeek's V3 LLM performs up to 20% better than Meta's Llama 3.3, previously considered the best open-source model on the market.
  • It has already proven to be useful for a wide range of applications. It can sift through complex datasets and offer detailed insights previously impossible to achieve through traditional search engines or databases, making it valuable for endeavors like business analysis and scientific research. It has also shown strong performance in technical tasks like code generation.
  • DeepSeek's latest models were created with the US chip export restrictions on China in mind. They are therefore optimised to run on older hardware without compromising performance, which matters for an emerging market like India. Within those constraints, DeepSeek also incorporates a Mixture-of-Experts (MoE) architecture that routes each input through a small set of specialised expert networks to improve efficiency and performance (a simplified sketch of this idea follows the list).
  • DeepSeek's aggressive optimisation enables it to run at costs reportedly 20-50x lower than OpenAI's models. That translates into drastically reduced costs for enterprises: DeepSeek's premium subscription stands at $0.50/month, while ChatGPT Plus costs $20/month.
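To make the Mixture-of-Experts idea above concrete, here is a minimal, illustrative PyTorch sketch of an MoE layer with top-k routing. It is a simplification of the general technique, not DeepSeek's actual architecture; the dimensions, expert count, and gating rule are assumptions chosen for readability.

```python
# Illustrative Mixture-of-Experts (MoE) layer with top-k routing.
# NOT DeepSeek's architecture -- sizes and gating are example assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        # The router scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalise over chosen experts
        out = torch.zeros_like(x)
        # Only the top-k experts run for each token; this sparsity is what
        # keeps compute per token low despite a large total parameter count.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = MoELayer()
    tokens = torch.randn(10, 64)                 # 10 tokens, 64-dim embeddings
    print(layer(tokens).shape)                   # torch.Size([10, 64])
```

Because only a couple of experts run per token, a model can carry a very large total parameter count while keeping the compute per request modest, which is the efficiency property the bullet above refers to.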

All these benefits make it highly viable for enterprises to consider adopting DeepSeek. However, that decision should be taken with the serious concerns about the application's origins and data security practices firmly in mind.


The Uncertainty Surrounding DeepSeek

As tools like ChatGPT became prevalent in our work lives, enterprises already had to weigh the data security risk of employees using these kinds of AI models. Companies like Samsung came under scrutiny when their employees entered sensitive proprietary data and code into the application, unaware that this data could be retained and used to train future versions of these models.

What makes this core problem even more precarious with DeepSeek is its Chinese ownership. Its privacy policy unequivocally states: 'We store the information we collect in secure servers located in the People's Republic of China.' Dig deeper, and the data privacy concerns surrounding the application turn out to be very real:

  • Any time you use DeepSeek, regardless of what you input, it automatically collects information such as your device model, operating system, IP address, cookies, crash reports, and keystroke patterns and rhythms.
  • Depending on the information you provide, DeepSeek also stores text or audio inputs, uploaded files, email IDs, phone numbers, dates of birth, usernames, feedback, prompts, and chat history. Any sensitive information you provide cannot be deleted from its servers.
  • Additionally, if you create a DeepSeek account by signing in with an existing Google or Apple account, DeepSeek can automatically collect information from that service, such as access tokens, mobile identifiers, and hashed email addresses and phone numbers, without any further input from you.
  • Finally, DeepSeek’s corporate group can access all this data anytime and share it with Chinese law enforcement agencies & public authorities if they require it for their investigations.

These policies have cast immense doubt on DeepSeek's usage around the world. It was recently removed from app stores in South Korea over data privacy concerns, and Australia has banned the use of DeepSeek on all government devices for the same reasons. Closer to home, the Indian Ministry of Finance issued a directive advising public employees to refrain from using AI tools like DeepSeek for official work.

Enterprises have to weigh all these data privacy concerns while also adhering to local regulatory requirements. With the Digital Personal Data Protection Act (DPDPA) soon to apply to all organizations handling sensitive Indian data, DeepSeek's practice of storing data in China goes against some of the act's core tenets, such as data minimisation and limits on excessive collection. Recently, Union IT Minister Ashwini Vaishnaw indicated that DeepSeek will soon be hosted on Indian servers to accelerate compliance with the DPDPA, but it is unclear when exactly that will happen.

Therefore, considering the present circumstances, organizations have to craft AI strategies that balance the productivity gains of low-cost, open-source tools like DeepSeek against the need to keep organizational data safe and compliant with regulations.


Secure Your Organization With Good AI Governance

DeepSeek has become omnipresent since its emergence, and here's a reality check: there is a good chance employees across your teams are already using DeepSeek on their own. It is therefore important to set up a stringent AI governance policy at the outset to mitigate AI-related data privacy risks. Here are some elements you can include to create a holistic, watertight policy.

Controlled Testing

We have already mentioned that DeepSeek has shown strong performance in technical tasks like code generation. That alone could transform your business by drastically reducing app development times. However, since DeepSeek is relatively new and could be subject to stringent regulations very soon, decision-makers should build sustained confidence that the model is secure and reliable before integrating it at scale.

Therefore, for initial integration, it is essential to incorporate controlled testing (a minimal sketch follows this list), whether you:

  • Test it on smaller projects involving little to no sensitive data
  • Test it on more business-imperative projects solely in a controlled environment, which we at iValue provide through our secure sandboxed solutions.
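As an illustration of the first option, here is a minimal sketch of how a pilot team might query a self-hosted open-weight model through an OpenAI-compatible endpoint, of the kind local serving tools such as Ollama or vLLM typically expose, so that test prompts never leave the controlled environment. The base URL, model tag, and prompt below are illustrative assumptions, not a prescribed setup.

```python
# Minimal pilot sketch: querying a locally hosted open-weight model so that
# prompts stay inside the test environment. The base_url and model tag are
# assumptions -- adjust them to whatever your local serving stack exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",    # hypothetical local endpoint
    api_key="not-needed-for-local",          # local servers usually ignore this
)

response = client.chat.completions.create(
    model="deepseek-r1:7b",                  # assumed local model tag
    messages=[
        {"role": "user",
         "content": "Summarise this non-sensitive test dataset description."},
    ],
)
print(response.choices[0].message.content)
```

Keeping the endpoint local means the pilot can be evaluated on usefulness alone, without the data residency questions discussed above.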

Multi-Vendor Approach

With so many governments already banning DeepSeek or placing checks on it, there is deep uncertainty surrounding its usage. That uncertainty may grow or recede, but either way it would be unwise to rely on DeepSeek alone for all your AI-related tasks. A multi-vendor approach reduces overreliance on a single provider while letting you use the best tool at your disposal for each task. For example, DeepSeek is regarded as more proficient for research purposes, while ChatGPT is better suited to writing and creative brainstorming.
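One lightweight way to operationalise a multi-vendor approach is a routing policy that maps each task category to a preferred provider and model. The sketch below is purely illustrative; the model names and the category-to-model mapping are assumptions you would replace with your own benchmarks and procurement decisions.

```python
# Illustrative multi-vendor routing policy. The providers, model names, and
# task categories below are example assumptions, not recommendations.
from dataclasses import dataclass


@dataclass
class ModelChoice:
    provider: str
    model: str


# Hypothetical routing table -- tune it to your own benchmarks and policies.
ROUTING_POLICY = {
    "research": ModelChoice("deepseek", "deepseek-reasoner"),
    "creative": ModelChoice("openai", "gpt-4o"),
    "code":     ModelChoice("deepseek", "deepseek-chat"),
    "default":  ModelChoice("openai", "gpt-4o-mini"),
}


def pick_model(task_type: str) -> ModelChoice:
    """Return the provider/model configured for a task, with a safe default."""
    return ROUTING_POLICY.get(task_type, ROUTING_POLICY["default"])


if __name__ == "__main__":
    print(pick_model("research"))   # routes research tasks to DeepSeek
    print(pick_model("unknown"))    # unknown categories fall back to the default
```

Even a trivial router like this makes switching providers for a task category a one-line configuration change rather than a code rewrite.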

Comprehensive Employee Training 

As we mentioned earlier, there is a real risk of employees divulging sensitive information if there are no checks in place. Therefore, it is important to train employees at every level so they know how to handle AI responsibly and make the most of its benefits. These training programs can include:

  • Guidelines around what data these models can access & how their outputs are used
  • Restrictions on what kind of data should never be shared with these models
  • Best practices such as using a VPN and never signing in with organizational Google, Microsoft, or Apple accounts
  • A complete ban on DeepSeek for roles that handle ultra-sensitive information

Additionally, it helps to have monitoring in place to ensure that confidential information is not accidentally leaked to these tools.
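As a hedged illustration of what such monitoring could look like, the sketch below screens a prompt for patterns that resemble sensitive data before it is sent to an external model. The patterns are deliberately simplistic assumptions; a production deployment would rely on a proper data loss prevention tool with patterns tuned to your organisation's data.

```python
# Illustrative pre-submission screen that flags prompts which appear to
# contain sensitive data. The patterns are simplistic examples only.
import re

SENSITIVE_PATTERNS = {
    "email":        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


if __name__ == "__main__":
    findings = screen_prompt(
        "Contact jane.doe@example.com about key sk-abcdef1234567890abcd"
    )
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
    else:
        print("Prompt passed screening")
```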

Constantly Evolving Policies

Finally, considering the present scenario, it is important to stay informed and proactive rather than waiting for problems to occur. Your policies need to be flexible because AI evolves quickly; as the landscape changes, your frameworks for evaluating these tools will have to change with it. Throughout the process, you must strike a delicate balance between embracing new opportunities and keeping risks and ethical considerations in check.

With strong governance and clarity around the use of AI models like DeepSeek, you can capture their benefits without exposing your enterprise to unnecessary risk. Click here to begin a conversation on how to elevate your enterprise AI strategy through measures like controlled testing and robust governance.
