The Massive Impact Of AI On Indian Work Life
Earlier this year, the AI Impact Summit positioned India as a future AI leader through a series of incentives and initiatives. Yet the fact remains: AI already plays a prevalent yet precarious role in the work lives of Indian professionals across the country. The 2025 BCG AI at Work survey provides a snapshot of this complex scenario:
- Indian professionals lead the world in genAI adoption: 92% of Indian employees use the technology regularly for work, the highest share among surveyed countries.
- High AI usage is linked to greater concern over job security: 48% of Indian respondents believe their job will certainly or probably disappear in the next decade because of AI, against a global average of 41%.
- At the same time, AI is helping employees save time: 47% report saving more than an hour a day by using AI for work.
It is adoption borne out of equal parts function and fear. Unfortunately, enterprise AI governance has not kept pace with this adoption: only 36% of employees feel adequately trained, while 37% say their company doesn’t provide the right AI tools. In the absence of sanctioned training and tooling, employees often teach themselves through public YouTube tutorials and forums, which almost invariably recommend free, unapproved tools. As a result, 54% of employees end up using these unauthorised alternatives, a figure that rises to 62% among Gen Z and millennial workers.
What turns this scenario into India’s fastest-growing security threat is the following trend: according to IDC, the share of sensitive corporate data being fed into AI tools has skyrocketed from 10% to over 25% in just one year. Incidents like the Samsung ChatGPT leak, in which employees pasted proprietary source code into a public genAI model with no way to retract it, show the consequences: exposed trade secrets, significant reputational damage and potential losses running into the millions. Given this landscape, Indian CISOs must act immediately to stop shadow AI within their organizations.
Shadow AI: The More Evolved Successor To Shadow IT
In many ways, shadow AI is a subset of shadow IT – the use of tools and applications without the knowledge or approval of IT teams. The emergence of shadow AI, however, has greatly expanded the range of cases IT teams must deal with:
| Shadow IT | Shadow AI |
| --- | --- |
| Signing up for unapproved Trello accounts / projects | Unauthorized accounts in AI apps like ChatGPT and Perplexity |
| Hoarding customer data in personal Dropboxes | Agentic AI tools that process sensitive data and are prone to misconfiguration |
| Creating unauthorized OneDrive instances | Excel automation scripts integrated with GPT APIs |
Yet this is where the comparisons end. Shadow IT merely stores data; shadow AI consumes it. As the Samsung incident shows, public AI models often retain their inputs, creating an irreversible governance problem around data privacy and where your intellectual property ultimately resides.
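To make that last table row concrete, here is a minimal sketch of the kind of unsanctioned "Excel plus GPT" automation an employee can wire up in minutes. Everything specific is an assumption for illustration – the spreadsheet name, column layout and model are hypothetical, and the endpoint is OpenAI’s public chat completions API – but a script this short is all it takes to ship customer records to an external server, row by row:

```python
# Illustrative only: an unsanctioned "Excel + GPT" automation. The file name,
# column layout and model are hypothetical; the endpoint is OpenAI's public
# chat completions API.
import os

import requests
from openpyxl import load_workbook

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # a personal, unapproved key

wb = load_workbook("q3_customer_accounts.xlsx")  # hypothetical spreadsheet
for row in wb.active.iter_rows(min_row=2, values_only=True):
    customer, revenue, notes = row[:3]
    # Each iteration sends a full customer record to a third-party server,
    # bypassing corporate logging and DLP entirely.
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{
                "role": "user",
                "content": f"Summarise the account risk for {customer} "
                           f"(revenue: {revenue}): {notes}",
            }],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])
```

Nothing in this flow touches corporate logging, DLP or data processing agreements – which is precisely what makes shadow AI so hard to detect after the fact.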
Moreover, the psychological motives behind shadow AI make it that much harder to track – and this is the fallout CISOs often fail to see coming. Both shadow IT and shadow AI are rooted in the pursuit of productivity. But shadow AI usage, as the aforementioned BCG report shows, is also about professional self-preservation: many professionals want AI to help them solve problems without revealing that use to their supervisors, fearing it will make their role look redundant. This complex scenario makes it critical for enterprises to adopt a nuanced, multifaceted strategy against shadow AI.
The Added Risks That Emerge From Shadow AI Usage
The goal of this strategy should be to capture AI’s upside for your organization while sidestepping disasters, which can take many forms:
- Data Leakage: Most free, unauthorized AI tools have terms of service that allow the provider to use your inputs for training. Once a model has ingested your sensitive information, it is nearly impossible to delete it or make the model unlearn it.
- Issues With IP & Licensing: AI-generated output can have grave consequences for your IP. If a developer generates code derived from copyleft-licensed open-source software and it makes its way into your proprietary product, you could be legally forced to open-source your entire application. Similarly, if you use public GPTs to create company logos and materials, you may not own the copyright to them, leaving your IP defenseless if a competitor copies it.
- Less Reliability: AI tools are notorious for making up facts and figures. If employees put such output into important company assets without verifying it, the reputational damage falls on your enterprise. A related, more insidious risk is package hallucination: ask an AI for code libraries and it may invent a plausible-sounding package that doesn’t exist. Attackers track these hallucinated names and publish malicious packages under them – a simple pre-install check, sketched after this list, can catch many of these.
- Compliance & Regulatory Exposure: Public AI tools process data on servers around the world. That is a compliance nightmare for organizations under the purview of the DPDP Rules, which mandate data localization, and for enterprises in highly regulated sectors like healthcare and BFSI. The resulting penalties can be debilitating – up to ₹250 crore under the DPDP Act.
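The package-hallucination risk lends itself to a simple technical control. The sketch below is written under stated assumptions – the 90-day threshold and the package names are illustrative, while the PyPI JSON API it queries is real – and vets an AI-suggested Python package before anyone runs pip install, blocking names that don’t exist and flagging suspiciously young ones:

```python
# A minimal pre-install vetting step against package hallucination.
# Assumptions: the 90-day threshold and example package names are
# illustrative; the PyPI JSON API (pypi.org/pypi/<name>/json) is real.
from datetime import datetime, timezone

import requests

SUSPICIOUS_AGE_DAYS = 90  # tune to your risk appetite

def vet_package(name: str) -> str:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return f"BLOCK {name}: not on PyPI - likely a hallucinated package"
    resp.raise_for_status()
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    if not uploads:
        return f"REVIEW {name}: listed on PyPI but has no released files"
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < SUSPICIOUS_AGE_DAYS:
        return f"REVIEW {name}: first upload was only {age_days} days ago"
    return f"OK {name}: on PyPI for {age_days} days"

# Example: one real package and one (hopefully) hallucinated name.
for pkg in ["requests", "definitely-not-a-real-package-xyz"]:
    print(vet_package(pkg))
```

Note that mere existence on PyPI is no guarantee of safety – attackers register hallucinated names precisely because they expect them to be installed – which is why the sketch flags young packages for review rather than approving everything that resolves.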
Considering all this added risk, many CISOs may be tempted to ban AI tools outright. Yet AI has proven beneficial when governed properly, and there is no guarantee that employees won’t find ways around a ban. The solution therefore lies in an effective mix of people, processes and technology.
The Three-Pronged Approach To Combating Shadow AI
Your enterprise probably already has a framework for dealing with shadow IT, comprising the following:
- Having a preset list of allowed tools & applications (a discovery sketch for enforcing such a list follows this list)
- Setting security controls to mitigate data leakage
- Educating employees about safe usage of these tools
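The allowlist principle extends directly to AI. As a minimal illustration of how the first two items might be enforced for genAI traffic, the sketch below scans web-proxy logs for calls to known AI endpoints that are not on the approved list. The log format, file path and both domain lists are assumptions made for this sketch; in practice this telemetry would come from your CASB or secure web gateway:

```python
# Illustrative shadow AI discovery: flag proxy-log traffic to known genAI
# endpoints that are not on the approved list. The log format, file path,
# and both domain lists are assumptions made for this sketch.
import re
from collections import Counter

APPROVED_AI_DOMAINS = {"copilot.example-tenant.com"}  # hypothetical sanctioned tool
KNOWN_AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "www.perplexity.ai",
}

unapproved_hits = Counter()
with open("proxy_access.log") as log:  # hypothetical log file
    for line in log:
        # Assumed line format: "<timestamp> <user> <method> <url> <status>"
        match = re.search(r"https?://([^/\s]+)", line)
        if match and match.group(1) in KNOWN_AI_DOMAINS - APPROVED_AI_DOMAINS:
            user = line.split()[1]
            unapproved_hits[(user, match.group(1))] += 1

for (user, domain), count in unapproved_hits.most_common(10):
    print(f"{user} -> {domain}: {count} requests to an unapproved AI tool")
```

Even a crude report like this gives security teams a starting point: who is using which unapproved tool, and how often, before deciding whether to block, substitute or sanction it.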
Creating a framework to combat shadow AI will involve many of the same principles, but with dynamic, evolved governance as its backbone:
The Three Organizational Elements Required To Successfully Combat Shadow AI

| | AI Policy For Acceptable Use | Real-Time AI Security | Effective AI Training |
| --- | --- | --- | --- |
| The Difference In Approach | Your enterprise must create a flexible, fast-moving framework for responsible AI usage, as bureaucracy only breeds more shadow AI. | Controls must be real-time and preventive – blocking data leakage before or while it happens, without hampering productivity (a sketch follows the table). | Rather than periodic programs, undertake ongoing, contextual education that evolves with the rapidly changing AI landscape. |
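To make the "Real-Time AI Security" column concrete: one preventive pattern is to redact sensitive identifiers from a prompt before it ever leaves the network, rather than alerting after the data is gone. The sketch below does this for PAN, Aadhaar and email patterns; the rules are deliberately simplistic assumptions, and a production control would live in an API gateway or secure web proxy with far broader, context-aware coverage:

```python
# A sketch of a real-time, preventive control: redact obvious Indian
# identifiers (PAN, Aadhaar) and email addresses from a prompt *before*
# it is forwarded to any AI endpoint. The patterns are illustrative only;
# production DLP needs much broader coverage.
import re

REDACTION_RULES = [
    (re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"), "[PAN-REDACTED]"),
    (re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), "[AADHAAR-REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL-REDACTED]"),
]

def redact_prompt(prompt: str) -> tuple[str, int]:
    """Return the sanitised prompt and the number of redactions made."""
    total = 0
    for pattern, replacement in REDACTION_RULES:
        prompt, n = pattern.subn(replacement, prompt)
        total += n
    return prompt, total

clean, hits = redact_prompt(
    "Draft a reply to priya@example.com, PAN ABCDE1234F, Aadhaar 1234 5678 9012"
)
print(f"{hits} items redacted -> {clean}")
```

Because the substitution happens inline, the employee still gets their answer and stays productive – the control prevents the leak without blocking the workflow, which is exactly the balance the table calls for.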
How Does Your Enterprise Go About Fighting Shadow AI Today?
The aforementioned BCG report points to a critical factor in turning the tide against shadow AI: leadership support. Only 25% of frontline workers report receiving it, but where it is present, it drives significant improvements in job satisfaction and career optimism.
That’s why, for a CISO, combating shadow AI is about more than eradicating its many risks. Done correctly, it can truly empower your employees and reinforce an innovation-first work culture – one where human expertise and AI prowess come together to transform your organization.
This is where partnering with a managed services provider with proven success in implementing AI governance across sectors – like iValue – can prove highly beneficial for your enterprise. Click here to speak to one of our experts about eradicating the menace of shadow AI from your organization while safely leveraging the many benefits AI brings.