Your employees use ChatGPT to write emails, DeepL to translate documents, or their personal Dropbox to share files. These practices seem harmless, but they represent one of the most underestimated cybersecurity risks for SMEs: Shadow IT and its new variant, Shadow AI.
Key definition: Shadow IT refers to all software, applications, and cloud services used by employees without the authorization or supervision of the IT department. Shadow AI is its extension: the unregulated use of generative artificial intelligence tools (ChatGPT, Gemini, DeepL…) in a professional context.
The reality is simple: your company data circulates on tools that you do not control, whose existence you are sometimes unaware of, and which escape any security policy. For a Swiss SME subject to the nLPD and concerned about its reputation, this situation represents a major vulnerability.
In this article, you will discover what Shadow IT and Shadow AI really are, what concrete risks they pose to your business, and above all how to regain control without hindering the productivity of your teams.
Shadow IT has grown considerably with the democratization of the cloud and the explosion of SaaS applications. WhatsApp for communicating with customers, a personal Google Drive for storing documents, management applications found online: these are all common examples in Swiss SMEs.
Your employees are not trying to circumvent the rules out of malice. They simply want to be more efficient. If the official tool is slow or unsuitable, they will naturally look for a more practical solution elsewhere.
In 2025, this phenomenon took on a new dimension with the massive arrival of generative artificial intelligence. We now speak of Shadow AI. ChatGPT, Claude, Gemini, DeepL and dozens of other AI tools are used daily to write reports, summarize meetings, translate documents or analyze data. These assistants boost productivity, but their unregulated use creates a data leakage risk on an unprecedented scale.
Most free AI services operate on a simple model: you use the tool for free, but your data is used to train and improve artificial intelligence models.
When your CFO copy-pastes your projected balance sheet into ChatGPT to get a summary, this confidential data leaves your company's secure environment. It is transmitted to servers often located in the United States and may be retained to train future versions of the model. There is a residual risk that fragments of this sensitive information resurface in the AI's answers to another user, including a competitor.
Beyond the loss of intellectual property, this practice poses concrete legal problems for Swiss SMEs, particularly around transfers of personal data abroad.
Even if the Swiss-U.S. Data Privacy Framework (which came into force in September 2024) better regulates these transfers for certified American companies, the Cloud Act remains applicable and represents a real risk for sensitive data.
| Risk | Source | Potential impact for your SME |
|---|---|---|
| Leak of confidential data | Public generative AI (ChatGPT, Gemini…) | Loss of intellectual property, violation of customer contracts |
| nLPD non-compliance | Transfer of personal data outside Switzerland | Sanctions from the Federal Data Protection and Information Commissioner (FDPIC), fines, damage to reputation |
| Access by foreign authorities | American Cloud Act | Exposure of strategic data to unauthorized third parties |
| Loss of file control | Personal Dropbox/Google Drive | Data inaccessible when an employee leaves |
Consider a Swiss SME active in HR consulting: a consultant uses ChatGPT to summarize evaluation interviews, copying notes that contain names, comments on performance and salary data. This information can end up in the hands of an unauthorized third party, with all the attendant risks: damage to reputation, lawsuits, loss of the client, sanctions from the Federal Data Protection and Information Commissioner. Applying the principle of least privilege could have considerably limited the exposure.
Faced with these risks, the urge to ban these tools outright is understandable, but rarely effective. A total ban pushes practices underground: your employees will keep using the tools while hiding it, preventing you from measuring and managing the risks.
If your teams use ChatGPT, it is because they derive a real benefit from it: saving time, producing better-quality content, focusing on higher-value tasks. The recommended approach is support and governance.
If your teams use public AI tools and you cannot immediately provide them with secure alternatives, a few practices considerably reduce the risks: never submit personal or strictly confidential data, anonymize any text before pasting it into the tool, and, where the service allows it, disable the use of your conversations for model training.
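One such practice, anonymizing text before submitting it to a public AI tool, can be sketched in a few lines. The snippet below is a minimal illustration with assumed regex patterns for emails, Swiss phone numbers and IBANs; it is not a substitute for a proper data-loss-prevention solution.

```python
import re

# Illustrative sketch only (assumed patterns, not an official tool):
# redact obvious personal identifiers before pasting text into a
# public AI assistant. These regexes only catch the most common formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"(?:\+41|0)\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"),
    "iban": re.compile(r"CH\d{2}(?:\s?\w{4,5}){4,5}"),
}

def anonymize(text: str) -> str:
    """Replace each match with a neutral placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Contact: jean.dupont@example.ch, tel. +41 22 123 45 67"
print(anonymize(note))  # Contact: [EMAIL], tel. [PHONE]
```

Even a simple filter like this catches the most frequent slips; the key design choice is to run it systematically, before the text ever reaches the external service.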
Your role is not only to raise awareness, but also to provide the tools to work effectively while respecting security rules.
The first step is to offer Swiss or European alternatives to public AI tools, such as Euria (Infomaniak), which hosts data exclusively in Switzerland and never uses it to train its models.
These solutions have a cost, but it must be balanced against the risks of a data leak. Losing the trust of a major client can have much more serious consequences than investing in secure tools.
Shadow IT also concerns WeTransfer, personal Dropbox and personal Google Drive. These solutions escape your control: no visibility on access, no centralized backup, often no multi-factor authentication. If an employee leaves the company, the files leave with them.
Clear rule: company data must reside exclusively on company tools.
| Need | Public solution (to avoid) | Recommended sovereign alternative | Key advantage |
|---|---|---|---|
| Storage and collaboration | Personal Dropbox / Google Drive | kDrive (Infomaniak) | Swiss hosting, up to 106 TB, nLPD compliant |
| Ultra-confidential data | — | Proton Drive | Zero-access encryption, Swiss jurisdiction, outside Cloud Act |
| Large file transfer | WeTransfer | SwissTransfer | Free, up to 50 GB, AES-256 encryption, Swiss hosting |
| Professional backup | — | Swiss Backup (Infomaniak) | Anti-ransomware, replication on 2 Swiss datacenters, AES-256 |
| Generative AI | Free ChatGPT / Gemini | Euria (Infomaniak) | Data never used for training, ephemeral mode available |
Formalize your usage policy in a document accessible to everyone. This charter must explain which tools are authorized, which are prohibited, and why. It specifies best practices and the consequences of non-compliance.
But a written policy is not enough. Organize training sessions explaining the risks with real examples adapted to your sector. If your employees understand why these rules exist, they will be more inclined to respect them. A security audit adapted to your SME can help you identify risk areas and prioritize your actions.
Establish a continuous dialogue. Encourage your teams to report the tools they would like to use. Evaluate these requests and find secure alternatives. This collaborative approach transforms security from a constraint into a shared approach.
Shadow IT refers to all software, applications and cloud services used by employees in a professional context without the IT department being aware of it or having given its authorization. Common examples: WhatsApp for communicating with customers, personal Dropbox for storing company files, or ChatGPT for writing professional documents.
Shadow AI is an extension of Shadow IT specific to generative artificial intelligence tools. It refers to the unregulated use of public AI assistants (ChatGPT, Gemini, Claude, DeepL…) in a professional context, without a security policy or contractual guarantees on the processing of submitted data.
Because the data entered into free AI tools can be used to train the models, is stored on servers often located in the United States (exposing your company to the Cloud Act), and its transfer may constitute a violation of the nLPD if it contains personal data. A leak of customer data can lead to legal sanctions, loss of contracts and lasting damage to your reputation.
Euria from Infomaniak is the most complete sovereign alternative for Swiss SMEs: exclusive hosting in Switzerland, data never used to train the models, ephemeral mode for ultra-sensitive data, and functionalities equivalent to ChatGPT (writing, translation, document analysis). The basic version is free.
A total ban is rarely effective: it pushes practices underground without reducing the risks. The recommended approach is governance: raising awareness among teams about the risks, providing secure alternatives adapted to their needs, and formalizing a clear usage policy with anonymization rules for cases where no immediate alternative exists.
Shadow IT and Shadow AI will not disappear. They reflect a reality: your employees are looking to be efficient. Your role is not to curb this dynamic, but to channel it in a secure manner.
By understanding the risks, raising awareness among your teams, providing adapted professional tools and establishing clear governance, you transform an invisible threat into a competitive advantage.
Two concrete actions right now: take an inventory of the tools your teams actually use, then formalize a usage charter and provide a secure alternative for each need identified.
The cybersecurity of your Swiss SME also depends on this. Bexxo can support you in this governance and security approach.