Google Gemini Security Risks

March 5, 2024
Cyrus Tehrani
9 min read

As artificial intelligence continues to dominate discussions about its role in the workplace, Google’s introduction of Gemini (formerly Bard) will surely be another hot talking point. Much like Microsoft’s Copilot release, Gemini is yet another step forward in revolutionizing how we interact with digital content.

However, as businesses embrace these advancements, we must still do our due diligence to scrutinize the security implications of all these new iterations of generative AI.  

Here, we’ll discuss what Google Gemini is, its benefit to an organization’s overall productivity, and what security and privacy risks companies should be aware of. Then, we’ll explore how Concentric AI can help businesses use Google Gemini and worry much less about risks to their sensitive data.  

What is Google Gemini and how does it work?  

If you’re familiar with Microsoft Copilot and OpenAI’s ChatGPT, Gemini appears to deliver the best of both worlds. You can use Gemini much as you would ChatGPT, while also getting a personal AI assistant for every Google Workspace app you use – from Gmail to Docs to Sheets. Because it has the power of Google search behind it, it’s almost like ChatGPT and Copilot on steroids (though only time will tell how effective or secure it will be).

Like Copilot with Office 365, Gemini can dig deep into your organization’s Google Workspace data. Gemini leverages Google’s advanced language models to analyze your content and proactively provide contextual assistance. Gemini can help you summarize complex documents, get quick answers from Sheets data, suggest email responses, or even improve your writing in Docs.  

Here’s an example: say you’re drafting an email in Gmail about an upcoming project. Gemini can summarize key points from related Drive documents, offer potential phrases to make your tone more professional, or even help catch possible errors. 

Ultimately, Gemini aims to enhance productivity and creativity by leveraging the power of generative AI, but not without raising important questions about data privacy and security. 

Enhanced productivity with a side of privacy concerns 

Google has always maintained that the privacy and security of its users’ data is paramount to the company. According to their communications, they look to extend that commitment into the era of generative AI with Gemini. The technology is designed to ensure that Workspace content remains confidential, with strict controls over data sharing and usage. Google’s privacy commitments emphasize user control over their data, promising that Workspace data will not be used to train its AI models without permission. 

Here are the positives for Google’s privacy and security safeguards: 

Data ownership: Users retain control over their Workspace content, with the ability to delete or export their data at any time. 

Workspace data integrity: Google assures that Workspace data is not utilized for improving AI models or other systems outside of Workspace without explicit user permission. 

Anonymization and aggregation: Interactions with Workspace features are anonymized and aggregated, contributing to feature improvements while prioritizing user privacy. 

Ad targeting: Google Workspace services do not use content for advertising purposes, reinforcing the separation between user data and ad targeting. 

This all sounds great, but despite these assurances, it’s still crucial to discuss the potential privacy risks.

The big question here is to what extent the user data is protected, as experience tells us that other generative AI tools already blur the lines of data privacy and security.  

While Google asserts that its AI advancements do not compromise privacy standards, the underlying concern remains: how does one balance the benefits of generative AI with the imperative of protecting user data? 

Let’s explore the key privacy concerns first:  

Data usage for AI training: Despite Google’s commitments, skeptics worry about the potential for user data to inadvertently contribute to AI model training, raising questions about consent and data anonymization. 

Data sharing and permissions: The mechanisms by which data is shared within and outside organizations, especially in the context of Gemini’s AI-generated content, require transparency and strict oversight. 

Impact on sensitive information: For businesses and educational institutions, ensuring Gemini’s interactions stay confined within the organization requires robust security measures to prevent unauthorized access or data breaches. 

That’s not all. What may be most troubling is this, right from Google’s own Gemini privacy statement:

“Don’t enter anything you wouldn’t want a human reviewer to see or Google to use. For example, don’t enter info you consider confidential or data you don’t want to be used to improve Google products, services, and machine-learning technologies.” 

Yes, human reviewers at Google may see your conversations, and those conversations may be used to improve its AI.

Can Gemini give employees access to too much data?  

All these macro-level privacy issues aside, the privacy and security implications of sensitive data being shared with the wrong party are often left out of the discussion. That wrong party could be an employee, a third-party vendor, or a malicious actor.

We’ve written about the importance of data classification quite a lot, and for good reason. If your data isn’t appropriately classified and you can’t identify sensitive data, how will Google know which document shouldn’t be shared?  

Here’s a specific question we posed to Google Gemini, and the answer it gave.

Our Question: If an employee from sales wants to access an HR document and we didn’t set the permissions properly, could the salesperson potentially see HR data?

Gemini’s Answer: Yes, absolutely. Here’s why that’s a risk: 

Accidental Exposure: If you don’t have clearly defined permissions in Google Workspace, a sales employee could stumble upon HR documents through search or browsing if they were granted access to a Drive folder containing these files. 

Improper Sharing Settings: If HR documents are shared too broadly (e.g., with “Anyone in the company can view”), users outside of HR would have access, despite their role. 

Group Overlap: If sales and HR employees belong to the same groups that have access to sensitive HR information, the sales staff would be privy to it as well. 

How to Prevent This 

Restricted Folders: Create dedicated folders for HR documents and limit access to only authorized HR personnel.

Specific Sharing: When sharing HR documents, choose the “Specific people” option and add only HR staff or those who have a legitimate need to see the information. Never use “Anyone with the link” or overly broad group settings.

Review Group Memberships: Ensure groups are well-defined and don’t accidentally grant access to sensitive areas they shouldn’t have. 

Additional Protections 

Data Loss Prevention (DLP): If your Google Workspace plan includes it, DLP tools can automatically scan for sensitive data like personnel information and alert you or potentially block sharing. 

Training: Educating all employees on proper file handling and the importance of respecting data privacy is vital. 

Remember: Access control should be designed thoughtfully to match your organization’s confidentiality needs. It’s not just about technology, but also clear policies and ongoing awareness of data protection.

Google is essentially telling us that a data loss prevention tool will provide better security.
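To make Gemini’s “Specific Sharing” advice concrete, here’s a minimal sketch in Python using the Google Drive API v3 to replace a broad link share with access for a single named user. The credentials file, file ID, and email address are hypothetical placeholders; this is an illustration of the technique, not a production implementation.

```python
# Minimal sketch: replace an "anyone with the link" share on an HR document
# with read access for one specific, authorized user via the Drive API v3.
# The credentials file, file ID, and email address are hypothetical.
# Note: managing other users' files typically requires a service account
# with domain-wide delegation.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical credentials file
)
drive = build("drive", "v3", credentials=creds)

HR_DOC_ID = "YOUR_HR_DOC_FILE_ID"  # hypothetical file ID

# 1. Remove any "anyone" (link-sharing) permission on the document.
perms = drive.permissions().list(
    fileId=HR_DOC_ID, fields="permissions(id,type,role)"
).execute()
for p in perms.get("permissions", []):
    if p["type"] == "anyone":
        drive.permissions().delete(
            fileId=HR_DOC_ID, permissionId=p["id"]
        ).execute()

# 2. Grant read access only to a specific, authorized person.
drive.permissions().create(
    fileId=HR_DOC_ID,
    body={"type": "user", "role": "reader",
          "emailAddress": "hr-analyst@example.com"},  # hypothetical user
    sendNotificationEmail=False,
).execute()
```

A DLP or DSPM tool would perform this kind of remediation continuously and at scale; the point here is simply that “Specific people” sharing is something you can verify and enforce programmatically, not just hope for.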

Also, did you know that Google Gemini recommends leveraging data security posture management (DSPM)?

Here’s what it said when we asked if Google Workspace works well with a DSPM solution. 

Our Question: So are you saying Google Workspace data security is boosted with a good DSPM tool? 

Gemini’s Answer: Yes, a good DSPM tool can significantly boost Google Workspace data security. Here’s why: 

Proactive Detection of Misconfigurations: Google Workspace has robust security settings, but human error, complexity, and the evolving threat landscape can create gaps. DSPMs continuously scan for configuration issues that could put your data at risk, notifying you before hackers exploit those vulnerabilities. 

Oversight of Permissions: Even with careful design, access permissions can unknowingly become overly relaxed over time. DSPMs audit permissions for both files and users, ensuring sensitive data is only accessible to those who truly need it. This aligns with the principle of least privilege.

Compliance Support: Compliance with regulations like GDPR, CCPA, or industry-specific standards requires tight control over your data environment. DSPMs provide tailored security recommendations and automated reports to help you demonstrate compliance with minimal manual effort. 

Uncovering Hidden Risks: DSPMs analyze activity logs and user behavior beyond surface-level configurations. They can uncover unusual data access patterns, potential unauthorized file sharing, or signs of compromised accounts that you might otherwise miss.

Actionable Recommendations: DSPMs don’t just highlight problems – they provide clear instructions on how to fix security gaps, streamline your permissions, and strengthen your overall Google Workspace security setup. 

Important Notes 

DSPMs don’t replace native security: Think of them as augmenting and automating elements of your security practices. Google Workspace still has built-in features like DLP that are essential. 

Choice matters: The benefits you actually gain are directly tied to the capabilities of the DSPM you select. Thoroughly evaluate solutions to find the right fit. 

In short, while Google Workspace alone offers a substantial level of security, using a DSPM adds an intelligent layer of proactive monitoring, continuous risk assessment, and guided remediation that further enhances your data protection.
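As a small taste of the permissions oversight Gemini describes, here’s a minimal sketch of the kind of audit a DSPM automates: using the Drive API to list every file shared with anyone who has the link. It assumes the same google-api-python-client setup as the earlier sketch, and it’s a one-off spot check rather than a substitute for continuous DSPM monitoring.

```python
# Minimal sketch of a DSPM-style spot check: list Drive files shared with
# "anyone with the link" so over-broad shares can be reviewed and tightened.
# Assumes the same client libraries and a hypothetical credentials file.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical credentials file
)
drive = build("drive", "v3", credentials=creds)

page_token = None
while True:
    resp = drive.files().list(
        q="visibility = 'anyoneWithLink'",  # Drive query term for link-shared files
        fields="nextPageToken, files(id, name, owners(emailAddress))",
        pageToken=page_token,
    ).execute()
    for f in resp.get("files", []):
        owners = ", ".join(o["emailAddress"] for o in f.get("owners", []))
        print(f"Link-shared: {f['name']} ({f['id']}) owned by {owners}")
    page_token = resp.get("nextPageToken")
    if not page_token:
        break
```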

Shouldn’t your company do all it can to enhance data protection?  

What this means for Google Gemini data security is that your sensitive data is only as secure as your current Google Workspace security settings. Gemini opens up a whole new way for employees to access documents and data, and if those settings are not robust enough, sensitive data is more exposed than ever.

Especially in these early days of AI integration, it’s best to prioritize the sensitivity of the content you expose to Gemini, possibly limiting it to lower-classification datasets. Ideally, take a zero trust approach.

Remember, any labels and classification methods companies rely on to protect data can be cumbersome at best, and AI-generated data will only add to the complexity. With more data to manage than ever, organizations can’t expect their employees to be responsible for managing data risk. 

Keeping sensitive data safe with Concentric AI 

Before deploying any type of generative AI, organizations must have a clear understanding of their data risk. 

Going a step further, they need to ensure that any type of sensitive data — from financial data to PII/PHI/PCI to intellectual property to confidential business information — can be identified, classified and remediated if at risk. Remember, sensitive data can live in many places: in the cloud or on premises, structured or unstructured.
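As one illustration of what programmatic discovery can look like — using Google’s own Sensitive Data Protection (Cloud DLP) API, not Concentric AI’s method — here’s a minimal sketch that flags PII in a snippet of text. The project ID and sample text are hypothetical.

```python
# Minimal sketch: flag PII in a text snippet with Google's Sensitive Data
# Protection (Cloud DLP) API. This illustrates automated discovery of
# sensitive data; it is not Concentric AI's method. The project ID and
# sample text are hypothetical. Requires the google-cloud-dlp package.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/your-project-id/locations/global"  # hypothetical project

response = client.inspect_content(
    request={
        "parent": parent,
        "item": {"value": "Reach Jane at jane.doe@example.com, SSN 123-45-6789."},
        "inspect_config": {
            "info_types": [
                {"name": "EMAIL_ADDRESS"},
                {"name": "US_SOCIAL_SECURITY_NUMBER"},
            ],
            "include_quote": True,  # return the matched text itself
        },
    }
)
for finding in response.result.findings:
    print(finding.info_type.name, "->", finding.quote)
```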

While most classification methods are better than having none at all, most paths to classification — like end-user, centralized and metadata-driven — can be time-consuming, ineffective and full of unnecessary obstacles. 

How Concentric AI helps secure Gemini  

Concentric AI leverages sophisticated natural language processing capabilities to accurately and autonomously categorize data output from Gemini into categories that include privacy-sensitive data, intellectual property, financial information, legal agreements, human resources files, sales strategies, partnership plans and other business-critical information. 

Concentric AI can analyze all the data Gemini has access to and discover sensitive information — from financial data to PII/PCI/PHI — and label the data accordingly to ensure that only authorized personnel have access to it.

With Concentric AI, employees are relieved of the burden of labeling the output, which means better overall security. 

After identifying and classifying any sensitive data, Concentric AI can autonomously identify risk from inappropriate permissioning, risky sharing, unauthorized access, wrong location, and more.

Remediation — like changing entitlements, adjusting access controls, or preventing the data from being shared — can also be taken centrally to fix issues and prevent data loss. 

Best of all, Concentric AI can help you address Gemini’s security risks without having to write a single rule. 

To sum up, with Concentric AI, your organization can effectively manage generative AI output data: 

  • Discover, monitor and protect all data types, including cloud, on-premises, structured, unstructured, and shared via messaging services 
  • Auto-label sensitive data output from Gemini 
  • Gain a risk-based view of data and users 
  • Leverage automated remediation to instantly fix access and activity violations 
  • Find risk without rules, formal policies, regex, or end-user involvement 
  • Secure API-based SaaS solution with no agents required 

Concentric AI is easy to deploy — sign up in ten minutes and see value in days. 

Contact us to book a demo today.
