Webex leverages Artificial Intelligence (AI) to enable seamless collaboration and hybrid work. A broad set of AI features powers the Webex App to improve user productivity and promote frictionless collaboration. As with all Webex capabilities, AI-driven features follow strong security principles that help keep user and company data secure throughout its lifecycle. Webex also has robust privacy policies governing the handling of user-generated content and metadata that prevent misuse. Webex makes various developer tools and application types available to developers and partners so that third parties can embed and extend their functionality within the Webex App. In the Webex App Hub, for example, you can discover a variety of third-party applications. Beyond the Webex-developed AI technologies that power native Webex App experiences, these third-party applications may also leverage AI technologies that present new security and privacy considerations. You should understand each application's risks and how to mitigate potential problems.
Various third-party applications (bots, embedded apps, service apps, assistant skills, integrations, guest issuers, and more) can greatly enhance Webex end-user productivity. Third parties may improve the application experience by leveraging, for example, advanced Natural Language Processing (NLP) technology such as ChatGPT from OpenAI. At Webex, we want to make you aware of certain security and privacy risks associated with NLP technology. Webex is at the frontline of combating these risks, providing security controls and tools that let you minimize the risks associated with third-party applications that leverage advanced AI. Let's explore the potential incremental security risks and the security controls Webex offers to mitigate them.
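To make the risk concrete, here is a minimal sketch of how a third-party Webex bot might route user content to an external NLP service. A real Webex webhook notification carries only a message ID, so the bot fetches the full text via the Webex Messages API and then forwards it off-platform. `BOT_TOKEN` and `NLP_ENDPOINT` are hypothetical placeholders, not real credentials or services.

```python
# Sketch of a Webex bot forwarding message text to an external NLP service.
# Illustrates the data flow that creates risk: the webhook only carries a
# message ID, so the bot fetches the full text from Webex, then ships it to
# a third party. BOT_TOKEN and NLP_ENDPOINT are hypothetical placeholders.

import json
import urllib.request

WEBEX_API = "https://webexapis.com/v1"
BOT_TOKEN = "YOUR_BOT_ACCESS_TOKEN"        # hypothetical
NLP_ENDPOINT = "https://nlp.example.com"   # hypothetical third-party service


def message_fetch_url(webhook_payload: dict) -> str:
    """Build the Webex API URL for the message referenced by a webhook."""
    message_id = webhook_payload["data"]["id"]
    return f"{WEBEX_API}/messages/{message_id}"


def handle_webhook(payload: dict) -> None:
    """Fetch the message text from Webex, then forward it externally."""
    req = urllib.request.Request(
        message_fetch_url(payload),
        headers={"Authorization": f"Bearer {BOT_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp).get("text", "")
    # At this point the user's content leaves your security boundary:
    body = json.dumps({"prompt": text}).encode()
    urllib.request.urlopen(
        urllib.request.Request(
            NLP_ENDPOINT,
            data=body,
            headers={"Content-Type": "application/json"},
        )
    )
```

Nothing here is malicious on its face; the point is that once `handle_webhook` runs, your users' message content is governed by the third party's data-handling and retraining practices, not yours.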
AI-powered applications pose a higher risk to an enterprise's security and privacy than standard applications without Natural Language Processing (NLP) engines. Two main factors drive this risk: data proliferation with associated language-model retraining, and the way humans interact with these systems.
First, AI systems, especially generative AI, are trained on and use very large language models. A language model defines the domain of information where an AI model can be used. Commercial language models have been deployed since the early 2000s but were always domain-specific. For example, they allowed a caller to express naturally what they were looking for when calling a contact center. Today's generative AIs, such as ChatGPT, have no such domain restrictions: they can write a student's essay or create comprehensive test cases for a computer program. These capabilities are possible because the models are trained on massive datasets from every accessible domain. Publicly available AI systems like ChatGPT ingest a large corpus of public data and bring it back into the system for retraining. Once the language model is retrained with the new data, it is open for everyone to use.
Second, since these systems are so powerful, we interact with them more naturally and provide them with a lot of data. We speak or write to the AI system in more natural ways and share, in general, more context and information with a bot than we did in the past. With these new AI systems, we realize that the more context we provide, the better the answers we get. For example, a marketing manager might ask ChatGPT: "For my B2B SaaS product X, which targets SMB customers, what percentage of my $15K annual marketing budget should I allocate to different digital channels to maximize conversion?" The system learns about the company, target customers, marketing budget, and products, which may include varying levels of sensitive company information. ChatGPT and other NLP systems use this contextual information to provide better, more targeted suggestions. However, because these systems inherently collect more data, the language model is exposed to more risk and may be retrained on this information.
Beyond the inadvertent data exposure described above, it is relatively easy for bad actors to weaponize chatbots and integrations: they lure users into engaging, natural-feeling interactions only to exfiltrate sensitive company information such as customer lists, product roadmaps, and revenue figures, along with personal data like credit card numbers and health records.
In addition to phishing attacks, chatbots and applications built by bad actors on the power of ChatGPT can distribute malware intelligently, embedding malicious code in file attachments and continuously learning to optimize user engagement.
As we've established, this new class of AI-based apps carries a certain level of risk. These risks can be mitigated, if not eliminated. Here are a few controls you should apply to keep data security top of mind for your organization.
Thankfully, all applications published on the Webex App Hub fulfill these requirements because they are rigorously vetted by our team of experts. For an application to be published in the Webex App Hub, the developer has to fulfill submission requirements and go through a vetting process with the Cisco Developer Support team. The Developer Support team engages with the developer directly to understand the application's use case, performs a series of tests to ensure the application delivers its stated benefits, and works with the developer to mitigate any issues. In addition, the developer must provide details about their company and their practices around the privacy and security of end users' data. While the Webex Developer Support team cannot inspect end-to-end encrypted traffic, it gains a good understanding of the app's baseline security posture. When an app is published to App Hub, all of the information above is accessible.
Control Hub, the administration console for Webex, provides a rich set of application-management controls for each application type. Generally, application types can be turned on or off globally. For example, you may decide that bots are generally allowed in your org, since users must explicitly mention a bot to interact with it, while globally disallowing integrations, which impersonate a user and can be full-space participants. The same applies to assistant skills, embedded apps, and service apps, all of which can be globally turned on or off. If you decide an app type is too dangerous for your organization's environment, you can restrict it globally. If you are looking for an app type that inherently requires admin authorization in Control Hub, consider the new Service Apps. You can also turn off file sharing for bots and integrations.
In addition to global controls for most app types, you have granular controls (particularly with the Pro Pack license) to allow specific applications individually. A typical setting is to deny new apps globally while allowing individual apps explicitly. You can also decide which groups of users within your organization may use these apps, which matters for apps used only by certain lines of business. With this approach, you can evaluate each application case by case: when an app proves useful to end users and poses little to no security risk, you can allow it, in some cases only for specific people.
At Webex, we offer the Extended Security Pack (ESP) as another layer of defense against malicious chatbots and applications that lure users into phishing attacks. If your organization has procured the ESP license, the content generated by users in your organization is secured via a Data Loss Prevention (DLP) engine and malware scanning of files. You can set up domain-specific and custom content policies in the DLP engine to prevent harmful bots and apps from exfiltrating company-sensitive data. The malware-scan engine proactively scans, identifies, and quarantines bad files before they are exposed to multiple users and spread through the system. If your organization uses bots and other workflow-productivity applications, you should seriously consider purchasing the Extended Security Pack license to protect your organization's content. To make the purchasing process easier, Webex offers a free trial of the ESP bundle; you can sign up for the trial to evaluate the impact before moving forward with a purchase commitment. You also have the flexibility to choose from a variety of enterprise DLP products that are pre-integrated with the Webex App.
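To illustrate the kind of scanning a DLP integration performs, here is a minimal sketch of a content check over Webex messages, assuming a compliance-officer token with access to the real Webex Events API (`GET /v1/events?resource=messages&type=created`). The card-number pattern and polling approach are deliberately simplified illustrations, not a substitute for the Extended Security Pack's DLP engine; `COMPLIANCE_TOKEN` is a hypothetical placeholder.

```python
# Minimal DLP-style sketch: poll recent Webex message-created events and
# flag messages whose text matches a naive payment-card pattern. The Events
# API endpoint is real (compliance-officer role required); the token value
# and the detection rule are simplified placeholders.

import json
import re
import urllib.request

EVENTS_URL = "https://webexapis.com/v1/events?resource=messages&type=created"
COMPLIANCE_TOKEN = "YOUR_COMPLIANCE_OFFICER_TOKEN"  # hypothetical placeholder

# Naive pattern for a 16-digit payment card number (spaces/dashes allowed).
CARD_PATTERN = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b")


def contains_sensitive(text: str) -> bool:
    """Return True if the text matches the simple card-number pattern."""
    return bool(CARD_PATTERN.search(text or ""))


def scan_recent_messages() -> list:
    """Poll recent message-created events and return IDs of risky messages."""
    req = urllib.request.Request(
        EVENTS_URL, headers={"Authorization": f"Bearer {COMPLIANCE_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        events = json.load(resp).get("items", [])
    return [
        e["data"].get("id")
        for e in events
        if contains_sensitive(e.get("data", {}).get("text", ""))
    ]
```

A production DLP engine would apply far richer policies (checksums, document classifiers, custom dictionaries) and act on matches by redacting or deleting content, but the polling-and-inspecting pattern is the same.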
At Webex, we appreciate you using our products to enhance users' productivity. Generative AI-based NLP technology that mimics human-like interaction introduces a variety of security threats, including the compromise of sensitive data, ransomware, and malware attacks. Webex offers a defense-in-depth approach that combats these threat vectors at the source (via bot and integration management controls) as well as at the application data layer (via DLP and malware protection). Let's keep your organization's data safe and your users productive.