Artificial intelligence (AI) is no longer a futuristic concept waiting to be harnessed in the distant future. It is here, in use across thousands of business and consumer applications. But we’ve only begun to scratch the surface of what AI can do. According to Statista, the AI industry is growing 54% year over year, setting up the next big technological shift.
For the collaboration industry, AI is already removing friction from connections, and it will continue to improve remote communications in ways we once only dreamed of.
While the widespread use of AI to improve interactions in business and everyday life is exciting, it doesn’t come without concerns. Many people are wary of AI, afraid that unintended bias will make its way into algorithms or that data privacy will be compromised.
The following are three key areas organizations need to focus on to design responsible AI. For a more detailed account of the subject, read Designing Responsible AI Systems by Webex VP of Collaboration AI, Chris Rowen.
Gaining trust in AI with an unbiased approach
According to a Stanford report, private investment in AI grew 9.3% in 2019 over the previous year. Companies recognize that AI holds the key to experiences that add convenience, productivity, and realism to a now mostly virtual world, and they are placing their trust in technology built on artificial intelligence, machine learning, and deep learning algorithms.
But to realize the full potential of AI, users must also trust it, and that starts with demonstrating ethical AI: algorithms that produce unbiased outcomes. Designing responsible AI requires a well-thought-out process that leaves the sponsoring organization able to answer several key questions, including:
- Why did the algorithm make a certain prediction or produce a certain output?
- Is the algorithm adoptable across multiple platforms and complex ecosystems?
- Is someone continuously updating and correcting algorithms when needed?
Answering questions like these is the start of building trust in AI technology and earning the confidence of users.
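As a concrete illustration of what “unbiased outcomes” can mean in practice, here is a minimal sketch, in Python, of one common fairness check: comparing positive-prediction rates across groups (demographic parity). The function names, data, and threshold are illustrative assumptions, not Webex’s actual tooling.

```python
# A minimal sketch (not Webex's tooling) of one way to probe an algorithm
# for biased outcomes: compare positive-prediction rates across groups.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: flag the model for review if the gap exceeds a chosen threshold.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
if parity_gap(preds, groups) > 0.2:  # the threshold is a policy choice
    print("Parity gap exceeds threshold; route model for human review.")
```

A single metric like this never settles the question on its own, but tracking it over time gives organizations a defensible, repeatable answer to “is this algorithm producing unbiased outcomes?”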
Regulating and securing data right from the start
Another important consideration in the AI journey is making sure the data collected is kept private, secure, and handled in line with regulations, with methods and tools in place to ensure transparency.
For the past few years, customers and stakeholders have expressed concern about how and where their data is used. This concern led to the GDPR and the EU’s guidelines on how to use AI safely.
For organizations developing their own AI technology, security, privacy, and compliance must be primary considerations, not an afterthought. If organizations don’t have rigorous security measures in place, measures that can be shared with users, it will be difficult to gain trust and ultimately bring the technology to market. Organizations must accept the consequentiality of their decisions and make sure those decisions are accurately evaluated the first time.
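To make “privacy from the start” concrete, here is a minimal sketch of one common building block: pseudonymizing direct identifiers with a keyed hash before data ever enters an AI pipeline. The field names, salt handling, and scrub logic are illustrative assumptions, not Cisco’s implementation.

```python
# A minimal sketch of privacy-by-design: pseudonymize direct identifiers
# before data reaches an AI pipeline. Illustrative only.

import hmac
import hashlib

SECRET_SALT = b"rotate-me-and-store-in-a-secrets-manager"  # assumption, not real

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash, so records stay linkable
    for analysis but are not directly attributable to a person."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def scrub(record: dict, identifier_fields=("email", "user_id")) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    return {
        k: pseudonymize(v) if k in identifier_fields else v
        for k, v in record.items()
    }

print(scrub({"email": "ada@example.com", "meeting_minutes": 42}))
```

Keying the hash with a secret (rather than hashing alone) matters: it prevents anyone without the key from re-identifying users by hashing guessed values.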
Launching and fine-tuning AI technology
Launching responsible AI means continually evaluating and fine-tuning the technology after deployment. The output needs constant evaluation to make sure the system is doing what it is supposed to do, with corrections made when needed, and constant monitoring to ensure that no risks, biases, or ethical issues arise.
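One lightweight way to automate part of that post-deployment evaluation is to track live accuracy against the accuracy measured at launch and raise an alert when it drifts. The sketch below is illustrative: the baseline, tolerance, and alerting logic are assumptions, not a prescribed monitoring stack.

```python
# A minimal sketch of post-deployment monitoring: compare rolling accuracy
# on labeled feedback against the accuracy measured at launch.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_drift(baseline_accuracy, recent_preds, recent_labels, tolerance=0.05):
    """Return an alert message if live accuracy fell below the launch
    baseline by more than the tolerance; otherwise None."""
    live = accuracy(recent_preds, recent_labels)
    if baseline_accuracy - live > tolerance:
        return (f"Accuracy drifted from {baseline_accuracy:.2f} "
                f"to {live:.2f}; retrain or correct.")
    return None

alert = check_drift(0.92, [1, 0, 1, 1, 0, 0], [1, 1, 0, 1, 1, 0])
if alert:
    print(alert)
```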
For a successful launch of AI technology, frameworks need to be put in place to help:
- Design and implement a control model for governance
- Continuously train and develop personnel who will be working with AI
- Check on accuracy and correct when necessary
- Assess any gaps that may occur
- Create a risk management protocol
It is important to note that adopting a responsible AI strategy takes time, and its implementation should be evaluated and measured carefully to ensure data privacy and security.
How Webex approaches AI
At Cisco, we understand how important it is to build and maintain trust in our technology. We are actively and consistently working to identify the most responsible way to develop and realize the potential of AI, and to build strong safeguards for security, human rights, and privacy into every stage of our technology’s development and operation.
Webex has implemented guidelines and taken steps to make sure our AI technology is properly governed and secured, with ethics and trust built into the solution. Our features are designed with privacy in mind and backed by Cisco’s leading security platform.
Our AI and machine learning initiatives are guided by a few core principles:
- Customer-level explanation of efficacy/accuracy, risks, desired outcomes, purpose, inputs, and limitations
- Protection against data poisoning (a simple illustration follows this list)
- Consequentiality of decisions
- Evidence of validation
- Likely risks, biases, and ethical issues
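As a simple illustration of the data-poisoning principle above, the sketch below screens incoming training data for suspect samples using a robust outlier test (median absolute deviation). This is a deliberately minimal, assumption-laden example; real poisoning defenses are considerably more sophisticated.

```python
# A minimal sketch of one guard against data poisoning: hold out training
# samples that sit far outside the bulk of the data. Illustrative only.

import statistics

def filter_suspect_samples(values, threshold=3.5):
    """Drop values whose modified z-score, based on the median absolute
    deviation (robust to the outliers themselves), exceeds the threshold.
    Assumes one numeric feature per sample for simplicity."""
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values)
    if mad == 0:
        return list(values)
    return [x for x in values if 0.6745 * abs(x - med) / mad <= threshold]

clean = filter_suspect_samples([1.0, 1.2, 0.9, 1.1, 250.0])
print(clean)  # [1.0, 1.2, 0.9, 1.1]; the extreme point is held out for review
```

The median-based test is used here rather than a plain mean and standard deviation because a poisoned point large enough to matter would inflate the standard deviation and mask itself.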
To get a more detailed view, check out how Cisco treats the data it collects in our whitepaper, Data Handling and Privacy for Cognitive Collaboration. Additionally, this Privacy Data Sheet describes the processing of personal data (or personally identifiable information) by Cisco Webex Meetings.
Learn more