Explainable AI, or XAI, has spurred major investment in Silicon Valley, with startups and cloud giants competing to make opaque software more understandable. LinkedIn, for example, has boosted its subscription revenue by 8% after equipping its sales team with AI software that predicts which clients are at risk of canceling and explains how it arrived at that conclusion.
AI scientists have long designed systems that make accurate predictions about business outcomes. Now, though, they feel the need to make these tools more effective for human operators, which may require the AI to explain itself through a second algorithm.
This has stoked discussion among regulators in Washington and Brussels about ensuring that automated decision-making is done fairly and transparently. Because AI can perpetuate social biases around race, gender, and culture, some AI scientists see explanations as a crucial factor in mitigating such problematic outcomes.
For the last two years, US consumer protection regulators, including the Federal Trade Commission, have warned that AI that is not explainable could be investigated. Next year the EU is set to introduce the Artificial Intelligence Act, which includes comprehensive requirements that users be able to interpret automated predictions.
According to promoters of explainable AI, it increases the effectiveness of AI applications in fields such as healthcare and sales. Google Cloud already markets explainable AI services: for instance, it tells clients trying to sharpen their systems which pixels, and soon which training examples, mattered most in predicting the subject of a photo.
Critics counter that the explanations are too unreliable because the AI technology used to interpret the machines is not yet good enough. LinkedIn and others developing explainable AI acknowledge that each step in the process (analyzing predictions, generating explanations, confirming their accuracy, and making them actionable for users) still has room for improvement.
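To make the predict-then-explain process concrete, here is a minimal illustrative sketch of the first two steps, scoring a client's churn risk and attributing that score to individual features. All feature names and weights here are hypothetical, invented for illustration; this is not LinkedIn's CrystalCandle or any vendor's actual system.

```python
import math

# Hypothetical weights for a "risk of non-renewal" model (positive
# weight = feature pushes risk up). These numbers are made up.
WEIGHTS = {"logins_per_week": -0.4, "support_tickets": 0.7, "seats_unused": 0.5}
BIAS = -0.2

def predict_churn_risk(client):
    """Step 1: analyze -- score the client with a simple logistic model."""
    z = BIAS + sum(WEIGHTS[f] * client[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(client):
    """Step 2: explain -- rank features by their contribution to the score,
    so a salesperson can see which factor is driving the risk."""
    contribs = {f: WEIGHTS[f] * client[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

client = {"logins_per_week": 1.0, "support_tickets": 3.0, "seats_unused": 2.0}
risk = predict_churn_risk(client)
top_factor, _ = explain(client)[0]
print(f"risk={risk:.2f}, biggest driver: {top_factor}")
```

In a linear model like this, per-feature contributions are exact; the harder problems the article alludes to, confirming that explanations are faithful for complex models and making them actionable, are not captured by this toy example.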
After two years of trial and error in a relatively low-stakes application, LinkedIn says the technology has yielded practical value, confirmed by the 8% increase in renewal bookings during the current fiscal year above normally expected growth. LinkedIn declined to specify the benefit in dollars but described it as sizable.
Previously, salespeople at LinkedIn relied on their own intuition and some spotty automated alerts about clients' adoption of services. The AI, which LinkedIn has dubbed CrystalCandle, handles research and analysis and calls out unnoticed trends, and its reasoning helps salespeople sharpen their tactics to keep at-risk customers on board and pitch others on upgrades.
The explanation-based recommendations have expanded to more than 5,000 of its sales employees spanning recruiting, advertising, marketing, and education offerings, according to LinkedIn.
Parvez Ahammad, LinkedIn’s director of machine learning and head of data science applied research, said, “It has helped experienced salespeople by arming them with specific insights to navigate conversations with prospects. It’s also helped new salespeople dive in right away.”
LinkedIn first offered predictions without explanations back in 2020. The scores, about 80% accurate, show the likelihood that a client soon due for renewal will upgrade, hold steady, or cancel.
This helped the sales team for LinkedIn’s Talent Solutions recruiting and hiring software adapt their strategy, especially when the odds of a client not renewing were no better than a coin toss.
According to Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence, “People use products such as Tylenol and Google Maps whose inner workings are not neatly understood. In such cases, rigorous testing and monitoring have dispelled most doubts about their efficacy.”
Been Kim, an AI researcher at Google, said, “I view interpretability as ultimately enabling a conversation between machines and humans. If we truly want to enable human-machine collaboration, we need that.”
Daniel Roy, an associate professor of statistics at the University of Toronto, stated that AI systems could be deemed fair even if individual decisions are inscrutable.