AI is a hot topic in medicine, and its impact on sales is being noticed.
Who hasn’t read about the possibilities of AI and natural language processing in medicine, research, even high-school term papers? But what about sales?
“AI-powered systems are on the way to becoming every salesperson’s (and every sales manager’s) indispensable digital assistant,” write the authors of “How Generative AI Will Change Sales,” a recent article in Harvard Business Review. “Sales is well-suited to the capabilities of generative AI models. Selling is interaction and transaction-intensive, producing large volumes of data, including text from email chains, audio of phone conversations, and video of personal interactions.
“These are exactly the types of unstructured data the models are designed to work with. The creative and organic nature of selling creates immense opportunities for generative AI to interpret, learn, link, and customize.”
Henry Schein Medical is thinking along the same lines.
“AI can have a number of positive impacts on sales reps and for companies as a whole,” says Bruce Lieberthal, vice president and chief innovation officer, Henry Schein. “It can help to better understand customers – their purchasing behaviors, their needs, their pain points. It can also help reps enhance their selling time by streamlining the pre-call process, optimizing their routes, and identifying customers that need immediate attention. AI has the potential to help reps focus their time where it will be most impactful – for both them and their customers.
“AI is also likely to help improve processes and maximize resources across other functions as well – not just sales. A few examples include helping to optimize pricing strategies, identify the best use of digital channels, and assist with inventory management and demand planning.”
All that said, AI will call for some adjustments, he says. “As companies implement more AI technologies into their day-to-day lives, reps will need to embrace the change and adapt to new workflows and ways of connecting with their customers. They will need to understand the WHY behind the AI chosen, HOW to use it most effectively, WHEN to let AI take on some of their administrative tasks, and WHEN to engage with customers directly.”
Microsoft, Salesforce
CRM vendors are jumping into AI for sales. In June 2022, Microsoft Corp. introduced Microsoft Viva Sales, intended to “enrich any CRM system with customer engagement data from Microsoft 365 and Microsoft Teams, and leverage AI to provide personalized recommendations and insights for sellers to be more connected with their customers.”
According to Microsoft, sellers can tag customers in Outlook, Teams or Office applications such as Excel, and Viva Sales will automatically capture the data as a customer record. This data can be shared with team members while they collaborate in Office and Teams, without retyping it or looking it up in a CRM. In addition, Viva Sales recommends next steps to move a customer through the sales funnel and gives sellers access to the full history of customer interactions.
Then in March 2023, Salesforce launched Einstein GPT, intended to “infuse Salesforce’s proprietary AI models with generative AI technology from an ecosystem of partners and real-time data from the Salesforce Data Cloud.” Customers can connect that data to OpenAI’s advanced AI models or choose their own external model and use natural-language prompts directly within their Salesforce CRM.
According to Salesforce, Einstein GPT can generate personalized emails for salespeople to send to customers, generate specific responses for customer service professionals to more quickly answer customer questions, generate targeted content for marketers to increase campaign response rates, and auto-generate code for developers.
Proceed with caution
Writing in Forbes in March, Pradeep Aradhya, CEO and founder of digital consulting firm Novus Laurus, wrote about the potential as well as the red flags of AI in sales. “I asked ChatGPT to sell me a DSLR camera, and it gave me a list of cameras, their specs, price ranges and even some bounded discounting,” he wrote. “Then I asked it about a particular model and asked it to sell it to me with a specific tone. It actually cross-sold and negotiated with me within preset bounds using Texan expressions. Without any further training from me, ChatGPT knew camera capabilities, specs, pricing and was also able to negotiate and sweeten the deal.”
Despite its upside, sales organizations should proceed with AI prudently, he wrote. “While ChatGPT can offer links, it’s a good idea to restrict it to only produce links from within your domain. It will probably be scary to let technology negotiate either pricing or other perks on your behalf, but I’ve found it can be constrained to preset boundaries. While it will not be easy to feed contextual visitor characteristics and affinities, it is possible. From there, it is a short step further to feed individual visitor data like click stream or even sales history to further contextualize the chat.
“Even with all of this in mind, it is important to remember that tools like ChatGPT can also have limitations, such as providing false information, wordy or formal answers and information that lacks detail.”
According to the Harvard Business Review authors, “Generative AI must be nonintrusively embedded into sales processes and operations so sales teams can naturally integrate the capabilities into their workflow. Generative AI sometimes draws wrong, biased, or inconsistent conclusions. Although the publicly accessible models are valuable … the true power for sales teams comes when models are customized and fine-tuned on company-specific data and contexts.”
Says Lieberthal, “One of the biggest immediate risks would be an overreliance on AI, or the assumption that AI is always smarter than humans. AI develops its models and responses based on the available data and assumes that the data is always complete and accurate. And AI is only as accurate as the data it was trained on. This can lead AI to sometimes make incorrect predictions and suggestions.
“A small example in the sales world might be a recommendation based on a customer’s purchase history. If an item was repeatedly purchased because the customer’s preferred product has been on backorder for several months, AI may recommend something similar to that repeatedly purchased item. In this case, the data does not reveal the customer’s true preferences. Additionally, there are significant concerns regarding AI breaching security protocols and privacy.
“Although it is possible that AI can improve the ability to service customers, it is not a substitute for the human touch – for the relationship that reps have with customers,” adds Lieberthal. “Even as companies begin to implement more and more AI, reps should continue to play a vital role in developing and stewarding relationships with customers. AI will be a collaborator, not a competitor, in this equation, assisting reps and their customers to develop valuable insights and make informed decisions.”
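Lieberthal’s backorder example boils down to a simple failure mode: a recommender that leans on purchase frequency cannot tell a true preference from a substitute the customer settled for. A minimal sketch (the product names and the frequency-count approach are hypothetical illustrations, not any vendor’s actual algorithm):

```python
from collections import Counter

def recommend(purchases):
    """Naive recommender: suggest the customer's most frequently
    purchased item. Real systems are far more sophisticated, but the
    pitfall is the same: frequency alone can't distinguish a preferred
    product from a substitute bought during a months-long backorder."""
    counts = Counter(purchases)
    return counts.most_common(1)[0][0]

# The customer actually prefers "glove_brand_a", but it was backordered,
# so the history is dominated by the substitute they settled for.
history = ["glove_brand_a", "glove_brand_b", "glove_brand_b",
           "glove_brand_b", "glove_brand_b"]
print(recommend(history))  # prints "glove_brand_b" -- the substitute
```

The data is complete and accurate, yet the recommendation is wrong, which is exactly Lieberthal’s point: the model can only infer preferences from what the data shows, and the data does not record why the substitute was purchased.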
Sidebar 1:
AI: Red flags for the medical community
In May, the World Health Organization (WHO) called for caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools such as ChatGPT, in order to protect and promote human well-being, human safety and autonomy, and preserve public health. Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world, says WHO. Serious concerns include:
- The data used to train AI may be biased, generating misleading or inaccurate information that poses risks to health, equity and inclusiveness.
- LLMs may generate health-related responses that can appear authoritative and plausible to an end user, but that also may be completely incorrect or contain serious errors.
- LLMs may be trained on data for which consent may not have been provided for such use, and they may fail to protect sensitive data (including health data) that a user provides to generate a response.
- LLMs can be used to generate and disseminate highly convincing disinformation in the form of text, audio or video content.
WHO reiterated in its statement the importance of applying ethical principles and appropriate governance, as enumerated in its guidance document “Ethics and governance of artificial intelligence for health.” The six core principles identified by WHO are:
- Protect autonomy.
- Promote human well-being, human safety and the public interest.
- Ensure transparency, explainability, and intelligibility.
- Foster responsibility and accountability.
- Ensure inclusiveness and equity.
- Promote AI that is responsive and sustainable.
Source: WHO calls for safe and ethical AI for health, World Health Organization, www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
Sidebar 2:
Further reading:
- How Generative AI Will Change Sales, Harvard Business Review, March 31, 2023, https://hbr.org/2023/03/how-generative-ai-will-change-sales
- How Tools Like ChatGPT Could Change Sales, Forbes, March 1, 2023, https://www.forbes.com/sites/forbesbusinesscouncil/2023/03/01/how-tools-like-chatgpt-could-change-sales/