Retail is Applying AI in Four Main Areas – Data Remains a Challenge

If you’ve been paying attention to the application of artificial intelligence in retail, you may feel like the buzz around the topic has gone from zero to “arrived” in less than a year. In retail time, even at the speed of the modern consumer, that is incredibly fast.

Some of the hype has come from activity around specific use-cases for the application of AI in retail. While companies like Baidu claim more than 100 AI capabilities, in retail the use-cases appear to be centering on four main areas:

Predictive analytics / forecasting – This is forecasting with an emphasis on either products or customers. For products, retailers appear to be focusing on three main areas of opportunity. First, they are looking at product attributes in a new, AI-driven light: by looking beyond the obvious attribute connections between products, they are using machine learning to identify connections that would otherwise get lost in the noise. Second, they are connecting those attributes to drivers of demand, to make finer-grained predictions of how well products will sell and why. And finally, retailers are looking to incorporate non-traditional demand signals – new kinds of data that may reveal exploitable connections between consumer behavior and products. For example, predicting that a restaurant will sell 25% more salads if the lunch-time temperature is above 80 degrees F, or, conversely, that lettuce contamination in the headlines creates a 10% decline in salad sales.
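To make that idea concrete, here is a minimal sketch of how a non-traditional signal like lunch-time temperature might enter a demand model, assuming a scikit-learn-style workflow. The data, features, and salad example are invented for illustration, not any retailer's actual system.

```python
# Illustrative sketch: folding a non-traditional demand signal (weather)
# into a product-level forecast. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical history: [day_of_week, lunchtime_temp_f, negative_headline_flag]
X = np.array([
    [0, 72, 0], [1, 85, 0], [2, 90, 0], [3, 65, 0],
    [4, 88, 1], [5, 83, 0], [6, 79, 0], [0, 91, 0],
])
y = np.array([110, 140, 150, 100, 95, 135, 120, 155])  # salads sold per day

model = GradientBoostingRegressor().fit(X, y)

# Forecast a hot Tuesday with no contamination news in the headlines.
print(model.predict([[1, 86, 0]]))
```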

Predictive analytics is also being applied to customer behavior. Matching product to customer behavior can be used in the product sense above, but it can also be used in a customer sense: to predict the next product a specific customer would be interested in buying. It can also be used to predict when, in which channel, and at which price (or with which offer) a customer would be most likely to buy, and which product would most have their attention. This has made its way into retail through personalization solutions, mostly targeted at the digital portion of the customer journey.
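As a rough illustration of next-product prediction, the sketch below scores products by how often they co-occur in invented purchase baskets. Real personalization engines combine far richer signals (channel, timing, price sensitivity) than this.

```python
# Illustrative sketch: "next product" prediction from purchase co-occurrence.
# The baskets are made up; real systems use far richer customer signals.
from collections import Counter
from itertools import permutations

baskets = [
    ["running shoes", "socks", "water bottle"],
    ["running shoes", "socks"],
    ["yoga mat", "water bottle"],
    ["running shoes", "water bottle"],
]

# Count ordered pairs of items bought together in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_counts[(a, b)] += 1

def next_product(item, k=2):
    """Rank the items most often bought alongside `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(next_product("running shoes"))  # e.g. ['socks', 'water bottle']
```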

Voice / Natural Language Processing In – While the retail industry tends to lump natural language processing (NLP) inputs and outputs together, in reality some applications focus only on inputs, while others focus more heavily on outputs, which are much more difficult and are covered next. On the input side, applications focus on speech-to-text and then text recognition, which can then be analyzed for sentiment or emotion. Examples include call center chats or phone calls that detect when a customer might be getting angry, or traditional social media analysis that is smart enough to self-learn – so, instead of a person having to go through and note exceptions to language that is traditionally considered negative (“This vacuum sucks” is sometimes not a bad thing), the AI will be able to detect and categorize exceptions on its own over time.
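The self-learning exception idea can be sketched as a supervised classifier that sees labeled counter-examples, rather than a hand-maintained keyword list. The toy corpus and scikit-learn pipeline below are illustrative assumptions, not a production sentiment system.

```python
# Illustrative sketch: learning sentiment exceptions from labeled data
# instead of a hand-kept keyword list. The tiny corpus is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "this vacuum sucks up everything, love it",   # "sucks", but positive
    "this vacuum really sucks, total waste",
    "customer service was slow and rude",
    "works great, highly recommend",
    "battery died after a week, awful",
    "picks up pet hair perfectly",
]
labels = ["pos", "neg", "neg", "pos", "neg", "pos"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# With enough labeled examples, context ("sucks up everything") can outweigh
# the keyword alone; this toy corpus is far too small to be reliable.
print(clf.predict(["this vacuum sucks up dirt like a champ"]))
```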

Voice / NLP Out – The output side is much harder, because it requires the AI to approximate human behavior well enough to sound “natural”. Chatbots are on the learning curve, as are automated copywriters. Chatbots are a little easier to pull off because they can rely on a smaller subset of information to seed them, and they tend to be focused on specific objectives, like problem solving or sales. Copy is a lot harder because it tends to rely on a broader range of inputs, and human expectations may include more difficult language concepts like metaphor or poetic license. But retailers are looking to these capabilities either to offset human communication and customer-service costs, as in a call center, or to generate far more unique copy about products, far faster – or both.
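For the copy-generation side, the simplest possible illustration is a template filled from structured product attributes. Production systems use learned language models, so treat this only as a sketch of the attribute-to-copy idea, with invented product data.

```python
# Illustrative sketch: generating product copy from structured attributes.
# Real systems use learned language models; this only shows the
# attribute-to-copy idea. Product data is invented.
import random

TEMPLATES = [
    "Meet the {name}: {feature}, now in {color}.",
    "The {name} brings you {feature}. Available in {color}.",
]

def generate_copy(product: dict) -> str:
    """Fill a randomly chosen template with product attributes."""
    return random.choice(TEMPLATES).format(**product)

print(generate_copy({
    "name": "AeroRun 2",
    "feature": "a breathable knit upper",
    "color": "storm gray",
}))
```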

Read the source article in Forbes.

Edmonton Startup using AI to Fix ‘Broken Meetings’

Meetings cost time and money to run, and many of them are unnecessary, says Testfire Labs CEO Dave Damer. His solution: the company’s AI assistant, Hendrix.ai.

Currently in its beta test phase, it takes a meeting’s minutes, noting questions, answers and action items by listening via microphone. Its meeting summaries leave out “chit chat” for clarity. Exact transcripts aren’t kept for reasons of confidentiality, said Damer, who founded the company in 2017.

“The demands to do more with less in modern business keep increasing,” Damer said. “AI gives us an opportunity to legitimately take things off people’s hands that are generally mundane tasks so they can focus on higher-value work.”

Hendrix.ai also tracks attendance rates, numbers of last-minute meetings and meeting lengths.

On May 25, Testfire Labs won a Startup Canada regional innovation award for its work on Hendrix.ai. Startup Canada CEO Victoria Lennox said the adjudicators liked how Testfire Labs integrated AI into audio-to-text technology with Hendrix.ai.

“There’s a lot of audio-to-text tools and they’re growing more and more,” Lennox said. What made Hendrix.ai different was its focus on meetings.

Damer’s goal is for Hendrix.ai to reach companies with more than 1,000 staff. It’s being tested by 100 organizations in its beta phase, including the City of Victoria and the Northern Alberta Institute of Technology. Torsten Prues of NAIT’s information technology department said he’s been using the system with a team of six since January.

“What made us interested (in using Hendrix.ai) was that NAIT is very meeting-heavy, and people don’t like taking minutes,” he said.

Edmonton: ‘On the cusp’ of tech

Damer graduated from the University of Alberta in 1991 as a computer engineer and has 25 years of experience in the technology industry. He calls Edmonton a good home for a tech startup, with an industry that’s “on the cusp.”

Before Testfire Labs, Damer founded ThinkTel Communications Ltd. in 2003, and spent 14 years there. ThinkTel is now the business services division of Distributel, an independent communications company.

Damer started Testfire Labs as a more creative project. The company is currently valued at $5 million and has 10 employees; Damer hopes to grow the business to $20 million next year. He sees Hendrix.ai becoming an asset to workplaces as more tasks, like note taking, become automated.

Randy Goebel, a professor at the University of Alberta and expert in natural language processing, said applying the science of natural language understanding to everyday use is “extremely difficult in practice.” The science is there, he said, but businesses like Hendrix are challenged with translating that science into something people will pay for.

“They provide a line of sight to scientists to add value to their work,” said Goebel, who is also a researcher at the Alberta Machine Intelligence Institute.

While the system is being honed for summarizing meetings, Damer plans to include more features that track whether certain speakers dominate meetings, gauge what tones discussions take, and find possible areas where different teams can collaborate.

“You can do so much more than notes,” he said. “We can do tone analysis on whether it was a positive, negative or neutral conversation. Was there joy in the words, or was there fear? What are the emotions that are being expressed?”

Read the source article in the Edmonton Journal.

Here are 3 Tips to Reduce Bias in AI-Powered Chatbots

AI-powered chatbots that use natural language processing are on the rise across all industries. A practical application is providing dynamic customer support that allows users to ask questions and receive highly relevant responses. In health care, for example, one customer may ask “What’s my copay for an annual check-up?” and another may ask “How much does seeing the doctor cost?” A smartly trained chatbot will understand that both questions have the same intent and provide a contextually relevant answer based on available data.
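One common way to realize this kind of intent matching is to compare an incoming question against example utterances for each intent and pick the closest match. The sketch below does this with TF-IDF similarity; the intents, phrasings, and approach are illustrative assumptions, not how any particular vendor's chatbot works.

```python
# Illustrative sketch: mapping differently worded questions to one intent
# via similarity to example utterances. Intents and phrasings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

intent_examples = {
    "copay_inquiry": [
        "what's my copay for an annual check-up",
        "how much does seeing the doctor cost",
        "what do I pay for a doctor visit",
    ],
    "find_provider": [
        "find a doctor near me",
        "which clinics are in my network",
    ],
}

utterances = [u for exs in intent_examples.values() for u in exs]
intents = [i for i, exs in intent_examples.items() for _ in exs]

vec = TfidfVectorizer().fit(utterances)
matrix = vec.transform(utterances)

def classify(question: str) -> str:
    """Return the intent of the most similar example utterance."""
    sims = cosine_similarity(vec.transform([question]), matrix)[0]
    return intents[sims.argmax()]

print(classify("how much will my yearly physical cost me"))
```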

What many people don’t realize is that AI-powered chatbots are like children: They learn by example. Just like a child’s brain in early development, AI systems are designed to process huge amounts of data in order to form predictions about the world and act accordingly. AI solutions are trained by humans and synthesize patterns from experience. However, there are many patterns inherent in human societies that we don’t want to reinforce — for example, social biases. How do we design machine learning systems that are not only intelligent but also egalitarian?

Social bias is an increasingly important conversation in the AI community, and we still have a lot of work to do. Researchers from the University of Massachusetts recently found that the accuracy of several common NLP tools was dramatically lower for speakers of “non-standard” varieties of English, such as African American Vernacular English (AAVE). Another research group, from MIT and Stanford, reported that three commercial face-recognition programs demonstrated both skin-type and gender biases, with significantly higher error rates for females and for individuals with darker skin. In both of these cases, we see the negative impact of training a system on a non-representational data set. AI can learn only as much as the examples it is exposed to — if the data is biased, the machine will be as well.

Bots and other AI solutions now assist humans with thousands of tasks across every industry, and bias can limit a consumer’s access to critical information and resources. In the field of health care, eradicating bias is critical. We must ensure that all people, including those in minority and underrepresented populations, can take advantage of tools that we’ve created to save them money, keep them healthy, and help them find care when they need it most.

So, what’s the solution? Based on our experience training with IBM Watson for more than four years, you can minimize bias in AI applications by considering the following suggestions:

  • Be thoughtful about your data strategy;
  • Encourage a representational set of users; and
  • Create a diverse development team.

1. Be thoughtful about your data strategy

When it comes to training, AI architects have choices to make. The decisions are not only technical, but ethical. If our training examples aren’t representative of our users, we’re going to have low system accuracy when our application makes it to the real world.

It may sound simple to create a training set that includes a diverse set of examples, but it’s easy to overlook if you aren’t careful. You may need to go out of your way to find or create datasets with examples from a variety of demographics. At some point, we will also want to train our bot on data examples from real usage, rather than relying on scraped or manufactured datasets. But what do we do if even our real users don’t represent all the populations we’d like to include?

We can take a laissez-faire approach, allowing natural trends to guide development without editing the data at all. The benefit of this approach is that you can optimize performance for your general population of users. However, that may come at the expense of an underrepresented population that we don’t want to ignore. For example, if the majority of users interacting with a chatbot are under the age of 65, the bot will see very few questions about medical services that apply only to an over-65 population, such as osteoporosis screenings and fall prevention counseling. If a bot is only trained on real interactions, with no additional guidance, it may not perform as well on questions about those services, which disadvantages older adults who need that information.

To combat this at my company, we create synthetic training questions or seek another data source for questions about osteoporosis screenings and fall prevention counseling. By strategically enforcing a broader distribution and better representativeness in our training data, we allow our bot to learn a wider range of topics without unfair preference for the interests of the majority user demographic.
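A rough sketch of that rebalancing step: audit how many training questions each topic has, then pad thin topics with synthetic questions up to a naive target. Everything here (the topics, questions, and target rule) is invented for illustration.

```python
# Illustrative sketch: auditing a chatbot's training set for topic balance
# and padding thin topics with synthetic questions. All data is invented.
from collections import Counter

training = [
    ("what's my copay", "copay"),
    ("how much is a visit", "copay"),
    ("is a flu shot covered", "coverage"),
    ("am I covered for therapy", "coverage"),
    ("do you cover osteoporosis screening", "senior_services"),
]

# Hand-written (or otherwise sourced) questions for underrepresented topics.
SYNTHETIC = {
    "senior_services": [
        "is a bone density test covered",
        "do you offer fall prevention counseling",
    ],
}

counts = Counter(topic for _, topic in training)
target = max(counts.values())  # naive target: match the best-covered topic

for topic, count in counts.items():
    for q in SYNTHETIC.get(topic, [])[: target - count]:
        training.append((q, topic))

print(Counter(topic for _, topic in training))  # now balanced across topics
```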

Read the source article in VentureBeat.