Alibaba to Challenge Amazon with a Cloud Service Push in Europe

Alibaba Group Holding Ltd. is in talks with BT Group PLC about a cloud services partnership as the Chinese internet giant challenges Amazon.com Inc.’s dominance in Europe.

An agreement between Alibaba and the IT consulting unit of Britain’s former phone monopoly could be similar to Alibaba’s existing arrangement with Vodafone Group Plc in Germany, according to a person familiar with the matter, who asked not to be identified as the talks are private.

A BT spokeswoman confirmed by email that the U.K. telecom company is in talks with Alibaba Cloud and declined to give details. A spokesman for Alibaba declined to comment.

Started in 2009, Alibaba Cloud has expanded fast beyond China in a direct challenge to Amazon Web Services, the e-commerce giant’s division that dominates cloud computing. Alibaba Cloud is now the fourth-biggest global provider of cloud infrastructure and related services, behind Amazon, Microsoft Corp. and Alphabet Inc.’s Google, according to a report last month by Synergy Research Group.

Europe has become key to Alibaba Cloud’s success outside China, with prospects in the U.S. made murky by President Donald Trump’s America First agenda. Alibaba has pulled back in the U.S. just as tensions between America and China have escalated under Trump.

Alibaba started the German partnership with Vodafone in 2016. The Hangzhou, China-based company put its first European data center in Frankfurt, allowing Vodafone to resell Alibaba Cloud services such as data storage and analytics. Last week, Alibaba Cloud moved into France, agreeing to work with transport and communications company Bollore SA in cloud computing, big data and artificial intelligence.

Telecom dilemma

BT’s talks with Alibaba underscore a dilemma for the telecom industry. As big tech companies and consulting firms muscle in on the carriers’ business of installing and maintaining IT networks for large corporations, telecom operators must choose whether to resist the newcomers or accept their help and decide which ones to ally with.

BT Global Services has struck up partnerships with Amazon, Microsoft and Cisco Systems Inc., while Spain’s Telefonica SA works with Amazon. In Germany, while Deutsche Telekom AG’s T-Systems has partners including China’s Huawei Technologies Co. and Cisco, it has structured its public cloud offering as an alternative to U.S. giants Amazon and Google—touting its ability to keep data within Germany where there are strict data-protection laws, 100% out of reach of U.S. authorities.

A deal with Alibaba could bolster BT’s cloud computing and big data skills as clients shift more of their IT capacity offsite to cut costs.

BT is undertaking a digital overhaul of its Global Services business in a restructuring involving thousands of job cuts after revenue at the division fell 9% last year. The poor performance of Global Services and the ouster last month of BT CEO Gavin Patterson have fueled speculation among some analysts that BT may sell the division. Still, the unit is seen by some investors as critical for BT’s relationships with multinational clients.

Read the source article in Digital Commerce 360.

Four Suggestions for Using a Kaggle Competition to Test AI in Business

According to a McKinsey report, only 20% of companies consider themselves adopters of AI technology while 41% remain uncertain about the benefits that AI provides. Considering the cost of implementing AI and the organizational challenges that come with it, it’s no surprise that smart companies seek ways to test the solutions before implementing them and get a sneak peek into the AI world without making a leap of faith.

That’s why more and more organizations are turning to data science competition platforms like Kaggle, CrowdAI and DrivenData. Making a data science-related challenge public and inviting the community to tackle it comes with many benefits:

  • Low initial cost – the company needs only to provide data scientists with data, pay the entrance fee and fund the award. There are no further costs.
  • Validating results – participants provide the company with verifiable, working solutions.
  • Establishing contacts – a lot of companies and professionals take part in Kaggle competitions. The ones who tackled the challenge may be potential vendors for your company.
  • Brainstorming the solution – data science is a creative field, and there’s often more than one way to solve a problem. Sponsoring a competition means you’re sponsoring a brainstorming session with thousands of professional and passionate data scientists, including the best of the best.
  • No further investment or involvement – the company gets immediate feedback. If an AI solution proves effective, the company can move forward with it; if not, its involvement ends with funding the award and it avoids further costs.

While numerous organizations – big e-commerce websites and state administrations among them – sponsor competitions and leverage the power of the data science community, running a competition is not at all simple. An excellent example is the competition the US National Oceanic and Atmospheric Administration sponsored when it needed a solution that could recognize and tell apart individual right whales. Ultimately, what proved most effective was the principle of facial recognition applied to the topsides of the whales, which were obscured by weather, water and the distance between the photographer above and the whales far below. To check whether this was even possible, and how accurate a solution might be, the organization ran a Kaggle competition, which deepsense.ai won.

Having won several such competitions, we have encountered both brilliant and not-so-brilliant ones. That’s why we decided to prepare a guide for every organization interested in testing potential AI solutions in Kaggle, CrowdAI or DrivenData competitions.

Recommendation 1. Deliver high-quality data to participants

The quality of your data is crucial to attaining a meaningful outcome. Without the data, even the best machine learning model is useless. The same applies to data science competitions: without quality training data, participants will not be able to build a working model. This is a particular challenge with medical data, where obtaining enough information is problematic for both legal and practical reasons.

  • Scenario: A farming company wants to build a model to identify soil type from photos and probing results. Although there are six classes of farming soil, the company is able to deliver sample data for only four. Considering that, running the competition would make no sense – the machine learning model wouldn’t be able to recognize all the soil types.

Advice: Ensure your data is complete, clear and representative before launching the competition.
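
Neither Kaggle nor the cited competitions prescribe a tool for this, but as a minimal sketch of the pre-launch sanity check the soil scenario implies (the class names and threshold below are hypothetical), a few lines of Python can confirm that every expected class is actually represented in the training labels:

```python
from collections import Counter

# Hypothetical soil classes and minimum sample count per class
EXPECTED_CLASSES = {"clay", "silt", "sand", "loam", "peat", "chalk"}
MIN_SAMPLES_PER_CLASS = 100

def audit_labels(labels):
    """Flag classes that are missing or under-represented before publishing the dataset."""
    counts = Counter(labels)
    missing = EXPECTED_CLASSES - counts.keys()
    sparse = {c: n for c, n in counts.items() if n < MIN_SAMPLES_PER_CLASS}
    return missing, sparse

labels = ["clay", "silt", "sand", "loam"] * 120  # stand-in for the real label column
missing, sparse = audit_labels(labels)
if missing:
    print("Do not launch: no examples at all for", sorted(missing))
if sparse:
    print("Collect more data for", sparse)
```

In the farming scenario above, a check like this would immediately flag the two soil classes with no examples at all.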

Recommendation 2. Build clear and descriptive rules

Competitions are put together to achieve goals, so the model has to produce a useful outcome, and “useful” is the operative word. Because the participants are usually not professionals in the field they are building a solution for, the rules need to be grounded in the business case and in how the model will later be used. Including even basic guidelines will help participants address the challenge properly; without these foundations, the outcome may be technically right but totally useless.

  • Scenario: A map of the distribution of children below the age of 7 across a city will be used to optimize social, educational and healthcare policies. For the mapping to be usable, the rules need additional guidelines: the mapped areas must be bounded by streets, rivers, rail lines, district borders and other topographical features of the city. Without them, many models might simply cut the city into 10-meter-wide, kilometer-long stripes; the segmentation is technically done, but the outcome is useless because the competition rules lacked the proper guidelines.

Advice: Think about how the model will be used and include the corresponding guidelines in the competition rules, so the challenge stays goal-oriented and grounded in common sense.

Read the source article at deepsense.ai.

Here Are 5 Ways Big Data Is Revolutionizing the Agriculture Industry

Big data and analytics are helping to improve and transform a multitude of industries in the modern world. The most impactful thing such technologies do is provide detailed and real-time insights into operational and financial activities. In agriculture, this very thing is playing out as we speak.

Farmers, for instance, are using data to calculate harvest yields, fertilizer demands, costs savings and even to identify optimization strategies for future crops.

The question is less whether the technology offers benefits (it clearly does) and more how it achieves them. Here are five ways in which big data in agriculture is improving conditions and operations.

#1: Monitoring Natural Trends

A significant risk factor in farming and agriculture lies outside the control of those doing the brunt of the work. Pests and crop diseases, for example, can decimate entire harvests, as can natural disasters like storms or extreme weather. Before big data, it was almost impossible to predict such events. Yes, experienced farmers may be able to spot the tell-tale signs of a pest problem, but by then it is often already too late.

Big data and monitoring technologies can track such events and even predict them outright. By feeding past and present data into a system and extracting insights with suitable algorithms, data science can effectively boost future yields. This can save farmers and supply chain stakeholders a lot of money and help smooth distribution and supply.
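
The article does not name a specific model, but as an illustrative sketch of the approach (scikit-learn, synthetic readings and made-up feature names, not any vendor's actual system), historical field observations can be fed to a standard classifier that then scores the outbreak risk for the current conditions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical weekly features: temperature (C), humidity (%), days since last rain
X = rng.uniform([10, 30, 0], [35, 95, 30], size=(500, 3))
# Stand-in labels: in this synthetic history, outbreaks cluster in warm, humid weeks
y = ((X[:, 0] > 25) & (X[:, 1] > 70)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

this_week = np.array([[29.0, 82.0, 4.0]])  # latest sensor/UAV readings
risk = model.predict_proba(this_week)[0, 1]
print(f"Estimated outbreak risk this week: {risk:.0%}")
```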

Big data also drives the incorporation of modern tech into the field. UAVs, or drones, can fly over land and assess its patterns; the mapping data they collect can then be analyzed and scoured for useful intel. Perhaps erosion in a particular section of cropland is worth dealing with this year.

Alternatively, IoT sensors can track and monitor croplands and plants remotely.

#2: Advanced Supply Tracking

In farming and agriculture today, outside of more traditional scenarios, a farmer is often beholden to a particular supplier or partner. They may, for example, be sending a certain amount of their most recent harvest to a local grocer or department chain. Whoever the partner is, it is not always possible to know precisely how much of a particular crop will be ready, and when. This, coupled with changing demand on the consumer side, can lead to severe supply issues.

Big data can alleviate some of the problems that arise in the supply chain, simply because it affords more oversight of the crops and harvest each season. This is true not just for the farmers working with the plants, but for everyone else along the supply chain, too, including distributors, packagers, retailers and more. When passed along, the data can genuinely help everyone prepare for the season's actual output, whether that means greater or smaller quantities than expected.

#3: Risk Assessment

In general business, management and planning teams often have the benefit of detailed risk assessment reports. Until now, that’s never been possible in the world of agriculture. Sure, experience may dictate that taking a specific action is going to produce apparent consequences, but data-driven risk assessment affords so much more than that.

With big data, nearly every system, decision or event can be considered in the risk analysis plan. Every mistake or potential hurdle can be accounted for, along with not just the appropriate solution, but an expected list of results, too. Farmers can be sure that taking action won’t destroy their entire crop. More importantly, they can use real-time data to ensure damage remains minimal.

#4: Ideal Crops and Consumer Expectation

Let’s say spring and early summer are just over the horizon. Naturally, this is when the strawberry season kicks off — alongside many other crops. Except, over the coming year, the demand for strawberries is vastly lower than in previous seasons.

Rather than filling up an entire plot with strawberries, farmers can account for the lowered demand. This can be true in the opposite direction, too, when demands are higher. Big data enables this at a more advanced level than ever before.

Farmers can see precisely how much they produced in years past, what that meant for customers, how it affected supply and demand, and even get tips on improving their operations. They could cut excess waste by producing fewer crops in a lower-demand season, for instance, saving both money and space to grow alternatives.
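
As a toy illustration of that planning arithmetic (all figures below are made up), yield history and a demand forecast translate directly into an acreage decision:

```python
# Hypothetical history: acres planted and tonnes harvested per season
history = [
    {"year": 2015, "acres": 40, "harvest_t": 320},
    {"year": 2016, "acres": 42, "harvest_t": 340},
    {"year": 2017, "acres": 45, "harvest_t": 350},
]

yield_per_acre = sum(h["harvest_t"] for h in history) / sum(h["acres"] for h in history)
forecast_demand_t = 260  # assumed lower demand for the coming season

acres_needed = forecast_demand_t / yield_per_acre
print(f"Plant about {acres_needed:.0f} acres instead of last year's 45")
```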

#5: Data-Driven Industry

Another strength of big data is that these systems sync with external platforms to draw on a considerable amount of additional data and insight. It ties into the whole “connected”, smart side of technology.

Machine learning and algorithmic tools can be designed to factor in any number of external insights or information. Farmers can then use predictive modeling techniques to plan or act accordingly — think weather patterns, consumer demands and trends and even historical industry events. This data will help those in the agriculture industry to understand how the surrounding world affects their business.

What should they plant? When is the best time? What earnings can they expect? Are the prices of supplies rising, and how does this affect profits?

This all works to create a collaborative, data-driven industry that operates in new, innovative ways as opposed to following strategies used in the past. The beauty of this is that we don’t have to eliminate legacy strategies to make room for data-driven solutions. In fact, we can combine it all to create one of the most effective, successful operations ever to exist.

Read the source article in RT Insights.

A Strong Digital Base is Critical for Success with AI

By Jacques Bughin and Nicolas van Zeebroeck of McKinsey

The diffusion of a new technology, whether ATMs in banking or radio-frequency identification tags in retailing, typically traces an S-curve. Early on, a few power users bet heavily on the innovation. Then, over time, as more companies rush to embrace the technology and capture the potential gains, the market opportunities for non-adopters dwindle. The cycle draws to a close with slow movers suffering damage.

Our research suggests that a technology race has started along the S-curve for artificial intelligence (AI), a set of new technologies now in the early stages of deployment. It appears that AI adopters can’t flourish without a solid base of core and advanced digital technologies. Companies that can assemble this bundle of capabilities are starting to pull away from the pack and will probably be AI’s ultimate winners.

Executives are becoming aware of what is at stake: our survey research shows that 45 percent of executives who have yet to invest in AI fear falling behind competitively. Our statistical analysis suggests that faced with AI-fueled competitive threats, companies are twice as likely to embrace AI as they were to adopt new technologies in past technology cycles.

AI builds on other technologies

To date, though, only a fraction of companies—about 10 percent—have tried to diffuse AI across the enterprise, and less than half of those companies are power users, diffusing a majority of the ten fundamental AI technologies. An additional quarter of companies have tested AI to a limited extent, while a long tail of two-thirds of companies have yet to adopt any AI technologies at all.

The adoption of AI, we found, is part of a continuum, the latest stage of investment beyond core and advanced digital technologies. To understand the relationship between a company’s digital capabilities and its ability to deploy the new tools, we looked at the specific technologies at the heart of AI. Our model tested the extent to which underlying clusters of core digital technologies (cloud computing, mobile, and the web) and of more advanced technologies (big data and advanced analytics) affected the likelihood that a company would adopt AI. As Exhibit 1 shows, companies with a strong base in these core areas were statistically more likely to have adopted each of the AI tools—about 30 percent more likely when the two clusters of technologies are combined. These companies presumably were better able to integrate AI with existing digital technologies, and that gave them a head start. This result is in keeping with what we have learned from our survey work. Seventy-five percent of the companies that adopted AI depended on knowledge gained from applying and mastering existing digital capabilities to do so.

Companies with a strong base in core digital technologies and big data analytics are more likely to have adopted an array of AI tools.
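
McKinsey does not publish the model itself; the sketch below is only a hypothetical reconstruction of that kind of analysis (synthetic survey flags, a scikit-learn logistic regression), showing how the presence of the two technology clusters can be related to the likelihood of AI adoption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
# Hypothetical survey flags: firm has core digital tech; firm has big data / advanced analytics
core = rng.integers(0, 2, n)
advanced = rng.integers(0, 2, n)
# Synthetic ground truth: adoption is more likely when the clusters are present
p_adopt = 0.15 + 0.10 * core + 0.12 * advanced
adopted = rng.random(n) < p_adopt

X = np.column_stack([core, advanced])
model = LogisticRegression().fit(X, adopted)

neither = model.predict_proba([[0, 0]])[0, 1]
both = model.predict_proba([[1, 1]])[0, 1]
print(f"Estimated uplift in adoption likelihood with both clusters: {(both - neither) / neither:.0%}")
```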

This digital substructure is still lacking in many companies, and that may be slowing the diffusion of AI. We estimate that only one in three companies had fully diffused the underlying digital technologies and that the biggest gaps were in more recent tools, such as big data, analytics, and the cloud. This weak base, according to our estimates, has put AI out of reach for a fifth of the companies we studied.

Leaders and laggards

Beyond the capability gap, there’s another explanation for the slower adoption of AI among some companies: they may believe that the case for it remains unproved or that it is a moving target and that advances in the offing will give them the chance to leapfrog to leadership positions without a need for early investments.

Read the source study at McKinsey.com.

Scientists Trained AI to Write Poetry; Now It’s Toe-to-Toe With Shakespeare

If science fiction has taught us anything it’s that artificial intelligence will one day lead to the downfall of the entirety of mankind. That day is (probably) still a long way away, if it ever actually happens, but for now we get to enjoy some of the nicer aspects of AI, such as its ability to write poetic masterpieces.

Researchers in Australia, in partnership with the University of Toronto, have developed an algorithm capable of writing poetry. Far from your generic rhymes, this AI actually follows the rules, taking metre into account as it weaves its words. The AI is good. Really good. And it’s even capable of tricking humans into thinking that its poems were penned by a human instead of a machine.

According to the researchers, the AI was trained extensively on the rules it needed to follow to craft an acceptable poem. It was fed nearly 3,000 sonnets as training, and the algorithm tore them apart to teach itself how the words worked with each other. Once the bot was brought up to speed it was tasked with crafting some poems of its own. Here’s a sample:

With joyous gambols gay and still array
No longer when he twas, while in his day
At first to pass in all delightful ways
Around him, charming and of all his days

Not bad, huh? Of course, knowing that an AI made it might make it feel more stilted and dry than if you had read it without any preconceptions, but there’s no denying that it’s a fine poem. In fact, the AI’s poems follow the rules of poetry even more closely than those of human poets like Shakespeare. I guess that’s the cold machine precision kicking in.

When the bot’s verses were mixed with human-written poems and then reviewed by volunteers, the readers were split 50-50 over which were written by a machine. That’s a pretty solid vote of confidence in the AI’s favor, but there were still some things that gave the bot away, including errors in wording and grammar.

Still, it’s a mighty impressive achievement. Perhaps when our robot overlords enslave humanity we’ll at least be treated to some nice poetry.

Read the source article in BGR.

Fluid Data Strategy Needed to Keep Tech Mapped to Business Plan

By Mahesh Lalwani, Vice President, Head of Data & Cognitive Analytics at Mphasis

In today’s world, it should no longer be acceptable to have merely adaptive data. To win customers and market share, an organization must do far more and predict which strategy will unlock the potential its data has to offer. A company must envision how it will compete against today’s known players and future disruptors. Additionally, it needs to anticipate how government rules and regulations will affect its playing field, and it must protect its brand in hostile environments.

Ask any CIO or CDO and they will tell you that it’s fairly complex.

To move an organization onto a more advanced plan of action, CIOs and other executives can think of data strategy in the simple terms of business drivers and technology enablers, and of how to constantly evolve both. Automation is a business driver that commonly prompts companies to consider new data strategies. As the imperative to run leaner operations grows, enterprises find it valuable to automate business processes that would otherwise take up long stretches of time. A fluid data strategy allows a business to mine information on how a certain manual function was performed in order to automate it. A common tech enabler that actualizes this transformation is artificial intelligence (AI): mimicking the way the human mind works, AI-enabled tools can gather the needed data and build a prototype of the tasks to be automated.

Figure-1 illustrates some of the drivers that can shape your data strategy. The vertical axis pits innovation against risks and regulations, and the horizontal axis pits centralized IT against business users, because these represent opposing priorities in most cases. The business drivers in the upper half help you increase the top and bottom lines, whereas those in the lower half keep you from paying hefty fines for non-compliance. The right half represents the priorities of your business users and lines of business, and the left half is what keeps your centralized IT occupied.

Based on its situation, budget, resources, and predicted future needs, the recommendation would be for an organization to focus on just a few interconnected drivers for the next six months. As part of the data strategy, the organization would establish the selected drivers as business goals, allocate specific budgets, bring together teams that understand the affected systems and processes, and define how to measure success and monitor progress.

In another example, suppose an organization wants to reduce costs through lean IT and create new products based on data insights, and at the same time wants to identify technologies to enable this new data strategy for the next six months. Figure-2 indicates that the organization may want to focus on creating dashboards that show how its products and revenues stack up, along with building data lakes and automating data ingestion from upstream sources. One will help identify strengths and gaps in its offerings; the other will create a platform for the future.

Redefining Data Strategy: The Holy Grail of Marketing

Once it has successfully achieved these goals, an organization may want to redefine its data strategy to take up more challenging goals such as the holy grail of marketing: a “cradle-to-grave” lifecycle journey. That will require allocating new budgets, adding experienced marketing analysts and data scientists to the team, and ingesting new datasets into the data lake from Web analytics, marketing automation, and CRM systems, among others.

With time, an organization can learn to (a) strike a balance between competing priorities, (b) keep all teams in sync to achieve new goals every few months as part of a fluid data strategy, and (c) monitor progress frequently. It can become a champion at predicting and defining the right drivers and selecting suitable technology enablers from the likes of Figure-1 and Figure-2 to create the custom, fluid shapes that outline its Agile data strategy.

The new trends observed in the data landscape that will guide organizations in refining their data strategy are indicated in Figure-3.

Business Intelligence

Most business intelligence today is backward-looking and obsolete. Data science and AI give you the tools to mine your data and build models that accurately predict the future. Data science uncovers insights that are otherwise extremely difficult, if not impossible, to achieve. AI helps automate decision-making based on learning.

Data Warehousing

The role of data warehousing has been extended to include data lakes, saving cost and offering the flexibility of the cloud. Data lakes can help reduce computing and storage requirements and costs by ingesting raw data from the data warehouse, performing ETL, and returning aggregates to the warehouse, allowing existing downstream applications to work without any change.
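
As a minimal sketch of that offload pattern (pandas, hypothetical table and column names; a real lake would more likely use Spark over Parquet files), raw extracts are cleansed and aggregated in the lake, and only the compact result is pushed back for the warehouse's existing consumers:

```python
import pandas as pd

# Hypothetical raw extract landed in the data lake
raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region": ["EMEA", "EMEA", "APAC", "APAC"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# ETL runs in the lake: cleanse, then aggregate to the grain the warehouse expects
clean = raw.dropna(subset=["region", "amount"])
summary = clean.groupby("region", as_index=False)["amount"].sum()

# Only the compact aggregate is returned to the warehouse, so existing
# downstream reports keep working against the same table they always used
summary.to_csv("warehouse_sales_by_region.csv", index=False)
print(summary)
```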

Traditional Master Data Management (MDM)

Most traditional MDM initiatives are starting to be seen as never-ending and as providing little, if any, value. Instead, Agile MDM has emerged as far more productive and useful, with use-case specific minimum viable data, automated data quality improvements, and reference data updates with AI in the data pipeline.

Single Version of Truth

Most organizations have considered creating a single version of truth for some of their enterprise datasets, and a few resourceful companies have even used semantic modeling to bring different versions closer. A better approach, though, involves having a single source of truth while allowing many versions of truth. For instance, how many customers paid for a particular movie stream in a given month will likely differ from how many customers watched it. The first number is of interest to the accounting department and the second to marketing; both are different versions of the truth, yet they originate from a single source of data.
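
A tiny illustration of that distinction (hypothetical events table, pandas): both numbers are derived from the same source table, yet accounting and marketing each get their own version of the truth:

```python
import pandas as pd

# Single source of truth: one raw stream-events table for a movie in a given month
events = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "paid":        [True, True, False, False],
    "watched":     [True, False, True, True],
})

paid_customers = events.loc[events["paid"], "customer_id"].nunique()        # accounting's number
watched_customers = events.loc[events["watched"], "customer_id"].nunique()  # marketing's number
print(paid_customers, watched_customers)  # 2 paid vs. 3 watched, from one source
```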

Batch and Files

One more trend we have all seen is the use of real-time streams instead of batches and files. Data’s value decreases quickly with time, so it is best to analyze it in flight, before storing it. Also, the more we store, the more data debt (data we still need to analyze) we accumulate. Most of the time it makes sense to reduce or throw away the unimportant raw data and store only compact, summarized or aggregated data, which can then be made available as a service to other systems, harnessing more value from your data.
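
A minimal sketch of in-flight analysis (plain Python standing in for a real streaming engine such as Kafka consumers or Spark Structured Streaming): raw events are consumed one at a time, only running aggregates are kept, and the compact summary is what gets served onward:

```python
from collections import defaultdict

def sensor_stream():
    """Stand-in for a real-time feed; yields (device_id, reading) events."""
    for event in [("pump-1", 3.2), ("pump-2", 1.1), ("pump-1", 3.4), ("pump-2", 0.9)]:
        yield event

# Analyze in flight: keep compact running aggregates, discard the raw events
count = defaultdict(int)
total = defaultdict(float)
for device, value in sensor_stream():
    count[device] += 1
    total[device] += value

summary = {device: total[device] / count[device] for device in count}
print(summary)  # per-device averages, the only thing stored or exposed as a service
```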

In summation, all businesses clearly stand to gain from adopting what can be called a fluid data strategy. Such an approach gives enterprises the flexibility to pick only those business drivers and tech enablers that are relevant to their business plan. It also provides companies with the room to come back and review their choices every couple of months to tweak and rethread their strategy according to new trends and goals.

Read the source article at ITProPortal.

AI Robot, Immune to Moral Factors, Helping to Make China’s Foreign Policy

Attention, foreign-policy makers. You will soon be working with, or competing against, a new type of robot with the potential to change the game of international politics forever.

Diplomacy is similar to a strategic board game. A country makes a move, the other(s) respond. All want to win.

Artificial intelligence is good at board games. Before a game even starts, the system analyses previous play, learns lessons from defeats or even plays repeatedly against itself to devise strategies that humans have never thought of.

It has defeated world champions in chess and Go. More recently, it has won at no-limit Texas Hold’em poker, an “imperfect information game” in which a player does not have access to all information at all times, a situation familiar in the world of diplomatic affairs.

Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.

The ministry confirmed to the South China Morning Post that there was indeed a plan to use AI in diplomacy.

“Cutting-edge technology, including big data and artificial intelligence, is causing profound changes to the way people work and live. The applications in many industries and sectors are increasing on a daily basis,” a ministry spokesman said last month.

The ministry “will actively adapt to the trend and explore the use of emerging technology for work enhancement and improvement”.

China’s ambition to become a world leader has significantly increased the burden on its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries that are home to 65 per cent of the world’s population.

The unprecedented development strategy requires up to US$900 billion in investment each year for infrastructure construction, some of it in areas with high political, economic or environmental risks.

The researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision.

The system studies the strategy of international politics by drawing on a large amount of data, which can contain information varying from cocktail-party gossip to images taken by spy satellites.

When a policymaker needs to make a quick, accurate decision to achieve a specific goal in a complex, urgent situation, the system can provide a range of options with recommendations for the best move, sometimes in the blink of an eye.

Dr. Feng Shuai, senior fellow with the Shanghai Institutes for International Studies, whose research focuses on AI applications, said the technology of the AI policymaking system was already attracting attention despite being in its early stages.

Several research teams were developing these systems, Feng said. A conference discussing the impact of AI on diplomacy was hosted by the University of International Business and Economics last month in Beijing, at which researchers shared some recent progress.

“Artificial intelligence systems can use scientific and technological power to read and analyse data in a way that humans can’t match,” Feng said.

“Human beings can never get rid of the interference of hormones or glucose.”

The AI policymaker, however, would be immune to passion, honour, fear or other subjective factors. “It would not even consider the moral factors that conflict with strategic goals,” Feng added.

Other nations are believed to be conducting similar research into AI uses in policymaking fields, though details are not available publicly.

But AI does have its own problems, researchers say. It requires a large amount of data, some of which may not be immediately available in certain countries or regions. It requires a clear set of goals, which are sometimes absent at the start of diplomatic interaction. And a system operator could skew the results by altering some parameters.

Read the source article in the South China Morning Post.

3 Companies Using AI to Forge New Advances in Healthcare

When you think of artificial intelligence (AI), you might not immediately think of the healthcare sector.

However, that would be a mistake. AI has the potential to do everything from predicting readmissions, cutting human error and managing epidemics to assisting surgeons in carrying out complex operations.

Here we take a closer look at three intriguing stocks using AI to forge new advances in treating and tackling disease. To pinpoint these three stocks, we used TipRanks’ data to scan for ‘Strong Buy’ stocks in the healthcare sector. These are stocks with substantial Street support, based on ratings from the last three months. We then singled out stocks making important headway in AI and machine learning.

BioXcel Therapeutics Inc.

This exciting clinical stage biopharma is certainly unique. BioXcel (BTAI) applies AI and big data technologies to identify the next wave of neuroscience and immuno-oncology medicines. According to BTAI this approach uses “existing approved drugs and/or clinically validated product candidates together with big data and proprietary machine learning algorithms to identify new therapeutic indices.”

The advantage is twofold: “The potential to reduce the cost and time of drug development in diseases with substantial unmet medical need,” says BioXcel. Indeed, we are talking $50 million to $100 million, versus the more than $2 billion typically associated with developing a novel drug. Right now, BioXcel has several therapies in its pipeline, including BXCL501 for prostate and pancreatic cancer. And it seems like the Street approves: the stock has received five buy ratings in the last three months with an average price target of $20.40 (115% upside potential).

“Unlocking efficiency in drug development” is how H.C. Wainwright analyst Ram Selvaraju describes BioXcel’s drug repurposing and repositioning. “The approach BioXcel Therapeutics is taking has been validated in recent years by the advent of several repurposed products that have gone on to become blockbuster franchises (>$1 billion in annual sales).” However, he adds that “we are not currently aware of many other firms that are utilizing a systematic AI-based approach to drug development, and certainly none with the benefit of the prior track record that BioXcel Therapeutics’ parent company, BioXcel Corp., possesses.”

Microsoft Corp.

Software giant Microsoft believes that we will soon live in a world infused with artificial intelligence. This includes healthcare.

According to Eric Horvitz, head of Microsoft Research’s Global Labs, “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years.” So it’s not surprising that Microsoft is seeking to stay ahead of the curve with its own Healthcare NExT initiative, launched in 2017. The goal of Healthcare NExT is to accelerate healthcare innovation through artificial intelligence and cloud computing. This already encompasses a number of promising solutions, projects and AI accelerators.

Take Project EmpowerMD, a research collaboration with UPMC. The purpose here is to use AI to create a system that listens and learns from what doctors say and do, dramatically reducing the burden of note-taking for physicians. According to Microsoft, “The goal is to allow physicians to spend more face-to-face time with patients, by bringing together many services from Microsoft’s Intelligent Cloud including Custom Speech Services (CSS) and Language Understanding Intelligent Services (LUIS), customized for the medical domain.”

On the other end of the scale, Microsoft is also employing AI for genome mapping (alongside St Jude Children’s Research Hospital) and disease diagnostics. Most notably, Microsoft recently partnered with one of the largest health systems in India, Apollo Hospitals, to create the AI Network for Healthcare. Microsoft explains: “Together, we will be developing and deploying new machine learning models to gauge patient risk for heart disease in hopes of preventing or reversing these life-threatening conditions.”

Globus Medical Inc.

This medical device company is pioneering minimally invasive surgery, including with the assistance of the ExcelsiusGPS robot. Globus Medical describes how the Excelsius manages to combine the benefits of navigation, imagery and robotics into one single technology. And the future possibilities are even more exciting.

According to top Canaccord Genuity analyst Kyle Rose, there are multiple growth opportunities for GMED. He explains: “Currently, ExcelsiusGPS supports the placement of nails and screws in both trauma and spine cases, and we expect Globus to leverage the platform for broader orthopedic indications in future years.” Encouragingly, Rose notes that management has already received positive early feedback and robust demand for the medical robot.

Indeed, in the first quarter Globus reported placing 13 robots vs. Rose’s estimate of just 5. This extra success translated to ~$7.8 million in upside relative to his estimates. On the earnings call, Globus reiterated its long-term vision for ExcelsiusGPS as a robotic platform with far more advanced capabilities. This could even include using augmented reality to construct a 3D view of the patient’s external and internal anatomy.

Read the source article in TheStreet.

To Share or Not to Share: That is the Big Data Question

Between the disclosures this year about Facebook’s lax data sharing policies and the European Union’s GDPR (General Data Protection Regulation), a lot of people are talking about data privacy and consumer rights. How much data should you share as a consumer with companies like Facebook or Google?

But what about businesses?

Enterprise organizations may be dealing with their own data privacy dilemma: should they share their corporate data with partners, vendors or some other organization? If so, what data is OK to share, and what should they keep private and proprietary? After all, data is the new oil. Amazon, Facebook, and Google have all built multi-billion dollar companies by collecting and leveraging data.

Although it is one of the top assets a company may have, there may be compelling reasons to share data, too. For instance, leading edge cancer centers could potentially speed up and advance society’s effort to cure cancer if they shared the data that each of them collected. But sharing it with a competitor could also erode their own competitive edge in the market.

Organizations may also be considering participation in a vendor program such as one under development at SAP called Data Intelligence that will anonymize enterprise customer data and allow those customers to benchmark themselves against the rest of the market.

“People are realizing that the data they have has some value, either for internal purposes or selling to a data partner, and that is leading to more awareness of how they can share data anonymously,” Mike Flannagan of SAP told InformationWeek in an interview earlier this year. He said that different companies are at different levels of maturity in terms of how they think about their data.

Even if you share data that has been anonymized in order to train an algorithm, the question remains whether you are giving away your competitive edge when you share your anonymized data assets. Organizations need to be careful.
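
The article does not prescribe a technique, but as a minimal sketch of the kind of pre-processing such sharing involves (hypothetical field names; note that salted hashing is pseudonymization, not a guarantee of anonymity), direct identifiers are replaced and quasi-identifiers coarsened before a record leaves the company:

```python
import hashlib

SECRET_SALT = "rotate-and-keep-private"  # hypothetical; never shared with the data partner

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a salted hash before the record is shared."""
    return hashlib.sha256((SECRET_SALT + customer_id).encode()).hexdigest()[:16]

record = {"customer_id": "C-10293", "zip": "94105", "lifetime_value": 1832.50}
shared = {
    "customer_ref": pseudonymize(record["customer_id"]),
    "zip3": record["zip"][:3],  # coarsen quasi-identifiers to reduce re-identification risk
    "lifetime_value": record["lifetime_value"],
}
print(shared)
```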

“Data is extremely valuable,” said Ali Ghodsi, co-founder and CEO of Databricks (the big data platform with its origins offering hosted Spark) and an adjunct professor at the University of California, Berkeley. In Ghodsi’s experience, organizations don’t want to share their data, but they are willing to sell access to it. For instance, organizations might sell limited access to particular data sets for a finite period of time.

Data aggregators are companies that will create data sets to sell by scraping the web, Ghodsi said.

Then there are older companies that may have years or decades of data that have not been exposed yet to applied AI and machine learning, Ghodsi said, and those companies may hope to use those gigantic data sets to catch up and gain a competitive edge. For instance, any retailer with a loyalty card may have aggregated data over 10 or 20 years.

In Ghodsi’s experience, organizations want more data, but they are unwilling to share it, sometimes even within their own organizations. In many organizations, IT controls access to the data and may not always be willing to say yes to all the requests from data scientists in the line-of-business areas. That’s among the topics in a December 2017 paper co-authored by Ghodsi and other researchers from UC Berkeley titled A Berkeley View of Systems Challenges for AI. Ghodsi said that the group is doing research to find ways in which you can incentivize companies to share more of their data.  One of the ways is in the model itself — the machine learning model is a very compact summary of all the data.
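
As an illustrative sketch of that last point (synthetic data, scikit-learn; not the Berkeley group's actual proposal), the artifact that changes hands can be a handful of fitted parameters rather than the proprietary records themselves:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                        # proprietary records stay in-house
y = (X @ np.array([0.8, -0.5, 0.3]) > 0).astype(int)  # synthetic target

model = LogisticRegression().fit(X, y)

# What could be shared with a partner: a compact summary of the data, not the data
shared_artifact = {
    "coef": model.coef_.round(3).tolist(),
    "intercept": model.intercept_.round(3).tolist(),
}
print(shared_artifact)
```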

Read the source article in Information Week.

Why GDPR Will Make Machine Learning Not So Legal

Does GDPR require machine learning algorithms to explain their output? Maybe yes, maybe no; in short, probably not. But there is enough ambiguity to be clarified, and that will keep data scientists, lawyers and industry influencers busy.

Read the source article at Vinod Sharma’s Blog.
