Alibaba to Challenge Amazon with a Cloud Service Push in Europe

Alibaba Group Holding Ltd. is in talks with BT Group PLC about a cloud services partnership as the Chinese internet giant challenges Amazon.com Inc.’s dominance in Europe.

An agreement between Alibaba and the IT consulting unit of Britain’s former phone monopoly could be similar to Alibaba’s existing arrangement with Vodafone Group Plc in Germany, according to a person familiar with the matter, who asked not to be identified as the talks are private.

A BT spokeswoman confirmed by email that the U.K. telecom company is in talks with Alibaba Cloud and declined to give details. A spokesman for Alibaba declined to comment.

Started in 2009, Alibaba Cloud has expanded fast beyond China in a direct challenge to Amazon Web Services, the e-commerce giant’s division that dominates cloud computing. Alibaba Cloud is now the fourth-biggest global provider of cloud infrastructure and related services, behind Amazon, Microsoft Corp. and Alphabet Inc.’s Google, according to a report last month by Synergy Research Group.

Europe has become key to Alibaba Cloud’s success outside China, with prospects in the U.S. made murky by President Donald Trump’s America First agenda. Alibaba has pulled back in the U.S. just as tensions between America and China have escalated under Trump.

Alibaba started the German partnership with Vodafone in 2016. The Hangzhou, China-based company put its first European data center in Frankfurt, allowing Vodafone to resell Alibaba Cloud services such as data storage and analytics. Last week, Alibaba Cloud moved into France, agreeing to work with transport and communications company Bollore SA in cloud computing, big data and artificial intelligence.

Telecom dilemma

BT’s talks with Alibaba underscore a dilemma for the telecom industry. As big tech companies and consulting firms muscle in on the carriers’ business of installing and maintaining IT networks for large corporations, telecom operators must choose whether to resist the newcomers or accept their help, and, if the latter, which ones to ally with.

BT Global Services has struck up partnerships with Amazon, Microsoft and Cisco Systems Inc., while Spain’s Telefonica SA works with Amazon. In Germany, Deutsche Telekom AG’s T-Systems has partners including China’s Huawei Technologies Co. and Cisco, yet it has positioned its public cloud offering as an alternative to U.S. giants Amazon and Google, touting its ability to keep data within Germany, where strict data-protection laws put it out of reach of U.S. authorities.

A deal with Alibaba could bolster BT’s cloud computing and big data skills as clients shift more of their IT capacity offsite to cut costs.

BT is undertaking a digital overhaul of its Global Services business in a restructuring involving thousands of job cuts after revenue at the division fell 9% last year. The poor performance of Global Services and the ouster last month of BT CEO Gavin Patterson have fueled speculation among some analysts that BT may sell the division. Still, the unit is seen by some investors as critical for BT’s relationships with multinational clients.

Read the source article in Digital Commerce 360.

How 3 Companies Use AI to Forge Advances in Healthcare

When you think of artificial intelligence (AI), you might not immediately think of the healthcare sector.

However, that would be a mistake. AI has the potential to do everything from predicting readmissions, cutting human error and managing epidemics to assisting surgeons to carry out complex operations.

Here we take a closer look at three intriguing stocks using AI to forge new advances in treating and tackling disease. To pinpoint these three stocks, we used TipRanks’ data to scan for ‘Strong Buy’ stocks in the healthcare sector. These are stocks with substantial Street support, based on ratings from the last three months. We then singled out stocks making important headway in AI and machine learning.

BioXcel Therapeutics Inc.

This exciting clinical-stage biopharma is certainly unique. BioXcel (BTAI) applies AI and big data technologies to identify the next wave of neuroscience and immuno-oncology medicines. According to BTAI, this approach uses “existing approved drugs and/or clinically validated product candidates together with big data and proprietary machine learning algorithms to identify new therapeutic indices.”

The advantage is twofold: “The potential to reduce the cost and time of drug development in diseases with substantial unmet medical need,” says BioXcel. Indeed, we are talking $50 million to $100 million, versus the more than $2 billion typically associated with developing a novel drug. Right now, BioXcel has several therapies in its pipeline, including BXCL701 for prostate and pancreatic cancer. And it seems the Street approves: the stock has received five buy ratings in the last three months, with an average price target of $20.40 (115% upside potential).

“Unlocking efficiency in drug development” is how H.C. Wainwright analyst Ram Selvaraju describes BioXcel’s drug repurposing and repositioning. “The approach BioXcel Therapeutics is taking has been validated in recent years by the advent of several repurposed products that have gone on to become blockbuster franchises (>$1 billion in annual sales).” However, he adds that “we are not currently aware of many other firms that are utilizing a systematic AI-based approach to drug development, and certainly none with the benefit of the prior track record that BioXcel Therapeutics’ parent company, BioXcel Corp., possesses.”

Microsoft Corp.

Software giant Microsoft (MSFT) believes that we will soon live in a world infused with artificial intelligence. This includes healthcare.

According to Eric Horvitz, head of Microsoft Research’s Global Labs, “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years.” So it’s not surprising that Microsoft is seeking to stay ahead of the curve with its own Healthcare NExT initiative, launched in 2017. The goal of Healthcare NExT is to accelerate healthcare innovation through artificial intelligence and cloud computing. This already encompasses a number of promising solutions, projects and AI accelerators.

Take Project EmpowerMD, a research collaboration with UPMC. The purpose here is to use AI to create a system that listens and learns from what doctors say and do, dramatically reducing the burden of note-taking for physicians. According to Microsoft, “The goal is to allow physicians to spend more face-to-face time with patients, by bringing together many services from Microsoft’s Intelligent Cloud including Custom Speech Services (CSS) and Language Understanding Intelligent Services (LUIS), customized for the medical domain.”
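
To make that pipeline a little more concrete, here is a minimal Python sketch, not Microsoft’s EmpowerMD code, of the intent-extraction half of such a system: one line of transcribed clinical dialogue is sent to a LUIS-style REST endpoint and the top-scoring intent and entities are read back. The endpoint region, app ID, subscription key and the existence of a medically trained LUIS app are all placeholders and assumptions rather than details from the article.

```python
import requests

# Hedged sketch: query a LUIS-style endpoint for the top intent and entities
# in one utterance. Endpoint, APP_ID and SUBSCRIPTION_KEY are placeholders
# for your own Cognitive Services resources.
LUIS_ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps"
APP_ID = "<your-luis-app-id>"          # hypothetical
SUBSCRIPTION_KEY = "<your-luis-key>"   # hypothetical


def extract_intent(utterance: str) -> dict:
    """Return the top-scoring intent and entities LUIS finds in one utterance."""
    resp = requests.get(
        f"{LUIS_ENDPOINT}/{APP_ID}",
        params={"subscription-key": SUBSCRIPTION_KEY, "q": utterance},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()
    return {
        "intent": result.get("topScoringIntent", {}).get("intent"),
        "entities": result.get("entities", []),
    }


if __name__ == "__main__":
    print(extract_intent("Patient reports chest pain radiating to the left arm"))
```

In a real system along EmpowerMD’s lines, the utterances would arrive from a streaming speech-to-text service such as Custom Speech Service rather than a hard-coded string.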

On the other end of the scale, Microsoft is also employing AI for genome mapping (alongside St. Jude Children’s Research Hospital) and disease diagnostics. Most notably, Microsoft recently partnered with one of the largest health systems in India, Apollo Hospitals, to create the AI Network for Healthcare. Microsoft explains: “Together, we will be developing and deploying new machine learning models to gauge patient risk for heart disease in hopes of preventing or reversing these life-threatening conditions.”

Read the source article at TheStreet.com.

The Race is On to Find the Optimal AI Application Architecture

AI applications often benefit from fundamentally different architectures than those used by traditional enterprise apps. And vendors are turning somersaults to provide these new components.

“The computing field is experiencing a near-Cambrian-like event as the surging interest in enterprise AI fuels innovations that make it easier to adopt and scale AI,” said Keith Strier, global and Americas AI leader, advisory services at EY.

“Investors are pouring capital into ventures that reduce the complexity of AI, while more established infrastructure providers are upgrading their offerings from chips and storage to networking and cloud services to accelerate deployment.”

The challenge for CIOs, he said, will be matching AI use cases to the type of artificial intelligence architecture best suited for the job.

Because AI is math at an enormous scale, it calls for a different set of technical and security requirements than traditional enterprise workloads, Strier said. Maximizing the value of AI use cases hinges, in part, on vendors being able to provide economical access to the technical infrastructure, cloud and related AI services that make these advanced computations possible.

But that is already happening, he said, and more advances in artificial intelligence architectures are on the horizon. Increased flexibility, power and speed in compute architectures will be catalyzed not only by the small band of high-performance computing firms at the forefront of the field, he said, but also by the broader HPC ecosystem that includes the chip and cloud-service startups battling to set the new gold standard for AI computations.

As the bar lowers for entry-level AI projects, adoption will go up and the network effect will kick in, creating yet more innovation and business benefit for everyone — enterprises and vendors alike, he said.

In the meantime, CIOs can give their enterprises a leg up by becoming familiar with the challenges associated with building an artificial intelligence architecture for enterprise use.

Chip evolution

One key element of the transition from traditional compute architectures to AI architectures has been the rise of GPUs, field-programmable gate arrays (FPGAs) and special-purpose AI chips. The adoption of GPU- and FPGA-based architectures enables new levels of performance and flexibility in compute and storage systems, which allows solution providers to offer a variety of advanced services for AI and machine learning applications.

“These are chip architectures that offload many of the more advanced functions [such as AI training] and can then deliver a streamlined compute and storage stack that delivers unmatched performance and efficiency,” said Surya Varanasi, co-founder and CTO of Vexata Inc., a data management solutions provider.

But new chips only get enterprises so far in capitalizing on artificial intelligence. Finding the best architecture for AI workloads involves a complicated calculus of data bandwidth and latency. Faster networks are key. But many AI algorithms must also wait a full cycle to queue up the next set of data, so latency becomes a factor.
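
One common way to keep an accelerator from sitting idle for that full cycle is to overlap data loading with compute. The sketch below is a generic Python illustration of that prefetching pattern, not code from any vendor mentioned here; load_batch and train_step are hypothetical stand-ins for a storage reader and a model update, and real frameworks ship built-in equivalents.

```python
import queue
import threading

# Generic sketch of hiding data-loading latency by overlapping I/O with
# compute: a background thread keeps a small buffer of batches filled so
# the accelerator is not idle while the next batch is fetched.


def load_batch(index):
    # Placeholder: read and decode one batch from (remote) storage.
    return {"index": index}


def train_step(batch):
    # Placeholder: one forward/backward pass on the accelerator.
    pass


def train_with_prefetch(num_batches, depth=4):
    batches = queue.Queue(maxsize=depth)   # small buffer of ready batches

    def producer():
        for i in range(num_batches):
            batches.put(load_batch(i))     # blocks when the buffer is full
        batches.put(None)                  # sentinel: no more data

    threading.Thread(target=producer, daemon=True).start()

    while True:
        batch = batches.get()              # usually ready immediately
        if batch is None:
            break
        train_step(batch)                  # compute overlaps the next fetch


if __name__ == "__main__":
    train_with_prefetch(num_batches=100)
```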

Read the source article at TechTarget’s SearchCIO.

StubHub Aims to Build Powerful AI Systems Working with Pivotal and Google Cloud

StubHub is best known as a destination for buying and selling event tickets. The company operates in 48 countries and sells a ticket every 1.3 seconds. But the company wants to go beyond that and provide its users with a far more comprehensive set of services around entertainment. To do that, it’s working on changing its development culture and infrastructure to become more nimble. As the company announced today, it’s betting on Google Cloud and Pivotal Cloud Foundry as the infrastructure for this move.

StubHub CTO Matt Swann told me that the idea behind going with Pivotal, and the twelve-factor app model that it entails, is to help the company accelerate its journey and give it the option to run new apps in both on-premises and cloud environments.
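
The article doesn’t show any StubHub code, but the twelve-factor model Swann refers to is concrete enough to sketch. Its “store config in the environment” principle, for instance, keeps environment-specific settings out of the build so the same artifact can run on premises or in the cloud; the Python snippet below illustrates the idea with hypothetical variable names, not StubHub’s actual configuration.

```python
import os

# Illustration of the twelve-factor "config in the environment" principle:
# the same build artifact runs on premises or in the cloud, and only the
# environment variables change. Variable names are hypothetical.
class Settings:
    def __init__(self) -> None:
        self.database_url = os.environ["DATABASE_URL"]         # backing service
        self.events_api_base = os.environ["EVENTS_API_BASE"]   # hypothetical third-party API
        self.log_level = os.environ.get("LOG_LEVEL", "INFO")   # optional, with a default


if __name__ == "__main__":
    settings = Settings()  # raises KeyError if a required variable is missing
    print("running with log level", settings.log_level)
```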

“We’re coming from a place where we are largely on premise,” said Swann. “Our aim is to become increasingly agile — where we are going to focus on building balanced and focused teams with a global mindset.” To do that, Swann said, the team decided to go with the best platforms for enabling that shift, ones that “remove the muck that comes with how developers work today.”

As for Google, Swann noted that this was an easy decision because the team wanted to leverage that company’s infrastructure and machine learning tools like Cloud ML. “We are aiming to build some of the most powerful AI systems focused on this space so we can be ahead of our customers,” he said. Given the number of users, StubHub sits on top of a lot of data — and that’s exactly what you need when you want to build AI-powered services. What exactly these will look like remains to be seen, and Swann has only been on the job for six months. We can probably expect to see more from the company in this space in the coming months.

“Digital transformation is on the mind of every technology leader, especially in industries requiring the capability to rapidly respond to changing consumer expectations,” said Bill Cook, president of Pivotal. “To adapt, enterprises need to bring together the best of modern developer environments with software-driven customer experiences designed to drive richer engagement.”

StubHub has already spun up its new development environment and plans to launch all new apps on this new infrastructure. Swann acknowledged that the company won’t be switching all of its workloads over to the new setup soon. But he does expect the company to hit a tipping point in the next year or so.

He also noted that this overall transformation means the company will look beyond its own walls and toward working with more third-party APIs, especially with regard to transportation services and merchants that offer services around events.

Throughout our conversation, Swann also stressed that this isn’t a technology change for the sake of it.

Read the source article at TechCrunch.

Here is How the AI Cloud Can Produce the Richest Companies Ever

For years, Swami Sivasubramanian’s wife has wanted to get a look at the bears that come out of the woods on summer nights to plunder the trash cans at their suburban Seattle home. So over the Christmas break, Sivasubramanian, the head of Amazon’s AI division, began rigging up a system to let her do just that.

So far he has designed a computer model that can train itself to identify bears—and ignore raccoons, dogs, and late-night joggers. He did it using an Amazon cloud service called SageMaker, a machine-learning product designed for app developers who know nothing about machine learning. Next, he’ll install Amazon’s new DeepLens wireless video camera on his garage. The $250 device, which will go on sale to the public in June, contains deep-learning software to put the model’s intelligence into action and send an alert to his wife’s cell phone whenever it thinks it sees an ursine visitor.
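
The story doesn’t include Sivasubramanian’s code, but a SageMaker training job of this kind is typically kicked off with a single API call. The sketch below uses boto3’s CreateTrainingJob; the container image, S3 paths, IAM role and hyperparameters are placeholders and assumptions for illustration, not details from the article.

```python
import boto3

# Hedged sketch of launching a SageMaker training job for an image
# classifier (e.g., "bear" vs. "not bear"). The ECR image, S3 paths and
# IAM role below are placeholders, not values from the article.
sm = boto3.client("sagemaker", region_name="us-west-2")

sm.create_training_job(
    TrainingJobName="bear-detector-001",
    AlgorithmSpecification={
        "TrainingImage": "<account>.dkr.ecr.us-west-2.amazonaws.com/<image-classification-image>",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account>:role/<sagemaker-execution-role>",
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://<bucket>/bears/train/",
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://<bucket>/bears/output/"},
    ResourceConfig={"InstanceType": "ml.p2.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 50},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
    HyperParameters={"epochs": "10"},  # illustrative only
)
```

Once trained, the resulting model artifact is what a device like DeepLens would load to run inference locally and fire off alerts.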

Sivasubramanian’s bear detector is not exactly a killer app for artificial intelligence, but its existence is a sign that the capabilities of machine learning are becoming far more accessible. For the past three years, Amazon, Google, and Microsoft have been folding features such as face recognition in online photos and language translation for speech into their respective cloud services—AWS, Google Cloud, and Azure. Now they are in a headlong rush to build on these basic capabilities to create AI-based platforms that can be used by almost any type of company, regardless of its size and technical sophistication.

“Machine learning is where the relational database was in the early 1990s: everyone knew it would be useful for essentially every company, but very few companies had the ability to take advantage of it,” says Sivasubramanian.

Amazon, Google, and Microsoft—and to a lesser extent companies like Apple, IBM, Oracle, Salesforce, and SAP—have the massive computing resources and armies of talent required to build this AI utility. And they also have the business imperative to get in on what may be the most lucrative technology mega-trend yet.

“Ultimately, the cloud is how most companies are going to make use of AI—and how technology suppliers are going to make money off of it,” says Nick McQuire, an analyst with CCS Insight.

Quantifying the potential financial rewards is difficult, but for the leading AI cloud providers they could be unprecedented. AI could double the size of the $260 billion cloud market in coming years, says Rajen Sheth, senior director of product management in Google’s Cloud AI unit. And because of the nature of machine learning—the more data the system gets, the better the decisions it will make—customers are more likely to get locked in to an initial vendor.

In other words, whoever gets out to the early lead will be very difficult to unseat. “The prize will be to become the operating system of the next era of tech,” says Arun Sundararajan, who studies how digital technologies affect the economy at NYU’s Stern School of Business. And Puneet Shivam, president of Avendus Capital US, an investment bank, says: “The leaders in the AI cloud will become the most powerful companies in history.”

It’s not just Amazon, Google, and Microsoft that are pursuing dominance. Chinese giants such as Alibaba and Baidu are becoming major forces, particularly in Asian markets. Leading enterprise software companies including Oracle, Salesforce, and SAP are embedding machine learning into their apps. And thousands of AI-related startups have ambitions to become tomorrow’s AI leaders.

Read the source article at MIT Technology Review.

Google Cloud Platform cuts the price of GPUs by up to 36 percent

Google has announced price cuts of up to 36 percent for the use of Nvidia’s Tesla GPUs through its Compute Engine. In U.S. regions, the somewhat older K80 GPUs will now cost $0.45 per hour, while the newer and more powerful P100 machines will cost $1.46 per hour (all with per-second billing).

The company is also dropping the prices for preemptible local SSDs by almost 40 percent. “Preemptible local SSDs” refers to local SSDs attached to Google’s preemptible VMs. You can’t attach GPUs to preemptible instances, though, so this is a nice little bonus announcement — but it isn’t going to directly benefit GPU users.

As for the new GPU pricing, it’s clear that Google is aiming this feature at developers who want to run their own machine learning workloads on its cloud, though there also are a number of other applications — including physical simulations and molecular modeling — that greatly benefit from the thousands of cores now available on these GPUs. The P100, which is officially still in beta on the Google Cloud Platform, features 3,584 cores, for example.

Developers can attach up to four P100 and eight K80 dies to each instance. As with regular VMs, GPU users will also receive sustained-use discounts, though most users probably don’t keep their GPUs running for a full month.
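
For a sense of what those rates mean in practice, here is a quick back-of-the-envelope calculation in Python at the quoted U.S. on-demand prices; the run lengths are illustrative, and sustained-use discounts are ignored.

```python
# Back-of-the-envelope cost of a training run at the U.S. on-demand rates
# quoted above ($0.45/hr per K80, $1.46/hr per P100). Sustained-use
# discounts are ignored; run lengths are illustrative, not from the article.
K80_PER_HOUR = 0.45
P100_PER_HOUR = 1.46


def run_cost(rate_per_gpu_hour, gpus, hours):
    return rate_per_gpu_hour * gpus * hours


# Maximum attachable per instance, per the article: 8 x K80 or 4 x P100.
print(f"8 x K80 for 24 h:  ${run_cost(K80_PER_HOUR, 8, 24):.2f}")   # $86.40
print(f"4 x P100 for 24 h: ${run_cost(P100_PER_HOUR, 4, 24):.2f}")  # $140.16
```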

It’s hard not to see this announcement in the light of AWS’s upcoming annual developer conference, which will take over most of Las Vegas’s hotel conference space next week. AWS is expected to make a number of AI and machine learning announcements, and chances are we’ll see some price cuts from AWS, too.

Read the source article at TechCrunch.