Alibaba to Challenge Amazon with a Cloud Service Push in Europe

Alibaba Group Holding Ltd. is in talks with BT Group PLC about a cloud services partnership as the Chinese internet giant challenges Amazon.com Inc.’s dominance in Europe.

An agreement between Alibaba and the IT consulting unit of Britain’s former phone monopoly could be similar to Alibaba’s existing arrangement with Vodafone Group Plc in Germany, according to a person familiar with the matter, who asked not to be identified as the talks are private.

A BT spokeswoman confirmed by email that the U.K. telecom company is in talks with Alibaba Cloud and declined to give details. A spokesman for Alibaba declined to comment.

Started in 2009, Alibaba Cloud has expanded fast beyond China in a direct challenge to Amazon Web Services, the e-commerce giant’s division that dominates cloud computing. Alibaba Cloud is now the fourth-biggest global provider of cloud infrastructure and related services, behind Amazon, Microsoft Corp. and Alphabet Inc.’s Google, according to a report last month by Synergy Research Group.

Europe has become key to Alibaba Cloud’s success outside China, with prospects in the U.S. made murky by President Donald Trump’s America First agenda. Alibaba has pulled back in the U.S. just as tensions between America and China have escalated under Trump.

Alibaba started the German partnership with Vodafone in 2016. The Hangzhou, China-based company put its first European data center in Frankfurt, allowing Vodafone to resell Alibaba Cloud services such as data storage and analytics. Last week, Alibaba Cloud moved into France, agreeing to work with transport and communications company Bollore SA in cloud computing, big data and artificial intelligence.

Telecom dilemma

BT’s talks with Alibaba underscore a dilemma for the telecom industry. As big tech companies and consulting firms muscle in on the carriers’ business of installing and maintaining IT networks for large corporations, telecom operators must choose whether to resist the newcomers or accept their help, and in the latter case decide which to ally with.

BT Global Services has struck up partnerships with Amazon, Microsoft and Cisco Systems Inc., while Spain’s Telefonica SA works with Amazon. In Germany, while Deutsche Telekom AG’s T-Systems has partners including China’s Huawei Technologies Co. and Cisco, it has structured its public cloud offering as an alternative to U.S. giants Amazon and Google—touting its ability to keep data within Germany, where strict data-protection laws apply, entirely out of reach of U.S. authorities.

A deal with Alibaba could bolster BT’s cloud computing and big data skills as clients shift more of their IT capacity offsite to cut costs.

BT is undertaking a digital overhaul of its Global Services business in a restructuring involving thousands of job cuts after revenue at the division fell 9% last year. The poor performance of Global Services and the ouster last month of BT CEO Gavin Patterson have fueled speculation among some analysts that BT may sell the division. Still, the unit is seen by some investors as critical for BT’s relationships with multinational clients.

Read the source article in Digital Commerce 360.

3 Companies Using AI to Forge New Advances in Healthcare

When you think of artificial intelligence (AI), you might not immediately think of the healthcare sector.

However, that would be a mistake. AI has the potential to do everything from predicting readmissions, cutting human error and managing epidemics to assisting surgeons to carry out complex operations.

Here we take a closer look at three intriguing stocks using AI to forge new advances in treating and tackling disease. To pinpoint these three stocks, we used TipRanks’ data to scan for ‘Strong Buy’ stocks in the healthcare sector. These are stocks with substantial Street support, based on ratings from the last three months. We then singled out stocks making important headway in AI and machine learning.

BioXcel Therapeutics Inc.

This exciting clinical stage biopharma is certainly unique. BioXcel (BTAI) applies AI and big data technologies to identify the next wave of neuroscience and immuno-oncology medicines. According to BTAI this approach uses “existing approved drugs and/or clinically validated product candidates together with big data and proprietary machine learning algorithms to identify new therapeutic indices.”

The advantage is twofold: “The potential to reduce the cost and time of drug development in diseases with substantial unmet medical need,” says BioXcel. Indeed, we are talking $50 million to $100 million, compared with the more than $2 billion typically associated with the development of novel drugs. Right now, BioXcel has several therapies in its pipeline, including BXCL701 for prostate and pancreatic cancer. And it seems like the Street approves. The stock has received five buy ratings in the last three months with an average price target of $20.40 (115% upside potential).

“Unlocking efficiency in drug development” is how H.C. Wainwright analyst Ram Selvaraju describes BioXcel’s drug repurposing and repositioning. “The approach BioXcel Therapeutics is taking has been validated in recent years by the advent of several repurposed products that have gone on to become blockbuster franchises (>$1 billion in annual sales).” However, he adds that “we are not currently aware of many other firms that are utilizing a systematic AI-based approach to drug development, and certainly none with the benefit of the prior track record that BioXcel Therapeutics’ parent company, BioXcel Corp., possesses.”

Microsoft Corp.

Software giant Microsoft believes that we will soon live in a world infused with artificial intelligence. This includes healthcare.

According to Eric Horvitz, head of Microsoft Research’s Global Labs, “AI-based applications could improve health outcomes and the quality of life for millions of people in the coming years.” So it’s not surprising that Microsoft is seeking to stay ahead of the curve with its own Healthcare NExT initiative, launched in 2017. The goal of Healthcare NExT is to accelerate healthcare innovation through artificial intelligence and cloud computing. This already encompasses a number of promising solutions, projects and AI accelerators.

Take Project EmpowerMD, a research collaboration with UPMC. The purpose here is to use AI to create a system that listens and learns from what doctors say and do, dramatically reducing the burden of note-taking for physicians. According to Microsoft, “The goal is to allow physicians to spend more face-to-face time with patients, by bringing together many services from Microsoft’s Intelligent Cloud including Custom Speech Services (CSS) and Language Understanding Intelligent Services (LUIS), customized for the medical domain.”

On the other end of the scale, Microsoft is also employing AI for genome mapping (alongside St Jude Children’s Research Hospital) and disease diagnostics. Most notably, Microsoft recently partnered with one of the largest health systems in India, Apollo Hospitals, to create the AI Network for Healthcare. Microsoft explains: “Together, we will be developing and deploying new machine learning models to gauge patient risk for heart disease in hopes of preventing or reversing these life-threatening conditions.”

Globus Medical Inc.

This medical device company is pioneering minimally invasive surgery, including with the assistance of the ExcelsiusGPS robot. Globus Medical describes how the Excelsius manages to combine the benefits of navigation, imagery and robotics into one single technology. And the future possibilities are even more exciting.

According to top Canaccord Genuity analyst Kyle Rose, there are multiple growth opportunities for GMED. He explains: “Currently, ExcelsiusGPS supports the placement of nails and screws in both trauma and spine cases, and we expect Globus to leverage the platform for broader orthopedic indications in future years.” Encouragingly, Rose notes that management has already received positive early feedback and robust demand for the medical robot.

Indeed, in the first quarter Globus reported placing 13 robots vs. Rose’s estimate of just 5. This outperformance translated to ~$7.8 million in upside relative to his estimates. On the earnings call, Globus reiterated its long-term vision for ExcelsiusGPS as a robotic platform with far more advanced capabilities. This could even include using augmented reality to construct a 3D view of the patient’s external and internal anatomy.

Read the source article in TheStreet.

AI Influencing Emerging Education Tech Companies

AI technology is being incorporated in solutions offered by some of the 11 startups or newcomers to education chosen from a field of more than 40 companies as “emerging partners” by the State Educational Technology Directors Association (SETDA) for the 2018-19 school year.

The program introduces companies to state-level digital education leaders, where they can gain insights into the K-12 market. For ed-tech directors, the focus is on locating companies that are “innovative and creative in solving problems that exist in the market,” said Melissa Greene, the director of strategic partnerships for SETDA, in an interview.

Among the newly selected emerging partners are Wonder Workshop, a company that built a national consumer brand and is now looking to make inroads into K-8 coding instruction; Loose Canon, a startup launched by an author and former English teacher who wants to encourage educators to offer students free choice in the books they read for credit in classes; and Ask School Data, founded by a 35-year veteran district technologist who wants teachers to be able to access student data by speaking to a device driven by artificial intelligence.

This is the sixth year that the collaboration has spotlighted new technologies for ed-tech leaders. “It opens the door to important relationships and conversations while providing valuable opportunities for these growing companies to get in front of nearly all 50 states at the same time,” said Tracy Weeks, SETDA’s executive director.

At the same time, the national organization benefits when the emerging partners “help teach us about what’s up and coming, what’s new,” said Greene. More companies are returning for year two of the program than ever before, she said.

Of the newcomers that applied, Greene said there were more niche ed-tech solutions this year than ever before. Where learning management systems and video education programs predominated in the past, this group of applicants represented providers aimed at more specific needs.

Companies Making State Connections

Returning for the third year of partnering with SETDA are Classcraft, LeaRn and MIDAS Education. Second-year returnees will be CatchOn, Cignition Inc., Readorium, Streamable Learning, and Vital Insight.

Here’s a look at the first-year cohort:

  • Ask School Data, based on Amazon’s Alexa platform, retrieves student data on voice command and recites it aloud to an educator;
  • Blending Education advances the idea of “microlearning” as a way of delivering content in small, manageable units, as an avenue to personalized learning;
  • GreyEd Solutions markets FilterED, an adaptive, cloud-based tool for school leaders to view the current technology landscape with the evidence, data, and context needed to prioritize, implement, measure, and monitor ongoing technology initiatives;
  • GoEnnounce offers a platform where students can build a positive digital image in their own learning e-portfolios;
  • Kiddom provides assessment, curriculum development, messaging, and analytics in one collaborative learning platform;
  • Leaderally is a learning platform that provides professional development;
  • Loose Canon is a web service designed to encourage English teachers to allow students to freely choose what they will read for credit;
  • RFPMatch.com is a source for locating and filtering RFP opportunities;
  • Tresit Group specializes in active threat response and risk management for schools and other organizations;
  • WISEDash Local is a Wisconsin nonprofit consortium connecting districts to data dashboards;
  • Wonder Workshop offers curriculum and professional development to teach coding in K-8 with its robots.

Read the source article at EdWeek.

StubHub Aims to Build Powerful AI Systems Working with Pivotal and Google Cloud

StubHub is best known as a destination for buying and selling event tickets. The company operates in 48 countries and sells a ticket every 1.3 seconds. But the company wants to go beyond that and provide its users with a far more comprehensive set of services around entertainment. To do that, it’s working on changing its development culture and infrastructure to become more nimble. As the company announced today, it’s betting on Google Cloud and Pivotal Cloud Foundry as the infrastructure for this move.

StubHub CTO Matt Swann told me that the idea behind going with Pivotal — and the twelve-factor app model that this entails — is to help the company accelerate its journey and give it an option to run new apps in both an on-premise and cloud environment.
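The twelve-factor model Swann mentions can be illustrated with one of its core principles, factor III ("Config"): configuration lives in the environment rather than in code, so the same build can run on premise or in the cloud. A minimal sketch, with variable names that are purely illustrative (not StubHub's actual configuration):

```python
import os

def load_config(env=os.environ):
    # Twelve-factor style: read deploy-specific settings from the
    # environment, falling back to local-development defaults.
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "cache_url": env.get("CACHE_URL", "redis://localhost:6379"),
    }

# The same code serves both environments; only the environment differs.
prod_cfg = load_config(env={"DATABASE_URL": "postgres://prod-db/tickets"})
```

The point of the pattern is that moving an app between an on-premise deployment and a cloud one requires no code change, only different environment variables.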

“We’re coming from a place where we are largely on premise,” said Swann. “Our aim is to become increasingly agile — where we are going to focus on building balanced and focused teams with a global mindset.” To do that, Swann said, the team decided to go with the best platforms to enable that and that “remove the muck that comes with how developers work today.”

As for Google, Swann noted that this was an easy decision because the team wanted to leverage that company’s infrastructure and machine learning tools like Cloud ML. “We are aiming to build some of the most powerful AI systems focused on this space so we can be ahead of our customers,” he said. Given the number of users, StubHub sits on top of a lot of data — and that’s exactly what you need when you want to build AI-powered services. What exactly these will look like, though, remains to be seen, but Swann has only been on the job for six months. We can probably expect to see more from the company in this space in the coming months.

“Digital transformation is on the mind of every technology leader, especially in industries requiring the capability to rapidly respond to changing consumer expectations,” said Bill Cook, president of Pivotal. “To adapt, enterprises need to bring together the best of modern developer environments with software-driven customer experiences designed to drive richer engagement.”

StubHub has already spun up its new development environment and plans to launch all new apps on this new infrastructure. Swann acknowledged that the company won’t be switching all of its workloads over to the new setup soon. But he does expect that the company will hit a tipping point in the next year or so.

He also noted that this overall transformation means that the company will look beyond its own walls and toward working with more third-party APIs, especially with regard to transportation services and merchants that offer services around events.

Throughout our conversation, Swann also stressed that this isn’t a technology change for the sake of it.

Read the source article at TechCrunch.

Here is How the AI Cloud Can Produce the Richest Companies Ever

For years, Swami Sivasubramanian’s wife has wanted to get a look at the bears that come out of the woods on summer nights to plunder the trash cans at their suburban Seattle home. So over the Christmas break, Sivasubramanian, the head of Amazon’s AI division, began rigging up a system to let her do just that.

So far he has designed a computer model that can train itself to identify bears—and ignore raccoons, dogs, and late-night joggers. He did it using an Amazon cloud service called SageMaker, a machine-learning product designed for app developers who know nothing about machine learning. Next, he’ll install Amazon’s new DeepLens wireless video camera on his garage. The $250 device, which will go on sale to the public in June, contains deep-learning software to put the model’s intelligence into action and send an alert to his wife’s cell phone whenever it thinks it sees an ursine visitor.
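The pipeline described above boils down to simple alerting logic once a trained model is in place. Here is a hypothetical sketch of that logic; the classifier below is a stand-in for the SageMaker-trained model, and in the real setup inference would run on the DeepLens camera itself, with the alert sent to a phone:

```python
# Labels the model was trained to distinguish from bears.
IGNORED_LABELS = {"raccoon", "dog", "jogger"}

def should_alert(label):
    """Alert only on a 'bear' label; suppress known nighttime false positives."""
    if label in IGNORED_LABELS:
        return False
    return label == "bear"

# Simulated sequence of per-frame classifications from the camera.
detections = ["raccoon", "bear", "dog", "jogger", "bear"]
alerts = [label for label in detections if should_alert(label)]
```

The hard part, of course, is the model that produces the labels; the value of a service like SageMaker is that the thresholding and alerting glue above is nearly all the custom code a developer has to write.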

Sivasubramanian’s bear detector is not exactly a killer app for artificial intelligence, but its existence is a sign that the capabilities of machine learning are becoming far more accessible. For the past three years, Amazon, Google, and Microsoft have been folding features such as face recognition in online photos and language translation for speech into their respective cloud services—AWS, Google Cloud, and Azure. Now they are in a headlong rush to build on these basic capabilities to create AI-based platforms that can be used by almost any type of company, regardless of its size and technical sophistication.

“Machine learning is where the relational database was in the early 1990s: everyone knew it would be useful for essentially every company, but very few companies had the ability to take advantage of it,” says Sivasubramanian.

Amazon, Google, and Microsoft—and to a lesser extent companies like Apple, IBM, Oracle, Salesforce, and SAP—have the massive computing resources and armies of talent required to build this AI utility. And they also have the business imperative to get in on what may be the most lucrative technology mega-trend yet.

“Ultimately, the cloud is how most companies are going to make use of AI—and how technology suppliers are going to make money off of it,” says Nick McQuire, an analyst with CCS Insight.

Quantifying the potential financial rewards is difficult, but for the leading AI cloud providers they could be unprecedented. AI could double the size of the $260 billion cloud market in coming years, says Rajen Sheth, senior director of product management in Google’s Cloud AI unit. And because of the nature of machine learning—the more data the system gets, the better the decisions it will make—customers are more likely to get locked in to an initial vendor.

In other words, whoever gets out to the early lead will be very difficult to unseat. “The prize will be to become the operating system of the next era of tech,” says Arun Sundararajan, who studies how digital technologies affect the economy at NYU’s Stern School of Business. And Puneet Shivam, president of Avendus Capital US, an investment bank, says: “The leaders in the AI cloud will become the most powerful companies in history.”

It’s not just Amazon, Google, and Microsoft that are pursuing dominance. Chinese giants such as Alibaba and Baidu are becoming major forces, particularly in Asian markets. Leading enterprise software companies including Oracle, Salesforce, and SAP are embedding machine learning into their apps. And thousands of AI-related startups have ambitions to become tomorrow’s AI leaders.

Read the source article at MIT Technology Review.

5 Things We Learned About Google on Earnings Call

Google-parent Alphabet’s revenue rose 24 percent year-over-year, to $32.32 billion, exceeding expectations. Net revenue also beat expectations, at $25.9 billion. And because of recent tax legislation, the company posted a net loss of $3.02 billion for the quarter.

But to better illustrate those metrics and potential growth areas for the company, Google CEO Sundar Pichai and Alphabet and Google CFO Ruth Porat sprinkled in some more tangible numbers about the company’s progress over the last quarter of 2017 and the year overall.

The figures they shared reveal where the company is headed and how its investments in its “three biggest bets” — hardware, cloud and YouTube — have manifested themselves.

1. Google Assistant is now available on more than 400 million devices.
From the Google Home speaker to smartphones, tablets, headphones, televisions and even 400 different car models, the Google Assistant is finding its way into more and more devices.

Google has sold “tens of millions” of Google devices for the home over the past year, including the Google Home, Mini, Max and Chromecast.

“Our AI research and innovation leads the world,” Pichai said during Thursday’s earnings call. “Our mission to better organize the world’s information has been transformed by these technologies, with our search products and the Google Assistant at the heart.”

Pichai also noted that “There was a lot of excitement around the Google Assistant at CES from partners and consumers.” (That was thanks in part to the company’s aggressive advertising efforts in Las Vegas.)

2. People rang in the new year with 3 billion Google Photos.
“One area that’s really benefiting from our advancements in AI is photography,” Pichai said during Thursday’s call.

He noted that the Pixel 2 phone employs machine learning and video stabilization to up the ante on smartphone video standards.

He also shared a crazy stat about Google Photos: More than 3 billion photos and videos were uploaded to Google Photos on New Year’s Eve alone.

3. Google Cloud brings in $1 billion per quarter.
“Google Cloud, which includes Google Cloud Platform and G Suite, has reached meaningful scale,” Pichai said. “And I’m excited to share today that it’s already a billion-dollar-per-quarter business. In fact, we believe that Google Cloud Platform, based on publicly reported data for the 12 months ending December 2017, is the fastest-growing major public cloud provider in the world.”

Read the source article at Entrepreneur.

CognitiveScale CEO: What to expect in AI in 2018

By Akshay Sabhikhi, CEO & Co-founder, CognitiveScale

AI Will Not Be Commoditized

Only one in 20 companies has extensively incorporated AI in offerings or processes. Less than 39% of all companies have an AI strategy in place. According to MIT Sloan Review, the largest companies — those with at least 100,000 employees — are the most likely to have an AI strategy, but only half have one.

Despite claims that AI is already being subsumed into an array of applications, we’re not there yet and won’t be in 2018. It is still the early days of adoption, and those companies that are implementing AI now will see the biggest competitive value.

Funding for AI Will Move From Innovation to Operations

In 2018, we will see budgets for AI shifting from innovation to operations as more companies realize the transformative benefits of moving AI out of the lab and into practical operations within their organizations. As a result of this shift, chief data and technology officers will play a more important role within their organizations as they take experimental AI and make it “real” business.

AI Will Be a Critical Part of Compliance

Companies in many industries, particularly financial services, must follow government and industry regulations. As a means to ensure compliance while simultaneously reducing the effort that’s involved, we will see a new interest in machine readable policies. AI will automate the labor-intensive process associated with compliance, freeing humans to focus on business-building efforts instead.
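The idea of a machine-readable policy can be made concrete with a toy example (purely illustrative, not any specific vendor's product): when compliance rules are expressed as data rather than prose, a program can screen records against them automatically instead of a human reviewing each one.

```python
# A hypothetical policy encoded as data. Thresholds and country codes
# are made up for illustration.
POLICY = {
    "max_cash_transaction": 10_000,       # e.g. a reporting threshold
    "restricted_countries": {"XX", "YY"}, # placeholder country codes
}

def screen(transaction, policy=POLICY):
    """Return the list of policy flags raised by a single transaction."""
    flags = []
    if transaction["amount"] > policy["max_cash_transaction"]:
        flags.append("amount-over-threshold")
    if transaction["country"] in policy["restricted_countries"]:
        flags.append("restricted-country")
    return flags

flags = screen({"amount": 12_500, "country": "XX"})
```

In practice the machine-learning component would sit on top of checks like these, flagging unusual patterns the hand-written rules miss, while humans handle only the escalated cases.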

AI vs RPA: AI Will Win

Robotic process automation (or RPA) is an emerging form of clerical process automation technology based on the notion of software robots. However, RPA has very little ability to actually learn, which will keep it focused on mundane, rules-driven, repetitive tasks.

While serving this need will keep RPA growing in popularity through 2018, true AI, driven by machine learning, will produce the greatest ROI. These two technologies will be mentioned together often in 2018, but for ROI, AI will be seen as the clear winner.

AI Will Drive Cloud Adoption

While many companies long ago adopted cloud, there are still many businesses figuring out the right mix of cloud and on-premise. AI will cause them to make those decisions more quickly and aggressively. Many will choose to go all-in on the cloud right away. The promise of AI in the cloud – all the benefits combined with greater affordability and faster implementations – will drive overall cloud adoption rates even higher.

Two Worlds Collide: Software Development and Data Science Will No Longer Be Separate

Data science and software development within the enterprise have remained largely separate until the adoption of AI. Now there will be an expectation for software developers to have basic data science skill sets. As AI moves away from experimentation and into operation, developers with these skills will be in high demand.

Universities and developer training programs will adapt their curriculums to foster these skills.

Hardware Vendors Will Throw Their Hats in the AI Ring

In 2018 we will see more hardware manufacturers (including chipmakers) invest in AI. When machine learning happens not in the cloud but on mobile devices, data is more secure and the benefits of AI can be realized without depending on an internet connection and processors in a remote data center.

Hardware vendors without an expressed AI strategy will fall behind.

NVIDIA GPU Cloud Now Available to Thousands of AI Researchers Using NVIDIA Desktop GPUs

NVIDIA this week announced that hundreds of thousands of AI researchers using desktop GPUs can now tap into the power of NVIDIA GPU Cloud (NGC) as the company has extended NGC support to NVIDIA TITAN.

NVIDIA also announced expanded NGC capabilities — adding new software and other key updates to the NGC container registry — to provide researchers a broader, more powerful set of tools to advance their AI and high performance computing research and development efforts.

Customers using NVIDIA® Pascal™ architecture-powered TITAN GPUs can sign up immediately for a no-charge NGC account and gain full access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include NVIDIA DGX-1™, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.

Software available through NGC’s rapidly expanding container registry includes NVIDIA optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools, and NVIDIA’s programmable inference accelerator, NVIDIA TensorRT™ 3.0.

“We built NVIDIA GPU Cloud to give AI developers easy access to the software they need to do groundbreaking work,” said Jim McHugh, vice president and general manager of enterprise systems at NVIDIA. “With GPU-optimized software now available to hundreds of thousands of researchers using NVIDIA desktop GPUs, NGC will be a catalyst for AI breakthroughs and a go-to resource for developers worldwide.”

An early adopter of NGC is GE Healthcare. The first medical device maker to use NGC, the company is tapping the deep learning software in NGC’s container registry to accelerate bringing the most sophisticated AI to its 500,000 imaging devices globally with the goal of improving patient care.

Read the full press release at NVIDIA.com.

Google Cloud Platform cuts the price of GPUs by up to 36 percent

Google has announced lower prices for the use of Nvidia’s Tesla GPUs through its Compute Engine by up to 36 percent. In U.S. regions, using the somewhat older K80 GPUs will now cost $0.45 per hour while using the newer and more powerful P100 machines will cost $1.46 per hour (all with per-second billing).

The company is also dropping the prices for preemptible local SSDs by almost 40 percent. “Preemptible local SSDs” refers to local SSDs attached to Google’s preemptible VMs. You can’t attach GPUs to preemptible instances, though, so this is a nice little bonus announcement — but it isn’t going to directly benefit GPU users.

As for the new GPU pricing, it’s clear that Google is aiming this offering at developers who want to run their own machine learning workloads on its cloud, though there also are a number of other applications — including physical simulations and molecular modeling — that greatly benefit from the hundreds of cores that are now available on these GPUs. The P100, which is officially still in beta on the Google Cloud Platform, features 3,584 cores, for example.

Developers can attach up to four P100 and eight K80 dies to each instance. Like regular VMs, GPU users will also receive sustained-use discounts, though most users probably don’t keep their GPUs running for a full month.
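The quoted rates make per-second billing easy to reason about. A back-of-the-envelope sketch using the U.S.-region prices above (K80 at $0.45/hour, P100 at $1.46/hour); the example workload is hypothetical:

```python
# U.S.-region hourly rates quoted in the announcement.
K80_PER_HOUR = 0.45
P100_PER_HOUR = 1.46

def gpu_cost(hourly_rate, seconds, num_gpus=1):
    """Prorate an hourly GPU rate to the second, as per-second billing does."""
    return hourly_rate / 3600 * seconds * num_gpus

# A 90-minute job on four P100 dies (the per-instance maximum noted above).
p100_cost = round(gpu_cost(P100_PER_HOUR, seconds=90 * 60, num_gpus=4), 2)
```

This ignores sustained-use discounts, which shave the rate further for instances that stay up a large fraction of the month — though, as noted, most GPU users won't hit those thresholds.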

It’s hard not to see this announcement in the light of AWS’s upcoming annual developer conference, which will take over most of Las Vegas’s hotel conference space next week. AWS is expected to make a number of AI and machine learning announcements, and chances are we’ll see some price cuts from AWS, too.

Read the source article at TechCrunch.

Intel poaches AMD’s Raja to counter Nvidia machine-learning lead

Intel has snatched rival AMD’s former SVP and Chief Architect of its Radeon GPU division, Raja Koduri (above), and tasked him with heading up the new Core and Visual Computing Group, a new division that Intel hopes will provide discrete GPU cards and integrated graphics to counter Nvidia’s incursion. It looks like Intel is about to try and out-muscle Nvidia’s video cards with its own GPUs.

Koduri, the public face of the Radeon group, bowed out a few months ago, saying he planned to recover from the Ryzen and Vega projects and take some family time. However, it seems that Koduri had a new kind of family in mind, and he was poached for the new job by Intel. AMD won’t be amused, but it is an endorsement of its former staffer that Intel is putting him in charge of a group squarely aimed at preventing Nvidia from tearing chunks out of it.

Intel is talking about extending its integrated GPUs into edge devices, which is hardly revolutionary, considering they are already on board the CPUs it hopes to ship to power these sorts of gateways and monitoring devices. However, the company is also planning to develop high-end GPUs – hopefully with more success than the i740 and Larrabee (the latter of which eventually morphed into the x86-based Xeon Phi, which is losing ground to Nvidia).

Meanwhile, Qualcomm’s new Centriq 2400 CPU is another threat that Intel needs to mitigate, as are server-grade CPUs from Cavium, with both Google and Microsoft supporting the ARM-based initiatives. Microsoft’s Project Olympus and its Open Compute community work are notable examples, with the second-largest cloud computing player saying it plans to move some of its workload onto ARM CPUs.

Those ARM chips might not be used in the most demanding applications, perhaps only appearing in storage boxes where ARM’s low-power competence could help slash energy bills for data center operators. But Microsoft has also moved to make Windows compatible with ARM for laptops and desktops – something Intel has warned Microsoft about, with threats of a lawsuit over x86 emulation on ARM.

For a long time, Intel has been able to view all data center compute market growth as assured sales for its Xeon CPUs – the workhorse behind server-based applications. However, newer AI and ML workloads currently favor GPU-based processing, and might eventually move to ASICs and other purpose-built chips like Google’s TPU (Tensor Processing Unit).

With all those new applications, which contribute to overall growth in demand for data center processing, Intel has to view them as threats to its Xeons. Now, a couple of Xeons might be used in a server rack that houses dozens of GPU accelerator cards from the likes of Nvidia or AMD, whereas a few years ago Intel would have expected the same rack to be packed to the gills with Xeons in a CPU-only architecture. But that paradigm has shifted, and Intel knows it.

In a similar vein, edge computing could damage overall demand for data center processing of any kind. Bandwidth costs to move data from the edge to the cloud could act as a strong disincentive to developers, and for latency-sensitive applications there are benefits to carrying out data-based decision making at the edge, as the application doesn’t have to transport data to a cloud and then await instructions.

Intel and AMD have also just partnered to develop a new part for laptops and tablets, which combines an Intel CPU with a Radeon GPU on a single PCB – aimed at designs needing a powerful graphics option in a thin form factor. The exact specifications of the two components are not clear, but Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) tech is responsible for linking the two processors.

The move shows a united front against Nvidia in mobile devices, and comes despite historic hostility between the pair, in which AMD has long been the underdog, upset at the perceived abuse of Intel’s dominant x86 market position. Demand for PCs has been sluggish in the past few years, with different forecasts giving mixed views but a consensus of stall and decline, and a new generation of ultra-thin laptops with powerful graphics capabilities could help turn that around.

Apple is also an AMD fan, and these new parts may well find their way into its PCs, but there were rumors that it was considering moving from Intel to AMD for its laptop CPUs – which might have prompted the deal.

Intel doesn’t have much to worry about in the PC market from AMD, thanks to its gargantuan R&D budget and current dominance. Anything AMD’s CPUs (the new Ryzen range) throw at Intel can be countered by a price cut or the release of the next feature or design that Intel has been sitting on in its labs. While its integrated Iris and GT GPUs do the job for basic tasks, discrete GPUs in desktops have been required for any sort of video-based task – and that’s a paradigm unlikely to change any time soon.

With the new group, it isn’t clear whether Intel plans to adapt Iris into a PCI-card product or to use an entirely new GPU design. Iris doesn’t have a great reputation among GPUs, and if Intel starts rolling out new GPUs, we would expect AMD to respond with some sort of legal challenge – given that it never got the chance to put Koduri on gardening leave. There also seems to be no non-compete clause, which has allowed him to waltz over to Intel.

Intel’s Chief Engineering Officer, Murthy Renduchintala, said “we have exciting plans to aggressively expand our computing and graphics capabilities, and build on our very strong and broad differentiated IP foundation. With Raja at the helm of our Core and Visual Computing Group, we will add to our portfolio of unmatched capabilities, advance our strategy to lead in computing and graphics, and ultimately be the driving force of the data revolution.”

As for Koduri, a series of tweets said that he had spent more than two-thirds of his adult life with Radeon, and that the AMD team will always be family. “It will be a massive understatement to say that I am beyond excited about my new role at Intel. I haven’t yet seen anything written that groks the magnitude of what I am pursuing. The scale of it is not even remotely close to what I was doing before.”

Source article posted by Rethink Technology Research.